Do I even need camera calibration for length measurements with one camera?

For the types of cameras I work with (plastic S-mount lens holders with a through-hole PCB mount) that image center would be within range, but on the high side. I googled the resolution of your sensor and most of the results were for Sony Alpha cameras. On a camera like that I would expect the optical image center to be closer to the numerical center, since the manufacturing tolerances on that type of camera and optics should be much tighter.

The reprojection error is pretty good. It’s worth taking a moment to consider what the reprojection error means (and what it doesn’t). It’s a measure of how well the calibration results (the model parameters) predict the mapping of the calibration target’s world points to the corresponding image points. Lower numbers are better, etc. BUT just as a high number isn’t a guarantee of a bad model (a few errant points can drive the score up even when the model itself is good), a low number doesn’t guarantee that you got a good model. It does mean that if you plug in the same world points used for calibration (along with the recovered pose/extrinsics for that calibration image) you will get good estimates of the corresponding image points.
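
For reference, here’s a minimal sketch of how you could compute a per-image reprojection error yourself (this assumes you calibrated with OpenCV’s `cv2.calibrateCamera` in Python and kept its inputs and outputs; the variable names are just placeholders):

```python
import numpy as np
import cv2

# Assumes you kept the inputs/outputs of cv2.calibrateCamera:
#   objpoints: list of (N, 3) float32 world points (chessboard corners in the board frame)
#   imgpoints: list of (N, 1, 2) float32 detected image corners
#   K, dist, rvecs, tvecs: the calibration outputs
def per_view_reprojection_error(objpoints, imgpoints, K, dist, rvecs, tvecs):
    errors = []
    for obj, img, rvec, tvec in zip(objpoints, imgpoints, rvecs, tvecs):
        projected, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
        diff = img.reshape(-1, 2) - projected.reshape(-1, 2)
        # RMS distance (in pixels) between detected and re-projected corners for this view
        errors.append(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
    return errors
```

Looking at the per-image numbers (rather than just the overall RMS) is a quick way to spot a single bad detection dragging the score around.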

At the risk of sounding pedantic, this is important because you don’t just want the model to spit back the answers you already know; you want it to work in a variety of cases, including for points that aren’t in your original data set. That’s why it’s important to feed the calibration process a range of images of the calibration target at different distances and (importantly) angles. Also, the calibration target points (the chessboard corners) should cover as much of the image as possible - pay special attention to the corners / edges, because those are the hard ones to get. This is how you get a model that actually models what is physically going on with your camera, and not one that merely fits the data you gave it. For example, if all of your pictures of the calibration target are taken with little or no depth change (roughly fronto-parallel, at the same distance), you might get a good score but your image center and focal length could still be pretty far off.
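
If it helps, here’s a rough way to sanity-check how much of the frame your detected corners actually cover (a sketch assuming a chessboard target and OpenCV in Python; the pattern size and file glob are placeholders for your setup):

```python
import glob
import numpy as np
import cv2

pattern_size = (9, 6)          # inner corners of your chessboard (placeholder)
all_corners, image_shape = [], None

for path in glob.glob("calib/*.png"):   # placeholder path to your calibration images
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_shape = img.shape             # (rows, cols)
    found, corners = cv2.findChessboardCorners(img, pattern_size)
    if found:
        all_corners.append(corners.reshape(-1, 2))

pts = np.vstack(all_corners)
h, w = image_shape
# Fraction of the frame spanned by all detected corners; values well below 1.0
# mean the edges / corners of the image were never sampled.
print("x coverage:", (pts[:, 0].max() - pts[:, 0].min()) / w)
print("y coverage:", (pts[:, 1].max() - pts[:, 1].min()) / h)
```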

How about posting the input images you are using for calibration?

As for not being sure if you really need to be fully calibrating your camera or not…well, it’s complicated.

Yes, it’s true that you don’t need to know your image center or your focal length in some specific cases. For example, if all of your points are going to fall on a plane, and your lens has negligible distortion, you can just use a homography to calibrate the plane-to-image mapping and you should get pretty good results as long as the things you are measuring are on that plane. In many cases this is good enough! In the cases where you actually need more, starting with a homography still might be the right choice, but at some point you might need a full calibration to make progress toward the end objective.
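
To make that concrete, the planar route looks roughly like this (a sketch assuming OpenCV in Python and negligible distortion; the correspondences below are made-up placeholders you would replace with your own):

```python
import numpy as np
import cv2

# 4+ correspondences between image pixels and known coordinates on the
# measurement plane (here in millimetres) -- placeholder values.
img_pts   = np.array([[102,  88], [540,  95], [531, 402], [110, 395]], dtype=np.float32)
plane_pts = np.array([[  0,   0], [200,   0], [200, 150], [  0, 150]], dtype=np.float32)

# Homography that maps pixels -> plane coordinates
H, _ = cv2.findHomography(img_pts, plane_pts)

def pixel_to_plane(pt, H):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Length of a segment whose endpoints actually lie on that plane
a = pixel_to_plane((150, 120), H)
b = pixel_to_plane((480, 130), H)
print("length (mm):", np.linalg.norm(a - b))
```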

In your case it sounds like you would like to keep things simple and therefore want to use a homography (or something equivalent). Since your lens has a lot of distortion, you need to be able to correct for that first (homographies can’t model non-linear distortion), which pushes you down the camera calibration path. At a minimum you need a good estimate of the image center for the distortion correction to work (it’s a radially symmetric function about the optical image center, so if your image center is off you are cooked). I think the distortion model also uses the focal length, so you should probably just let the calibration optimize that for you too, but (I think?) the focal length is only used for normalizing, so you can provide it / lock it down if you prefer, as long as you use it consistently. (I think that’s right, but not certain.)
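
Something like this is the order of operations I have in mind (a sketch; K and the distortion coefficients would come from cv2.calibrateCamera, and the values below are placeholders): undistort the measured points first, then fit / apply the homography on the undistorted points.

```python
import numpy as np
import cv2

# Placeholder intrinsics and distortion coefficients -- in practice these come
# from cv2.calibrateCamera.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])

# Points measured in the (distorted) image -- placeholder values.
img_pts = np.array([[[102,  88]], [[540,  95]], [[531, 402]], [[110, 395]]], dtype=np.float32)

# Remove lens distortion. Passing P=K keeps the output in pixel units of an
# idealized, distortion-free image rather than normalized coordinates.
undist_pts = cv2.undistortPoints(img_pts, K, dist, P=K)

# With the distortion gone, the plane-to-image mapping really is a homography.
plane_pts = np.array([[0, 0], [200, 0], [200, 150], [0, 150]], dtype=np.float32)
H, _ = cv2.findHomography(undist_pts.reshape(-1, 2), plane_pts)
```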

The problem I see with this approach is that you’re barely getting started, yet you already seem interested in measuring points / objects that aren’t constrained to a plane, which means the homography approach isn’t a good fit. Maybe you can get away with a fully calibrated camera plus some knowledge about the Z-distance of the different points you are measuring, or maybe you need multiple cameras. In either case I think you will probably need a high quality calibration of all parameters in order to get good results.
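
For what it’s worth, the “calibrated camera + known Z-distance” idea might look something like this (a sketch with placeholder intrinsics; Z here is the depth of the point along the optical axis in the camera frame, which you would have to know from somewhere else):

```python
import numpy as np
import cv2

# Placeholder intrinsics / distortion -- in practice from a full calibration.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])

def pixel_to_camera_xyz(u, v, Z, K, dist):
    """Back-project a pixel to camera-frame (X, Y, Z) at a known depth Z."""
    pt = np.array([[[u, v]]], dtype=np.float32)
    # Without P, undistortPoints returns normalized coordinates (X/Z, Y/Z).
    xn, yn = cv2.undistortPoints(pt, K, dist).reshape(2)
    return np.array([xn * Z, yn * Z, Z])

# Distance between two image points whose depths you happen to know
p1 = pixel_to_camera_xyz(150, 120, Z=500.0, K=K, dist=dist)
p2 = pixel_to_camera_xyz(480, 130, Z=520.0, K=K, dist=dist)
print("3D distance:", np.linalg.norm(p1 - p2))
```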

As for your cylinder question, I’m afraid I’m not smart enough to answer that. My instinct says that if you know D, you might be able to deduce L based on the ellipse equation of the projected circle. But maybe that’s not enough information and you’d need something else?
