Do I even need camera calibration for length measurements with one camera?

My understanding is that f_x and f_y are allowed to vary independently because not all cameras use square pixels, and having separate f_x and f_y parameters lets the model fit such data. The vast majority of modern cameras do use square pixels, however, so it is common (and recommended) to use the CALIB_FIX_ASPECT_RATIO flag when calibrating - this optimizes a single focal length parameter rather than letting the two vary independently.

You probably don’t need a skew parameter. See Hartley & Zisserman (Multiple View Geometry), section 6.2.4, for a discussion of skew.

I take x_0 and y_0 to be the principal point (cx, cy). Those are necessary in order to handle the lens distortion, because the distortion model is radially symmetric about the optical center of the image.

The camera calibration process in OpenCV accepts flags that let you control which parameters it optimizes (for example CALIB_FIX_ASPECT_RATIO). You can supply a camera matrix to the calibration process to serve as a starting point for the optimization (with the flag CALIB_USE_INTRINSIC_GUESS), and by passing the right flags you can also force the optimization algorithm to keep the parameters you provide. For example, CALIB_FIX_PRINCIPAL_POINT prevents the cx, cy parameters from being optimized.
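A minimal sketch of how those flags combine (the focal-length guess and the image size handling here are assumptions; obj_points / img_points are whatever your corner-detection step produced):

```python
import cv2
import numpy as np

def calibrate_fixed_aspect(obj_points, img_points, image_size, f_guess=1200.0):
    """Calibrate with fx == fy locked and cx, cy held at the values in the
    initial camera matrix. obj_points / img_points are the per-image 3D
    board points and 2D detections from your corner-finding step; f_guess
    is a rough focal length in pixels (a placeholder -- tune for your lens).
    """
    # CALIB_USE_INTRINSIC_GUESS requires a starting camera matrix.
    camera_matrix = np.array([[f_guess, 0.0, image_size[0] / 2.0],
                              [0.0, f_guess, image_size[1] / 2.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)

    flags = (cv2.CALIB_FIX_ASPECT_RATIO        # optimize a single focal length
             | cv2.CALIB_USE_INTRINSIC_GUESS   # start from camera_matrix above
             | cv2.CALIB_FIX_PRINCIPAL_POINT)  # keep cx, cy as supplied

    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, camera_matrix, dist_coeffs,
        flags=flags)
    print("RMS reprojection error:", rms)
    return camera_matrix, dist_coeffs
```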

(There are many other flags; see the OpenCV documentation to learn about all of them.)

OpenCV calibration docs (v3.4)

Assuming you are using the OpenCV calibration algorithm to get your radial distortion parameters, I would suggest letting it optimize your focal length and image center, too. If you think you know your image center and focal length better than what the optimizer will generate, then you are free to provide a camera matrix (along with the appropriate flags) to force the optimization to use your input. I would suggest trying it both ways and comparing the reprojection error - I suspect you will get better results letting OpenCV optimize all of the relevant parameters.

Depending on the nature of your lens distortion you might be better served by something other than the basic k1, k2 model. I find the rational model behaves very well for high-distortion lenses, with the caveat that you have to provide data (chessboard corners) that covers the parts of the image you want to undistort accurately. The rational model can have pretty wild behavior when extrapolating beyond the input data coverage.
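Trying the rational model is just another flag; a short sketch building on the calibration call above:

```python
import cv2
import numpy as np

# Switch to the rational model: OpenCV then fits k1..k6 plus p1, p2,
# so dist_coeffs comes back with 8 entries instead of 5.
flags = cv2.CALIB_RATIONAL_MODEL | cv2.CALIB_FIX_ASPECT_RATIO
dist_coeffs = np.zeros(8)  # sized for (k1, k2, p1, p2, k3, k4, k5, k6)
# ...then pass `flags` and `dist_coeffs` to cv2.calibrateCamera as before.
```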

As for your original question, here are my thoughts.

I’m assuming you want to measure lengths where all points are located in the same world plane. If you are trying to do something different, then you’ll need a different approach to what I suggest here.

I would do the following:
Calibrate the camera intrinsics using OpenCV’s calibrateCamera function. I’d fix the aspect ratio (lock the two focal lengths) unless I had reason to think the camera used non-square pixels. I’d evaluate the distortion of the lens and pick the appropriate distortion model. The k1, k2 model might be just fine, but the rational model is there if you need it.

I’d use the ChArUco calibration target and associated functions because you don’t have to see the full calibration target in the input images - this makes it much easier to get measurements near the edges and corners of the image.
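A rough sketch of the ChArUco flow using the 3.4-era aruco module API (the board geometry, dictionary, and file names are placeholders - match them to your printed target):

```python
import cv2
import cv2.aruco as aruco

# Board geometry and dictionary are placeholders for your setup.
dictionary = aruco.Dictionary_get(aruco.DICT_5X5_100)
board = aruco.CharucoBoard_create(7, 5, 0.04, 0.03, dictionary)
image_paths = ["calib_01.png", "calib_02.png"]  # your calibration images

all_corners, all_ids, image_size = [], [], None
for path in image_paths:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    corners, ids, _ = aruco.detectMarkers(gray, dictionary)
    if ids is None:
        continue
    # Interpolate chessboard corners from the detected markers -- this works
    # even when only part of the board is visible in the frame.
    n, ch_corners, ch_ids = aruco.interpolateCornersCharuco(
        corners, ids, gray, board)
    if n > 3:
        all_corners.append(ch_corners)
        all_ids.append(ch_ids)

rms, camera_matrix, dist_coeffs, rvecs, tvecs = aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, image_size, None, None)
print("RMS reprojection error:", rms)
```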

Once the intrinsics are calibrated, I would calculate a homography that maps undistorted image coordinates to 2D plane coordinates. Once you have this, calculating distances between image points becomes pretty straightforward.
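A sketch of that last step, assuming camera_matrix and dist_coeffs from the calibration above and four reference points whose plane coordinates you know (the pixel and millimeter values here are made up):

```python
import cv2
import numpy as np

# Four reference points: raw pixel locations and their known positions
# on the world plane (placeholder values; plane units here are mm).
img_pts = np.array([[410, 300], [1510, 295], [1520, 880], [405, 885]],
                   dtype=np.float32)
plane_pts = np.array([[0, 0], [500, 0], [500, 300], [0, 300]],
                     dtype=np.float32)

# Undistort the pixel measurements; P=camera_matrix keeps the result in
# pixel coordinates rather than normalized coordinates.
und = cv2.undistortPoints(img_pts.reshape(-1, 1, 2),
                          camera_matrix, dist_coeffs, P=camera_matrix)
H, _ = cv2.findHomography(und.reshape(-1, 2), plane_pts)

def distance_mm(p_pix, q_pix):
    """Plane-space distance between two raw pixel measurements."""
    pts = np.array([p_pix, q_pix], dtype=np.float32).reshape(-1, 1, 2)
    und = cv2.undistortPoints(pts, camera_matrix, dist_coeffs, P=camera_matrix)
    plane = cv2.perspectiveTransform(und.reshape(1, -1, 2), H)[0]
    return float(np.linalg.norm(plane[0] - plane[1]))

print(distance_mm((700, 500), (1200, 520)))  # placeholder points
```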

Whether or not you can achieve the accuracy you desire depends on a number of factors, but with the correct optics and sensor I am confident you can do it - though that might mean accepting a smaller FOV than you want, etc.
