getOptimalNewCameraMatrix aspect ratio distortion

Edit: I’m updating this post because the results are likely out of date.
My testing / simulation was done with an older version of OpenCV, so some of my findings (particularly the tight tolerance on the target planarity) are not valid.

See Eduardo’s post addressing the question on calibration method here:
https://forum.opencv.org/t/cameracalibration-algorithms-in-calibratecamera/10345

Note that “newer versions” of OpenCV use this method https://elib.dlr.de/71888/1/strobl_2011iccv.pdf
which specifically addresses the issue of inaccurate calibration target geometry (including out-of-plane errors).

So, if you are using a newer version of OpenCV, you can probably do just fine with lower quality calibration targets!

Yay for science, research, and the people who continue to advance the state of the art!

I wrote an OpenGL program that rendered some basic geometry texture-mapped with a calibration target image (a ChArUco board). Since this was synthetic data, I had “perfect” knowledge of how it was rendered. I used a camera matrix close in FOV to the lens I wanted to use, generated a number of images of the calibration target, and then ran the OpenCV camera calibration procedure on those images.

The purpose of this was to get design guidelines for an automated calibration rig (put a camera in, press go, get calibration results). I wanted to know how target flatness, scale, point-location accuracy, etc. affected my results. I also wanted to know what type of motion (rotation / translation of the target) I needed, and how many images I needed to capture.

For a baseline test I rendered 10 or so images of a “perfect” calibration target (totally flat, no scaling or other perturbations) from a variety of reasonable positions/orientations. As expected, I got very good results (some reprojection error, but effectively zero).

I then tried a number of different variations to see what worked and what didn’t. This is from memory, but my takeaway was:

  1. Flatness matters more than I would have expected. I think I ended up with a tolerance target of +/- 0.020" (~0.5mm) across a target that is probably 24" x 12" in size. This wasn’t hard to achieve, but did cost a bit.
  2. Uniform scale matters in terms of the results I get, but for my specific application it wasn’t an important factor.
  3. Point-to-point accuracy is important for achieving low reprojection-error scores, but (not surprisingly, I guess) if the perturbations follow a normal distribution, the accuracy of the end result is minimally affected.
  4. A single flat calibration target, tilted toward the camera and rotated on an axis “parallel” to the camera axis, is sufficient for high-quality calibration. (No translation is needed.)
  5. After about 7 images the calibration doesn’t change much. I think I ended up using 12-14 images, but I don’t think there is much justification to go beyond 10 for my application.
  6. Use a tilt angle of at least 15 degrees.

I used these findings to build a calibration rig, and I use it on lenses ranging from about 1.5mm focal length (with a lot of distortion) to about 6mm focal length, at focus distances of about 5" to 12".

The flatness tolerance is pretty tight, but I really need high accuracy. Also, as your calibration target size (and z-distance to the camera) grows, that tolerance grows too. As for the 15 degree tilt angle, somewhat more is probably better; somewhat smaller angles worked, but (if memory serves) the results were less stable. In my case I was balancing depth-of-field / focus issues (too much tilt / z-change caused things to be out of focus) against enough tilt to get reliably accurate results. Your needs are certainly different from mine, so you’ll have to experiment to find what works for you.
