Question about camera calibration: findChessboardCorners given different orientation

When I use findChessboardCorners(), it returns the corners in a seemingly random orientation (especially with a square chessboard, which has four possible orientations). I wonder whether this has an effect when I run calibrateCamera().

Another question: when using calibrateCamera(), object points given in horizontal order versus vertical order give me a different camera matrix and different distortion coefficients. I wonder whether that is correct and why it happens.

I would suggest not using a square chessboard. I would also suggest using a ChArUco board instead: you can use images that don’t contain the whole board, which makes it much easier to get samples near the edges / corners of the image (which in turn gives you a distortion model that is correct over a larger portion of the image).

I’m not sure what you are asking in the second part of the question, but the results you get will vary somewhat from run to run. If you are getting significantly different results with “horizontal sequence” and “vertical sequence”, I would suggest trying to figure out which one is closest to your expected values and try to understand the difference between the two. They can’t both be correct if the results are significantly different. The calibration process is trying to estimate the physical properties of the camera, so in that sense there is one right answer. If you know the effective focal length of the lens you are using and the pixel size of the sensor, you can get a reasonable estimate of what the focal length should be for your camera matrix. (focal length is stored at (0,0) and (1,1) of your camera matrix).

For example, if the lens you are using has a 5 mm EFL, and your pixel size is 2.2 microns, your calibrated focal length should be approximately 5 mm / 2.2 µm ≈ 2273.
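As a quick sanity check in code (same numbers as above, a 5 mm lens and 2.2 µm pixels):

```python
# Expected focal length in pixels = effective focal length / pixel pitch.
efl_mm = 5.0
pixel_pitch_mm = 2.2e-3  # 2.2 microns expressed in mm

f_pixels = efl_mm / pixel_pitch_mm
print(round(f_pixels))  # 2273 -- the ballpark to expect at (0,0) and (1,1)
```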

If you are using a camera that has an auto-focus lens, I’d suggest getting a different camera / lens so that you can lock it down to a fixed focal length.


I might not have described my situation very well.

In calibrateCamera() we need to provide object points. For example, for a 3×3 chessboard we have two options:

First is
[0,0 0,1 0,2 1,0 1,1 1,2 2,0 2,1 2,2]

Another is
[0,0 1,0 2,0 0,1 1,1 2,1 0,2 1,2 2,2]

In my opinion, this should not make a difference in the resulting camera matrix (it should only affect the rvecs).
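To make the two orderings concrete, here is how they can be generated for the 3×3 example (plain Python; either way, the pairing with the detected image corners has to stay consistent):

```python
n = 3  # 3x3 board from the example above

# First ordering: row-major -> 0,0  0,1  0,2  1,0 ...
horizontal = [(r, c) for r in range(n) for c in range(n)]

# Second ordering: column-major -> 0,0  1,0  2,0  0,1 ...
vertical = [(r, c) for c in range(n) for r in range(n)]

# Both contain the same 3D points; only the order differs, which amounts
# to expressing the board in a rotated/transposed coordinate frame.
assert sorted(horizontal) == sorted(vertical)
print(horizontal[:4])
print(vertical[:4])
```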

But in my test, the camera matrix changes slightly with different object point orderings. For example, fx goes from 2549 to 2551 (just an example).

Another observation is that cv::fisheye::calibrate gives me the same camera matrix with either object point ordering.

So I wonder: is this behavior correct? And if it is, which object point ordering should I use?

BTW, when I use findChessboardCornersSB() on a large image (for example, 2160×3840) it just can’t find the corners; after downscaling the image, it works.

As the answer to “FindChessboardCorners cannot detect chessboard on very large images by long focal length lens” on Stack Overflow describes, I think it’s a convolution mask size issue. If I don’t want to lose accuracy, what can I do?

Thanks for your patient reply.

Are you using the same input images for these two tests, or are you collecting new images for each test? If you are collecting new images, the difference is probably just due to the inherent variation in the input data and the optimization process. A focal length change of that magnitude isn’t very significant, and I wouldn’t worry about it too much.

If you are getting that difference when using the exact same input images, that would suggest some (small) problem with the algorithm. I would expect both object point orderings to give you the same results.

Another possibility is that you aren’t using the CALIB_FIX_ASPECT_RATIO flag when calibrating. This flag forces the fx and fy values to be the same ((0,0) and (1,1) in the camera matrix). Without it they are free to be optimized independently, and they typically end up with pretty close (but not exactly the same) values. By changing the order of the object points, you are changing the rotation of the camera, which would (I think) cause the fx and fy values to swap. If you aren’t passing CALIB_FIX_ASPECT_RATIO, I would suggest doing that.


I haven’t encountered that problem, so I’m not sure I understand what is causing it. Have you tried using a calibration target with larger squares (or closer to the camera)?

I would also suggest trying the ChArUco calibration target / functions. It has other advantages that make it worth using, but it also might not do the corner finding with fixed parameters the way the normal chessboard detection does. (I don’t know that, but it might be worth a try.)

Thanks a lot for your help.

> I would also suggest trying the ChArUco calibration target / functions. It has other advantages that make it worth using, but it also might not do the corner finding with fixed parameters the way the normal chessboard detection does. (I don’t know that, but it might be worth a try.)

Using ChArUco is a pretty good idea; I will definitely give it a try.

> Are you using the same input images for these two tests, or are you collecting new images for each test?

Yes, I’m using the same image sets for both vertical and horizontal object points. CALIB_FIX_ASPECT_RATIO doesn’t help in my case. The fx and fy should both be around 2400, but I get about 5000 when using the k1 k2 p1 p2 k3 model (cv::calibrateCamera), and 1514 when using the k1 k2 k3 k4 model (cv::fisheye::calibrate). Neither of them gives a good undistortion result.

So I’m wondering whether it’s a problem caused by the rolling-shutter camera, because I was holding the calibration board and moving it pretty fast, I think (although no significant blur was observed).

After capturing images while moving the calibration board slowly, I get a decent result, with fx/fy around 2400.

So I wonder if OpenCV’s calibration method is not good with a moving target and a rolling-shutter camera, because I found that kalibr specifically discusses this situation. Have you ever used kalibr? Is it worth a try? GitHub - ethz-asl/kalibr: The Kalibr visual-inertial calibration toolbox

Again, thanks a lot for your patient help; it has helped me learn a lot.

Rolling shutter is a plausible explanation, I think. I do all of my calibration with an automated fixture which moves the calibration target mechanically and then pauses before taking a picture. Since I have effectively eliminated movement, I don’t think I would have seen any problems caused by rolling shutter.

If you do some more tests, post the results here.