Extremely high reprojection error but good undistorted image

Hi,

I am running into something that is quite confusing to me.
Basically, I have found that the fisheye model of OpenCV is more appropriate to calibrate my camera.
Using the pinhole model, the reprojection error is quite low, between 0 and 1, but the undistorted result is not satisfying.

Using the fisheye model, I am getting a reprojection error of 94, which is high, but the image gets properly undistorted.

If it helps, this is how I use the calibration function:

# Note: cv.fisheye.calibrate expects objPoints / imgPoints as lists of
# (N, 1, 3) / (N, 1, 2) float arrays; other shapes or dtypes can fail
# or fit poorly.
rms, self.cameraMatrix, self.distCoeffs, _, _ = cv.fisheye.calibrate(
    self.objPoints, self.imgPoints, self.calibrationImgShape[::-1], None, None)
print(f"rms: {rms}")

I am not sure if I am misunderstanding the function, but I would love some guidance on why this is the case. Thank you!

Some thoughts:

Undistorted image looks pretty good, but could be better. I saw about 4 pixels of curvature on the top line. That might be totally fine for your use case, but if you want better there are things you can do to improve the results. I like to undistort the images and then augment them with a “world space” grid, projected to the (undistorted) image using the recovered intrinsics and extrinsics (pose) for the corresponding image. I have attached some images showing the input image and the undistorted + augmented image.

I work with lenses with significant distortion (more than what you are dealing with) and have been using the pinhole model and the RATIONAL distortion model. When I was originally developing the code the fisheye functions weren’t as stable / well developed, so I couldn’t rely on them. I presume they are better now so they might be the right choice for you, but you might consider using the standard calibration with the rational model. One caveat: the rational function doesn’t extrapolate very well so you need to get sample points close to the far edges / corners of the image to get good results over the whole image. The “trick” is to use a Charuco calibration pattern and calibration functions. You can use images that only see part of the calibration pattern, so it’s much easier to get samples in the corners of the image.

As for why the reprojection error is so high, I like to project the world-space chessboard corners to the image and then draw circles on the original image. Sometimes the error is structured and seeing the global behavior across a grid can give you clues to what is going on. Other times the reprojection error is high due to a few errant points - in these cases it’s helpful to augment the input image with circles (or whatever) where the calibration process detected the corners. Often most of the corners will be detected correctly, but a few will find a corner where there isn’t one - this can drive your reprojection error up very high, but the resulting calibration might still be pretty good. I always do something to detect outliers and then filter them, and then re-run the calibration on the inliers. The third image shows the inliers (red) and the outliers (green).

What I’m suggesting is that your “high reprojection score” might be due to a few bad points, which is why your resulting image looks pretty good in spite of the high score. Also I think the likely reason your pinhole model got a good score but unacceptable undistortion results is because (I suspect) you didn’t get enough samples close to the edges/corners of the image. Also you might be using a simple distortion model which isn’t suitable for the amount of distortion your lens has (which is why I suggested the rational model).

Note: If you augment the images this way, make sure to specify the image locations with subpixel units. The way OpenCV drawing commands handle subpixel drawing is a bit strange, at least with the C++ functions. You’ll want to understand how that works.




I want to say thank you so much for the answer and all the insight, I will definitely take all of it into account. I tried using the rational model before taking any more images, and the reprojection error is back to being reasonable and the undistorted image looks better. However, since you have worked with the rational model a lot, do you have any idea why this ring in the picture is there? Is it the calibration?

Yeah, the “ring” is because the rational model isn’t well behaved when you rely on extrapolation. If you were to plot the distortion function I think you would find a discontinuity near the “wrinkle” in the image. I have included an image that might help make it more clear.

I’d be curious to know what the calibrated focal length was on this calibration run. Any chance it was about 865 pixels?

The problem is that your calibration target doesn’t fill the whole image. If you want accurate / sane distortion correction for the whole image, you will need to collect data over the whole image. Again, the Charuco calibration process makes this much easier since you don’t need to see the full pattern. It’s worth noting that the fisheye calibration apparently handles extrapolation much more gracefully, so you can probably get away with images that don’t have samples near the corners. My experience with the fisheye calibration functions is from quite a long time ago (6 years or more), but I had trouble with the calibration algorithm not being repeatable (sometimes it didn’t converge, sometimes it worked really well, sometimes it gave less-than-good results) and also the supporting functions (undistort, etc.) didn’t seem as fully developed / supported. Additionally it was a lot easier to find documentation and forum support for the pinhole methods. I suspect the state of things is better now, so don’t let me talk you out of using the fisheye model if that’s what works for you.

That makes sense. fx and fy were around 1200 pixels; I have not converted that to a physical focal length.
I am still looking at both models, but whichever calibration image set from this camera I use with the fisheye model, the reprojection error is really high, which makes me think the model is not suitable for it. For the rational model, the error is always more sensible and the ring has only appeared for one set of images, which makes me think it’s the right model but needs lots of calibration images.
Thank you again for your responses, they have helped a lot.

I suggest projecting all of the 3D chessboard corner points (use 0 for Z coordinate) using the calibrated camera matrix, distortion model and the corresponding extrinsics (pose) and then draw them to the original image.

So, for example, if you have 5 chessboard images you are using for calibrateCamera, you will get back one camera matrix (intrinsics), one set of distortion coefficients, and 5 rvec and tvec values.

For image 1 use the camera matrix, distortion coefficients and rvec_1 and tvec_1 to project the world coordinates of the chessboard corners ( <X,Y,0> for each corner). This will give you distorted image coordinates which you can then directly draw (with cv::circle) to the image. Do this for both the fisheye and pinhole models and save the images.

When you look at the images with the circles drawn to them, you should be able to tell where your error is coming from. Maybe the fisheye model isn’t suitable, but maybe you just have some errant points that it’s detecting incorrectly.

I suggest doing this even if you don’t think you need it. Once you have the function to do it, you can just use it any time you want…and it can be really helpful in understanding why calibration sometimes produces good results and sometimes doesn’t.

To be very clear, the rational model doesn’t need “lots of calibration images” as much as it needs a set of calibration images that includes chessboard corners near the corners of the image. If all of your calibration images look similar to the one you posted, you will get a good fit in the central part of the image, but beyond that there aren’t any guarantees.

Also it bears repeating - in order to get chessboard corners near the edges/corners of the image, it’s vastly easier if you use the Charuco calibration pattern. Unless something has changed, the standard chessboard must be fully visible in each calibration image - partial images don’t work (which makes it hard to get points near the corners).

Good luck.