I’m a bit confused about how the termination criteria work in cv2.calibrateCamera. As I understand it, the criteria can be specified in the following format: criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, n, EPSILON).

In this case, the calibration process terminates when either condition is met: the number of iterations exceeds n, or the error falls below EPSILON.

I understand the general idea behind these criteria, but I’m struggling to see anything meaningful in practice. I set EPSILON = 2.5e-16, which should take a long time to meet, and tried iteration counts from 1 to 1000. Surprisingly, I get very close results (identical up to the 10th decimal) and very similar computation times for every iteration count. This makes me think it’s not working the way I think it is.

So my question is: what exactly are the iteration count and the error that the solver checks when it decides whether to stop the search?

So epsilon is the minimum change in the parameter vector relative to the parameter vector of the previous step.

Something like L2_Norm(Param_n - Param_n-1) / L2_Norm(Param_n-1)

So for calibrateCamera, the error is the change in the parameter vector, not the reprojection error of the points (which is perhaps what you were expecting). Unfortunately, the error measured this way isn’t very intuitive. I think what you are seeing is that the optimization converges relatively quickly, and the algorithm never actually runs a large number of iterations, hence you see similar computation time (and results) for MAX_ITER = 50 or 500.
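A quick sketch of that relative-change measure, with made-up parameter values:

```python
import numpy as np

def relative_change(param_n, param_prev):
    # Relative change between successive parameter vectors --
    # the quantity compared against EPS, as described above.
    return np.linalg.norm(param_n - param_prev) / np.linalg.norm(param_prev)

# Made-up parameter vectors (fx, fy, cx, cy) for two successive steps.
p_prev = np.array([800.0, 800.0, 320.0, 240.0])
p_n = np.array([800.0004, 799.9996, 320.0, 240.0])
print(relative_change(p_n, p_prev))  # a tiny relative step
```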

That’s my take on what is going on, but you should dig into the code yourself if you want to be sure.

@Steve_in_Denver, thank you for the thorough response. Interesting that it measures the change in the parameter vector. I guess in this case Param_n is the parameter vector that improved the calibration over Param_n-1, so if the calibration is no longer improving, they will stay essentially the same and trigger the criterion.

I wonder if there is a way to make the function search for more iterations, or whether it simply doesn’t make sense to run the algorithm (Levenberg-Marquardt) longer, since it will just keep producing the same Param_n.

There is no sense in continuing to iterate. You are probably hoping for better reprojection error with more iterations, but if the algorithm has already found the best set of parameters to model the input data, then searching further will yield nothing better. Think of this process as a higher-dimensional version of finding a line of best fit. If you have a collection of points that mostly follow a linear pattern (but with some noise / error), then you can find the line that fits the data better than any other line, but it will still have residual error (since the input data wasn’t perfect). There is no better fit, so no sense in searching further.
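To make the line-fit analogy concrete, here is a small sketch with made-up noisy data; the residual of the best-fit line is a hard floor that no amount of further searching can lower:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)  # linear trend plus noise

# Closed-form least squares: the best-fit line is unique, so no amount
# of extra iterating can push the residual below this floor.
coeffs, residuals, *_ = np.polyfit(x, y, 1, full=True)
print(coeffs)     # close to the true slope 2.0 and intercept 1.0
print(residuals)  # nonzero: the leftover error of the best possible line
```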

To improve your reprojection error you will have to improve your input data - the accuracy of the 3D points and / or the accuracy of the image points. If you have a lens with significant distortion, you might also consider using a different distortion model. I have found the rational model to be quite good at modeling a wide range of lenses, including those with significant distortion.

To be clear, the “can’t get any better” situation isn’t the fault of the Levenberg-Marquardt algorithm; it’s fundamental to the problem, so there isn’t some better algorithm that would achieve better results, short of something that improves the data (or a model that better reflects the actual process you are trying to model). One thing you could try is to filter outliers and run calibrateCamera on the filtered data. This is justified if you have true outliers, but be careful not to filter blindly (you can always throw away the worst point and achieve better scores, but the resulting accuracy might actually be diminished).