I have an image with fisheye distortion and the corresponding matrices. When I undistort the image and some known points, the points shift relative to the image. The only related questions that I can find are from people who don't provide the Knew/P matrices.

This is a snippet from the result. The top image is the original distorted image with distorted points. The bottom image is the undistorted image with undistorted points.
As you can see, the points in the middle are fine, but they get worse further away from the principal point. What am I doing wrong?

I would suggest increasing your cv2.TERM_CRITERIA_COUNT value to 200 (or more) and your TERM_CRITERIA_EPS to 0 - just for testing / understanding how they affect things. I have definitely encountered situations where I need a lot of iterations. I currently use 150 in some cases, but that isn't necessarily a well-honed value (more likely it was chosen out of frustration and left at a high level). I suggest setting the EPS value to 0 to make sure the iterative process isn't stopping prematurely. Maybe more iterations would help?
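To make the COUNT/EPS interplay concrete, here is a self-contained sketch (plain Python, no cv2) of the kind of fixed-point loop that fisheye undistortion runs internally: solving theta from theta_d = theta * (1 + k1*t^2 + k2*t^4 + k3*t^6 + k4*t^8). The distortion coefficients here are made-up illustrative values, not from any real calibration:

```python
def distort_theta(theta, k):
    # OpenCV-style fisheye forward model: theta_d as odd polynomial in theta
    t2 = theta * theta
    return theta * (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4)

def undistort_theta(theta_d, k, max_count=10, eps=1e-8):
    """Fixed-point iteration; stops after max_count rounds or when the
    update is smaller than eps (mirrors TERM_CRITERIA_COUNT / _EPS)."""
    theta = theta_d  # initial guess
    for _ in range(max_count):
        t2 = theta * theta
        scale = 1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4
        theta_new = theta_d / scale
        if abs(theta_new - theta) < eps:
            return theta_new
        theta = theta_new
    return theta

k = [0.1, -0.02, 0.003, -0.0004]   # hypothetical coefficients
theta_true = 1.2                    # ~69 deg off-axis, i.e. far from center
theta_d = distort_theta(theta_true, k)

few = undistort_theta(theta_d, k, max_count=5)
many = undistort_theta(theta_d, k, max_count=200, eps=0.0)
```

With only a few iterations the recovered angle is still off by a small residual; cranking the count up (with eps=0 so it never bails early) drives it to machine precision, which is the effect the suggestion above is probing for.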

I last worked with the fisheye model 4+ years ago, so this info might be out of date. At the time I found the calibration process didn’t converge as reliably, and that the support functions for the fisheye model weren’t as robust / comprehensive. I don’t remember the details, but I decided that the standard model was a better bet for me.

Have you tried the standard calibration with the rational model? I found that it does a very good job with some lenses with pretty significant distortion.

I have wondered (but never tried) if it would be possible to compute an inverse distortion function that you could evaluate directly (instead of the iterative refinement approach). Surely it’s possible, but I don’t know if one would need to write separate functions, or if there is a clever way to get it to work within the OpenCV framework.

In any case, what you are trying to accomplish is definitely possible, so don’t give up!

Setting TERM_CRITERIA_EPS to 0 results in all points going to [-1000000, -1000000]. I tried with 500, 1e-6 and that gives the same results as the default. It was worth a try!

(and 3) In practice this is just a step for converting between world and image coordinates. The program can use pinhole as well (which works fine), but for some images (like the example) the pinhole model gives a very poor fit. projectPoints works well for world->image for both models. The reverse method doesn't exist in OpenCV, so that is done using undistort + some math. This works fine for pinhole and for most of the fisheye images, except for cases like this one.
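For reference, the "undistort + some math" reverse step can be sketched in a few lines of numpy for the distortion-free pinhole case: map a pixel back to a world point on a plane at known depth. The intrinsics K and the depth below are made up; in the real pipeline the pixel would first go through cv2.undistortPoints / cv2.fisheye.undistortPoints before this back-projection:

```python
import numpy as np

# Hypothetical intrinsics for a 1280x720 camera
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def pixel_to_world(u, v, Z):
    """Back-project pixel (u, v) onto the plane at depth Z (camera frame)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized ray direction
    return ray * Z                                   # scale so that z == Z

def world_to_pixel(X):
    """Forward pinhole projection (what projectPoints does, minus distortion)."""
    p = K @ X
    return p[:2] / p[2]

X = pixel_to_world(700.0, 400.0, 2.5)   # image -> world on plane z = 2.5
u, v = world_to_pixel(X)                 # world -> image round trip
```

The round trip should land exactly back on the starting pixel; the fisheye case is the same idea with the angle inversion inserted between the pixel and the normalized ray.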

(actually 4) I don't really understand why it isn't. As far as I understand it, it is a pretty straightforward mapping function, but it has been a while since I did math like this.

If all else fails I can always get the inverse with a lookup table, but that would be terribly ugly. I just need x == undistort(distort(x)) to be true, which seems like a reasonable assumption to make? So either I am doing the call wrong somehow, or there is a bug in OpenCV.
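For what it's worth, the lookup-table fallback is less ugly than it sounds: since theta_d(theta) is monotonic over the working range, np.interp does the whole job. The coefficients k below are hypothetical, stand-in values:

```python
import numpy as np

k = [0.1, -0.02, 0.003, -0.0004]   # hypothetical distortion coefficients

# Dense sample of the forward fisheye-style model over the working FOV
theta = np.linspace(0.0, 1.4, 10000)
t2 = theta**2
theta_d = theta * (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4)

def undistort_theta_lut(td):
    # np.interp needs increasing x, which monotonicity guarantees here
    return np.interp(td, theta_d, theta)
```

With 10k samples the linear-interpolation error is negligible for pixel-level work, and the round-trip property x == undistort(distort(x)) holds to high precision.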

Even if you could assume monotonicity of the lens distortion polynomial, and that it's nothing worse than a polynomial, the task is still root finding (f(x) = y, solved for x). Everyone once learned the closed-form solution for quadratic equations, and some people know there exist hard-to-remember, page-filling forms for 3rd and maybe 4th orders... but that's about it.

Don't go looking for closed-form solutions. At best, you could simplify/refactor the math and employ lookup tables, or just do a few rounds of iteration, which is still extremely cheap.
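To illustrate how cheap the iteration is: Newton's method on f(theta) = distort(theta) - theta_d converges quadratically, so a handful of rounds reaches machine precision. Coefficients k are hypothetical again:

```python
def newton_undistort(theta_d, k, rounds=5):
    """Invert the fisheye-style odd polynomial by Newton's method."""
    theta = theta_d  # theta_d is a good starting guess for moderate distortion
    for _ in range(rounds):
        t2 = theta * theta
        # f(theta) = theta*(1 + k1 t^2 + k2 t^4 + k3 t^6 + k4 t^8) - theta_d
        f = theta * (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4) - theta_d
        # analytic derivative of the odd polynomial
        df = 1 + 3*k[0]*t2 + 5*k[1]*t2**2 + 7*k[2]*t2**3 + 9*k[3]*t2**4
        theta -= f / df
    return theta

k = [0.1, -0.02, 0.003, -0.0004]   # hypothetical coefficients
t2 = 1.2**2
theta_d = 1.2 * (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4)
theta = newton_undistort(theta_d, k)   # recovers ~1.2 in 5 rounds
```

Five multiply-heavy rounds per point is nothing next to the rest of an image pipeline, which is why chasing a closed form buys you very little.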