Fisheye undistortImage and undistortPoints appear inconsistent?

I have an image with fisheye distortion and the corresponding matrices. When I undistort both the image and some known points, the points shift relative to the image. The only related questions I can find are from people who don’t provide the Knew/P matrices.

This is a snippet from the result. The top image is the original distorted image with distorted points. The bottom image is the undistorted image with undistorted points.
As you can see, the points near the middle line up fine, but the mismatch grows with distance from the principal point. What am I doing wrong?

Here is the code I use for the undistortions:

undistorted_img = cv2.fisheye.undistortImage(
    distorted=img, K=K, D=d, Knew=K)

undistorted_ip = cv2.fisheye.undistortPoints(
    ip, K=K, D=d, P=K)

I already tried increasing the criteria to

criteria=(cv2.TERM_CRITERIA_COUNT | cv2.TERM_CRITERIA_EPS, 50, 0.03)

but that doesn’t help (decreasing it makes things a lot worse).

Any help would be greatly appreciated!

A few comments:

  1. I would suggest increasing your cv2.TERM_CRITERIA_COUNT value to 200 (or more) and setting your TERM_CRITERIA_EPS to 0, just for testing / understanding how they affect things. I have definitely encountered situations where I need a lot of iterations; I currently use 150 in some cases, though that isn’t necessarily a well-honed value (more likely it was chosen out of frustration and left high). Setting the EPS value to 0 makes sure the iterative process isn’t stopping prematurely. Maybe more iterations would help?

  2. I last worked with the fisheye model 4+ years ago, so this info might be out of date. At the time I found the calibration process didn’t converge as reliably, and that the support functions for the fisheye model weren’t as robust / comprehensive. I don’t remember the details, but I decided that the standard model was a better bet for me.

  3. Have you tried the standard calibration with the rational model? I found that it does a very good job with some lenses with pretty significant distortion.

  4. I have wondered (but never tried) if it would be possible to compute an inverse distortion function that you could evaluate directly (instead of the iterative refinement approach). Surely it’s possible, but I don’t know if one would need to write separate functions, or if there is a clever way to get it to work within the OpenCV framework.

In any case, what you are trying to accomplish is definitely possible, so don’t give up! :slight_smile:

Thanks for your reply!

  1. Setting TERM_CRITERIA_EPS to 0 results in all points going to [-1000000, -1000000]. I also tried 500 iterations with EPS 1e-6, and that gives the same results as the default. It was worth a try!

  2. (and 3) In practice this is just a step for converting between world and image coordinates. The program can use pinhole as well (which works fine), but for some images (like the example) the pinhole model gives a very poor fit. projectPoints works well for world->image with both models. The reverse method doesn’t exist in OpenCV, so that is done using undistort + some math. This works fine for pinhole and for most of the fisheye images, except for cases like this :frowning:

  3. (actually 4) I don’t really understand why it isn’t. As far as I understand it, it is a pretty straightforward mapping function, but it has been a while since I did math like this.

If all else fails I can always get the inverse with a lookup table, but that would be terribly ugly. I just need x == undistort(distort(x)) to be true, which seems like a reasonable assumption to make? So either I am doing the call wrong somehow, or there is a bug in OpenCV.
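For what it’s worth, a lookup table may be less ugly than it sounds, since the fisheye model only distorts the radial angle θ, so a one-dimensional table is enough. A minimal sketch with np.interp, assuming the cv2.fisheye polynomial θ_d = θ(1 + k1·θ² + k2·θ⁴ + k3·θ⁶ + k4·θ⁸) and made-up, monotone coefficients:

```python
import numpy as np

# Hypothetical fisheye coefficients; assumed monotone over the tabulated range.
k = np.array([0.1, -0.02, 0.003, 0.0])

# Tabulate theta -> theta_d once over the lens's angular range (here up to ~69 deg).
theta = np.linspace(0.0, 1.2, 2048)
t2 = theta**2
theta_d = theta * (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4)

def undistort_theta_lut(td):
    # np.interp requires theta_d to be increasing, i.e. a monotone distortion curve.
    return np.interp(td, theta_d, theta)
```

With a couple of thousand samples the interpolation error is far below a pixel; the catch is the monotonicity assumption, which a bad calibration can violate at large angles.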

even if you could assume monotonicity of the lens distortion polynomial, and that it’s nothing worse than a polynomial, the task is still finding roots (f(x) = y, solved for x). everyone once learned the closed form solution for quadratic equations and some people know there exist hard-to-remember and page-filling forms for 3rd and maybe 4th orders… but that’s about it.

don’t go looking for closed form solutions. at best, you could simplify/refactor the math and employ lookup tables, or just do a few rounds of iteration, which is still extremely cheap.
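for illustration, a few Newton steps on the fisheye angle polynomial are enough; a minimal numpy sketch, assuming the cv2.fisheye model θ_d = θ(1 + k1·θ² + k2·θ⁴ + k3·θ⁶ + k4·θ⁸), with made-up coefficients and hypothetical function names:

```python
import numpy as np

# Forward fisheye angle distortion, as in the cv2.fisheye model:
# theta_d = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8)
def distort_theta(theta, k):
    t2 = theta * theta
    return theta * (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4)

# Invert it with a few Newton steps instead of a plain fixed-point iteration.
def undistort_theta(theta_d, k, iters=10):
    theta = np.asarray(theta_d, dtype=float).copy()  # theta_d is a decent initial guess
    for _ in range(iters):
        t2 = theta * theta
        f = theta * (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4) - theta_d
        df = 1 + 3*k[0]*t2 + 5*k[1]*t2**2 + 7*k[2]*t2**3 + 9*k[3]*t2**4
        theta = theta - f / df  # Newton update
    return theta
```

for well-behaved coefficients this hits machine precision in a handful of steps; for extreme coefficients the polynomial can stop being monotone at large angles, and then no iteration scheme recovers a unique θ.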

Right, I see. I guess I will try to find the mistake in the opencv source.

I also found this post which seems to be related: Odd result on cv::fisheye::undistortPoints, and also the same thing on Stack Overflow. Too bad no one has figured it out yet (or at least not posted it).

Dealing with the same problem right now. Have you made any progress since you posted this a year ago?

I’m having no luck with adjusting termination criteria for cv2.fisheye.undistortPoints()

I’m trying to use the XY map given to me by cv2.fisheye.initUndistortRectifyMap(), but in order to use that for undistorting points I would have to invert it, which is not trivial; none of the solutions online has given me usable results so far.

if you need to undistort points instead of images, there is a function that does this numerically:

https://docs.opencv.org/4.x/db/d58/group__calib3d__fisheye.html#ga5c5be479d6ff9304ed2298b314c361fc

if that gives you weird results, perhaps file an issue or find an existing issue and add your voice.

if you want to invert a map instead, that too can be done:
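a rough numpy sketch of one common approach (all helper names made up): treat the inverse map as a fixed-point problem and solve it with a damped iteration plus bilinear sampling.

```python
import numpy as np

def sample_bilinear(field, pts):
    """Bilinearly sample an (H, W, 2) field at float (x, y) points of shape (..., 2)."""
    h, w = field.shape[:2]
    x = np.clip(pts[..., 0], 0.0, w - 1.001)
    y = np.clip(pts[..., 1], 0.0, h - 1.001)
    x0 = np.floor(x).astype(int)
    y0 = np.floor(y).astype(int)
    fx = (x - x0)[..., None]
    fy = (y - y0)[..., None]
    top = field[y0, x0] * (1 - fx) + field[y0, x0 + 1] * fx
    bot = field[y0 + 1, x0] * (1 - fx) + field[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def invert_map(fwd, iters=25, damping=0.5):
    """Approximately invert a dense (H, W, 2) pixel map.

    fwd[y, x] is the source coordinate sampled for destination pixel (x, y);
    the result inv satisfies fwd(inv(p)) ~= p away from the image borders.
    """
    h, w = fwd.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float64),
                         np.arange(h, dtype=np.float64))
    ident = np.stack([xs, ys], axis=-1)
    inv = ident.copy()
    for _ in range(iters):
        err = sample_bilinear(fwd, inv) - ident  # how far fwd(inv) misses identity
        inv -= damping * err                     # damped fixed-point update
    return inv
```

this converges when the forward map is a modest, smooth deformation (which undistort maps usually are); near the border the clipping makes the estimate unreliable. for points specifically, cv2.fisheye.undistortPoints remains the intended tool.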