Camera calibration for fisheye lens

I have a 120° FoV lens camera and I am trying to do camera calibration using 1) the cv2.calibrateCamera function and 2) the cv2.fisheye.calibrate function. I see that the results from function 1) are much more reliable. Are there any guidelines on which function to choose in which scenarios?
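
For context, here is roughly how I am calling the two functions (a sketch; obj_points / img_points stand in for my per-view pattern detections and image_size is (width, height)):

```python
import cv2
import numpy as np

# obj_points / img_points: per-view arrays from a detected calibration
# pattern (placeholders); image_size is (width, height).

# 1) Standard pinhole model with polynomial distortion:
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)

# 2) Fisheye (equidistant) model; it wants points shaped (N, 1, 3) and
# (N, 1, 2), and uses only four distortion coefficients:
obj_f = [p.reshape(-1, 1, 3).astype(np.float64) for p in obj_points]
img_f = [p.reshape(-1, 1, 2).astype(np.float64) for p in img_points]
K_f, D_f = np.zeros((3, 3)), np.zeros((4, 1))
flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
rms_f, K_f, D_f, rvecs_f, tvecs_f = cv2.fisheye.calibrate(
    obj_f, img_f, image_size, K_f, D_f, flags=flags)
```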

Calibration always needs good data; it is very sensitive to bad or insufficient data.

More isn’t better; the data has to be of the right quality. You probably have no data points (from a chessboard) in the corners of your camera’s view.

I found calibrateCamera to converge more reliably than the fisheye counterpart, but the fisheye calibration was much better at extrapolation.

My solution with a similar FOV lens was to use a ChArUco calibration target to get more data closer to the edges/corners of the image. I found the rational model to work very well when you have sufficient edge/corner data.
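
A minimal sketch of what that looks like (assuming objpoints / imgpoints are already gathered from the ChArUco detections):

```python
import cv2

# Sketch: CALIB_RATIONAL_MODEL enables the k4-k6 denominator terms, which
# worked well for me once I had enough edge/corner data.
flags = cv2.CALIB_RATIONAL_MODEL
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, image_size, None, None, flags=flags)
# dist now holds 8 coefficients: k1, k2, p1, p2, k3, k4, k5, k6
```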

As crackwitz points out, the quality of the data is critical. In my case I had to extend the calibration process to iteratively find more calibration target points. Basically (a rough code sketch follows the list):

  1. Run the ChArUco calibration to get an initial result.
  2. Use the current calibration result to predict the image location for other nearby chessboard corners (that is, ones that were not included in the current calibration, but are adjacent to ones that were used)
  3. Use goodFeaturesToTrack / cornerSubPix to get a precise image position for the new points.
  4. Add these new points to your calibration data and re-run calibration.
  5. Repeat steps 2-4 until some stopping criterion is met (e.g., the reprojection error stops improving).
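
A rough sketch of that loop (adjacent_unused_corners is a hypothetical placeholder; the real bookkeeping depends on your board layout and detection code):

```python
import cv2
import numpy as np

def iterative_calibration(images, image_size, obj_pts, img_pts, max_iters=5):
    """obj_pts / img_pts: per-view (N, 3) / (N, 2) float32 arrays from the
    initial ChArUco detection (step 1)."""
    prev_rms = np.inf
    for _ in range(max_iters):
        # Steps 1/4: (re)run calibration on the current point set.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, image_size, None, None)
        for i, img in enumerate(images):
            # Step 2: predict image locations of board corners adjacent to
            # ones already in use (hypothetical helper).
            cand = adjacent_unused_corners(obj_pts[i]).astype(np.float32)
            pred, _ = cv2.projectPoints(cand, rvecs[i], tvecs[i], K, dist)
            # Step 3: refine the predicted positions to sub-pixel accuracy.
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            refined = cv2.cornerSubPix(
                gray, pred.astype(np.float32), (5, 5), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
            # Step 4: grow the calibration data with the new points.
            obj_pts[i] = np.vstack([obj_pts[i], cand])
            img_pts[i] = np.vstack([img_pts[i], refined.reshape(-1, 2)])
        # Step 5: stop when the reprojection error stops improving.
        if prev_rms - rms < 1e-3:
            break
        prev_rms = rms
    return K, dist
```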

This might not normally be necessary, but I found that with high distortion the normal algorithm would fail to find points as you got closer to the edge. It seems like the estimate of where the chessboard corner is (based on the adjacent ArUco marker locations) wasn’t working well with high radial and perspective distortion, but I’m not sure.

I personally haven’t used ChArUco (“chessboard” ArUco) but it sounds like a VERY good idea, since you can just wave it around and not pay attention to all the points being in view, and still get points in all the corners of the picture… which is unlike the algorithm for regular chessboards and other patterns, which fails if any corner is out of view.
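
From a glance at the docs, the detection would look something like this (untested sketch, legacy cv2.aruco API from before OpenCV 4.7; newer versions use cv2.aruco.CharucoDetector instead):

```python
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)
board = cv2.aruco.CharucoBoard_create(7, 5, 0.04, 0.03, dictionary)

gray = cv2.imread("view.png", cv2.IMREAD_GRAYSCALE)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
if ids is not None:
    # Each detected marker identifies its position on the board, so this
    # works even when only part of the board is in view.
    n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(
        corners, ids, gray, board)
```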

I will try to calibrate with a ChArUco marker. I am working with 4K input and my camera’s operating distance is 6-8 meters. Will this have any impact on the samples I should collect? Is it necessary to take far-distance images for calibration?

I only work with fixed focus lenses (no zoom, no focusing), so everything is mechanically locked down. If you are using a camera with variable zoom or focus you might have problems with calibration unless you can disable those functions. Calibrating the lens at one zoom level and then using it at a different zoom level won’t work - the focal length and other parameters change. There might be a way to account for this, but I’ve never tried.

As for using the camera at 6-8 meters distance, my main suggestion would be to try to calibrate it at similar distances, or at least make sure the distance you use is still in focus - if it isn’t, your calibration accuracy will suffer. Unfortunately at those distances (assuming a fairly normal lens) I would think the target would have to be fairly large. You don’t have to fill the full image (or even most of it) for any given image in the calibration sequence, but you do want to cover the whole image area with your input image sequence. That might mean a large calibration target, or moving the target around a lot, or moving the camera around a lot.

You want to get a fair amount of depth change in your data set, so some of the images should be with the target closer to the camera, some further (always in focus, though), and some at a small angle, some with a larger angle, etc. Again, you want to “paint” the whole image area with samples. You might be able to get away with 6 images, or you might need 10 or more. My guess is that a meter or so of depth change would be sufficient for a camera that is 6-7 meters away (and you might be able to get away with a lot less than that).
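
One way to sanity-check that coverage is to accumulate every detected corner into a single image (illustrative sketch; img_pts holds the per-view detections and image_size is (width, height)):

```python
import cv2
import numpy as np

# Paint a disc at every detected corner so you can see which parts of
# the frame still lack samples.
w, h = image_size
coverage = np.zeros((h, w), np.float32)
for pts in img_pts:
    for x, y in pts.reshape(-1, 2):
        cv2.circle(coverage, (int(x), int(y)), 25, 1.0, -1)
cv2.imshow("coverage", coverage)
cv2.waitKey(0)
```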

If you need very high accuracy you will want to control your calibration target and the image capture process. I have run some simulations using synthetic data, and as I recall the target flatness was pretty important. Positional and scale accuracy of the target is also important, and depending on how it is produced (printed, etc.) this could be a factor. If accuracy isn’t critical this might not matter.

When you take a picture the camera and target should be fixed / not moving. It’s possible to get reasonable calibration results with a hand-held target but if you want better results I suggest a fixed camera and fixed target (meaning on a tripod or mount that is rigid and can be re-positioned). Again, how much effort you put into this depends on what you need in terms of accuracy. If you are just trying to make your distorted image look better, a basic calibration will probably work fine. If you are trying to make measurements with your calibrated camera you might want to put more effort into the calibration process.
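
If you do need measurement-grade accuracy, a per-view reprojection error check is a cheap diagnostic (sketch; assumes the outputs of cv2.calibrateCamera are in scope):

```python
import cv2
import numpy as np

# Views with a much larger error than the rest usually indicate motion
# blur or a bad detection in that particular capture.
for i in range(len(obj_pts)):
    proj, _ = cv2.projectPoints(obj_pts[i], rvecs[i], tvecs[i], K, dist)
    err = np.linalg.norm(proj.reshape(-1, 2) - img_pts[i].reshape(-1, 2), axis=1)
    print(f"view {i}: mean reprojection error {err.mean():.3f} px")
```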

Thanks Steve for the elaborate suggestion. I am fixing the zoom, and I agree with what you said. I have printed a pretty big chart (checkerboard) for calibration at those distances. I would say my calibration is not accurate but approximately right. Still working on those results.