Cannot get good camera calibration

Hello, I am trying to do a basic calibration of my two USB cameras, with little success so far. I am using a chessboard with 9x14 inner corners, captured at 1080p, and have tried different sets of calibration data, but no matter what I try, my undistorted image always looks a fair bit worse than the original, and the ret value given by cv2.calibrateCamera() is over 100, which, if I understand correctly, is quite large. At the same time, looking at the results of cv2.drawChessboardCorners(), all the corners appear to be found fairly accurately. I’m not sure what I’m doing wrong and would appreciate any pointers. My code is below:

import numpy as np
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((9*14,3), np.float32)
objp[:,:2] = np.mgrid[0:14,0:9].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

images = glob.glob('L*.png')

for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (9,14),None)

    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)

        corners2 = cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)
        imgpoints.append(corners2)

        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (9,14), corners2,ret)
        cv2.imshow('img',img)
        cv2.waitKey(1)

cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
print(ret)
img = cv2.imread('LImage1.png')
h, w = img.shape[:2]
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
x, y, w, h = roi
dst = dst[y:y + h, x:x + h]
cv2.imwrite('undistorted.png', dst)

Have you read this tutorial: interactive calibration?

Just finished reading it. As far as I can tell it’s not a tutorial so much as a description of an application for interactive calibration, and I can’t seem to find the application itself… I would be happy to try it if I can find it though.

please post some of those pictures you took

I’ll have to do multiple posts since it’s not letting me upload more than one image. Here is one of the calibration pictures:

Here is the same “undistorted” image, when using a small dataset for calibrateCamera (if I use my whole dataset, it looks way worse).

Finally, here is an image from the calibration with the corners found:

ok. I suspect that your dataset contains boards that stay in the center of the view… you need to cover the whole view, especially the corners of the picture, with corner points from the board. not in every picture, but throughout the dataset.

Thanks for your response!
My dataset contains 64 boards with all kinds of views, including ones close to the edges, just not the very edges themselves (I am going for a stereo calibration, so if an image were to contain the very edges on one camera, the board could go offscreen on the other). If you think that poses a problem, I’ll try to capture a specific dataset for one camera where all the points are covered by corners, and report back what happens.

yes, the board needs to be completely in view in both views.

if you believe that’s not the issue, I have one more suggestion… take care that the chessboard corner finding API labels these points consistently between both views. see if you can find a pair of pictures where the color coding indicates mismatching order.

get a single live view, rotate the board around, observe at what orientations the algorithm flips its labeling order. make sure to stay away from that orientation.

oh, and make sure to hold your board very very still. your camera likely has rolling shutter, which can introduce significant distortions.

Thanks for the suggestion. I’ll check out whether corners between views are consistent, but was first hoping to at least get one camera working. As you suggested in your first comment, I made a new dataset which covered the edges as well as the middle, for one camera. I then tried undistorting an image, with the following results:


The main image is the original, while the top right corner is an undistorted ROI version. Again, the original image seems far superior to the undistorted one. In addition, my reprojection error (ret from cv2.calibrateCamera()) is 112. Could it be that the camera is doing its own procedure to undistort the image? It looks to me like straight lines are actually straight in the main picture, e.g. the shelf lines. Could this be throwing the algorithm off?

your camera doesn’t remove lens distortion. take a ruler to your picture, you’ll see lines curve slightly.

even if it did, calibrateCamera would work regardless and just say little to no distortion.

a reprojection error of 112 is severe.

you could provide your data, and your code if it’s custom.

Using the straight-line tool in Paint, I actually can’t detect any curvature in the straight lines in that particular camera’s image. Moot point since you say the algorithm wouldn’t struggle with that anyway.

I just tried a different type of USB camera, which does visibly distort a bit, and took 16 completely still pictures of the chessboard (neither the camera nor board was moving) to account for possible rolling shutter problems. My reprojection error did go down from about 105 for that camera with handheld board patterns to 78 for the still ones, but that’s still strange, as it implies the projected points are on average scattered a twentieth to a tenth of the image size from where the found points were.

My code is in the first post, it’s actually pretty much textbook code of the calibration procedure. How can I provide the data? This forum lets me upload a single picture at a time, and I’m sure spamming over a dozen posts would be frowned upon.

Edit: upon taking more still board images with the second camera, I got a reprojection error of 114 again. I don’t think the 78 was indicative of improvement, just luck.

there is some curvature

reprojection error is calculated after applying the distortion model. if the lens conforms to the model at all, even if the lens distorts severely, the reprojection error should be tiny, in the order of a few pixels or less.

what you get is because the points do not make sense. we should find out what the points actually are.

for pictures, imgur works. for general/binary data, people also use google drive, dropbox, …

Makes sense. Here is my data for the second camera I tried: CalibrationData - Album on Imgur

Also, here is an image to undistort from the same camera:


As mentioned, the code is in the first post; it will run as long as the calibration images matching “L*.png” and a picture named “Image.png” are in the same folder as the Python file. I used python3 to run it. It gives a reprojection error of a bit over 78. One thing to note is that the algorithm does not find chessboard corners in some of the images, so if you have a slow machine it might take a while to run. I’ll try to clean up the dataset to get rid of the bad images.

Edit: here’s a cleaned up dataset that runs quickly: CalibrationData2 - Album on Imgur
It still gives a reprojection error of about 78, and a badly distorted “undistorted” image.

the order of model points and image points also needs to match. I’m fixing that right now and checking how that works.

findChessboardCorners also works better with cv2.CALIB_CB_FAST_CHECK | cv2.CALIB_CB_ADAPTIVE_THRESH

aaand it does. 0.3111048508286681

use objp[:,:2] = np.mgrid[0:9,0:14].T.reshape(-1,2)

you also have a bug at the end. this line needs to be dst = dst[y:y + h, x:x + w]
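Putting both fixes together, a sketch of the corrected parts (the (9, 14) pattern size is taken from the original post):

```python
import numpy as np

# Object-point grid matching the (9, 14) pattern passed to
# findChessboardCorners: x runs 0..8 within a row, y runs 0..13 across
# rows, the same order in which the detector reports corners.
objp = np.zeros((9 * 14, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:14].T.reshape(-1, 2)
print(objp[0][:2], objp[8][:2], objp[9][:2])  # [0. 0.] [8. 0.] [0. 1.]

# And the ROI crop after undistort: width pairs with x, height with y.
# x, y, w, h = roi
# dst = dst[y:y + h, x:x + w]   # was x:x + h
```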


If all your calibration photos look similar to those shown on Imgur, there is a huge chance the calibration algorithm is dealing with numerical instability due to the checkerboard facing the camera at only one angle. Make sure you change the orientation of the calibration target.
If you could share all your calibration pictures on a cloud drive I can see what I will get with them.

@crackwitz Not understanding that line of code fully was the problem all along I see… Thank you so much!

@Witek It’s possible, but I think @crackwitz nailed it, as the reprojection error I’m getting now is tiny. I have a larger calibration dataset for my first camera (with different board orientations facing the camera) that I’ll try now and report back if that is now giving a better result.

oh, while we’re at it: for stereo calibration, you’ll want to scale your object points. millimeters is a good idea. take a ruler to your board and measure lengths across multiple increments, in each direction separately. printers can be a little liberal about their outputs.

objp *= (15.10, 14.95, 0) # or something like that

Let me see if I understand correctly: after creating the objp:

objp = np.zeros((9*14,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:14].T.reshape(-1,2)

I should multiply them all by the number of millimeters in each of the square dimensions (e.g. for a 16mm tall, 15.9mm wide square):

objp *= (16, 15.9, 0)

Is that correct? I’m guessing it’s worth breaking a caliper out, since you mention hundredths of a millimeter in your example? Wish that information was in the tutorials!
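For what it’s worth, the scaling step sketched with the example numbers above (which axis corresponds to “width” depends on how the detector walks your particular board, so measure along the direction each index actually runs):

```python
import numpy as np

# Grid as before, then scaled to millimetres. Multiplying z by 1 (or 0)
# makes no difference, since z is already zero everywhere.
objp = np.zeros((9 * 14, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:14].T.reshape(-1, 2)
objp *= (16.0, 15.9, 1.0)   # example square size: 16 mm by 15.9 mm

# Neighbouring corners are now one physical square apart:
print(objp[1][:2], objp[9][:2])  # x step 16 mm, y step 15.9 mm
```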