Cannot get good camera calibration

I managed to get stereo calibration going, but can’t get cv2.undistortPoints to work. Always an assertion error of the sort:

(-215:Assertion failed) npoints >= 0 && src.isContinuous() && (depth == CV_32F || depth == CV_64F) in function 'undistortPoints'

This is despite checking the proper function signature, making sure the image is nonempty, etc. Is that function known to work? Online examples of its use are scarce.

correct, as far as I can see. be careful about which dimension is which. when you give 9x14, it means 9 wide 14 “tall”, i.e. “portrait” shaped. all the functions label the corners accordingly (row-wise like reading text). your objp are ordered the same, and the coordinates are x in first position, y in second position.
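for illustration, a minimal sketch of that ordering for your 9x14 pattern (the 16.2 mm square size is taken from your code below):

import numpy as np

pattern_size = (9, 14)   # (points across, points down), as given to findChessboardCorners
square_size = 16.2       # mm, from the posted code

objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp[:, :2] *= square_size

print(objp[:3])   # (0,0,0), (16.2,0,0), (32.4,0,0): x runs first, row-wise like reading text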

nah. regular ruler plus some guessing between marks is plenty good enough. I’m saying the printer could have caused a few percent of difference between directions, so the nominally square grid is really rather rectangularly spaced. and of course it might add some scaling too. you can easily improve your calibration by an order of magnitude by measuring. just measure across furthest corners and divide by number of steps. if you have 9 points across the short side, that’s 8 steps, and at 16 mm I’d expect 128 mm there. if you see 126 and a half mm, you get 126.5/8 = ~15.8 mm.
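in code, the same arithmetic (126.5 mm is the example value above, not a real measurement):

points_short_side = 9
steps = points_short_side - 1     # 9 corners across means 8 gaps
measured_mm = 126.5               # distance between the two outermost corners (example value)
square_size = measured_mm / steps
print(square_size)                # 15.8125 instead of the nominal 16 mm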

edit: this will give you distances (stereo baseline distance, etc) that are closer to reality but it doesn’t mean the reprojection error will go down. the reprojection error (also) depends on how the corners are localized. the APIs might use cornerSubPix but that assumes a linear color space, and most data isn’t in a linear color space. also pretty much all consumer cameras sharpen the picture and that introduces error as well.
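if you want to experiment with that, you could roughly linearize the gray image before refining. a sketch assuming the frames are approximately gamma-2.2 / sRGB encoded (an assumption, not something verified here):

import cv2
import numpy as np

img = cv2.imread('LImage0.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# approximate un-gamma; cornerSubPix accepts single-channel 8-bit or float images
gray_lin = (((gray.astype(np.float32) / 255.0) ** 2.2) * 255.0).astype(np.float32)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
found, corners = cv2.findChessboardCorners(gray, (9, 14))
if found:
    corners = cv2.cornerSubPix(gray_lin, corners, (11, 11), (-1, -1), criteria)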


assertion covers several things.

  • negative number of points?? how could that even happen
  • data isn’t continuous, which can be fixed with a .copy() of the points
  • points aren’t floats, which is fixed by .astype(np.float32) or .astype(np.float64).

see which it is. I’d avoid fixing things that don’t need a fix because the fixes do cost something.
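a quick way to see which condition trips, with a stand-in array for whatever you pass as src:

import numpy as np

pts = np.zeros((10, 1, 2))           # stand-in for whatever you pass as src

print(pts.size > 0)                  # any points at all?
print(pts.flags['C_CONTIGUOUS'])     # src.isContinuous()
print(pts.dtype)                     # must be float32 or float64

# fixes dtype and contiguity in one call, copying only if needed:
pts = np.ascontiguousarray(pts, dtype=np.float32)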

You’re right, that’s a better way of measuring than measuring individual squares like I was doing earlier.

About the assertion, I actually thought it might be the npoints, as I’ve seen something similar before when some function didn’t have an image to work with (negative default value perhaps?). I tried both the .copy() and .astype(np.float64) fixes and the same error persists. This leads me to believe it’s the npoints, but I don’t know how to troubleshoot further after printing my input array and all the other arguments to make sure they exist.

how do you call undistortPoints? make the issue reproducible.

the assertion comes from the checks at the start of undistortPoints in the OpenCV source.

I’m posting my code and data set (a small one for now, having trouble getting both cameras working at once) below. It attempts to perform a stereo calibration followed by undistortion, where it runs into the error.
Data set: Stereo_Dataset - Album on Imgur

Code:

import numpy as np
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(8,13,0)
objp = np.zeros((9*14,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:14].T.reshape(-1,2)
objp *= (16.2, 16.2, 0)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints1 = [] # 2d points in image plane.
imgpoints2 = [] # 2d points in image plane.

images1 = sorted(glob.glob('L*.png'))
images2 = sorted(glob.glob('R*.png'))

for fname1, fname2 in zip(images1, images2):
    img1 = cv2.imread(fname1)
    gray1 = cv2.cvtColor(img1,cv2.COLOR_BGR2GRAY)
    img2 = cv2.imread(fname2)
    gray2 = cv2.cvtColor(img2,cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret1, corners1 = cv2.findChessboardCorners(gray1, (9,14), cv2.CALIB_CB_FAST_CHECK | cv2.CALIB_CB_ADAPTIVE_THRESH)
    ret2, corners2 = cv2.findChessboardCorners(gray2, (9,14), cv2.CALIB_CB_FAST_CHECK | cv2.CALIB_CB_ADAPTIVE_THRESH)

    # If found, add object points, image points (after refining them)
    if ret1 and ret2:
        objpoints.append(objp)
        
        corners1 = cv2.cornerSubPix(gray1,corners1,(11,11),(-1,-1),criteria)
        corners2 = cv2.cornerSubPix(gray2,corners2,(11,11),(-1,-1),criteria)
        imgpoints1.append(corners1)
        imgpoints2.append(corners2)

cv2.destroyAllWindows()

ret1, mtx1, dist1, rvecs1, tvecs1 = cv2.calibrateCamera(objpoints, imgpoints1, gray1.shape[::-1], None, None)
ret2, mtx2, dist2, rvecs2, tvecs2 = cv2.calibrateCamera(objpoints, imgpoints2, gray2.shape[::-1], None, None)

h,  w = gray1.shape[:2]
newcameramtx1, roi1 = cv2.getOptimalNewCameraMatrix(mtx1, dist1, (w, h), 1, (w, h))
newcameramtx2, roi2 = cv2.getOptimalNewCameraMatrix(mtx2, dist2, (w, h), 1, (w, h))

ret, _, _, _, _, rmtx, tvec, _, _ = cv2.stereoCalibrate(objpoints, imgpoints1, imgpoints2, mtx1, dist1, mtx2, dist2,
                                                        gray1.shape[::-1], None, None, None, None,
                                                        cv2.CALIB_FIX_INTRINSIC, criteria)
size = (h, w)

recL, recR, projL, projR, dispToDepthMap, leftROI, rightROI = cv2.stereoRectify(mtx1, dist1, mtx2, dist2, size, rmtx, tvec, None, None, None, None, None, cv2.CALIB_ZERO_DISPARITY)
leftMapX, leftMapY = cv2.initUndistortRectifyMap(mtx1, dist1, recL, projL, size, cv2.CV_32FC1)
rightMapX, rightMapY = cv2.initUndistortRectifyMap(mtx2, dist2, recR, projR, size, cv2.CV_32FC1)

img1 = cv2.imread('LImage0.png')
img2 = cv2.imread('RImage0.png')

dst = cv2.undistortPoints(img1, mtx1, dist1, recL, projL)
#dst = cv2.remap(img1, leftMapX, leftMapY, cv2.INTER_LINEAR)
x, y, w, h = leftROI
dst = dst[y:y + h, x:x + w]
cv2.imwrite('ImageL_undistort.png', dst)

dst = cv2.undistortPoints(img2, mtx2, dist2, recR, projR)
#dst = cv2.remap(img2, rightMapX, rightMapY, cv2.INTER_LINEAR)
x, y, w, h = rightROI
dst = dst[y:y + h, x:x + w]
cv2.imwrite('ImageR_undistort.png', dst)

The more I read, the more it seems the problem is my source matrix (img1 and img2). My knowledge of NumPy is next to none right now, so I don’t know how to properly reshape it.

you do not use undistortPoints on images, only on points.

on images you use remap and the maps that come out of initUndistortRectifyMap. or use undistort, but that’s only for the monocular case.
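roughly like this, assuming the variables from your script above (corners1, mtx1, dist1, recL, projL, leftMapX, leftMapY) are in scope:

import cv2
import numpy as np

img1 = cv2.imread('LImage0.png')

# images: remap with the maps from initUndistortRectifyMap
rectified_img = cv2.remap(img1, leftMapX, leftMapY, cv2.INTER_LINEAR)

# points (e.g. detected corners, shape (N, 1, 2), float32): undistortPoints
pts = np.ascontiguousarray(corners1, dtype=np.float32)
rectified_pts = cv2.undistortPoints(pts, mtx1, dist1, R=recL, P=projL)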

I see. Found that example online where someone was supposedly using it on an image. Remap does work, but returns a shifted and very cropped version of the original images. I will attempt to work on this further, thanks for all the help!

Edit: If I may ask another question, when using initUndistortRectifyMap to get the left and right maps for undistorting and rectifying the images, is the rotation matrix given as the argument the same for both the left and right maps? If I understand correctly, only one of the images needs to be rotated, but again some online examples use R1 for the left image and R2 for the right one. Right now I’m using the rotation matrix from stereoCalibrate for both and I managed to get my remaps working partially but the remapped stereo images don’t align vertically.

those functions are horribly under-documented. I wish I knew. they perform some magic. you’ll get both camera matrices “adjusted” in some magical way. it probably helps to have handy whatever book or publication these functions were implemented from.

I see, I’ll search the literature for more info.

It seems my original approach of using the results of the stereoRectify function in initUndistortRectifyMap was actually correct, as the documentation for those functions specifies. Plus the images, although mapped strangely (only a small portion of the original image remains visible), are in fact rectified; that is, they align vertically. Now if I can just figure out why so much of the image is mapped outside of the visible window. Visual inspection of the results of the stereoRectify function doesn’t reveal anything I would find obviously suspicious.
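To spell out that wiring, here is a sketch reusing the names from my code above. The alpha argument of stereoRectify (0 = crop to only valid pixels, 1 = keep the whole source image with black borders) seems to be the knob that decides how much of the image stays visible, though I haven’t verified that it explains the cropping I’m seeing:

recL, recR, projL, projR, dispToDepthMap, leftROI, rightROI = cv2.stereoRectify(
    mtx1, dist1, mtx2, dist2, size, rmtx, tvec,
    flags=cv2.CALIB_ZERO_DISPARITY, alpha=1)

# one rotation/projection pair per camera: recL/projL for the left map, recR/projR for the right
leftMapX, leftMapY = cv2.initUndistortRectifyMap(mtx1, dist1, recL, projL, size, cv2.CV_32FC1)
rightMapX, rightMapY = cv2.initUndistortRectifyMap(mtx2, dist2, recR, projR, size, cv2.CV_32FC1)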

some bugs in the code.

your size needs to be size = (w, h).
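the convention is easy to trip over: numpy gives you (rows, cols), OpenCV size arguments want (width, height).

h, w = gray1.shape[:2]    # numpy shapes are (rows, cols) = (height, width)
size = (w, h)             # what stereoRectify / initUndistortRectifyMap expect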

and that doesn’t fix it yet. your maps are flipped around in shape… I’m just giving this a cursory look so far.

mtx2’s cy doesn’t look too good. it should be closer to the center (540)

>>> np.set_printoptions(suppress=True)
>>> mtx1
array([[2615.06830245,    0.        , 1028.30453893],
       [   0.        , 2647.29984978,  395.43473923],
       [   0.        ,    0.        ,    1.        ]])
>>> mtx2
array([[2860.95278168,    0.        , 1179.43986443],
       [   0.        , 2865.14974333,  217.54450346],
       [   0.        ,    0.        ,    1.        ]])

your cameras need to be very well aligned. their optical axes should also be parallel, i.e. converge at infinity. don’t angle them relative to each other.

L/Rimage1 in that album also look like their names are assigned wrong. they don’t match at all.

Thanks for pointing that out. Since I posted my code, I attempted to change the w,h ordering given to stereoRectify and initUndistortRectifyMap, though now I just get black images. I think mtx2 is indeed off; I notice that undistorting the image with it (via undistort) gives a terribly warped result. Perhaps my problem is a bad calibration of one of the cameras, which means I need a new and larger dataset. I will work on obtaining that asap.

As far as alignment goes, they are mounted on a 3d-printed fixture which is as flat and vertically aligned as I could make it. Inconsistencies in the camera manufacturing process I obviously can’t control.

You’re right, I already deleted those two but the problem hasn’t gone away. They’re also swapped, I believe (that is, the L and R images correspond to the right and left cameras, respectively). I tried swapping them back, but the same bad results emerged.

I finally managed to get the two cameras working together on my system, and obtained a new calibration set, which in conjunction with correction of the bugs you caught gives me a good set of rectified images. I’m going to move on to attempting a depth map, but as far as this topic goes I believe everything is solved. Thanks a lot for your help!