Hi community, I’m trying to improve the results of my stereo vision system, and I’m running into a strange result when I evaluate it by fitting my checkerboard pattern to a plane.
I’m evaluating my system with the following process.
- Capture a series of images (following best practices) and run a calibration. (Reprojection error around 0.15, avg. epipolar error around 3.3)
- Using a different set of checkerboard images, I find the corners and recover their 3D coordinates from the pixel positions, disparities, and the Q matrix (there’s a small sketch of this step after this list).
- I fit these corner coordinates to a plane using a least-squares approach, and extract the residuals for analysis.
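For context, this is roughly the math behind that disparity-to-3D step. It’s a sketch only, not my real code: the focal length, principal point, baseline and corner values below are made-up placeholders, and the Q matrix is written in its idealised form with simplified signs.

import numpy as np

# Placeholder rectified intrinsics and baseline (NOT my calibration values)
f, cx, cy = 1400.0, 960.0, 540.0   # focal length / principal point in pixels
B = 0.12                           # baseline in metres

# Idealised Q matrix (the form cv.stereoRectify produces when both rectified
# principal points coincide; signs simplified for illustration)
Q = np.array([[1.0, 0.0, 0.0,     -cx],
              [0.0, 1.0, 0.0,     -cy],
              [0.0, 0.0, 0.0,       f],
              [0.0, 0.0, 1.0 / B, 0.0]])

# One made-up corner: pixel position in the left image plus its disparity
u, v, d = 1100.0, 600.0, 35.0

# Back-project [u, v, d, 1] through Q and dehomogenise, as in my evaluation code
X, Y, Z, W = Q @ np.array([u, v, d, 1.0])
point_3d = np.array([X, Y, Z]) / W

# Equivalent closed form: depth is inversely proportional to disparity
Z_closed_form = f * B / d
print(point_3d, Z_closed_form)     # point_3d[2] matches Z_closed_form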
My issue: the reconstructed points for my flat checkerboard come out curved. I get the same distribution of errors (residuals) across multiple different checkerboards, so I’m confident the board itself isn’t warped. See image 1 for the visualization.
The facts:
- I’ve been playing with my calibration options, and seem to get better results setting alpha to 0 in getOptimalNewCameraMatrix (sketched just after this list).
- My stereoCalibrate flags are CALIB_FIX_INTRINSIC and CALIB_FIX_FOCAL_LENGTH. I calibrate each camera independently before running the stereo calibration with CALIB_FIX_INTRINSIC, and since my cameras have the same lens and sensor I also set CALIB_FIX_FOCAL_LENGTH.
- I’m using 8mm focal length cameras and capturing at 1080p.
- I get the same pattern across multiple different calibration attempts, and with different checkerboards.
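Regarding the first point, this is what I mean by alpha. It’s a toy example with made-up intrinsics and distortion, not my actual numbers:

import numpy as np
import cv2 as cv

w, h = 1920, 1080
# Made-up intrinsics and distortion, just to make the call concrete
cameraMatrix = np.array([[1400.0,    0.0, 960.0],
                         [   0.0, 1400.0, 540.0],
                         [   0.0,    0.0,   1.0]])
dist = np.array([-0.25, 0.1, 0.0, 0.0, 0.0])

# alpha = 0: the new matrix is scaled so only valid (fully undistorted) pixels remain
newK0, roi0 = cv.getOptimalNewCameraMatrix(cameraMatrix, dist, (w, h), 0, (w, h))

# alpha = 1: every source pixel is kept, which can leave black borders after remapping
newK1, roi1 = cv.getOptimalNewCameraMatrix(cameraMatrix, dist, (w, h), 1, (w, h))

print(roi0, roi1)  # the valid-pixel ROI changes with alpha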
I can’t explain this curved behaviour, so I’m praying there’s a genius out there who has a deeper understanding of OpenCV’s stereo vision than I do.
I’ve edited the code down to only the sections that might be relevant.
*Since I’m new I can only attach one media item; I can follow up with an example evaluation image dataset and/or an averaged residuals heatmap across all my images. It echoes what’s shown in image 1.
Code file 1:
chessboardSize = (14,9)
frameSize = (1920,1080)
# termination criteria
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
...
for imgLeft, imgRight in zip(imagesLeft, imagesRight):
    imgL = cv.imread(imgLeft)
    imgR = cv.imread(imgRight)
    grayL = cv.cvtColor(imgL, cv.COLOR_BGR2GRAY)
    grayR = cv.cvtColor(imgR, cv.COLOR_BGR2GRAY)

    # Find the chess board corners
    retL, cornersL = cv.findChessboardCorners(grayL, chessboardSize, None)
    retR, cornersR = cv.findChessboardCorners(grayR, chessboardSize, None)
...
############## CALIBRATION ####################################################
# Individual camera calibration using ALL available images for each camera
retL, cameraMatrixL, distL, rvecsL, tvecsL = cv.calibrateCamera(objpoints_left, imgpoints_left, frameSize, None, None)
heightL, widthL, channelsL = imgL.shape
newCameraMatrixL, roi_L = cv.getOptimalNewCameraMatrix(cameraMatrixL, distL, (widthL, heightL), 0, (widthL, heightL))
retR, cameraMatrixR, distR, rvecsR, tvecsR = cv.calibrateCamera(objpoints_right, imgpoints_right, frameSize, None, None)
heightR, widthR, channelsR = imgR.shape
newCameraMatrixR, roi_R = cv.getOptimalNewCameraMatrix(cameraMatrixR, distR, (widthR, heightR), 0, (widthR, heightR))
print(f'Reprojection error (Left): {retL}')
print(f'Reprojection error (Right): {retR}')
...
########## Stereo Vision Calibration #############################################
flags = 0
flags |= cv.CALIB_FIX_INTRINSIC
flags |= cv.CALIB_FIX_FOCAL_LENGTH
# fix the intrinsic camera matrices so that only Rot, Trns, Emat and Fmat are calculated
criteria_stereo = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
retStereo, newCameraMatrixL, distL, newCameraMatrixR, distR, rot, trans, essentialMatrix, fundamentalMatrix = cv.stereoCalibrate(objpoints, imgpointsL, imgpointsR, newCameraMatrixL, distL, newCameraMatrixR, distR, grayL.shape[::-1], criteria_stereo, flags)
...
########## Stereo Rectification ##############################################
rectifyScale = 1
rectL, rectR, projMatrixL, projMatrixR, Q, roi_L, roi_R = cv.stereoRectify(newCameraMatrixL, distL, newCameraMatrixR, distR, grayL.shape[::-1], rot, trans, rectifyScale, (0,0))
stereoMapL = cv.initUndistortRectifyMap(newCameraMatrixL, distL, rectL, projMatrixL, grayL.shape[::-1], cv.CV_16SC2)
stereoMapR = cv.initUndistortRectifyMap(newCameraMatrixR, distR, rectR, projMatrixR, grayR.shape[::-1], cv.CV_16SC2)
print("Saving parameters!")
cv_file = cv.FileStorage(args.output, cv.FILE_STORAGE_WRITE)
cv_file.write('stereoMapL_x',stereoMapL[0])
cv_file.write('stereoMapL_y',stereoMapL[1])
cv_file.write('stereoMapR_x',stereoMapR[0])
cv_file.write('stereoMapR_y',stereoMapR[1])
cv_file.write('Q', Q)
cv_file.release()
Code file 2:
for i, (imgLeft, imgRight) in enumerate(zip(imagesLeft, imagesRight)):
    # Load images
    imgL = cv.imread(imgLeft)
    imgR = cv.imread(imgRight)

    # Rectify and convert to grayscale
    rectified_L = cv.remap(imgL, stereoMapL_x, stereoMapL_y, cv.INTER_LANCZOS4)
    rectified_R = cv.remap(imgR, stereoMapR_x, stereoMapR_y, cv.INTER_LANCZOS4)
    grayL = cv.cvtColor(rectified_L, cv.COLOR_BGR2GRAY)
    grayR = cv.cvtColor(rectified_R, cv.COLOR_BGR2GRAY)

    # Step 1: Extract chessboard corner locations in both images
    retL, cornersL = cv.findChessboardCorners(grayL, chessboardSize, None)
    retR, cornersR = cv.findChessboardCorners(grayR, chessboardSize, None)

    # Refine corner locations
    cornersL = cv.cornerSubPix(grayL, cornersL, (11,11), (-1,-1), criteria)
    cornersR = cv.cornerSubPix(grayR, cornersR, (11,11), (-1,-1), criteria)
    cornersL_rect = cornersL[:, 0, :]
    cornersR_rect = cornersR[:, 0, :]

    # Step 2: Convert corner correspondences to 3D points using stereo triangulation
    # Calculate disparities for each corner pair
    disparities = []
    points_3d = []
    for (xl, yl), (xr, yr) in zip(cornersL_rect, cornersR_rect):
        # Disparity is the difference in x-coordinates (for rectified images)
        disparity = xl - xr
        if disparity > 0:  # Valid disparity
            # Use Q matrix to convert to 3D coordinates
            # Q matrix transforms [u, v, disparity, 1] to [X, Y, Z, W]
            point_2d = np.array([xl, yl, disparity, 1.0])
            point_3d_h = Q.dot(point_2d)
            if point_3d_h[3] != 0:  # Valid homogeneous coordinate
                point_3d = point_3d_h[:3] / point_3d_h[3]
                points_3d.append(point_3d)
                disparities.append(disparity)
    points_3d = np.array(points_3d)

    # Step 3a: Fit 3D points to a plane and calculate residuals
    x_vals = points_3d[:, 0]
    y_vals = points_3d[:, 1]
    z_vals = points_3d[:, 2]

    # Fit plane using least squares: z = ax + by + c
    A = np.column_stack((x_vals, y_vals, np.ones(len(x_vals))))
    C, residuals, _, _ = lstsq(A, z_vals)

    # Calculate individual residuals
    z_predicted = C[0]*x_vals + C[1]*y_vals + C[2]
    individual_residuals = z_vals - z_predicted
Img 1: visualization of the checkerboard corners vs. the fitted plane. Notice the bulge along the middle and the dips at the edges. Exaggerated on a different scale it looks almost parabolic. All images exhibit this exact same curvature, with slightly different distance measurements, usually peaking at 4-8 mm from the plane.
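In case it helps, here is a rough sketch of how a residual plot like Img 1 can be produced. This is illustrative only: the data below is a random placeholder standing in for points_3d and individual_residuals from Code file 2, and the residual units follow whatever the object points were specified in.

import numpy as np
import matplotlib.pyplot as plt

# Placeholder data standing in for the values computed in Code file 2
rng = np.random.default_rng(0)
points_3d = rng.uniform(-100, 100, size=(14 * 9, 3))        # corner coordinates
individual_residuals = rng.normal(0.0, 2.0, size=14 * 9)    # plane-fit residuals

# Scatter the corner positions and colour each corner by its signed
# distance from the fitted plane
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
sc = ax.scatter(points_3d[:, 0], points_3d[:, 1], individual_residuals,
                c=individual_residuals, cmap='coolwarm')
fig.colorbar(sc, label='residual from fitted plane')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('residual')
plt.show()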
