Wrong projection matrix from stereoRectify on vertical stereo (cy is negative and far outside the image)

Hi everyone,

I have a problem trying to calibrate two webcams in a vertical setup for depth estimation.
OpenCV version: 4.7.0
The code is, roughly:

criteria_stereo = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, error1)
flags = (cv2.CALIB_FIX_K3 + cv2.CALIB_FIX_K4 + cv2.CALIB_FIX_K5)

retL, mtxL, distL, rvecsL, tvecsL = cv2.calibrateCamera(obj_ptsL, img_ptsL, (w1,h1), None, None)
retR, mtxR, distR, rvecsR, tvecsR = cv2.calibrateCamera(obj_ptsR,img_ptsR,(w2,h2),None,None)

retS, mtxL, distL, mtxR, distR, Rot, Trns, Emat, Fmat = cv2.stereoCalibrate(obj_pts, img_ptsL, img_ptsR, mtxL, distL, mtxR, distR, (w1, h1), criteria_stereo, flags)

rect_l, rect_r, proj_mat_l, proj_mat_r, Q, roiL, roiR = cv2.stereoRectify(mtxL, distL, mtxR,  distR, (w1,h1),  Rot, Trns, flags = cv2.CALIB_ZERO_DISPARITY, alpha = 1, newImageSize = (0,0))

Left_Stereo_Map = cv2.initUndistortRectifyMap(mtxL, distL, rect_l, proj_mat_l, new_shape, cv2.CV_16SC2)  # new_shape should be (w1, h1), matching newImageSize=(0,0) above

Right_Stereo_Map = cv2.initUndistortRectifyMap(mtxR, distR, rect_r, proj_mat_r, new_shape, cv2.CV_16SC2)

I tried different sizes and alpha values for stereoRectify, but none works; the rectified image appears shifted upward, out of the image bounds.

The only workaround I found is to manually change the cy values in the projection matrices before initUndistortRectifyMap, but I don't know how it affects my disparity map:

proj_mat_l[1,2] = 0
proj_mat_r[1,2] = 0
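Regarding "how does it affect my disparity map": shifting cy by the same amount in both rectified projection matrices only translates both views vertically, so the disparity along the baseline axis is unchanged (the Q matrix would still hold the old cy, though, so reprojection to 3D would need the same fix). A numpy sketch using the projection matrices posted further down in this thread:

```python
import numpy as np

def project_y(P, X):
    """Row coordinate of world point X under a 3x4 projection P."""
    x = P @ np.append(X, 1.0)
    return x[1] / x[2]

# Rectified projection matrices as posted (vertical rig: baseline term in P_r[1,3]).
P_l = np.array([[699.68971, 0.0, 355.12427, 0.0],
                [0.0, 699.68971, -2455.91861, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
P_r = P_l.copy()
P_r[1, 3] = -12789.02549

X = np.array([10.0, 50.0, 400.0])  # arbitrary point in the rectified frame
d0 = project_y(P_l, X) - project_y(P_r, X)

# Apply the workaround: shift cy identically in both matrices.
P_l2, P_r2 = P_l.copy(), P_r.copy()
P_l2[1, 2] = 0.0
P_r2[1, 2] = 0.0
d1 = project_y(P_l2, X) - project_y(P_r2, X)

print(d0, d1)  # identical values: a common cy shift cancels out in the disparity
```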

Does anyone know where the problem is?

show your calibration pictures please

Thanks for your answer. I can't share the calibration images (there are too many). I used about 160 pictures of an A4 7x4 chessboard (3.5 cm per square), at different angles and orientations, and every picture is detected correctly; I verified each one.
I repeated this process 5 or 6 times, always with the same problem. The calibration error is under 0.4, and I also verified that epipolar lines match corresponding points across the two cameras after the stereo calibration.

cx,cy should be in the center of the view, not 0

please show your calibration results. I am interested in the camera matrices and distortion coefficients, also extrinsics.

I never trust people's assessment of their own calibration pictures; often that assessment is precisely where the issue happens. Without seeing the data, everything is still potentially questionable.

please test your code with known good calibration pictures. there surely are datasets out there. OpenCV itself has a dataset somewhere.

You're right, and that's the purpose of posting here: getting a different view, haha.

camera1 matrix :

array([[541.22819,   0.     , 313.1284 ],
       [  0.     , 540.71178, 244.25095],
       [  0.     ,   0.     ,   1.     ]])

camera1 distortion coefs

[0.13057017307967542, -0.4048554447467629, -0.0037518861468601463, 0.001841690830047624, 0.3715917820016595]

camera2 matrix:

array([[858.15124,   0.     , 340.9264 ],
       [  0.     , 860.16898, 238.02976],
       [  0.     ,   0.     ,   1.     ]])

camera2 dist coefs:

[-0.32350365514281226, 0.2843528394453492, -0.012729778001825589, 0.0004327555109636897, 0.5245621603562647]

all of that looks reasonable and I don’t see a need to fix up anything about that.
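As an aside, the "cx, cy should be near the image center" rule from earlier in the thread is easy to check mechanically (numpy sketch; the 25% tolerance is an arbitrary choice):

```python
import numpy as np

def principal_point_ok(K, image_size, tol=0.25):
    """True if (cx, cy) lies within tol * dimension of the image center."""
    w, h = image_size
    cx, cy = K[0, 2], K[1, 2]
    return abs(cx - w / 2) < tol * w and abs(cy - h / 2) < tol * h

# camera1 matrix from this thread, captured at 640x480:
K1 = np.array([[541.22819, 0.0, 313.1284],
               [0.0, 540.71178, 244.25095],
               [0.0, 0.0, 1.0]])
print(principal_point_ok(K1, (640, 480)))    # True: (313, 244) is near (320, 240)

# whereas a rectified cy of -2455 obviously fails:
K_bad = K1.copy()
K_bad[1, 2] = -2455.91861
print(principal_point_ok(K_bad, (640, 480)))  # False
```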

they have different focal lengths. different camera models?

both look like they run in 640x480 mode (OpenCV default). is that intentional?

Yes, they're different models; I took what I had at home for a personal project. I use 640x480 because these are cheap webcams and the framerate drops too much at higher resolutions. But the problem is that if I don't edit the cy value, the rectified view with alpha = 1 falls outside the image. With alpha = -1 I lose the left part of the image, and with alpha = 0 the result is completely zoomed in, so I have to use a 6000 px image and still hit the same problem (the left half of the image is lost).
With the workaround from my first message I get the full undistorted image, and for now that's the best I've achieved.

I’m gonna need to see the extrinsics too

Thanks for the time you're spending on this.
For the extrinsics, Rot 3x3:

array([[ 0.99895,  0.04161,  0.01933],
       [-0.04048,  0.99763, -0.0556 ],
       [-0.0216 ,  0.05476,  0.99827]])

Trns 3x1:

array([ -1.27011,  -8.31156, -16.22945])

Emat 3x3:

array([[ -0.47738,  15.73588,  -9.19952],
       [-16.2398 ,  -0.6057 ,   0.95417],
       [  8.35422,  -0.92129,   0.23129]])

Fmat 3x3:

array([[ 1.35847e-06, -4.48223e-05,  2.46914e-02],
       [ 4.61051e-05,  1.72123e-06, -1.63234e-02],
       [-3.18388e-02,  1.71234e-02,  1.00000e+00]])

projection mat left 3x4:

array([[  699.68971,     0.     ,   355.12427,     0.     ],
       [    0.     ,   699.68971, -2455.91861,     0.     ],
       [    0.     ,     0.     ,     1.     ,     0.     ]])

projection mat right 3x4:

array([[   699.68971,      0.     ,    355.12427,      0.     ],
       [     0.     ,    699.68971,  -2455.91861, -12789.02549],
       [     0.     ,      0.     ,      1.     ,      0.     ]])

I think that the problem is in the projection matrices, at the [1,2] entry => cy = -2455.

your translation vector looks bad. for a regular stereo setup, I expect translation along one axis only.
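That expectation is easy to check numerically. Using the T posted above, the dominant component turns out to be z rather than y, which is inconsistent with a vertical rig (numpy sketch):

```python
import numpy as np

T = np.array([-1.27011, -8.31156, -16.22945])  # Trns from stereoCalibrate above

frac = np.abs(T) / np.linalg.norm(T)  # each component's share of the baseline
axis = "xyz"[int(np.argmax(frac))]
print(axis, frac.round(3))  # 'z' dominates; a vertical rig should be mostly 'y'
```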

can’t say much about E but F looks stomped on, ill-conditioned.
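"Ill-conditioned" can be quantified with singular values: a valid fundamental matrix has rank 2, so the smallest singular value should be near zero relative to the largest (numpy sketch, using the F posted above):

```python
import numpy as np

# Fmat from this thread (full-precision values as posted).
F = np.array([[ 1.3584663111280490e-06, -4.4822307605912237e-05,  2.4691355117830823e-02],
              [ 4.6105113682846909e-05,  1.7212276975740174e-06, -1.6323366812429723e-02],
              [-3.1838843962498600e-02,  1.7123370053808561e-02,  1.0]])

s = np.linalg.svd(F, compute_uv=False)  # singular values, in descending order
print(s, s[2] / s[0])  # s[2]/s[0] should be ~0 for a healthy rank-2 F
```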

those projection matrices also don’t fill me with confidence.

you really need to run this with a known good calibration picture set, or present yours. it’s a waste of time to read tea leaves from these results if there’s something fundamentally wrong with the inputs or the code.


I'll try to download a vertical stereo dataset and test on it. If I can't figure it out, I'll post the calibration frames to a cloud drive. Thanks a lot 🙂