Unable to rectify stereo cameras correctly

You’re right, it does seem to happen to a lot of people. But I found a big improvement by performing the calibration with a much larger pattern: I used the OpenCV “gen_pattern.py” tool to generate a chessboard with 28 rows and 17 columns, using roughly the command below.
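(The exact flag names may vary between OpenCV versions, so it’s worth checking the script’s --help; the square size here is just an example.)

```
python gen_pattern.py -o chessboard_28x17.svg --type checkerboard --rows 28 --columns 17 --square_size 20
```

Even though I only calibrated with 22 pairs of images, my result is now the following: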

Of course, it is still warped at the corners, but it doesn’t zoom in as much as it used to, so I guess I need to keep trying. I suspect that a larger pattern provides a greater number of reference points, which helps a lot, but it also seems to require higher-resolution cameras (mine are 1920 x 1080): when I tried a 41 x 25 pattern, the corners of the squares were not detected at all. After this I will try to get 3840 x 2880 cameras to see whether the calibration really improves.
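In case it’s useful to anyone, this is roughly how I check whether a board’s corners are detectable at a given resolution before running the full calibration (the filename is just a placeholder, and I’m assuming the plain cv.findChessboardCorners detector):

```python
import cv2 as cv

# Quick detection check. For a 28 x 17 chessboard (counted in squares),
# the inner-corner grid is 27 x 16; findChessboardCorners expects
# (points_per_row, points_per_column).
img = cv.imread("left_01.png", cv.IMREAD_GRAYSCALE)  # placeholder filename
pattern_size = (16, 27)
found, corners = cv.findChessboardCorners(
    img, pattern_size,
    flags=cv.CALIB_CB_ADAPTIVE_THRESH | cv.CALIB_CB_NORMALIZE_IMAGE)
print(found, None if corners is None else corners.shape)
```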

I also read somewhere that circular patterns provide greater precision when calibrating.
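I haven’t tried that yet, but if I understand correctly, only the detection step would change, to something like this (the (4, 11) grid size matches OpenCV’s sample asymmetric circles pattern; a custom grid from gen_pattern.py would have its own dimensions):

```python
import cv2 as cv

# Sketch only: detect an asymmetric circle grid instead of a chessboard.
img = cv.imread("left_01.png", cv.IMREAD_GRAYSCALE)  # placeholder filename
found, centers = cv.findCirclesGrid(img, (4, 11),
                                    flags=cv.CALIB_CB_ASYMMETRIC_GRID)
print(found)
```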

One last question I would like to clear up: since, as the image shows, corresponding points now fall on the same rows in both the left and right cameras, if I want to measure the depth of an object, should I just use the formula

Z = (f * b) / disparity

where “f” and “b” are taken from the Q matrix returned by cv.stereoRectify(), and the disparity is the difference between the centers of the bounding boxes of my detection in the two images?
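To make sure I’m reading Q correctly, this is the small sketch I have in mind (the values of f, cx, cy and the baseline below are made up just so the snippet runs; the real Q comes from cv.stereoRectify(), and the signs depend on the rig’s convention):

```python
import numpy as np
import cv2 as cv

# In the usual form of Q from cv.stereoRectify(), Q[2, 3] is the rectified
# focal length f (in pixels) and Q[3, 2] is -1/Tx, where Tx is the baseline
# in the units used for calibration (e.g. mm). For a typical left-right rig
# Tx is negative, so that entry ends up positive.
# Hypothetical values, only so the example is runnable:
f, cx, cy, baseline = 1000.0, 960.0, 540.0, 60.0
Q = np.array([[1, 0, 0, -cx],
              [0, 1, 0, -cy],
              [0, 0, 0,  f],
              [0, 0, 1.0 / baseline, 0]], dtype=np.float64)

def depth_from_disparity(Q, disparity_px):
    """Z = f * b / d, with f and b read from Q."""
    f = Q[2, 3]
    b = 1.0 / abs(Q[3, 2])
    return (f * b) / disparity_px

# Disparity = x_left - x_right of the matched point (e.g. the
# bounding-box centers) in the rectified images.
x_left, y, x_right = 812.0, 400.0, 760.0
d = x_left - x_right
print(depth_from_disparity(Q, d))   # depth in the same units as the baseline

# Cross-check against OpenCV's own reprojection through Q:
pt = np.array([[[x_left, y, d]]], dtype=np.float64)
X, Y, Z = cv.perspectiveTransform(pt, Q)[0, 0]
print(Z)                            # should agree with the value above
```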

Now, is it appropriate to take those parameters directly from the Q matrix, or is it better to hard-code them in according to the known parameters of the camera?

Lastly, thank you both for guiding and helping me. I hope this post will be useful to future readers.