# Stereo Depth Perception Gone Wrong

I’ve built a system with two cameras (not webcams; something with a proper mounting plate). They are mounted on a 3D-printed plate that keeps them parallel in yaw and on a single plane in roll. Since the plate has known dimensions, I know the exact distance between the cameras. The goal is to get the distance (x, y, z components) to a known object in a scene. I calibrated both cameras separately with a chessboard pattern to get each camera matrix.
Here is an overview of the steps I take:

1. Grab a frame from both cameras and remove distortion.
2. Run YOLO on the left image to get a bounding box of the object in the scene.
3. Create a template from the previously calculated bounding box and run template matching on the right image.
4. Triangulate using a single pixel (the box center in the left image and the matched area center in the right) with the following formula:
Z_component = distance_between_cams_mm * horizontal_focal_length_from_camera_matrix /
(offset_from_the_optical_center_in_left_image - offset_from_the_optical_center_in_right_image)
Question: horizontal_focal_length_from_camera_matrix <— which camera’s should I use, left or right??? They have the same lenses, but the focal lengths from calibration are slightly different.
5. Now that we have the distance between the object plane and the camera plane, we can calculate the horizontal and vertical distance components relative to the left camera:
// X = offset_from_center_x * Z_component / focal_x
// Y = offset_from_center_y * Z_component / focal_y

After all this work the accuracy isn’t there: ±30 cm with an object 2 meters away, and I’m looking to measure distances to objects 8–10 meters away. Things need to improve.
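For calibrating expectations: stereo depth error grows quadratically with distance, following the standard error model dZ ≈ Z² / (B · fx) · d_disp, where d_disp is the disparity matching uncertainty in pixels. A quick back-of-the-envelope check (the baseline, focal length, and half-pixel matching error below are assumed example values, not yours):

```python
def depth_error_mm(Z_mm, baseline_mm, fx_px, disp_err_px=0.5):
    """Expected depth error from the standard stereo error model:
    dZ ~= Z^2 / (B * fx) * d_disp."""
    return Z_mm ** 2 / (baseline_mm * fx_px) * disp_err_px

# Assumed example rig: B = 120 mm, fx = 1000 px, half-pixel matching error.
for Z_m in (2, 8, 10):
    print(f"{Z_m} m -> {depth_error_mm(Z_m * 1000, 120.0, 1000.0):.0f} mm error")
```

With these numbers the model predicts centimeter-level error at 2 m but tens of centimeters at 8–10 m, which is why long-range rigs use wide baselines, long focal lengths, and sub-pixel matching. A ±30 cm error at 2 m is far above what the model predicts, so there is likely a systematic problem (miscalibration or residual camera misalignment) rather than just matching noise.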
Now the questions:
1. How accurate do these systems get from your experience?
2. Is there anything obvious that I might be doing wrong? Any steps that I’m missing? I found a stereo calibrate function in the OpenCV API; the docs say it computes the transformation from the first camera to the second, and I’m not sure why I would need that.

I know this might be a vague question, but I have no one else to ask besides this forum.

PS: Please don’t point me to the Disparity Calculation tutorial in the OpenCV docs; it shows a trivial sample image and does not go into detail on how to tune a disparity map. Besides, I don’t need the depth of everything, just a single object in the scene.

Hi,
A book: *Multiple View Geometry in Computer Vision* by Hartley and Zisserman.