stereoCalibrate returns incorrect translation vector values

I am using the green screen example from the Microsoft Azure Kinect Sensor SDK as the basis for a project that creates a 3D point cloud from multiple cameras. However, when I examine the calibration results from the green screen code linked above, the translation vector returned by `cv::stereoCalibrate` appears to be incorrect: it contains enormous values, for example [301.243, 10.1133, 126.121]. The cameras I'm using are practically right next to each other, so these values are far too large. Does anyone know why this is? Might there be a bug in the green screen example code?

EDIT: I now realize the units used in the green screen example are millimetres. However, the results still seem incorrect: when I use them in Open3D and divide by 1000 so the unit is metres, the translation is still wrong. I measured the chessboard square size correctly.
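For reference, this is roughly how I apply the conversion before handing the extrinsic to Open3D (a minimal sketch; `R` and `t_mm` are placeholder values standing in for the actual `cv::stereoCalibrate` outputs, and the assumption is that the translation really is in millimetres):

```python
import numpy as np

# Placeholder stereoCalibrate outputs: R is the 3x3 rotation between the
# cameras, t_mm the translation, assumed to be in millimetres.
R = np.eye(3)
t_mm = np.array([301.243, 10.1133, 126.121])

# My Azure Kinect point clouds are in metres, so convert mm -> m.
t_m = t_mm / 1000.0

# Open3D takes the extrinsic as a 4x4 homogeneous matrix.
extrinsic = np.eye(4)
extrinsic[:3, :3] = R
extrinsic[:3, 3] = t_m
print(extrinsic)
```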

could those be millimeters?

or maybe your calibration data is insufficient. if you don’t think that it is, it still could be insufficient. that is a common issue, thinking the data is good, when it is not.

just… present the entire dataset you used for calibration. or any dataset that reproduces the problem.

Thanks for replying.

Like I wrote in the edit, it is indeed most likely in mm; I now compensate for this by dividing by 1000 in my Python script. However, the result is still not as desired. I cannot provide the dataset used for calibration because there is none: as the green screen example code shows, the images are captured directly while the code runs. So the calibration is done 'live'; I just hold the chessboard up to the cameras and it takes 20 frames. This gives me a reprojection error of about 0.2, which should be good. I have more details about this issue in my Stack Overflow post.

low error is necessary but not sufficient. it does not capture the quality of the calibration data.
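one rough check you can automate is corner coverage: pooled over all frames, the detected chessboard corners should cover the whole image, including the borders where distortion matters most. a sketch (the corner sets below are made-up stand-ins for `findChessboardCorners` output):

```python
import numpy as np

def corner_coverage(corner_sets, image_size, grid=(8, 6)):
    """Fraction of a coarse grid over the image that contains at least
    one detected corner, pooled over all calibration frames."""
    w, h = image_size
    hit = np.zeros(grid, dtype=bool)
    for corners in corner_sets:
        gx = np.clip((corners[:, 0] / w * grid[0]).astype(int), 0, grid[0] - 1)
        gy = np.clip((corners[:, 1] / h * grid[1]).astype(int), 0, grid[1] - 1)
        hit[gx, gy] = True
    return hit.mean()

# Made-up example: 20 frames, board always held near the image centre.
rng = np.random.default_rng(0)
centre_only = [rng.uniform([500, 300], [780, 420], size=(35, 2))
               for _ in range(20)]
print(corner_coverage(centre_only, (1280, 720)))  # well below 1.0
```

a coverage well below 1.0 means whole regions of the sensor were never constrained by the data, no matter how low the reprojection error is.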

Ah, I wasn’t aware of that. Do you have any advice on how to proceed? Or on how to verify the quality of the calibration data?

Update: I changed the code to capture 50 images for the calibration (instead of 20), and it now pauses after every capture so the chessboard can be moved between captures. This way I took 50 images of the chessboard at different angles and distances. But to no avail: when I use this data in my Python script the point clouds still do not align. I also updated the Python script to divide the values in the translation vector by 2.4, because of the chessboard square size (24 mm in my case). This does help, but the result is still nowhere near what it should be. In case anyone would like the full code, I include it here; the repos have READMEs that explain the code:
C++ code for calibrating the cameras and generating the color and depth images,
Python code for generating the point cloud(s).
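To make the unit question concrete, this is the scaling logic I am relying on (a sketch with assumed values; the key assumption is which units the object points were given in when calling `stereoCalibrate` — the translation comes back in those same units):

```python
# stereoCalibrate expresses the translation in whatever units the chessboard
# object points were specified in. Two common cases for a 24 mm square board:
SQUARE_SIZE_MM = 24.0

def translation_to_metres(t, object_points_unit):
    """Convert one stereoCalibrate translation component to metres.
    object_points_unit: 'mm' if corners were given as k * 24.0,
    'squares' if they were given as plain integer grid indices."""
    if object_points_unit == "mm":
        return t / 1000.0
    elif object_points_unit == "squares":
        return t * SQUARE_SIZE_MM / 1000.0
    raise ValueError(object_points_unit)

print(translation_to_metres(301.243, "mm"))     # 0.301243 m
print(translation_to_metres(12.55, "squares"))  # ~0.3012 m
```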