I’m currently working on creating a disparity map using OpenCV and Python. Both of my individual camera calibrations return an RMS error under 0.15, and my stereo calibration returns an RMS error under 0.2; both of these seem good. The translation vector that stereoCalibrate returns gives the correct distance between the cameras, and the other values look reasonable too. It seems like I should be getting a good disparity map, but most of the time it’s seemingly random, even after tuning the parameters of StereoBM. Here is the disparity map I get. This is using StereoBM, but StereoSGBM isn’t much better.
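In case the pipeline itself matters, here is a condensed sketch of the rectification-and-disparity step I’m doing (the file names and the `stereo_calib.npz` file are placeholders; the actual matrices come from my calibrateCamera/stereoCalibrate output, and the StereoBM numbers below are just one of the settings I’ve tried):

```python
import cv2
import numpy as np

# Placeholder: K1/D1, K2/D2, R, T saved earlier from calibrateCamera/stereoCalibrate.
calib = np.load("stereo_calib.npz")
K1, D1, K2, D2 = calib["K1"], calib["D1"], calib["K2"], calib["D2"]
R, T = calib["R"], calib["T"]

imgL = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
h, w = imgL.shape

# Rectify both views so epipolar lines become horizontal scanlines.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, D1, K2, D2, (w, h), R, T, alpha=0)
mapLx, mapLy = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
mapRx, mapRy = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
rectL = cv2.remap(imgL, mapLx, mapLy, cv2.INTER_LINEAR)
rectR = cv2.remap(imgR, mapRx, mapRy, cv2.INTER_LINEAR)

# numDisparities must be a multiple of 16; blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=96, blockSize=15)
disparity = stereo.compute(rectL, rectR).astype(np.float32) / 16.0  # output is fixed-point
```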
I’m using cameras that are hardware-synchronized. Most of my code is copied from this OpenCV tutorial, and my stereoCalibrate was taken from this article. I’m using a printed chessboard with 20mm x 20mm squares on 11x9 paper, and I’m making sure to take photos at multiple angles that cover the entire field of view. I’m downsizing my images from 1920x1080 to 960x540. My code is a bit long, but I can post it if necessary.
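To keep this short, here is a condensed version of my per-camera calibration step rather than the full script (the 9x6 inner-corner count and the glob path are placeholders for my actual board and folders):

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners, placeholder for my board
SQUARE_MM = 20.0   # my printed squares are 20mm x 20mm

# 3D board points in board coordinates, scaled by the square size.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

objpoints, imgpoints = [], []
for path in glob.glob("left/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.resize(gray, (960, 540))  # downsized from 1920x1080
    found, corners = cv2.findChessboardCorners(gray, PATTERN, None)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    objpoints.append(objp)
    imgpoints.append(corners)

rms, K, D, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("per-camera RMS:", rms)  # this is where I see < 0.15
```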
I’ve also drawn epipolar lines on the images, and they mostly turn out OK, but there are some lines that don’t make sense. Feature matching is also good overall, but it grabs a couple of points that are unrelated. Both show some odd anomalies and aren’t fully accurate. Is this an issue with my calibration, or is it possible that the images I’m taking are difficult for these algorithms to process?
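For reference, this is roughly how I produce the epilines and matches, adapted from the OpenCV epipolar-geometry tutorial (file names are placeholders):

```python
import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Detect and match features between the two views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe's ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC should reject the unrelated matches, but a few outliers still slip through.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
pts1 = pts1[mask.ravel() == 1]
pts2 = pts2[mask.ravel() == 1]

# Epilines in the left image corresponding to points in the right image.
lines1 = cv2.computeCorrespondEpilines(pts2.reshape(-1, 1, 2), 2, F).reshape(-1, 3)
```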
This has been frustrating, because it seems like I’m doing everything right and my code should be fine, but I’m still not getting good results. Am I falling into some common mistake? I looked through my camera documentation, and I don’t believe the cameras are automatically adjusting anything (such as exposure or focus) that would affect a calibration. Any help is greatly appreciated, as this has been causing me issues for a while.