Getting a bad disparity map with good calibration

I’m currently working on creating a disparity map using OpenCV and Python. Both of my individual camera calibrations return an RMS error under 0.15, and my stereoCalibrate returns an RMS error under 0.2; both of these seem good. The translation vector that stereoCalibrate returns gives the correct distance between the cameras, and the other values are reasonable too. It seems like I should be getting a good disparity map, but most of the time it’s seemingly random, even after tuning the parameters of StereoBM. Here is the disparity map I get. This is using StereoBM, but StereoSGBM isn’t much better.

Disparity Map:
dispMap

I’m using cameras that are hardware-synchronized. Most of my code is copied from this OpenCV tutorial, and my stereoCalibrate usage was taken from this article. I’m using a printed chessboard with 20mm x 20mm squares on 11x9 paper, and I’m making sure to take photos at multiple angles that cover the entire FOV. I’m downsizing my images from 1920x1080 to 960x540. My code is a bit long, but I can post it if necessary.

I’ve also drawn epilines on the images, and they turn out OK, but with some lines that don’t make sense; feature matching is also good, but it grabs a couple of points that are unrelated. Both have some weird anomalies and aren’t fully accurate. Is this an issue with my calibration, or is it possible that the images I’m taking are difficult for these algorithms to process?

This has frustrated me, as it seems like I’m doing everything right and my code should be all good, but I’m still not getting good results. Am I falling into some sort of common mistake? I looked into my camera documentation, and I don’t believe the cameras are automatically adjusting anything that would affect calibration. Any help is greatly appreciated, as this has been causing me issues for a while.

vary your parameters… AT RUNTIME. create a few trackbars.

important: minDisparity and numDisparities, or whatever they’re called. the range of disparity values that is searched.

oh and maybe don’t just post the failed result but EVERYTHING that led up to it.

I’m currently using a custom GUI to try to get the best disparity map possible at runtime, which includes numDisparities, blockSize, preFilterType, preFilterSize, preFilterCap, textureThreshold, uniquenessRatio, speckleRange, speckleWindowSize, disp12MaxDiff, and minDisparity. Are there other parameters I should be changing at runtime? I believe I covered them all.

I have included everything leading up to the failure, including the code I’ve based my code on, my process for detecting chessboard corners, what resolution my images are at, the RMS error of individual camera calibration as well as for stereo calibration, and how I’m actually generating my disparity maps. What other aspects of disparity map creation would be helpful?

If you would like I can post my code, although some of it is abstracted to classes and shared between functions so I figured it would be more user friendly to simply include the sources from which I got my code.

Thanks!

Skippy

prose doesn’t help. prose isn’t executable.

all you’ve posted so far is the output of whatever you are doing.

you haven’t shown what you do, or most importantly, what the inputs (pictures) are (source and undistorted), specifically for that disparity map.

Here is my left undistorted image and my right undistorted image for that specific disparity map. And here is a distorted image from the right camera; I can include the distorted image from the left camera in another reply, as this forum only allows me one embedded photo per post.

R45

Hopefully there isn’t too much prose in this.

are you sure the “left” image is for the left camera? it looks like it’s for the right camera.

switch them. try that.

with samples/python/stereo_match.py (slightly modified) I get

that looks reasonable, if not perfect. I didn’t bother touching the parameters. they do need adjusting to get those white (near) spots and the general blobbiness resolved.

I got enough info for now to see what might be going on. if you need to show multiple pictures, an imgur album is just fine, even if it’s an off-forum resource.

You’re completely right that I had them mixed up. By swapping those two old images, I got a disparity map similar to yours, and any new disparity maps are of similar quality. I feel silly for not trying this earlier, but I appreciate all your help! Thank you!

Hi,

I attempted to recreate the disparity map from your sample undistorted images, without cropping out the black edges. However, my results are rather abysmal compared to yours:


Any idea why? My code is as follows:
import cv2 as cv
from matplotlib import pyplot as plt

dstL = cv.imread(testpathL, 0)  # 0 = load as grayscale
dstR = cv.imread(testpathR, 0)
stereo = cv.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(dstL, dstR)  # fixed-point, scaled by 16
plt.imshow(disparity, 'gray')
plt.show()

I’m desperate for help, as I have been stuck on this for an entire day :frowning:

StereoBM: that’s the issue. wrong algorithm.

and numDisparities=16, likely too. not enough.

Is StereoSGBM the appropriate algorithm? My understanding is that it is more computationally expensive but more “complete”; however, StereoBM was used in the Python tutorial mentioned by the OP here.

The same question applies to numDisparities: could you suggest an appropriate multiple of 16?

Hi @yergrandpa, I am having the same issue. Did you manage to figure it out?