I'm currently working with the Middlebury Stereo Datasets 2005 to generate disparity maps. Only the Art, Dolls and Reindeer sets are used, and the requirement is to generate the disparity map for each set from view1.png and view5.png alone.
I've tried computing the disparity map directly with both cv2.StereoSGBM and cv2.StereoBM, but neither gave a satisfying result. Here is the output for Art with my StereoSGBM code:
Besides StereoSGBM and StereoBM, I have also seen people use ORB/SIFT feature matching to estimate a transform for cv2.warpPerspective before computing the disparity map. In my attempt, however, the warp fails and produces a badly distorted image.
After generating disparity maps with the methods above, I also implemented the WLS filter, but I am confused about the lambda and sigma values. I would like to ask how I should apply these methods to improve the output.
I would also like to know whether there is a correct way to combine ORB or SIFT with cv2.warpPerspective to get a better result. This is the code I use to compute the PSNR between the original disp1 and the disparity map I generated:
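For reference, a minimal PSNR computation in plain NumPy looks like the following; the function name is mine, and note that OpenCV also ships an equivalent cv2.PSNR:

```python
import numpy as np

def psnr(ref, est, max_val=255.0):
    """Peak signal-to-noise ratio between two same-shape images, in dB.

    ref and est are assumed to be on a 0..max_val scale, e.g. the
    ground-truth disp1.png and the estimated disparity map.
    """
    err = ref.astype(np.float64) - est.astype(np.float64)
    mse = np.mean(err ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

One caveat, as an assumption worth checking: in the Middlebury 2005 ground truth, zero-valued pixels mark unknown disparity, so they are usually excluded with a mask before the comparison, and the estimated map should be scaled to the same units as disp1.png.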