Bad Disparity map with SGBM algorithm

Good afternoon. I am trying to obtain a good disparity map with the SGBM algorithm so I can approximate the depth of the objects in the image. The problem is that the disparity map comes out wrong: the closest objects appear black, some in the middle distance appear white, and the background is black again.

My code to compute the disparity map is as follows:

import cv2  # cv2.ximgproc requires the opencv-contrib-python build
import numpy as np


def depth_map(imgL, imgR):

    window_size = 2  # wsize default 3; 5; 7 for SGBM reduced size image; 15 for SGBM full size image (1300px and above); 5 Works nicely

    left_matcher = cv2.StereoSGBM_create(
        minDisparity=-1,
        numDisparities=16*16,  # max_disp has to be divisible by 16, e.g. 192, 256
        blockSize=window_size,
        P1=9 * 3 * window_size,
        # wsize default 3; 5; 7 for SGBM reduced size image; 15 for SGBM full size image (1300px and above); 5 Works nicely
        P2=128 * 3 * window_size,
        disp12MaxDiff=12,
        uniquenessRatio=40,
        speckleWindowSize=50,
        speckleRange=32,
        preFilterCap=63,
        mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY
    )
    right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)
    # FILTER Parameters
    lmbda = 70000
    sigma = 1.7
    visual_multiplier = 6

    wls_filter = cv2.ximgproc.createDisparityWLSFilter(matcher_left=left_matcher)
    wls_filter.setLambda(lmbda)
    wls_filter.setSigmaColor(sigma)

    displ = left_matcher.compute(imgL, imgR)  # .astype(np.float32)/16
    dispr = right_matcher.compute(imgR, imgL)  # .astype(np.float32)/16
    displ = np.int16(displ)
    dispr = np.int16(dispr)
    filteredImg = wls_filter.filter(displ, imgL, None, dispr)  # important to put "imgL" here!!!
    filteredImg = cv2.normalize(src=filteredImg, dst=filteredImg, beta=0, alpha=255, norm_type=cv2.NORM_MINMAX)
    filteredImg = np.uint8(filteredImg)

    return filteredImg

I pass my two rectified images to it, and it returns the normalized disparity map so I can view it.
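For reference, the call looks roughly like this (the image paths here are just placeholders for the sketch):

import cv2

# placeholder paths, only to illustrate the call
imgL = cv2.imread("rectified_left.png", cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread("rectified_right.png", cv2.IMREAD_GRAYSCALE)

disparity_vis = depth_map(imgL, imgR)          # normalized uint8 map, for display only
cv2.imshow("disparity", cv2.resize(disparity_vis, (1280, 720)))
cv2.waitKey(0)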

The problem is that, as can be seen in the following links, my disparity map is wrong. You can make out the objects in the image, but something is off.

The characteristics of my equipment are the following:

Two ELP 13 MP cameras without distortion; the resolution of each image I capture is (1944, 2592).

Distance between cameras (baseline) = 145 mm

The capture code dedicates a thread to each camera, so I get the two images with a time difference of < 11 ms.
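Roughly, the capture works like this (a simplified sketch; the real code also configures resolution, and the device indices are placeholders):

import threading
import cv2

class CameraThread(threading.Thread):
    """Keeps grabbing frames so both cameras can be read almost at the same time."""
    def __init__(self, device_index):
        super().__init__(daemon=True)
        self.cap = cv2.VideoCapture(device_index)
        self.lock = threading.Lock()
        self.frame = None

    def run(self):
        while True:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def latest(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

left_cam, right_cam = CameraThread(0), CameraThread(1)
left_cam.start(); right_cam.start()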

The cameras are screwed to a 3D-printed mount to make sure they don't move after the stereo calibration.

First, each camera was calibrated individually with a ChArUco board, obtaining a reprojection error < 0.25 for both cameras.

Then the pair was stereo-calibrated with a chessboard, obtaining a reprojection error of 0.179.

Attached are the following:

Disparity Image Normalized

Undistorted Left Image

Undistorted Right Image

I tried tweaking every parameter of the SGBM function, but the result stays more or less the same. I managed to get white to appear at the front of the image with "numDisparities=2*16", but even so there is a lot of mixing of colors, since the objects are far apart from each other. If anyone knows what I'm doing wrong, I'd really appreciate it!

What result did you get without the WLS filter? I don't have much experience with machine vision, but it seems to me that the results obtained with the WLS filter are sometimes better when the right matcher is adjusted manually.

I'd say those pictures aren't undistorted+rectified properly. they look like the optical axes aren't parallel yet, which is exactly what the rectification is supposed to take care of.

from these pictures, you’d get disparity in both directions (left and right). that’s… not what everything downstream assumes.

repeat your calibration. calibration is hard to get right. use charuco. always angle the board such that the board is not parallel to the image plane of either camera (not pointing at the camera, roughly).

and yes, leave that filter off for now. it obscures any problems.
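e.g. just look at the raw matcher output, scaled for display (minimal sketch, using the variable names from your function):

raw = left_matcher.compute(imgL, imgR).astype(np.float32) / 16.0   # SGBM output has 4 fractional bits
vis = cv2.normalize(raw, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow("raw disparity", vis)
cv2.waitKey(0)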

Good morning! Thanks for your answer. I will try to recalibrate with ChArUco again. The problem is that the reprojection error decreases when I calibrate with the board facing the camera head-on (parallel to the image plane). On the other hand, if I angle the board, the error goes above 1.5, and I understood it has to be < 1.0. I have tried ChArUco patterns of 5x5, 10x5 and 15x8, but no luck: if I tilt the board, the error goes up.

I don't know whether I should be guided solely by the reprojection error, and if not, what metric do you advise I follow to know whether I am calibrating correctly?

The only thing I haven't calibrated with ChArUco is the stereo pair; I will try attacking it from that side too and see if I get a different result.

On the other hand, I have removed the WLS filter and my result is this (clearly not right):

To better perceive the depth map visually, try normalizing it to 8UC1 and coloring it. It seems to me there is a lot of noise. Try this tuning approach: set the block size to the minimum of 3 and numDisparities to the maximum of 256, choose minDisparity so that the map shows the maximum amount of detail (I have it at about -125), possibly also drop uniquenessRatio to 1 or 0, and set preFilterCap = 63. For the speckle filter, pick a window size of, say, 256 and a range of around 32-64, and then apply smoothing using the P1 and P2 parameters.
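For the normalization to 8UC1 and the coloring, something along these lines should work (here disp stands for the raw int16 output of compute()):

disp8 = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)   # scale to 8UC1
colored = cv2.applyColorMap(disp8, cv2.COLORMAP_JET)                    # easier to judge depth visually
cv2.imshow("disparity colored", colored)
cv2.waitKey(0)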

low reprojection error means your data agrees with itself. if the data is poor, reprojection error can still be low. if r.e. is outrageous, something’s definitely wrong. it being low does not imply that all is well. do not do bad things to lower this value. there is no “must be” requirement on its magnitude. larger r.e. just means you need to calibrate better, not worse (board parallel to image plane is worse).

my advice stands.

Good Afternoon, Thank you both for your reply.

I followed your advice @crackwitz and proceeded to recalibrate the cameras separately, and then in Stereo.

Now I have reprojection errors of 0.57 and 0.59, with sets of 45 and 55 images respectively, varying the tilt of the board with respect to the camera as well as the distance to it... whereas previously I had calibrated with only 6 images per camera.

Now I have a problem. I put together the code below, where I use ChArUco to calibrate the cameras in stereo. The only limitation is that both cameras must observe the same IDs.

The problem is the result: I don't know why the image comes out extruded into a cone shape. I attach the code and the result in case you can see what is wrong.

import cv2,os
import glob
import time
import numpy as np


#Name for the saved XML
map_name="10000000bc3fed61"

#Create Dictionary
rows=4
cols=4
sqr_size=.01
qr_size=.008

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_1000)
board = cv2.aruco.CharucoBoard_create(rows,cols,sqr_size,qr_size,dictionary)

imagesLeft = sorted(glob.glob(f'/home/admin/CALIBRATION/FOTO_IZQ_CH/*.png'))
imagesRight = sorted(glob.glob(f'/home/admin/CALIBRATION/FOTO_DER_CH/*.png'))
#
objpoints=[]
corners_list_L = []    # ChArUco corners, left image
id_list_L = []         # ChArUco IDs, left image
corners_list_R = []    # ChArUco corners, right image
id_list_R = []         # ChArUco IDs, right image
imgpointsR = []
imgpointsL = []
#
cv2.destroyAllWindows()

##TEST



for images_l, images_r in zip(imagesLeft,imagesRight):
    # Left Image Points
    img_l = cv2.imread(images_l)
    grayL = cv2.cvtColor(img_l, cv2.COLOR_BGR2GRAY)
    corners_L, ids_L, rejected_L = cv2.aruco.detectMarkers(grayL, dictionary)

    resp_L, charuco_corners_L, charucos_ids_L = cv2.aruco.interpolateCornersCharuco(corners_L, ids_L, grayL, board)

    objpoints_L, imgpoints_L = cv2.aruco.getBoardObjectAndImagePoints(board, charuco_corners_L, charucos_ids_L)

    # Right Image Points
    img_r = cv2.imread(images_r)
    grayR = cv2.cvtColor(img_r, cv2.COLOR_BGR2GRAY)
    corners_R, ids_R, rejected_R = cv2.aruco.detectMarkers(grayR, dictionary)

    resp_R, charuco_corners_R, charucos_ids_R = cv2.aruco.interpolateCornersCharuco(corners_R, ids_R, grayR, board)

    objpoints_R, imgpoints_R = cv2.aruco.getBoardObjectAndImagePoints(board, charuco_corners_R, charucos_ids_R)

    #Detections control
    print(resp_R, "       :       ",resp_L)

    if resp_L == resp_R and (resp_L and resp_R) > 1:
        corners_list_L.append(charuco_corners_L)
        corners_list_R.append(charuco_corners_R)

        id_list_L.append(charucos_ids_L)
        id_list_R.append(charucos_ids_R)

        objpoints.append(objpoints_L)

        imgpointsR.append(imgpoints_R)
        imgpointsL.append(imgpoints_L)

        # Draw and display
        cv2.aruco.drawDetectedCornersCharuco(img_l, charuco_corners_L, charucos_ids_L, (255,0,0))
        cv2.aruco.drawDetectedCornersCharuco(img_r, charuco_corners_R, charucos_ids_R, (255,0,0))

        cv2.imshow('imgL', cv2.resize(img_l,(640,480)))
        cv2.imshow('imgR', cv2.resize(img_r,(640,480)))
        cv2.moveWindow("imgR", 800, 0)
        cv2.waitKey(0)



cv2.destroyAllWindows()

cv_file = cv2.FileStorage(f'/home/admin/CALIBRATION/MAPAS/{os.path.basename(imagesLeft[0])[:16]}_CHARUCO.xml', cv2.FILE_STORAGE_READ)
cameraMatrixL = cv_file.getNode('CameraMatrix_L').mat()
cameraMatrixR = cv_file.getNode('CameraMatrix_R').mat()
newCameraMatrixL = cv_file.getNode('NEWCameraMatrix_L').mat()
newCameraMatrixR = cv_file.getNode('NEWCameraMatrix_R').mat()
distL=cv_file.getNode('Dist_L').mat()
distR=cv_file.getNode('Dist_R').mat()
cv_file.release()

flags = 0
flags |= cv2.CALIB_USE_INTRINSIC_GUESS
# Here we fix the intrinsic camera matrices so that only Rot, Trns, Emat and Fmat are calculated.
# Hence intrinsic parameters are the same

criteria_stereo= (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
retStereo, newCameraMatrixL, distL, newCameraMatrixR, distR, rot, trans, essentialMatrix, fundamentalMatrix = cv2.stereoCalibrate(objpoints, imgpointsL, imgpointsR, cameraMatrixL,distL, cameraMatrixR, distR, grayL.shape[::-1], criteria_stereo, flags)

rectifyScale= 1
rectL, rectR, projMatrixL, projMatrixR, Q, roi_L, roi_R= cv2.stereoRectify(newCameraMatrixL, distL, newCameraMatrixR, distR, grayL.shape[::-1], rot, trans, rectifyScale,(0,0))

#print(retStereo)

stereoMapL = cv2.initUndistortRectifyMap(newCameraMatrixL, distL, rectL, projMatrixL, grayL.shape[::-1], cv2.CV_16SC2)
stereoMapR = cv2.initUndistortRectifyMap(newCameraMatrixR, distR, rectR, projMatrixR, grayR.shape[::-1], cv2.CV_16SC2)

undistortedL= cv2.remap(img_l, stereoMapL[0], stereoMapL[1], cv2.INTER_LANCZOS4, cv2.BORDER_CONSTANT, 0)
undistortedR= cv2.remap(img_r, stereoMapR[0], stereoMapR[1], cv2.INTER_LANCZOS4, cv2.BORDER_CONSTANT, 0)

vis = np.concatenate((undistortedL, undistortedR), axis=1)
height, width, _ = vis.shape
for i in range(0, 10):
    cv2.line(vis, (0, int(height / 10) * i), (width, int(height / 10) * i), (0, 255, 0))
cv2.imshow("Result", cv2.resize(vis, (1280, 720)))
cv2.waitKey(0)

And the output is this:

But when I calibrate with a chessboard, I use the following code:

import numpy as np
import cv2 as cv
import glob, os

img_globdir="/home/admin/CALIBRATION"

################ FIND CHESSBOARD CORNERS - OBJECT POINTS AND IMAGE POINTS #############################

#chessboardSize = (26,19) #[row,cols]
chessboardSize = (20,13)
frameSize = (2592,1944) #[width, height]
sqre_size=12.5 #[mm]

# termination criteria
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)


chessboardSize=(chessboardSize[0]-1,chessboardSize[1]-1)
# Create the 3D object points
objp = np.zeros((chessboardSize[0] * chessboardSize[1], 3), np.float32)

# Multiply the mgrid output by the board square size (sqre_size, in mm)

objp[:,:2] = np.mgrid[0:chessboardSize[0],0:chessboardSize[1]].T.reshape(-1,2)

objp=objp*sqre_size

# This returns object points in the unit given above, in this case millimetres.

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpointsL = [] # 2d points in image plane.
imgpointsR = [] # 2d points in image plane.


imagesLeft = sorted(glob.glob(f'{img_globdir}/FOTO_IZQ/*.png'))
imagesRight = sorted(glob.glob(f'{img_globdir}/FOTO_DER/*.png'))

for imgLeft, imgRight in zip(imagesLeft, imagesRight):
    imgL = cv.imread(imgLeft)
    imgR = cv.imread(imgRight)
    grayL = cv.cvtColor(imgL, cv.COLOR_BGR2GRAY )
    grayR = cv.cvtColor(imgR, cv.COLOR_BGR2GRAY)

    # Find the chess board corners
    retL, cornersL = cv.findChessboardCorners(grayL, chessboardSize, None)
    retR, cornersR = cv.findChessboardCorners(grayR, chessboardSize, None)
    print(retL," : ", retR)
    # If found, add object points, image points (after refining them)
    if retL and retR:



        # Draw and display the corners
        cv.drawChessboardCorners(imgL, chessboardSize, cornersL, retL)
        cv.imshow('img left', cv.resize(imgL,(800,450)))
        cv.drawChessboardCorners(imgR, chessboardSize, cornersR, retR)
        cv.imshow('img right', cv.resize(imgR,(800,450)))
        while 1:
            k = cv.waitKey(33)
            if k == ord('y'):  # accept this image pair
                valid = True
                break
            if k == ord('n'):  # reject this image pair
                valid = False
                break
            if k == ord('d'):  # reject and delete the files
                valid = False
                print("Deleted")
                os.remove(imgLeft)
                os.remove(imgRight)
                break
        if valid:
            print("Validamos")
            objpoints.append(objp)

            cornersL = cv.cornerSubPix(grayL, cornersL, (11, 11), (-1, -1), criteria)
            imgpointsL.append(cornersL)

            cornersR = cv.cornerSubPix(grayR, cornersR, (11, 11), (-1, -1), criteria)
            imgpointsR.append(cornersR)

    else:
        os.remove(imgLeft)
        os.remove(imgRight)

cv.destroyAllWindows()

############## CALIBRATION #######################################################
cv_file = cv.FileStorage(f'/home/admin/CALIBRATION/DOCUMENTOS/MAPAS/{os.path.basename(imagesLeft[0])[:16]}_CHARUCO.xml', cv.FILE_STORAGE_READ)
cameraMatrixL = cv_file.getNode('CameraMatrix_L').mat()
cameraMatrixR = cv_file.getNode('CameraMatrix_R').mat()
newCameraMatrixL = cv_file.getNode('NEWCameraMatrix_L').mat()
newCameraMatrixR = cv_file.getNode('NEWCameraMatrix_R').mat()
distL=cv_file.getNode('Dist_L').mat()
distR=cv_file.getNode('Dist_R').mat()
cv_file.release()


########## Stereo Vision Calibration #############################################

flags = 0
flags |= cv.CALIB_USE_INTRINSIC_GUESS
# Here we fix the intrinsic camera matrices so that only Rot, Trns, Emat and Fmat are calculated.
# Hence intrinsic parameters are the same

criteria_stereo= (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# FROM HERE ON, SOMETHING COMES OUT WRONG. I would try an asymmetric circle pattern and see if that changes anything.
# This step computes the transformation between the two cameras and the Essential and Fundamental matrices
retStereo, newCameraMatrixL, distL, newCameraMatrixR, distR, rot, trans, essentialMatrix, fundamentalMatrix = cv.stereoCalibrate(objpoints, imgpointsL, imgpointsR, newCameraMatrixL, distL, newCameraMatrixR, distR, grayL.shape[::-1], criteria_stereo, flags)


print(retStereo)
########## Stereo Rectification #################################################

rectifyScale= 1
rectL, rectR, projMatrixL, projMatrixR, Q, roi_L, roi_R= cv.stereoRectify(newCameraMatrixL, distL, newCameraMatrixR, distR, grayL.shape[::-1], rot, trans, rectifyScale,(0,0))

stereoMapL = cv.initUndistortRectifyMap(newCameraMatrixL, distL, rectL, projMatrixL, grayL.shape[::-1], cv.CV_16SC2)
stereoMapR = cv.initUndistortRectifyMap(newCameraMatrixR, distR, rectR, projMatrixR, grayR.shape[::-1], cv.CV_16SC2)

undistortedL= cv.remap(imgL, stereoMapL[0], stereoMapL[1], cv.INTER_LANCZOS4, cv.BORDER_CONSTANT, 0)
undistortedR= cv.remap(imgR, stereoMapR[0], stereoMapR[1], cv.INTER_LANCZOS4, cv.BORDER_CONSTANT, 0)

vis = np.concatenate((undistortedL, undistortedR), axis=1)
height, width, _ = vis.shape
for i in range(0, 10):
    cv.line(vis, (0, int(height / 10) * i), (width, int(height / 10) * i), (0, 255, 0))
cv.imshow("CON", cv.resize(vis, (1280, 720)))
cv.waitKey(0)

and I get the following:

If you can see what I'm doing wrong when calibrating the stereo pair with ChArUco, I would appreciate it.

Anyway, I saved the .XML obtained from the stereo calibration with the chessboard and computed the disparity map again with the same images and the values that @Sencis recommended, but I couldn't push numDisparities up to 256 since the image went completely black, so for now it is at 60.

The resulting disparity map is this:

It improved considerably, but I think the problem is in the calibration: by increasing the number of images used for each individual calibration, the colors of the map improved noticeably. At least white no longer appears in the background, so it now follows some logic.

But I would like to improve the stereo calibration with ChArUco; I don't know why the result is so poor even though I use more than 14 images.

Thank you both for sharing your knowledge!!!

well, kinda. let me split hairs.

each can observe arbitrary markers/corners, but for the stereo calibration, you’ll have to calculate the intersection of both, and only use those.

the points have to match up in each array, i.e. model points and image points (for both views).
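a rough sketch of that intersection (untested; the helper name and shapes are my own assumptions, and it relies on the old cv2.aruco API where the board exposes chessboardCorners):

import numpy as np

def common_charuco(charuco_corners_L, charucos_ids_L, charuco_corners_R, charucos_ids_R, board):
    """Keep only the ChArUco corners whose IDs were detected in both views."""
    ids_L = charucos_ids_L.flatten()
    ids_R = charucos_ids_R.flatten()
    common = np.intersect1d(ids_L, ids_R)          # IDs visible in both images
    if len(common) < 4:
        return None
    lut_L = dict(zip(ids_L, charuco_corners_L.reshape(-1, 2)))
    lut_R = dict(zip(ids_R, charuco_corners_R.reshape(-1, 2)))
    obj = board.chessboardCorners[common].astype(np.float32)        # 3D model points, indexed by corner ID
    img_L = np.array([lut_L[i] for i in common], np.float32).reshape(-1, 1, 2)
    img_R = np.array([lut_R[i] for i in common], np.float32).reshape(-1, 1, 2)
    return obj, img_L, img_R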

that is broken. python doesn’t work like that. look at the value of (resp_L and resp_R). the comparison operator does not distribute into that expression. I’d recommend working with python’s debugger. that’s just something I spotted from skimming the code. if that bug is in there, there could be more issues.
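i.e. the condition was probably meant to be something like:

# both views need enough corners, and even equal counts don't guarantee the same IDs
if resp_L > 1 and resp_R > 1 and resp_L == resp_R:
    ...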

The solution that worked for me was something like this:

#include "stereovision.h"
#include <Windows.h>

StereoVision::StereoVision(int imageWidth,int imageHeight)
{
    this->imageSize = cv::Size(imageWidth,imageHeight);
    mx1 = my1 = mx2 = my2 = 0;
    calibrationStarted = false;
    calibrationDone = false;
    imagesRectified[0] = imagesRectified[1] = imageDepth_left = imageDepth_right = imageDepthNormalized = 0;
    imageDepth_left = 0;
    imageDepth_right = 0;
    sampleCount = 0;

//    SBM = cv::StereoBM::create(256, 9);

//    SBM->setPreFilterSize(5);
//    SBM->setPreFilterCap(63);
//    SBM->setTextureThreshold(50);
//    SBM->setSpeckleWindowSize(256);
//    SBM->setSpeckleRange(256);
//    SBM->setMinDisparity(-125);
//    SBM->setUniquenessRatio(0);

//    left_matcher = cv::StereoBM::create(256, 9);

//    WLS_filter =cv::ximgproc::createDisparityWLSFilterGeneric(true);

//    WLS_filter = cv::ximgproc::createDisparityWLSFilter(left_matcher);

//    right_matcher = cv::StereoBM::create(256, 9);

      SGBM = cv::StereoSGBM::create(-140,256,3,300,1200,0,63,1,256,32,cv::StereoSGBM::MODE_SGBM);

//    SGBM = cv::StereoSGBM::create(-9,256,11,0,0,1,63,10,100,32,cv::StereoSGBM::MODE_SGBM);
}

StereoVision::~StereoVision()
{
    imagesRectified[0].release();
    imagesRectified[1].release();
    imageDepth_left.release();
    imageDepth_right.release();
    imageDepthNormalized.release();
}

void StereoVision::calibrationStart(int cornersX,int cornersY, int distCircle){
    this->cornersX = cornersX;
    this->cornersY = cornersY;
    this->cornersN = cornersX*cornersY;
    this->squareSize = distCircle;
    ponintsTemp[0].resize(cornersN);
    ponintsTemp[1].resize(cornersN);
    sampleCount = 0;
    calibrationStarted = true;
}


int StereoVision::calibrationAddSample(cv::Mat* imageLeft,cv::Mat* imageRight){

    if(!calibrationStarted) return RESULT_FAIL;

    cv::Mat image[2] = {*imageLeft, *imageRight};

    cv::Mat cimg[2];

    int succeses = 0;

    for(int lr=0; lr<2; lr++){

        cv::Size imageSize = image[lr].size();

        if(imageSize.width != this->imageSize.width || imageSize.height != this->imageSize.height) return RESULT_FAIL;

        //FIND CHESSBOARDS AND CORNERS THEREIN:
        //int result = cv::findChessboardCorners( image[lr], cv::Size(cornersX, cornersY), ponintsTemp[lr], cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE );

        cv::SimpleBlobDetector::Params param;
        param.maxArea = 16384; //Size 64x64 = max 62 circles for 640x400 to 44 in board
        param.minArea = 64; //minSize 8x8
        param.minThreshold = 50;
        param.maxThreshold = 200;
        param.thresholdStep = 5;
        param.filterByCircularity = true;
        param.minCircularity = 0.8;
        param.filterByColor = true;
        param.blobColor = 0;
        param.filterByInertia = true;
        param.minInertiaRatio = 0.4f;
        param.filterByConvexity = true;
        param.minConvexity = 0.7f;
        cv::Ptr<cv::FeatureDetector> blobDetector = cv::SimpleBlobDetector::create(param);

        int result = findCirclesGrid(image[lr], cv::Size(cornersY, cornersX), ponintsTemp[lr], cv::CALIB_CB_ASYMMETRIC_GRID | cv::CALIB_CB_CLUSTERING, blobDetector);/*The OpenCV circle grid detector implementation uses subpixel precision*/

        if(result){

            cv::cvtColor(image[lr], cimg[lr], cv::COLOR_GRAY2BGR);

            //Calibration will suffer without subpixel interpolation
            //cv::cornerSubPix( image[lr], ponintsTemp[lr], cvSize(11, 11), cvSize(-1,-1), cv::TermCriteria(cv::TermCriteria::MAX_ITER | cv::TermCriteria::EPS, 30, 0.1));
            cv::drawChessboardCorners( cimg[lr], cv::Size(cornersY, cornersX), ponintsTemp[lr], result);

            succeses++;
        }

    }
    if(2==succeses){

        for(int lr=0;lr<2;lr++){ cv::imshow("output", cimg[lr]); Sleep(200);  points[lr].push_back(ponintsTemp[lr]);}

        sampleCount++;

        if(trigg_press){

        trigg_press = false;


        for(int lr=0;lr<2;lr++){

//            std::string output;

//            if(lr==0) output = "left";
//            else output = "right";

//            std::string data = "C:/Users/CopterTech/Desktop/build-OpenCV-Qt-StereoVision-Desktop_Qt_5_15_2_MinGW_32_bit-Debug/debug/StereoPair/" + output + "_image_" + std::to_string(sampleCount) +".png";

//            cv::imwritemulti(data, image[lr]);
        }


     }
         return RESULT_OK;
    }

    else return RESULT_FAIL;
}

int StereoVision::calibrationEnd(){
    calibrationStarted = false;

    // Correct filling of the objectPoints array; the squareSize distance is given in millimetres
    for(int k=0;k<sampleCount;k++){

        obj.clear();

        for(int i = 0; i < cornersX; i++ ) for(int j = 0; j < cornersY; j++ ) obj.push_back(cv::Point3f(float(i  * squareSize / 2.0f), float((2.0f * j + i % 2) * squareSize / 2.0f), 0.0f)); //objectPoints[k*cornersY*cornersX + i*cornersX + j] = cvPoint3D32f(i, j, 0);

        objectPoints.push_back(obj);
    }

    cv::Mat M1 = cv::Mat::eye(3, 3, CV_64F);

    cv::Mat M2 = cv::Mat::eye(3, 3, CV_64F);

    cv::Mat D1, D2, R, T, E, F;

    cv::FileStorage fs1("mystereocalib.yml", cv::FileStorage::WRITE);

    //if (M1.empty() || M2.empty()) qDebug()<< "Camera_calibration_rms" << cv::stereoCalibrate( objectPoints, points[0], points[1], M1, D1, M2, D2, imageSize, R, T, E, F, cv::CALIB_SAME_FOCAL_LENGTH + cv::CALIB_ZERO_TANGENT_DIST + cv::CALIB_FIX_ASPECT_RATIO + cv::CALIB_RATIONAL_MODEL+ cv::CALIB_FIX_K3 + cv::CALIB_FIX_K4 + cv::CALIB_FIX_K5, cv::TermCriteria(cv::TermCriteria::MAX_ITER | cv::TermCriteria::EPS, 100, 0.0001));
    qDebug()<< "Camera_calibration_rms" << cv::stereoCalibrate( objectPoints, points[0], points[1], M1, D1, M2, D2, imageSize, R, T, E, F, cv::CALIB_SAME_FOCAL_LENGTH + cv::CALIB_ZERO_TANGENT_DIST, cv::TermCriteria(cv::TermCriteria::MAX_ITER | cv::TermCriteria::EPS, 100, 0.0001));

    fs1 << "Camera_intrinsic_matrix_1_left" << M1;
    fs1 << "Camera_intrinsic_matrix_2_right" << M2;
    fs1 << "Distorsion_coff_matrix_1" << D1;
    fs1 << "Distorsion_coff_matrix_2" << D2;
    fs1 << "Rotation_matrix" << R;
    fs1 << "Translation_vector" << T;
    fs1 << "Essential_matrix" << E;
    fs1 << "Fundamental_matrix" << F;

    vector<cv::Point3f> lines[2];

    double avgErr = 0;

    int nframes = (int)objectPoints.size();

    for (int i = 0; i < nframes; i++) {

        vector<cv::Point2f> &pt0 = points[0][i];
        vector<cv::Point2f> &pt1 = points[1][i];

        cv::undistortPoints(pt0, pt0, M1, D1, cv::Mat(), M1);
        cv::undistortPoints(pt1, pt1, M2, D2, cv::Mat(), M2);

        cv::computeCorrespondEpilines(pt0, 1, F, lines[0]);
        cv::computeCorrespondEpilines(pt1, 2, F, lines[1]);

        for (int j = 0; j < cornersN; j++) {
          double err = fabs(pt0[j].x * lines[1][j].x + pt0[j].y * lines[1][j].y +
                            lines[1][j].z) +
                       fabs(pt1[j].x * lines[0][j].x + pt1[j].y * lines[0][j].y +
                            lines[0][j].z);
          avgErr += err;
        }
      }

    qDebug()<< "avg err = " << avgErr / (nframes * cornersN);

    fs1<< "CALIBRATION_QUALITY_CHECK_Because_the_output_fundamental_matrix_implicitly_includes_all_the_output_information_we_can_check_the_quality_of_calibration_using_the_epipolar_geometry_constraint" << avgErr / (nframes * cornersN);

    //fs1.write("  CALIBRATION QUALITY CHECK Because the output fundamental matrix implicitly includes all the output information, we can check the quality of calibration using the epipolar geometry constraint: m2^t*F*m1=0 avg err =",avgErr / (nframes * cornersN) );

   cv::Mat R1, R2, P1, P2;

   cv::stereoRectify(M1, D1, M2, D2, imageSize, R, T, R1, R2, P1, P2, Q);

    fs1 << "Rectification_transform_rotation_matrix_for_the_first_camera_1_left" << R1;
    fs1 << "Rectification_transform_rotation_matrix_for_the_second_camera_2_right" << R2;
    fs1 << "Projection_matrix_in_the_new_rectified_coordinate_systems_for_the_first_camera" << P1;
    fs1 << "Projection_matrix_in_the_new_rectified_coordinate_systems_for_the_second_camera" << P2;
    fs1 << "Disparity_to_depth_mapping_matrix " << Q;

    fs1.release();
    mx1.release();
    my1.release();
    mx2.release();
    my2.release();
    mx1.create( imageSize.height,imageSize.width, CV_32F );
    my1.create( imageSize.height,imageSize.width, CV_32F );
    mx2.create( imageSize.height,imageSize.width, CV_32F );
    my2.create( imageSize.height,imageSize.width, CV_32F );

    // Precompute map for cvRemap()
    //
    cv::initUndistortRectifyMap(M1, D1, R1, P1, imageSize, CV_32F, mx1, my1);
    cv::initUndistortRectifyMap(M2, D2, R2, P2, imageSize, CV_32F, mx2, my2);

    calibrationDone = true;

    return RESULT_OK;
}

int StereoVision::stereoProcess(cv::Mat* imageSrcLeft, cv::Mat* imageSrcRight){

    if(!calibrationDone) return RESULT_FAIL;

    //rectify images
    cv::remap(*imageSrcLeft, imagesRectified[0], mx1, my1, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar());
    cv::remap(*imageSrcRight, imagesRectified[1], mx2, my2, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar());

//    left_matcher->compute(imagesRectified[0], imagesRectified[1], imageDepth_left);

//    left_matcher->setPreFilterSize(5);
//    left_matcher->setPreFilterCap(63);
//    left_matcher->setTextureThreshold(50);
//    left_matcher->setSpeckleWindowSize(255);
//    left_matcher->setSpeckleRange(255);
//    left_matcher->setMinDisparity(-125);
//    left_matcher->setUniquenessRatio(0);

//    cv::flip(imagesRectified[0],imagesRectified[0], 1); cv::flip(imagesRectified[1],imagesRectified[1], 1);

//    right_matcher->compute(imagesRectified[1], imagesRectified[0], imageDepth_right);

//    right_matcher->setPreFilterSize(5);
//    right_matcher->setPreFilterCap(63);
//    right_matcher->setTextureThreshold(50);
//    right_matcher->setSpeckleWindowSize(255);
//    right_matcher->setSpeckleRange(255);
//    right_matcher->setMinDisparity(-125);
//    right_matcher->setUniquenessRatio(0);

//    cv::flip(imageDepth_right,imageDepth_right, 1);

//    cv::flip(imagesRectified[0],imagesRectified[0], 1); cv::flip(imagesRectified[1],imagesRectified[1], 1);

      SGBM->compute(imagesRectified[0], imagesRectified[1], imageDepth_left);

//    SBM->compute(imagesRectified[0], imagesRectified[1], imageDepth_left);

//    WLS_filter->setLambda(8000.0);

//    WLS_filter->setSigmaColor(1.5);

//    WLS_filter->filter(imageDepth_left, imagesRectified[0], imageDepthNormalized, imageDepth_right);

//    cv::ximgproc::getDisparityVis(imageDepthNormalized, imageDepthNormalized, -1.0);

    cv::normalize(imageDepth_left, imageDepthNormalized, 0, 255, cv::NORM_MINMAX, CV_8UC1);

    cv::applyColorMap(imageDepthNormalized, imageDepthNormalized, cv::ColormapTypes::COLORMAP_JET);

//    cv::Mat XYZ(imageDepthNormalized.size(),CV_32FC3);
//    cv::reprojectImageTo3D(imageDepthNormalized, XYZ, Q, false, CV_32F);
   // print_3D_points(imageDepthNormalized, XYZ);


    return RESULT_OK;
}

void StereoVision::SetPreFilterSize(int data)
{
  if(data % 2) {
   if (data < 5 )data = 5;
   if (data > 255)data = 255;
//   left_matcher->setPreFilterSize(data);
//   right_matcher->setPreFilterSize(data);
  }
}

void StereoVision::SetPreFilterCap(int data)
{
  if(data % 2) {
  if (data < 1 )data = 1;
  if (data > 63)data = 63;
//  left_matcher->setPreFilterCap(data);
//  right_matcher->setPreFilterCap(data);
  SGBM->setPreFilterCap(data);
  }
}

void StereoVision::SetTextureThreshold(int data)
{
//  left_matcher->setTextureThreshold(data);
//  right_matcher->setTextureThreshold(data);

}

void StereoVision::SetUniquenessRatio(int data)
{
//  left_matcher->setUniquenessRatio(data);
//  right_matcher->setUniquenessRatio(data);
  SGBM->setUniquenessRatio(data);
}

void StereoVision::SetSpeckleWindowSize(int data)
{
//  left_matcher->setSpeckleWindowSize(data);
//  right_matcher->setSpeckleWindowSize(data);
  SGBM->setSpeckleWindowSize(data);
}

void StereoVision::SetMinDisparity(int data)
{
//  left_matcher->setMinDisparity(data);
//  right_matcher->setMinDisparity(data);
  SGBM->setMinDisparity(data);
}

void StereoVision::SetSpeckleRange(int data)
{
//    left_matcher->setSpeckleRange(data);
//    right_matcher->setSpeckleRange(data);
    SGBM->setSpeckleRange(data);
}

void StereoVision::SetBlockSize(int data)
{
    if(data % 2) {
     if (data < 5 )data = 5;
     if (data > 255)data = 255;
//     left_matcher->setBlockSize(data);
//     right_matcher->setBlockSize(data);
     SGBM->setBlockSize(data);
    }
}

void StereoVision::SetDisp12MaxDiff(int data)
{
//    left_matcher->setDisp12MaxDiff(data);
//    right_matcher->setDisp12MaxDiff(data);
}

void StereoVision::SetNumDisparities(int data)
{
//    left_matcher->setNumDisparities(data);
//    right_matcher->setNumDisparities(data);
    SGBM->setNumDisparities(data);
}

void StereoVision::SetSmallerBlockSize(int data)
{
//   left_matcher->setSmallerBlockSize(data);
//   right_matcher->setSmallerBlockSize(data);
}

int StereoVision::calibrationSave(const char* filename){
    if(!calibrationDone) return RESULT_FAIL;
    FILE* f =  fopen(filename,"w+b");
    if(!f) return RESULT_FAIL;
    if(!fwrite(mx1.data,sizeof(float),mx1.rows*mx1.cols,f)) return RESULT_FAIL;
    if(!fwrite(my1.data,sizeof(float),my1.rows*my1.cols,f)) return RESULT_FAIL;
    if(!fwrite(mx2.data,sizeof(float),mx2.rows*mx2.cols,f)) return RESULT_FAIL;
    if(!fwrite(my2.data,sizeof(float),my2.rows*my2.cols,f)) return RESULT_FAIL;
    if(!fwrite(  Q.data,sizeof(float),  Q.rows  *Q.cols,f)) return RESULT_FAIL;
    fclose(f);
    return RESULT_OK;
}


int StereoVision::calibrationLoad(const char* filename){
    mx1.release();
    my1.release();
    mx2.release();
    my2.release();
//      Q.release();
    mx1.create( imageSize.height,imageSize.width, CV_32F );
    my1.create( imageSize.height,imageSize.width, CV_32F );
    mx2.create( imageSize.height,imageSize.width, CV_32F );
    my2.create( imageSize.height,imageSize.width, CV_32F );
//      Q.create(                4,              4, CV_32F );
    FILE* f =  fopen(filename,"rb");
    if(!f) return RESULT_FAIL;
    if(!fread(mx1.data,sizeof(float),mx1.rows*mx1.cols,f)) return RESULT_FAIL;
    if(!fread(my1.data,sizeof(float),my1.rows*my1.cols,f)) return RESULT_FAIL;
    if(!fread(mx2.data,sizeof(float),mx2.rows*mx2.cols,f)) return RESULT_FAIL;
    if(!fread(my2.data,sizeof(float),my2.rows*my2.cols,f)) return RESULT_FAIL;
//    if(!fread(  Q.data,sizeof(float),  Q.rows*  Q.cols,f)) return RESULT_FAIL;
    fclose(f);
    calibrationDone = true;
    return RESULT_OK;
}

I used a circle grid pattern for calibration:

For a chessboard, you need to fill the objpoints array in a different way.

You can use MatLab to export calibration parameters:



To compare the calibration results: for example, for the board in this image with a square size of 12.5 mm, the objectPoints array should be filled like this:

Unfortunately, it finds the number of chessboard squares in width and height on its own, unlike with circles, so I can't show the worldPoints for exactly your (20,13) board, but you can load the pictures yourself.

Well, in the end I chose not to reinvent the wheel. Looking at your images I realized that MATLAB already has an app for calibration, so I downloaded it and that was it! With the same images I got a very good calibration, with a reprojection error of 0.1048. I guess the OpenCV algorithm must not be as well optimized. I spent months going around in circles with the calibration, and MATLAB solved everything for me.

I have only one last question. I have been taking many images to see how the disparity map behaves in different environments. Some images produce quite a good map for me, like this one:

LEFT

RIGHT

Disparity without WLS filter

Disparity with WLS filter

But depending on the environment, I find that I have to adjust the SGBM parameters. For example, in the following one I could only get a "good result" using blockSize = 1:

LEFT

RIGHT

Disparity without WLS filter

Disparity with WLS filter

In this one I was able to get good results by changing numDisparities = 11 and blockSize = 5:

LEFT

RIGHT

Disparity without WLS filter

Disparity with WLS filter

My question is: is it normal to have to change the SGBM parameters for each image? And if so, given that my project consists of generating depth maps in very different environments (a lot of light, low light, objects at 5 meters or at 15), is there any way to "automate" the choice of values?
Thank you very much

I encountered a similar problem when I needed to select depth-map parameters depending on the distance, and I have not yet found an exact solution; OpenCV uses quite old algorithms. It seems to me that OAK-D cameras and the like, for exactly these reasons, fuse neural-network methods with analytical depth-map construction; as before, depth mapping remains an incompletely solved problem. Experiment with the different algorithms available; you may need some data-science skills and a neural network to tune these parameters, and there are many datasets with ready-made depth maps that could be used as training samples.
Some hardware choices can also help. For example, to build a depth map at large distances of 100 m and more, you need a large baseline between the cameras (300-500 mm), lenses with low distortion, and a long focal length, for example 12 mm; a FOV of 33° is not a problem, since you cannot tune a depth map to see both near and far objects well at the same time, and a small viewing angle also acts as a filter against the closest objects. It works in the opposite direction as well. Make sure both cameras have the same exposure and gain and are shutter-synchronized; it is best to use a global shutter if possible.
Use this calculator to determine the necessary parameters for your camera:

https://nerian.com/support/calculator/
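The relation behind such calculators is the usual one, Z = f * B / d, with f the focal length in pixels, B the baseline and d the disparity in pixels. A quick sanity check for your setup (the focal length below is a made-up placeholder; take the real fx from your calibration):

focal_px = 2800.0      # placeholder: use the real fx in pixels from your calibration
baseline_m = 0.145     # the 145 mm baseline mentioned above

def disparity_for_depth(depth_m):
    return focal_px * baseline_m / depth_m        # d = f * B / Z

def depth_for_disparity(disparity_px):
    return focal_px * baseline_m / disparity_px   # Z = f * B / d

# rough disparity range needed for objects between 5 m and 15 m
print(disparity_for_depth(5.0), disparity_for_depth(15.0))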

Among analytical algorithms, pay attention to the various modifications of SGM; some of them no longer have a block-size parameter at all. My StereoPi board is too weak to run them and has no Nvidia GPU for the CUDA implementations, so I could not test this, but apparently block-size selection is effectively automated there.

The result is that part of the left side is cut off.

that part of the disparity volume cannot be estimated completely. sampling the full range of “numdisp”, in that part of the map, would cause the algorithm to move partially outside of one of the pictures.

one could say it’s a “gap” in the algorithm where they were overly cautious. in principle, that volume could be tested, with less than “numdisp” range.

that’s why it is blank.
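if you want to hide it, just treat that left band as invalid for display (a sketch, reusing the variable names from the first post):

# the leftmost numDisparities columns of the left disparity map can never be matched over the full range
num_disp = 6 * 16                          # whatever numDisparities was actually used
valid = filteredImg[:, num_disp:]          # simply crop that band away for display
# cv2.getValidDisparityROI(roi_L, roi_R, minDisparity, numDisparities, blockSize) returns the exact valid ROI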