hello, I have been trying for a long time to write two scripts: 1) stereo calibration, 2) disparity map generation. When I generate a PLY file and measure it in MeshLab, I get wrong numbers. For example, my chessboard squares are 2.5 mm, but MeshLab gives me something like 86.9. I thought maybe my PLY was in pixels instead of mm, but converting to mm also gave the wrong size. So I applied a scale factor, but when I fix a scale factor with the scanned object at, say, 10 cm away and then scan the same object at 11 cm away, it no longer gives the real size. I am so tired, so kindly, someone help me please.
“disparity” is a screen-space measure, in pixels.
to get 3D points, you have to reproject the disparity through the Q matrix from stereo rectification, e.g. with cv2.reprojectImageTo3D().
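To make the pixel-to-depth relationship concrete, here is a minimal sketch of the standard stereo triangulation formula Z = f·B/d. The focal length value is a placeholder, not taken from the question; the 10 mm baseline matches the rig described at the end of the post.

```python
import numpy as np

# Depth from disparity: Z = f * B / d
#   f = focal length in pixels (from the camera matrix, e.g. K[0, 0])
#   B = baseline between the cameras, in the SAME unit you want Z in
#   d = disparity in pixels
fx = 800.0          # focal length in pixels (hypothetical value)
baseline_mm = 10.0  # camera separation in mm (the ~10 mm rig below)

disparity_px = np.array([80.0, 40.0, 20.0])  # example disparities
depth_mm = fx * baseline_mm / disparity_px
print(depth_mm)  # → [100. 200. 400.]
```

Note the inverse relationship: halving the disparity doubles the depth, which is why a single multiplicative scale factor fitted at one distance cannot stay correct at another distance.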
the steps I used for the disparity script:
- Load the left and right images from the specified paths; load the calibration files with np.load.
- Undistort the loaded images using cv2.undistort() and the intrinsic camera matrices and distortion coefficients.
- Convert the undistorted images to grayscale.
- Create a window to display the disparity map.
- Create a StereoSGBM object with default parameters.
- Compute the disparity map using StereoSGBM.compute() with the grayscale left and right images as inputs.
- Normalize the disparity map to an 8-bit image.
- Display the disparity map.
- Define a 4x4 projection matrix Q1 for the reprojected 3D points.
- Use cv2.reprojectImageTo3D() to create a 3D point cloud from the disparity map and projection matrix.
- Load the left image again using cv2.imread() to extract its colors.
- Reshape the 3D point cloud and color arrays and concatenate them to create a single array.
- Remove any points with NaN values and save the remaining points and colors as a PLY file using a custom function.
I can share my script if you want to take a look at it.
the steps for calibration:
1-Import necessary libraries: OpenCV and NumPy
2-Set up some parameters for the chessboard used for calibration, including its size, the size of each square on the board, and the desired frame size for the images.
3-Initialize empty lists for storing image points and object points for both the left and right cameras.
4-Create an array of object points that represents the 3D coordinates of the corners of the chessboard in the real world.
5-Use the cv2.findChessboardCorners() function to detect the corners of the chessboard in each image.
6-If the corners are detected in an image, refine their positions using cv2.cornerSubPix().
7-Draw the detected corners onto the image using cv2.drawChessboardCorners().
8-Concatenate the left and right images with detected corners side-by-side, and display the result using cv2.imshow().
9-Repeat steps 5-8 for each image in the dataset.
10-Use cv2.calibrateCamera() to calibrate the left and right cameras separately using the image and object points.
11-Use cv2.stereoCalibrate() to calibrate the stereo camera system using the object and image points from both the left and right cameras, as well as the calibration parameters from the individual cameras.
12-Save the calibration parameters for both the left and right cameras, as well as the rotation and translation matrices for the stereo camera system, to separate .npy files.
about my camera rig setup: they are micro cameras about 10 mm apart from each other, parallel to each other in the X and Y axes, not angled. Maybe this causes the problem, so do I need to angle one of them?