Camera Calibration Intrinsic Parameter Verification

Hello All,

I’m trying to calibrate a stereo camera pair using OpenCV and the chessboard calibration method. I want to understand how I can verify whether the camera calibration parameters are correct. For example, the intrinsic matrix contains a focal length value in pixels. So if I’m using a 6 mm lens, should the calibration result match the actual focal length? That is, if I multiply the focal length in pixels (from the intrinsic matrix) by the pixel size in micrometers, should the result match the actual focal length of the lens?

Thanks and regards

this is purely intrinsics. it applies to monocular vision as well. not specific to stereo.

yes, it should match, give or take a few percent.

focal length [px] = focal length [mm] / pixel size [mm/px] (convert the pixel pitch from µm to mm first)

so if you had 6 mm focal length, and your sensor pixels are 2 µm, then you’d have a focal length of 3000 pixels.

that is also how you can get a very reasonable intrinsic matrix from just design parameters, without any calibration.

if the calibrated focal length happens to be a clean fraction of the expected design value, check your binning. cropping does not affect focal length but it does affect cx,cy
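The back-of-the-envelope intrinsics mentioned above can be sketched like this; the 6 mm lens and 2 µm pixel pitch are the example numbers from this thread, while the 640x480 image size and the centered principal point are assumptions:

```python
import numpy as np

# design parameters (assumed, not calibrated)
f_mm = 6.0            # lens focal length from the spec sheet, in mm
pixel_size_um = 2.0   # sensor pixel pitch, in µm
width, height = 640, 480

# focal length [px] = focal length [mm] / pixel pitch [mm/px]
f_px = f_mm * 1000.0 / pixel_size_um   # 3000 px

# pinhole intrinsic matrix with the principal point assumed
# at the image center
K = np.array([
    [f_px, 0.0,  width / 2.0],
    [0.0,  f_px, height / 2.0],
    [0.0,  0.0,  1.0],
])
print(K)
```

Such a K is often good enough as an initial guess (e.g. for cv2.calibrateCamera with CALIB_USE_INTRINSIC_GUESS) or for rough projections, but it says nothing about lens distortion.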


Thank you for your reply and confirmation.

I did not understand your point "that is also how you can get a very reasonable intrinsic matrix from just design parameters, without calibration." What do you mean by without calibration?

Also, I have been using Raspberry Pi HQ cameras with a 6 mm lens. The camera sensor has a pixel size of 1.5 µm, so the focal length in pixels should be 4000 pixels. However, when I execute the calibration algorithm with 20 images of a chessboard pattern, I don’t get a similar focal length in pixels; the deviation is too high. The value I get is 200 pixels.

Algorithm reference: OpenCV: Camera Calibration
My chessboard size is 7 rows, 10 columns

Camera Intrinsic Matrix Output for Left Camera:
CameraMatrix_L =
[[200.669 0. 291.749]
[ 0. 200.625 246.391]
[ 0. 0. 1. ]]

Could you please let me know what could be wrong in my calibration process?
I have attached one of the input images for your reference.

You mentioned you are using the RPI HQ camera. I did a little bit of digging and it appears that camera has a native resolution of 4056x3040. The image you posted is 640x480. Is this the resolution of your calibration images? If so, the focal length will be different than what you computed because the effective pixel size is larger than what the spec sheet says. (I’m assuming the 640x480 image was generated with binning or scaling/resampling, and not by cropping.)

I note that your calibrated image center is (291, 246) - in most cases you expect this to be somewhat close to the numerical center of the sensor. These numbers look like they could correspond to a 640x480 image much more than to a 4056x3040 image.

So that’s the first thing to figure out / account for. Unfortunately I don’t think this explains all that is going on. The image size would account for a factor of 6 difference between calibrated and expected focal length, but you are seeing a 20x difference.

You mentioned that you used 20 images - can you post some of the other images that you used? The 20 images should show the calibration target from different views with different distances, rotations, and with calibration points that cover different parts of the camera image. The image you shared looks centered with the camera pointed almost perpendicular to the calibration target - you want images that are taken at an angle, otherwise you won’t get good estimates for your focal length.
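Beyond eyeballing the focal length, one concrete sanity check is the per-image reprojection error: project the board corners back with the calibrated parameters (e.g. via cv2.projectPoints with the rvecs/tvecs returned by cv2.calibrateCamera) and measure the pixel distance to the detected corners. A minimal RMS helper, shown here on made-up corner coordinates:

```python
import numpy as np

def reprojection_rms(detected, projected):
    """RMS pixel distance between detected corners and the corners
    reprojected with the calibrated parameters."""
    d = np.asarray(detected, dtype=float) - np.asarray(projected, dtype=float)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

# toy numbers: two corners, each reprojected 0.5 px off in x
detected = [(100.0, 50.0), (200.0, 50.0)]
projected = [(100.5, 50.0), (200.5, 50.0)]
print(reprojection_rms(detected, projected))  # 0.5
```

A well-converged chessboard calibration typically lands well under 1 px RMS; errors of several pixels suggest bad corner detections or too little view diversity.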

Thank you so much for your answer. It helped. I captured 30 calibration images from different angles and orientations with a resolution of 640x480.
The camera matrix I got was:
695.09 0 233.752
0 700.141 66.58
0 0 1

From this, I calculated the focal length in mm. Since the resolution is decreased to 640x480, the effective pixel size would be 9.506 µm rather than the 1.5 µm that applies at the native 4056 resolution. Thus, the calculated focal length is 6.607 mm, which nearly matches my 6 mm lens.
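That unit conversion can be written out explicitly; the numbers below are the ones from this thread (4056 px native width, 1.5 µm pitch, 640 px calibration images, fx ≈ 695.09), and it assumes the 640x480 images come from scaling/binning rather than cropping:

```python
# effective pixel pitch when the native sensor width is scaled/binned
# down to the calibration image width
native_width_px = 4056
image_width_px = 640
pixel_size_um = 1.5

eff_pixel_um = pixel_size_um * native_width_px / image_width_px  # ~9.506 µm

# convert the calibrated focal length from pixels back to mm
fx_px = 695.09
f_mm = fx_px * eff_pixel_um / 1000.0   # ~6.61 mm, close to the 6 mm lens
print(f_mm)
```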

Now I also want to know the baseline between the cameras. I understood that the translation vector generated by stereo calibration gives an X coordinate, which is the distance of the 2nd camera from the 1st, i.e. the baseline. Am I correct?
The X-coordinate value I got was -5.847. Since my chessboard square size was 2.5 cm × 2.5 cm, the baseline would be 5.847 × 2.5 ≈ 14.62 cm, which matches my setup.
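A sketch of that baseline calculation. The key assumption here is that the object points passed to cv2.stereoCalibrate were expressed in units of chessboard squares; if you instead multiply your object points by the square size in cm up front, T comes out directly in cm. The y/z components of T below are placeholders, not values from this thread:

```python
import numpy as np

square_size_cm = 2.5
# translation vector from cv2.stereoCalibrate, in square units;
# only the X component (-5.847) comes from this thread
T = np.array([-5.847, 0.0, 0.0])

# with a near-horizontal stereo rig the baseline is ~|Tx|, but the
# norm of T is the more general measure if the rig is slightly tilted
baseline_cm = np.linalg.norm(T) * square_size_cm
print(baseline_cm)  # ~14.62 cm
```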

Now, I have calculated the disparity using cv2.StereoBM_create() and stored it. Next I wish to calculate the depth of a point in the image. The formula for depth is (focal length × baseline) / disparity. Now I want to understand what units all these parameters should be in to get the depth. Should I use the focal length and baseline straight from the camera calibration results, or should I convert them to actual units (millimeters or meters) first? Do I need to convert the disparity values to any other standard unit?
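On units: in depth = f × B / d, the focal length and disparity must both be in pixels, and the depth then comes out in whatever unit the baseline is in. One OpenCV-specific catch worth flagging: cv2.StereoBM_create().compute() returns a 16-bit fixed-point map in which each value is 16× the disparity, so divide by 16.0 first. A small sketch using the numbers from this thread (fx ≈ 695.09 px, baseline ≈ 14.61 cm) and made-up raw disparity values:

```python
import numpy as np

f_px = 695.09         # calibrated focal length, in pixels
baseline_cm = 14.61   # stereo baseline, in cm

# StereoBM output is CV_16S, scaled by 16; these raw values are made up
raw_disparity = np.array([[160, 320]], dtype=np.int16)
disparity_px = raw_disparity.astype(np.float32) / 16.0  # 10 px, 20 px

# depth inherits the baseline's unit (cm here); in real code, mask out
# zero/negative disparities before dividing
depth_cm = f_px * baseline_cm / disparity_px
print(depth_cm)  # roughly [[1015.5, 507.8]]
```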

Thanks and regards,

Is my understanding correct? What is the correct way to calculate disparity?

Can anyone please help me with this?