# Decoding Camera Calibration Results: Calculating the Precise Focal Length

Hello everyone,

Calculating a camera’s focal length from calibration results can be quite a headache, often raising questions that seem to go unanswered. One of the biggest puzzles many people encounter is how to determine the camera’s focal length in millimeters. Another issue that arises is why the focal length obtained from a calibration model often doesn’t match the actual focal length of the camera’s lens. For instance, see these discussions:

“Sorry, as a new user, I am not allowed to post more than 2 links. So, I removed these, as they are the most insignificant ones.”

First of all, we need common ground, so let’s stick to the definition of focal length found on Wikipedia: “For a thin lens in air, the focal length is the distance from the center of the lens to the principal foci (or focal points) of the lens.”

To boost my confidence in the calibration process, I tried to determine the focal length of the lens I’m using, only to end up with a surprising result. (The datasheet specifies the lens as having a 90 mm focal length, but from the calibration I get ~107 mm.*) My initial thought was that the calibration process might be flawed or inaccurate.

I then came across a post in the OpenCV forum, which led me to realize that the pinhole model and the lens model are fundamentally different. That’s why you can’t directly compare the focal lengths derived from each model, as illustrated in the sketches I made:

So my final conclusion: the camera calibration computes what we might call the “effective focal length” of the projection model, i.e. the focal length the camera would have if it were an ideal pinhole camera. This “effective focal length” is not directly comparable to the focal length specified in lens manufacturers’ spec sheets.
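To make the distinction concrete, here is a minimal sketch of the pinhole projection that the calibration actually fits. The f_x value is the one from my calibration (see footnote below); the principal point is simply assumed to be the image center of my 9504 × 6336 sensor, which the real calibration does not guarantee:

```python
# Minimal pinhole projection sketch (f_x from my calibration result;
# c_x, c_y assumed to be the image center of a 9504 x 6336 sensor).
f_x = f_y = 28484.52          # "effective focal length" in pixels
c_x, c_y = 4752.0, 3168.0     # assumed principal point

def project(X, Y, Z):
    """Project a 3D point (camera coordinates, metres) to pixel coordinates."""
    return f_x * X / Z + c_x, f_y * Y / Z + c_y

# A point 10 mm right of the optical axis, 1 m in front of the camera:
u, v = project(0.010, 0.0, 1.0)   # lands ~285 px right of the principal point
```

In this model, f_x is nothing more than the scale factor between the angular offset of a point and its pixel offset from the principal point; no physical lens geometry appears anywhere.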

So, my final question is: Do you agree with my reasoning, and are my conclusions correct?

Many thanks in advance—I’m looking forward to a great discussion on this.

Best regards, and happy calibrating,

Marius

P.S.: I also noticed that my results for f_x and f_y changed noticeably (by approximately −6% **) when I fixed the tangential and thin-prism distortion coefficients to zero. Shouldn’t the intrinsic parameters be independent of the distortion model? (If anyone is interested in exploring this further, I can provide pictures and the code I used for reproduction.)

* f_x = 28484.52 pixels → camera sensor: 23.8 mm × 35.7 mm with 6336 × 9504 pixels. f_mm = (28484.52 px × 35.7 mm)/9504 px ≈ 107 mm. The reprojection error of 0.8437 indicates that the overall calibration seems to work well (ignoring that it’s a mean and there could potentially be large errors in some cases). Note that the conversion into mm I am using is: f_x_mm = (f_x_pixels × sensor_width)/number_of_pixels_along_the_width
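As a quick sanity check, here is the same conversion in Python, with the numbers copied from the footnote above:

```python
# Sanity check of the pixel-to-mm conversion from the footnote:
# f_x_mm = (f_x_pixels * sensor_width) / number_of_pixels_along_the_width
f_x_pixels = 28484.52
sensor_width_mm = 35.7          # long side of the 23.8 mm x 35.7 mm sensor
pixels_along_width = 9504

f_x_mm = f_x_pixels * sensor_width_mm / pixels_along_width
print(round(f_x_mm, 1))  # ~107 mm, vs. the 90 mm on the datasheet
```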
** Using the flags `cv.CALIB_ZERO_TANGENT_DIST + cv.CALIB_FIX_S1_S2_S3_S4`: I get f_x = 28484.52 pixels without the flags and f_x = 26746 pixels with the flags set.

I utilized the assistance of an AI language model, GPT, to spell-check and help me formulate parts of the text based on my own notes.