Stereo Calibration, focal length and depth

I have two identical stereo rigs with the same baseline. On one of them the lenses for both cameras were swapped to a longer focal length, and the rig was recalibrated. The 3D points generated by the rig with the longer focal lengths are further away than for the stereo rig with the shorter lenses. The focal length difference is 4 mm, and the points are more than 0.2 m further away at 2 m range. Did we mess up our calibration, or is this expected behavior due to rectification forcing everything into a pinhole camera model?

According to numerous diagrams of stereo vision, the distance is measured from the object to the focal point of the left camera, so it makes sense that a longer focal length would result in points that are further away. The scale of this increase in distance is what’s concerning.

The most obvious thought is that a 4 mm increase in focal length would place the points 4 mm further away, but instead we are getting over 20 cm of increased distance. Another way of thinking about it is that the distance to the object is computed in rectified space, which removes the lenses and turns the system into a pair of pinhole cameras. So a 4 mm focal length increase in the real-world system with lenses resulted in a 20 cm shift in rectified pinhole space. Or am I overthinking this?
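To put numbers on why a small focal-length change shows up as a large depth change: in the rectified pinhole model, depth comes from Z = f·B/d, with f in pixels, B the baseline, and d the disparity. For the same physical scene a longer lens also magnifies the disparity proportionally, so two correctly calibrated rigs should agree on Z; a ~10% depth error at 2 m suggests a ~10% mismatch between the calibrated focal length and the measured disparities, not a literal "4 mm" shift. A sketch with made-up numbers (the baseline and disparity below are assumptions, not the actual rig's values):

```python
# Pinhole stereo depth relation: Z = f * B / d.
# All numbers below are illustrative assumptions, not the poster's rig.
def depth(f_px, baseline_m, disparity_px):
    """Depth from focal length (pixels), baseline (m), and disparity (pixels)."""
    return f_px * baseline_m / disparity_px

B = 0.10   # assumed 10 cm baseline
d = 40.0   # assumed disparity for a point near 2 m

Z_short = depth(800.0, B, d)  # shorter lens, ~800 px focal length -> 2.0 m
Z_long = depth(880.0, B, d)   # focal length 10% larger, same disparity -> 2.2 m
print(Z_short, Z_long)
```

Depth is linear in the focal length (in pixels), so a 10% focal-length inconsistency produces exactly the ~0.2 m error at 2 m range described above.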

no, from the optical center of the camera. in the pinhole model, that’s the center of the aperture. with lens optics, that may be ahead or behind the physical center of the lens. with multiple lenses, it’s more complicated.

it doesn’t matter where the lens focal points are.

if your calibration results in values that aren’t true to reality, you messed up the calibration.

a “4mm increase” is meaningless because you didn’t say what it’s relative to.

I think you’re hypothesizing too much. concentrate on a good calibration. show what exactly you do if you want pointers.

What do you mean when you say “…the lenses for both cameras were swapped to a longer focal length and recalibrated.”

Specifically, what do you mean by “recalibrated”? Are you re-calibrating the intrinsics for the cameras (which are now different because of the different lenses) and then calibrating the relative pose? Are you calibrating the rig with stereoCalibrate() and having it compute everything (new intrinsics for both cameras and relative pose)? If so, is your input calibration data sufficient to do this (enough 3D points, not all coplanar, etc.)? Are you assuming the intrinsics didn’t change?

I’m not an expert on 3D vision, but my understanding is that in order to get true measurements of depth you have to have fully calibrated cameras (intrinsics & relative pose). With less calibrated setups (unknown intrinsics) you can get depth measurements up to an unknown scale (so relative depths “point A is twice as far away as point B” but no sense of units), and / or up to an unknown projection (where relative scale isn’t necessarily preserved, so you can only get things like “A is further away than B”, but not “A is twice as far away as B”). If these concepts aren’t familiar to you (calibration up to an unknown scale / unknown projection) it might be helpful to read up on this.
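To make the unknown-scale idea concrete, a small sketch using the pinhole depth relation Z = f·B/d (the baseline and disparities here are made-up numbers): if the focal length is off by some unknown factor, every depth is scaled by that same factor, so depth ratios survive but absolute units do not.

```python
# With an unknown scale on the focal length, Z = f * B / d scales every
# depth by the same factor: ratios of depths are preserved, units are not.
def depth(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

B = 0.10                 # assumed baseline
d_A, d_B = 50.0, 25.0    # assumed disparities of two points

for f in (800.0, 1000.0):            # two different (possibly wrong) focal lengths
    Z_A, Z_B = depth(f, B, d_A), depth(f, B, d_B)
    print(Z_A, Z_B, Z_B / Z_A)       # absolute depths change, ratio stays 2.0
```

With either focal length, "point B is twice as far away as point A" still holds, which is exactly the up-to-unknown-scale situation described above.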

Again, I’m not an expert and I might have described some things wrong here, but the sense I get is that you are trying to get actual depth measurements but don’t have a fully-calibrated rig. I believe what you want/need is to calibrate the intrinsics for both cameras of the stereo rig (independently) after putting the new lenses on. Use calibrateCamera() and a calibration target (I prefer the charuco pattern). After you have calibrated intrinsics, calibrate your rig and use this full calibration in your process… you should be able to get the same depth measurements from both rigs, but with different depth accuracy.

This web-based calculator might be helpful in giving insight into the accuracy (and FOV etc) you can expect from different lens (and baseline, sensor resolution etc) choices:

Thanks for the replies, guys. Yes, we were doing full calibration, getting intrinsics and extrinsics at the same time using stereoCalibrate(). After thinking about this some more, I realized that there are a lot of variables I have no control over. For example, I can’t say with 100% certainty that the stereo rig with the longer lenses was actually the same distance from the scene as the other one. I will probably need to redo this experiment at a later date when I can check everything myself.