How to calculate field of view with rational calibration model

Hi all. I am using calibrateCamera() to find the intrinsics matrix and the distortion parameters of a camera that I have. The camera has a wide field of view (~120 degrees), so I decided to use the rational model. The calibration results themselves look good: if I undistort the chessboard images that I took, the lines look straight and the correction overall looks as it should.

My problem is with trying to calculate an accurate field of view from the calibration results. The manufacturer datasheet says that the horizontal field of view is 122 degrees, but if I use the values in the intrinsics matrix with this formula

fov_horizontal = 2 * arctan(w / (2 * f_x))

to calculate it, the result I obtain is 90 degrees, which is way too small.
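For reference, this is the naive calculation in Python, a sketch using the focal length from the calibration posted further down and assuming a 1920×1080 image (which matches the principal point cx ≈ 964); it ignores the distortion coefficients entirely, which is why it lands near 90 degrees:

```python
import math

# fx from the calibration results posted below; image width is an
# assumption (1920 px, inferred from cx ~ 964 in the K matrix).
fx = 957.84665854967318
w = 1920

# Naive pinhole FoV -- no distortion taken into account.
hfov = math.degrees(2 * math.atan(w / (2 * fx)))
print(hfov)  # ~90.1 degrees, far below the datasheet's 122
```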

When I tried using a distortion model with fewer coefficients, the intrinsics values changed, so I assume that I should take the distortion coefficients into account when calculating the field of view, but how exactly? Should I use the intrinsics matrix returned by getOptimalNewCameraMatrix() instead? And if so, which value of the parameter alpha should I use?

calculating with just the projection matrix is valid near the optical center. the distortion function for any parameters approaches identity there.

the distortion function for your lens is probably severe.

the further you move out, the more severe distortion will be.

you’ll have to incorporate lens distortion into your FoV calculations.
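As a sketch of why that is: the radial part of OpenCV's rational model scales a normalized point by (1 + k1·r² + k2·r⁴ + k3·r⁶) / (1 + k4·r² + k5·r⁴ + k6·r⁶), which tends to 1 (identity) as r → 0. Plugging in rounded coefficients from the calibration posted further down (and ignoring the small tangential terms p1, p2):

```python
def radial_factor(r2, k1, k2, k3, k4, k5, k6):
    """Radial scaling of the rational distortion model at squared radius r2."""
    num = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    den = 1 + k4 * r2 + k5 * r2**2 + k6 * r2**3
    return num / den

# Rounded coefficients from the calibration below;
# OpenCV's D is ordered [k1, k2, p1, p2, k3, k4, k5, k6, ...].
k = dict(k1=6.468428, k2=1.965838, k3=0.007100,
         k4=6.854304, k5=4.224180, k6=0.253160)

for r in (0.0, 0.5, 1.0, 2.0):
    print(r, radial_factor(r * r, **k))
# exactly 1 at r = 0 (identity near the optical center), and well
# below 1 further out: the lens squeezes the periphery inward
```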

Thank you! That’s what I thought. I see the formulas used for undistortion here in initUndistortRectifyMap(), but I don’t know how to apply them to find the field of view angle.

the lens distortion equation maps a geometrically true point/ray to a point on the image plane as the lens distorts and images it.

since you’re interested in true geometry, you’d need to calculate the inverse of the lens distortion equation. that’s done by undistortPoints.

pick a point on the picture that looks like it’s on the edge of what can be imaged.

undistort that point with undistortPoints. you’ll get a vector in 3D space.

post all the numbers you have and a picture from that camera. I might be inclined to play with this.

I see what you mean: for the horizontal FoV I could take the first and last pixel of the middle row of the image, convert them to vectors, then measure the angle between them. I’ll try it out.
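That plan can be sketched with plain NumPy, assuming you already have the two undistorted normalized coordinates (the output of undistortPoints); the input values here are placeholders, not from this calibration:

```python
import numpy as np

def angle_between_rays(p1, p2):
    """Angle (degrees) between the 3D rays through two undistorted
    normalized image points (x, y), i.e. the rays (x, y, 1)."""
    v1 = np.array([p1[0], p1[1], 1.0])
    v2 = np.array([p2[0], p2[1], 1.0])
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Placeholder coordinates for the two ends of the middle row --
# substitute the actual undistortPoints output here.
print(angle_between_rays((-1.0, 0.0), (1.0, 0.0)))  # 90.0
```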

Thanks a lot for offering to help! This is one of the pictures that I took for calibration

And these are the calibration results in yml format

%YAML:1.0
---
K: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 9.5784665854967318e+02, 0., 9.6444391828122457e+02, 0.,
       9.5759456146620630e+02, 5.4244663638000429e+02, 0., 0., 1. ]
D: !!opencv-matrix
   rows: 14
   cols: 1
   dt: d
   data: [ 6.4684278331628979e+00, 1.9658375502801708e+00,
       1.9861598885307213e-05, -1.6360659821624597e-05,
       7.0999503936669253e-03, 6.8543043638699608e+00,
       4.2241801262496921e+00, 2.5316025125388103e-01, 0., 0., 0., 0.,
       0., 0. ]

the calibration may be insufficient in the corners. that’s the most difficult area to get right.

sanity check: the optical axis ought to come out as a vector straight ahead, i.e. (0,0,1). since undistortPoints returns points on the z=1 image plane, that shows up as (0,0). I take cx,cy from your K.

>>> cv.undistortPoints(np.float32([[cx,cy]]).reshape((-1, 1, 2)), K, dc)
array([[[-0.,  0.]]], dtype=float32)

checks out.

now, I think it should be correct to take the extreme corners of the image. I’m doing some off-by-half stuff here because a point describes the center of a pixel, and I’m interested in the corner of those pixels.

>>> cv.undistortPoints(np.float32([[cx,cy], [-0.5, cy], [w+0.5,cy], [cx,-0.5], [cx,h+0.5], [-0.5,-0.5], [w+0.5, h+0.5], [-0.5,h+0.5], [w+0.5,-0.5]]).reshape((-1, 1, 2)), K, dc)
array([[[-0.     ,  0.     ]],

       [[-1.70297, -0.00009]],

       [[ 1.66747, -0.00009]],

       [[ 0.00001, -0.64761]],

       [[ 0.00001,  0.64002]],

       [[-2.14068, -1.20515]],

       [[ 2.08303,  1.1723 ]],

       [[-2.13021,  1.188  ]],

       [[ 2.09318, -1.18915]]], dtype=float32)

now I’m interested in the angle between those vectors and the optical axis, and that’s the arctan of the hypotenuse of each.

>>> np.arctan(np.hypot(*vecs[5].ravel())); np.arctan(np.hypot(*vecs[6].ravel()))
1.1842138
1.1745584

that’s radians, and each is ~half the DFoV. similar considerations give the HFoV and VFoV.

  • DFoV: 135.1 degrees
  • HFoV: 118.6 degrees
  • VFoV: 65.5 degrees

does that sound reasonable?
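For anyone following along, those three numbers can be reproduced from the undistorted coordinates printed above with plain NumPy (the coordinates are copied from the undistortPoints output, so this is just the arctan-of-hypot step spelled out):

```python
import numpy as np

def half_angle(x, y):
    """Angle (radians) between the ray (x, y, 1) and the optical axis."""
    return np.arctan(np.hypot(x, y))

# Undistorted normalized coordinates of the edge midpoints and corners,
# copied from the undistortPoints output above.
left, right = (-1.70297, -0.00009), (1.66747, -0.00009)
top, bottom = (0.00001, -0.64761), (0.00001, 0.64002)
tl, br = (-2.14068, -1.20515), (2.08303, 1.1723)

hfov = np.degrees(half_angle(*left) + half_angle(*right))
vfov = np.degrees(half_angle(*top) + half_angle(*bottom))
dfov = np.degrees(half_angle(*tl) + half_angle(*br))
print(f"HFoV {hfov:.1f}  VFoV {vfov:.1f}  DFoV {dfov:.1f}")
```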


Sounds very reasonable, the horizontal field of view is spot on!
Thanks a lot for your help, I learned a lot in the process!