Camera calibration field of view

Having calibration points close to the edges/corners of your image will improve accuracy in that area, but it shouldn’t reduce your FOV. To get a larger FOV in your undistorted image you might want to look into getOptimalNewCameraMatrix() - its result is passed to cv::undistort() or cv::initUndistortRectifyMap() as the newCameraMatrix parameter.
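
A minimal sketch of that workflow, assuming cameraMatrix and distCoeffs came from an earlier cv::calibrateCamera() call and `distorted` is your input image (those names are placeholders, not from your code):

```cpp
#include <opencv2/opencv.hpp>

// Undistort while keeping as much of the original FOV as possible.
// cameraMatrix / distCoeffs: output of cv::calibrateCamera() (assumed).
cv::Mat undistortKeepFov(const cv::Mat& distorted,
                         const cv::Mat& cameraMatrix,
                         const cv::Mat& distCoeffs)
{
    const cv::Size imageSize = distorted.size();
    const double alpha = 1.0;  // 1.0 keeps all source pixels, 0.0 crops to valid pixels

    cv::Mat newCameraMatrix = cv::getOptimalNewCameraMatrix(
        cameraMatrix, distCoeffs, imageSize, alpha);

    // Option A: one-shot undistort with the new camera matrix
    cv::Mat undistorted;
    cv::undistort(distorted, undistorted, cameraMatrix, distCoeffs, newCameraMatrix);

    // Option B: precompute the maps once and remap every frame (cheaper for video)
    cv::Mat map1, map2;
    cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
                                newCameraMatrix, imageSize, CV_16SC2, map1, map2);
    cv::remap(distorted, undistorted, map1, map2, cv::INTER_LINEAR);

    return undistorted;
}
```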

You mention that your lens is highly distorted. In my experience getOptimalNewCameraMatrix() doesn’t always behave well with high-distortion lenses, particularly if the input to calibration doesn’t extend into the corners. By varying the alpha parameter to getOptimalNewCameraMatrix() you control how much of the original image gets displayed in the output image - the trade-off is that not all pixels in your output image will have data to fill them, so those pixels will be black. For example, see this post with a full-size undistorted image:
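
One rough way to see that trade-off is to query getOptimalNewCameraMatrix() at a few alpha values and look at the returned valid-pixel ROI (this is just a sketch - cameraMatrix, distCoeffs and imageSize are assumed to come from your calibration):

```cpp
#include <iostream>
#include <opencv2/opencv.hpp>

// Print the valid-pixel ROI for a few alpha values so you can see how much of
// the output image will actually contain data (the rest will be black).
void compareAlphas(const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                   const cv::Size& imageSize)
{
    for (double alpha : {0.0, 0.5, 1.0}) {
        cv::Rect validRoi;
        cv::Mat newK = cv::getOptimalNewCameraMatrix(
            cameraMatrix, distCoeffs, imageSize, alpha, imageSize, &validRoi);
        std::cout << "alpha=" << alpha
                  << "  valid ROI: " << validRoi.width << "x" << validRoi.height
                  << " at (" << validRoi.x << "," << validRoi.y << ")\n";
        // With a high-distortion lens, sanity-check the result at each alpha; the
        // new matrix can behave badly if calibration points never reached the corners.
    }
}
```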

One thing you might try, if you haven’t already, is using a ChArUco (chessboard + ArUco markers) calibration target, since it can handle images where the full target isn’t visible. That makes it much easier to get points that extend into the corners of the image.
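
If you go that route, here is a hedged sketch of the detection step using the cv::aruco::CharucoDetector API (assumes OpenCV 4.7 or newer; the 5x7 board size and 4 cm / 2 cm square/marker lengths are placeholder values - use your own target's dimensions):

```cpp
#include <vector>
#include <opencv2/opencv.hpp>

// Detect ChArUco corners in one (grayscale) calibration image.
void detectCharuco(const cv::Mat& gray)
{
    cv::aruco::Dictionary dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_5X5_100);
    cv::aruco::CharucoBoard board(cv::Size(5, 7), 0.04f, 0.02f, dict);
    cv::aruco::CharucoDetector detector(board);

    std::vector<cv::Point2f> charucoCorners;
    std::vector<int> charucoIds;
    detector.detectBoard(gray, charucoCorners, charucoIds);

    // Even a partially visible board yields usable corners, which is what lets
    // you collect calibration points out near the image corners.
    std::vector<cv::Point3f> objPoints;
    std::vector<cv::Point2f> imgPoints;
    board.matchImagePoints(charucoCorners, charucoIds, objPoints, imgPoints);
    // Accumulate objPoints/imgPoints per image, then pass them to cv::calibrateCamera().
}
```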
