Unable to solve perspective distortion

Dear OpenCV enjoyers,

I am trying to solve a perspective distortion problem with my acA1300-30um camera (Datasheet).

I followed the calibration steps (Calibration tutorial), but I am still unable to correct the perspective distortion in my images.

I took 14 pictures of a 5x5 chessboard pattern from various angles and distances. Here I am, for example, attaching one of them (limited to one because I am a new member of this forum):

After obtaining the camera matrix and distortion coefficients, the undistorted images are almost the same as the distorted ones:

Do you have any suggestions on what I should do differently, or do you see any significant problem? I used 2 methods to undistort my images, but with no proper results.

Here is the Python code that I used:

import cv2
import numpy as np
import os

CHESSBOARD_SIZE = (5, 5)
CALIBRATION_FOLDER = 'CalibImages'

object_points = []
image_points = []

for filename in os.listdir(CALIBRATION_FOLDER):
    image_path = os.path.join(CALIBRATION_FOLDER, filename)
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    if image is None or image.size == 0 or image.shape[0] == 0 or image.shape[1] == 0:
        print(f"Error: Unable to read or invalid image file {image_path}")
        continue

    ret, corners = cv2.findChessboardCorners(image, CHESSBOARD_SIZE)

    if ret:
        print(f"Corners found in image {image_path}")
        objp = np.zeros((CHESSBOARD_SIZE[0] * CHESSBOARD_SIZE[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:CHESSBOARD_SIZE[0], 0:CHESSBOARD_SIZE[1]].T.reshape(-1, 2)
        object_points.append(objp)
        # refine the detected corners to sub-pixel accuracy before storing them
        corners = cv2.cornerSubPix(image, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        image_points.append(corners.reshape(-1, 2))

    else:
        print(f"No corners found in image {image_path}")

if len(image_points) == 0:
    print("No images with corners found. Calibration cannot proceed.")
else:
    # note: image.shape[::-1] is the (width, height) of the last image read
    retval, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image.shape[::-1], None, None)

    np.save('camera_matrix.npy', camera_matrix)
    np.save('dist_coeffs.npy', dist_coeffs)

    img = cv2.imread('test12.bmp')
    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
    cv2.imwrite('undistorted_image.jpg', undistorted)

    height, width = img.shape[:2]
    mapx, mapy = cv2.initUndistortRectifyMap(camera_matrix, dist_coeffs, None, camera_matrix, (width, height), cv2.CV_32FC1)
    undistorted_image = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)
    cv2.imwrite('undistorted_imageMAP.jpg', undistorted_image)

your image is simply unusable for calibration

  • it’s not even sharp
  • 5x5 does not work (90° invariance), it needs one odd and one even side (like 9x6)
  • board must be absolutely flat, yours makes visible waves.
    laminate it to something stiff, like a glass or metal pane
  • it needs an (at least) 1-square-wide ‘quiet zone’ white border (and of course no other ‘accidental’ squares, like underneath or shining through from the back)

all in all, sloppiness does not pay off here…

calibrateCamera() returns an RMS reprojection error (in pixels). if it’s larger than 0.5, go back & take better images.

Berak, thank you for your reply.

  • The problem is that I am using an infrared camera. This is probably the best picture I can get. I was only able to take this picture because of the long exposure time.
  • What do you mean by “doesn’t work for 90° invariance”?
  • Understood
  • Why is that? In this tutorial they used borders similar to mine. Some of their squares are not even whole.

The “error” you are referring to: calibrateCamera() returns 0.303 in my case. I will try another calibration with all the suggestions and see how it works.

the ‘inner corners’ of the board end up in the same place, so the algorithm cannot distinguish it from a 90°, 180°, or 270° rotated one

some reading:

“90° invariance”: a more descriptive term might be (rotational) symmetry. if the board is identical to its 90 degree rotated self (number of rows equal to number of columns), then the algorithm can’t tell uniquely how to assign image points to model points. even if it’s 180 degree rotationally symmetric, that can cause trouble.

your board appears to be a 5x5 type (number of “inner” corners, or saddle points). if you had a 5x7, that would still be 180 degree symmetric. go for 5x6 or 5x8, which will appear without rotational symmetry.
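the parity rule can be sanity-checked directly: build the square coloring of a board and compare it with its 180-degree-rotated self. a numpy sketch (the helper is mine, not an OpenCV function); a board with r x c inner corners has (r+1) x (c+1) squares:

```python
import numpy as np

def is_180_symmetric(inner_corners):
    """True if a checkerboard with the given (rows, cols) of inner
    corners looks identical after a 180-degree rotation."""
    rows, cols = inner_corners[0] + 1, inner_corners[1] + 1  # count of squares
    # checkerboard coloring: square (i, j) is dark iff (i + j) is odd
    board = (np.add.outer(np.arange(rows), np.arange(cols)) % 2).astype(bool)
    return bool(np.array_equal(board, np.rot90(board, 2)))
```

is_180_symmetric((5, 5)) and is_180_symmetric((5, 7)) come out True (ambiguous), while is_180_symmetric((5, 6)) comes out False.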

the border/quiet zone looks okay for a checkerboard. for these patterns, some border is required but it need not be that wide. 20-50% of a square size should be fine in most cases.

for monocular calibration on a checkerboard with square squares, that might be irrelevant, but it WILL bite you if you ever do stereo calibration.

bad estimates of lens distortion happen when corners of the view haven’t been covered thoroughly with points. in the corners of the view, distortion is worst, and the distortion model will “go wild” there most easily, so that’s where you need those points.
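one rough way to check this: pool the detected corners from all calibration images and count how many cells of a coarse grid over the frame contain at least one point. a numpy sketch (the function and the grid granularity are my own choices, not an OpenCV feature):

```python
import numpy as np

def coverage_fraction(all_image_points, image_size, grid=(4, 4)):
    """Fraction of grid cells over the frame that contain at least one
    detected corner, pooled across all calibration images.
    all_image_points: list of (N, 2) arrays of (x, y) pixel coordinates.
    image_size: (width, height) of the images."""
    w, h = image_size
    hits = np.zeros(grid, dtype=bool)
    for pts in all_image_points:
        pts = np.asarray(pts, dtype=np.float64).reshape(-1, 2)
        # map each point to a grid cell, clamping points on the border
        gx = np.clip((pts[:, 0] / w * grid[1]).astype(int), 0, grid[1] - 1)
        gy = np.clip((pts[:, 1] / h * grid[0]).astype(int), 0, grid[0] - 1)
        hits[gy, gx] = True
    return float(hits.mean())
```

a fraction well below 1.0, or empty cells near the frame corners, suggests you still need views with the board out there.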

IDK if OpenCV has an algorithm to recover the topology of regular grids from just the set of points. it appears capable of doing that on circles grids but not checkerboard grids. circles grids have their own downsides.

the usual “solution” is to use “charuco” boards, but that’s even worse. charuco boards use markers to give the points identity, so the algorithm doesn’t require the entire board to be in view, making it easier to obtain points in the corners of the view. however, charucos require more resolution. that’s a huge downside.

try again with the board, get its corners into the corners of your view.

always out-of-plane-angle the board. perspective foreshortening must be evident. in-plane rotation is mostly pointless.

the board should always cover a sizable fraction of the view. I’d recommend that it always intersects the center of the view.

If I understand what you are trying to achieve (remove perspective distortion from the image) and what you are doing (calibrating a camera and removing optical distortion), I think you need to change your initUndistortRectifyMap call to include a perspective warp, too.

Here is an example of some code I use to do something similar to what I believe you are trying to achieve.

cv::initUndistortRectifyMap(m_camMat, m_distCoeffs, cv::Mat(), 
                            perspective*m_camMat, cv::Size(outputWidth, outputHeight), CV_16SC2, 
                            warpMap1, warpMap2);

Where perspective is calculated from cv::getPerspectiveTransform()