Distortion correction

I’m trying to correct image distortion (like barrel distortion) with OpenCV.

My inputs are:

  • a calibration image: a view of a calibration target made of round dark spots regularly spaced on a clear background; the target is centered on the camera and perpendicular to the camera optical axis,
  • an image to be corrected: the object in it is flat and rectangular, and it is roughly located and oriented like the target (centered and perpendicular to the camera axis).

AFAIK, the calibration functions in OpenCV are designed for a ‘true’ 3D calibration, requiring more than one calibration image, and giving as results:

  • the intrinsic data (camera optical parameters),
  • the extrinsic data (3D position/orientation of the camera, in the calibration reference).

For my case (correction of distortion) I think I just need the intrinsic data, and I wonder if I can get it from a single calibration image.

What I’ve tried:

  • find the calibration circles in my image with findCirclesGrid(): OK (this gives me the 2D image points),
  • construct the 3D world points (for each point: x = horizontal point index, y = vertical point index, z = 0),
  • calibrate the camera with calibrateCamera(),
  • build a new camera matrix with getOptimalNewCameraMatrix(),
  • initialize the x and y correction tables (to speed up future image corrections) with initUndistortRectifyMap(),
  • correct the distortion with remap().

The result is an image where the object borders are straight, but the object appears warped: the top-left corner doesn’t move in the corrected image while the three other corners do, so the object shape in the corrected image is not really rectangular.

I’ve also tried to build the correction tables manually (from distortion parameters k1, k2, k3, k4, k5, k6, p1, p2, assuming I can get them from another method), but this gave the same result: the corrected image looks deformed.
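For reference, the mapping that initUndistortRectifyMap() builds from those coefficients can be sketched in plain C++ (no OpenCV needed). This is the standard Brown–Conrady model from OpenCV’s documentation, restricted to k1–k3, p1, p2 for brevity; the intrinsics and coefficient values in the test are made-up placeholders, not values from the actual system:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Distortion coefficients in OpenCV order (k1, k2, p1, p2, k3).
struct DistCoeffs { double k1, k2, p1, p2, k3; };

// Simple pinhole intrinsics: focal lengths and principal point, in pixels.
struct Intrinsics { double fx, fy, cx, cy; };

// Build remap-style lookup tables: for each pixel (u, v) of the corrected
// image, compute the coordinates of the source sample in the distorted
// image. This is essentially what initUndistortRectifyMap() computes when
// no rectification is applied and the same camera matrix is reused.
void build_undistort_maps(const Intrinsics& K, const DistCoeffs& d,
                          int width, int height,
                          std::vector<float>& map_x, std::vector<float>& map_y)
{
    map_x.assign((size_t)width * height, 0.f);
    map_y.assign((size_t)width * height, 0.f);

    for (int v = 0; v < height; v++) {
        for (int u = 0; u < width; u++) {
            // Normalized (ideal, undistorted) coordinates
            double x = (u - K.cx) / K.fx;
            double y = (v - K.cy) / K.fy;

            double r2 = x * x + y * y;
            double radial = 1.0 + r2 * (d.k1 + r2 * (d.k2 + r2 * d.k3));

            // Radial + tangential distortion (Brown-Conrady model)
            double xd = x * radial + 2.0 * d.p1 * x * y + d.p2 * (r2 + 2.0 * x * x);
            double yd = y * radial + d.p1 * (r2 + 2.0 * y * y) + 2.0 * d.p2 * x * y;

            // Back to pixel coordinates in the distorted source image
            map_x[(size_t)v * width + u] = (float)(xd * K.fx + K.cx);
            map_y[(size_t)v * width + u] = (float)(yd * K.fy + K.cy);
        }
    }
}
```

Note that the displacement of every pixel depends on its distance from the principal point (cx, cy), so a badly estimated principal point deforms the whole corrected image.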

Has anybody faced a similar behavior?
Any solution?

For information, here is a code extract:

// Allocate blob detector
Ptr<FeatureDetector> blob_detector = SimpleBlobDetector::create(params);

// Search for calibration points
grid_ok = findCirclesGrid(calib_image, grid_size, image_points, CALIB_CB_SYMMETRIC_GRID, blob_detector);

if (!grid_ok)
    return;  // Grid not found

// Draw points
//drawChessboardCorners(calib_image, grid_size, image_points, grid_ok);

// Build world points
pt_3d.z = 0;
for (ii = 0; ii < nb_points_y; ii++)       // Row
{
    // Set point (y)
    pt_3d.y = ii;

    for (jj = 0; jj < nb_points_x; jj++)   // Col
    {
        // Set point (x)
        pt_3d.x = jj;

        // Add point
        world_points.push_back(pt_3d);
    }
}

// Put calibration data (points arrays) into calibration data structures (array of arrays)
world_grid_points.push_back(world_points);
image_grid_points.push_back(image_points);

// Calibrate camera
err = calibrateCamera(world_grid_points, image_grid_points, calib_image.size(), camera_matrix, dist_coeffs, rvecs, tvecs);

// Optimize camera matrix (alpha = 0: all pixels of the corrected image are valid)
new_camera_matrix = getOptimalNewCameraMatrix(camera_matrix, dist_coeffs, calib_image.size(), 0, calib_image.size());

// Create distortion correction maps
initUndistortRectifyMap(camera_matrix, dist_coeffs, Mat(), new_camera_matrix, calib_image.size(), CV_16SC2, dist_corr_map_x, dist_corr_map_y);

// Remap image (replicate 'undefined' dest pixels)
remap(dist_image, corrected_image, dist_corr_map_x, dist_corr_map_y, INTER_NEAREST, BORDER_REPLICATE, 0);

that’s generally a bad idea, unless you feed that orientation into the calculations (I don’t know off-hand if OpenCV even allows that).

you need the pattern to be sitting obliquely. the picture must show foreshortening/perspective.

OpenCV’s intrinsic calibration gives you poses, not for the camera, but for the boards you view. you can ignore those. they are secondary.

if you’re planning to estimate distortion yourself, that’s a lot of extra work. I’d strongly recommend just using the existing calibration routine.

Hi Crackwitz

Thanks for the reply.

The constraint of using a single calibration image, with a perpendicular viewing angle, is linked to the project (existing industrial system).

As for the determination of the distortion coeffs (k1, k2, etc.), that is probably possible.

In fact the software already includes a calibration function.
It’s written in LabVIEW, and the LabVIEW calibration function (more or less equivalent to OpenCV’s findCirclesGrid() + calibrateCamera()) gives these coeffs.

But then, the LabVIEW function that uses these data to undistort the image is really slow (10 times slower than the equivalent OpenCV function).

That’s why I’d like to use, at least, the OpenCV functions initUndistortRectifyMap() + remap() to have an efficient (fast) distortion correction.

Hope it’s clear.


Can you use Labview to calibrate the distortion and then use the results in OpenCV? Maybe you can just “drop in” the distortion results to OpenCV and use them directly, or maybe you will have to convert somehow…but I would think, if nothing else, you could create your own distortion maps based on the Labview-generated results.
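If the LabVIEW calibration reports fx, fy, cx, cy and the distortion coefficients, the main thing to get right when dropping them into OpenCV is the layout: the camera matrix is row-major [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], and distCoeffs is ordered (k1, k2, p1, p2, k3). A minimal sketch with made-up values (plain C++; in real code these arrays would be wrapped in cv::Mat and passed straight to initUndistortRectifyMap()):

```cpp
#include <array>
#include <cassert>

// Hypothetical values as reported by an external (e.g. LabVIEW) calibration.
const double fx = 1250.0, fy = 1250.0;         // focal lengths in pixels
const double cx = 640.0,  cy = 480.0;          // principal point
const double k1 = -0.21, k2 = 0.05, k3 = 0.0;  // radial coefficients
const double p1 = 0.0,   p2 = 0.0;             // tangential coefficients

// 3x3 camera matrix, row-major, as OpenCV expects it:
//   fx  0  cx
//    0 fy  cy
//    0  0   1
// In OpenCV: cv::Mat camera_matrix(3, 3, CV_64F, data.data());
std::array<double, 9> make_camera_matrix()
{
    return { fx,  0.0, cx,
             0.0, fy,  cy,
             0.0, 0.0, 1.0 };
}

// Distortion coefficients in OpenCV's order: (k1, k2, p1, p2, k3).
// Note the tangential terms sit between the radial ones -- a classic
// source of errors when importing coefficients from another tool.
std::array<double, 5> make_dist_coeffs()
{
    return { k1, k2, p1, p2, k3 };
}
```

With these two arrays in cv::Mat form, calibrateCamera() can be skipped entirely and initUndistortRectifyMap() + remap() used exactly as in the code extract above.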

Hi Steve,
Thanks for your reply.

Yes, that’s exactly what I tried to do in my second attempt: build the correction tables (with initUndistortRectifyMap()) assuming I know the camera intrinsic coeffs.

For this I followed the code found in “math - correcting fisheye distortion programmatically - Stack Overflow”.
I used the code posted in the second answer (in Python, which I translated to C++).
But the result was not as expected: the resulting image looked warped: pixels near the top-left corner stay unchanged, while pixels near the opposite corner move ‘a lot’…
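One hypothesis consistent with that symptom: in a pure radial model, a pixel sitting exactly at the distortion center (cx, cy) never moves, and displacement grows with distance from that center. So “top-left fixed, opposite corner moves a lot” is what you would see if the distortion center used in the maps ended up at the image origin instead of near the image middle (a single fronto-parallel view constrains the principal point poorly). A stand-alone numeric check of that geometry, with made-up focal length and k1:

```cpp
#include <cassert>
#include <cmath>

// Pixel displacement magnitude under a pure-k1 radial model, as a function
// of the pixel's distance from the distortion center (cx, cy). Coordinates
// are normalized by the focal length f, following the pinhole convention.
double radial_shift(double u, double v, double cx, double cy,
                    double f, double k1)
{
    double x = (u - cx) / f;
    double y = (v - cy) / f;
    double r2 = x * x + y * y;
    // |distorted - ideal| radius, converted back to pixels
    return std::sqrt(r2) * std::fabs(k1) * r2 * f;
}
```

If your corrected image shows this pattern, checking where (cx, cy) landed in the camera matrix (and in any hand-built maps) is a cheap first diagnostic.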

That’s why I opened this thread on the forum.

Thanks for any suggestion.