Hi,
I’m trying to correct image distortion (like barrel distortion) with OpenCV.
My inputs are:
- a calibration image: a view of a calibration target made of round dark spots regularly spaced on a clear background; the target is centered on the camera and perpendicular to the camera's optical axis;
- an image to be corrected: on this image the object is flat and rectangular, and it is positioned roughly like the target (centered and perpendicular to the camera axis).
AFAIK, the calibration functions in OpenCV are designed for a ‘true’ 3D calibration, requiring more than one calibration image and returning:
- the intrinsic data (camera optical parameters),
- the extrinsic data (3D position/orientation of the camera in the calibration reference frame).
For my case (distortion correction only) I think I just need the intrinsic data, and I wonder if I can get them from a single calibration image.
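For context, the intrinsic data boil down to the camera matrix (fx, fy, cx, cy) plus the distortion coefficients. A minimal sketch of the ideal (distortion-free) pinhole projection, just to fix the notation (the function and struct names here are made up for illustration):

```cpp
#include <cassert>
#include <cmath>

// Ideal (distortion-free) pinhole projection with intrinsics fx, fy, cx, cy.
// A 3D point (X, Y, Z) in camera coordinates maps to pixel (u, v).
struct Pixel { double u, v; };

Pixel project(double X, double Y, double Z,
              double fx, double fy, double cx, double cy)
{
    // Normalized image coordinates (perspective division)
    double x = X / Z;
    double y = Y / Z;
    // Pixel coordinates via the camera matrix
    return { fx * x + cx, fy * y + cy };
}
```

A point on the optical axis, e.g. (0, 0, 1), lands exactly on the principal point (cx, cy), which is why a misestimated principal point shifts the whole corrected image.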
What I’ve tried:
- find the calibration circles in my image with findCirclesGrid(): OK (this gives me the 2D image points),
- construct the 3D world points (for each point: x = horizontal point index, y = vertical point index, z = 0),
- calibrate the camera with calibrateCamera(),
- build a new camera matrix with getOptimalNewCameraMatrix(),
- initialize the x and y correction maps (to speed up future image corrections) with initUndistortRectifyMap(),
- correct the distortion with remap().
The result is an image where the object borders are straight, but the object appears warped: the top-left corner does not move in the corrected image while the 3 other corners do, so in the corrected image the object shape is not really rectangular.
I’ve also tried to build the correction maps manually (from the distortion parameters k1, k2, k3, k4, k5, k6, p1, p2, assuming I can get them by another method), but this gave the same result: the corrected image looks deformed.
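If it helps to compare, here is a sketch of the forward (undistorted → distorted) mapping I understand OpenCV to use on normalized coordinates: the rational radial model plus tangential terms. Any hand-built maps should implement this same model (Pt2 and distort are illustration names, not OpenCV API):

```cpp
#include <cassert>
#include <cmath>

// Forward lens-distortion model (rational radial + tangential terms),
// applied to normalized coordinates (x, y) = (X/Z, Y/Z).
struct Pt2 { double x, y; };

Pt2 distort(Pt2 p,
            double k1, double k2, double k3,   // radial, numerator
            double k4, double k5, double k6,   // radial, denominator
            double p1, double p2)              // tangential
{
    double r2 = p.x * p.x + p.y * p.y;
    double r4 = r2 * r2, r6 = r4 * r2;
    double radial = (1.0 + k1 * r2 + k2 * r4 + k3 * r6)
                  / (1.0 + k4 * r2 + k5 * r4 + k6 * r6);
    double xd = p.x * radial + 2.0 * p1 * p.x * p.y + p2 * (r2 + 2.0 * p.x * p.x);
    double yd = p.y * radial + p1 * (r2 + 2.0 * p.y * p.y) + 2.0 * p2 * p.x * p.y;
    return { xd, yd };
}
```

With all coefficients at zero the mapping is the identity; a positive k1 pushes points outward (barrel distortion appears when the model is inverted to undistort).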
Has anybody faced similar behavior?
Any solution?
For information, here is the code extract:
// Allocate blob detector
Ptr<FeatureDetector> blob_detector = SimpleBlobDetector::create(params);
// Search for calibration points
grid_ok = findCirclesGrid(calib_image, grid_size, image_points, CALIB_CB_SYMMETRIC_GRID, blob_detector);
if (!grid_ok)
return(false);
// Draw points
//drawChessboardCorners(image, grid_size, image_grid_points, grid_ok);
// Build world points
pt_3d.z = 0;
for (ii = 0; ii < nb_points_y; ii++) // Row
{
    // Set point (y)
    pt_3d.y = ii;
    for (jj = 0; jj < nb_points_x; jj++) // Col
    {
        // Set point (x)
        pt_3d.x = jj;
        // Add point
        world_points.push_back(pt_3d);
    }
}
// Put calibration data (points arrays) into calibration data structures (array of array)
image_grid_points.push_back(image_points);
world_grid_points.push_back(world_points);
// Calibrate camera
err = calibrateCamera(world_grid_points, image_grid_points, calib_image.size(), camera_matrix, dist_coeffs, rvecs, tvecs);
// Compute a refined camera matrix (alpha = 0: keep only valid pixels)
new_camera_matrix = getOptimalNewCameraMatrix(camera_matrix, dist_coeffs, calib_image.size(), 0, calib_image.size());
// Create distortion correction maps
initUndistortRectifyMap(camera_matrix, dist_coeffs, Mat(), new_camera_matrix, calib_image.size(), CV_16SC2, dist_corr_map_x, dist_corr_map_y);
// Remap image (replicate 'undefined' dest pixels)
remap(dist_image, corrected_image, dist_corr_map_x, dist_corr_map_y, INTER_NEAREST, BORDER_REPLICATE, 0);