Hi,
I am developing a surround view system in which the OCamCalib toolbox is used to undistort the fisheye camera images. However, when I create the xmap and ymap from the toolbox's calibration result and remap the original image to the defisheyed image, the lines appear zigzag or wavy in the scaled-down image (see the images below).

The image at the top is the original fisheye image.
The image in the middle is the scaled-up defisheyed image; the lines are fine.
The image at the bottom is the scaled-down defisheyed image; the lines become zigzag and wavy.
Could you shed some light on a solution to this problem? Thank you very much in advance!

Sorry for the delay. I was not allowed to post more than one picture, and since I was off duty yesterday I postponed the edit; today I have merged the images into one.

Thank you for replying! Yesterday I came up with a similar idea, but it did not have the expected effect; maybe I tried it the wrong way. Since the code is supposed to be open source, I will put it below. The following function creates the xmap and ymap used to undistort the fisheye image at a given scale. The model fields (model.img_size, model.center, model.invpol, etc.) are read from the calibration result file produced by the OCamCalib toolbox by Prof. Davide Scaramuzza. It seems that somewhere in this process the interpolation is not appropriate, so some straight lines come out folded.

void Defisheye::createLUT(Mat &mapx, Mat &mapy, float sf)
{
    Point3d p3D; // X, Y, Z
    mapx.create(model.img_size.height, model.img_size.width, CV_32FC1);
    mapy.create(model.img_size.height, model.img_size.width, CV_32FC1);

    float xc_norm = model.img_size.width / 2.0f;
    float yc_norm = model.img_size.height / 2.0f;
    p3D.z = -model.img_size.width / sf; // Z

    for (int col = 0; col < model.img_size.width; col++)
    {
        for (int row = 0; row < model.img_size.height; row++)
        {
            p3D.x = (row - yc_norm); // X
            p3D.y = (col - xc_norm); // Y

            // norm = sqrt(X^2 + Y^2)
            double norm = sqrt(p3D.x * p3D.x + p3D.y * p3D.y);
            if (norm == 0)
            {
                mapx.at<float>(row, col) = (float)model.center.y;
                mapy.at<float>(row, col) = (float)model.center.x;
                continue;
            }

            // t = atan(Z / sqrt(X^2 + Y^2))
            double t = atan(p3D.z / norm);

            // r = a0 + a1 * t + a2 * t^2 + a3 * t^3 + ...
            double t_pow = t;
            double r = model.invpol[0];
            for (size_t i = 1; i < model.invpol.size(); i++)
            {
                r += t_pow * model.invpol[i];
                t_pow *= t;
            }

            /* | u |       | X |
               |   | = r * |   | / sqrt(X^2 + Y^2) */
            double u = r * p3D.x / norm;
            double v = r * p3D.y / norm;

            /* | x |   | sx  shy |   | u |   | xc |
               |   | = |         | * |   | + |    |
               | y |   | shx  1  |   | v |   | yc | */
            mapy.at<float>(row, col) = (float)(model.affine(0, 0) * u + model.affine(0, 1) * v + model.center.x);
            mapx.at<float>(row, col) = (float)(model.affine(1, 0) * u + model.affine(1, 1) * v + model.center.y);
        }
    }
}

Also, the horizontal FOV of the camera is over 180 degrees, so the defisheyed image should in principle extend without limit to the left and right, if I am not misunderstanding it. Yesterday I scaled up mapx and mapy when creating them and resized them back at the end of the function, but that actually seemed to do nothing.