Finding original pixel values from undistorted image (re-distorting)

I am using a stereo camera set-up with very wide-angle lenses (180 degrees). I suspect that the camera calibration is not optimal, because I am not getting very accurate results when trying to find the world coordinates of a point using the following method:

 points4d = cv2.triangulatePoints(left_proj_mtx, right_proj_mtx, points1u, points2u)
 points3d = (points4d[:3, :] / points4d[3, :]).T

where points1u is the centre location of an object in the left image and points2u is the centre location of the same object in the right image. If I have a point in my undistorted image, for example (1000 px, 1200 px), how can I easily compute that point's pixel location in my original (distorted) image? I want to remap a list of points so that I can estimate the bearing relative to the camera from the original points, since I am unable to extract bearing information from my undistorted images. I do not want to re-distort the whole image.

I don’t think there is a cv::distortPoints function for the normal camera models, but if you are using a fisheye model there appears to be a cv::fisheye::distortPoints() function. I haven’t used it, but maybe this is what you want?

For the regular camera models, a few ideas.

  1. You might be able to come up with a way to compute distortion parameters that encode the inverse distortion. If so, you could then just call cv::undistortPoints() on your undistorted points with this inverse model and get what you want.
  2. You could try to use cv::projectPoints (which does the 3D → 2D projection based on rvec, tvec, and the camera matrix, and then applies the distortion model to get the distorted position) to compute the distortion for you. Use zero vectors for rvec and tvec, and pass your undistorted points as 3D points in normalized camera coordinates, i.e. (x, y, 1) after multiplying the pixel coordinates by the inverse camera matrix (equivalently, subtract the principal point and set the Z coordinate to the focal length, provided fx = fy).
  3. You could borrow the code from the projectPoints function that applies the distortion. Here (x, y) are normalized, undistorted coordinates, and k maps to OpenCV's coefficient order (k1, k2, p1, p2, k3, k4, k5, k6, s1, s2, s3, s4):
        r2 = x*x + y*y;
        r4 = r2*r2;
        r6 = r4*r2;
        a1 = 2*x*y;
        a2 = r2 + 2*x*x;
        a3 = r2 + 2*y*y;
        cdist = 1 + k[0]*r2 + k[1]*r4 + k[4]*r6;
        icdist2 = 1./(1 + k[5]*r2 + k[6]*r4 + k[7]*r6);
        xd0 = x*cdist*icdist2 + k[2]*a1 + k[3]*a2 + k[8]*r2+k[9]*r4;
        yd0 = y*cdist*icdist2 + k[2]*a3 + k[3]*a1 + k[10]*r2+k[11]*r4;
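Idea 3 can be sketched in NumPy for a single point, assuming the common five-coefficient (k1, k2, p1, p2, k3) model with no rational or thin-prism terms; the camera matrix and coefficients below are made up for illustration:

```python
import numpy as np

def distort_pixel(pt, K, dist):
    """Map an undistorted pixel to its location in the original
    (distorted) image. dist = (k1, k2, p1, p2, k3)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    k1, k2, p1, p2, k3 = dist

    # pixel -> normalized camera coordinates
    x = (pt[0] - cx) / fx
    y = (pt[1] - cy) / fy

    r2 = x * x + y * y
    # radial term: 1 + k1*r^2 + k2*r^4 + k3*r^6
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    # tangential terms (a1, a2, a3 in the snippet above)
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y

    # normalized -> pixel coordinates in the distorted image
    return (fx * xd + cx, fy * yd + cy)

K = np.array([[800.0, 0, 640], [0, 800.0, 512], [0, 0, 1]])
dist = (-0.25, 0.07, 0.0, 0.0, 0.0)  # made-up barrel coefficients
print(distort_pixel((1000.0, 1200.0), K, dist))
```

With negative k1 (barrel distortion) the returned point lands closer to the principal point than the input, as expected.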

My use case is as follows: given a rectilinear image, apply a barrel distortion to it.
I have my camera calibrated and have the K and D matrices. I build a mesh grid, convert the coordinates to homogeneous form, multiply by the inverse of K, and then send them to cv2.distortPoints(). The output of this does not seem to be reasonable. How do I use this output to get the distorted image?
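One likely reason the output looks wrong: cv2.remap expects the inverse mapping, i.e. for each pixel of the distorted *output* image, the source location in the rectilinear input. distortPoints gives the forward mapping (rectilinear → distorted), so feeding it directly to remap warps the image the wrong way. A NumPy sketch that builds remap-compatible maps by iteratively inverting the five-coefficient model (the same fixed-point scheme cv2.undistortPoints uses internally); K, the coefficients, and the iteration count are made-up illustrative values:

```python
import numpy as np

def build_distortion_maps(K, dist, size, iters=5):
    """For every pixel of the distorted output image, find the source
    pixel in the rectilinear input by inverting the distortion
    polynomial. dist = (k1, k2, p1, p2, k3); size = (width, height).
    The returned maps can be fed straight to cv2.remap."""
    w, h = size
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    k1, k2, p1, p2, k3 = dist

    u, v = np.meshgrid(np.arange(w, dtype=np.float32),
                       np.arange(h, dtype=np.float32))
    # normalized coordinates of the distorted output grid
    xd = (u - cx) / fx
    yd = (v - cy) / fy

    # fixed-point iteration: start at the distorted position and
    # repeatedly divide out the radial/tangential terms
    x, y = xd.copy(), yd.copy()
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial

    # back to pixel coordinates in the rectilinear source image
    map_x = (fx * x + cx).astype(np.float32)
    map_y = (fy * y + cy).astype(np.float32)
    return map_x, map_y

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
dist = (-0.3, 0.1, 0.0, 0.0, 0.0)  # made-up barrel coefficients
map_x, map_y = build_distortion_maps(K, dist, (640, 480))
# distorted = cv2.remap(rectilinear, map_x, map_y, cv2.INTER_LINEAR)
```

Near the image corners the maps point outside the source image (barrel distortion pulls the periphery inward, so the corresponding rectilinear pixels lie beyond the border); cv2.remap fills those regions according to its borderMode argument.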