GLSL shader to distort image according to OpenCV calibration

Hi,

I need to distort a 3D view to match a real camera view. I have calibrated the camera with OpenCV and I need a shader to distort (not undistort!) the 3D view.

I have tried various GLSL shaders and tried to simulate the result, but it doesn't seem to work.

1/ Is k1 > 0 a barrel distortion? I find various sources, some saying that I need k1 < 0 to get barrel distortion.
2/ Is that what I need to do to distort? (I need to do a texture fetch with a sampler into the image, so I have to find the correct distorted u,v to display in my GLSL shader.)

According to this post (undistort glsl pixel shader help. · Issue #1041 · microsoft/Azure-Kinect-Sensor-SDK · GitHub), they use the equations from Geometric Image Transformations — OpenCV 2.4.13.7 documentation to undistort the image. From what I understand, isn't that actually distorting the image, not undistorting it? And why do I need to divide by f to normalize? I was thinking that I need to normalize by the width in pixels of the camera image.
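For reference, here is roughly what I ended up with, following that issue, as far as I understand it (the uniform names are mine; fx, fy, cx, cy are in pixels and k1, k2, k3, p1, p2 come straight from my calibration):

```glsl
#version 330 core
// For each output pixel: normalize with the intrinsics, apply the
// OpenCV distortion equations, then sample the source image there.
uniform sampler2D cameraImage;   // source image
uniform vec2 imageSize;          // width, height in pixels
uniform float fx, fy, cx, cy;    // intrinsics, in pixels
uniform float k1, k2, k3, p1, p2;

in vec2 vUV;                     // 0..1 texcoord of the output pixel
out vec4 fragColor;

void main()
{
    // pixel coordinates of the output fragment
    vec2 pix = vUV * imageSize;

    // the "divide by f" step: move from pixels to the normalized
    // image plane that the OpenCV equations are written in
    float x = (pix.x - cx) / fx;
    float y = (pix.y - cy) / fy;

    float r2 = x * x + y * y;
    float radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;

    // radial + tangential terms of the OpenCV model
    float xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
    float yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;

    // back to pixels, then to 0..1 to sample
    vec2 src = vec2(xd * fx + cx, yd * fy + cy) / imageSize;
    fragColor = texture(cameraImage, src);
}
```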

Best regards


> The function actually builds the maps for the inverse mapping […]. That is, for each pixel (u, v) in the destination (corrected and rectified) image, the function computes the corresponding coordinates in the source image (that is, in the original image from camera).

pay close attention to this.

the equations describe the mapping of points (projected onto the image plane) from the straight ideal picture to the distorted actual picture.

remap + initUndistortRectifyMap “pulls” a pixel from the distorted actual picture into the straight ideal picture, by calculating where an ideal pixel comes from. it uses the same equation for the inverse operation because the direction of data flow is also inverted (pull, not push).

if you need to distort an ideal picture, you use the same equations, but they’ll only tell you where to push a point into the distorted result. that is computationally inconvenient because texture lookups are pull, not push.

OpenCV’s projectPoints, since it maps ideal points to distorted points (points = push), can use this equation directly. undistortPoints, however, is also a push (it maps points), from distorted points to undistorted points, so it has to numerically invert the equation for every single point.

if you wanted to map points from ideal to distorted, you could use the equation directly (projectPoints). if you wanted to map pictures from ideal to distorted, you’d have to spend the effort to (numerically) invert the function for every pixel, or find a different approximation, perhaps using some mesh.
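if you do want the per-pixel inversion in the shader, here’s a rough glsl sketch. uniform names are made up (same meaning as in the opening post), and the loop is a handful of fixed-point iterations, which is roughly what undistortPoints does numerically:

```glsl
#version 330 core
// distort an ideal (pinhole) render so it matches the real camera.
// for every *distorted* output pixel, numerically invert the distortion
// to find where to pull from in the ideal render.
uniform sampler2D idealRender;   // undistorted 3D view, rendered with the same intrinsics
uniform vec2 imageSize;
uniform float fx, fy, cx, cy;
uniform float k1, k2, k3, p1, p2;

in vec2 vUV;
out vec4 fragColor;

void main()
{
    vec2 pix = vUV * imageSize;

    // normalized coordinates of the distorted output pixel
    float xd = (pix.x - cx) / fx;
    float yd = (pix.y - cy) / fy;

    // invert distort(): start at the distorted point and iterate
    float x = xd;
    float y = yd;
    for (int i = 0; i < 5; ++i)
    {
        float r2 = x * x + y * y;
        float radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;
        float dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
        float dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;
        x = (xd - dx) / radial;
        y = (yd - dy) / radial;
    }

    // back to pixels of the ideal render and sample there (pull)
    vec2 src = vec2(x * fx + cx, y * fy + cy) / imageSize;
    fragColor = texture(idealRender, src);
}
```

alternatively, bake exactly this into a float uv map (or a coarse mesh, as mentioned) once on the CPU, and have the shader do a single lookup into it.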

Hi,
thanks for your help.
I understand, but are you certain about which direction OpenCV works in?

On the page https://docs.opencv.org/master/d9/d0c/group__calib3d.htm they indicate that barrel distortion is when k1 < 0.
In some other documentation the sign convention for OpenCV barrel distortion is the reverse.

On this page, OpenCV: Camera calibration With OpenCV, it indicates x_distorted = x(1 + k1*r^2 + k2*r^4 + k3*r^6)

and in the Learning OpenCV book (O'Reilly), on page 646, you have
x_corrected = x(1 + k1*r^2 + k2*r^4 + k3*r^6)

I didn’t address that. or the ambiguity in what people consider a “pillow” (pincushion).

I’d say with k1 > 0 you get corners that are stretched out, with k1 < 0 you get corners pushed in (relative to a clean straight view). that’s what the equation tells me at 1:36 am.
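quick sanity check with just the radial term: take a corner point at normalized radius r = 1, so x_distorted = x*(1 + k1). with k1 = +0.1 the corner lands 10% further out (corners stretched), with k1 = -0.1 it lands 10% closer to the center (corners pushed in).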