# GLSL shader to distort image according to OpenCV calibration

Hi,

I need to distort a 3D view to match a real camera view. I have calibrated the camera with OpenCV, and I need a shader to distort (not undistort!) the 3D view.

I have tried various GLSL shaders and tried to simulate the result, but it doesn’t seem to work.

1/ Is K1 > 0 a barrel distortion? I find various sources, some saying that I need K1 < 0 to get barrel distortion.
2/ What do I need to do to distort? (I need to do a texture fetch with a sampler into the image, so I need to find the correct distorted (u, v) to sample in my GLSL shader.)

According to this post, they use the equations from Geometric Image Transformations — OpenCV 2.4.13.7 documentation to undistort the image (undistort glsl pixel shader help. · Issue #1041 · microsoft/Azure-Kinect-Sensor-SDK · GitHub), but from what I understand those equations distort the image, not undistort it? Also, why do I need to divide by f to normalize? I was thinking that I needed to normalize by the width in pixels of the camera image.
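For reference, the division by f is what takes you from pixel coordinates to the normalized coordinates the distortion model is written in: x = (u − cx)/fx, not u/width. As far as I understand OpenCV’s convention, k1 < 0 pulls points toward the principal point (barrel) and k1 > 0 pushes them outward (pincushion). A minimal numeric sketch in pure Python (the intrinsics and k1 below are made-up illustrative values, k1-only radial model):

```python
# Sketch: why pixel coords are normalized by the focal length, not the image width.
# The OpenCV distortion model operates on normalized coordinates
#   x = (u - cx) / fx,  y = (v - cy) / fy
# then applies the radial factor (1 + k1*r^2 + ...), then maps back to pixels.
# Intrinsics below are hypothetical, for illustration only.

fx, fy = 800.0, 800.0   # focal lengths in pixels (hypothetical)
cx, cy = 320.0, 240.0   # principal point (hypothetical)
k1 = -0.2               # negative k1 shrinks radii -> barrel distortion

def distort_pixel(u, v):
    # pixel -> normalized (this is the "divide by f" step)
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    f_r = 1.0 + k1 * r2          # radial distortion factor (k1 term only)
    xd, yd = x * f_r, y * f_r    # ideal -> distorted, still normalized
    # normalized -> pixel
    return cx + fx * xd, cy + fy * yd

ud, vd = distort_pixel(620.0, 240.0)
# With k1 < 0 the point moves toward the principal point (barrel):
print(ud, vd)  # prints 611.5625 240.0
```

The principal point itself is a fixed point of the model (r² = 0 there), which is why distortion is always strongest at the image borders.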

Best regards

> The function actually builds the maps for the inverse mapping […]. That is, for each pixel (u, v) in the destination (corrected and rectified) image, the function computes the corresponding coordinates in the source image (that is, in the original image from camera).

pay close attention to this.

the equations describe the mapping of points (projected onto the image plane) from the straight, ideal picture to the distorted, actual picture.

remap + initUndistortRectifyMap “pulls” a pixel from the distorted actual picture into the straight ideal picture, by calculating where an ideal pixel comes from. it can use the same equations for the inverse operation because the direction of data flow is also inverted (pull, not push).
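to make the pull direction concrete, here is a toy remap in pure Python (nearest-neighbour, 1-D for brevity, no OpenCV): every destination pixel fetches from a source coordinate given by the map, exactly like `remap` or a GLSL texture lookup.

```python
# Toy illustration of remap's "pull" semantics: for every DESTINATION index i
# we look up map_x[i] and fetch the SOURCE value there. The map answers
# "where does this output pixel come from?", never "where does this input
# pixel go?".

def remap_1d(src, map_x):
    out = []
    for i in range(len(map_x)):
        j = int(round(map_x[i]))           # source coordinate for dest pixel i
        j = max(0, min(len(src) - 1, j))   # clamp, like CLAMP_TO_EDGE sampling
        out.append(src[j])
    return out

src = [10, 20, 30, 40, 50]
# A map that shifts the picture one pixel to the right: dest[i] pulls src[i-1].
map_x = [i - 1 for i in range(5)]
print(remap_1d(src, map_x))   # [10, 10, 20, 30, 40]
```

note that the map encodes the *inverse* of the visual shift: to move the image right, each output pixel looks one pixel to the left — the same inversion of data flow described above.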

if you need to distort an ideal picture, you use the same equations, but they’ll only tell you where to push a point into the distorted result. that is computationally inconvenient because texture lookups are pull, not push.

OpenCV’s `projectPoints`, since it maps ideal points to distorted points (points = push), can use the equations directly. `undistortPoints` however, which pushes points the other way, from distorted to undistorted, has to numerically invert the equations for every single point.

if you wanted to map points from ideal to distorted, you could use the equation directly (projectPoints). if you wanted to map pictures from ideal to distorted, you’d have to spend the effort to (numerically) invert the function for every pixel, or find a different approximation, perhaps using some mesh.
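a sketch of both directions in pure Python (k1-only model, normalized coordinates; the fixed-point iteration is roughly what `undistortPoints` does internally, as I understand it — not OpenCV’s exact code):

```python
# Forward model (ideal -> distorted): directly evaluable, like projectPoints.
def distort(x, y, k1=-0.2):
    r2 = x * x + y * y
    f = 1.0 + k1 * r2
    return x * f, y * f

# Inverse (distorted -> ideal) has no closed form, so iterate: guess the
# ideal point, evaluate the radial factor at the guess, and divide it out
# of the distorted point. Converges quickly for moderate distortion.
def undistort(xd, yd, k1=-0.2, iters=20):
    x, y = xd, yd                 # initial guess: no distortion
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2
        x, y = xd / f, yd / f
    return x, y

xd, yd = distort(0.3, 0.2)        # push an ideal point into the distorted image
xi, yi = undistort(xd, yd)        # numerically pull it back
print(xi, yi)                     # recovers ~ (0.3, 0.2)
```

this is exactly the cost asymmetry described above: the forward direction is one formula per point, the inverse is an iteration per point — which is why distorting a picture with pull-style texture lookups needs either this per-pixel inversion or a precomputed mesh/lookup texture.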
