Understanding the functions projectPoints() and undistortPoints()

While trying to check whether I was using both `projectPoints()` and `undistortPoints()` correctly, I obtained results that I find hard to understand.

Having defined the following intrinsic parameters:

```python
# calibration matrix
K = np.array([
    [500.0, 0.0, 300.0],
    [0.0, 500.0, 250.0],
    [0.0, 0.0, 1.0]
])

# distortion coefficients (k1, k2, p1, p2, k3)
distCoeffs = np.array([1.5, -0.95, -0.005, 0.0025, 1.16])
```
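For context, these coefficients enter OpenCV's radial/tangential (Brown–Conrady) distortion model, which acts on normalized coordinates. A minimal numpy sketch of the forward mapping (my own helper written from the documented equations, not an OpenCV function):

```python
import numpy as np

def distort(xn, yn, k1, k2, p1, p2, k3):
    """Apply the 5-coefficient radial/tangential model to normalized coords."""
    r2 = xn**2 + yn**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn**2)
    yd = yn * radial + p1 * (r2 + 2 * yn**2) + 2 * p2 * xn * yn
    return xd, yd

# with all coefficients zero the mapping is the identity
print(distort(0.1, 0.2, 0, 0, 0, 0, 0))  # (0.1, 0.2)
```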

and the following pointcloud in the camera reference (with all the points being in the same plane):

```python
# Coordinates of points in a plane, representing a pointcloud in camera reference (c)
# plane
H, W = 1, 2
X, Y = np.meshgrid(np.arange(-W, W, 0.2), np.arange(-H, H, 0.2))
X, Y = X.reshape(1, -1), Y.reshape(1, -1)
# add depth. Pointcloud of n points represented as a (3, n) array:
Z = 5
P_c = np.concatenate((X, Y, Z * np.ones_like(X)), axis=0)
```

I was expecting that the following process would yield the original pointcloud:

1. Projecting the points while accounting for the distortion, i.e. obtaining the distorted pixel coordinates:

```python
# project points, including lens distortion
U_dist, _ = cv2.projectPoints(P_c, np.zeros((3,)), np.zeros((3,)), K, distCoeffs)
# projections as a (2, n) array
U_dist = U_dist[:, 0].T
```
2. Undistorting them to get the normalized coordinates:

```python
# get normalized coordinates, in camera reference, as a (2, n) array
xn_u = cv2.undistortPoints(U_dist.T, K, distCoeffs, None, None)[:, 0].T
```
3. Multiplying the normalized coordinates by the depth of the plane in the camera reference to recover the original pointcloud:

```python
# add depth
P_c2 = Z * np.concatenate((xn_u, np.ones_like(X)))
```
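As a sanity check of the unproject-and-rescale logic in isolation (ignoring distortion entirely), the pinhole round trip can be verified in plain numpy; this sketch uses only `K` and its inverse, not the OpenCV calls:

```python
import numpy as np

K = np.array([[500.0, 0.0, 300.0],
              [0.0, 500.0, 250.0],
              [0.0, 0.0, 1.0]])

# a few points on the Z = 5 plane, as a (3, n) array
P_c = np.array([[-2.0, 0.0, 1.8],
                [-1.0, 0.0, 0.8],
                [5.0, 5.0, 5.0]])

# pinhole projection: u = K @ (P / Z), in homogeneous pixel coordinates
U = K @ (P_c / P_c[2])

# unproject to normalized coordinates and rescale by the known depth
xn = np.linalg.inv(K) @ U
P_c2 = 5 * xn

assert np.allclose(P_c, P_c2)  # recovers the original points
```

So with zero distortion the pipeline's step 3 is sound; any discrepancy must come from the distortion handling.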
Complete code:

```python
import numpy as np
import cv2

# calibration matrix
K = np.array([
    [500.0, 0.0, 300.0],
    [0.0, 500.0, 250.0],
    [0.0, 0.0, 1.0]
])

# distortion coefficients (k1, k2, p1, p2, k3)
distCoeffs = np.array([1.5, -0.95, -0.005, 0.0025, 1.16])

# Coordinates of points in a plane, representing a pointcloud in camera reference (c)
# plane
H, W = 1, 2
X, Y = np.meshgrid(np.arange(-W, W, 0.2), np.arange(-H, H, 0.2))
X, Y = X.reshape(1, -1), Y.reshape(1, -1)
# add depth. Pointcloud of n points represented as a (3, n) array:
Z = 5
P_c = np.concatenate((X, Y, Z * np.ones_like(X)), axis=0)

# ---------------------------------------------
# PROJECTION WITH DISTORTION
# project points, including lens distortion
U_dist, _ = cv2.projectPoints(P_c, np.zeros((3,)), np.zeros((3,)), K, distCoeffs)
# projections as a (2, n) array
U_dist = U_dist[:, 0].T

# UNPROJECTION accounting for distortion
# get normalized coordinates, in camera reference, as a (2, n) array
xn_u = cv2.undistortPoints(U_dist.T, K, distCoeffs, None, None)[:, 0].T
P_c2 = Z * np.concatenate((xn_u, np.ones_like(X)))

# check equality (raises AssertionError)
assert np.allclose(P_c, P_c2), f'max difference: {np.abs(P_c - P_c2).max()}'
```

However, this is not the case: the resulting pointcloud is significantly different from the original one.

I feel this may be due to a misunderstanding in the use of the previous functions.

Any help in understanding where I'm going wrong is highly appreciated.

EDIT

After some more experimentation, I believe the issue lies with `undistortPoints()` rather than with `projectPoints()`. The latter is deterministic, while the former needs to solve a non-linear optimization problem. Empirically, increasing the distortion makes `undistortPoints()` give worse results; at lower distortion levels it recovers the points correctly.
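To illustrate this (under the assumption that `undistortPoints()` internally uses the classic fixed-point compensation scheme; the iteration count and scheme below are my assumptions, not verified against OpenCV's source), a plain-numpy re-implementation of that iteration shows the round-trip error growing with the distortion strength:

```python
import numpy as np

def distort(x, y, k1, k2, p1, p2, k3):
    # 5-coefficient radial/tangential model on normalized coordinates
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    yd = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return xd, yd

def undistort_iter(xd, yd, k1, k2, p1, p2, k3, n_iter=5):
    # naive fixed-point compensation (a sketch of the classic scheme;
    # n_iter=5 is an assumed iteration budget)
    x, y = xd, yd
    for _ in range(n_iter):
        r2 = x**2 + y**2
        radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        dx = 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
        dy = p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return x, y

x, y = 0.36, 0.16  # a normalized point from the grid above (at Z = 5)
for scale in (0.1, 1.0):
    k1, k2, p1, p2, k3 = scale * np.array([1.5, -0.95, -0.005, 0.0025, 1.16])
    xd, yd = distort(x, y, k1, k2, p1, p2, k3)
    xu, yu = undistort_iter(xd, yd, k1, k2, p1, p2, k3)
    print(f'scale {scale}: residual {abs(xu - x) + abs(yu - y):.2e}')
```

With the coefficients scaled down by 10x the iteration converges to machine-level residuals, while with the full coefficients it still oscillates after a few iterations, consistent with the error I observe.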
