I want to project 3D points onto the 2D image of a camera with known intrinsic and extrinsic parameters, and also compute visibility for each point based on the camera pose. The problem is that when I use distortion coefficients with a moving camera (e.g. the camera moves from left to right), 2D points that should leave the camera's viewport (e.g. a point exits on the right side of the image and should no longer be visible) come back through the viewport (e.g. the point jumps from the right side to the left side of the image as if mirrored, and is reported as visible).

First I transform the 3D points to the camera coordinate system:

```
cam_points_3d_homo = np.dot(cam_inv_transform, points_3d_homo)
```
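For context, `cam_inv_transform` is the inverted 4x4 camera pose. A minimal self-contained version of this step, with a made-up pose (the translation and points below are placeholders, not my real data), looks like:

```python
import numpy as np

# Hypothetical camera pose: a 4x4 camera-to-world transform (assumption).
cam_transform = np.eye(4)
cam_transform[:3, 3] = [2.0, 0.0, 0.0]  # camera sits 2 units along +X

# Invert it to get the world-to-camera transform used for projection.
cam_inv_transform = np.linalg.inv(cam_transform)

# Homogeneous world points, shape (4, N).
points_3d_homo = np.array([[2.0, 0.0, 5.0, 1.0],
                           [2.0, 1.0, 5.0, 1.0]]).T

# Points expressed in the camera frame, still homogeneous.
cam_points_3d_homo = cam_inv_transform @ points_3d_homo
```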

and then use `cv2.projectPoints()` to convert them to 2D coordinates on the camera image:

```
out, _ = cv2.projectPoints(cam_points_3d_homo[:3, :].T, np.zeros(3), np.zeros(3), intrinsic_mat, dist_coff)
```

Then I calculate visibility by comparing the result against the camera image resolution and checking the sign of the z coordinate:

```
out = out.reshape(-1, 2)
visible = np.all((0 <= out) & (out <= self._resolution), axis=1) & (cam_points_3d_homo[2, :] > 0)
```
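My suspicion is that the distortion polynomial is only meaningful near the calibrated image area. To illustrate what I'm seeing, here is a numpy-only sketch of just the radial term with a made-up barrel coefficient `k1` (an assumption, not my real calibration): far off-axis the polynomial turns over, pulling the point back toward, and even past, the image center.

```python
import numpy as np

# Radial part of the Brown distortion model, restricted to a point on
# the x-axis (so r^2 = x^2), with a hypothetical k1 coefficient.
def distort_x(x, k1=-0.2):
    return x * (1.0 + k1 * x * x)

# Near the optical axis the mapping is monotonic...
print(distort_x(0.5))   # 0.475
# ...but far outside the calibrated range the polynomial turns over:
print(distort_x(2.0))   # 0.4  -- a far-off-screen point lands back near the center
print(distort_x(3.0))   # -2.4 -- and can even flip sign ("mirrored")
```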

Funny enough, if I use `np.array([.0, .0, .0, .0, .0])` instead of `dist_coff`, the visibility calculation works fine.

One solution is to first compute visibility with zero distortion, and then apply the distortion coefficients only when computing the 2D points for the visible ones, but that seems a bit redundant.
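That workaround would look roughly like this; a numpy-only pinhole sketch standing in for `cv2.projectPoints`, with made-up intrinsics, points, and `k1` (all assumptions for illustration):

```python
import numpy as np

def project(points_cam, K, k1=None):
    # Pinhole projection of (3, N) camera-frame points, with an optional
    # single radial distortion term (simplified stand-in for projectPoints).
    x = points_cam[0] / points_cam[2]
    y = points_cam[1] / points_cam[2]
    if k1 is not None:
        r2 = x * x + y * y
        x, y = x * (1 + k1 * r2), y * (1 + k1 * r2)
    u = K[0, 0] * x + K[0, 2]
    v = K[1, 1] * y + K[1, 2]
    return np.stack([u, v], axis=1)

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
resolution = np.array([640, 480])

# Two camera-frame points: one in view, one far off to the right.
pts = np.array([[0.1, 3.0],
                [0.0, 0.0],
                [1.0, 1.0]])

# Step 1: visibility from the undistorted projection (monotonic, safe to test).
uv_lin = project(pts, K)
visible = np.all((0 <= uv_lin) & (uv_lin <= resolution), axis=1) & (pts[2] > 0)

# Step 2: apply distortion only to the points that passed the visibility test.
uv = project(pts[:, visible], K, k1=-0.2)
```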

Any ideas why this happens and how to fix it?