projectPoints() point visibility with distortion

I want to project 3D points onto the 2D image of a camera with known intrinsic and extrinsic parameters. I also want to compute visibility for each point based on the camera pose. The problem is that when I use distortion coefficients with a moving camera (e.g. the camera moves from left to right), 2D points that should leave the camera's viewport (e.g. a point exits on the right side of the image and is no longer visible) come back through the viewport (e.g. the point jumps from the right side to the left side of the image as if mirrored, and is counted as visible).

First I transform the 3D points into the camera coordinate system with:

camera_points_3d_homo = np.dot(cam_inv_transform, points_3d_homo)

and then use projectPoints() to convert to 2D coordinates on the camera:

out, _ = cv2.projectPoints(camera_points_3d_homo[:3, :].T, np.array([.0, .0, .0]), np.array([.0, .0, .0]), intrinsic_mat, dist_coff)

Then I calculate visibility by comparing the result with the camera image resolution and checking the sign of the z-coordinate:

out = out.reshape(-1, 2)
visible = np.all((0 <= out) & (out <= self._resolution), axis=1) & (camera_points_3d_homo[2, :] >= 0)

Funnily enough, if I use np.array([.0, .0, .0, .0, .0]) instead of dist_coff, the visibility calculation works fine.

One solution is to first calculate visibility with no distortion, and then use the distortion coefficients only when computing the 2D positions of the points that are visible, but that seems a bit redundant (a sketch of this workaround is below).
Any ideas why this happens and how to fix it?
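
For reference, a minimal sketch of the two-pass workaround mentioned above, assuming the variable names from the snippets (cam_inv_transform, points_3d_homo, intrinsic_mat, dist_coff) and with resolution standing in for self._resolution:

import numpy as np
import cv2

# Transform world points (4xN homogeneous) into the camera frame.
camera_points_3d_homo = np.dot(cam_inv_transform, points_3d_homo)

# Pass 1: project with zero distortion and decide visibility on the
# undistorted (pure pinhole) image plane, plus a positive-depth check.
undist, _ = cv2.projectPoints(camera_points_3d_homo[:3, :].T, np.zeros(3), np.zeros(3), intrinsic_mat, np.zeros(5))
undist = undist.reshape(-1, 2)
visible = np.all((0 <= undist) & (undist <= resolution), axis=1) & (camera_points_3d_homo[2, :] > 0)

# Pass 2: re-project only the visible points, this time with the real
# distortion coefficients, to get their distorted pixel positions.
out, _ = cv2.projectPoints(camera_points_3d_homo[:3, visible].T, np.zeros(3), np.zeros(3), intrinsic_mat, dist_coff)
out = out.reshape(-1, 2)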

You could check whether the Z coordinates become negative. You'd have to do that check (at some point) after transforming into the camera frame. If projectPoints is given a non-zero rvec, the points' Z coordinate in world space has no obvious relation to their Z coordinate in camera space. I don't know whether projectPoints returns the Z coordinates for you to check.
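
If you do pass a non-zero rvec/tvec to projectPoints, you can reproduce the transform it applies and inspect the camera-space Z yourself; a rough sketch, with rvec, tvec and points_3d (Nx3 world points) as assumed names:

import numpy as np
import cv2

# Apply the same rigid transform that projectPoints would apply internally.
R, _ = cv2.Rodrigues(rvec)                              # 3x3 rotation matrix from rvec
points_cam = (R @ points_3d.T + tvec.reshape(3, 1)).T   # Nx3 points in the camera frame

in_front = points_cam[:, 2] > 0                         # True for points ahead of the camera

(In the question's setup the transform is already applied before projectPoints, so camera_points_3d_homo[2, :] can be checked directly, as the question already does.)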

I'm not sure I understand exactly what is happening, but I would suggest calling cv::undistort on an image with your calibrated intrinsics and looking at the results. In some cases the distortion values you get from calibrateCamera result in a distortion function that is non-monotonic, and the image (or projected points) can "wrap around", so this could be what you are seeing.
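
That sanity check is quick to do; a sketch, assuming img is a calibration image and intrinsic_mat/dist_coff are the values from the question:

import cv2

# Undistort an image with the calibrated intrinsics; fold-over or mirroring
# near the borders is a hint that the distortion model is non-monotonic there.
undistorted = cv2.undistort(img, intrinsic_mat, dist_coff)
cv2.imwrite("undistort_check.png", undistorted)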

Maybe.

I have also encountered what you describe - erroneous projection of object points onto the image plane when the object points are too far out of view. As I understand it, this is a limitation of the camera models (i.e. pinhole plus distortion) - they produce invalid projections when the points are outside the model's bounds.

And yes, if the points can be out of view, you need to check the validity of the projection. For all practical purposes you can use your approach of calling projectPoints with dist_coeffs set to zeros to project the object points onto the undistorted image plane, although that check is technically not correct.

If you need to guarantee that an object point's projection onto the undistorted plane lies within the bounds of the image plane, then you first need to map the limits of the image plane onto the undistorted plane and then check whether the projected points lie within that area.

Given a set of points on the image plane (say, 8 points around the perimeter), you can map the boundary of the image plane to the undistorted plane by first using undistortPoints() to go to normalized 3D points and then projectPoints() with dist_coeffs set to zeros to go to the undistorted plane. Catalog these points, recording the interior region. These cataloged points are then what you compare your existing output from projectPoints() against.
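
A sketch of that boundary-mapping check, assuming intrinsic_mat and dist_coff from the question, resolution as (width, height), and undistorted_points holding the Nx2 output of projectPoints() with zero distortion (in practice you would sample more border points than the 8 used here):

import numpy as np
import cv2

w, h = resolution

# Sample points around the image border, in order around the perimeter
# (corners plus edge midpoints).
border = np.array([[0, 0], [w / 2, 0], [w, 0], [w, h / 2],
                   [w, h], [w / 2, h], [0, h], [0, h / 2]], dtype=np.float64)

# Image plane -> normalized camera coordinates (distortion removed).
norm = cv2.undistortPoints(border.reshape(-1, 1, 2), intrinsic_mat, dist_coff).reshape(-1, 2)

# Normalized coordinates -> undistorted image plane (zero distortion).
norm_3d = np.hstack([norm, np.ones((norm.shape[0], 1))])
boundary, _ = cv2.projectPoints(norm_3d, np.zeros(3), np.zeros(3), intrinsic_mat, np.zeros(5))
boundary = boundary.reshape(-1, 2)

# Check each undistorted projection against the mapped boundary polygon.
polygon = boundary.astype(np.float32).reshape(-1, 1, 2)
inside = np.array([cv2.pointPolygonTest(polygon, (float(x), float(y)), False) >= 0
                   for x, y in undistorted_points])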