Following the tutorials and plenty of videos, I was able to combine an RGB and depth image and run the calibration (with many images) to construct a point cloud.

Within Open3D (I know this forum is about OpenCV), I was able to identify a particular surface of interest in the point cloud and find its plane normal (Z-vector) using `segment_plane()`; I was also able to draw an arrow for visualisation purposes. FYI, it calculated the complete A, B, C, D of the plane equation.

Since I have the Z-vector, I would next like to construct the X-vector, which in concept could easily be found if I could use cv2.minAreaRect() on a 2D image.

What I have:

- The filtered point cloud (inliers only)
- The location of the centre of the surface (tvec)
- Its orientation (rvec), or equivalently the plane equation A, B, C, D

**How do I obtain a 2D image of the filtered point cloud from a viewpoint along the Z-vector, looking straight down at the surface?**

The purpose is so that I can then use cv2.minAreaRect() to get an angle with which to construct the X-vector.

What I have already tried:

- I’ve tried cv2.projectPoints(), but the resulting x and y coordinates for each point were ridiculously far off (e.g. x: -300,000 and y: -300,000)

Could anyone guide me on what needs to be done to convert the 3D points back to a 2D image from this specific viewpoint, please?