Ground-truth 3D joint estimation using triangulatePoints

Hello.

I have been trying to triangulate points (joints) from multiple cameras in order to estimate the ground-truth 3D position of the joints in space.

I have used both the triangulatePoints function from OpenCV and the triangulatePoints implementation from the sfm module in opencv_contrib.

Using the OpenCV function, I have to move one camera to (0,0,0) and change the location of the second camera accordingly to get somewhat reasonable results. The weird thing is that points close to the optical center align correctly with the body joints, but further away from it the projected skeletons appear squished, which suggests (to me at least) an issue with the focal length.

The thing is, when I project 3D points predicted by a 3D method using the same calibration parameters, they are extremely accurate. The images are obtained from ZED 2 cameras. In theory their API returns rectified images, as indicated by the distortion coefficients being all zero, so undistorting the images/points has no effect (which makes sense).
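For reference, this is roughly the kind of reprojection check I mean (the K, R, t, and X values below are made-up placeholders, not my actual calibration):

```python
import numpy as np
import cv2

# Minimal reprojection sanity check: if the calibration is right, the
# projected 3D joints should land on the 2D detections in the image.
K = np.array([[700., 0., 640.],
              [0., 700., 360.],
              [0., 0., 1.]])          # placeholder intrinsics
R = np.eye(3)                         # world-to-camera rotation
t = np.zeros(3)                       # world-to-camera translation
X = np.array([[0.1, -0.2, 3.0],
              [0.3, 0.5, 2.5]])       # Nx3 world points, e.g. predicted joints

rvec, _ = cv2.Rodrigues(R)            # projectPoints wants a rotation vector
dist = np.zeros(5)                    # rectified ZED images: zero distortion
pts2d, _ = cv2.projectPoints(X, rvec, t, K, dist)
print(pts2d.reshape(-1, 2))           # compare against the detected joint pixels
```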

So in this case my P matrices are

P1 = K1 [R1 | 0] and P2 = K2 [R2 | t2], where t2 is relative to cam 1.
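As a sanity check, here is a minimal synthetic sketch of this two-camera setup, with cam 1 taken as the world origin (R1 = I); all numbers are made up, but the round trip of project-then-triangulate should recover the 3D points exactly:

```python
import numpy as np
import cv2

K1 = K2 = np.array([[700., 0., 640.],
                    [0., 700., 360.],
                    [0., 0., 1.]])                  # placeholder intrinsics
R2, _ = cv2.Rodrigues(np.array([0., np.deg2rad(30.), 0.]))  # cam 2 yawed 30 deg
t2 = np.array([[-1.5], [0.], [0.5]])                # world-to-camera translation

P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])  # P1 = K1 [I | 0]
P2 = K2 @ np.hstack([R2, t2])                       # P2 = K2 [R2 | t2]

# Fake "joints": random points 2-4 m in front of cam 1.
X = np.random.uniform([-1., -1., 2.], [1., 1., 4.], (10, 3)).T  # 3xN
Xh = np.vstack([X, np.ones((1, 10))])               # homogeneous 4xN

x1, x2 = P1 @ Xh, P2 @ Xh                           # project into both views
x1, x2 = x1[:2] / x1[2], x2[:2] / x2[2]             # 2xN pixel coordinates

X4 = cv2.triangulatePoints(P1, P2, x1, x2)          # 4xN homogeneous output
X3 = X4[:3] / X4[3]                                 # back to Euclidean
print(np.abs(X3 - X).max())                         # ~1e-12: round trip works
```

If this synthetic version works but the real data does not, the problem is in the calibration convention rather than in triangulatePoints itself.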

With sfm.triangulatePoints I get even worse results, especially if I move the origin point, and adding a third camera facing in the opposite direction makes them worse still.
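For completeness, this is roughly how the multi-view call looks, assuming your opencv_contrib build actually exposes cv2.sfm (many prebuilt wheels do not include the sfm module); the matrices below are random placeholders, just to show the expected shapes:

```python
import numpy as np
import cv2

Ps = [np.random.rand(3, 4) for _ in range(3)]    # one 3x4 P matrix per camera
pts = [np.random.rand(2, 10) for _ in range(3)]  # the same 10 joints in each view

# Both arguments are "arrays of arrays", one entry per view.
pts3d = cv2.sfm.triangulatePoints(pts, Ps)       # 3xN triangulated points
print(pts3d.shape)
```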

My main question is: is my approach to estimating the ground-truth 3D positions correct? Do the cameras have to be close to one another? (In my case they cover a 4×4×2 volume.)

For anyone facing the same issue: the [R|t] matrix has to be world-to-camera, not camera-to-world. After changing that I don't have to move the origin point any more, and the projections are correct :slight_smile:
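In code, the fix amounts to inverting the pose before building the P matrices (the function name here is mine):

```python
import numpy as np

def world_to_cam(R_cw, t_cw):
    """Invert a camera-to-world pose (R_cw, t_cw) into the world-to-camera
    [R|t] the projection matrices need: X_cam = R_wc @ X_world + t_wc."""
    R_wc = R_cw.T
    t_wc = -R_cw.T @ t_cw.reshape(3, 1)
    return R_wc, t_wc

# e.g. P2 = K2 @ np.hstack(world_to_cam(R2_cw, t2_cw))
```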