Scenario: We have placed two cameras (Cf and Cb) at the front and back of a room, nearly facing each other. Both are the same model with known camera intrinsics. We then placed four ArUco markers, visible from both cameras:
(a) - two of the markers (M1, M2) on the centre table, parallel to the ground plane;
(b) - the other two (M3, M4) on the left and right walls.
Approach A: From the Cf and Cb images we detect the corners of all four markers, building PTS_Cf (a 16x2 array) and PTS_Cb (a 16x2 array), and compute the essential matrix with findEssentialMat(). From the essential matrix (decomposeEssentialMat() / recoverPose()) we obtain the R, t of each camera with respect to the other.
The recovered t is a unit vector.
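The unit-norm t is expected: an essential matrix only constrains the translation direction, not its magnitude. A minimal numpy sketch (synthetic pose, made-up values, no OpenCV needed) illustrating why:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[ 0.0,  -v[2],  v[1]],
                     [ v[2],  0.0,  -v[0]],
                     [-v[1],  v[0],  0.0]])

# Made-up relative pose: rotation of ~170 deg about Y (nearly opposite
# cameras) and a 3 m baseline.
a = np.deg2rad(170.0)
R = np.array([[ np.cos(a), 0.0, np.sin(a)],
              [ 0.0,       1.0, 0.0      ],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([0.5, 0.0, 3.0])

E = skew(t) @ R               # essential matrix E = [t]_x R
E_scaled = skew(2.0 * t) @ R  # same geometry with the baseline doubled

# Doubling the baseline just scales E, so point correspondences cannot
# distinguish the two: the magnitude of t is unobservable from E alone.
print(np.allclose(E_scaled, 2.0 * E))  # -> True
```

This is why Approach A on its own cannot return a metric baseline.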
Approach B: Using estimatePoseSingleMarkers() we compute R, t (and from them a projection matrix) for each of the four ArUco markers in each camera. From the per-marker projection matrices we compute an essential matrix and then the relative R, t between Cf and Cb.
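For reference, the per-marker chaining we are attempting in Approach B can be sketched with synthetic poses (pure numpy, all values made up): if the composition is done consistently, every marker should yield the identical camera-to-camera pose.

```python
import numpy as np

def rot_y(deg):
    """Rotation matrix about the Y axis."""
    a = np.deg2rad(deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

# Hypothetical ground-truth transform Cf -> Cb (values are made up):
# a point X_f in Cf coordinates maps to X_b = R_bf @ X_f + t_bf.
R_bf = rot_y(178.0)               # cameras nearly facing each other
t_bf = np.array([0.2, 0.0, 4.0])  # ~4 m apart

# Two markers at different (made-up) poses in the Cf frame.
markers_f = [(rot_y(10.0),  np.array([0.0, -0.5, 2.0])),
             (rot_y(-20.0), np.array([0.5,  0.3, 2.5]))]

rel_poses = []
for R_mf, t_mf in markers_f:
    # Marker pose as it would be reported in the Cb frame.
    R_mb = R_bf @ R_mf
    t_mb = R_bf @ t_mf + t_bf
    # Camera-to-camera pose recovered from this single marker.
    R_rel = R_mb @ R_mf.T
    t_rel = t_mb - R_rel @ t_mf
    rel_poses.append((R_rel, t_rel))

# Every marker yields the same relative pose (up to numerical noise):
print(np.allclose(rel_poses[0][0], rel_poses[1][0]))  # -> True
print(np.allclose(rel_poses[0][1], rel_poses[1][1]))  # -> True
```

With real detections the per-marker results will differ only by noise; large differences would point to an inconsistency in how the poses are being chained.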
(1) Are our approaches correct?
(2) Can epipolar geometry be used with nearly opposite-facing cameras to find their relative positions?
(3) The computed t is a unit vector. How can we recover the actual (metric) translation?
(4) In Approach B, the relative R, t we compute between Cf and Cb change with the marker used. As we understand it, they should be constant regardless of which of the four ArUco markers is used. Any suggestions?
(5) Is there any other approach we should try?