Relative Position of Two Cameras

I have two cameras taking images of ArUco markers. I calculate the pose of each camera for each marker using estimatePoseSingleMarkers, and then calculate the relative pose between the cameras by inverting one of the poses and composing it with the other. However, the calculated relative pose of the cameras is different for each marker. Why is this so? Shouldn't the relative pose of the cameras remain constant?

Some data and code, please?

My logic is as follows. Given the poses of the same target, as seen from each camera:

Pose 1 = (R1, T1), Pose 2 = (R2, T2)

First, invert Pose 1:

      R1_inverse = np.matrix(R1).T                     # invert rotation by transposing it
      T1_inverse = np.dot(-R1_inverse, np.matrix(T1))

Then compose the inverted Pose 1 with Pose 2 to get the relative pose of camera 2 with respect to camera 1:

      info = cv2.composeRT(R1_inverse, T1_inverse, R2, T2)
      composedRvec, composedTvec = info[0], info[1]

The composed R and T should now be the pose of camera 2 relative to camera 1 and should be the same for every target. Or am I missing something?
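
For reference, here is a self-contained sketch of that logic (a rewrite, not the exact code I'm running). Note that cv2.composeRT expects rotation vectors rather than 3x3 matrices, so the inverted rotation matrix is converted back with cv2.Rodrigues; rvec1, tvec1, rvec2, tvec2 stand in for the estimatePoseSingleMarkers outputs for the same marker:

    import cv2
    import numpy as np

    def relative_pose(rvec1, tvec1, rvec2, tvec2):
        # invert pose 1: R_inv = R.T, t_inv = -R.T @ t
        R1, _ = cv2.Rodrigues(np.asarray(rvec1, dtype=float))
        R1_inverse = R1.T
        T1_inverse = -R1_inverse @ np.asarray(tvec1, dtype=float).reshape(3, 1)
        rvec1_inverse, _ = cv2.Rodrigues(R1_inverse)

        # compose inverted pose 1 with pose 2; info[0]/info[1] are rvec3/tvec3
        info = cv2.composeRT(rvec1_inverse, T1_inverse,
                             np.asarray(rvec2, dtype=float).reshape(3, 1),
                             np.asarray(tvec2, dtype=float).reshape(3, 1))
        composedRvec, composedTvec = info[0], info[1]
        return composedRvec, composedTvec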

In principle, that sounds correct.

However, the math, as written, may contain errors.

I would recommend always working with 4x4 matrices instead of handling R and T separately. That only leads to headaches.
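
For example, something along these lines (a sketch, untested against your data; the helper names are mine):

    import cv2
    import numpy as np

    def to_homogeneous(rvec, tvec):
        # build a 4x4 camera_T_marker matrix from an rvec/tvec pair
        T = np.eye(4)
        T[:3, :3], _ = cv2.Rodrigues(np.asarray(rvec, dtype=float))
        T[:3, 3] = np.asarray(tvec, dtype=float).reshape(3)
        return T

    def relative_camera_pose(rvec1, tvec1, rvec2, tvec2):
        # same marker, as seen by each camera
        cam1_T_marker = to_homogeneous(rvec1, tvec1)
        cam2_T_marker = to_homogeneous(rvec2, tvec2)
        # cam2_T_marker composed with marker_T_cam1 gives cam2_T_cam1
        return cam2_T_marker @ np.linalg.inv(cam1_T_marker)

If your data is consistent, cam2_T_cam1 should come out (nearly) the same for every marker.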

Make sure of the sense of each matrix. ArUco pose matrices, speaking in 4x4 terms, ought to express a transformation from the marker frame to the camera frame, or equivalently, they express the marker's pose in the camera frame.

That's the typical matrix notation, and it helps keep things "type-safe" when composing (multiplying):

\begin{align*}
^\text{destination frame} ~ &T ~ _\text{source frame} \\
^\text{camera frame} ~ &T ~ _\text{marker frame}
\end{align*}
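
With that convention, the relative pose you're after composes so that the frame labels visibly cancel (this chained form is my addition, in the same notation):

\begin{align*}
^\text{camera 2} ~ T ~ _\text{camera 1} = \left( ^\text{camera 2} ~ T ~ _\text{marker} \right) \cdot \left( ^\text{camera 1} ~ T ~ _\text{marker} \right)^{-1}
\end{align*}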

Thanks. I will try that and see how it goes. Cheers