Pose estimation tvec values jumping inconsistently

Hey there, I'm wondering if anyone with experience with the Python aruco module can help me out.

I am using opencv-contrib-python==4.6.0.66

I am calling the estimatePoseSingleMarkers() method to generate the tvec and seeing some pretty jumpy values, but only when I move the ArUco marker to the left of the camera (my left, so to the right from the camera's viewpoint).

Some example values:

tvec X is 4.24502987824621
tvec Y is 2.499267051316424
tvec Z is 6.775988573633114
tvec X is 169.2422447101812 # sudden jump
tvec Y is 100.71959431654864
tvec Z is 270.8844311643304
tvec X is 4.207372054835526 # back to normal
tvec Y is 2.4881988834106514
tvec Z is 6.70487895479196

I took a look at the ArUco marker corner values generated by the cv2.aruco.detectMarkers() method to see if there was a corresponding jump, but things looked consistent there.

ARUCO corners are (array([[[695., 455.],
        [903., 453.],
        [904., 663.],
        [698., 668.]]], dtype=float32),)

tvec X is 6.53182424147733 # corresponding tvec
tvec Y is 3.19014749024916
tvec Z is 9.802933541613273
ARUCO corners are (array([[[695., 455.],
        [903., 453.],
        [904., 663.],
        [699., 668.]]], dtype=float32),)

tvec X is 153.24543286567615 # jump seen in tvec, but not in corners
tvec Y is 56.984347355582884
tvec Z is 221.61871973172512

This leads me to think that the issue is happening inside estimatePoseSingleMarkers().

Has anyone encountered similar problems or have an idea what might be skewing the tvec values?

Many thanks,
Artem

you are getting occasional false positive detections.

count the number of detections. filter by ID.
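a rough sketch of that filtering, assuming the 4.6-era aruco API. the expected ID and the dictionary are placeholders for whatever you actually generated your marker with:

import cv2

EXPECTED_ID = 23  # placeholder: use the ID your marker was actually generated with
arucoDict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_250)  # placeholder dictionary
arucoParams = cv2.aruco.DetectorParameters_create()

def detect_expected_marker(frame):
    # detect everything, then drop any detection whose ID does not match
    corners, ids, rejected = cv2.aruco.detectMarkers(frame, arucoDict, parameters=arucoParams)
    if ids is None:
        return (), None
    keep = [i for i, marker_id in enumerate(ids.flatten()) if marker_id == EXPECTED_ID]
    if not keep:
        return (), None
    return tuple(corners[i] for i in keep), ids[keep]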

your marker is broken. the green polygon drawn on it is misaligned vs. what I can see, due to one corner being mangled. show us a close-up picture of your physical marker.

also: there is insufficient “quiet zone” around the black square.

PnP is a non-linear problem: you can get these spurious 3D pose jumps even when the 2D coordinates are very close.

What I would try with a video stream:

  • classically solve for the initial pose
  • for the subsequent poses, use SOLVEPNP_ITERATIVE (with the previous pose as the initial guess) to refine the pose (see the sketch below)
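A minimal sketch of that refinement loop, assuming a square marker of side marker_length with corners ordered as detectMarkers returns them (top-left, top-right, bottom-right, bottom-left); img_corners would be corners[0].reshape(4, 2) from the detection output:

import cv2
import numpy as np

marker_length = 0.05  # assumed marker side length, in the same units as your calibration

# 3D marker corners in the marker frame, matching detectMarkers' corner order
obj_points = np.array([
    [-marker_length / 2,  marker_length / 2, 0],
    [ marker_length / 2,  marker_length / 2, 0],
    [ marker_length / 2, -marker_length / 2, 0],
    [-marker_length / 2, -marker_length / 2, 0],
], dtype=np.float32)

prev_rvec, prev_tvec = None, None

def track_pose(img_corners, camera_matrix, dist_coeffs):
    # img_corners: (4, 2) float32 array of the detected 2D corners of one marker
    global prev_rvec, prev_tvec
    if prev_rvec is None:
        # first frame: solve from scratch
        ok, rvec, tvec = cv2.solvePnP(obj_points, img_corners, camera_matrix, dist_coeffs)
    else:
        # subsequent frames: refine, starting from the previous pose
        ok, rvec, tvec = cv2.solvePnP(
            obj_points, img_corners, camera_matrix, dist_coeffs,
            rvec=prev_rvec, tvec=prev_tvec,
            useExtrinsicGuess=True, flags=cv2.SOLVEPNP_ITERATIVE)
    if ok:
        prev_rvec, prev_tvec = rvec, tvec
    return rvec, tvec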

If you are using independent images (not a video stream), you can run several solvePnP methods in parallel and keep the pose that yields the lowest reprojection error.
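A sketch of that idea, reusing the obj_points definition above (SOLVEPNP_IPPE_SQUARE expects exactly the 4 corners of a square marker in that order):

import cv2
import numpy as np

def best_pose(obj_points, img_points, camera_matrix, dist_coeffs):
    # try several PnP solvers and keep the pose with the lowest mean reprojection error
    best = None
    for flag in (cv2.SOLVEPNP_ITERATIVE, cv2.SOLVEPNP_IPPE_SQUARE, cv2.SOLVEPNP_SQPNP):
        ok, rvec, tvec = cv2.solvePnP(obj_points, img_points, camera_matrix, dist_coeffs, flags=flag)
        if not ok:
            continue
        projected, _ = cv2.projectPoints(obj_points, rvec, tvec, camera_matrix, dist_coeffs)
        err = np.linalg.norm(projected.reshape(-1, 2) - img_points.reshape(-1, 2), axis=1).mean()
        if best is None or err < best[0]:
            best = (err, rvec, tvec)
    return best  # (mean reprojection error, rvec, tvec), or None if nothing converged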

Thank you both,

Thanks for the “quiet zone” point. In my case the issue was caused by resizing the video stream frame. I think it was because I had calibrated at the original frame size, so resizing threw things off. Removing the resizing line completely eliminated the jitter.

import cv2
import imutils
from imutils.video import VideoStream

capture = VideoStream(src=0).start()

while True:
    frame = capture.read()
    frame = imutils.resize(frame, width=1000)  # removing this line fixed the jitter
    (corners, ids, rejected) = cv2.aruco.detectMarkers(frame, arucoDict, parameters=arucoParams)

I also followed the tutorial “3D pose estimation using aruco tag in python” to add an additional undistort step before calling detectMarkers().
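For anyone finding this later, that extra step looks roughly like this (a sketch; camera_matrix, dist_coeffs, and marker_length come from my calibration and setup, and since the frame is undistorted with the same camera matrix, zero distortion is passed to the pose step):

import cv2
import numpy as np

undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)

(corners, ids, rejected) = cv2.aruco.detectMarkers(undistorted, arucoDict, parameters=arucoParams)

if ids is not None:
    # the frame is already undistorted, so pass zero distortion coefficients here
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, np.zeros((5, 1)))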

Thanks for the update.
After resizing the image, did you also scale the calibration intrinsic matrix when you had the problem?
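In case the resize is ever needed again, the intrinsics would have to be scaled with it. A minimal sketch, assuming a plain resize and a calibration done at the original resolution (the distortion coefficients are expressed in normalized coordinates and stay as they are):

import numpy as np

def scale_camera_matrix(camera_matrix, orig_size, new_size):
    # orig_size and new_size are (width, height); fx and cx scale with width, fy and cy with height
    sx = new_size[0] / orig_size[0]
    sy = new_size[1] / orig_size[1]
    scaled = camera_matrix.astype(np.float64).copy()
    scaled[0, 0] *= sx  # fx
    scaled[0, 2] *= sx  # cx
    scaled[1, 1] *= sy  # fy
    scaled[1, 2] *= sy  # cy
    return scaled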