# ArUco marker distance calculation off

I’m trying to detect an ArUco marker and calculate the distance from the camera to it, but the results I’m getting are off by a factor of ~2.37.
I measured the real physical distance as 63 cm, but according to my tvecs vector the distance is ~149 cm. And if I simply scale the “marker_size” variable used for the object points by that same factor, the distance from tvecs is spot on.
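(For context, the distance I quote is the Euclidean norm of the translation vector; a minimal sketch with a made-up tvec, in the same unit as the object points:)

```python
import numpy as np

# tvec as returned by cv.solvePnP: translation from the camera's optical
# center to the marker origin, in the object-point unit (mm here).
# The values below are made up for illustration.
tvec = np.array([[120.0], [45.0], [870.0]])

# straight-line distance camera -> marker
distance_mm = float(np.linalg.norm(tvec))
print(round(distance_mm / 10, 1), "cm")
```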

Here’s the code performing the pose estimation:

```python
import cv2 as cv
import numpy as np
from cv2 import aruco


def __init__(self, a_dict, marker_size):
    self.a_dict = a_dict
    self.params = aruco.DetectorParameters()
    self.util = Util()
    self.marker_size = marker_size  # mm
    # Object points in the IPPE_SQUARE order: top-left, top-right,
    # bottom-right, bottom-left, marker centered on its own origin
    self.marker_objpts = np.array([[-self.marker_size/2,  self.marker_size/2, 0],
                                   [ self.marker_size/2,  self.marker_size/2, 0],
                                   [ self.marker_size/2, -self.marker_size/2, 0],
                                   [-self.marker_size/2, -self.marker_size/2, 0]], dtype=np.float32)

def pose(self, frame, corners, ids, mtx, dist, flags):
    """
    #### Estimate pose of Aruco marker ####
    """
    markerLength = 156  # currently unused
    axis_length = self.marker_size / 2
    if len(corners) > 0:
        ids_f = ids.flatten()  # Why do this? (NicolaiNielsen)
        ret, rvecs, tvecs = cv.solvePnP(self.marker_objpts, corners[0], mtx, dist, False, flags)
        imgpts, jac = cv.projectPoints(self.marker_objpts, rvecs, tvecs, mtx, dist)
        self.project_aruco_bounds(frame, imgpts)
        self.util.project(frame, imgpts)
        cv.drawFrameAxes(frame, mtx, dist, rvecs, tvecs, axis_length)

    return rvecs, tvecs, imgpts
```

And the resulting frame shows:

And I performed validation of the calibration using a separate set of images of the same chessboard used for calibrating:

The mean reprojection error over the whole set of 9 validation images was ~0.82 px, which should be fine? I also used the calibration parameters to undistort the validation images, and they came out looking fine, with no strange curves or anything. Additionally, I’ve triple-checked that all square/marker sizes used in calibration and pose estimation are in the same unit (mm).

I’ve really run out of ideas for what may be causing this. Would really appreciate any help I can get.

Why do you call `solvePnP`? Why not use ArUco’s own pose estimation?

That scale factor could come from a bad/wrong focal length, or the entire camera matrix being wrong. Probably bad calibration. Even if you think the calibration is good, it still might not be. Beginners can never tell, but they think they can.
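A quick way to see how the focal length couples into distance: for a roughly fronto-parallel marker, distance ≈ fx × real_width / pixel_width, so an fx that is off by some factor skews the distance by the same factor. All numbers below are made up for illustration:

```python
# Pinhole sanity check (hypothetical values):
#   distance ≈ fx * real_marker_width / marker_width_in_pixels
fx = 800.0            # focal length in pixels, from the camera matrix
marker_size = 156.0   # real marker edge length, mm
pixel_width = 198.0   # marker edge length measured in the image, pixels

distance_mm = fx * marker_size / pixel_width
print(round(distance_mm / 10, 1), "cm")  # compare against a ruler measurement
```

If this back-of-the-envelope distance disagrees with a ruler by a constant factor, suspect fx (and hence the calibration) before anything else.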

I see no calibration data (intrinsics), so that is impossible to validate.

I see no experimental setup where you place the marker at a known, ruler-measured distance and provide the original camera image along with the marker’s size.

It has finally been solved. The issue was that the calibration images were taken using Logitech’s software for the camera rather than OpenCV itself, so I’m guessing the camera had different settings during calibration/validation than when I was running detection in OpenCV. Distance and pose are now spot on. Such a stupid mistake to make…

You can request a specific video frame size with OpenCV; the default is 640×480, regardless of the camera’s sensor resolution.