Hi! I’m making a project, as the title says, to calculate camera coordinates and rotation relative to a marker detected via aruco. You can check the project and all the source code on my GitHub: GitHub - ThiroSmash/Camera-Pose-From-Aruco-Marker
I have been doing some tests, and the results are reasonable, but the error is somewhat inconsistent and I don’t know what to make of it. Currently I’m mostly paying attention to the angles, specifically the Y-axis rotation angle. I’ve uploaded some samples taken at 0°, 30° and 50° to the data folder of the GitHub repo, since I can’t put more than one image in this post. Here’s one at 30°:
The only thing easily noticeable from these tests is that at 0° the angle is more precise when the marker is at the center of the screen than when it is at an edge. The rest of the results seem somewhat random to me, though.
- Calibrations I have done
I’ve calculated and refined the camera matrix and distortion coefficients following OpenCV’s calibration tutorial. I wrote an iterative program that takes as many valid samples as the user wants. I’ve run many different calibrations; for these tests I used the intrinsic parameters from a calibration of 50 iterations with a re-projection error of 0.02.
These parameters are passed to the solvePnP function when calculating the translation and rotation vectors.
- Calibrations I haven’t done / possible sources of error
Though I have the camera’s intrinsic parameters, these are never used in the process of actually detecting the marker. It is the aruco module that determines the position of the marker’s corners without any knowledge of the camera, and I then use those corners directly for solvePnP. I imagine this gives me corners about two or three pixels off at worst, but I’m not sure how I’m supposed to compensate for that.
I don’t know of methods to improve the aruco detection itself, other than trying the corner refinement flags (contour and subpix), which yield nearly identical results.
I’ve thought about applying the undistort method to the image and then running a new detection, to hopefully find better corners, but I don’t know if this would actually hinder solvePnP’s performance.
Then there might be problems with the measurements themselves. The marker is glued flat to a wall, but it is slightly rotated about the Z-axis. As far as I understand, this shouldn’t have any effect on the tests, but again, I’m not certain.
To position the camera, I’m using a specialised ruler that can bend into any angle and automatically measures it with a precision of 0.1°. I am placing the ruler and the camera manually, though, so that might introduce slight inaccuracies.
However, I’ve repeated these tests multiple times and always get similar results, so the error from manual placement seems small enough not to matter.
Some help and guidance here would be very welcome, but above all I would like someone else to test the program and see what results they get.