Hi,
I am using Aruco markers on curved surfaces and wish to improve the detection rate. Currently I pick up the markers in roughly 90% of orientations, but that may not be enough for my goals.
When I look at the rejected candidates, the target markers are being picked up very solidly as quads, but they are then rejected.
The positions of markers are out of my control, so there’s no way that I can get around using them on a curved surface.
Is there a way to see at which step the markers are being rejected, so that I can tune the relevant parameter? The scene is very “clean”, so I’m not too worried about false positives.
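For reference, this is roughly how I’m pulling out and drawing the rejected candidates at the moment (a minimal sketch on the OpenCV 4.7+ API; the dictionary and file name are just placeholders):

```python
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # placeholder dictionary
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

img = cv2.imread("frame.png")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMarkers returns the rejected candidate quads as a third output
corners, ids, rejected = detector.detectMarkers(gray)

# accepted markers with their ids, rejected candidates outlined in red
cv2.aruco.drawDetectedMarkers(img, corners, ids)
cv2.aruco.drawDetectedMarkers(img, rejected, borderColor=(0, 0, 255))
cv2.imshow("candidates", img)
cv2.waitKey(0)
```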
so you’ll wanna find out where your markers are rejected.
and you might wanna provide a picture (not a screenshot) that exhibits the issue.
note that “the module” is now a namespace inside the core objdetect module (since OpenCV 4.7). the remaining aruco module in contrib is either gone or just a documentation stub that redirects people to the namespace in objdetect.
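a quick sanity check for which API you’re actually running (just a sketch, nothing specific to your setup):

```python
import cv2

# with OpenCV >= 4.7 the aruco API ships in the main objdetect module,
# so a plain opencv-python install exposes it under cv2.aruco
print(cv2.__version__)
print(hasattr(cv2, "aruco") and hasattr(cv2.aruco, "ArucoDetector"))
```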
arucos are for pose estimation, and that assumes a planar marker. curved surfaces break all of that, so do not expect detection to work on them; it is not supposed to.
if you just need identification for your application, use QR codes. they’re made to be robustly read/identified. they are not for pose estimation/AR, but for carrying identification.
do not abuse arucos for identification. use QR codes for identification.
do not abuse QR codes for pose estimation. use arucos for pose estimation.
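if identification (plus a rough pixel position) is all you need, something like this would do it. minimal sketch, the file name is just a placeholder:

```python
import cv2

img = cv2.imread("frame.png")  # placeholder file name
qr = cv2.QRCodeDetector()

# detectAndDecodeMulti handles several codes per frame;
# decoded_info holds the payload strings, points the quad corners
ok, decoded_info, points, _ = qr.detectAndDecodeMulti(img)
if ok:
    for text, quad in zip(decoded_info, points):
        print(text, quad.mean(axis=0))  # payload and rough center in pixels
```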
if someone makes you use the wrong technology here, it’s on them to change their mind (if they made that decision) or to push back up the chain of command. this is not a matter of “style” or preference. these technologies serve different purposes, and neither is designed or expected to work for the other’s.
There’s really no way to get around using curved markers; that’s just the task I’ve been given. If I could mount the Arucos on a flat sheet or position the objects in a known holder I would, but I can’t modify the target at all. I understand fully how the pose estimation works, and I don’t need a six-axis estimate, just a direction from the camera to the center of the marker.
The main thing I’m wondering about is adjusting DetectorParameters to be more willing to accept warped markers. The detector is clearly picking up the quad very easily; it’s just rejecting it too harshly. I currently don’t know at which step it’s getting rejected, so I’m just taking shots in the dark. I’ve looked at polygonalApproxAccuracyRate, minCornerDistanceRate, perspectiveRemovePixelPerCell, aprilTagMaxLineFitMse, aprilTagCriticalRad and more, but without being able to see which criterion is rejecting it, it’s really hard to tell whether any change makes a difference.
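For reference, this is the kind of A/B comparison I’ve been running, counting accepted versus rejected candidates with the defaults and with a loosened set (the values and file name are just placeholders, not a recommendation):

```python
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # placeholder dictionary
gray = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY)       # placeholder frame

def count(params):
    detector = cv2.aruco.ArucoDetector(dictionary, params)
    corners, ids, rejected = detector.detectMarkers(gray)
    return len(corners), len(rejected)

default = cv2.aruco.DetectorParameters()

relaxed = cv2.aruco.DetectorParameters()
# looser quad-to-polygon fit and border/bit tolerances (values are guesses)
relaxed.polygonalApproxAccuracyRate = 0.08      # default 0.03
relaxed.maxErroneousBitsInBorderRate = 0.6      # default 0.35
relaxed.errorCorrectionRate = 1.0               # default 0.6
relaxed.perspectiveRemovePixelPerCell = 8       # default 4

print("default (accepted, rejected):", count(default))
print("relaxed (accepted, rejected):", count(relaxed))
```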