ArUco Recognition

Is it possible to use OpenCV to pull the following information from an ArUco marker?

  • distance from the camera
  • rotation along the z-axis (yaw)
  • direction of the rotation along the z-axis
  • velocity
  • rotational speed
  • rotation direction

If so, can I have an explanation of how I could accomplish this?

https://docs.opencv.org/master/d9/d6d/tutorial_table_of_content_aruco.html

Thank you so much for your reply!
I am a bit confused about how to do the calibration with the ArUco board. Is there a tutorial anywhere on that process?

What exact camera do you have? You might not need to calibrate: a camera matrix can be calculated from a few basic parameters, and distortion can often be assumed to be zero. I can help with that.

ChArUco is apparently the most convenient way to calibrate a camera. I've never done that myself, but AFAICS it's described there.

The camera I am currently using is a Microsoft LifeCam VX-5000. It is more for test purposes right now, as I intend to get a camera with a faster frame rate at some point.

I have seen short videos of people showing it being detected, but I haven't seen how to actually do it.

OK, so 640×480 resolution and a 55° DFOV.

And I'm retracting my assertion about negligible distortion: I'd expect it to be noticeable for a product of that quality.

calculation:

  • resolution: 640 × 480 pixels
  • half-diagonal: 400 pixels
  • half-DFOV: 27.5°
  • assuming square pixels so f_x = f_y = f
\begin{align*} f \cdot \tan(27.5°) &= 400 ~\text{px}\\ f &= \frac{400 ~\text{px}}{\tan(27.5°)} \approx 768 ~\text{px} \end{align*}

so your matrix should look like:

C = \begin{pmatrix} 768 & 0 & 319.5 \\ 0 & 768 & 239.5 \\ 0 & 0 & 1 \\ \end{pmatrix}
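Here is that arithmetic as a quick Python sketch, in case it helps; it assumes square pixels, a principal point at the image center (that's where the 319.5 and 239.5 come from), and zero distortion:

```python
import numpy as np

width, height = 640, 480
dfov_deg = 55.0                                    # diagonal field of view from the spec

half_diag = 0.5 * np.hypot(width, height)          # = 400 px for 640x480
f = half_diag / np.tan(np.radians(dfov_deg / 2))   # ~768 px, assuming square pixels (f_x = f_y)

# principal point assumed at the image center, hence the .5 values
cx = (width - 1) / 2.0                             # 319.5
cy = (height - 1) / 2.0                            # 239.5

camera_matrix = np.array([[f, 0, cx],
                          [0, f, cy],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)                          # treat lens distortion as zero for a first try
```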

You should be able to get reasonable results with that, but it won't be very accurate. At least it spares you from having to calibrate. You can run (or copy) the marker detection example right away.
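For reference, a minimal Python version of that detection step (this uses the legacy cv2.aruco module from opencv-contrib-python; the DICT_4X4_50 dictionary and device index 0 are just examples, adjust for your markers and camera):

```python
import cv2

# dictionary must match the markers you printed -- DICT_4X4_50 is only an example
aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)                          # the LifeCam, assuming it shows up as device 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, rejected = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("aruco", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```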

And here's a convenient online generator for markers, which spares you from calling the API to generate them: Online ArUco markers generator

How did you get the 319.5 and the 239.5?

And where did you find the half-DFOV?

Thank you so much!!

And the 1? I've never done matrix math with cameras before.

But I want to understand.

Also be aware that if your camera has zoom or focus adjustments, your intrinsics will change. For just focus, the change might not be significant enough to matter (depending on your application / accuracy needs).

I'd probably calibrate the camera if you plan on using this for anything serious. It is a bit of a learning curve and requires some up-front effort, but you will get much better results for whatever you do next. (You mention wanting to measure distance etc.) Print a ChArUco calibration target (maybe 8-1/2 x 11 is big enough for your case?), mount it to something flat (glass works well), and capture 10 or so images from different positions / angles. Once you run the images through the ChArUco calibration process, you don't have to worry about it anymore (unless you switch cameras or adjust your optics).
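If it helps, here's a rough sketch of that workflow in Python (legacy cv2.aruco API from opencv-contrib-python); the board geometry, dictionary, and file paths are placeholders that must match whatever you actually print and capture:

```python
import glob

import cv2
import numpy as np

# board geometry must match the printed target (these numbers are just examples)
aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
board = cv2.aruco.CharucoBoard_create(7, 5, 0.030, 0.022, aruco_dict)  # squares X/Y, square/marker size (m)

all_corners, all_ids = [], []
image_size = None

for path in glob.glob("calib_images/*.png"):       # ~10 shots from different positions / angles
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None:
        continue
    n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, board)
    if n > 3:                                      # keep views with a reasonable number of corners
        all_corners.append(ch_corners)
        all_ids.append(ch_ids)

err, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, image_size, None, None)
print("RMS reprojection error:", err)
np.savez("calibration.npz", camera_matrix=camera_matrix, dist_coeffs=dist_coeffs)
```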

As far as extracting data from the ArUco markers goes, yes, you can get much (all?) of what you want with some work, assuming you know the ArUco marker size etc. Your accuracy for distance (based on the size of the marker, I presume) might not be great. Velocity, rotation, etc. might need to be filtered (expect a lot of "noise" in your measurements).
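As a rough sketch of that in Python: assuming you already have corners from cv2.aruco.detectMarkers, a camera matrix and distortion coefficients (estimated or calibrated), and a measured marker size, something like this gets distance and a signed yaw (the 0.05 m side length and the yaw convention are example choices, not the only ones):

```python
import cv2
import numpy as np

MARKER_LENGTH = 0.05   # marker side length in meters -- measure your printed markers

# corners comes from cv2.aruco.detectMarkers; camera_matrix / dist_coeffs from your calibration
rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
    corners, MARKER_LENGTH, camera_matrix, dist_coeffs)

for rvec, tvec in zip(rvecs, tvecs):
    # straight-line distance from the camera to the marker center (same units as MARKER_LENGTH)
    distance = np.linalg.norm(tvec)

    # rotation vector -> rotation matrix, then take the rotation about the camera z-axis
    R, _ = cv2.Rodrigues(rvec)
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))   # signed, so the sign is your yaw direction

    print(f"distance {distance:.3f} m, yaw {yaw:+.1f} deg")

# velocity / rotational speed: difference distance and yaw between consecutive frames and
# divide by the frame interval, then smooth (e.g. a moving average) -- the raw values are noisy
```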

The ArUco marker detection isn't great at localizing the 4 corner points, in my experience. You might be able to refine the results by calling cornerSubPix on the points it returns, but I have always achieved the best results with a double corner (as with the chessboard target). Maybe you can add extra corners to an ArUco marker (to create "chessboard corners"), but I think the ArUco detection needs a white border around the marker, so I'm not sure whether it would work.
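If you want to try the cornerSubPix idea, something like this works on the corner arrays detectMarkers returns (the (5, 5) search window is only a starting guess and may or may not actually improve things):

```python
import cv2

# 'gray' is the grayscale frame, 'corners' the list returned by cv2.aruco.detectMarkers
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
refined = [cv2.cornerSubPix(gray, c.reshape(-1, 1, 2).copy(), (5, 5), (-1, -1), criteria)
           for c in corners]
# reshape back to (1, 4, 2) if you pass these on to estimatePoseSingleMarkers
refined = [r.reshape(1, -1, 2) for r in refined]
```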

This feels like a "try it out and see what works and what doesn't" situation to me. You might have to get creative to get what you want out of it. Maybe you'll end up with a second camera.

Good luck with it!

I calibrated the camera, and I have the matrix, distortion coefficients, rvecs, and tvecs.

As I am new to this, and the OpenCV ArUco detection page seems to explain it in C++, I do not know how to use this information within Python, or how to pull that information from the marker.

I have several markers across the disc I want to pull this information from; I just don't know how to get the distance, yaw, and direction of yaw.

I was able to pull this information using trig, but it was nothing more than an estimate. I want to be able to get more accurate results.

I would recommend setting cornerRefinementMethod = cv.aruco.CORNER_REFINE_APRILTAG, which gives very stable results that are reasonably consistent between near and far markers.
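In Python (legacy cv2.aruco module) that looks something like this; the dictionary is just whichever one your markers use:

```python
import cv2

aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)      # your dictionary here
params = cv2.aruco.DetectorParameters_create()
params.cornerRefinementMethod = cv2.aruco.CORNER_REFINE_APRILTAG  # needs OpenCV >= 3.4.2

corners, ids, rejected = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
```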

Thanks! I wasn’t aware of that option (it looks like it became available in 3.4.2) - I will give it a try.