I would like to identify when a laser pointer hits lines 1 and 2, which represent the edges of an Aruco visual marker. The laser moves horizontally. What would be a good approach to identify the moment when the laser crosses these lines, taking into account that the luminosity of the environment can change? Any suggestions?
Is the camera moving or stationary? If it’s moving you might have to do some additional work to handle the perspective change, but here are some things I’d be thinking about for a task like this:
How accurately are the Aruco corners being detected? I’ve had some less-than-ideal results for the corners in the past, so depending on your accuracy requirements you might consider adding some additional features to your target (I’m thinking chessboard corners in known positions outside of the aruco white zone).
I’m not a big fan of using color directly for tracking, but in this case it might be appropriate. Extract the R channel, apply a threshold and blob detection?
Look into background modeling. Depending on how significantly / quickly the luminosity is varying, you can probably get a pretty good background model which, with image differencing, would probably work quite well at distinguishing the laser pointer.
As the laser pointer crosses into the black portion of the Aruco, the image will show a step change in intensity under the laser. To some extent the background modeling and image differencing might account for this, but I would still expect a non-uniform response, which would probably affect the resulting shape / intensity distribution of the blob. The typical approaches (center of mass of the blob, fitting a circle / ellipse to the blob) would then tend to have a bias in the position.
Does the line where the laser crosses have to be tangent to the Aruco, or can you position the Aruco offset (either vertically or horizontally) so that the detection area for the laser pointer is a uniform material/color (all white)?
What’s the goal of all this?
Why did you choose an augmented reality marker to define your two lines? What’s the purpose of those lines?
Why is there a laser pointer? What’s moving, the laser pointer or the camera+surface? What causes the movement, and for what purpose?
I need more precision. Pose estimation based only on the Aruco visual marker is not sufficient. I thought about sweeping a laser range finder (ToF) across the edges of the marker to obtain the distances and calculate the position of the visual marker more accurately.
The camera and visual marker are fixed, a stepper motor would move the laser until it touches the edges of the marker.
If you want a more accurate pose estimate, you might consider a Charuco board. You will get more feature points, and most likely they will be more accurate; both would be helpful for getting more accurate pose estimation.
How accurate are your intrinsics? Your pose estimation will depend on your intrinsics / distortion calibration for your camera.
I have already performed several calibrations and at different resolutions. I had tried it with Aruco Board, but I haven’t tried it with Charuco yet. I’ll do a test. I am also considering the possibility of using a Kinect as a last resort…
What are the scores you are getting from camera calibration? What level of accuracy do you need for the laser position, and what accuracy have you been able to achieve so far?
If I were doing this I would start by making sure I had good intrinsic calibration for the camera, and I would use more feature points for my pose estimation. When you have only 4 points for pose estimation, a small error in just one of the point locations can dramatically influence the results. The Aruco you are using to estimate pose looks wrinkled or not flat; that could degrade your results, maybe significantly.