Camera Calibration and Laser Detection

Hey guys,

I have a project where I am detecting a laser location relative to a target that is on screen. I have some concerns and need some guidance because I think this could be very cool.

For camera calibration, the camera might end up tilted vertically or horizontally. I was thinking of using a chessboard in a corner of the screen rather than one that covers the whole screen. Is it okay to have a smaller chessboard in a corner of the screen, or should the chessboard be at the center of the image, and thus the screen?

For laser detection, I was going to test either a red or green laser. Say I use a green laser and take a picture of the screen with the laser dot on it. The screen is white, so looking at the green plane alone won’t help much because the pixel intensities for green and white would be similar. Perhaps I can look at the other color planes instead? Or maybe there is already a function that does this?

Any guidance on how I should approach this project would be greatly appreciated!

Thanks y’all

one chessboard in one corner? bad idea. that’s unstable.

put easily recognizable shapes (simple shapes) in all four corners. if detection costs enough, you should track them instead of re-detecting in every frame. they don’t need to be unique. “crash test dummy” crosshairs (the saddle points in a checkerboard) are just fine. or circles.
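
rough sketch (python/opencv, untested) assuming a dark filled circle is printed near each corner of the screen:

```python
import cv2

def find_corner_markers(gray, roi_frac=0.25):
    """look for one blob (circle marker) in each corner ROI of a grayscale frame."""
    h, w = gray.shape
    rh, rw = int(h * roi_frac), int(w * roi_frac)
    rois = {
        "top_left":     (0, 0),
        "top_right":    (0, w - rw),
        "bottom_left":  (h - rh, 0),
        "bottom_right": (h - rh, w - rw),
    }
    detector = cv2.SimpleBlobDetector_create()  # default params detect dark blobs
    markers = {}
    for name, (y0, x0) in rois.items():
        keypoints = detector.detect(gray[y0:y0 + rh, x0:x0 + rw])
        if keypoints:
            kp = max(keypoints, key=lambda k: k.size)       # biggest blob wins
            markers[name] = (kp.pt[0] + x0, kp.pt[1] + y0)  # back to full-image coords
    return markers
```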

“augmented reality” markers (aruco, …) are designed to work on their own, which is why they need to cover some area (numerical stability). slapping a quartet of those into the corners of the screen is silly. proof of that is how often people think of trying it.

it’s light. light adds. make sure the camera isn’t exposing so “well” that the addition of a laser dot drives any of the sensor’s pixels into saturation. err on the side of underexposure.
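
quick sanity check for saturation (untested sketch, assumes an 8-bit BGR frame):

```python
import numpy as np

def saturation_fraction(frame, level=250):
    """fraction of pixels at/above `level` in any channel."""
    return float(np.mean((frame >= level).any(axis=2)))

# if this comes back non-negligible, back off the exposure/gain
```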


Hello, I see that too much light makes for poor object detection when training a new system… underexposure advice noted.

One thing to be aware of if you are using a color camera is that the laser light will be a single wavelength (let’s say red around 650nm) - this means you won’t see it in the blue or green channels (maybe a bit in green). Assuming your camera uses a Bayer filter for color, you will probably notice some pixelation / structure in the image. Depending on the spot size and accuracy requirements this might be a bit of an unexpected challenge. Not the end of the world or anything, but something to be aware of. A b&w camera + a red color filter might be something to consider.

As far as calibrating the camera goes, a few questions. Is the screen a monitor? A projection screen? Something else? (is it flat?) The vertical / horizontal tilt - is this dynamic, or just an adjustment you make during setup?

If it’s a flat screen that you control the images on, you could display a calibration target on the screen itself to calibrate the camera, but this assumes the camera isn’t moving during use. How fast do you need to detect / locate the laser? Is this for a firearm simulator or something similar? If so you’ll probably want a camera with a high frame rate so your latency isn’t awful.


Hey Steven,

I am going to use either green or red. I’m leaning more towards red, but I will contrast the red plane by subtracting the max of the green and blue planes from the red plane. This should make the actual red color intensities on the red plane large while the other color intensities stay low. I will have a threshold value to assign binary values (essentially, if the intensity is above 50, make it a 1). The laser dot will be a cluster of pixels, and I only need one point. I was going to use K-means clustering to find the centroid, but since there is only one centroid, it is simply finding the average. I then add a target bitmap matrix on the image and use the distance formula to find the distance in pixels. My group is creating a function to convert pixels to degrees. But I will do some more research on a b&w camera; I was just assuming the actual image processing was the easy part lol.
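
Here is roughly what I have in mind in OpenCV/Python (untested; the threshold and the target location are just placeholders):

```python
import cv2
import numpy as np

def find_laser_distance(img_bgr, target_xy, thresh=50):
    b, g, r = cv2.split(img_bgr)
    # emphasize the red dot: red minus the stronger of green/blue, so
    # white screen pixels (high in every channel) go toward zero
    red_only = cv2.subtract(r, cv2.max(g, b))
    _, mask = cv2.threshold(red_only, thresh, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # no laser dot found
    centroid = (xs.mean(), ys.mean())  # one cluster, so the centroid is just the mean
    dist_px = float(np.hypot(centroid[0] - target_xy[0], centroid[1] - target_xy[1]))
    return centroid, dist_px
```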

As for camera calibration, we will have four chessboards in the corners of the screen. The screen is a flat, projection-screen material. It will be fastened to the testing table, so it won’t have any tilt (not dynamic). I have been using OpenCV functions to find the chessboards, create a homography matrix based off a reference image that is in the correct position, and then apply a perspective warp to make the distorted, angled image correct.
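
The calibration side looks roughly like this so far (simplified sketch; the ROI size is a guess and it assumes the corner ordering comes out the same for the reference and live images):

```python
import cv2
import numpy as np

PATTERN = (9, 6)  # inner-corner count of each printed chessboard

def corners_in_rois(gray, roi_frac=0.35):
    """find the four corner chessboards by searching a ROI in each image corner."""
    h, w = gray.shape
    rh, rw = int(h * roi_frac), int(w * roi_frac)
    rois = [(0, 0), (0, w - rw), (h - rh, 0), (h - rh, w - rw)]  # TL, TR, BL, BR
    pts = []
    for y0, x0 in rois:
        found, c = cv2.findChessboardCorners(gray[y0:y0 + rh, x0:x0 + rw], PATTERN)
        if not found:
            raise RuntimeError("chessboard not found in a corner ROI")
        pts.append(c.reshape(-1, 2) + [x0, y0])  # back to full-image coordinates
    return np.vstack(pts)

# homography mapping the current (possibly tilted) view onto the reference view:
# H, _ = cv2.findHomography(corners_in_rois(live_gray), corners_in_rois(ref_gray), cv2.RANSAC)
# rectified = cv2.warpPerspective(live_bgr, H, (ref_w, ref_h))
```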

I have done some tests, and as the angle off center for an image increased, the distance from the same laser dot to the target increased. This makes me want to find a tolerance for how far off the camera can be. I will use a live display of the camera and some sort of box to show where the chessboards should be, kind of like an online bank check deposit.

Do you have any ideas on testing this tolerance for the box on the live display? Or are there better ways of creating a homography matrix?

We are fixing an old test that takes a long time, so detecting the laser does need to be fast, but not to the degree you are thinking (I am assuming). As long as the camera is calibrated to the correct position and can take pictures, processing the image to find the distance will work great.

I know this was a dump, but any help will be greatly appreciated.

Thank you,
Teagan

It sounds like you have a plan. I can’t see what you are doing from where I’m sitting, so I can only offer general suggestions and even those are based on assumptions. For example, I’m not sure if the screen is just for the laser, or if there are other things being projected on it. From some of the things you have said I get the sense that you are detecting the laser and then measuring how far the laser is from some reference point on the screen, but I’m just guessing. It also sounds like you need the camera tilted with respect to the screen for some reason (I’d guess so it has a clear view of the screen), and you are worried about how the angle affects the accuracy.

A few general suggestions / thoughts.

You might be able to use a fixed threshold value, but in many real-world cases I find that a dynamic threshold is needed. Particularly if your camera has auto-exposure enabled, or if ambient lighting isn’t easily controlled (which usually seems to be the case.)
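
For example, something like this (untested sketch; `red_only` is assumed to be the 8-bit “red minus max(green, blue)” image you described):

```python
import cv2

def laser_mask(red_only, rel=0.6):
    """dynamic threshold on the single-channel laser-response image."""
    # option 1: Otsu picks the split point from the histogram
    _, mask = cv2.threshold(red_only, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # option 2 (alternative): keep only pixels near the brightest response
    # _, mask = cv2.threshold(red_only, int(rel * red_only.max()), 255, cv2.THRESH_BINARY)
    return mask
```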

Depending on the camera optics and your accuracy requirements, you might need to calibrate the intrinsics of the camera first. (Are the effects of lens distortion apparent in your images? Do straight lines in the world look straight in your images?) Many lenses with a 90 deg FOV or less have low enough distortion that you can ignore it, but just keep in mind that as your accuracy requirements go up, you have to start paying attention to more and more sources of error, and optical distortion might be important enough to model.
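
If you do end up calibrating the intrinsics yourself, the standard OpenCV flow is roughly this (a sketch; the filenames are hypothetical and the pattern size assumes your 9x6 board):

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)  # board units

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.jpg"):  # several photos of the board at different poses
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)  # rule of thumb: well under a pixel is good
```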

If your screen is flat, your optics don’t have much distortion, and your accuracy requirements aren’t too tough, then I’d be inclined to just use a homography. It sounds like you’ve already gone down that path, so I’d probably stick with it until you have a reason not to use it. It’s not clear to me how often you want to calibrate this rig. Once per year? Once per day? Once per second? The answer matters to which approach I’d take.

Assuming things are “locked down” and you don’t need frequent calibration, I’d probably start simple, at least for the prototype phase. If the corners of your screen are fixed and can be measured accurately, I’d probably just take a picture of the screen (with the camera you plan to use, in the position/orientation you intend to use it in) and then go manually locate the corners in the image using photoshop or gimp or whatever. Then calculate your homography based on the hand-picked image locations + world (plane) points that you measured somehow. Maybe you could put some controllable LEDs in the corners of the screen and automate the process that way. (I don’t think I quite understand how you are using the chessboard targets in the corners of the screen, so I can’t really comment on that, but if it’s working for you, don’t let me lead you astray.)
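
Something like this, for example (all the numbers below are made up; they would come from your hand-picked pixel locations and measured screen dimensions):

```python
import cv2
import numpy as np

img_corners = np.float32([[ 212,  148],    # top-left (pixels, picked by hand)
                          [1745,  162],    # top-right
                          [1738, 1012],    # bottom-right
                          [ 205,  998]])   # bottom-left

scr_corners = np.float32([[   0,   0],     # same corners in screen units (e.g. mm)
                          [1200,   0],
                          [1200, 700],
                          [   0, 700]])

H, _ = cv2.findHomography(img_corners, scr_corners)   # pixel -> screen plane
laser_px = np.float32([[[980, 540]]])                 # detected dot (pixels)
laser_scr = cv2.perspectiveTransform(laser_px, H)     # dot in screen units
print(laser_scr)
```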

In the eventual “real” version of this project, how is the screen mounted? How is the camera mounted? I ask this because, for example, the screen might move quite a bit due to air currents. (HVAC kicks on, someone opens a door, etc.) Maybe that’s not an issue? And if the camera is mounted overhead by connecting to the building structure, you might find that it moves/vibrates a lot when someone closes a door in a nearby office. Hopefully you won’t have any surprises like this, but in my experience going from an idea to a working system usually involves addressing some unanticipated issues. The best way to uncover them is to implement something and see how well it works. :slight_smile:

As for camera angle and accuracy, I’d probably try to not be more than 30 degrees off axis if I could help it, but that’s not a hard and fast rule by any means.


Hey Steve,

Thank you for all your insight. I will try to paint the picture of this project a little better.

I am in a group building an automated tester to test the accuracy of motor actuators; more specifically, the actuators that control the angle of a side-view mirror. This is done by shooting a laser at the mirror and reflecting it onto a target grid that an OEM created. The angle of the actuator is set by two voltages, horizontal and vertical, and essentially we want to make sure that if we recall those voltages, the mirror returns to the same angle.

The current test is slow and takes a long time to set up, so my solution is to make it faster and automated. Essentially, everything is done on this mobile test desk made out of 80/20. The screen is mounted vertically (perpendicular to the table) and four plates that have the chessboards are fastened on top of it (this helps make sure the screen doesn’t move and gives a more permanent design). Also, the screen will be pulled taut at the top and bottom, so HVAC/wind should not be a concern… hopefully.

As for the actual targets, which are specific to each OEM, we decided to create a bitmap of where the target centers are. This would let the tester handle multiple OEMs rather than only Ford’s test setup (although Ford’s test is more thorough than other companies’). So after all the image processing to get the laser dot down to one pixel, the bitmap target location would be added on top of it, and the distance formula would be used.

My whole problem with camera calibration is the fact that the camera COULD be bumped into, offsetting the angle. The camera is mounted on 80/20 on a sort of slider rack to move horizontally with respect to the screen, and the whole mount can also be moved vertically with respect to the screen. The camera will be mounted on some sort of fine-tunable ball joint… frankly, I need to do more research on that.

Oh, by the way, the camera was requested to be a GoPro HERO13 Black. Not my first choice, but I was urged to use it in this test by my sponsor (annoying, I know). It has a 156-degree FOV, so now that you say 90 degrees or less would be better, it is frustrating that I had to go this route. But I have done a lot of work getting live display feedback to work so an operator can physically adjust the camera.

I have done some very rudimentary tests, not with the actual camera or laser, to test my code with OpenCV functions. I can’t seem to upload photos, but I printed the 9x6 chessboards and placed them at the four corners of a rectangle on a whiteboard. I took a reference image straight on, then had a group mate draw a red dot and took a control image (I tried my best not to move). Then I also took an angled image (very exaggerated, but I do not know the exact angle off axis). I should add that if the image were broken up into quadrants, the dot was in Q1.

From the reference image, I ran the find-chessboard-corners function four times to find the corners for the reference (I do like your idea of hand-picking the pixel corners for the reference image; I need to try that). I then ran a test with the control and the angled image. For ease of this test, I made the center of the image the target. For the control image, the distance was 874.08 pixels, while for the angled image, the distance was 882.56 pixels.

This is why I want a live display, so the technician can line up the camera in a box fitting the chessboard corners and the homography matrix only has to fix minimal angles. But in reality, the camera will very rarely be out of the correct position; all this calibration is just for the “what ifs.”

That is my plan. I know there are a lot of kinks to work out, but I think it is doable. Also, if you’d like, I can share my email to send you the result pictures so we can stay in touch (there is probably some sort of direct chat on here so we can exchange privately).

I want to thank you for your engagement and assistance. I only have so much theoretical knowledge from the one image processing course I took, and I am always wondering whether there are better ways to solve this test.

Do I understand correctly that each test is run in a short amount of time? Something like:

  1. put the mirror in the test fixture
  2. command it to a certain position (with a known voltage)
  3. measure the position of the laser reflected by the mirror onto the screen
  4. change the control voltage
  5. measure the position again (?)
  6. change the control voltage to the original value
  7. measure the position of the laser for a third time
  8. compare the measurements (image position) from steps 3 and 7
  9. determine if the mirror returned to the original position within some allowed tolerance

If that’s the case, and you can be sure the camera doesn’t move during the test, calibration might not be so important. At the end of the day you are just looking for the laser returning to the original spot within some tolerance? So you don’t really need absolute measurements, but just relative comparisons? Having said that, I do think a calibrated rig would be better overall for a few reasons, particularly because it would support more sophisticated tests in the future…but as a starting point maybe just compare the difference between the two measurements in pixels?
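
The relative check itself could be as simple as this (a sketch):

```python
import numpy as np

def returned_to_position(p_before, p_after, tol_px):
    """p_before / p_after are (x, y) laser centroids in pixels."""
    return float(np.hypot(p_after[0] - p_before[0],
                          p_after[1] - p_before[1])) <= tol_px
```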

I don’t have any experience with that camera, but I suspect it’s fairly distorted. I also suspect you can find calibration parameters online pretty easily - they won’t match your exact camera, but might be good enough to use for your purposes. I do think you will need to do something to correct the distortion (based on what I’m imagining). The homography can only map linear / projective transformations, and the lens distortion is nonlinear…so you have to handle that separately.
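
Roughly like this (a sketch; `K` and `dist` would be intrinsics you find online or calibrate yourself, and the homography should also be fit on undistorted corner points):

```python
import cv2
import numpy as np

def laser_px_to_screen(laser_px, K, dist, H):
    pt = np.float32([[laser_px]])                     # shape (1, 1, 2)
    undist = cv2.undistortPoints(pt, K, dist, P=K)    # remove lens distortion, keep pixel coords
    return cv2.perspectiveTransform(undist, H)[0, 0]  # (x, y) on the screen plane
```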

If the laser dot will only be found in a small portion of the image, and you can position your camera so that the laser shows up near the center of the image, you might be able to get away with using a homography for just that part of the image (where distortion is minimal), but there are pitfalls to this approach and really I would suggest a different camera / lens if that’s possible. I don’t think the GoPro was a good choice, but I understand you might be stuck with it. (if so, I’d look for some calibration parameters online for that camera and see how well they work. Calibrating the intrinsics yourself is a bit of a side project and I’d try to avoid it at this stage.)

I think you might be limited in uploading images at first, but you will be allowed to upload images once your post count is a little higher (I think…)