Hey Steve,
Thank you for all your insight. I will try to paint the picture of this project a little better.
I am in a group building an automated tester to check the accuracy of motor actuators, specifically the actuators that control the angle of a side-view mirror. This is done by shooting a laser at the mirror and reflecting it onto a target grid that an OEM created. The actuator's angle corresponds to two voltages, horizontal and vertical, and essentially we want to make sure that if we recall those voltages, the mirror returns to the same angle.
The current test is slow and takes a long time to set up, so my solution is to make it faster and automated. Essentially, everything is done on a mobile test desk made out of 80/20. The screen is mounted vertically (perpendicular to the table), and four plates holding the chessboards are fastened on top of it (this keeps the screen from moving and gives a more permanent design). The screen will also be pulled taut at the top and bottom, so HVAC/wind should not be a concern… hopefully.
As for the actual targets, which are OEM-specific, we decided to create a bitmap of where the target centers are. This lets the tester handle multiple OEMs rather than only Ford's test setup (although Ford's test is more thorough than other companies'). So after all the image processing reduces the laser dot to one pixel, the bitmap target location would be overlaid on it, and the distance formula would give the error.
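To make that distance step concrete, here is a minimal sketch, assuming the laser centroid has already been reduced to a single pixel and the OEM bitmap gives target centers as pixel coordinates (the target names and coordinates below are made up):

```python
import numpy as np

# Hypothetical per-OEM target map: target name -> (x, y) pixel center
# on the screen. These names and coordinates are invented for the sketch.
OEM_TARGETS = {
    "home": (640.0, 360.0),
    "up_10deg": (640.0, 210.0),
}

def laser_error_px(laser_px, target_name, targets=OEM_TARGETS):
    """Euclidean distance (in pixels) between the detected laser
    centroid and the expected target center for this OEM."""
    tx, ty = targets[target_name]
    lx, ly = laser_px
    return float(np.hypot(lx - tx, ly - ty))

# Example: a dot detected 3 px right and 4 px above "home" is 5 px off.
print(laser_error_px((643.0, 356.0), "home"))  # -> 5.0
```

Once the screen-to-pixel scale is measured, this pixel error converts straight into a physical deviation on the target grid.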
My whole problem with camera calibration is that the camera COULD be bumped, offsetting the angle. The camera is mounted on 80/20 on a sort of slider rack that moves horizontally with respect to the screen, and the whole mount can also be moved vertically with respect to the screen. The camera will sit on some sort of fine-tunable ball joint… frankly, I need to do more research on that.
Oh, by the way, the camera was requested to be a GoPro HERO13 Black. Not my first choice, but my sponsor urged me to use it for this test (annoying, I know). It has a 156-degree FOV, so now that you're saying 90 degrees or less would be ideal, it's frustrating that I had to go this route. But I have done a lot of work getting live display feedback working so an operator can physically adjust the camera.
I have done some very rudimentary tests of my code with the OpenCV functions, though not with the camera or the actual laser. I can't seem to upload photos, but I printed the 9x6 chessboards and placed them as the four corners of a rectangle on a whiteboard. I took a reference image straight on, then had a group mate draw a red dot and took a control image (I tried my best not to move). I also took an angled image (very exaggerated, though I don't know the exact angle off). I should add that if the image were broken into quadrants, the dot was in Q1.
From the reference image, I ran findChessboardCorners four times to get the corners for the reference (I do like your idea of hand-picking the pixel corners for the reference image; I need to try that). Then I ran the test with the control and the angled images. For ease of this test, I made the center of the image the target. For the control image, the distance was 874.08 pixels, while for the angled image it was 882.56 pixels.
This is why I want the live display: so the technician can line up the camera inside a box fitted to the chessboard corners, and the homography matrix can correct for the small residual angles. In reality, the camera will very rarely be out of position; all this calibration is just for the "what ifs."
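Behind that live display, the alignment check I have in mind is something like the following (the function name and the 5 px tolerance are placeholders I made up):

```python
import numpy as np

def camera_aligned(ref_corners, live_corners, tol_px=5.0):
    """Return (ok, worst_px): True when every detected chessboard corner
    is within tol_px pixels of its reference location, plus the worst
    offset so the operator knows how far off the camera is."""
    d = np.linalg.norm(np.asarray(live_corners, dtype=float)
                       - np.asarray(ref_corners, dtype=float), axis=1)
    return bool(d.max() <= tol_px), float(d.max())

ref = [(100, 100), (1180, 100), (1180, 620), (100, 620)]
bumped = [(103, 101), (1181, 100), (1180, 622), (100, 620)]  # small nudge
print(camera_aligned(ref, bumped))  # small offsets -> still aligned
```

In the live loop, the overlay would just draw the reference box in one color and flip it to another once camera_aligned returns True, so the operator gets immediate feedback while adjusting the ball joint.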
That's my plan. I know there are a lot of kinks to work out, but I think it is doable. Also, if you'd like, I can share my email to send you the result pictures so we can stay in touch (there is probably some sort of direct chat on here so we can exchange privately).
I want to thank you for your engagement and assistance. I only have so much theoretical knowledge from the one image processing course I took, and I am always wondering whether there are better ways to solve this test.