I’m trying to figure out how large (in pixels) an object at distance d from the camera would appear in the image.
In my understanding this is exactly what the camera matrix should do: given a 3D point (x, y, d), it should project that point onto the image plane at pixel coordinates (u, v). But the documentation introduces an “arbitrary scaling factor” s, which seems to throw away all the useful information that I put into the calibration, namely the physical coordinates of my chessboard corners in mm.
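To make the question concrete, here is a minimal sketch of my understanding of the projection equation, s · [u, v, 1]ᵀ = K · [x, y, d]ᵀ (the intrinsic matrix K and the point are made-up values, just for illustration):

```python
import numpy as np

# Hypothetical intrinsic matrix from calibration (fx, fy in pixels,
# cx, cy the principal point) -- values made up for illustration.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A 3D point in camera coordinates, in mm: x, y lateral, d the depth.
point = np.array([100.0, 50.0, 1000.0])  # (x, y, d)

# Projection: s * [u, v, 1]^T = K @ [x, y, d]^T
uvw = K @ point
s = uvw[2]          # the "arbitrary scaling factor" -- here it comes out as d
u, v = uvw[:2] / s  # pixel coordinates after normalizing by s

print(f"s = {s}, (u, v) = ({u:.1f}, {v:.1f})")
```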
To my understanding, if OpenCV didn’t normalize by that factor s, all the relevant information would fall out of the calibration process, including the physical sensor size and so on. Why is all of that thrown away? And if it isn’t, can I somehow retrieve the factor s so that I can do 3D reconstruction in physical units using the camera matrix?
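This is what I’m hoping is possible, i.e. undoing the projection to get back physical units, at least when the depth d is known (again with the same made-up values as above):

```python
import numpy as np

# Same hypothetical intrinsic matrix as in the sketch above.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

d = 1000.0                           # known depth in mm
uv1 = np.array([400.0, 280.0, 1.0])  # pixel (u, v) as a homogeneous vector

# Back-project: (x, y, z) = d * K^-1 @ [u, v, 1]^T
xyz = d * np.linalg.inv(K) @ uv1     # should recover (100, 50, 1000) in mm
print(xyz)
```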
Thanks!