The more I learn about epipolar geometry, the more confused I get about this.
In epipolar geometry you need to know the distance ‘f’ in metric units to be able to obtain the 3D coordinates of the detected pattern points, triangulate features in stereo vision, etc.
Now, how is this possible if all OpenCV calculates is fx and fy in pixels, and we are not telling it the sensor size (given by the manufacturer) nor the manufacturer's focal length (metric)?
Intrinsic parameters of the camera. As mentioned before, in this problem the camera is assumed to be calibrated. In other words, you need to know the focal length of the camera, the optical center in the image, and the radial distortion parameters. So you need to calibrate your camera. Of course, for the lazy dudes and dudettes among us, this is too much work. Can I supply a hack? Of course I can! We are already in approximation land by not using an accurate 3D model. We can approximate the optical center by the center of the image, approximate the focal length by the width of the image in pixels, and assume that radial distortion does not exist. Boom! You did not even have to get up from your couch!
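For reference, a minimal sketch of that hack in Python terms; the image size below is just an example, not something from the quoted tutorial:

```python
import numpy as np

# Approximate intrinsics for a w x h image, as described above:
# focal length ~ image width (in pixels), optical center ~ image center.
w, h = 1280, 720  # example image size (assumption)
camera_matrix = np.array([[w, 0, w / 2],
                          [0, w, h / 2],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume no radial/tangential distortion
```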
I.e., I am aware of the calibration process and each of the terms obtained from it. Which goes back to the same point: fx and fy are not in metric units after the calibration process, so how does OpenCV get the focal length in metric units needed for epipolar geometry calculations?
@DoDoM You can have a look at my reply to the other thread where you asked the same question: Disparity calculations and camera centers - #4 by lpea. To summarize, only the ratio between the focal length and the pixel size matters, so it is sufficient to estimate the focal length “in pixels”.
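To illustrate with made-up numbers (the 4 mm lens and 2 µm pixel pitch below are assumptions, not values OpenCV ever sees):

```python
# Hypothetical datasheet values -- calibration never needs these.
f_mm = 4.0              # manufacturer focal length, in mm (assumption)
pixel_pitch_mm = 0.002  # sensor pixel size, in mm, i.e. 2 um (assumption)

# Calibration estimates this ratio directly, already in pixel units:
fx = f_mm / pixel_pitch_mm  # = 2000 px

# Pinhole projection of a 3D point (X, Y, Z) in camera coordinates
# only ever uses fx (and cx), never f_mm or the pixel size separately:
X, Z, cx = 0.5, 2.0, 640.0
u = fx * X / Z + cx  # pixel column; the metric units of f cancel out
print(u)             # 1140.0
```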
Nevertheless, I’m asking about a monocular camera. The goal is to draw epilines (thus Z is variable). And my main concern is: since I need to know the camera-to-image-plane distance, which f should I use?
Would it be possible to apply the same principle to obtain the side lengths and angles of the triangle formed by the camera, the image plane, and the image pixel (x, y)? Using fx for the pixel x coordinate and fy for the pixel y coordinate, and then summing up the angles via the hypotenuses?
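A sketch of how those angles could be computed from the calibrated intrinsics alone; all numeric values here are placeholders:

```python
import math

# Placeholder intrinsics and a pixel of interest (assumptions).
fx, fy = 2000.0, 1990.0
cx, cy = 640.0, 360.0
u, v = 900.0, 500.0

# Angle of the viewing ray to the optical axis, per image axis:
# tan(theta_x) = (u - cx) / fx and tan(theta_y) = (v - cy) / fy,
# so the pixel-unit f works as the adjacent side of the triangle.
theta_x = math.atan2(u - cx, fx)
theta_y = math.atan2(v - cy, fy)

# Rather than summing angles, the full ray direction combines both
# axes directly (this is the normalized image point at depth 1):
ray = ((u - cx) / fx, (v - cy) / fy, 1.0)
print(math.degrees(theta_x), math.degrees(theta_y), ray)
```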
OpenCV uses fx with a pixel's image x coordinate and the optical center cx, all of them in pixel units, and does the same with fy, y, and cy in order to do the calculations, for example when minimizing the reprojection errors of the different solvePnP methods.
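As a sketch of what such a reprojection error computation looks like, entirely in pixel units (the intrinsics, pose, and point data below are placeholders, not from any real calibration):

```python
import numpy as np
import cv2

# Placeholder intrinsics and pose (assumptions for illustration).
camera_matrix = np.array([[2000.0, 0, 640], [0, 2000.0, 360], [0, 0, 1]])
dist_coeffs = np.zeros(5)
rvec = np.zeros(3)                 # rotation as a Rodrigues vector
tvec = np.array([0.0, 0.0, 2.0])   # translation: 2 units in front

object_points = np.array([[0.1, 0.0, 0.0],
                          [0.0, 0.1, 0.0],
                          [-0.1, -0.1, 0.0]])
# Pretend these are the detected pixel locations of those points.
detected = np.array([[740.5, 360.2], [640.1, 460.0], [539.8, 260.3]])

# Project with the current parameters -- everything stays in pixels,
# so no metric focal length is ever required.
projected, _ = cv2.projectPoints(object_points, rvec, tvec,
                                 camera_matrix, dist_coeffs)
err = np.linalg.norm(projected.reshape(-1, 2) - detected, axis=1)
print("RMS reprojection error (px):", np.sqrt((err ** 2).mean()))
```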
At which distance from the camera origin is OpenCV placing the image plane? At a distance fx (in pixels, which would be confusing) for the calculations on the x axis, and at a distance fy for the calculations on the y axis?
I am also finding other sources that mention normalizing it to a distance f = 1.
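That normalization is just a division by fx and fy; a minimal sketch with placeholder values (lens distortion ignored for brevity):

```python
# Back-project a pixel onto the "normalized" image plane at Z = 1.
fx, fy, cx, cy = 2000.0, 2000.0, 640.0, 360.0  # placeholder intrinsics
u, v = 900.0, 500.0                            # placeholder pixel

x_n = (u - cx) / fx  # normalized coordinates: the point (x_n, y_n, 1)
y_n = (v - cy) / fy  # lies on a virtual image plane at distance f = 1
print(x_n, y_n)      # 0.13 0.07

# This plane has no particular metric distance from the camera; it is
# a convention under which the focal length drops out entirely.
```

This is also what cv2.undistortPoints returns when no new projection matrix is passed: normalized coordinates on the f = 1 plane.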