Hi all,
I am trying to calibrate the Raspberry Pi HQ camera with OpenCV and a 7x7 chessboard, and I could use your opinion on the data I have. The standard deviations of the intrinsics returned by OpenCV seem to be completely off: the principal point has a standard deviation of ~13 pixels in X and ~19 pixels in Y, and the focal length has a standard deviation of ~400 pixels in both X and Y. The overall RMS reprojection error returned by OpenCV is ~1.6 pixels. The image size is 3040x4056.
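For reference, this is roughly the pipeline I am running (paths, square size, and filenames are placeholders, not my exact setup). I am using cv2.calibrateCameraExtended to get the standard deviations. Note that the pattern size passed to findChessboardCorners counts inner corners, so a board with 7x7 squares gives a (6, 6) grid:

```python
import glob
import cv2
import numpy as np

pattern_size = (6, 6)      # inner corners of a 7x7-square chessboard
square_size = 10.0         # mm, placeholder value

# 3D coordinates of the board corners in the board plane (z = 0)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_points, img_points = [], []
image_size = None

for path in glob.glob("calib/*.png"):          # placeholder path
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = img.shape[::-1]               # (width, height)
    found, corners = cv2.findChessboardCorners(img, pattern_size)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        img, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

# calibrateCameraExtended also returns per-parameter standard deviations
rms, K, dist, rvecs, tvecs, std_intr, std_extr, per_view_errors = \
    cv2.calibrateCameraExtended(obj_points, img_points, image_size, None, None)

print("RMS reprojection error:", rms)
print("fx/fy std dev:", std_intr[0][0], std_intr[1][0])
print("cx/cy std dev:", std_intr[2][0], std_intr[3][0])
```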
Additionally, after investigating the corrected images produced with the computed calibration data in more detail, I found that measurements are not consistent across the field of view: distances between points that are evenly spaced in reality get progressively smaller from top to bottom and from left to right in the corrected image. To me this looks like a perspective error that the algorithm did not account for.
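In case it matters, the corrected images I am measuring on are generated roughly like this (continuing from the snippet above; "test.png" is a placeholder):

```python
# Compute a new camera matrix and undistort a test image with the
# calibration result; alpha=1 keeps all source pixels in the output.
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, image_size, 1, image_size)
undistorted = cv2.undistort(cv2.imread("test.png"), K, dist, None, new_K)
```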
The main question is: given all this data, should I keep trying to reduce the reprojection error, or is this about the best the algorithm can do? From what I have read, the ideal reprojection error should be more than an order of magnitude smaller, closer to 0.1 pixels.
Any input is appreciated, thank you!