Hey,
I’m trying to calibrate a camera with a zoom lens in a 3D space using ArUco markers — both the lens intrinsics/distortion and the spatial pose.
The basic idea is to do multiple calibration runs across a range of zoom/focus settings of the lens and then, using the zoom/focus data the camera transmits, blend between the intrinsics and distortion coefficients that calibrateCamera produced for each setting.
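For the blending step, this is roughly what I have in mind (a minimal sketch — the calibration table, zoom values, and matrices are placeholders for real calibrateCamera output, and plain linear interpolation is a simplification since focal length rarely varies linearly with zoom):

```python
import numpy as np

# Placeholder values standing in for calibrateCamera results at two zoom stops
K_wide = np.array([[1000., 0., 960.], [0., 1000., 540.], [0., 0., 1.]])
K_tele = np.array([[4000., 0., 960.], [0., 4000., 540.], [0., 0., 1.]])
dist_wide = np.zeros(5)
dist_tele = np.zeros(5)

# zoom value reported by the camera (normalized 0..1 here) -> calibration
calib_table = {0.0: (K_wide, dist_wide), 1.0: (K_tele, dist_tele)}

def blend_intrinsics(zoom, table):
    """Interpolate camera matrix and distortion coefficients between the
    two calibrated zoom positions that bracket the current zoom value."""
    zs = sorted(table)
    z0 = max((z for z in zs if z <= zoom), default=zs[0])
    z1 = min((z for z in zs if z >= zoom), default=zs[-1])
    if z0 == z1:
        return table[z0]
    t = (zoom - z0) / (z1 - z0)
    K0, d0 = table[z0]
    K1, d1 = table[z1]
    return (1 - t) * K0 + t * K1, (1 - t) * d0 + t * d1

K, dist = blend_intrinsics(0.3, calib_table)
```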
My main problem at the moment — and perhaps a question of whether the approach is right at all: when calibrating at a narrow zoom level, the reprojection error gets too large. Since I’m already using ~2000 markers for the space, the idea is now to do an initial detection pass and, if I only detect markers up to a certain threshold (let’s say 5), replace the detected ArUco markers with a set of smaller ArUco markers and rerun the detection. Halving the marker size would, in this example, give me 20 markers to detect instead of 5.
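For the detection fallback, I’m picturing something like this (a sketch assuming OpenCV 4.7+’s ArucoDetector API; the dictionary choices and threshold are just illustrative):

```python
import cv2

# Illustrative choices: large markers from one dictionary, the half-size
# replacement set from another, so the two passes can't confuse IDs
LARGE_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_1000)
SMALL_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_1000)
MIN_MARKERS = 5  # detection threshold from the description above

def detect_with_fallback(gray):
    """First pass over the large markers; if too few are visible at a
    narrow zoom, rerun detection against the smaller marker set."""
    params = cv2.aruco.DetectorParameters()
    corners, ids, _ = cv2.aruco.ArucoDetector(LARGE_DICT, params).detectMarkers(gray)
    if ids is not None and len(ids) >= MIN_MARKERS:
        return corners, ids
    corners, ids, _ = cv2.aruco.ArucoDetector(SMALL_DICT, params).detectMarkers(gray)
    return corners, ids
```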
Does this sound like a feasible approach or should I rethink the process?
Thank you
Markus