What would you make of this output from drawChessboardCorners?

[attached image: drawChessboardCorners output with oversized corner circles]

Not sure why the circles are so huge. I do get the right number of corners detected by findChessboardCorners, and I am using the same pattern dimensions for both of these API calls.
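For reference, this is roughly the call pattern I’m using (a minimal sketch; the 9x6 pattern size and file names are placeholders, not my exact values):

```python
import cv2

# Inner-corner count (columns, rows) of the displayed board -- placeholder values
pattern_size = (9, 6)

img = cv2.imread("frame.png")                  # one frame from the video (placeholder path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

found, corners = cv2.findChessboardCorners(gray, pattern_size)
if found:
    # optional sub-pixel refinement, as in the standard tutorial
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

    # same pattern_size passed to the drawing call
    cv2.drawChessboardCorners(img, pattern_size, corners, found)
    cv2.imwrite("annotated.png", img)
```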

It’s an iPad display of the checkerboard pattern, captured by a rolling-shutter webcam during a video and annotated by drawChessboardCorners. It looks like the size of the circles simply does not adapt to the board’s distance from the camera, so they clutter the board area in frames where I move the board a little further back (from the board filling about 1/4 of the viewport down to, say, 1/16 of it).

Does this imply that the calibration algorithm is intended for near-field positioning of the board?

When the board takes up about 1/4 of the viewport, the output looks pretty good and standard (you’ll have to take my word for it, as I’m not allowed more than one image attachment as a new forum user).

opencv-python version 4.2.0.34

the problem here is that the resolution is simply too poor.

those circles have a fixed radius in pixels. if they are this “big”, you fed it a tiny picture.
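if the fixed radius bothers you, nothing stops you from drawing the corners yourself with a radius scaled to the board’s size in the image. rough sketch, untested:

```python
import cv2
import numpy as np

def draw_corners_scaled(img, corners, color=(0, 0, 255)):
    # corners: Nx1x2 array as returned by findChessboardCorners
    pts = corners.reshape(-1, 2)
    # radius proportional to the board's diagonal in pixels, instead of a fixed value
    diag = float(np.linalg.norm(pts.max(axis=0) - pts.min(axis=0)))
    radius = max(2, int(diag * 0.02))
    for x, y in pts:
        cv2.circle(img, (int(round(x)), int(round(y))), radius, color, 1, cv2.LINE_AA)
    return img
```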

without seeing precisely what you do (source code), there can only be speculation.

I think the algorithm is just meant for images where the calibration board is no less than 1/4 of the viewport/image, that’s all. It also seems that all the web examples for checkerboard calibration I’ve glanced over present the board relatively close to the camera. Clearly the hard-coded circle radius is no fit for a board that takes up only around 1/16 of the image. Mathematically it doesn’t have to be so, but I guess the standard implementations assume it. Still learning.

The resolution of the images is high; the image I posted here is just a crop of a small area of the full frame. My code is the same as the vanilla code samples for this kind of calibration.
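That is, essentially this shape, condensed from the standard OpenCV calibration tutorial (pattern size, square size, and paths are placeholders):

```python
import glob
import cv2
import numpy as np

pattern_size = (9, 6)    # inner corners (columns, rows) -- placeholder
square_size = 0.024      # square edge length in metres -- placeholder

# 3D board corner coordinates in the board's own frame (z = 0)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

objpoints, imgpoints = [], []
for fname in glob.glob("frames/*.png"):             # placeholder path
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```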

finding a pose isn’t calibration as such.

if you do want to perform calibration (i.e. of the lens), don’t put the pattern that far away.

if you want to find a pose, consider using some type of Augmented Reality marker. checkerboards are good for calibration, but if you have tiny/far away views of them, they perform worse than actual AR markers.
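for example, pose from a single ArUco marker looks roughly like this (needs opencv-contrib-python; the dictionary, marker size and intrinsics below are placeholders):

```python
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
marker_length = 0.05                       # marker side length in metres -- placeholder

# intrinsics/distortion from a prior calibration (placeholder values here)
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0,    0.0,   1.0]])
dist = np.zeros(5)

img = cv2.imread("frame.png")
corners, ids, _rejected = cv2.aruco.detectMarkers(img, dictionary)
if ids is not None:
    # one rvec/tvec pair per detected marker
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, marker_length, K, dist)
```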

perhaps you should explain what you’re doing and why. from what little I can gather, I find your approach requires questioning.

I apologize for any lack of clarity on my part. I am trying to calibrate (a.k.a. camera resectioning), in order to obtain the matrices/data needed as a basis for human pose interpretation. I already have models that retrieve body landmarks, and I intend to use the calibration results to reproject the 2D landmarks from those models onto a plane parallel to the ground/horizon, so that the camera’s angle is neutralized.
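Concretely, the reprojection step I have in mind looks roughly like this sketch; it assumes the checkerboard lies flat on the ground, and it is only valid for landmarks that are (approximately) on that ground plane, e.g. foot contact points. Names and values are placeholders:

```python
import cv2
import numpy as np

pattern_size = (9, 6)       # inner corners -- placeholder
square_size = 0.024         # metres -- placeholder

# Board corner coordinates measured on the ground plane (x, y in metres)
board_xy = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2).astype(np.float32)
board_xy *= square_size

gray = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY)
found, img_corners = cv2.findChessboardCorners(gray, pattern_size)
if found:
    # homography mapping image pixels -> ground-plane coordinates
    H, _ = cv2.findHomography(img_corners.reshape(-1, 2), board_xy)

    # 2D landmarks from the pose model, in pixels (placeholder value)
    landmarks_px = np.array([[640.0, 700.0]], dtype=np.float32)
    ground_xy = cv2.perspectiveTransform(landmarks_px.reshape(-1, 1, 2), H)
```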

Thanks for helping me become more confident that the calibration algorithm here expects close-up views of the calibration board, as I’m coming to understand.