Camera calibration from a set of 3D and corresponding 2D points

The API takes vectors of vectors, so each view can have a different number of points (and even a different model).

Since you have "one view", put your model points and your image points into one vector each, then pass 1-element vectors containing those.

It should work if your data is okay. Calibration requires, overall, a three-dimensional cloud of points: you can either provide that in a single view, or take multiple views of a planar model/object.

The "unusual" non-zero Z is acceptable. Don't worry.