Camera calibration from a set of 3D points and their corresponding 2D projections

Hi,

I have a set of 3D points in space, and I know their 2D projections on the image plane.
I would like to calibrate my camera using this dataset. However, cv::calibrateCamera takes a vector<vector<Point3f>> for the object points and a vector<vector<Point2f>> for the image points, whereas what I have is just a single vector of each.
Is there a way to use cv::calibrateCamera with my dataset, or is it impossible?

Regards,
Romain.

so, you have the equivalent data of a single checkerboard image?

Kind of, except that my 3D points do not have Z fixed to 0 but to 5. I have as many points as there are pixels in my image.

If your calibration target is planar (which it sounds like it is, just with a Z value of 5), then you won’t have enough information to get a full calibration of the camera. If you know the focal length you could pass it in along with the cv::CALIB_FIX_FOCAL_LENGTH flag (read the documentation / source for more info - you might have to pass in other flags, like cv::CALIB_USE_INTRINSIC_GUESS, but I’m not sure). I don’t know if a single planar target + a known / fixed focal length is enough to get the rest of the calibration, but it might be (I think so?). You also might want to pass in cv::CALIB_FIX_ASPECT_RATIO - but I’m not sure if that is necessary / how it interacts with CALIB_FIX_FOCAL_LENGTH.
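
Untested sketch of what I mean, with placeholder values for the image size and focal length (and calibrateWithKnownFocal is just a name I made up):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch: single view of a planar target, known focal length passed in as a fixed guess.
double calibrateWithKnownFocal(const std::vector<cv::Point3f>& pts3d,
                               const std::vector<cv::Point2f>& pts2d,
                               cv::Size imageSize, double focalPx)
{
    // One view -> the outer vectors have a single element each.
    std::vector<std::vector<cv::Point3f>> objectPoints{ pts3d };
    std::vector<std::vector<cv::Point2f>> imagePoints { pts2d };

    // Initial guess: known focal length (in pixels), principal point at the image center.
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        focalPx, 0.0,     imageSize.width  / 2.0,
        0.0,     focalPx, imageSize.height / 2.0,
        0.0,     0.0,     1.0);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

    std::vector<cv::Mat> rvecs, tvecs;
    int flags = cv::CALIB_USE_INTRINSIC_GUESS
              | cv::CALIB_FIX_FOCAL_LENGTH
              | cv::CALIB_FIX_ASPECT_RATIO;

    // Returns the RMS reprojection error; cameraMatrix / distCoeffs hold the result.
    return cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                               cameraMatrix, distCoeffs, rvecs, tvecs, flags);
}
```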

If your calibration target has depth to it - different Z values for the known world points - then you can get a full calibration from a single image, assuming you have enough points and they aren’t in a degenerate configuration (I think the points can’t all lie on a quadric surface, but I’m not sure, and that’s easy to avoid / unlikely).

In any case, when you have only one image (and therefore one set of image points and corresponding object points), you just put each of your flat vectors into an outer vector (which will have one element) and pass those to the calibration function. It’s just the special case of one image, so each outer vector has one entry. I have done this with a 3D calibration target and it works just fine.
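
Something like this minimal sketch (calibrateSingleView is just an illustrative name):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// pts3d / pts2d are the flat vectors of correspondences you already have.
double calibrateSingleView(const std::vector<cv::Point3f>& pts3d,
                           const std::vector<cv::Point2f>& pts2d,
                           cv::Size imageSize,
                           cv::Mat& cameraMatrix, cv::Mat& distCoeffs)
{
    // One view: wrap each flat vector in an outer vector with a single element.
    std::vector<std::vector<cv::Point3f>> objectPoints{ pts3d };
    std::vector<std::vector<cv::Point2f>> imagePoints { pts2d };

    std::vector<cv::Mat> rvecs, tvecs;   // one rvec / tvec per view (here: one)

    // Returns the RMS reprojection error.
    return cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                               cameraMatrix, distCoeffs, rvecs, tvecs);
}
```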

-Steve

the API takes vectors of vectors so you can have a different number of points (and different models too) in each view.

since you have “one view”, just take your points for the model and the image, put them in a vector each, and then pass one-element vectors containing those.

it should work, if your data is okay. calibration requires, overall, a three-dimensional cloud of points, so you can either give that in a single view, or you take multiple views of a planar model/object.

the “unusual” Z, not being zero, is acceptable. don’t worry.

Thanks @Steve_in_Denver and @crackwitz for your answers. I’ll try that.

Hi there,
calibrating from a single image of a non-planar target is possible.
However, OpenCV uses Zhang’s method for initialization, which is based on per-view homographies. Therefore, OpenCV will throw an error/exception in this case.
Are you able to divide your data into “virtual views” of co-planar sets of points perhaps?
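
For illustration only - and assuming your points happen to fall on a small number of distinct Z planes, which may not be true for your data - splitting them into per-plane views could look roughly like this (splitIntoVirtualViews is just a made-up name):

```cpp
#include <opencv2/calib3d.hpp>
#include <map>
#include <vector>

// Sketch: group correspondences by their (quantized) Z value so that each group
// forms a coplanar "virtual view", shifted to Z = 0 so it behaves like a planar target.
void splitIntoVirtualViews(const std::vector<cv::Point3f>& pts3d,
                           const std::vector<cv::Point2f>& pts2d,
                           std::vector<std::vector<cv::Point3f>>& objectPoints,
                           std::vector<std::vector<cv::Point2f>>& imagePoints)
{
    std::map<int, size_t> viewIndexByZ;
    for (size_t i = 0; i < pts3d.size(); ++i)
    {
        int zKey = cvRound(pts3d[i].z * 1000.0);   // quantize Z to group coplanar points
        auto it = viewIndexByZ.find(zKey);
        if (it == viewIndexByZ.end())
        {
            it = viewIndexByZ.emplace(zKey, objectPoints.size()).first;
            objectPoints.emplace_back();
            imagePoints.emplace_back();
        }
        cv::Point3f p = pts3d[i];
        p.z = 0.f;                                 // each virtual view becomes a Z = 0 plane
        objectPoints[it->second].push_back(p);
        imagePoints[it->second].push_back(pts2d[i]);
    }
}
```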
Our commercial camera calibration library, libCalib, does support non-planar targets. You can check it out here: Calib Camera Calibrator Software for Geometric Camera Calibration [Win/Linux] – calib.io

Jakob // Calib.io


I have done this before with OpenCV, and I didn’t have any errors. In my case I had a planar calibration target that I captured at multiple Z positions (with precisely known distances) and combined into a single set of 3D->2D correspondences (so the 3D points had different Z values). My point is that it wasn’t a single 3D target, but rather a 2D target with known Z displacements - though I think that is practically equivalent. In any case it worked just fine (and gets used regularly without issue), so there is a way to get it to work. It’s possible that I supplied an intrinsic guess for the cameras or otherwise constrained the problem so the algorithm was able to work. Maybe you have some insight into why it works in this specific case when it wouldn’t be expected to work for 3D targets in general? For the OP, I would suggest trying it to see if it works for your case, as I got quite acceptable results (in my admittedly specific / constrained use case).

-Steve

Steve, you have set an intrinsic guess. For close-to-planar data, one way to obtain the guess could be to set the z-coordinates to zero in a first run. See link below.
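
A rough, untested sketch of that two-pass idea (twoStepCalibration is just an illustrative name):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch: pass 1 flattens the target to Z = 0 to obtain an intrinsic guess via the
// standard planar initialization; pass 2 refines with the true (near-planar) 3D points.
double twoStepCalibration(const std::vector<cv::Point3f>& pts3d,
                          const std::vector<cv::Point2f>& pts2d,
                          cv::Size imageSize,
                          cv::Mat& cameraMatrix, cv::Mat& distCoeffs)
{
    std::vector<cv::Point3f> flat = pts3d;
    for (auto& p : flat) p.z = 0.f;                 // first run: z-coordinates set to zero

    std::vector<std::vector<cv::Point3f>> objFlat{ flat };
    std::vector<std::vector<cv::Point2f>> img    { pts2d };
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objFlat, img, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs);

    // Second run: real 3D coordinates, starting from the guess obtained above.
    std::vector<std::vector<cv::Point3f>> obj{ pts3d };
    return cv::calibrateCamera(obj, img, imageSize, cameraMatrix, distCoeffs,
                               rvecs, tvecs, cv::CALIB_USE_INTRINSIC_GUESS);
}
```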

Hi, it seems that I am running into the same situation. Could you share your final solution to this problem?