Camera calibration with reference frame

Hello Guys,

I just read this tutorial about a camera calibration with Python and OpenCV:

https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html#exercises
This raises a question for me: is there no input for a reference coordinate system, i.e. for the position of my camera in the world?

note: the “tutroals” (!) site has been deprecated since 2014. please use official docs:

https://docs.opencv.org/master/dc/dbb/tutorial_py_calibration.html

intrinsic calibration doesn’t concern itself with coordinate frames. it’s all about the lens and “sensor” properties.

if you want to establish your camera’s coordinate frame relative to some other coordinate frame (or vice versa), that’s “extrinsic” calibration.
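for concreteness, here is a minimal sketch of that extrinsic step with cv2.solvePnP. it assumes a 9x6 inner-corner chessboard with 25 mm squares, a placeholder image path, and intrinsics saved from an earlier cv2.calibrateCamera run (the file names and sizes are just example assumptions, not anything from the tutorial):

```python
import numpy as np
import cv2

# example pattern: 9x6 inner corners, 25 mm squares (adjust to your board)
pattern_size = (9, 6)
square_size = 0.025  # metres

# intrinsics from a previous cv2.calibrateCamera run
# (placeholder file names -- load/store them however you like)
camera_matrix = np.load("camera_matrix.npy")
dist_coeffs = np.load("dist_coeffs.npy")

# 3D corner coordinates expressed in the board's own frame (z = 0 plane)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

img = cv2.imread("view_of_board.png")  # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, pattern_size)

if found:
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    # extrinsics: pose of the board expressed in this camera's frame
    ok, rvec, tvec = cv2.solvePnP(objp, corners, camera_matrix, dist_coeffs)
```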


indeed, it never knows about that; all positions are with respect to the camera only

attach some gps to your cam, to get “world” coords :wink:


Thank you for the hint, I will keep that in mind for the future. You told me that I have to do the “extrinsic” calibration, OK, I understand. But what if I have multiple cameras? Is it possible to get the “extrinsic” parameters of the other cameras with respect to just one camera?

point them all at the same calibration pattern.

you get transformation matrices that map from the pattern’s frame into the camera frame. or you get rvec and tvec, which contain the same info in an inconvenient format; you can calculate a 4x4 matrix from that.

you can invert those matrices. you can multiply them. that’s how you get matrices transforming from any frame into any other.
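a minimal sketch of that rvec/tvec-to-4x4 conversion and chaining, assuming two cameras A and B that both saw the same pattern; the rvec/tvec values below are placeholders, in practice they come from solvePnP (or calibrateCamera) on each camera’s view:

```python
import numpy as np
import cv2

def pose_to_matrix(rvec, tvec):
    """Turn an rvec/tvec pair (as returned by solvePnP/calibrateCamera)
    into a 4x4 homogeneous transform: pattern frame -> camera frame."""
    R, _ = cv2.Rodrigues(rvec)          # 3x3 rotation from the rotation vector
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(tvec, dtype=float).ravel()
    return T

# placeholder values -- in practice these come from solvePnP on each
# camera's view of the *same* calibration pattern
rvec_A, tvec_A = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
rvec_B, tvec_B = np.array([0.0, 0.2, 0.0]), np.array([0.3, 0.0, 1.1])

T_board_to_A = pose_to_matrix(rvec_A, tvec_A)
T_board_to_B = pose_to_matrix(rvec_B, tvec_B)

# invert and chain: A -> board -> B, i.e. points expressed in camera A's
# frame get mapped into camera B's frame
T_A_to_B = T_board_to_B @ np.linalg.inv(T_board_to_A)
```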

computer graphics deals with this; there is a lot of material written about it.
