calibrateHandEye and calibrateRobotWorldHandEye with 3 dof camera on hand

I need to calibrate a multi-sensor machine with a camera and other sensors mounted on the hand. The hand carrying the camera has only 3 degrees of freedom (translations in x, y, z, NO rotations), so to calibrate the camera I have to acquire a calibration plate (circle grid or chessboard) placed in more than one position.

So I would acquire the calibration plate N1 times in position_1, N2 times in position_2 (a rotation of position_1), ..., Nm times in position_m (a rotation of position_(m-1)).

These acquisitions can certainly be used with cv::calibrateCamera(), but I was wondering: can they also be used with calibrateHandEye() or calibrateRobotWorldHandEye()?

The documentation is not very explicit, but it seems that calibrateHandEye requires b_T_t to be constant (not my case) and that calibrateRobotWorldHandEye requires w_T_b to be constant (also not my case), so I suspect it won't be possible to use these two methods in my application. Maybe the documentation could be improved to make cases like mine clearer.
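For reference, this is my understanding of the chains behind the two functions (using a_T_b to denote the transform that maps frame b into frame a; my reading, not a quote from the docs):

calibrateHandEye:           b_T_g(i) · g_T_c · c_T_t(i) = b_T_t   (b_T_t assumed constant over all i)
calibrateRobotWorldHandEye: w_T_c(i) · c_T_g · g_T_b(i) = w_T_b   (w_T_b assumed constant over all i)

In my setup the plate is rotated between groups of acquisitions, so neither b_T_t nor w_T_b stays fixed.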

Could you please give some advice about this? Thank you very much.

Hi there, I am clearly not an expert, but in my understanding of the subject the short answer is: you can't. Within a single call to calibrateHandEye, all the acquired data needs to share the same object pose in the robot's base frame (the grid, in your case).

The algorithms that solve this problem minimize an error function. That error function is built from a kinematic chain using the information you provide, and that kinematic chain is assumed to be the same for every data point you pass to the OpenCV function.
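Here is a minimal numpy sketch of that constraint (a toy demo, not OpenCV's actual solver): with a fixed target, the relative gripper motion A and relative camera motion B satisfy A·X = X·B for the true hand-eye transform X, because the constant b_T_t cancels out of the chain; if the target moves between acquisitions, the identity breaks.

```python
import numpy as np

def rot_z(a):
    # rotation about z by angle a (radians)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def tf(R, t):
    # 4x4 homogeneous transform from rotation R and translation t
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

inv = np.linalg.inv

# hypothetical ground-truth hand-eye transform X = g_T_c (the unknown)
X = tf(rot_z(0.3), [0.05, 0.00, 0.10])

# two gripper poses b_T_g in the robot base frame
b_T_g1 = tf(rot_z(0.0), [0.40, 0.00, 0.30])
b_T_g2 = tf(rot_z(0.7), [0.50, 0.10, 0.30])

# case 1: target fixed in the base frame -> the chain closes, AX = XB holds
b_T_t = tf(rot_z(1.0), [1.00, 0.20, 0.00])
c_T_t1 = inv(X) @ inv(b_T_g1) @ b_T_t     # simulated camera observation
c_T_t2 = inv(X) @ inv(b_T_g2) @ b_T_t
A = inv(b_T_g2) @ b_T_g1                  # relative gripper motion
B = c_T_t2 @ inv(c_T_t1)                  # relative camera motion
print(np.allclose(A @ X, X @ B))          # True

# case 2: target moved between the two acquisitions -> AX = XB breaks
b_T_t_moved = tf(rot_z(1.5), [1.00, 0.20, 0.00])
c_T_t2m = inv(X) @ inv(b_T_g2) @ b_T_t_moved
Bm = c_T_t2m @ inv(c_T_t1)
print(np.allclose(A @ X, X @ Bm))         # False
```

This is exactly why mixing acquisitions from different plate positions in one calibrateHandEye call would corrupt the estimate: the B motions from different plate positions are inconsistent with any single X.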

I guess you could build your own calibration where the object pose is known but allowed to change between data points (you would have to pass those poses as input arguments), but that is not how the OpenCV calibrateHandEye behaves.
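For example, if the plate's pose in the base frame were known at every acquisition (say, from a calibrated turntable), each acquisition i would give the hand-eye transform in closed form from b_T_t(i) = b_T_g(i) · X · c_T_t(i), and the per-acquisition estimates could simply be averaged. A sketch under those assumptions, not a robust solver (names and data are hypothetical):

```python
import numpy as np

def rot_z(a):
    # rotation about z by angle a (radians)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def tf(R, t):
    # 4x4 homogeneous transform from rotation R and translation t
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def average_handeye(b_T_g, b_T_t, c_T_t):
    # Closed form per acquisition from b_T_t[i] = b_T_g[i] @ X @ c_T_t[i]:
    #   X_i = inv(b_T_g[i]) @ b_T_t[i] @ inv(c_T_t[i])
    # then average: mean translation + chordal mean of the rotations
    # (project the averaged rotation matrix back onto SO(3) via SVD).
    inv = np.linalg.inv
    Xs = [inv(g) @ w @ inv(c) for g, w, c in zip(b_T_g, b_T_t, c_T_t)]
    t_mean = np.mean([Xi[:3, 3] for Xi in Xs], axis=0)
    U, _, Vt = np.linalg.svd(np.mean([Xi[:3, :3] for Xi in Xs], axis=0))
    R_mean = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    return tf(R_mean, t_mean)

# synthetic check: translation-only gripper (as in the question) and a
# target that MOVES between acquisitions but whose base pose is known
X_true = tf(rot_z(0.2), [0.03, 0.01, 0.08])
b_T_g = [tf(np.eye(3), [0.40, 0.00, 0.30]), tf(np.eye(3), [0.50, 0.10, 0.25])]
b_T_t = [tf(rot_z(0.5), [1.00, 0.00, 0.00]), tf(rot_z(1.2), [1.00, 0.00, 0.00])]
c_T_t = [np.linalg.inv(g @ X_true) @ w for g, w in zip(b_T_g, b_T_t)]
X_est = average_handeye(b_T_g, b_T_t, c_T_t)
print(np.allclose(X_est, X_true))  # True (noise-free data)
```

With real, noisy measurements you would want a proper nonlinear refinement (e.g. reprojection-error minimization over X), but the sketch shows why knowing the per-acquisition object pose removes the need for it to stay constant.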