I am trying to do hand-eye calibration with my robot, but my transformation matrix is not very precise and has a large offset in translation. I think I could compensate for it with manual offsets, but I would rather fix the calibration itself. I suspect the error comes from solvePnP. I have some questions:
Which solvePnP method returns the most accurate translation and rotation?
When I pass my rotation, I convert it to a 3x3 rotation matrix with cv2.Rodrigues. Should I have done anything else?
The lines are collinear, and I don't have time constraints.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30000000, 1e-8)
I also use the termination criteria above.
Just to understand: are you using this function for hand-eye calibration?
It looks a little odd to me that
solvePnP() alone would lead to large inaccuracies. It could be something like not enough variation in the robot poses, or maybe a bad camera calibration?
Yes, it is.
We found out too late that the reason it did not work was that we assumed our camera's coordinate frame and OpenCV's camera coordinate frame were aligned…
The function worked as it should; we just made a mistake.
Quick question to make sure about calibrateHandEye(): I have to invert the base2gripper transformation when it is the eye-to-hand transformation I am looking for, and when it is eye-in-hand I should not take the inverse, right?
I have to invert the base2gripper transformation when it is the eye-to-hand transformation
Also have a look at this test for a complete example:
opencv/test_calibration_hand_eye.cpp at 50e8ad285b6fcf388c4283b2c433aac099a6562c · opencv/opencv · GitHub
I want to find a target with my camera. I need the transformation T_B^t, where B is the base and t is the target (i.e., the target in the base frame). From calibrateHandEye() I get the transformation T_C^G. The other matrices that exist are T_C^t, which is the target in the camera frame, and T_B^G, the transformation from base to gripper. To get the target, is this the correct formula?
T_B^t = T_B^G \cdot T_G^C \cdot T_C^t
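For what it's worth, that chain can be checked numerically. Here is a minimal numpy sketch (helper and variable names are mine, and the example poses are made up): T_b_g is the gripper in the base frame from forward kinematics, T_g_c the camera in the gripper frame (the inverse of the hand-eye result T_C^G), and T_c_t the target in the camera frame from solvePnP.

```python
import numpy as np

def hom(R, t):
    # Build a 4x4 homogeneous transform from a 3x3 rotation and a translation.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.ravel(t)
    return T

def rot_z(a):
    # Rotation about the z-axis by angle a (radians).
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Made-up example poses.
T_b_g = hom(rot_z(0.3), [0.4, 0.0, 0.5])     # gripper in base (T_B^G)
T_g_c = hom(rot_z(-0.1), [0.0, 0.02, 0.1])   # camera in gripper (T_G^C)
T_c_t = hom(rot_z(0.05), [0.1, -0.05, 0.6])  # target in camera (T_C^t)

# Target in the base frame: T_B^t = T_B^G * T_G^C * T_C^t
T_b_t = T_b_g @ T_g_c @ T_c_t

# The target's origin in base coordinates is the translation column.
p_base = T_b_t[:3, 3]
```

Reading the product right to left: a point in the target frame goes to camera coordinates, then to gripper coordinates, then to base coordinates.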
In the code I found this line:
Mat T_base2cam = homogeneousInverse(T_cam2gripper) * homogeneousInverse(T_gripper2base);
However, I don't get the order and the inverses, and I am not sure if this is the one I am looking for. Where is the eye-in-hand part?
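One way to read that line (my own numpy check, not from the OpenCV test file): inverting cam2gripper gives gripper2cam, inverting gripper2base gives base2gripper, and the product gripper2cam · base2gripper maps base coordinates into the camera frame, i.e. base2cam. The transforms below are made up purely to verify the identity.

```python
import numpy as np

def hom(R, t):
    # 4x4 homogeneous transform from a 3x3 rotation and a translation.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.ravel(t)
    return T

def inv(T):
    # Closed-form rigid-transform inverse: [R^T | -R^T t].
    R, t = T[:3, :3], T[:3, 3]
    return hom(R.T, -R.T @ t)

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# Made-up rigid transforms standing in for the calibration results.
T_cam2gripper = hom(rot_x(0.2), [0.0, 0.03, 0.1])
T_gripper2base = hom(rot_x(-0.4), [0.5, 0.1, 0.3])

# The line from the code: base -> gripper -> camera, read right to left.
T_base2cam = inv(T_cam2gripper) @ inv(T_gripper2base)

# The same frame relation built the direct way: cam -> gripper -> base.
T_cam2base = T_gripper2base @ T_cam2gripper
```

So the line is just inv(T_cam2base) written as a product of the two inverses.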
I tried the one from the code, and it looks like it might be what I am looking for. Could you explain it? Where am I wrong?
Eye-in-Hand configuration is the following:
The eye-in-hand calibration allows you to estimate the transformation between the camera and the robot end-effector.
If you have the pose of an object of interest with respect to the camera frame, you can transform it to have its pose with respect to the robot end-effector frame. Then, using the pose of the robot end-effector with respect to the robot base frame, you can have the pose of the object with respect to the robot base frame.
The pose of the robot end-effector with respect to the robot base frame can be retrieved using the robot kinematic model: the current joint positions give you the pose of the end-effector with respect to the robot base frame.
Eye-to-hand configuration is the following:
The camera is static with respect to the robot base frame.