Camera calibration and Hand-eye calibration together

Hello, I am trying to use camera calibration together with hand-eye calibration. For camera calibration I use this code: opencv-examples/CalibrateCamera.py at master · kyle-bersani/opencv-examples · GitHub. For the robot-to-gripper transformation I use the following pipeline:

  • get the joint values
  • compute the forward kinematics
  • compute the transformation matrix
  • get the inverse of this matrix
  • pass them to the hand-eye calibration

My question is whether I can use the rvec and tvec outputs from camera calibration as inputs to hand-eye calibration. Unfortunately, our attempts to implement it end in bad results, so I am trying to find where the problem is.
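To make the question concrete, this is roughly the conversion I have in mind; the helper below is my own sketch (and as far as I can tell from the OpenCV documentation, cv2.calibrateHandEye also accepts 3x1 rotation vectors directly, so the cv2.Rodrigues step may even be optional):

import cv2
import numpy as np

def to_hand_eye_inputs(rvecs, tvecs):
    # rvecs/tvecs are the per-image board poses from camera calibration,
    # i.e. target-to-camera transformations (one per captured image)
    R_target2cam = [cv2.Rodrigues(rvec)[0] for rvec in rvecs]  # 3x1 Rodrigues -> 3x3 matrix
    t_target2cam = [np.asarray(tvec).reshape(3, 1) for tvec in tvecs]
    return R_target2cam, t_target2cam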

For Hand-Eye calibration, you can also look at these resources:

Thank you for the mentioned sources. Unfortunately, I could not tell from them whether the way I use the functions cv2.calibrateHandEye and aruco.calibrateCameraCharuco is correct. Below I attach the code showing how I load the data and how I use these functions together.

# read pose
import math

import numpy as np

# `constants` (unit-conversion constants) and `euler_angles_to_rot_mat_zyx`
# are project-local; a sketch of the Euler conversion is attached below the function.

def read_position(file_name, prohibited_positions=()):
    """
    Read a file with recorded robot positions and compute the rotation and
    translation from the gripper to the base of the robot.
    :param file_name: Path to the file with the recorded robot positions.
    :param prohibited_positions: Indices of positions whose images were poorly
        captured; these positions are skipped.
    :return: Rotation matrices from gripper to base, translation vectors from
        gripper to base.
    """

    # init
    cors = []
    angles = []

    # read lines in file
    with open(file_name, 'r') as f:
        lines = f.readlines()

    for position, coordinates in enumerate(lines):
        # skip positions whose photos are missing or poorly captured
        if position in prohibited_positions:
            continue

        coordinates = [float(value) for value in coordinates.split()]

        # translation vector (converted from mm to m), as a 3x1 column
        translation_vec = np.array(coordinates[0:3]).reshape(3, 1) / constants.MM_TO_M

        # angles are recorded in degrees; convert to radians before building the rotation matrix
        rotation_vec_list = [math.radians(angle) for angle in coordinates[3:6]]
        rotation_mat = euler_angles_to_rot_mat_zyx(rotation_vec_list[0], rotation_vec_list[1], rotation_vec_list[2])

        # alternative if the angles were recorded in radians with a ZXZ convention:
        # rotation_mat = euler_angles_to_rot_mat_zxz(coordinates[3], coordinates[4], coordinates[5])

        # build the 4x4 homogeneous transformation matrix (gripper to base)
        transformation_mat = np.block([[rotation_mat, translation_vec], [np.zeros((1, 3)), np.ones((1, 1))]])

        # invert it (base to gripper)
        inv_transformation_mat = np.linalg.inv(transformation_mat)

        # split the block matrix into a rotation matrix and a translation vector
        angles.append(inv_transformation_mat[0:3, 0:3])
        cors.append(inv_transformation_mat[0:3, 3])

    cors = np.array(cors)
    angles = np.array(angles)
    # return rotations first, then translations, to match the docstring and the caller's unpacking
    return angles, cors
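For reference, euler_angles_to_rot_mat_zyx is project code that is not shown above; this is a minimal sketch of what it does, assuming the ZYX convention with the Z angle first and all angles already in radians:

import math
import numpy as np

def euler_angles_to_rot_mat_zyx(a, b, c):
    # ZYX Euler angles (radians) to a rotation matrix: R = Rz(a) @ Ry(b) @ Rx(c)
    Rz = np.array([[math.cos(a), -math.sin(a), 0.0],
                   [math.sin(a),  math.cos(a), 0.0],
                   [0.0,          0.0,         1.0]])
    Ry = np.array([[ math.cos(b), 0.0, math.sin(b)],
                   [ 0.0,         1.0, 0.0],
                   [-math.sin(b), 0.0, math.cos(b)]])
    Rx = np.array([[1.0, 0.0,          0.0],
                   [0.0, math.cos(c), -math.sin(c)],
                   [0.0, math.sin(c),  math.cos(c)]])
    return Rz @ Ry @ Rx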



And here is how I use the two calibrations together:



# compute the camera matrix and distortion coefficients
calibration, camera_mat, dist_coefs, rvecs, tvecs = aruco.calibrateCameraCharuco(
    charucoCorners=corners_all,
    charucoIds=ids_all,
    board=plate.charuco_board,
    imageSize=image_size,
    cameraMatrix=None,
    distCoeffs=None)

# get the rotation and translation of the end effector
rot_vec, trans_vec = read_position.read_position(path_to_pose, unused)

# compute the rotation and translation of the camera-to-gripper transformation
camera_to_hand_rotation, camera_to_hand_translation = cv2.calibrateHandEye(
    R_gripper2base=rot_vec,
    t_gripper2base=trans_vec,
    R_target2cam=rvecs,
    t_target2cam=tvecs,
    method=cv2.CALIB_HAND_EYE_TSAI)
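As a sanity check (my own addition, with a hypothetical to_homogeneous helper, using the variables from the snippet above): in an eye-in-hand setup with a static board, composing the three transforms should place the board at roughly the same pose in the base frame for every image:

import cv2
import numpy as np

def to_homogeneous(R, t):
    # stack a 3x3 rotation and a translation into a 4x4 homogeneous matrix
    T = np.eye(4)
    T[:3, :3] = np.asarray(R).reshape(3, 3)
    T[:3, 3] = np.asarray(t).reshape(3)
    return T

# T_target2base_i = T_gripper2base_i @ T_cam2gripper @ T_target2cam_i
T_cam2gripper = to_homogeneous(camera_to_hand_rotation, camera_to_hand_translation)
for R_g, t_g, rvec, tvec in zip(rot_vec, trans_vec, rvecs, tvecs):
    T_target2base = (to_homogeneous(R_g, t_g)
                     @ T_cam2gripper
                     @ to_homogeneous(cv2.Rodrigues(rvec)[0], tvec))
    print(T_target2base[:3, 3])  # these translations should agree across poses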

Could you provide the estimated camera_to_hand_rotation and camera_to_hand_translation, and what you would expect (at least for the translation part)?


For the robot-to-gripper transformation I use the following pipeline:

  • get the joint values
  • compute the forward kinematics
  • compute the transformation matrix
  • get the inverse of this matrix
  • pass them to the hand-eye calibration

You should provide the end-effector to robot base frame transformation as input (so computing the inverse should not be needed in what you have written), but in your last code the R_gripper2base, t_gripper2base naming should be OK.
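Concretely, a minimal sketch of the change in your read_position (keeping your variable names), appending the forward transform directly instead of inverting it:

# inside the loop of read_position: store the gripper-to-base transform directly
angles.append(rotation_mat)              # R_gripper2base
cors.append(translation_vec.reshape(3))  # t_gripper2base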

The data must be “synchronized”: each transformation for the “Hand” must correspond to the transformation for the “Eye” captured at the same robot pose:

[figure: hand-eye calibration frames and transformations]
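For example (hypothetical variable names): if an image is rejected, the matching robot pose must be dropped too, so that entry i on the robot side always corresponds to entry i on the camera side:

# keep only the indices that have both a robot pose and a detected board pose
valid = [i for i in range(len(robot_rotations_all)) if i not in prohibited_positions]
R_gripper2base = [robot_rotations_all[i] for i in valid]
t_gripper2base = [robot_translations_all[i] for i in valid]
R_target2cam = [board_rvecs_all[i] for i in valid]
t_target2cam = [board_tvecs_all[i] for i in valid]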

I have a problem with the rotation value. Rounded, the rotation I get is approximately the identity:
[1 0 0;
0 1 0;
0 0 1]
but the value I expect is:
[0 0 1;
0 -1 0;
1 0 0].