# Estimate Camera transform from chessboard transform

I am trying to estimate the translation and rotation of my camera relative to the chessboard calibration pattern. I have calibrated my camera, and I am able to get the tvec and rvec of the chessboard in the camera coordinate system using `solvePnP`, as well as verify the validity of these axes using `drawFrameAxes`.

I then try to get the camera transform with the following code:

```python
import cv2
import numpy as np

def get_camera_transform(rvec: np.ndarray, tvec: np.ndarray):
    # Convert rotation vector to matrix
    r_board_to_cam, _ = cv2.Rodrigues(rvec)
    # Rotation matrices are orthogonal, so the transpose is the inverse
    r_cam_to_board = r_board_to_cam.T
    t_cam_to_board = np.dot(-r_cam_to_board, tvec)
    # Convert the matrix back to a rotation vector
    r_cam_to_board_v, _ = cv2.Rodrigues(r_cam_to_board)
    return np.degrees(r_cam_to_board_v), t_cam_to_board
```

My translation vector seems fine, but the rotation doesn't seem to be accurate, nor is it off by any constant factor that I can determine. I thought that taking the inverse of the initial rotation matrix would work, but it doesn't produce the result I expected. Perhaps it has something to do with the Rodrigues representation?

Yes, inverting the transform is the correct approach.

You might want to work with 4x4 homogeneous matrices instead of carrying R and t around individually. That way composition and inversion are single matrix operations, and there are fewer chances to mix up which frame a vector lives in.
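A minimal sketch of that idea in plain NumPy (the function names here are illustrative, not from any library): pack R and t into a 4x4 transform, then invert it either with `np.linalg.inv` or with the closed form for rigid transforms. The matrix `R` would come from `cv2.Rodrigues(rvec)` in your setup.

```python
import numpy as np

def pose_to_matrix(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation matrix and a translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t.ravel()
    return T

def invert_pose(T: np.ndarray) -> np.ndarray:
    """Closed-form inverse of a rigid transform: [R | t]^-1 = [R.T | -R.T @ t]."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ t
    return T_inv
```

With this, `invert_pose(pose_to_matrix(R, tvec))` gives the camera pose in board coordinates, and its top-left 3x3 block can go back through `cv2.Rodrigues` if you need an rvec again.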

To simplify the rotation between camera and board, you might also want to define the board's coordinate axes to roughly match the camera's. If your red axis is X and green is Y, then consider defining X to point right on the board and Y to point down, mirroring the camera convention (X right, Y down, Z forward).
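One hypothetical way to get that layout is in how you build the object points you pass to `solvePnP`: with corners detected top-left first and rows going down the image, this ordering makes board X increase to the right and board Y increase downward (the `board_object_points` name and the 4x3 example grid below are just for illustration).

```python
import numpy as np

def board_object_points(cols: int, rows: int, square_size: float) -> np.ndarray:
    """Board-frame 3D corner coordinates for solvePnP.

    X increases along columns (rightward), Y along rows (downward),
    Z is 0 on the board plane, so the board frame roughly matches the
    camera frame (X right, Y down) when the board is upright in the image.
    """
    pts = np.zeros((rows * cols, 3), np.float32)
    # (x, y) pairs with x varying fastest, scaled by the physical square size
    pts[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square_size
    return pts
```

With that convention, a camera looking straight at the board produces a rotation close to identity (up to a 180-degree flip about X, depending on which side Z points), which makes the inverted rotation much easier to sanity-check.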