Hi Anders,
Thank you for your response!
I am using the getActualTCPPose() function from the RTDE interface.
After that, I extract the rotation and translation vectors. The translation vector is scaled by 1000 to convert it from metres to millimetres. The rotation vector is passed to the Rodrigues operator from OpenCV to obtain the rotation matrix. Since I wrote the code in Python, I have also checked this matrix with scipy, and it delivers the same rotation matrix. The TCP is set at the end flange (X, Y, Z, RX, RY, RZ = 0, 0, 0, 0, 0, 0). My camera is mounted via a synchro-flange like in this picture:
I have read the description in the OpenCV documentation. At first, it was not easy to understand the notation of the homogeneous transformations, since I learned a different notation in my lectures at university.
From my understanding, we should do the following steps:
- Read the robot TCP pose → get the R and T vectors of the TCP w.r.t. the robot base.
- Take an image of the checkerboard → findChessboardCorners + solvePnP → get the R and T vectors of the target w.r.t. the camera.
- Feed all of these inputs to the calibrateHandEye function from OpenCV.
The Python scripts and images are located here.