I am past the stage of camera calibration for this vision system but am unable to understand how I can use the camera matrix to get accurate real world x, y coordinates of my target object. I have to use these coordinates to navigate my robot arm to pick up the object.
Hi, first of all make sure that the camera and the robot are calibrated in the same coordinate space. Then you can use the solvePnP() function to calculate the pose of the object relative to the camera. Using matrix multiplication you can get the pose of the object relative to the base of the world coordinate system and send it to the robot:
H_BO = H_CB.inv() * H_CO

where H_BO is the pose of the object in the world coordinate system, H_CB is the transform from the base (world) frame to the camera frame (the extrinsics from camera calibration, hence the inverse), and H_CO is the pose of the object relative to the camera, which you get as the result of the solvePnP() function.
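A minimal Python/OpenCV sketch of that pipeline, assuming a square marker as the target. The intrinsics, the corner detections, and the extrinsic matrix H_CB are all placeholder values you would replace with your own calibration and detection results:

```python
import cv2
import numpy as np

def to_homogeneous(rvec, tvec):
    """Build a 4x4 homogeneous transform from solvePnP's rvec/tvec."""
    H = np.eye(4)
    H[:3, :3], _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    H[:3, 3] = tvec.ravel()
    return H

# Intrinsics from your camera calibration (placeholder values)
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # replace with your distortion coefficients

# Four corners of a 50 mm square marker in the object's own frame
half = 0.025  # metres
object_points = np.array([[-half, -half, 0.0],
                          [ half, -half, 0.0],
                          [ half,  half, 0.0],
                          [-half,  half, 0.0]])

# Corresponding pixel detections from your image (placeholder values)
image_points = np.array([[300.0, 220.0],
                         [340.0, 221.0],
                         [339.0, 260.0],
                         [301.0, 259.0]])

# H_CO: pose of the object relative to the camera
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
H_CO = to_homogeneous(rvec, tvec)

# H_CB: base -> camera transform from your extrinsic calibration (placeholder)
H_CB = np.eye(4)

# H_BO: pose of the object in the base/world frame
H_BO = np.linalg.inv(H_CB) @ H_CO
x, y, z = H_BO[:3, 3]  # translation = pick-up position for the robot
```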
Thank you for the response. Can you please explain the meaning of "calibrated in the same coordinate space"? If I calibrated and tested my camera at home and then set up my robot camera on a test bench in the lab, do I have to get the extrinsic parameters again? My other question is: do the extrinsic parameters change with a changing Z-axis value (i.e., if the camera is moved closer to or farther from the object)?
Yes, every time the camera changes its position/rotation, the extrinsic parameters of the camera change and the camera's extrinsic calibration needs to be done again (the intrinsics stay the same).
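In case a concrete recipe helps: one common way to redo the extrinsic calibration is to image a chessboard whose pose in the robot's base frame you have measured on the bench. A sketch, where the pattern size, the image file, and the measured board pose are all hypothetical:

```python
import cv2
import numpy as np

# Intrinsics from your earlier calibration (placeholder values)
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)

# Inner-corner count and square size of the chessboard (hypothetical)
pattern_size = (9, 6)
square = 0.025  # metres

# 3D corner positions in the board's own frame (board lies in the z = 0 plane)
board_points = np.zeros((pattern_size[0] * pattern_size[1], 3))
board_points[:, :2] = np.mgrid[0:pattern_size[0],
                               0:pattern_size[1]].T.reshape(-1, 2) * square

image = cv2.imread("bench_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
found, corners = cv2.findChessboardCorners(image, pattern_size)
assert found, "chessboard not detected"

# Pose of the board relative to the camera
ok, rvec, tvec = cv2.solvePnP(board_points, corners, camera_matrix, dist_coeffs)
H_C_board = np.eye(4)
H_C_board[:3, :3], _ = cv2.Rodrigues(rvec)  # board -> camera rotation
H_C_board[:3, 3] = tvec.ravel()

# Pose of the board in the robot base frame, measured on the bench (hypothetical)
H_B_board = np.eye(4)
H_B_board[:3, 3] = [0.40, 0.10, 0.0]

# New extrinsics: the base -> camera transform to use as H_CB above
H_CB = H_C_board @ np.linalg.inv(H_B_board)
```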
First of all, I would suggest reading more about extrinsic camera calibration so that you understand the relations between camera <-> world <-> robot.