I have been working on hand-eye calibration, but I am still getting incorrect results. I suspect the data I collected for the calibration is the problem. Could you please share more details about how you collected your data? If you used calibration targets, which one did you use (chessboard, ArUco marker, ChArUco board, etc.), and how did you place it? The robot I'm working with is a 2-DOF robot (translation only), in an eye-to-hand configuration (camera fixed, target on the arm). I'm trying to build an auto-calibration API.
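Not sure if this matches your pipeline, but since the robot is translation-only, one way to frame the problem is a rigid point-set registration: the target positions detected in the camera frame versus the corresponding robot positions in the base frame should differ by one fixed rotation and translation, which the Kabsch/Umeyama algorithm recovers in closed form. A minimal numpy sketch (all names and the synthetic data are my own, not from any particular library; with planar 2-DOF motion the points are coplanar, which is still enough for Kabsch as long as they are not collinear):

```python
import numpy as np

def estimate_cam_to_base(cam_pts, base_pts):
    """Kabsch rigid registration: find R, t such that base ≈ R @ cam + t.

    cam_pts:  (N, 3) target positions detected in the camera frame
    base_pts: (N, 3) matching robot positions in the base frame
    """
    cam_c = cam_pts - cam_pts.mean(axis=0)    # center both point sets
    base_c = base_pts - base_pts.mean(axis=0)
    H = cam_c.T @ base_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = base_pts.mean(axis=0) - R @ cam_pts.mean(axis=0)
    return R, t

# Synthetic sanity check with a known ground-truth transform.
rng = np.random.default_rng(0)
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.5])
cam = rng.uniform(-0.3, 0.3, size=(10, 3))
base = cam @ R_true.T + t_true               # noiseless measurements
R, t = estimate_cam_to_base(cam, base)
print(np.allclose(R, R_true, atol=1e-8), np.allclose(t, t_true, atol=1e-8))
```

With real detections you would feed in the board's 3D position from each image (after intrinsic calibration) instead of synthetic points, and the residuals of the fit give a direct quality measure for the collected data.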
Hi Torayeff,
Hope you’re doing well!
I’ve also been working on eye-to-hand calibration recently, but I’ve been running into significant errors in my calibration results. It’s been a bit frustrating, to be honest.
I remember you mentioned earlier that eye-hand calibration is quite sensitive to target and gripper poses, and you managed to solve your data collection issues. I was wondering if you could share some details about how you collected your data? Any tips or best practices you followed would be incredibly helpful for me.
It was a long time ago and I cannot recall the details. But I do remember making sure the checkerboard was rigidly fixed to the robot's arm. I noticed that even a slight shift of the board ruins the calibration.
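Building on that tip: since the target is supposed to be rigid on the arm, one cheap software check is to fit the fixed camera-to-base transform and then look at per-sample residuals; samples where the board slipped between captures stand out as outliers. A hedged sketch (function name, threshold, and demo data are my own illustration, not an established API):

```python
import numpy as np

def flag_moved_target(cam_pts, base_pts, R, t, tol=2e-3):
    """Flag samples where the rigidly mounted target may have shifted.

    After fitting R, t (e.g. via Kabsch registration), the per-sample
    residual ||R @ cam_i + t - base_i|| should stay near the detection
    noise. `tol` is an assumed threshold in metres; samples above it
    suggest the board moved on the arm between captures.
    """
    pred = cam_pts @ R.T + t
    resid = np.linalg.norm(pred - base_pts, axis=1)
    return np.flatnonzero(resid > tol), resid

# Demo: one sample where the board "slipped" by 1 cm.
rng = np.random.default_rng(1)
R = np.eye(3)
t = np.array([0.0, 0.0, 0.4])
cam = rng.uniform(-0.2, 0.2, size=(8, 3))
base = cam @ R.T + t                         # consistent measurements
cam[5] += np.array([0.01, 0.0, 0.0])         # simulate the board slipping
bad, resid = flag_moved_target(cam, base, R, t)
print(bad)  # → [5]
```

Re-running the fit after dropping the flagged samples (or re-collecting them) is usually enough to tell whether a loose board was the culprit.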