Hand-eye Calibration

I can see you use calib.io. Fun fact: it is run by my old professor. If you read some of the articles/posts on his page, you will pick up some good calibration practices. → Information

It is very nice that you spotted the problem yourself. When you design a checkerboard target, such a 180-degree rotation ambiguity occurs when the number of rows and the number of columns are both even or both odd.

To solve your problem you need to design another pattern. To make it unambiguous under a 180-degree rotation, the number of rows needs to be even and the number of columns odd, or the other way around.
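The parity rule above can be sketched as a tiny check (the function name is my own, not from any library):

```python
# A chessboard's inner-corner grid is unambiguous under a 180-degree
# rotation only if one dimension is even and the other is odd.

def is_rotation_unambiguous(rows: int, cols: int) -> bool:
    """Return True if a (rows x cols) inner-corner grid cannot be
    confused with its own 180-degree rotation."""
    return (rows % 2) != (cols % 2)

print(is_rotation_unambiguous(9, 6))   # odd x even  -> True
print(is_rotation_unambiguous(8, 6))   # even x even -> False
print(is_rotation_unambiguous(7, 5))   # odd x odd   -> False
```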

However, I would just keep the ChArUco targets; they are more robust. One big advantage is that you can collect images from the far edges of the field of view, since the markers are uniquely coded and identifiable. With a chessboard, the entire board has to be visible in every image; ChArUco overcomes this limitation.

Anyway, you can still try the chessboard, then you also have something to compare against. In my case the calibration converged after around 14 image/robot-pose pairs.
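To see what the calibration is actually solving for, here is a hedged numpy sketch of the underlying relation (the data is synthetic; in practice the gripper motion A comes from the robot's forward kinematics and the camera motion B from the target detections). The unknown hand-eye transform X links them via A·X = X·B:

```python
import numpy as np

def transform(theta, t):
    """Homogeneous 4x4 transform: rotation by theta about z, then translation t."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = t
    return T

# Ground-truth hand-eye transform (gripper -> camera), chosen arbitrarily.
X = transform(0.3, [0.05, -0.02, 0.10])

# A synthetic gripper motion between two robot poses.
A = transform(0.7, [0.20, 0.00, 0.05])

# The corresponding camera motion follows from B = X^-1 @ A @ X.
B = np.linalg.inv(X) @ A @ X

# The hand-eye equation holds: A @ X == X @ B.
print(np.allclose(A @ X, X @ B))  # True
```

Solvers such as OpenCV's `cv2.calibrateHandEye` estimate X from many such (A, B) pairs, which is why you need a set of image/robot-pose pairs rather than a single view.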

Once you have calculated the hand-eye transformation, you can plot everything together (maybe with fewer data points) and get a nice visual view of the camera poses and robot poses w.r.t. the calibration target.
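For that kind of plot, a simple approach is to reduce each 4x4 pose to an origin plus three axis vectors and draw them with e.g. matplotlib's quiver. A minimal sketch (the helper is my own, not from any library):

```python
import numpy as np

def frame_arrows(T, scale=0.05):
    """Return (origin, axes) for a 4x4 homogeneous transform T.
    axes[i] is the i-th axis direction of the frame, scaled for plotting."""
    origin = T[:3, 3]
    axes = T[:3, :3].T * scale   # rows: x, y, z axis directions in the parent frame
    return origin, axes

# Example: a camera pose expressed w.r.t. the calibration target.
T_target_cam = np.eye(4)
T_target_cam[:3, 3] = [0.1, 0.2, 0.5]

origin, axes = frame_arrows(T_target_cam)
print(origin)        # [0.1 0.2 0.5]
print(axes.shape)    # (3, 3)
```

Feeding `origin` and the rows of `axes` to three quiver calls (one per axis, in red/green/blue) gives the usual coordinate-frame glyphs for each camera and robot pose.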

I hope this helps you! :smile:
You are more than welcome to write again. I would recommend reading the information I've linked.