I am trying to compute the affine transformation (rotation and translation) between two successive 2D-lidar acquisitions.
The aim is to “redraw” the latest set of measurements in the initial coordinate system (and then to build a “map” of the environment).
The robot (the oriented heart) detects points (a collection of angles and distances) around 10 times per second. Each set of measurements can be converted into a B&W picture centered on the robot (so x and y take both positive and negative values).
By the next measurement set, the robot has already translated and rotated. Those points can be converted into a B&W picture as well.
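To illustrate the conversion described above, here is a minimal NumPy sketch (my own helper name, not from any library) that turns one scan of (angle, distance) pairs into robot-centered Cartesian points, where x and y can indeed be negative:

```python
import numpy as np

def scan_to_points(angles_rad, distances):
    """Convert one lidar scan (angles in radians, ranges in the same
    length unit) into an N x 2 array of Cartesian points in the
    robot-centered frame. Points behind or left of the robot simply
    get negative x or y values."""
    angles = np.asarray(angles_rad, dtype=np.float64)
    dists = np.asarray(distances, dtype=np.float64)
    return np.stack([dists * np.cos(angles), dists * np.sin(angles)], axis=1)

# A beam straight ahead, one to the left, one behind:
pts = scan_to_points([0.0, np.pi / 2, np.pi], [2.0, 1.0, 3.0])
# pts is approximately [[2, 0], [0, 1], [-3, 0]]
```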
But some points do not match the transformation exactly (reflection problems during measurement, or an obstacle becoming hidden behind another point).
I have tried several runs with “estimateAffinePartial2DAsync”, but a processing error often occurs, and the affine transform it outputs is not understandable.
Has somebody already succeeded in handling such a situation?
Is there a problem if the set of cv.Point2 contains negative coordinates?
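For what it is worth, negative coordinates should not be a problem in principle: `estimateAffinePartial2D` works on floating-point point pairs, and a rotation+translation fit only depends on relative positions. Note, however, that it expects two arrays of *corresponding* points (same index = same physical point), so unmatched or occluded points act as outliers; `method=cv2.RANSAC` with a suitable `ransacReprojThreshold` is the usual way to tolerate them. Below is a minimal NumPy sketch (my own `rigid_fit` helper, not an OpenCV function) of the underlying least-squares rigid fit, run on points that include negative coordinates, to show that the sign of the coordinates does not matter:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t such that
    dst ≈ src @ R.T + t (Kabsch/Procrustes). Negative coordinates
    are handled exactly like positive ones."""
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Points with negative coordinates, rotated by 0.5 rad and shifted:
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, -2.0])
src = np.array([[-1.0, -2.0], [3.0, -4.0], [0.5, 2.0], [-3.0, 1.0]])
dst = src @ R_true.T + t_true
R, t = rigid_fit(src, dst)   # recovers R_true and t_true
```

If the mysterious errors persist with OpenCV, I would first check that the two input arrays have the same length and element order, since that (not the sign of the values) is what the function relies on.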