How to compute the affine transform between two successive lidar measurement sets

Hello.
I am trying to compute the affine transformation (rotation and translation) between two successive 2D lidar acquisitions.

The aim is to “redraw” the latest set of measurements in the initial coordinate system (and then to build a “map” of the environment).

The robot (the oriented center of the scan) detects points (a collection of angles and distances) around 10 times per second. Each set of measurements can be converted into a B&W picture centered on the robot (with both positive and negative x and y values).
By the next measurement set, the robot has already translated and rotated. Those points can be converted into a B&W picture too.
But some points do not match the transformation exactly (reflection problems during measurement, or an obstacle that becomes hidden behind another point).
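To illustrate what I mean by converting a scan to points, here is a minimal sketch (in Python with NumPy, just for illustration; the angles and distances below are random placeholders, not real data):

```python
import numpy as np

# hypothetical scan: (angle, distance) pairs as produced by a 2D lidar
angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)  # radians
dists = np.random.uniform(0.5, 5.0, size=angles.shape)       # placeholder ranges

# robot-centered Cartesian coordinates; x and y can both be negative
pts = np.stack([dists * np.cos(angles), dists * np.sin(angles)], axis=1)
pts = pts.astype(np.float32)  # shape (N, 2), ready for OpenCV functions
```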

I have tried several runs of “estimateAffinePartial2DAsync”, but a processing error often occurs, and the returned affine transform is not understandable.

Has somebody already succeeded in handling such a situation?
Is there a problem if the set of cv.Point2 contains negative coordinates?

Best regards.

negative coordinates shouldn’t be a problem.

align your point clouds using “iterative closest point” (ICP) or a related algorithm.

estimateAffine… needs the points to pair up. you’d have to reimplement parts of ICP for that.
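a minimal sketch of that idea (python, cv2 and scipy; the function name `icp_partial_affine`, the `max_pair_dist` threshold and the iteration count are made up for illustration, not a tested implementation):

```python
import numpy as np
import cv2
from scipy.spatial import cKDTree

def icp_partial_affine(src, dst, iters=20, max_pair_dist=0.5):
    """Align src (new scan) to dst (previous scan), both float32 (N, 2) arrays.

    A minimal ICP-style loop: pair each src point with its nearest dst point,
    then let the RANSAC inside estimateAffinePartial2D reject bad pairs
    (reflections, occlusions). Partial affine = rotation, translation and
    uniform scale; for a lidar the scale should stay close to 1.
    """
    tree = cKDTree(dst)
    M = np.float32([[1, 0, 0], [0, 1, 0]])  # start from identity
    cur = src.copy()
    for _ in range(iters):
        d, idx = tree.query(cur)           # nearest-neighbour pairing
        keep = d < max_pair_dist           # drop pairs with no close partner
        if keep.sum() < 3:
            break
        step, _ = cv2.estimateAffinePartial2D(
            cur[keep], dst[idx[keep]],
            method=cv2.RANSAC, ransacReprojThreshold=0.1)
        if step is None:
            break
        cur = cv2.transform(cur.reshape(-1, 1, 2), step).reshape(-1, 2)
        # compose: total = step ∘ M (promote both 2x3 matrices to 3x3)
        M = (np.vstack([step, [0, 0, 1]]) @ np.vstack([M, [0, 0, 1]]))[:2]
    return M
```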

look into the “Point Cloud Library” (PCL) for point cloud processing.

Ok, thank you for the direction.
But what about functions like cv.estimateAffinePartial2DAsync? Can it be helpful, or does it only work with true image rotation/translation?
Best regards.

I don’t understand. “true”?

Sorry,
I meant “with pictures taken by a camera that has been rotated/translated, not a cloud of points with some differences”.

there’s cv::transform which applies a matrix to a list of points.
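tiny example (python binding; the matrix values are arbitrary):

```python
import numpy as np
import cv2

# arbitrary 2x3 matrix for illustration: a 10-degree rotation about the
# origin plus a translation of (0.2, -0.1) -- in practice this would be
# the matrix returned by estimateAffinePartial2D
M = cv2.getRotationMatrix2D((0.0, 0.0), 10.0, 1.0)
M[:, 2] += (0.2, -0.1)

pts = np.float32([[1.0, 0.0], [-2.0, 3.0]]).reshape(-1, 1, 2)  # (N, 1, 2)
moved = cv2.transform(pts, M).reshape(-1, 2)
print(moved)  # the points redrawn in the other scan's coordinate system
```

negative coordinates go through unchanged, as you can see.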