I am stuck on what to do next after implementing the basics of the software.
As of now, I can detect features in an image, match them against a reference image of the same object, and recover its 3D pose, so the placed model is rotated and aligned with its real-world counterpart.
My current problem is that I cannot keep track of the rotation once the object is rotated to the point where all of the tracked features are lost (i.e., the object is rotated 180° in yaw and I am seeing the back of it).
I can take pictures and extract the object's features from any angle, but I don't know how to "stitch" all of these feature sets together so that, as I rotate the real-world object through a full 360° of yaw, I can rotate the 3D model to match it.
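To illustrate what I mean by stitching, here is a toy sketch of the kind of structure I imagine: a database of keyframes, each holding binary descriptors (ORB-style, 32 bytes each) captured at a known yaw angle, where an incoming frame is assigned to whichever stored view it matches best. All names and thresholds here are made up:

```python
import numpy as np

def hamming(a, b):
    # Hamming distance between two uint8 binary descriptors.
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

class KeyframeDB:
    """Toy multi-view descriptor database for 360° (yaw) recognition."""

    def __init__(self):
        self.frames = []  # list of (yaw_degrees, descriptor array)

    def add(self, yaw, descriptors):
        self.frames.append((yaw, np.asarray(descriptors, dtype=np.uint8)))

    def best_view(self, query, max_dist=30):
        # Return the yaw whose keyframe has the most descriptor matches
        # under a simple nearest-neighbor + distance-threshold test.
        best_yaw, best_count = None, -1
        for yaw, descs in self.frames:
            count = sum(
                1 for q in query
                if min(hamming(q, d) for d in descs) <= max_dist
            )
            if count > best_count:
                best_yaw, best_count = yaw, count
        return best_yaw, best_count
```

The idea being: once I know which stored view the live frame matches, I also know roughly which yaw the object is at, and can hand off to the normal pose solver for that view.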
Could you point me in the right direction as to what I need to implement to achieve this? What am I missing? Is it even possible?
Is ICP (Iterative Closest Point) the way to go after detecting the features of an image of the object taken at a given angle?
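From what I understand, the core of each ICP iteration is a closed-form rigid alignment (the Kabsch/Procrustes solve) between corresponding point sets, so I am unsure whether it even applies to descriptor matching. A minimal 2D sketch of just that alignment step (without ICP's iterative correspondence search), in case my understanding is off:

```python
import numpy as np

def kabsch(P, Q):
    # Best rigid transform (R, t) mapping point set P onto Q,
    # assuming known correspondences, via SVD of the cross-covariance.
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

ICP would alternate this solve with re-estimating which points correspond; my question is whether that is the right tool here, or whether the multi-view feature approach above is.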
Thank you in advance!