Hi,
I can estimate the relative pose between two frames using FindEssentialMat() and then RecoverPose(), and after that I can TriangulatePoints() correctly. From there I can track camera motion by corresponding 3D-2D points with SolvePnPRansac(). But after a few frames, when I want to triangulate new 3D points with TriangulatePoints() using the poses I found earlier with SolvePnPRansac(), the result is a mess.
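For reference, here is a minimal sketch of that pipeline (OpenCV C++ API; K is the 3x3 intrinsic matrix in CV_64F, pts1/pts2 are matched std::vector<cv::Point2f>, objectPoints/imagePoints are the 3D-2D correspondences from the earlier triangulation, and all names are just for illustration):

```cpp
#include <opencv2/opencv.hpp>

// Relative pose between the first two frames.
cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0);
cv::Mat R, t;
cv::recoverPose(E, pts1, pts2, K, R, t);

// Projection matrices: identity pose for frame 1, [R|t] for frame 2.
cv::Mat P1 = K * cv::Mat::eye(3, 4, CV_64F);
cv::Mat Rt;
cv::hconcat(R, t, Rt);
cv::Mat P2 = K * Rt;

// Initial triangulation (4xN homogeneous points).
cv::Mat points4D;
cv::triangulatePoints(P1, P2, pts1, pts2, points4D);

// Later frames: absolute pose from 3D-2D correspondences.
cv::Mat rvec, tvec;
cv::solvePnPRansac(objectPoints, imagePoints, K, cv::noArray(), rvec, tvec);
```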
I even tried to derive the relative pose between the two frames' absolute poses (the results of SolvePnPRansac) using Mat currToPrevT = currT * prevT.inv();, then used it to build the 3x4 projection matrix passed as the 2nd projection-matrix parameter of TriangulatePoints(), with the identity pose's projection matrix as the 1st parameter, but the result is still a mess.
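Concretely, that attempt looks roughly like this (prevT and currT are the 4x4 world-to-camera matrices I assemble from the rvec/tvec of SolvePnPRansac for the two frames; prevPts/currPts are the matched 2D points):

```cpp
// Relative pose of the current frame with respect to the previous one,
// derived from the two absolute (world -> camera) poses.
cv::Mat currToPrevT = currT * prevT.inv();

// Top 3x4 block of the relative pose as the 2nd projection matrix,
// identity pose as the 1st.
cv::Mat relRt = currToPrevT(cv::Rect(0, 0, 4, 3)).clone();
cv::Mat P1 = K * cv::Mat::eye(3, 4, CV_64F);
cv::Mat P2 = K * relRt;

cv::Mat points4D;
cv::triangulatePoints(P1, P2, prevPts, currPts, points4D);  // -> mess
```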
Basically, the only case where I get a correct TriangulatePoints() result is when the 2nd projection matrix is derived from FindEssentialMat() followed by RecoverPose(), and the 1st projection matrix is that of the identity pose.
So, to sum up the question: can't we use the 3x4 projection matrices of absolute poses (the 4x4 Mats from SolvePnPRansac) to triangulate points? Or can we only triangulate with projection matrices of relative poses (from FindEssentialMat()), and if so, how do we keep their scale consistent with the initial TriangulatePoints() result?
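In other words, is something like the following valid, where both projection matrices are taken straight from the absolute poses (again just a sketch with the same assumed variables as above)?

```cpp
// Both projection matrices built directly from the 4x4 absolute poses.
cv::Mat P1 = K * prevT(cv::Rect(0, 0, 4, 3)).clone();
cv::Mat P2 = K * currT(cv::Rect(0, 0, 4, 3)).clone();

cv::Mat points4D;
cv::triangulatePoints(P1, P2, prevPts, currPts, points4D);  // world-frame points?
```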