Hi @Tetragramm,
I’m interested in your 3D mapping application, developed in opencv_contrib on your GitHub, and I have some questions I hope you can help me with.
My use case consists of estimating the 3D position of a detected object with the help of the camera trajectory (3D mapping with known positions), and therefore I wanted to test your application.
I wanted to ask whether the application works with global or local camera positions, and whether you can provide a source for the implemented triangulation algorithm.
Hi there. If I had ever finished it, the paper citation would be embedded in the documentation. As it is, it’s hiding in the /doc folder.
K. Dogancay and R. Arablouei, “Selective angle measurements for a 3D-AOA instrumental variable TMA algorithm,” in Proc. 23rd European Signal Processing Conference (EUSIPCO), 2015.
The positions used must all be in the same coordinate system, whether they come from a charuco board, a set of markers, or some other navigation system. If your localization outputs global camera positions directly, rather than rvec + tvec, you can strip out the conversion code in addMeasurementImpl and just add them directly.
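For reference, the conversion that code performs is the standard solvePnP-style pose inversion. Here is a minimal sketch (my illustration, with made-up names, not the module’s actual code) assuming rvec/tvec map world points into the camera frame:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>  // cv::Rodrigues

// Hypothetical helper: recover the camera's global position from a
// solvePnP-style pose (rvec/tvec transform world points into the camera frame).
cv::Mat cameraPositionInWorld(const cv::Mat& rvec, const cv::Mat& tvec)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);  // 3x3 rotation matrix, world -> camera
    cv::Mat R_wc = R.t();    // invert the rotation: camera -> world
    return -R_wc * tvec;     // camera position in world coordinates
}
```

If your navigation system already gives you that world-frame position and orientation, this step is exactly what you can skip.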
Let me know if you have any other questions.
Thanks for the quick reply!
My goal is to integrate your code into ROS. I get the camera pose in the global coordinate system. By stripping the conversion code in addMeasurementImpl, do you mean this part:
```cpp
camera_rotation = camera_rotation.t();                       // invert rotation: camera -> world
const Mat camera_translation = ( -camera_rotation * tvec );  // camera position in world coordinates
los = camera_rotation * los;                                 // rotate line of sight into the world frame
```
Apologies, I seem to have messed up my notifications for replies and got busy elsewhere.
The goal of that whole section of code is to fill the variables positions and angles. These contain the 3D position of the camera, and the azimuth and elevation of the object to be localized. The vector extends from the camera position, and the azimuth and elevation are in the same 3D coordinate system as the positions.
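In other words, once your line-of-sight vector is in the world frame, extracting the two angles is just trigonometry. A minimal sketch (my illustration, not the module’s code; the convention of azimuth measured in the X-Y plane from +X and elevation measured up from that plane is an assumption, so check it against the module):

```cpp
#include <opencv2/core.hpp>
#include <cmath>

// Hypothetical helper: convert a world-frame line-of-sight direction
// into azimuth/elevation angles (radians).
// Assumed convention: azimuth in the X-Y plane from +X,
// elevation measured up from the X-Y plane.
void losToAngles(const cv::Vec3d& los, double& azimuth, double& elevation)
{
    azimuth   = std::atan2(los[1], los[0]);
    elevation = std::atan2(los[2], std::hypot(los[0], los[1]));
}
```

So in your ROS setup, each measurement you add is effectively one (camera position, azimuth, elevation) triple, and the TMA algorithm intersects those rays to localize the object.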