Can you please advise on where I can find more information on how / where the
depthTo3d() function is used, and an example showing how to use it?
I’ve read the documentation online (OpenCV: RGB-Depth Processing), but are there any examples? I’m trying to understand K, the calibration matrix (I’m a newbie, so please be patient).
Here is what I’m trying to do:
- I have an OAK-D depthAI camera, and I can receive both RGB and depth images.
- I want to try using OpenCV’s depthTo3d() function to convert the depth image to 3D points, which can then be used by Open3D to create the point cloud.
- My thought is: if I can pass it the camera calibration information from the OAK-D, then I’d be able to use this function to get the 3D points.
- My thinking would be to then register the two pictures using OpenCV to align, scale, etc.
- These transformed images could then be passed straight into Open3D as geometries that can be used to create an RGBD image, and thus a point cloud.
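To sanity-check my understanding of what depthTo3d() actually computes, here is a minimal NumPy sketch of what I believe it does: back-project each pixel (u, v) with depth Z through the pinhole model, X = (u − cx) · Z / fx and Y = (v − cy) · Z / fy, using the focal lengths and principal point from K. The intrinsics and depth frame below are made-up placeholders, not my real OAK-D calibration:

```python
import numpy as np

# Placeholder intrinsics (made-up numbers, not real OAK-D calibration).
fx, fy = 860.0, 860.0
cx, cy = 640.0, 360.0

h, w = 720, 1280
depth = np.full((h, w), 1.5, dtype=np.float32)  # fake flat scene 1.5 m away

# Pixel coordinate grids: u varies along columns, v along rows.
u, v = np.meshgrid(np.arange(w), np.arange(h))

# Pinhole back-projection for every pixel at once.
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy

points = np.dstack((x, y, z))   # (H, W, 3), the layout I expect from depthTo3d
cloud = points.reshape(-1, 3)   # (N, 3), the flat shape Open3D wants

# The pixel at the principal point should back-project to (0, 0, Z).
print(points.shape)
print(points[360, 640])
```

If this is roughly what depthTo3d() does internally, then feeding it the OAK-D’s K should give me exactly the (H, W, 3) array I need, without writing this loop myself.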
The currently posted examples from Luxonis show how to do this, but they bring in each image as an np array, then go through an onerous scaling and aligning process, mostly using np.
I’m currently taking CV1, and in week 2 they talk about how OpenCV has far more effective (and efficient) methods for dealing with images. Therefore, since the OAK-D can send OpenCV-compatible frames, why not send an image frame that OpenCV can deal with directly, and perform all the alignment and scaling within OpenCV instead of NumPy?
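For example, for the scaling step alone, this is the kind of one-liner I’m hoping to use instead of the manual NumPy resampling in the Luxonis examples (a sketch; the frame sizes are placeholders, and I use nearest-neighbour so depth values don’t get blended across object edges):

```python
import numpy as np

rgb_size = (1280, 720)                         # (width, height) of the RGB frame
depth = np.full((400, 640), 1.5, np.float32)   # smaller fake depth frame

try:
    import cv2
    # Resize the depth map to the RGB resolution in one call.
    # INTER_NEAREST avoids interpolating between foreground/background depths.
    depth_resized = cv2.resize(depth, rgb_size, interpolation=cv2.INTER_NEAREST)
    print(depth_resized.shape)  # should be (720, 1280)
except ImportError:
    # cv2 not installed; this is just the sketch I have in mind.
    pass
```

Is this the right direction, or is there a dedicated rgbd-module routine for registering the depth frame to the RGB camera?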
If you could please help me understand depthTo3d() better, I could at least try a few things to see what works.
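For reference, here is the overall shape of the call I’m imagining, end to end, if someone can confirm the signature (a sketch: cv2.rgbd lives in opencv-contrib-python, the K values are placeholder numbers rather than my actual OAK-D calibration, and the whole thing is guarded so it only runs if the modules are installed):

```python
import numpy as np

# Placeholder intrinsics matrix (not my real OAK-D calibration).
K = np.array([[860.0,   0.0, 640.0],
              [  0.0, 860.0, 360.0],
              [  0.0,   0.0,   1.0]], dtype=np.float32)
depth = np.full((720, 1280), 1.5, dtype=np.float32)  # fake depth frame, metres

try:
    import cv2
    import open3d as o3d
    # depthTo3d should return an (H, W, 3) array of XYZ points per pixel.
    points = cv2.rgbd.depthTo3d(depth, K)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points.reshape(-1, 3))
    print(len(pcd.points))  # one point per pixel
except (ImportError, AttributeError):
    # opencv-contrib-python / open3d not available; this is just the sketch.
    pass
```

If that’s roughly right, then the remaining piece would just be swapping in the real K that I read off the OAK-D’s calibration.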