Hi,
I have a Mech-Eye depth camera. It gives me a colour image and a depth map, and it also has a function that returns the parameter arrays for each camera.
My project consists of finding boxes on a pallet at different heights. Using the API provided by Mech-Mind and OpenCV, I am trying to superimpose the depth map and the 2D image, but I am having some problems.
At the distance where I compute the homography between the two images, everything lines up fine. But when I decrease that distance, I start to see a horizontal shift between the two images.
Should I adjust the homography depending on the distance I am working at, or should a single homography work for any distance? The distance itself is provided by the 3D image.
As you can see in the image, there is a horizontal misalignment on the top box (marked in red). The box marked in blue is at the distance where I computed the homography.
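For reference, this is roughly how I am computing and applying the homography right now (a simplified sketch; the file names and point coordinates are placeholders, and the matched points all lie on the plane marked in blue):

```python
import cv2
import numpy as np

# Rough sketch of my current approach (file names and point values are
# placeholders, not my real data).
color_img = cv2.imread("color.png")                        # 2D texture image
depth_img = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)  # depth map

# Matched pixel coordinates of the same corners in both images, all of
# them lying on the pallet plane marked in blue.
pts_depth = np.float32([[100, 120], [400, 118], [405, 330], [102, 333]])
pts_color = np.float32([[130, 115], [432, 114], [436, 328], [131, 330]])

# Homography fitted at that single plane.
H, _ = cv2.findHomography(pts_depth, pts_color, cv2.RANSAC)

# Warp the depth map onto the colour image to check the alignment.
warped = cv2.warpPerspective(depth_img, H,
                             (color_img.shape[1], color_img.shape[0]))
```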
Thank you for your response. I have seen an example on the Mech-Mind page for projecting a 2D image onto a point cloud. What I want to do is create a mask of everything at a certain distance from the camera and use it to filter the 2D image, but at the moment I don't know how to do it.
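What I have in mind is something along these lines, assuming the depth map is in millimetres and already aligned pixel-for-pixel with the colour image (which is exactly the part I am struggling with):

```python
import cv2
import numpy as np

# Sketch of the mask I would like to build (file names and the target
# distance are placeholders; depth is assumed to be in millimetres).
color_img = cv2.imread("color.png")
depth_mm = cv2.imread("depth.tiff", cv2.IMREAD_UNCHANGED).astype(np.float32)

# Keep only pixels whose depth falls in a band around the layer of boxes
# I am interested in, e.g. 1200 mm +/- 25 mm from the camera.
target_mm, tolerance_mm = 1200.0, 25.0
mask = ((depth_mm >= target_mm - tolerance_mm) &
        (depth_mm <= target_mm + tolerance_mm)).astype(np.uint8) * 255

# Apply the mask to the 2D image so only that layer remains visible.
filtered = cv2.bitwise_and(color_img, color_img, mask=mask)
```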
The camera returns these parameters, but I don't understand the translation it is giving me.
Rotation: From Depth Camera to Texture Camera:
[0.9994753140272203, -0.004729284653347138, -0.03204263592242101]
[0.004422137710777892, 0.9999436627330821, -0.009649666212622343]
[0.032086466746217575, 0.00950290621945331, 0.9994399198676868]
Translation From Depth Camera to Texture Camera:
X: -20.705958880687646mm, Y: 2.605356548486989mm, Z: 20.070005911936793mm
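My guess is that these describe the rigid transform from the depth camera frame to the texture camera frame, so that a 3D point in millimetres maps as P_texture = R * P_depth + t, and that they should be used together with both intrinsic matrices to reproject each depth pixel into the colour image. A sketch of what I mean (the intrinsic matrices K_depth and K_color are placeholders I would take from the camera parameter arrays):

```python
import numpy as np

# My assumption: the reported parameters are the rigid transform from the
# depth camera frame to the texture camera frame, applied to 3D points in
# millimetres as  P_texture = R @ P_depth + t.
R = np.array([[0.9994753140272203, -0.004729284653347138, -0.03204263592242101],
              [0.004422137710777892, 0.9999436627330821, -0.009649666212622343],
              [0.032086466746217575, 0.00950290621945331, 0.9994399198676868]])
t = np.array([-20.705958880687646, 2.605356548486989, 20.070005911936793])  # mm

def depth_pixel_to_color_pixel(u, v, z_mm, K_depth, K_color):
    """Back-project a depth pixel to 3D, move it into the texture camera
    frame with (R, t), and project it with the colour camera intrinsics.
    K_depth / K_color are 3x3 intrinsic matrices (placeholders here)."""
    # Back-project the depth pixel to a 3D point in the depth camera frame.
    xyz_depth = z_mm * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))
    # Rigid transform into the texture camera frame.
    xyz_color = R @ xyz_depth + t
    # Project into the colour image.
    uvw = K_color @ xyz_color
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

Is that the correct way to use this translation, or is the transform meant to go in the other direction (from the texture camera to the depth camera)?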