Depth Images and encoding 32FC1

Hello guys,

I'm trying to read depth images via ROS and then process them with Python and cv2.

I get the image from my topic and then use this command to convert the message to a cv2 image: cv2_img = bridge.imgmsg_to_cv2(msg, '32FC1')

The encoding of the image is 32FC1, which, as I understand it, means each pixel value is represented by a float. But if I print my ROS image message, I see no float numbers, only integers from 0 to 255.
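A quick way to tell the two cases apart is to look at the dtype and value range of the converted array: a genuine 32FC1 depth map holds metric float distances, while an 8-bit visualization image holds brightness values. A minimal sketch (the two arrays below are hypothetical stand-ins for what cv_bridge would return in each case):

```python
import numpy as np

# A real 32FC1 depth map stores metric distances as float32
# (NaN is a common "no reading" marker).
true_depth = np.array([[0.5, 1.25], [2.0, np.nan]], dtype=np.float32)

# A visualization image stores 8-bit brightness values instead.
viz_image = np.array([[12, 80], [200, 255]], dtype=np.uint8)

def describe(img):
    """Report dtype and value range - the first two things to check."""
    return img.dtype, float(np.nanmin(img)), float(np.nanmax(img))

print(describe(true_depth))  # float32, values in metres
print(describe(viz_image))   # uint8, values in [0, 255]
```

If your converted image reports uint8 with values up to 255, you are almost certainly looking at a display image, not the metric depth map.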

So when I try to look at the image with cv2.imshow('pic', cv2_img), I can see the image, but shapes only become visible once they are really close to the camera, which is a problem because I want to know how far away objects are.

Is there any way to fix my images so that my processed depth image can see further?

How are those generated? Please explain.

The depth images are generated by the depth camera of my robot in the Gazebo virtual environment. Via ROS I subscribe to the depth camera's image topic and save the image message I receive in a variable.

Unfortunately, IMHO we cannot help much here (and it's quite off-topic).

Converting whatever you get from ROS to float does not magically increase the depth resolution: if it was [0…255] (8 bit) before, the range will still be the same, whatever format you hold it in.
(Maybe what you get there is not even the actual depth map, but a (reduced) image meant for visualization, but how would we know here?)
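The point about resolution can be verified directly: casting an 8-bit image to float32 changes the container, not the information, so no new depth levels appear. A quick numpy check:

```python
import numpy as np

# An 8-bit "depth" image can hold at most 256 distinct levels.
depth_8bit = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# Casting to float32 changes the storage type only.
depth_float = depth_8bit.astype(np.float32)

print(np.unique(depth_8bit).size)   # at most 256 distinct values
print(np.unique(depth_float).size)  # identical count - no levels gained
```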

So this is not an OpenCV problem;
you'll have to find out if or how it's possible to increase the depth resolution in Gazebo.

@Qilos. Does this help?
cv2_img = bridge.imgmsg_to_cv2(msg, 'passthrough')
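Note that if 'passthrough' does hand back a genuine float32 depth map, cv2.imshow will render it mostly white or black, because it expects float images scaled to [0, 1]. Scaling the valid depth range into that interval before display usually helps. A numpy-only sketch of the scaling step (the small array here is a hypothetical stand-in for your converted cv2_img):

```python
import numpy as np

# Stand-in for a 32FC1 depth map in metres (NaN = no reading).
cv2_img = np.array([[0.4, 1.0], [3.5, np.nan]], dtype=np.float32)

# Map the valid depth range to [0, 1] so cv2.imshow can display it.
valid = np.isfinite(cv2_img)
lo, hi = cv2_img[valid].min(), cv2_img[valid].max()
display = np.zeros_like(cv2_img)
display[valid] = (cv2_img[valid] - lo) / (hi - lo)

print(display)  # 0.0 = nearest valid pixel, 1.0 = farthest
# cv2.imshow('pic', display)  # now shows the full depth range
```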

Hello, thank you for your help.

I tried it, but it didn't work.

Present all the setup on the ROS side: state precisely which depth camera you use and all the relevant configuration. Ask the people who built the robot.