I work with a stereo pair and I want my robot to avoid obstacles. I have the distortion maps that I got after calibrating the stereo pair. If I understand correctly, to avoid obstacles I need the XYZ coordinates of the obstacles (I detect them separately) and also a point cloud. How can I actually get this data? I work in C++.
If there is another solution to my problem, I will be glad to hear it. Thank you!
the same, i.e. extract channels 0 and/or 1 (X and Y) instead of channel 2 (Z). you seem to understand the data.
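for reference, the usual C++ pipeline is: compute a disparity map on the rectified pair, then cv::reprojectImageTo3D with the Q matrix gives you a 3-channel image where channel 0 is X, 1 is Y, 2 is Z. a minimal sketch, assuming you already have rectified images and Q from cv::stereoRectify (file names and matcher parameters are placeholders to tune):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // assumed inputs: rectified grayscale pair and the 4x4 reprojection
    // matrix Q that cv::stereoRectify produced during calibration
    cv::Mat leftRect  = cv::imread("left_rect.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat rightRect = cv::imread("right_rect.png", cv::IMREAD_GRAYSCALE);
    cv::Mat Q;  // load this from your calibration file

    // semi-global block matching; these parameters are starting points to tune
    auto sgbm = cv::StereoSGBM::create(/*minDisparity=*/0,
                                       /*numDisparities=*/128,
                                       /*blockSize=*/5);
    cv::Mat disp16;
    sgbm->compute(leftRect, rightRect, disp16);      // CV_16S, disparity * 16

    cv::Mat disp;
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);      // true disparity in pixels

    // 3-channel CV_32FC3 image: per pixel (X, Y, Z) in your calibration units;
    // handleMissingValues=true marks invalid pixels with Z = 10000
    cv::Mat points3d;
    cv::reprojectImageTo3D(disp, points3d, Q, /*handleMissingValues=*/true);

    std::vector<cv::Mat> xyz;
    cv::split(points3d, xyz);                        // xyz[0]=X, xyz[1]=Y, xyz[2]=Z
    return 0;
}
```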
I think mean(3d_) should give you a 3-element Scalar with the mean x, y, z values
I would expect x and y to be close to 0 because you picked a region around the optical center. the means should deviate a bit if the points aren’t on a plane parallel to the image plane.
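for a quick check, something like this (a sketch; assumes points3d is the CV_32FC3 output of reprojectImageTo3D with handleMissingValues=true):

```cpp
// average X, Y, Z over a small patch near the image center, ignoring pixels
// that had no valid disparity (those carry Z = 10000)
int cx = points3d.cols / 2, cy = points3d.rows / 2;   // roughly the optical center
cv::Rect roi(cx - 20, cy - 20, 40, 40);

std::vector<cv::Mat> ch;
cv::split(points3d(roi), ch);
cv::Mat valid = ch[2] < 9999.0f;                      // CV_8U mask of usable pixels

cv::Scalar m = cv::mean(points3d(roi), valid);        // m[0]=mean X, m[1]=mean Y, m[2]=mean Z
```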
watch the quality of the data, especially the disparity map. imshow it (or the z-values of the point cloud), judge it against your expectations (does it look blurry, patchy, grainy, threadbare, …). consider mapping those values so near things (small z) are light and far things (large z) are dark.
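a sketch of that mapping, assuming z is the Z channel (CV_32F) of the point cloud and invalid points carry Z = 10000:

```cpp
cv::Mat mask = z < 9999.0f;                       // valid-depth pixels
cv::Mat z8 = cv::Mat::zeros(z.size(), CV_8U);     // invalid pixels start black
cv::normalize(z, z8, 0, 255, cv::NORM_MINMAX, CV_8U, mask);
z8 = 255 - z8;                                    // near (small z) -> light, far -> dark
z8.setTo(0, ~mask);                               // repaint invalid pixels black
cv::imshow("depth", z8);
cv::waitKey(0);
```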
measure your calibration pattern’s true square size (width and height). measure across multiple steps for better accuracy (say the squares are nominally 25 mm: if you measure 149 mm across 6 steps, they’re really 24.83 mm)
check distances with a tape measure or yard stick
hold the pattern still and manually trigger pictures. many cameras have rolling shutters, and movement will cause distortion.
keep these pictures so you can re-run calibration for different values (such as swapping the measured width and height of your squares, in case that’s mixed up)
watch the various error measures (reprojection error, etc.), try various things, and watch whether the errors improve or worsen. see the sketch below for reading them out.
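both calibration calls hand the RMS reprojection error back as their return value, so logging it per run is easy. a sketch with assumed variable names:

```cpp
// corners collected from your saved calibration pictures
std::vector<std::vector<cv::Point3f>> objectPoints;               // board coordinates
std::vector<std::vector<cv::Point2f>> imagePointsL, imagePointsR; // detected corners
cv::Size imageSize(1280, 720);                                    // assumed resolution

cv::Mat K1, D1, K2, D2, R, T, E, F;
std::vector<cv::Mat> rvecs, tvecs;

// per-camera calibration; the return value is the RMS reprojection error
double rmsL = cv::calibrateCamera(objectPoints, imagePointsL, imageSize,
                                  K1, D1, rvecs, tvecs);
double rmsR = cv::calibrateCamera(objectPoints, imagePointsR, imageSize,
                                  K2, D2, rvecs, tvecs);

// stereo calibration, reusing the intrinsics; again returns the RMS error
double rmsStereo = cv::stereoCalibrate(objectPoints, imagePointsL, imagePointsR,
                                       K1, D1, K2, D2, imageSize,
                                       R, T, E, F, cv::CALIB_FIX_INTRINSIC);
std::cout << "RMS left " << rmsL << ", right " << rmsR
          << ", stereo " << rmsStereo << "\n";
```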
I’d have to see the situation. you decide what’s an obstacle, when to avoid it, and how to avoid it.
Thank you! As for the distance, I’ll get back to you after I check.
About the obstacles: I plan to detect them (for example, a chair or plants), and if an object is recognized, have the robot make a detour around it. At the moment I can’t figure out how best to do this.
you can use the 3D data as a simple range sensor, like one of those cheap ultrasonic distance sensors. you’d know the distance but not the type of object.
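a sketch of that idea, assuming z is the Z channel of the point cloud, your calibration units are meters, and stopAndTurn() is a hypothetical stand-in for your avoidance behavior:

```cpp
// look at the middle third of the image, like a forward-facing range sensor
cv::Rect window(z.cols / 3, z.rows / 3, z.cols / 3, z.rows / 3);
cv::Mat patch = z(window);

double zmin, zmax;
cv::minMaxLoc(patch, &zmin, &zmax, nullptr, nullptr, patch < 9999.0f);

if (zmin < 0.5) {   // anything closer than 0.5 m straight ahead
    stopAndTurn();  // hypothetical: your stop/steer-away behavior goes here
}
```

in practice a single minimum is noisy; a low percentile of the patch is more robust than the raw minimum.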
for a simple robot that avoids any obstacles, that should be good. add some SLAM, point cloud fusion, structure from motion, … to generate a map.
if you want to determine the type of object from the point cloud, that’s more involved. I wouldn’t do that; I’d do inference on the RGB data instead, because that has proven solutions (DNNs such as YOLO).
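OpenCV’s dnn module can run such a model directly; a rough sketch, assuming an ONNX export of a detector (the file name, input size, and output decoding are model-specific placeholders):

```cpp
// load the network and run one frame through it
cv::dnn::Net net = cv::dnn::readNet("detector.onnx");      // placeholder file name
cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0 / 255.0,  // frame: your RGB image
                                      cv::Size(640, 640),  // assumed input size
                                      cv::Scalar(), /*swapRB=*/true);
net.setInput(blob);
cv::Mat out = net.forward();  // decode boxes/scores/classes per your model's layout
```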