Pointcloud to heightmap for image based segmentation

Hello,

I’m trying to find height differences (in a heightmap derived from a lidar pointcloud) in the form of lines. I’ve tried several methods, with varying success. I hope someone more experienced in image processing can point me in the right direction to make my workflow more robust.

Background: we’re processing airborne lidar pointcloud data of residential buildings into a semantic 3D model. Part of the processing is identifying the 3D roof surfaces (using RANSAC) and then finding the edges of those surfaces, using a combination of plane intersection for sloped surfaces and image-processing-based roof-edge detection for flat-to-flat surface “intersections” or edges with a jump in height.

The current “height difference detection” workflow is as follows (a rough code sketch follows the list):

  1. Generate a heightmap by sampling average Z values from the pointcloud
  2. Calculate the Laplacian (ksize: 1 or 3)
  3. Remove lines on the outside of the building layout (we know these; we’re only interested in interior roof edges)
  4. Apply a binary threshold (+ filter noise with a morphologyEx close operation)
  5. Detect lines using HoughLinesP
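Roughly, in Python/OpenCV terms (the parameter values here are placeholders, not our tuned settings):

```python
import cv2
import numpy as np

# step 1: heightmap already rasterized from the pointcloud, loaded as float32
hm = cv2.imread("heightmap.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

lap = cv2.Laplacian(hm, cv2.CV_32F, ksize=3)   # step 2: Laplacian
mag = cv2.convertScaleAbs(lap)                 # 8-bit magnitude for thresholding
# step 3 (masking out the known building outline) omitted here
_, binimg = cv2.threshold(mag, 10, 255, cv2.THRESH_BINARY)                # step 4
binimg = cv2.morphologyEx(binimg, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
lines = cv2.HoughLinesP(binimg, 1, np.pi / 180, threshold=40,
                        minLineLength=20, maxLineGap=5)                   # step 5
```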

The main issue I’m having is that the heightmap generation always produces blurry edges. This is due to the sparsity of the pointcloud (~16 points/m²) and the sampling we need to do to get a map. Because of this, neither Canny (also gradient-based) nor the Laplacian gives a well-defined result.

I’ve tried downsampling the image, which works a bit better, but also loses a lot of accuracy.

One solution I thought of to reduce the blurry edges is creating a histogram, finding its peaks, and then “compressing” the area around each peak towards the peak. This should sharpen the edges, but I’m having trouble turning the concept into practical code (the compression part).

Hope someone can give me some ideas! Thanks!

This is an example output of the heightmap, where you can see the edges are quite blurry:

[example heightmap image]

I think you’re throwing away information in the middle of your flow.

this is a geometry problem so I’d approach it as such. don’t reduce it until your only solution is a blasted Hough transform.

stop at the gradient and work with that.

you can tell contiguous planes because those “pixels” have the same slope vector (and small enough height difference)

that’s a connected components analysis (floodfill) with a predicate expressing how “much” gradients (and height) differ and what’s considered “the same”. if you can formulate a floodfill on RGB pixels that vary a little (noise) instead of being perfectly equal, you can formulate it for the gradient vectors you have.
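e.g. a rough floodFill sketch (the seed point and tolerances are made up; you’d pick seeds from your RANSAC planes):

```python
import cv2
import numpy as np

# hypothetical input: float32 heightmap loaded from disk
hm = cv2.imread("heightmap.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

# per-pixel slope vectors
gx = cv2.Sobel(hm, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(hm, cv2.CV_32F, 0, 1, ksize=3)

# pack (height, gx, gy) into one 3-channel image so floodFill's
# loDiff/upDiff act as the "how much may they differ" predicate
feat = cv2.merge([hm, gx, gy])

mask = np.zeros((hm.shape[0] + 2, hm.shape[1] + 2), np.uint8)
seed = (50, 40)  # (x, y), made up -- take it from a known roof plane
tol = (0.05, 0.1, 0.1)  # per-channel tolerances (height, gx, gy); tune these
cv2.floodFill(feat, mask, seed, newVal=(0, 0, 0), loDiff=tol, upDiff=tol,
              flags=4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8))
region = mask[1:-1, 1:-1]  # 255 where pixels belong to the seed's plane
```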

you can tell “edges” from where the gradient changes…

take the second derivative and see what that yields… or estimate the planes and see where they end (“pixels” start leaving the plane, for some distance threshold of “leaving”).
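for the “see where they end” variant, a least-squares sketch (region_mask is whatever set of pixels you currently believe belongs to one plane; everything here is illustrative):

```python
import numpy as np

def plane_residuals(hm: np.ndarray, region_mask: np.ndarray) -> np.ndarray:
    """Fit z = a*x + b*y + c to the masked pixels, return |distance| per pixel."""
    ys, xs = np.nonzero(region_mask)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, hm[ys, xs], rcond=None)
    yy, xx = np.mgrid[:hm.shape[0], :hm.shape[1]]
    return np.abs(hm - (a * xx + b * yy + c))

# pixels "leave" the plane where plane_residuals(hm, mask) > some threshold
```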


Thank you! I see what you mean, I’m going to try the floodfill approach.

I mean… you could calculate a per-pixel “discontinuity” measure that judges jumps and changes in gradient. that would be a decent edge map. you could then hope to threshold that, and find regions of flatness encircled by borders of discontinuity. I guess that could work.
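a sketch of one such measure, combining second-derivative terms of the height map (the exact combination and weights are up to you):

```python
import cv2
import numpy as np

def discontinuity(hm: np.ndarray) -> np.ndarray:
    """Per-pixel score that's large where height jumps or the slope changes."""
    gx = cv2.Sobel(hm, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(hm, cv2.CV_32F, 0, 1, ksize=3)
    # how fast the gradient itself changes (second derivatives)
    gxx = cv2.Sobel(gx, cv2.CV_32F, 1, 0, ksize=3)
    gxy = cv2.Sobel(gx, cv2.CV_32F, 0, 1, ksize=3)
    gyx = cv2.Sobel(gy, cv2.CV_32F, 1, 0, ksize=3)
    gyy = cv2.Sobel(gy, cv2.CV_32F, 0, 1, ksize=3)
    return np.sqrt(gxx**2 + gxy**2 + gyx**2 + gyy**2)

# threshold the score, then look for flat regions encircled by the borders
```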

if you could upload some data, I’m willing to play with it. grayscale height maps in 8/16 bits would be just fine.

oh, I’d maybe recommend reducing the size of the Gaussian kernel (or whatever you use to turn the mesh into a height map), just to get slightly sharper edges/cliffs.

oh as for rendering your point cloud into a depth map, give scipy.interpolate.griddata a try.
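something like this, roughly (the (N, 3) points array and the 0.25 m cell size are assumptions):

```python
import numpy as np
from scipy.interpolate import griddata

points = np.loadtxt("cloud.xyz")           # hypothetical (N, 3) array of x, y, z
xy, z = points[:, :2], points[:, 2]

# target raster over the bounding box, e.g. 0.25 m cells
xs = np.arange(xy[:, 0].min(), xy[:, 0].max(), 0.25)
ys = np.arange(xy[:, 1].min(), xy[:, 1].max(), 0.25)
grid_x, grid_y = np.meshgrid(xs, ys)

# method="nearest" keeps cliffs sharp; "linear" smears them across cells
heightmap = griddata(xy, z, (grid_x, grid_y), method="nearest")
```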

Hey, thanks for your suggestions. I didn’t know how to use the OpenCV floodFill directly because it requires a starting point. What I did instead is calculate a histogram of the image, then find its local maxima. Using those I “collapse” everything within a certain range onto the peak value, effectively compressing the histogram locally and turning the blurred lines into sharp ones.
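Roughly like this (the prominence and snapping radius are illustrative values):

```python
import cv2
import numpy as np
from scipy.signal import find_peaks

# hypothetical input: float32 heightmap
hm = cv2.imread("heightmap.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

hist, bin_edges = np.histogram(hm[np.isfinite(hm)], bins=256)
peaks, _ = find_peaks(hist, prominence=50)              # illustrative prominence
peak_z = (bin_edges[peaks] + bin_edges[peaks + 1]) / 2  # peak bin centres

# snap every pixel within `radius` of a peak onto that peak's height
radius = 0.15  # metres, illustrative
sharpened = hm.copy()
for p in peak_z:
    sharpened[np.abs(hm - p) < radius] = p
```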

When I do this and the edges are sharper, I can use Canny, and that works reasonably well.

If I understand Canny correctly, it actually uses gradients to determine the edges, right? Do you think there’s a benefit to doing this another way?

Regarding the pointcloud-to-heightmap conversion: I’ve tried reducing the sampling radius, but if I make it too small I get holes in the data because not a single point is found. I can fix that again with a morphology close, but that has its limits.

These are some original heightmaps: https://postimg.cc/gallery/t4xBXW8/50015d07