Fast contour set comparison?

Suppose that I have a whole bunch of masks describing contours/patches of objects in an image. Is there an efficient way to test contours for similarity between images? (I’d like to avoid N*N comparisons, particularly on whole-image masks.)

I have N masks describing object contours for each scene (the output of a segmenter).
I’d like to compare the masks for similarity between scenes (the objects are static in the scene - I don’t care about translation or rotation invariance, but it’s possible the contours might be chomped into by occluding objects). Mostly I want to identify what’s the same and what’s new/gone.

I could use the masks’ IoU here - but that seems a little expensive.

It seems that run-length encodings of the masks could do the same much more cheaply. Are there alternatives I might consider?
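
For concreteness, here’s the kind of thing I have in mind, sketched with pycocotools’ RLE utilities (just an example library, not something I’m committed to): each mask gets run-length encoded once, and all-pairs IoU is computed directly on the encodings.

import numpy as np
from pycocotools import mask as mask_utils

# masks: list of boolean HxW arrays, one per segment (assumed input)
rles = [mask_utils.encode(np.asfortranarray(m.astype(np.uint8))) for m in masks]

# all-pairs IoU computed on the run-length encodings,
# without touching the full-resolution masks again
ious = mask_utils.iou(rles, rles, [0] * len(rles))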

Thanks in advance.


no. that’s dirt-cheap to calculate because it’s parallelizable in every way (SIMD, threading, GPU, …) and is already parallelized without you doing anything (SIMD, threading) or very little (GPU: UMat).

those neural networks emit masks, not contours, and calculating a contour from a mask is a little more complex.

moving away from masks also potentially forces you to spend that O(N^2) figuring out which goes with what, or coming up with further smart algorithms to make that lookup more efficient.

IoU is trivial in comparison.

also, “similarity” is usually the word to describe shape (i.e. shape matching/description) or any semantic content. since you consider IoU a valid solution, you seem to want to estimate overlap over time.
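
for scale, all-pairs IoU over a stack of masks is basically one matrix product (a sketch, assuming boolean numpy masks of identical shape):

import numpy as np

def all_pairs_iou(masks):
    # masks: list of N boolean arrays, all the same shape (assumed input)
    M = np.stack([m.ravel() for m in masks]).astype(np.float32)  # N x P matrix
    inter = M @ M.T                        # pairwise intersection pixel counts
    areas = M.sum(axis=1)                  # pixels per mask
    union = areas[:, None] + areas[None, :] - inter
    return inter / np.maximum(union, 1)    # N x N IoU matrix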

I wouldn’t say it’s a trivial computation.

Naively:

import time
import numpy as np

def iou(mask1, mask2):
    # intersection over union of two boolean masks
    return (mask1 & mask2).sum() / (mask1 | mask2).sum()

# masks: list of boolean numpy arrays, one per segment
t = time.time()
for m1 in masks:
    for m2 in masks:
        iou(m1, m2)
print(f'{time.time() - t:3.1f}')

takes almost 9 seconds on my 24-core 3900X (here with ~24 masks).

I can spend time optimizing this… but what I’m looking for is an algorithm that efficiently compares the outlines of masks. Even this IoU case isn’t great for me, because new objects in the scene take a chomp out of the existing segments’ masks – the IoU score penalizes the lost area even though the contour may be very similar. Something like IoU on contour-band masks, with contourmask = mask & ~erode(mask, n), might be better for measuring whether a blob has the same shape? Perhaps there are even better measures of shape difference.
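
Roughly what I mean, as a sketch (assuming boolean numpy masks and OpenCV’s erode; the band width n is arbitrary):

import cv2
import numpy as np

def contour_band(mask, n=3):
    # keep only an n-pixel-wide band along the mask boundary: mask & ~erode(mask, n)
    m = mask.astype(np.uint8)
    kernel = np.ones((2 * n + 1, 2 * n + 1), np.uint8)
    eroded = cv2.erode(m, kernel).astype(bool)
    return mask.astype(bool) & ~eroded

def band_iou(mask1, mask2, n=3):
    # IoU of the boundary bands rather than the full masks
    b1, b2 = contour_band(mask1, n), contour_band(mask2, n)
    return (b1 & b2).sum() / (b1 | b2).sum()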

you only do that because you cut the labels map apart into individual components. don’t do that.

just keep the entire picture as a labels map; then labels1 == labels2 gives you the overlap, and you can recombine with the labels to count intersection pixels per label pair.
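
one way to do that counting with numpy (a sketch, assuming two integer label images of the same shape, 0 = background), returning the per-pair IoU:

import numpy as np

def label_overlap(labels1, labels2):
    # count pixels for every (label-in-frame-1, label-in-frame-2) pair in one pass
    n1, n2 = labels1.max() + 1, labels2.max() + 1
    pairs = labels1.ravel().astype(np.int64) * n2 + labels2.ravel()
    counts = np.bincount(pairs, minlength=n1 * n2).reshape(n1, n2)
    area1 = counts.sum(axis=1, keepdims=True)
    area2 = counts.sum(axis=0, keepdims=True)
    union = area1 + area2 - counts
    return counts / np.maximum(union, 1)   # IoU between every pair of labels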

please clarify: do objects move? do you require the movement to be compensated in the calculation, i.e. the same object, translated, still gives 100% “overlap”?

That’s how the labels come to me. (And yes, n^2 is naive - lots to save by checking bbox overlaps, etc.)
The masks I get are in a hierarchy (they overlap). I can build some kind of representation for it.
But again, I was kinda hoping there are already tools for this (actually the cv2 contours API talks about hierarchies … :thinking: ).
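
Something like this is what I’m picturing on the cv2 side (a sketch; cv2.findContours does return a hierarchy, and cv2.matchShapes gives a Hu-moment-based shape distance, though I don’t know yet how well it tolerates the occlusion chomps):

import cv2
import numpy as np

def mask_contours(mask):
    # contours plus their nesting hierarchy for a single boolean mask
    contours, hierarchy = cv2.findContours(mask.astype(np.uint8),
                                           cv2.RETR_TREE,
                                           cv2.CHAIN_APPROX_SIMPLE)
    return contours, hierarchy

def shape_distance(c1, c2):
    # Hu-moment-based distance between two contours; 0 means identical shape
    return cv2.matchShapes(c1, c2, cv2.CONTOURS_MATCH_I1, 0)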