Separating touching objects in binary image

Hello, I am working on a project where the goal is to measure geometric parameters of segmented wooden planks. In some masks the planks are separated clearly, but in other cases I have touching regions. I would like to know if there is a way to extend an already existing gap so that all the planks are separated.
Image with touching regions marked:


Original image: https://i.imgur.com/2Lgw10G.jpg
Your help is highly appreciated. Thanks!

A few things come to mind.

  1. Can you show us the input image?
  2. How did you get to the mask image from the input image? Maybe you could tweak some of the image processing parameters to prevent this from happening in the first place? (Is there a visible gap between the planks in the input image?)
  3. I might consider fitting lines to the image, and wherever you find two* nearly parallel lines that are close to each other (or intersect at a point inside the image), compute an average line and draw it onto the mask in black. You might need to draw it with a width wider than 1 pixel so aliasing doesn’t result in the two separate things remaining connected diagonally at the pixel level.

*in reality you might end up with a bundle of lines grouped together (if you are using houghLines to detect the lines, for example), so you’d compute the average line among potentially many lines.


Thank you for the response, Steve!

  1. Here is the resized input image. I used a U-Net to segment it.


Predicted mask looks like this: https://i.imgur.com/RxYMTe4.jpeg

  2. I used the following function to preprocess the mask (the aim was to remove all the noise):
import cv2
import numpy as np

def mask_preprocessing(mask_path):
    gray = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)                        #read mask as single-channel grayscale

    #---mask preprocessing stage
    ret, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)            #threshold to remove low-intensity noise
    kernel_1 = np.ones((5, 5), np.uint8)                                      #dilation kernel
    kernel_2 = np.ones((4, 4), np.uint8)                                      #erosion kernel
    dilation = cv2.dilate(thresh, kernel_1, iterations=1)                     #dilate to fill small holes
    preprocessed = cv2.erode(dilation, kernel_2, iterations=1)                #erode to restore plank size

    return preprocessed

Yes, you are right: in the original image and the processed mask there is a gap. But in some cases even the original images have almost no visible gap, so I need to make my code robust and universal for all situations. I will definitely consider your advice. Do you maybe have any example code for a similar situation?

I suspect that only AI can solve that satisfactorily… or you could impose some changes on the mechanical side so these boards get some physical separation.

That grayscale image looks separable, but comparing the two mask pictures, the source of the first mask must be two boards with a very faint gap between them.