OpenCV: how to make edges more visible

Hello!

I want to count all the gummies in this image.

My first approach was to mask the gummies by HSV color and count them using HoughCircles and bounding-rectangle side ratios, but the results were bad: green masking was quite good, but the other colors weren't, and the white gummies were almost invisible.

Result of the HSV approach (light_red mask): [image]

Today I wanted to try again: detect all the gummies, contour them, and then find the color inside each contour, but I don't know how to find edges after this processing:

bgr_img = cv2.resize(bgr_img, (int(bgr_img.shape[1] / 5), int(bgr_img.shape[0] / 5)))
laplace = cv2.Laplacian(bgr_img, cv2.CV_8U)
cv2.normalize(laplace, bgr_img, 0, 600, cv2.NORM_MINMAX)
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)

[image]

your edges are clear enough (for a machine)

you may now proceed to finding contours

But besides the edges there is a lot of salt-and-pepper noise, and when I remove it with medianBlur I lose the edges of the white gummies.

Edit: I'd also like to know whether this approach is a good fit for this task.

you can filter out those contours later, using contourArea() or similar

contours, hierarchy = cv2.findContours(gray_img, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

filtered_cnts = []
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > 20:
        filtered_cnts.append(cnt)

cv2.drawContours(bgr_img, filtered_cnts, -1, (0, 255, 0), 1)

ooookkk, i see :wink:

maybe another threshold() (maybe OTSU) before the contours

I tried every threshold type.

run median filter before sobel/gradient operator

After median blurring:

bgr_img = cv2.pyrDown(cv2.pyrDown(img))
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
gray_img = cv2.medianBlur(gray_img, 11)
laplace = cv2.Laplacian(gray_img, cv2.CV_8U)
cv2.normalize(laplace, gray_img, 0, 600, cv2.NORM_MINMAX)
ret, gray_img = cv2.threshold(gray_img, 0, 255, cv2.THRESH_BINARY)

I don't know whether this approach is the best way to catch all the white gummies.

those banding contours look like it’s reacting to any tiny gradient… likely because your threshold level is silly (0). why is it 0?

because if I set it to something like 10 I don't get a closed contour for the white gummy. I tried some morphological operations like closing with many iterations, but that introduces displacement and more noise.

I’ve split the picture into hue/saturation/value planes. saturation plane:

the surface is slightly reflective. that’s unfortunate. you see “reflections” of the gummies on the table.

the clear worm is also tricky. if you can live with not seeing clear gummies, that picture is trivial to threshold and get contours from. you could apply some dilate (grayscale) to it before thresholding to catch the less glowing ones easier.

I’ve also played with a laplacian or sobel on the source image, then applied a MORPH_CLOSE to make the contours join up and get thicker… separating the gummies from each other would be a first step, after which you can fill in the shapes… or you get outermost contours (or contours with hierarchy, and remove all that are inside another contour)

I’ve played some more.

useful step: white balance. so the background is absolutely white/gray, which makes thresholding on saturation work better with the clear gummies.

it should also be lit uniformly. the above step has limits.

saturation is much closer to black for the background now:

dilation and threshold:

Can you give me some advice on how to do the white balance? I'm a beginner in image processing. I'd also like to know whether dilated circles like that can be detected with HoughCircles, because the main task is to count the gummies based on shape and color.

If not, I will simply switch to HSV masking and ignore the white gummies.

For the remaining worms/bears I wanted to add rotated rectangles and try to tell them apart by the ratio of their sides, because worms have a much lower ratio.

here’s how to white-balance.

the crucial steps are dealing with the gamma map, estimating gray, and multiplying the color channels to correct for that.

#!/usr/bin/env python3

import os
import sys
import numpy as np
import cv2 as cv

pyrlevels = 2

print("loading")
source = cv.imread("f8e7cc120b9ebaf3f507b57207739162e4d64bd5.jpeg")
(height, width) = source.shape[:2]

# for convenience in calculations. float32 and range 0.0 .. 1.0
im = source / np.float32(255)

# remove gamma mapping -> linear color space
print("removing gamma")
im **= 2.4

# reduce size (halve twice), which also reduces noise
print("scaling down")
for k in range(pyrlevels):
    im = cv.pyrDown(im)

# estimate gray value
print("estimating gray")
gray_world = False
if gray_world:
    gray = np.mean(im, axis=(0,1)) # gray world assumption
else:
    gray = np.median(im, axis=(0,1)) # reacts to majority color, more robust if foreground/objects favor a particular color

print("gray:", gray)

# adjust color channels
print("correcting")
correction = gray.mean() / gray
im *= correction


print("applying gamma")
# reapply gamma map
im **= (1 / 2.4)

print("done")

cv.namedWindow("source", cv.WINDOW_NORMAL)
cv.resizeWindow("source", int(width / 2**pyrlevels), int(height / 2**pyrlevels))
cv.imshow("source", source)

cv.namedWindow("adjusted", cv.WINDOW_NORMAL)
cv.resizeWindow("adjusted", int(width / 2**pyrlevels), int(height / 2**pyrlevels))
cv.imshow("adjusted", im)
cv.waitKey(-1)
cv.destroyAllWindows()


don’t use Hough-anything here.

use findContours. then analyze each contour. docs.opencv.org has all the information.

a measure of circularity would be useful in estimating if something is round or not. you can calculate a contour’s area, circumference (hence “radius”), …

color… you have the contour. pick the center of mass and look at the pixel’s color in the source image.

if you can ignore the clear gummies, that’ll make things a lot easier and more reliable. I had to pick some sketchy thresholds to catch those.

If you want, could you share those sketchy thresholds? I've been struggling with this problem for a week, and my first approach was this:

I couldn't find a way to do this, and after trying every tool available in OpenCV I'm considering switching to neural networks…

    def process(self):
        for i in range(len(self.images)):
            print(f"Image_{i}")
            img = self.images[i]

            for color_name in self.colors:
                color = self.colors.get(color_name)
                # colors
                bgr_img, c_cropped_img = self.mask_color_adjust_color(img, color[0], color[1])
                # make threshold img
                gr_img = cv2.cvtColor(c_cropped_img, cv2.COLOR_BGR2GRAY)

                ret, threshold = cv2.threshold(gr_img, 0, 255, cv2.THRESH_BINARY)

                # ============ DETECT CIRCLES =============
                circles = 0
                height, width = gr_img.shape
                circle_mask = np.zeros((height, width), np.uint8)
                circles, circle_mask = self.detect_circles(threshold, circle_mask)
                threshold = cv2.bitwise_and(threshold, circle_mask)
                if circles is not None:
                    self.objects[10] = len(circles[0, :])
                    circles = len(circles[0, :])
                else:
                    self.objects[10] = 0
                    circles = 0
                # ==========================================

                contours, hierarchy = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

                snake = 0
                bear = 0
                areas = []
                # ============ DETECT BEAR/SNAKE BASED ON AREA ===============
                for cnt in contours:
                    area = cv2.contourArea(cnt)
                    if area > 100:
                        areas.append(area)
                        rect = cv2.minAreaRect(cnt)
                        box = cv2.boxPoints(rect)
                        (tl, tr, br, bl) = box
                        DimA = np.sqrt(pow(tl[0] - tr[0], 2) + pow(tl[1] - tr[1], 2))
                        DimB = np.sqrt(pow(tl[0] - bl[0], 2) + pow(tl[1] - bl[1], 2))
                        dim = [DimA, DimB]
                        longest = max(dim)
                        shortest = min(dim)
                        ratio = shortest / longest
                        max_area = max(areas)  # snake max
                        noise_area = 0.2 * max_area  # noise
                        if area > noise_area:
                            # print(ratio, area)
                            if ratio < 0.43:
                                snake += 1
                                # print("Waz", area)
                            elif ratio > 0.43:
                                bear += 1
                                # print("Mis", area)

                # =========================================================
                if color_name == "green":
                    self.objects[3] = bear
                    self.objects[9] = circles
                    self.objects[12] = snake
                elif color_name == "orange":
                    self.objects[2] = bear
                    self.objects[8] = circles
                    self.objects[13] = snake
                elif color_name == "yellow":
                    self.objects[4] = bear
                    self.objects[10] = circles
                    self.objects[14] = snake
                elif color_name == "light_red":
                    self.objects[0] = bear
                    self.objects[6] = circles
                elif color_name == "dark_red":
                    self.objects[1] = bear
                    self.objects[7] = circles

            self.result[self.image_names[i]] = self.objects

    def mask_color_adjust_color(self, image, color_low, color_high):
        bgr_img = image
        bgr_img = cv2.resize(bgr_img, (int(bgr_img.shape[1] / 5), int(bgr_img.shape[0] / 5)))
        bgr_img = cv2.medianBlur(bgr_img, 3)
        bgr_img = cv2.convertScaleAbs(bgr_img, alpha=1.2, beta=1)

        hsv_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)

        curr_mask = cv2.inRange(hsv_img, color_low, color_high)
        g_cropped_img = cv2.bitwise_and(bgr_img, bgr_img, mask=curr_mask)

        return bgr_img, g_cropped_img

It's almost perfect, but when I set the threshold low enough to catch the clear gummies, nearby gummies merge together (see next image).