# Make an algorithm more robust for identifying contours on images

Hi all.

I’m using the OpenCV and NumPy libraries to identify specific shapes and draw a rectangle at the center of each. Below is the code I’ve written so far, followed by two different examples.

Code:

```python
import cv2
import numpy as np
from os import listdir
from os.path import isfile, join

def prepareImage(initial_image):
    # Convert the image into grayscale
    gray_image = cv2.cvtColor(initial_image, cv2.COLOR_BGR2GRAY)
    # Increase the contrast based on the alpha and beta values (see CONFIGURATION)
    contrasted_image = cv2.convertScaleAbs(gray_image, alpha=alpha, beta=beta)
    # Invert the colors (black to white and vice versa) because findContours() detects white areas
    inverted_image = cv2.bitwise_not(contrasted_image)
    return inverted_image

def getContours(inverted_image):
    # Find contours
    ret, thresh = cv2.threshold(inverted_image, 125, 255, 0)
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # Draw contours (for debugging)
    # cv2.drawContours(inverted_image, contours, -1, (255, 0, 0), 3)
    # showImage(inverted_image)
    return contours

def extractVials(contours):
    vial_contours = []
    for contour in contours:
        if cv2.contourArea(contour) > 1000:
            vial_contours.append(contour)
    return vial_contours

def drawCoordinates(vial_contours, image):
    for contour in vial_contours:
        # Find the center of the contour
        M = cv2.moments(contour)
        cX = int(M["m10"] / M["m00"])
        cY = int(M["m01"] / M["m00"])
        # Draw the rectangles of coordinates on top of the initial image
        draw_image = cv2.rectangle(image,
                                   (cX - window_x // 2, cY - window_y // 2),
                                   (cX + window_x // 2, cY + window_y // 2),
                                   (0, 0, 255), 2)
        showImage(draw_image)

def showImage(image):
    cv2.imshow("Image with Coordinates", image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

# CONFIGURATION
alpha = 2.9     # Contrast control (1.0-3.0)
beta = 0        # Brightness control (0-100)
window_x = 15
window_y = 20
images_path = 'CAPTURES'
# END OF CONFIGURATION

input_files = [f for f in listdir(images_path) if isfile(join(images_path, f))]

# Loop through all the images
for image_file in input_files:
    image = cv2.imread(images_path + '/' + image_file)
    p_image = prepareImage(image)
    contours = getContours(p_image)
    vial_contours = extractVials(contours)
    drawCoordinates(vial_contours, image)
```

Examples:

In some cases the algorithm works pretty well: it finds all the vials and successfully draws the rectangles. Here is an example.

Initial image and with rectangles:

Other times the algorithm seems to fail almost completely. Here is such an example.

Initial image and with rectangles:

I would like to make it more robust in any case. Could anyone suggest some changes or ideas to try?

please try to put your images here, not on an external bin, thank you
(you may use several posts to overcome the “only 1 allowed for new users” limit here)

code isn’t a black box. you can look at intermediate results.

The moderators have probably embedded them now.

Yep, I know it’s not a black box, since I wrote it. The thing is, I believe this approach might not be the right one, and I was wondering whether there is a different one.

what I meant to say is… for a discussion it helps if I don’t have to run your code to see what’s going on. what would help me is to see intermediate results, i.e. you might wanna post some intermediate results from the failure case.

@arronar

When developing a computer vision pipeline like yours, showing intermediate results helps a lot. Showing the thresholded image and the contours found tells you what’s going on.

You can end up with a dozen window images on your screen, it doesn’t matter, they are for debugging only.

You also can add some scrollbars to interactively tune some parameters, like threshold level. This way you’ll have an intuition of your problem and know where to fix your solution.

I kinda missed that implied question. I’ll address that now.

what you do could work. one cannot say that this won’t work.

we can discuss alternate approaches.

you could assume/require that camera and tube holder never move relative to each other. then you could define the locations of the tubes once manually.
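If the geometry really is fixed, the per-image detection can be skipped entirely. A sketch of that, where the tube centers and window size are made-up values you would measure once on a reference image:

```python
import numpy as np

# Hypothetical tube centers (x, y), measured once on a reference image
TUBE_CENTERS = [(120, 340), (180, 340), (240, 340)]
WINDOW_X, WINDOW_Y = 15, 20

def fixed_rois(image):
    # Crop the same window around each known tube position in every frame
    rois = []
    for cx, cy in TUBE_CENTERS:
        x0 = cx - WINDOW_X // 2
        y0 = cy - WINDOW_Y // 2
        rois.append(image[y0:y0 + WINDOW_Y, x0:x0 + WINDOW_X])
    return rois
```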

since you have such nicely tinted substances there, this screams “color spaces” to me. here’s saturation:

combined with a selection (inRange) on hue for just the blue (none of the green) tint, that’d select the blue liquid very reliably.

you could also define further masks manually that blind the reflections from whatever mirror surface the tubes’ tips rest on (pictures appear upside down).

overall, getting some rectangles on the liquid in these tubes needn’t be difficult at all, if you can control the environment somewhat.

I’m wondering, what do you need these rectangles/ROIs for?

So here are some intermediates for the initial code:

For the failure case I’ve got this image with the contours drawn (they have a white border):

And these are the corresponding areas of them.

```
0.0
0.0
2.5
0.0
0.0
0.0
11.5
19.0
10.0
0.0
27.5
22.5
9.0
0.0
15.0
21.5
38.0
35.5
5.5
0.0
2.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
4.0
0.0
455.0
2.0
0.0
0.0
11.0
0.0
7.0
0.0
0.0
0.0
0.0
0.0
0.0
11.0
2.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
18556.0
16.0
2.0
12.0
2.0
2.0
2.0
2.0
4.0
2.0
2.0
2.0
7.0
7.0
2.0
2.0
2.0
5.5
2.0
2.0
4.0
2.0
4.0
5.5
4.0
5.5
2.0
4.0
2.0
4.0
2.0
6.0
2.0
2.0
2.0
38.0
2.0
2.0
441.5
0.0
0.0
2.0
2.0
24.0
4.0
0.0
```

The same with the successful one:
Contours found:

Contour areas:

```
0.0
10.0
0.0
0.0
0.0
49.5
4.0
0.0
0.0
0.0
28.0
2.0
0.0
0.0
0.0
18.5
125.5
13.0
0.5
0.0
1958.0
2.0
2.0
0.0
1667.5
10.0
1947.5
2.0
2.0
2.0
1879.0
2.0
18.0
2.0
2.0
4.0
8.0
2.0
2014.0
0.5
0.0
1.5
6.5
198.5
183.5
2.0
0.0
0.0
0.5
0.0
0.0
4.5
2117.5
6.0
2.0
0.0
0.0
4.0
1.5
1.5
0.5
1.5
11.0
273.5
12.0
22.0
7.0
4.0
7.0
0.5
20.0
45.5
8.5
54.5
```

It seems that something is going wrong with the contour finding, and this problem then propagates to the function where I filter the contours by area.

I’m wondering, what do you need these rectangles/ROIs for?

I just use their content for some RGB analysis.

P.S. I’m going to test your alternative suggestion and come back. Cheers.


EDIT:

@crackwitz , I tried your method but apparently I’m missing something. I converted the image to the HSV color space and then plotted the pixel colors to see how they separate from each other. From that plot I chose a light blue and a dark blue with a color picker, converted them to HSV values, and then used the `inRange()` method to create a mask. After applying the mask to the original image I just got a totally black result.

Below is the code as well and the output images.

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors

for image_file in input_files:
    image = cv2.imread(images_path + '/' + image_file)
    hsv_image = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
    showImage(hsv_image)

    # Create a graph to select the color values
    h, s, v = cv2.split(hsv_image)
    fig = plt.figure()
    axis = fig.add_subplot(1, 1, 1, projection="3d")

    pixel_colors = image.reshape((np.shape(image)[0] * np.shape(image)[1], 3))
    norm = colors.Normalize(vmin=-1., vmax=1.)
    norm.autoscale(pixel_colors)
    pixel_colors = norm(pixel_colors).tolist()

    axis.scatter(h.flatten(), s.flatten(), v.flatten(), facecolors=pixel_colors, marker=".")
    axis.set_xlabel("Hue")
    axis.set_ylabel("Saturation")
    axis.set_zlabel("Value")
    plt.savefig("mygraph.png")

    # Get a range of selected colors
    dark_blue = (220.0000, 75.0000, 45.4902)
    light_blue = (209.2500, 46.5116, 67.4510)
    # Create and apply the mask (reconstructed from the description above)
    mask = cv2.inRange(hsv_image, dark_blue, light_blue)
    result = cv2.bitwise_and(image, image, mask=mask)
    plt.subplot(1, 2, 1)
    plt.imshow(mask, cmap="gray")
    plt.subplot(1, 2, 2)
    plt.imshow(result)
    plt.show()
```

The dark blue that I selected is `#1d3a74` while the light one is `#5c85ac`.

https://aws1.discourse-cdn.com/standard11/uploads/opencv/original/2X/0/021ea47d4e4005dd2c598b7fae0e5598a29f4251.png
https://aws1.discourse-cdn.com/standard11/uploads/opencv/original/2X/5/5eaa9a2ed647bdecb1f50b18b61545291546b917.png
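One thing worth ruling out when `inRange()` returns all black: many online color pickers report H in 0–360 and S/V in 0–100, while OpenCV's 8-bit HSV images use H 0–179 and S/V 0–255. A small conversion sketch, under the assumption that the picker uses the 0–360/0–100 scale:

```python
import numpy as np

def picker_to_opencv_hsv(h, s, v):
    # h in 0-360, s and v in 0-100 (assumed picker scale) -> OpenCV's H 0-179, S/V 0-255
    return np.array([round(h / 2), round(s * 255 / 100), round(v * 255 / 100)],
                    dtype=np.uint8)
```

Bounds converted this way would at least be on the same scale as the pixels `cvtColor` produces.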