I’ve been learning OpenCV, and since I’m very much into climbing I wanted to try to extract the contours of the holds on a climbing wall.
This looked fairly easy, since the holds usually have very distinct colours.
Unfortunately I can’t get this to work properly.
It comes down to getting the thresholding technique right, I assume.
I’ve been playing with these and other examples for a couple of weeks now, but I never get close to something acceptable.
Can anyone point me in the right direction?
Thanks!
Kind regards,
# organizing imports
import cv2
import numpy as np
segmented = []
# path to input image is specified and
# image is loaded with imread command
image1 = cv2.imread(r'C:\Users\##\Downloads\jos.jpg')
blue, green, red = cv2.split(image1)
# cv2.cvtColor is applied over the
# image input with applied parameters
# to convert the image in grayscale
img = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
# applying thresholding
thresh1 = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                cv2.THRESH_BINARY, 9, 18)
contours1, hierarchy1 = cv2.findContours(thresh1, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_NONE)
# draw contours on the original image
image_contour_blue = img.copy()
cv2.drawContours(image1, contours=contours1, contourIdx=-1, color=(0, 255, 0), thickness=2, lineType=cv2.LINE_AA)
# see the results
cv2.imshow('Contours drawn on the original image', image1)
cv2.waitKey(0)
cv2.imshow('Blue channel', blue)
cv2.waitKey(0)
cv2.imshow('Green channel', green)
cv2.waitKey(0)
cv2.imshow('Red channel', red)
cv2.waitKey(0)
# the window showing output images
# with the corresponding thresholding
# techniques applied to the input image
#cv2.imshow('Adaptive Mean', thresh1)
#cv2.imshow('Adaptive Gaussian', thresh2)
![jos|375x500](upload://dCZNqy1H7PN5OmnKSrRdbhPo0yq.jpeg)
# De-allocate any associated memory usage
if cv2.waitKey(0) & 0xff == 27:
    cv2.destroyAllWindows()
Thanks for your interest.
I will add some pics below. They all play with the last two parameters of this line of code: cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 9, 18)
As you will see, it never gets it totally right.
Are there any better techniques I can use?
Is it, for example, possible to get a better result by doing the thresholding twice with different parameters and then comparing the results?
When the filename is 3k-10, the 3 is the first of those two parameters (the block size) and -10 is the second (the constant).
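On the idea of combining two passes: a minimal sketch of what that could look like is below. The block sizes and constants are just placeholder guesses, not tuned values, and the path is a placeholder too.
import cv2

img = cv2.imread('jos.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder path

# two adaptive thresholds with different block size / constant (values are guesses)
t1 = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                           cv2.THRESH_BINARY, 9, 18)
t2 = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                           cv2.THRESH_BINARY, 31, -10)

# keep only the pixels both passes agree on
combined = cv2.bitwise_and(t1, t2)

cv2.imshow('combined thresholds', combined)
cv2.waitKey(0)
cv2.destroyAllWindows()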
You can try edge detection instead of thresholding, followed by findContours. Use boundingRect to filter out contours that are too small (mounting holes) or too large (wall edges). You will still need more filtering after that, perhaps with custom functions.
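A minimal sketch of that pipeline might look like this; the Canny thresholds, the size limits, and the path are placeholders that would need tuning on the actual image.
import cv2

img = cv2.imread('jos.jpg')  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)

edges = cv2.Canny(gray, 50, 150)  # thresholds are guesses

contours, hierarchy = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

holds = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    # drop very small boxes (mounting holes) and very large ones (wall edges); limits are guesses
    if 15 < w < 300 and 15 < h < 300:
        holds.append(c)

cv2.drawContours(img, holds, -1, (0, 255, 0), 2)
cv2.imshow('filtered contours', img)
cv2.waitKey(0)
cv2.destroyAllWindows()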
I only see results that you don’t seem to like; I don’t see input. Without seeing untarnished input, it’s hard to tell what is actually there… and you haven’t said which of those things you actually want to detect and which you don’t. I think it would be helpful for you to address that.
Hi,
I would like to detect the holds, i.e. all the coloured stuff you hold on to.
These holds are sometimes mounted on volumes (in this case the black things).
The volumes should not be detected, which will make it hard, I assume.
Can you please clarify what you mean by input?
I use the picture that was attached as input for the thresholding.
I assume this is not what you mean :).
All your pictures have green stuff drawn on them. That’s not source data, and it obscures what’s actually there. Look at your posts: I don’t see source data.
Mentally prepare yourself for the realization that nothing less than AI/DL will solve this satisfactorily.
No problem for me to learn this stuff.
I am a total noob at this, however.
It would be very much appreciated if you could point me in the right direction.
At the moment I’m just brute-forcing it with no idea how to improve.
I got something slightly better with this script:
import cv2
import numpy as np
import os
BASE_FOLDER = r'C:\Users\###\Downloads'
BASE_NAME = r'jos.jpg'
fname = os.path.join(BASE_FOLDER, BASE_NAME)
img = cv2.imread(fname)
# auto_canny is called below but was not defined in the post;
# assuming the common median-based helper here:
def auto_canny(image, sigma=0.33):
    v = np.median(image)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(image, lower, upper)

def find_contours_and_centers(img_input):
    img_gray = cv2.cvtColor(img_input, cv2.COLOR_BGR2GRAY)
    img_gray = cv2.bilateralFilter(img_gray, 3, 27, 27)
    #cv2.imshow('blurred gray', img_gray)
    #cv2.waitKey(0)
    #(T, thresh) = cv2.threshold(img_input, 0, 100, 0)
    edges = auto_canny(image=img_gray)
    contours_raw, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    contours = [i for i in contours_raw if cv2.contourArea(i) > 5]
    contour_centers = []
    for idx, c in enumerate(contours):
        M = cv2.moments(c)
        if M["m00"] == 0:
            continue  # skip degenerate contours to avoid division by zero
        cX = int(M["m10"] / M["m00"])
        cY = int(M["m01"] / M["m00"])
        samp_bounds = cv2.boundingRect(c)
        contour_centers.append(((cX, cY), samp_bounds))
    print("{0} contour centers and bounds found".format(len(contour_centers)))
    contour_centers = sorted(contour_centers, key=lambda x: x[0])
    return (contours, contour_centers)
conts, cents = find_contours_and_centers(img.copy())
#circles = [i for i in conts if np.logical_and((cv2.contourArea(i) > 1),(cv2.contourArea(i) < 4000))]
teller = 0  # counter for how many contours pass the size filters
circles = []
for i in conts:
    #print(cv2.arcLength(i, False))
    if cv2.contourArea(i) > 5 and cv2.contourArea(i) < 1000 and cv2.arcLength(i, True) < 900 and cv2.arcLength(i, True) > 20:
        circles.append(i)
        teller = teller + 1
#print(circles)
cv2.drawContours(img, circles, -1, (255,255,0), 2)
#for x in cents:
#print(x[1][3])
#print(cents[x][0:519])
cv2.imshow('filtered contours', img)
cv2.waitKey(0)
if cv2.waitKey(0) & 0xff == 27:
    cv2.destroyAllWindows()
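One refinement worth trying on top of this script (my own assumption, not something from the thread): Canny edges are often broken, so the outline of a hold may never close into a single contour. A morphological closing of the edge image before findContours can help. The kernel size, Canny thresholds, area limits, and path below are all guesses.
import cv2
import numpy as np

img = cv2.imread('jos.jpg')  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 3, 27, 27)
edges = cv2.Canny(gray, 50, 150)  # thresholds are guesses

# close small gaps so hold outlines become connected blobs (kernel size is a guess)
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

contours, hierarchy = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = [c for c in contours if 5 < cv2.contourArea(c) < 1000]

cv2.drawContours(img, contours, -1, (255, 255, 0), 2)
cv2.imshow('closed edges', img)
cv2.waitKey(0)
cv2.destroyAllWindows()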