I would like to subtract a mask (blue) from a contour (red):
I’m using the following code (simplified here, but the method calls are the same):
import cv2 as cv

# Get first image and its mask
img1_gray = cv.cvtColor(cv.imread("image1.png"), cv.COLOR_BGR2GRAY)
threshold, theMask = cv.threshold(img1_gray, 127, 255, cv.THRESH_BINARY)
# Get second image, its mask and contours
img2_gray = cv.cvtColor(cv.imread("image2.png"), cv.COLOR_BGR2GRAY)
threshold, theContourMask = cv.threshold(img2_gray, 127, 255, cv.THRESH_BINARY)
theContours, hierarchy = cv.findContours(theContourMask, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
# "Subtract" first mask from second contour
nonzeroMask = cv.findNonZero(theMask)
inner = []
outer = []
for contour in theContours[0]:
    if contour in nonzeroMask:
        inner.append(contour)
    else:
        outer.append(contour)
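For comparison, here is a minimal, self-contained sketch of testing whether a point lies inside a mask by indexing the mask directly instead of using Python's `in` operator. (`in` on a NumPy array broadcasts elementwise, so `point in nonzeroMask` is true whenever any coordinate matches anywhere, which can inflate one of the counts.) The mask and points below are synthetic stand-ins for theMask and the points of theContours[0], not the original images:

```python
import numpy as np

# Synthetic mask: a white square on a black background (stand-in for theMask).
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:61, 20:61] = 255

# Synthetic (x, y) points: one inside the white square, one outside.
points = [(30, 30), (90, 90)]

inner, outer = [], []
for (x, y) in points:
    # Index the mask directly; NumPy arrays are indexed row (y) first.
    if mask[y, x] > 0:
        inner.append((x, y))
    else:
        outer.append((x, y))

print(len(inner), len(outer))  # one point in each list
```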
And get the following lengths for the lists:
len(inner): 198
len(outer): 2
This doesn’t make sense as it should be a lot closer to 50/50.
I’m new to OpenCV and can’t figure out why.
Any help is appreciated. Thanks!
Also, if you’d like to take a look, these are the input images used: input images