Position coordinates on geometric image

Hi,

It’s the first time I’ve posted on this forum, and to be honest, the first time I’ve asked for help with a data science project on a forum at all!

So here is my issue: I want to position 6 points on an image of a cylindrical shape so I can apply a transformation to unbend it (I should be able to do that once I’ve found all my pixel positions). The image looks like this after a few pre-processing steps:

import cv2
import numpy as np

img = cv2.imread('./img/wine2.jpg')

# Grayscale + adaptive threshold to isolate the label
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray_image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 21, 4)

# Keep the largest contour (the label outline)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contour_sizes = [(cv2.contourArea(contour), contour) for contour in contours]

cnt = max(contour_sizes, key=lambda x: x[0])[1]

# Create a single-channel mask of the label and extract its edges
mask = np.zeros(img.shape[:2], np.uint8)
cntr = cv2.drawContours(mask, [cnt], 0, 255, -1)
edges = cv2.Canny(cntr, 140, 210)

Ideally I want to position my points like this:

The problem is that I’m struggling to find a way to do this.
As an example, I’ve tried to detect the left edge and then use simple loops to get its first pixel from the top and its first pixel from the bottom, like so:

# Directional Sobel gradients of the mask (with a CV_8U output the negative
# gradients are clipped, so scale=1 / scale=-1 keep opposite sides of the shape)
ddepth = cv2.CV_8U
borderType = cv2.BORDER_DEFAULT
left = cv2.Sobel(mask, ddepth, 1, 0, ksize=1, scale=1, delta=0, borderType=borderType)
right = cv2.Sobel(mask, ddepth, 1, 0, ksize=1, scale=-1, delta=0, borderType=borderType)
top = cv2.Sobel(mask, ddepth, 0, 1, ksize=1, scale=1, delta=0, borderType=borderType)
bottom = cv2.Sobel(mask, ddepth, 0, 1, ksize=1, scale=-1, delta=0, borderType=borderType)

# Remove noise from the borders with a small erosion
kernel = np.ones((2, 2), np.uint8)
left_border = cv2.erode(left, kernel, iterations=1)
right_border = cv2.erode(right, kernel, iterations=1)
top_border = cv2.erode(top, kernel, iterations=1)
bottom_border = cv2.erode(bottom, kernel, iterations=1)

# Getting A point: scan from the top for the first non-empty row,
# then from the left for the first non-empty column
xa = 0
ya = 0

for row in range(left_border.shape[0]):
    if left_border[row, :].sum() > 0:
        ya = row
        break

for col in range(left_border.shape[1]):
    if left_border[:, col].sum() > 0:
        xa = col
        break

# Getting F point: scan from the bottom for the first non-empty row

xf = 0
yf = 0

for row in reversed(range(left_border.shape[0])):
    if left_border[row, :].sum() > 0:
        yf = row
        break

The problem is that even though this works for the A point, the left border obtained earlier with cv2.erode (hand-drawn in green) still has some noise at the top and bottom. As a result, when I try to get the yf coordinate by “scanning” the image from the bottom up, the scan stops on a small stray pixel near the bottom rather than on the actual bottom pixel of my left geometric border.

So I need your help: should I try to find a way to simplify the representation of my borders (smoothing? and if so, how?), or is there a better method to automatically position these coordinates?

I hope I’ve been clear enough. Have a nice day, everyone!

If your shape looks more or less the same from run to run (straight lines left/right, curved lines top/bottom), I’d consider fitting a line to the left/right edges and an ellipse to the top/bottom edges (assuming an ellipse models them correctly), then computing the intersection of the lines with the ellipses to get your corner points. Once you have those, you could run cv::cornerSubPix() on them to get a more accurate location.
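In rough OpenCV terms, a minimal sketch of that idea could look like the following, assuming you have already split the edge pixels per side into (N, 2) point arrays (side_pts for a straight side, cap_pts for a curved top/bottom cap; both are placeholder names) and that gray is the grayscale image. The line/ellipse intersection is approximated numerically by sampling the ellipse and keeping the sample closest to the fitted line:

import cv2
import numpy as np

def fit_corner(side_pts, cap_pts, gray):
    # Line through a straight side: (vx, vy) is a unit direction, (x0, y0) a point on it
    vx, vy, x0, y0 = cv2.fitLine(side_pts.astype(np.float32),
                                 cv2.DIST_L2, 0, 0.01, 0.01).ravel()

    # Ellipse through a curved cap: center, full axis lengths, rotation in degrees
    (cx, cy), (w, h), ang = cv2.fitEllipse(cap_pts.astype(np.float32))

    # Sample the ellipse densely and keep the sample closest to the line
    t = np.linspace(0, 2 * np.pi, 1440)
    a, b, th = w / 2.0, h / 2.0, np.deg2rad(ang)
    ex = cx + a * np.cos(t) * np.cos(th) - b * np.sin(t) * np.sin(th)
    ey = cy + a * np.cos(t) * np.sin(th) + b * np.sin(t) * np.cos(th)
    d = np.abs((ex - x0) * vy - (ey - y0) * vx)   # perpendicular distance to the line
    corner = np.array([[ex[d.argmin()], ey[d.argmin()]]], np.float32).reshape(-1, 1, 2)

    # Sub-pixel refinement around the approximate intersection
    crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    cv2.cornerSubPix(gray, corner, (5, 5), (-1, -1), crit)
    return corner.reshape(2)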


Thanks for your answer! That’s actually a solution I considered, but I wouldn’t know how to start fitting a regression on an ndarray, and the noise would still be present and could therefore bend the fit.

It’s been a while, but I did something similar (a 2D curved-surface fit) with (I think) the GNU Scientific Library (GSL). I don’t remember the details, but I think I used a function called QRSolve() (a QR decomposition method) to find the coefficients of a polynomial. I’m not at all saying this is the best way to approach the problem, but it worked well for me.
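Not GSL, but the same idea carries over to numpy: fitting a polynomial to one border is a linear least-squares problem, which a QR solver (or numpy’s SVD-based lstsq) handles. A minimal sketch, where ys and xs are hypothetical 1-D arrays of the row/column coordinates of one detected border (e.g. from np.nonzero(left_border)):

import numpy as np

def fit_poly(ys, xs, degree=2):
    # Design matrix [y^0, y^1, ..., y^degree], one row per border pixel
    A = np.vander(ys, degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(A, xs, rcond=None)   # least-squares solution
    return coeffs

def eval_poly(coeffs, ys):
    return np.vander(ys, len(coeffs), increasing=True) @ coeffs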

As for your outliers, you could fit your curve, filter out the bad outliers, and fit the curve to the inliers only (iterating a few times)… or maybe use a RANSAC approach.
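As a sketch of that fit / filter / refit loop (the k-sigma threshold and iteration count are arbitrary choices here):

import numpy as np

def robust_polyfit(ys, xs, degree=2, iters=3, k=2.0):
    keep = np.ones(len(ys), dtype=bool)
    for _ in range(iters):
        coeffs = np.polyfit(ys[keep], xs[keep], degree)   # fit inliers only
        resid = xs - np.polyval(coeffs, ys)               # residuals for every point
        keep = np.abs(resid) < k * resid[keep].std()      # keep points within k sigma
    return coeffs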

It seems workable to me.

Thanks for your insights!

You could compute math.hypot from the top right of the image (or somewhere in that quadrant) to all the contour points. Where the hypot length is the longest is where that corner is.

Or compute math.hypot from somewhere above where you drew the “F” on the image, and take the shortest length.
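A rough sketch of that idea, assuming cnt is the largest contour from the first snippet and taking the image’s top-right corner as the reference point (you would move the reference to whichever corner or label you are after):

import numpy as np

pts = cnt.reshape(-1, 2).astype(np.float64)   # contour points as (x, y) pairs

h, w = img.shape[:2]
ref = np.array([w - 1, 0], dtype=np.float64)  # top-right corner of the image

dists = np.hypot(pts[:, 0] - ref[0], pts[:, 1] - ref[1])
farthest_corner = tuple(pts[dists.argmax()].astype(int))  # longest hypot
nearest_corner = tuple(pts[dists.argmin()].astype(int))   # shortest hypot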

Thanks for your ideas Bren, I will look into this!
Actually, I’ve been going back a bit in my process, because it now seems that I’m rarely able to extract the blue geometric pattern from all my sample images.

I saw a post on this forum where they recommend training a U-Net on manually labeled masks so the neural network is able to recognize the pattern.