Detecting a curved line

Hi,

My first serious foray into OpenCV and, as part of a bigger project, I need to detect the curved line in an image based on a couple of mouse clicks. Essentially I am trying to find the closest continuous line to the points selected with the mouse. I’ve added below the result of a bit of pre-processing and running Canny edge detection. I’ve added a few arrows to point out the line that I want to detect.

I’m not looking for a step-by-step answer, just a few tips. My thinking is to take advantage of the black region underneath the line and essentially assume the line will always be there, right at the edge of the black region.

Thanks!

With some further processing I have managed to get a pretty good separation. With an ROI I should be able to extract the exact line.

source data please. I’d like to see for myself whether the filters you applied are a good idea here.

It’s a fixed camera on a race track. I’m interested in the line just below the red/yellow band on the track. It’s a very thin white line.

okay so the first step is semantic segmentation.

NB: chuck Canny. it’s a newbie trap. I always struggle to convey the flavor of “wrong” to newbies. consider that the tutorials teach you how to use a wood saw, an axe, and a blow torch. now you’re faced with a leaking pipe. applying the axe and wood saw to a leaking copper pipe isn’t just gonna “do nothing” (so doing it harder doesn’t hurt anything). it’s actually worsening the situation, even though a murdered pipe and a dull saw look like progress. applying the blow torch isn’t gonna do anything either, apart from boiling the water in the pipe, the steam escaping from the leak if not building pressure, which will cause other trouble… that is how wrong Canny is, in most situations. (what to do with the torch: solder the hole… if you have the solder, and you drain the pipe first, and… all that)

back to segmentation. I’m saying semantic because segmentation basically just means labeling pixels (true/false or with more choices).

your picture might be segmentable with “good” old thresholding in specific color spaces. the asphalt would be brownish/gray and dark, while the lane markings are gray and bright, and the red/yellow stuff is saturated and bright-ish…
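for example, something along these lines — a minimal sketch, where the ranges are made-up placeholders, not values tuned for this footage:

```python
import cv2

frame = cv2.imread("frame.png")                 # one frame from the fixed camera
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# all ranges below are rough guesses, tune them on the real footage

# dark, low-saturation pixels -> asphalt
asphalt = cv2.inRange(hsv, (0, 0, 0), (180, 80, 120))

# bright, low-saturation pixels -> white markings
white_marking = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))

# saturated, bright-ish pixels -> the red/yellow band
# (red hue wraps around 0/180 in OpenCV, so a second range may be needed)
colored_marking = cv2.inRange(hsv, (0, 100, 100), (40, 255, 255))
```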

or, if the camera view doesn’t move, you can “paint” the areas yourself once in some kind of image editor. sounds dumb but that’s exactly what “data annotation” is, people sitting there and painting “ground truth” on top of images.

or you can throw a neural network at it.

I’m gonna try boring low level operations.

Saturation component:

selecting some ranges in HSV space

false color (previous three composited as RGB… they may overlap or not!)

(this could all look a little smoother with… a smoothing filter)
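roughly like this, continuing with the masks from the sketch above (`asphalt`, `white_marking`, `colored_marking` as the three binary masks):

```python
# composite the three masks into one false-color image;
# imshow interprets the channels as B, G, R
false_color = cv2.merge([asphalt, white_marking, colored_marking])

# a median blur knocks out the speckle and smooths the maps a bit
false_color = cv2.medianBlur(false_color, 5)

cv2.imshow("false color", false_color)
cv2.waitKey(0)
```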

now… assuming you have cars on that (the individual segmentation maps), you could just test which labels (blank asphalt, white marking, colored marking) the car’s mask intersects, and use that for your decision.
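the test itself can be as simple as a per-pixel AND plus a count; `car_mask` here is a hypothetical binary mask of the car, obtained however you like (background subtraction, a detector, …):

```python
import cv2

def overlap(car_mask, label_mask):
    """number of pixels where both binary (0/255) masks are set"""
    return cv2.countNonZero(cv2.bitwise_and(car_mask, label_mask))

# e.g. decide based on which label the car touches:
# if overlap(car_mask, colored_marking) > 0: the car is on the colored band
```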


Nice way of putting it with the tools. It’s exactly how it feels: 99% of tutorials only talk about Canny, simple thresholds and going straight to grayscale.

Let me clarify if I understood correctly and ask some follow-up questions. If I change the picture to an HSV representation I should be able to use thresholding to segment the image into a few basic categories (asphalt, lines etc.). I get that. What I don’t get is how I can actually check if the car is on the asphalt above the line rather than on the asphalt below it. During segmentation they’ll both look like the same class. Also, what sort of representation is best to use for the individual segments? Just save them as images with the other classes blacked out?

color spaces… pick whatever space makes the space of values align well with the cube spanned by inRange. often HSV/HSL fit the task better than RGB. sometimes they don’t, and maybe RGB fits better, or some other space fits better.

the simplest representation, here, is a binary image. that’s like grayscale except no shades of gray. that’s easy to do math on (per-pixel boolean operations, and counting).
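a mask in OpenCV is just a single-channel 8-bit image with 0 for false and 255 for true; a quick self-contained sketch of the per-pixel operations and the counting:

```python
import numpy as np
import cv2

a = np.zeros((480, 640), dtype=np.uint8)
b = np.zeros((480, 640), dtype=np.uint8)
a[200:300, 100:400] = 255          # one "true" region
b[250:350, 300:500] = 255          # another region, partially overlapping

both    = cv2.bitwise_and(a, b)    # intersection
either  = cv2.bitwise_or(a, b)     # union
neither = cv2.bitwise_not(either)  # complement

print(cv2.countNonZero(both))      # how many pixels the two masks share
```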

well, segmentations/masks could also be represented by vector data, i.e. polygons… but what’s more useful (or useful at all) depends on the situation. this situation is served well with raster images.

as for how to do an intersection test on such masks, check the other thread.


Thanks a lot @crackwitz ! I’m still a bit lost on one topic. I know there’s the option of manually labeling the image segments, but I’m just curious whether there’s another way to fill in the entire area above the line, as essentially that’s where the object should never be.

Just by using segmentation in color space I’ll get the same labels both below the line (where objects are allowed to be) and above the line - which is the condition I’m trying to detect. Since it’s only one frame, I can manually process it in any image editor to label it correctly, but this is more of a learning exercise for me, so I’d like to know if there’s a proper way of doing it. I’ve manually modified one of the pictures to clarify what I mean. Essentially I need to make sure the object doesn’t go above the blue line. I was able to use the bucket fill tool from Paint to highlight a big part of the area.

PS: Could you recommend a better path for going deeper into OpenCV? Maybe a course or a book to get at least an overview of the various tools and how they can be used? Just so the next time I need to change the flooring in a room I’m not tempted to bring out the blow torch :slight_smile:

hmmmm, this is specifically for this picture/view… if you have a mask for that line (colored areas of the picture… except for the tires on the bottom), you could, for each pixel column, just find the lowermost mask pixel, and then paint upwards, i.e. draw a line, or take a slice (numpy) and assign the label/color.
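a minimal sketch of that column-by-column fill; `line_mask` is a hypothetical binary mask of the colored line:

```python
import numpy as np

def area_above(line_mask):
    """build a mask of everything above the lowermost line pixel, per column
    (line_mask: hypothetical binary 0/255 mask of the colored line)"""
    h, w = line_mask.shape
    forbidden = np.zeros((h, w), dtype=np.uint8)
    for x in range(w):
        ys = np.flatnonzero(line_mask[:, x])  # rows where the line shows up in this column
        if ys.size:
            forbidden[:ys.max(), x] = 255     # paint everything above the lowermost line pixel
        # columns with no line pixels (e.g. behind the tires) stay empty
    return forbidden
```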

that’ll get in trouble in the lower left quarter of the picture because of the colored tires…

this kinda goal contains high level semantics (scene understanding, where’s the road, where is off-road) so, regardless of the “how” (how to calculate a mask or whatever), you always need something that says “what’s the goal”, and that is a human or an AI.

PS: no idea. I took a course at uni. that gives breadth. the rest is curiosity, having some problem you want to solve, and some reason to justify spending time on it… and practice/experience.


Roger :slight_smile:

I have manually edited the mask and all is working well. After taking a step back, I now understand what you mean about how tricky it would be to make a system that automatically detects the line in any situation.

I’ll try to find a Udemy course to at least get an overview of what is possible. As you also pointed out, starting with “I have problem X” and then searching forums/Stack Overflow is not really the best approach.