okay so the first step is semantic segmentation.
NB: chuck Canny. it’s a newbie trap. I always struggle to convey the flavor of “wrong” to newbies, so consider this: the tutorials teach you how to use a wood saw, an axe, and a blow torch. now you’re faced with a leaking pipe. applying the axe and wood saw to a leaking copper pipe isn’t just gonna “do nothing” (in which case trying harder would at least be harmless). it actively worsens the situation, even though a murdered pipe and a dull saw look like progress. applying the blow torch isn’t gonna do anything either, apart from boiling the water in the pipe; the steam escaping from the leak, if it doesn’t build pressure first, will cause other trouble… that is how wrong Canny is, in most situations. (what the torch is actually for: soldering the hole shut… if you have the solder, and you drain the pipe first, and… all that)
back to segmentation. I’m saying semantic because segmentation by itself just means labeling pixels (true/false, or with more classes); “semantic” means the labels carry meaning, like “asphalt” or “lane marking”.
your picture might be segmentable with “good” old thresholding in the right color space. the asphalt would be brownish-gray and dark, the lane markings gray and bright, and the red/yellow stuff saturated and bright-ish…
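something like this sketch (pure numpy so it’s easy to poke at; the thresholds are completely made up, tune them on your data; with OpenCV you’d usually go `cv.cvtColor(img, cv.COLOR_BGR2HSV)` plus `cv.inRange` instead):

```python
import numpy as np

def sat_val(rgb):
    # rgb: float array in [0, 1]; compute value and saturation the HSV way
    v = rgb.max(axis=-1)
    s = np.where(v > 0, (v - rgb.min(axis=-1)) / np.maximum(v, 1e-9), 0.0)
    return s, v

def segment(rgb):
    # thresholds are guesses for illustration, not measured on real footage
    s, v = sat_val(rgb)
    asphalt = (s < 0.3) & (v < 0.5)    # dark and drab
    white   = (s < 0.25) & (v > 0.7)   # bright and drab
    colored = (s > 0.5) & (v > 0.4)    # saturated, bright-ish
    return asphalt, white, colored
```

each returned map is one of those true/false label images.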
or, if the camera view doesn’t move, you can “paint” the areas yourself once in some kind of image editor. sounds dumb but that’s exactly what “data annotation” is: people sitting there, painting “ground truth” on top of images.
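turning such a painting back into masks is trivial, assuming you painted each region in a distinct pure color and loaded the file as an (H, W, 3) uint8 array (the palette and label names below are made up for illustration):

```python
import numpy as np

# hypothetical palette: whatever colors you chose in the editor
PALETTE = {
    "asphalt": (255, 0, 0),
    "white_marking": (0, 255, 0),
    "colored_marking": (0, 0, 255),
}

def masks_from_painting(label_img):
    # exact color match is fine here because you painted the colors yourself
    return {name: np.all(label_img == color, axis=-1)
            for name, color in PALETTE.items()}
```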
or you can throw a neural network at it.
I’m gonna try boring low level operations.
Saturation component:
selecting some ranges in HSV space
false color (previous three composited as RGB… they may overlap or not!)
(this could all look a little smoother with… a smoothing filter)
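the compositing and the smoothing are both a couple of lines; here’s a sketch, with a crude majority-vote smoother standing in for what you’d normally do with `cv.medianBlur` or a morphological open/close:

```python
import numpy as np

def false_color(asphalt, white, colored):
    # stack the three boolean maps into R, G, B channels;
    # overlapping labels show up as mixed colors
    return np.stack([asphalt, white, colored], axis=-1).astype(np.uint8) * 255

def smooth(mask):
    # crude 3x3 majority vote to knock out speckle; in practice reach for
    # cv.medianBlur or morphological open/close instead
    m = np.pad(mask.astype(np.uint8), 1)
    h, w = mask.shape
    votes = sum(m[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return votes >= 5
```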
now… assuming you have cars on that (plus the individual segmentation maps), you could just test which labels the car’s mask intersects (blank asphalt, white marking, colored marking) and use that for your decision.
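that decision is just counting overlapping pixels per label and taking the biggest; a minimal sketch (the label names are placeholders, and `car_mask` is whatever your car detector gives you as a boolean map):

```python
import numpy as np

def label_under_car(car_mask, label_masks):
    # count how many of the car's pixels fall on each label map
    # and return the dominant label
    counts = {name: int((car_mask & m).sum())
              for name, m in label_masks.items()}
    return max(counts, key=counts.get)
```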