Find distance from color blue

For a project I need to find the distance from a blue line of tape.
Some background: I am trying to create an autonomous model car and need the distance between the camera and the lane.
How would I go about doing this?



got the camera calibrated? know the width of the tape? can you find the tape (segmentation mask, measure width in pixels)?

Sorry for being so unspecific. I have never done anything like this before, since I am 12. That said, this is the camera I would use:
Hope that helps,

oh ok.

the geometry is basically just similar triangles or intercept theorem.

you can skip more involved camera “calibration”. just take a picture of something you can measure physically (width of tape, distance to camera) and in pixels (literally selection rectangle in mspaint will do). choice of physical units doesn’t matter, just has to be consistent.

focal length [px] = width [px] / width [m] * distance [m]

that’s the single constant for the camera (and lens) that’ll relate angles to pixels. angles because it’s a camera, that’s all it sees, and because there’s an angle hidden in the projection equation (see also: camera matrix):

width[px] = f[px] * width[m] / distance[m]

where tan(angle) = width/distance. the calculations don’t involve actual angles most of the time, just f.
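As a sketch with made-up calibration numbers (the 0.05 m tape width, 0.5 m calibration distance, and 223 px measurement are placeholders, not real values):

```python
# one-time calibration from a single measured picture
width_px, width_m, distance_m = 223, 0.05, 0.5
f = width_px / width_m * distance_m   # focal length in pixels, about 2230

# later, estimate distance from the same tape seen at some other pixel width,
# by inverting width[px] = f[px] * width[m] / distance[m]
new_width_px = 100
distance = f * width_m / new_width_px  # about 1.115 m
```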

amazon link says 72 degrees field of view (diagonally), so I would expect f ~ 2230 or so, because f * tan(72/2 * pi/180) == hypot(2592/2, 1944/2)
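That figure can be checked numerically, assuming the listed 2592×1944 resolution and 72° diagonal field of view:

```python
import math

w, h = 2592, 1944                     # sensor resolution in pixels
fov = math.radians(72)                # diagonal field of view from the product page
half_diag = math.hypot(w / 2, h / 2)  # 1620.0 px from image center to corner
f = half_diag / math.tan(fov / 2)     # focal length in pixels
print(round(f))  # 2230
```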

I’d recommend first working on a single picture. since that’s a “pi camera”, there are many ways to take a single picture. when your code works on a single picture, you can easily extend it to work on (live) video. trying to develop while everything moves is just needless complication.

for the programming, you might want to start with

or whatever you need that leads up to that.

that should give you a “mask” image of your colored tape. you can measure on that by slicing one row of pixels (plot that, see what it looks like) and finding the indices of the first and last positive elements. you can write your own loop for that, or use numpy
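A minimal sketch of the measuring step (a synthetic array stands in for the mask here; in a real script you'd produce the mask with something like cv2.inRange() on an HSV-converted picture, with bounds tuned to your tape and lighting):

```python
import numpy as np

# synthetic stand-in for a thresholded mask image
mask = np.zeros((100, 200), dtype=np.uint8)
mask[:, 80:120] = 255           # pretend the blue tape covers columns 80..119

row = mask[50]                  # slice one row of pixels
on = np.flatnonzero(row)        # indices of all positive elements
width_px = int(on[-1] - on[0] + 1) if on.size else 0
print(width_px)  # 40
```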

all of that relates to “augmented reality”, where the task is to find the pose (distance, orientation) of a square of known size. OpenCV comes with an aruco module. that’s basically what you’re doing, just with a dimension more/less (tape vs square…)

from the equation you’ll see that distance depends strongly on how accurately you can measure pixel distances. since that has practical limits, you can and should use larger physical objects, i.e. make the tape wider rather than narrower.
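Plugging toy numbers into the equation illustrates this: the same one-pixel measurement error costs far more distance accuracy when the tape appears narrow (f and the tape width below are placeholders):

```python
f, width_m = 2230, 0.05             # focal length [px] and tape width [m], toy numbers
for px in (20, 100):                # tape appearing narrow vs. wide in the image
    d = f * width_m / px            # estimated distance [m]
    d_off = f * width_m / (px - 1)  # same tape, measured one pixel wrong
    print(px, round(d, 3), round(d_off - d, 3))
# at 20 px the one-pixel error shifts the estimate by roughly 0.29 m,
# at 100 px by only about 0.01 m
```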

practical tip: work with jupyter notebook/jupyter lab, rather than plain python scripts. notebooks are a bit more interactive and exploratory than developing an entire script by restarting it all the time. notebooks are also usable from Visual Studio Code. here's one of my less complicated recent notebooks (inline matplotlib graphs and images): so-72854648.ipynb · GitHub. to show numpy arrays as images inline, I use a bit of custom code in my PYTHONSTARTUP script that works similarly to the cv2_imshow shim available in google colab. oh, yes, you can work in google colab too, but that's running on a server, so you won't have access to any local devices (your camera).


I thought about that… and if that blue tape is for lane markings, and the camera doesn’t look straight down on the road, then you don’t have one distance to the tape, you have a view along the blue tape, like so:


(5,451 Dash cam Stock Video Footage - 4K and HD Video Clips | Shutterstock)

and in such a view, if you wanted to work with road markings… it’s not as easy anymore.

for a model car in a racetrack or something, you could make the situation simple and give it continuous/solid left and right boundaries, or one center track.

then you could do like a Lego Mindstorms lane following approach, take a horizontal slice of the picture, find the left and right lines, and steer so they’re symmetric in the view.
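A rough sketch of that steering rule on a synthetic mask (the row index and line positions are made up):

```python
import numpy as np

def steering_error(mask, row):
    """Look at one horizontal slice of a binary lane mask and return how far
    the lane center is from the image center (negative = lane is to the left)."""
    cols = np.flatnonzero(mask[row])
    if cols.size < 2:
        return None                         # lost the lines
    lane_center = (cols[0] + cols[-1]) / 2
    return lane_center - mask.shape[1] / 2

# synthetic mask: left line at columns 40..44, right line at 155..159
mask = np.zeros((240, 200), dtype=np.uint8)
mask[:, 40:45] = 255
mask[:, 155:160] = 255
print(steering_error(mask, 200))  # -0.5, so steer slightly left to re-center
```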

real road markings are often dashed or missing entirely. some simple, and unsuccessful, approaches still try to handle this with low level image processing, find the lines, extract curves delineating lanes, and feed that into some path planning… and then you have Tesla, implementing serious machine cognition.


Could I use two cameras, one on each side of the model car, to measure the lanes, and if so, would that make it easier?


you can but that won’t change anything. multiple cameras are required if you need actual depth perception, and lane detection doesn’t need that.

start by putting down some tape and snapping a picture from the model car’s view. picture from a phone camera is just fine for prototyping. it’s ok to post it here.

check this out for an impression: An image-processing robot for RoboCup Junior - Raspberry Pi

that’s doing the “row of pixels” thing, but apparently then follows the line a few steps ahead to react better to bends in the road. also that’s just evaluating a single lane-center line. doing that for two lane boundary lines is pretty much the same idea though.

Ok, I get what the machine is supposed to do by calculating the curvature of the lines.

Originally what I was going to do was measure the distance to each line, take the difference, and then turn a little and move if the car was off center - but I guess this is a better approach.

How would I program this, though? I am only 12 years old and have not explored image recognition much, so if there is a very simple free tutorial, that would be very useful.

Thanks so much for helping me with this.


I did some more exploring and I think it is a more viable option to calculate the curvature of the lines rather than following it.

There is nothing specific on this, though, so can you explain how I would do this (in simple terms)?

The resource that most explains my idea is this article: Advanced Lane Detection for Autonomous Vehicles using Computer Vision techniques | by Raj Uppala | Towards Data Science