How to detect a (LED) Light Source on an image

Hello OpenCV Community,

The project I am currently working on requires me to detect (LED) light sources in an image or video feed. To do this, I want to use OpenCV, since it looks like a very capable library. I am new to OpenCV, however, and am unsure how to proceed.

I watched some basic YouTube tutorials on OpenCV and understand the basics, but I am far from able to implement this on my own. I therefore searched for similar projects on GitHub, but unfortunately I can't quite get them to work, or don't really understand what is going on.

This is why I am making this post: to ask for tips / advice on how to tackle this project, as well as some explanations if possible.

As a practical example, I have this image of an LED light source as input, on which I want to detect the light source and draw a circle around the LED to highlight it. So in short:

Input: original image → OpenCV processing → Output: image with circle around detected LED light source.

The input and output image examples (imgur link):

How can I achieve this effect with OpenCV ?

Thank you for your help !

With Kind Regards,



that very much depends on how your light source looks in a picture. I can tell you right now: whether it's an LED or something else will be hard/impossible to distinguish.

there is always the possibility that you won’t detect the “light source”, or that you will detect something as “light source” when it’s something else. approaches can be judged in how likely they are to make errors of these kinds.

you can approach this with varying complexity and quality. that can go as far as training a neural network… if you have the data.

simplest thing you can do is to find areas that are overexposed, i.e. almost perfect white. inRange or threshold, findContours, boundingRect or minEnclosingCircle.



Thank you for your reply.

Building a neural network is not the intention for this project, so I would like to detect the light source using built in methods in the OpenCV library.

In my current attempts at detecting the light source, I tried using the "HoughCircles" method on a grayscale version of the input image, but it's not working.

You mentioned several functions included in the OpenCV library. Which one do you think suits my project best ? Or should these operations be used together to detect the light ? I’m not sure what each one does.

Thank you for your help !

With Kind Regards,


Your question is more about the programming logic to be implemented, therefore:
If you have 16-bit images (or frames), the intensity goes up to 65535 in each pixel; when a pixel is saturated (65535) or near that value, it means there is a source of light. So implement an "if" with this condition, for example:

if (matrix[n][m] == 65535) {
    // draw a circle centered in this region; see cv::circle()
}

or even you can detect light source and classify them in intensity:

for (int n = 0; n < img.rows; n++) {
    for (int m = 0; m < img.cols; m++) {
        if (matrix[n][m] >= 45000 && matrix[n][m] <= 50000) {
            // draw a circle centered in this region
            // put text in the circle: source of light with intensity 45000 to 50000
        }
    }
}

(If you calibrate your camera, you will be able to relate the distance of the source to its intensity. That will only work for the same source; each new source needs a new calibration.)

You should define for the program what a light source is, by establishing conditions that characterize, as closely as possible to reality, the behavior of a light source in front of a sensor. If you specifically want to detect only LED sources, I imagine you will need a bandpass filter in front of the camera lens and an interferometer to get values for each wavelength, and then similarly tell the program, with conditions, what an LED light source looks like to the sensor.

I do not know if this is the best approach for your case; nevertheless, you can start with this simple suggestion.


that’s a newbie trap. newbie traps: Canny, any Hough transform, matchTemplate

these algorithms only give the subjective feeling of getting you anywhere. they often don’t. in the vast majority of situations, they will harm you and your data. they are the wrong tool for most tasks. they are a Fata Morgana.

to be clear: I’m telling you not to use these here because they will not solve the problem and any use of them here will do things to your data that are counter-productive (harmful).

I gave you a list of functions. wherever I said "or", it's a choice; but you need every step of the sequence.

use OpenCV: OpenCV modules to find out what they do (search box in the docs, top right) and look at the tutorials section.

also feel free to ignore the post above this. it doesn’t apply to you (it’s safe to assume that you don’t have 16 bit data) and also leads you astray. you do not want to touch individual pixels. OpenCV has functions that work on whole images at once.



Thank you for your reply.

I used the OpenCV procedures you proposed to detect the light spot in an image: the function "threshold" on a grayscale image, findContours to find the spot's contour, and minEnclosingCircle to draw a circle the size of the light source. This seems to work quite well.

I have an extra question about threshold, however. In the current iteration of my code, I have set the thresh and maxval values manually, but this depends on the "brightness" of the whitest pixels in the grayscale image: if the original input image is yellow instead of pure white light, the brightness of the light source in the grayscale image will be less than pure white as well.

My question: is there a way in OpenCV to detect the average brightness of the whitest pixels in a grayscale image? That way I can dynamically set the thresh value in the threshold call based on the brightest pixels in the grayscale image, instead of manually hardcoding it as I do now.

Example of what I use now, for a specific input:

thresh = cv2.threshold(grayImage, 240, 255, cv2.THRESH_BINARY)[1]

The problem with this is that the thresh value (240 here) is not dynamic.

To remedy this, I also tried using Otsu's method together with a Gaussian blur (to reduce noise), since Otsu's method calculates the thresh value on its own, and this also seems to work well. The issue with this method, however, is that I cannot define a "lower bound", i.e. a minimum brightness value for the thresh parameter: with Otsu's method, it doesn't matter what you pass in, as the algorithm tries to find the optimal thresh value itself.

threshvalue = 150 # arbitrary with otsu method
maxvalue = 255 # pure white
gaussianBlur = cv2.GaussianBlur(grayImage, (5, 5), 0)
ret, thresh = cv2.threshold(gaussianBlur, threshvalue, maxvalue, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

The reason I want this lower bound is that Otsu's method still detects the shape of the light source (the lamp) when it is turned off.

For clarification: my input is a video file of a light source that turns on and off. When it is on, I want to detect the light; when it is off, no light should be detected. With Otsu's method, however, the lamp is still detected when it is turned off, because Otsu finds a thresh value low enough to pick up the turned-off lamp, which is still the brightest object in the frame.

Is it possible to detect the light only when it is turned on? If so, how can I do this?
Is Otsu's method a correct path forward, or should I use something else?

Thank you very much for your help !

With Kind Regards,


don’t use otsu here. it picks a threshold using criteria that are entirely unsuitable to your situation.

simple solution: find the maximum value in your picture, work relative to that. either subtract a fixed offset or multiply by some factor (0.99? 0.95?).

if you want to explore, consider calculating a histogram. then you can say “I want the top 1% value”, and that’ll give you the threshold that selects the brightest 1% of all pixels.