# Adjusting brightness in a BGR color space

C++, OpenCV 4.3, VS 2017, Windows 10

Is there a way to dim an image if I am in the BGR color space? (Don’t ask, but I cannot switch to HSV or any other color space to do this.) Given BGR, I know that (255, 255, 255) is white and, I assume, bright. But (253, 254, 255) is also white and bright. If I check whether all of the channels are above, say, 200, and dim each by a percentage, that would probably work. But what about instances of (190, 255, 255) or something similar?

Is there a standard formula for identifying bright pixels and dimming them?

Ed

ignoring the issue of gamma curves, you can just multiply each channel by the same factor.
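in code, a uniform multiply with saturation might look like this (a plain C++ sketch with a hypothetical helper name; in OpenCV the same thing is `img.convertTo(img, -1, factor, 0)`, which applies the scale with saturating conversion):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Scale one 8-bit channel by `factor`, clamping the result to [0, 255].
inline std::uint8_t scale_channel(std::uint8_t v, double factor) {
    long scaled = std::lround(v * factor);
    return static_cast<std::uint8_t>(std::clamp(scaled, 0L, 255L));
}
```

applied to all three B, G, R channels with the same factor, this dims without shifting the hue.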

if you care about gamma mapping, you’d first have to move from gamma-compressed RGB to linear RGB. a good approximation is taking x^{2.2} where 0 \leq x \leq 1 (work with floats, scale CV_8U values accordingly), do your business, and then move back (x^{1/2.2}).
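a sketch of that round trip for a single channel, using the pure-power 2.2 approximation from above (not the exact piecewise sRGB curve); the helper name is hypothetical, and it assumes 0 ≤ factor ≤ 1 so no clamp is needed:

```cpp
#include <cmath>
#include <cstdint>

// Dim an 8-bit channel by `factor` in (approximately) linear light:
// gamma-decode with x^2.2, scale, then re-encode with x^(1/2.2).
// Assumes 0 <= factor <= 1; brightening would need a clamp to 255.
inline std::uint8_t dim_linear(std::uint8_t v, double factor) {
    double x = v / 255.0;                        // CV_8U -> [0, 1] float
    double linear = std::pow(x, 2.2);            // gamma-compressed -> linear
    linear *= factor;                            // do your business here
    double back = std::pow(linear, 1.0 / 2.2);   // linear -> gamma-compressed
    return static_cast<std::uint8_t>(std::lround(back * 255.0));
}
```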

more detail, including (a reproduction of) the official definition of the sRGB gamma curve: chilliant: sRGB Approximations for HLSL

I appreciate it. I had just found an article on converting color spaces at
https://www.codeproject.com/Articles/19045/Manipulating-colors-in-NET-Part-1#rgb2
and in the section on converting from RGB to HSL it suggested that the average of the RGB values could be considered a brightness indicator, as you suggest.

What I did was this: considering an average of 255 to be the maximum brightness, and supposing I want to lower the brightness by 10%, I look for any average greater than 90% of the maximum and then lower each RGB value by a uniform amount so that the new average will be 90%.
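As I understand the rule described above, it could be sketched like this (the 0.9 threshold/target is the 10% example from the post; `cap_average` and the 3-element `bgr` array are hypothetical names):

```cpp
#include <cmath>
#include <cstdint>

// If the B/G/R average exceeds `target` (a fraction of 255), scale all
// three channels by the same factor so the new average lands at `target`.
inline void cap_average(std::uint8_t bgr[3], double target = 0.9) {
    double avg = (bgr[0] + bgr[1] + bgr[2]) / 3.0;
    double cap = target * 255.0;                  // 229.5 for target = 0.9
    if (avg <= cap) return;                       // already dim enough
    double factor = cap / avg;                    // uniform multiplier < 1
    for (int i = 0; i < 3; ++i)
        bgr[i] = static_cast<std::uint8_t>(std::lround(bgr[i] * factor));
}
```

because every channel is multiplied by the same factor, the channel ratios (and thus the hue) stay put while the average drops to the cap.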

For what I am doing, this seems to be fine, and going into gamma mapping would just be overkill for me. At least I believe that average method will work for me. I am still in the implementing phase. If worst comes to worst, I will have to study that article you point to.

Thanks again.

Ed

adding/subtracting is technically wrong. at least use multiplication.

you can check it yourself. you’d expect any color, dimmed to nothing, to become black, right? if you just subtract, you’ll reach one channel’s zero, but some other channel will still be positive… or worse, you get negative values, which is nonsensical.
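to see it with the (190, 255, 255) example from the first post (hypothetical helper names, plain C++):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Dim by subtracting a constant, clamped at 0 (the "technically wrong" way).
inline std::uint8_t dim_sub(std::uint8_t v, int amount) {
    return static_cast<std::uint8_t>(std::max(0, v - amount));
}

// Dim by multiplying by a factor (ratios, and thus hue, are preserved).
inline std::uint8_t dim_mul(std::uint8_t v, double factor) {
    return static_cast<std::uint8_t>(std::lround(v * factor));
}
```

subtracting 200 from (190, 255, 255) gives (0, 55, 55): the blue channel has bottomed out while the others are still lit, so the color has shifted instead of dimming to black. multiplying by 0 gives (0, 0, 0), as you’d expect.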

what do you need this for anyway? don’t you need this to make some sense?

Yes, when I say that I “lowered” the values, what I did was get the percentage it would take to bring the average to 90% in this example, and then apply that percentage to each value: red × pct, green × pct, etc. And then, of course, I check for any negative values and raise them to zero.

I am tracking lasers on a projection screen, and I need to make sure that nothing in the video competes with the laser light in brightness or redness. That gives the program a better chance of avoiding false positives.

Ed

You may not need to do the gamma-correct adjustment for your use case, but for anyone else who comes across this (especially when working with real-world images, image blending, etc.), taking gamma into account can be really important.

For your use case, if I understand it, would it be possible to take a reference (“background”) image, and then subtract the background image from the target image, leaving only the laser dot?
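the per-pixel operation behind that idea, in case it helps anyone: take the absolute difference between the live frame and the reference frame, so only what changed (the dot) survives. In OpenCV that’s `cv::absdiff(frame, background, diff)`; per channel it’s just (hypothetical helper name):

```cpp
#include <cstdint>
#include <cstdlib>

// Per-channel absolute difference: pixels matching the background drop
// to ~0, leaving only what changed between frames (e.g. the laser dot).
inline std::uint8_t abs_diff(std::uint8_t live, std::uint8_t background) {
    return static_cast<std::uint8_t>(std::abs(live - background));
}
```

in practice you’d follow this with a threshold, since camera noise keeps the “unchanged” pixels slightly above zero.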

Or reduce your exposure value until you don’t have any pixels that are near 255?

For me, this is a pre-processing function for the video frames that get projected onto the screen. There may be videos, for instance, that have sunlight twinkling through the trees or a flash of a reflection off a car mirror that, left as is, might get mistaken for a laser on the screen.

There is a camera that streams what is on the screen, and it is those frames that I process to look for lasers. Having the original video pre-subdued helps the live feed laser stick out more and reduces the chance of false positives.

I’m hoping my simple color averaging and reducing will be sufficient. I will have to finish the user interface and then test with various videos before I know for sure.

Thanks for the input.

Ed

just give the presenter a more powerful laser pointer. something that clearly is brighter than the projection.

years ago I did laser pointer extraction where I needed to work with faint laser pointers, on mostly static images fortunately.

I’m working with laser cartridge inserts that go into pistols, and they come only in <5 mW power: red and near-IR for the cartridges, and near-UV for a “flashlight” feature. The videos can be anything, including user videos, over which this pre-processing is the only control we have at that point. Always fun…

assuming (near-)IR LEDs,
maybe you add some DIY hardware filter to the camera that blocks non-IR light?
(to better separate the projected video from the laser signal)

old floppies or developed film negatives do pretty well at this …

Normally that would be good, except I let the users use pretty much whatever webcam they have. The exception is when using the IR laser; then, of course, they need an unfiltered or “IR” camera, which is also needed when using the UV laser for the “flashlight”.

For our own videos we pre-process them through Premiere Pro before bundling them with the software. We are now adding the option for users to use their own videos, so I am having to guess what Premiere Pro did behind the scenes with its Lumetri Color Curves. I believe that I have it working, but I need to do the user interface first before final and more complete testing.

Ed