My application processes incoming frames from a live video stream and detects certain objects in them. I think changes in lighting are affecting my detection accuracy, so I want to keep the brightness level fixed, and to do so I am using the V (value) channel of the HSV color model.
I derive the mean of the V channel from the input image and try to keep it close to a predefined setpoint. I am using the Python code below for this:
import cv2
import numpy as np

ret, frame = cap.read()
hsvImage = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsvImage)
live_value = np.mean(v)
brightness_factor = predefined_setpoint / live_value
# scale V and clip to 0..255 so the cast back to uint8 does not wrap around
hsvImage[..., 2] = np.clip(hsvImage[..., 2] * brightness_factor, 0, 255)
frame = cv2.cvtColor(hsvImage, cv2.COLOR_HSV2BGR)
This method gives me a better result, but I still want to verify whether I am applying the logic correctly.
More importantly, my main concern is performance: this extra processing drastically reduces how often my Python loop can run, so the effective frame rate drops.
Is there a more lightweight approach or a built-in method that achieves the same goal?
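For reference, here is a lighter sketch I have been considering; it skips cv2.split by reading the V mean directly from cv2.mean and rescales only the V channel with NumPy. It assumes the same cap and predefined_setpoint as above, and I am not sure it is the idiomatic way:

import cv2
import numpy as np

ret, frame = cap.read()
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# cv2.mean returns the per-channel means as a tuple, so the V mean is
# available without an explicit cv2.split
live_value = cv2.mean(hsv)[2]
brightness_factor = predefined_setpoint / live_value

# rescale only the V channel, clipping to 0..255 before casting back to uint8
v_scaled = hsv[:, :, 2].astype(np.float32) * brightness_factor
hsv[:, :, 2] = np.clip(v_scaled, 0, 255).astype(np.uint8)
frame = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

Would something along these lines be noticeably faster, or is there a proper built-in way to do this?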