I saw this implementation of the cv2.multiply() function and I am struggling to understand it. Nothing about it is mentioned in the docs either. I am a beginner in OpenCV, just learning the basics.
Implementation:
scaled_img = cv2.multiply(img, (1,1,1,1), scale=1.5)
result = scaled_img.astype('uint8')
Context:
The implementation scales an image by scalar multiplication. Brightness/contrast scaling for color images feels like a deep topic in itself, but this is one of the approaches I came across that seems to work (to a beginner's eyes), and I can't understand why.
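For reference, here is the full snippet as I am running it. This is only a minimal sketch of my setup: 'input.jpg' is a placeholder name for my test image, which I assume cv2.imread loads as a 3-channel BGR uint8 array.
import cv2

img = cv2.imread('input.jpg')   # placeholder file name; loads as BGR, dtype uint8

# the implementation I am asking about
scaled_img = cv2.multiply(img, (1, 1, 1, 1), scale=1.5)
result = scaled_img.astype('uint8')

cv2.imwrite('brightened.jpg', result)   # this does look uniformly brighter to me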
What I am confused about:
First, I can't seem to understand why the code below won't work for brightness scaling. It just gives me a blue image. I would highly appreciate it if someone could explain why.
scaled_img = cv2.multiply(img, 1.5)
result = scaled_img.astype('uint8')
I am just multiplying all the pixel values in the image matrix by a scalar, right? So why would it set the green and red values to 0?
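To make the symptom concrete, here is a rough sketch of how I am checking the channels (again, 'input.jpg' is just a placeholder, and I am assuming the channel order is B, G, R as loaded by cv2.imread):
import cv2

img = cv2.imread('input.jpg')            # BGR, dtype uint8
scaled_img = cv2.multiply(img, 1.5)      # plain float instead of a tuple
result = scaled_img.astype('uint8')

print(img.mean(axis=(0, 1)))             # all three channel means are non-zero
print(result.mean(axis=(0, 1)))          # the G and R means come out as 0 for me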
Second, why are they using a (1,1,1,1) tuple to multiply the source image array? [r,g,b] is only 3 values, right? So why do we need a tuple of 4 values? And why are they multiplying by 1? It wouldn't change anything, right?
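For what it's worth, my mental model of what the (1,1,1,1) version computes comes from the formula dst = saturate(scale * src1 * src2) in the docs; expressed as a NumPy sketch (with a dummy image standing in for img), that would be:
import numpy as np

img = np.full((2, 2, 3), 100, dtype=np.uint8)   # dummy 3-channel image (assumption)

# src2 is 1 for every channel and scale is 1.5, so only the scale should matter
expected = np.clip(img.astype(np.float64) * 1.0 * 1.5, 0, 255).astype(np.uint8)
print(expected)   # every value becomes 150 here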
Third, I guess I am confused about how a tuple is being interpreted as an array. When I do something similar, like this, it works:
scaled_img = cv2.multiply(img, np.ones(4), scale=1.5)
result = scaled_img.astype('uint8')
OR
scaled_img = cv2.multiply(img, np.array([1,1,1,1], dtype='float64'), scale=1.5)
result = scaled_img.astype('uint8')
Is there a better way to implement brightness scaling in opencv-python?
Thank you in advance!