Confusion in cv2.multiply() function

I saw this use of the cv2.multiply() function and I am struggling to understand it. The docs don't explain it either. I am a beginner in OpenCV, just learning the basics.

Implementation:

scaled_img = cv2.multiply(img, (1,1,1,1), scale=1.5)
result = scaled_img.astype('uint8')

Context:
The code scales an image by scalar multiplication. Brightness/contrast scaling for color images seems like a deep topic in itself, but this is one approach I came across that appears to work (to a beginner's eyes), and I can't understand why.

What I am confused about:
First, I can't understand why the code below doesn't work for brightness scaling; it just gives me a blue image. I would appreciate it if someone could explain why.

scaled_img = cv2.multiply(img, 1.5)
result = scaled_img.astype('uint8')

I am just multiplying all the pixel values in the image matrix by a scalar, right? So why would it set the green and red values to 0?

Second, why is a (1,1,1,1) tuple used to multiply the source image array? [r,g,b] is just 3 values, right? So why do we need a tuple of 4 values? And why multiply by 1? That wouldn't change anything, right?

Third, I am confused about how a tuple is being interpreted as an array. When I do something similar like this, it works:

scaled_img = cv2.multiply(img, np.ones(4), scale=1.5)
result = scaled_img.astype('uint8')

OR

scaled_img = cv2.multiply(img, np.array([1,1,1,1], dtype='float64'), scale=1.5)
result = scaled_img.astype('uint8')

Is there a better way to implement brightness scaling in opencv-python?

Thank you in advance!

let’s start here:

>>> help(cv2.multiply)
multiply(src1, src2[, dst[, scale[, dtype]]]) -> dst

Second, Third:

(1,1,1,1), np.ones(4), np.array((1,1,1,1)) – it's all the same to the underlying C++ API. src2 will be converted either to a cv::Scalar (if it's a tuple or a number) or to a cv::Mat (if it's an image).

so, if you want to use the scale value, you still have to give a (1,1,1,1) tuple as the 2nd arg, since "positional arguments" are mandatory in Python

First:

that's unfortunately how the cv::Scalar constructor behaves: "missing" values are initialized to 0. So:

cv2.multiply(img, 1.5) === cv2.multiply(img, (1.5,0,0,0))

Not sure if it's better, but you can still use simple built-in multiplication:

scaled_img = img * 1.5 # warn: type conversion 8u -> 64f

That cleared up a lot of things, Thanks!

I am still struggling to understand one thing:
Why do we have to pass in a tuple/array of length 4?
A pixel is just 3 values, [r, g, b].
So why does the multiply function give me an error when I do this:

cv2.multiply(img, (1.5,1.5,1.5))

error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:652: error: (-215:Assertion failed) type2 == CV_64F && (sz2.height == 1 || sz2.height == 4) in function ‘cv::arithm_op’

My question is: what does the last value 'x' in the tuple ( _ , _ , _ , x) stand for?

Also I tried this:

scaled_img = img * 1.5 

However, since the NumPy uint8 operation wraps around (modulo 256) on overflow, it does not work correctly for me.
I think I would need to do a lot of conversions from uint8 → float64 → uint8 again, or normalize the values from 0-255 to 0-1.

well, again, since src2 is not a 2d Mat of the same size as src1, it checks whether src2 is a valid Scalar (1d, 4 double elems), and with only 3 elems it isn't.

it could be [b,g,r,a] , no ?

no. not always, see the example below …
however, there is an implicit conversion to double that might need reversal; also numpy follows different rounding rules than opencv's multiply()
(so, careful!):


a = np.array([[[1,2,3]]],np.uint8)

b = a*1.5
#array([[[1.5, 3. , 4.5]]])  #dtype=np.float64 !!

c = b.astype('uint8')
#array([[[1, 3, 4]]], dtype=uint8)  # surprise !

d = cv2.multiply(a, (1,1,1,1), scale=1.5)
#array([[[2, 3, 4]]], dtype=uint8)

# be *extremely* careful !!
b = a*150 #integer mul -> overflow -> modulo, indeed !
#array([[[150,  44, 194]]], dtype=uint8)  

b = a*150.0 # double mul -> overflow, no modulo here
#array([[[150., 300., 450.]]])

c = b.astype('uint8') # but here !
#array([[[150,  44, 194]]], dtype=uint8)
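If you do want to stay in NumPy, the wrap-around can be avoided by clipping before the cast back to uint8. A minimal sketch of that pattern (not the only way to do it):

```python
import numpy as np

a = np.array([[[1, 2, 3]]], np.uint8)

# multiply in float64, round, clamp to the uint8 range, then cast:
# 1*150=150 stays, 2*150=300 and 3*150=450 clamp to 255 instead of wrapping
b = np.clip(np.rint(a * 150.0), 0, 255).astype(np.uint8)
print(b)
```

np.rint rounds to nearest (ties to even), which matches OpenCV's rounding more closely than the plain truncation that astype('uint8') does.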


1 channel (mono), 3 channel (rgb/bgr) and 4 channel (rgba/bgra) are the most common image formats; that's why the OpenCV pixel color type is a 4-channel value, where only the channels present in the actual image are used.

If you know that your image only has 3 channels, you can also use cv::Scalar(1.5, 1.5, 1.5) in C++, but I don't know about the Python wrapper.
Scalar channels which are not present in the image are just ignored.

Thank you so much for the case wise explanation! The types are pretty clear now.

I couldn't see the alpha value when I printed the image array, so that caused the confusion. So the Python wrapper always checks for a 4th element in the Scalar array because of the alpha in BGRA, even if my image array does not contain an explicit alpha value?

Thank you again with the explanations!

Yeah, my image array does not have an alpha value from what I can tell. So I guess it still requires me to give the 4th value even if it's not relevant to the corresponding image array.

Thank you!
