How to divide an image by another image in opencv-python?

Thank you @berak .

I looked into cv2.normalize() and here’s a note from me.

Evidently, I was loose with the language.

First, I should have said that I wanted to implement local normalization. By that I mean dividing each pixel’s intensity in the original image by an intensity value unique to that particular pixel. That is, I have an original image, which can be considered a matrix of pixel intensities, and a corresponding matrix of the same size in which every entry is the normalizing intensity. That second matrix can itself be considered an image, which is why I wanted to divide an image by another image.

Second, personally, if somebody said “normalization” to me (not local, as mentioned above, but just normalization), I would probably assume that I needed to divide each pixel’s intensity of the original image by some value that is the same for all pixels. Usually that value is the norm of the image (i.e., the norm of the matrix representing the image). But evidently that is not the case with cv2.normalize().
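That kind of global normalization is a one-liner in NumPy. A sketch with a toy array chosen so the norm is obvious (3-4-5 triple):

```python
import numpy as np

# Toy "image" whose L2 (Frobenius) norm is easy to see: sqrt(3^2 + 4^2) = 5.
img = np.array([[3.0, 4.0]], dtype=np.float32)

l2 = np.linalg.norm(img)       # the single scalar shared by all pixels
normalized = img / l2          # the result has L2 norm 1
```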

Third, as stated in the cv2.normalize() documentation, the function converts the original image img into another image dst such that the norm (e.g., the L2 norm) of dst equals the supplied parameter alpha. Personally, this is an absolutely new definition of the term “normalization” to me; I had never heard of it before, and I don’t see how it can help with my task. I’m going to stick with the NumPy solution.