How to divide an image by another image in opencv-python?

Hello dear OpenCV community.
I’m wondering if there is a command in OpenCV-Python that can divide an image by another image.
MATLAB has one called imdivide. I’ve failed to find an analog in Python in general and in OpenCV in particular.
Thank you in advance.
Ivan
P.S. If there is none, I guess, I will go with numpy to divide the images …
P.P.S. If anybody is interested, I need it to implement a min-max filter, which is also not implemented in OpenCV. According to the forums, the general consensus on the min-max filter is to subtract the eroded image from the dilated one. But the min-max filter is actually more than that: one of its other parts is normalization, and that’s where I need image division.

since it is all about numpy arrays, you can simply use the built-in / operator:

>>> import numpy as np
>>> a = np.array([[1,2],[3,4]])
>>> b = np.array([[2,2],[2,2]])
>>> a / b
array([[0.5, 1. ],
       [1.5, 2. ]])

and, before you start writing a min-max filter, have a look at cv.normalize()

Thank you @berak .

I looked into cv2.normalize() and here’s a note from me.

Evidently, I was loose with the language.

First, I should’ve said that I want to implement local normalization. By that I mean dividing each pixel’s intensity in the original image by an intensity value unique to that particular pixel. That is, I have the original image, which may be considered a matrix of pixel intensities, and a corresponding matrix of the same size where every entry is the normalizing intensity. That second matrix can itself be considered an image, which is why I wanted to divide an image by another image.

Second, personally, if somebody said “normalization” to me (not local, as mentioned above, but plain normalization), I would probably assume that I need to divide each pixel’s intensity of the original image by some value that is the same for all pixels. Usually, that value is the norm of the image (i.e. the norm of the matrix representing the image). But, evidently, that is not what cv2.normalize() does.

Third, as stated in the cv2.normalize() manual, the function converts the original image img to another image dst such that the norm (e.g. the L2 norm) of dst equals the supplied parameter alpha. Personally, this is an absolutely new definition of the term “normalization” to me; I had never heard of it before. I don’t see how it can help with my task, so I’m going to stick with the numpy solution.
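
To make the difference concrete, here is a toy sketch of my understanding (the numbers are arbitrary):

import numpy as np
import cv2 as cv

img = np.array([[3.0, 4.0]])

# what I would call normalization: divide by the (global) norm of the image
by_norm = img / np.linalg.norm(img)  # L2 norm is 5.0 -> [[0.6, 0.8]]

# what cv2.normalize with NORM_L2 does: scale img so that norm(dst) == alpha
dst = cv.normalize(img, None, alpha=1.0, norm_type=cv.NORM_L2)
print(np.linalg.norm(dst))  # -> 1.0 (for alpha=1 the two coincide)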

indeed, normalize(NORM_MINMAX) uses a single, global min/max value.
(simply, for each pixel, subtract the global min, then divide by global (max-min))
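
roughly, in numpy terms (a sketch of the idea, not the exact library code):

import numpy as np

def norm_minmax(img, alpha=0.0, beta=1.0):
    # same idea as cv.normalize(img, None, alpha, beta, cv.NORM_MINMAX):
    # one global min and one global max for the entire image
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    return alpha + (img - lo) * (beta - alpha) / (hi - lo)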

what would that be ? some local neighbourhood ?

which is, exactly ? please explain (on a higher level) !
what’s the concept behind the min-max filter, and what do you need it for ?

i wonder, where you found that idea ? it’s for sure not the ‘general’ case.
(and would only work on binary images)

Here is what I want to do.

Take into account that cv2.dilate and cv2.erode can work with grayscale images and can be considered max and min filters (I found that out in numerous posts on StackOverflow).
Let originalImage be the original image.
Let low be the eroded original image.
Let upp be the dilated original image.
Then I want to compute (originalImage - low) / (upp - low).

All those operations are pixel-by-pixel operations. They can be done with numpy, as you mentioned. I just wanted to see if there was a native cv2 division function (like, e.g., cv2.subtract for subtraction).

If you compare those operations with what cv2.normalize does, you’ll readily see that cv2.normalize is of no use here.

perhaps I don’t understand what you’re asking because clearly there exists this:

https://docs.opencv.org/4.x/d2/de8/group__core__array.html#ga6db555d30115642fedae0cda05604874

and you surely must have found that already, right?

and beyond that, OpenCV supports “matrix expressions” in C++, same as numpy does in Python.
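
for illustration, your (originalImage - low) / (upp - low) could look like this with the native calls (a sketch; file name and kernel size are made up):

import cv2 as cv
import numpy as np

img = cv.imread("piv_frame.png", cv.IMREAD_GRAYSCALE)  # hypothetical input
kernel = np.ones((7, 7), np.uint8)                     # arbitrary window size
low = cv.erode(img, kernel)                            # local min
upp = cv.dilate(img, kernel)                           # local max
num = cv.subtract(img, low)  # saturating subtraction, no uint8 wrap-around
den = cv.subtract(upp, low)
# cv.divide yields 0 where the denominator is 0; scale stretches to 8-bit range
result = cv.divide(num, den, scale=255, dtype=cv.CV_8U)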

Thank you @crackwitz , that’s exactly what I was looking for!

still curious, what is your use-case for this ?
what kind of data are you processing ?

(as i haven’t encountered this in real life, ever !)

I can give an example.

say someone has a sheet of paper with text on it, it’s unevenly lit, and they take a photo of it. the intensity of a “sheet” pixel depends on its illumination, as a scale factor. well lit area: pixels are 200 (blank) or 20 (toner), badly lit area: pixels are 50 or 5.

now you could apply a sufficiently large median filter to estimate the illumination, assuming it’s mostly text (ink/toner covers less than 50%)

then you divide, to get “normalized” intensity, which leaves you with pixels being 100% and 10% in both cases, resulting in a good picture, except for the inevitable noise.

subtracting instead (and adding white, but let’s ignore that) would distort the picture, leaving dark parts with less contrast. you’d get [200, 20]-200 = [0, -180] vs [50, 5]-50 = [0, -45]
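
the same arithmetic as a toy numpy sketch (the illumination estimates are made up):

import numpy as np

sheet = np.array([[200.0, 20.0],    # well lit area: [blank, toner]
                  [ 50.0,  5.0]])   # badly lit area: [blank, toner]
illum = np.array([[200.0], [50.0]]) # per-area illumination estimate (e.g. a median)

print(sheet / illum)  # -> [[1.0, 0.1], [1.0, 0.1]]  same contrast in both areas
print(sheet - illum)  # -> [[0., -180.], [0., -45.]] dark area loses contrast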

I’m processing PIV data.

PIV stands for particle image velocimetry, the main tool of experimental fluid mechanics, created in 1984. This method gives you the velocity field of a flow, i.e. the velocity at each point in the flow (it turns out velocities at different points are, in general, different). All our equations describing fluid motion are written in terms of velocity fields. Once you’ve measured the velocity fields, you plug them into the equations and you are good to go uncovering new physics of flows. The way to measure a velocity field is to seed the flow with tiny particles, illuminate them with a thin laser sheet, and take two photos with a high-speed camera. Knowing the time between the photos, and provided you can identify the distance each particle has moved between them, you divide the distance by the time and get the velocity of the particle, which you assume to be equal to the velocity of the flow at the point where the particle is located.

Identifying individual particles on the photos reliably enough to get acceptable measurement accuracy can be difficult (due to uneven laser illumination, reflections and so on). Therefore, before you go ahead and measure velocity fields, you have to preprocess (precondition) the images. The preprocessing scheme can differ. For instance, for some images it is enough to just subtract the background. For others, you have to low-pass-filter the image, subtract the result from the original, then high-pass-filter, then binarize, and only then can you estimate the velocity field.

One of the preprocessing steps suggested by the guru (and creator) of PIV is image enhancement, which is pretty much enhancing the contrast of the image. The contrast (as suggested by the guru) can be enhanced either by means of histogram equalization or by means of a min-max filter. Both methods aim to increase the contrast of the image to make the seeding particles more distinct relative to the background.

The min-max filtering algorithm is given in the guru’s book (Adrian & Westerweel, “Particle image velocimetry”, 2011, Cambridge University Press; see p. 250 for the algorithm). The algorithm is presented in a MATLAB version in the book. I’m trying to convert it to Python for my needs.

As an example, I’m attaching one of my PIV images (not the best one)

thanks a ton for that detailed & helpful explanation !

so, we’re talking about (re)implementing this ?

import cv2 as cv
import numpy as np

def methlab_minmax(image, ksize, mincontrast):
    # work in float32 so the subtractions below can't wrap around in uint8
    image = image.astype(np.float32)
    N = ksize ** 2
    kernel = np.ones((ksize, ksize), np.uint8)
    upp = cv.dilate(image, kernel)  # local max
    low = cv.erode(image, kernel)   # local min
    # lowpass both envelopes; cv.blur is the normalized box filter,
    # equivalent to the commented filter2D calls but faster
    #upp = cv.filter2D(upp, ddepth=cv.CV_32F, kernel=kernel) / N
    #low = cv.filter2D(low, ddepth=cv.CV_32F, kernel=kernel) / N
    upp = cv.blur(upp, ksize=(ksize, ksize))
    low = cv.blur(low, ksize=(ksize, ksize))
    # clamp the denominator so flat regions don't divide by ~zero
    contrast = np.maximum(upp - low, mincontrast)
    return (image - low) / contrast

(edit: forgot the lowpass step, it’s amended)
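
hypothetical usage (file name and parameter values are made up):

img = cv.imread("piv_frame.png", cv.IMREAD_GRAYSCALE)
enhanced = methlab_minmax(img, ksize=7, mincontrast=10)  # float image, roughly in [0, 1]
cv.imwrite("enhanced.png", np.clip(enhanced * 255, 0, 255).astype(np.uint8))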

why they would go with 64 instead of anything else is a mystery to me. or why they would not calculate N = FiltSize^2

the erosion/dilation can be done quicker but OpenCV only supports arbitrary kernels, not specifically square uniform ones. maybe there’s a fast path in the code… I hope.

@crackwitz , the multiplication by 64 is MATLAB-specific; you don’t need to do it in Python. I looked for why to multiply by 64 and found a couple of answers on StackOverflow, if I’m not mistaken. I don’t remember the explanation, but you have to do it in MATLAB.

@berak yep. Here’s my ResearchGate question and here’s my gist.

the 64 figure is not matlab-specific. the author just felt like it. that’s all there is to it. ok the book mentions some histogram equalization but those numbers are just pulled out of someone’s you-know-what.

did you know about http://www.openpiv.net/ ?