Median Blur is adding new colors to the image. Any way to avoid this?

Median Blur: OpenCV: Image Filtering

I have an image with exactly 12 colors. When I run median blur, it ends up introducing new colors because it works across each channel. If I collapse to grayscale, I risk losing colors that end up merging. Any suggestions, or another library I could use?
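
For reference, here is a minimal reproduction of what I mean (the striped 6x9 test image is just an illustration):

```python
import cv2
import numpy as np

# vertical stripes of three pure colors (BGR), repeated across a 6x9 image
img = np.zeros((6, 9, 3), dtype=np.uint8)
img[:, 0::3] = (255, 0, 0)   # blue columns
img[:, 1::3] = (0, 255, 0)   # green columns
img[:, 2::3] = (0, 0, 255)   # red columns

blurred = cv2.medianBlur(img, 3)
print(blurred[3, 4])
# [0 0 0]: every interior 3x3 neighborhood holds 3 blue, 3 green and 3 red
# pixels, so the median in each channel is 0 and the output pixel is black,
# a color that never appears in the input
```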

[Warning: here I mistakenly talk about blur in general, while hbm's question was about median blur]

@hbm

If you use a lens on an image with exactly 12 colors, the out-of-focus image will have infinitely many colors. Blur does roughly the same thing.

If you need to limit the number of colors in your output, you can map each pixel color to the nearest entry in a palette, or apply a clever function to round the colors to allowed values.

If this answer doesn’t help you, please narrow your question: add some examples, and explain what result you expect and what you need it for.

yes, the median isn’t calculated on color tuples but on each color plane individually. to achieve this “atomicity” of colors, the “trick” is to use a palette.

in more detail:

  1. kmeans or other clustering, to calculate a palette of the picture
  2. turn picture from BGR to palette values (nearest neighbor lookup in the palette, for every pixel)
  3. median filter the palette image
  4. apply the palette to get a BGR picture back (see the sketch below)
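
a rough sketch of those steps in Python (the K value, kernel size and file names are just placeholders, not something from your setup):

```python
import cv2
import numpy as np

img = cv2.imread("input.png")                  # BGR, uint8
pixels = img.reshape(-1, 3).astype(np.float32)

# 1. kmeans to build a palette (K = number of colors you expect)
K = 12
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
palette = centers.round().astype(np.uint8)     # K x 3 table of BGR colors

# 2. picture as palette indices (kmeans already gives the nearest-center label)
index_img = labels.reshape(img.shape[:2]).astype(np.uint8)

# 3. median filter the single-channel index image; the median of the indices
#    in a neighborhood is always one of the indices present, so no new colors
index_img = cv2.medianBlur(index_img, 5)

# 4. look the indices up in the palette again to get a BGR picture
result = palette[index_img]
cv2.imwrite("output.png", result)
```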

Thanks Alejandro, I’m familiar with how blur works. The concept of a MEDIAN blur/filter is that it ideally maintains the pixel values already in the photo. Directly quoting the OpenCV documentation:

  • Median Filter

The median filter run through each element of the signal (in this case the image) and replace each pixel with the median of its neighboring pixels (located in a square neighborhood around the evaluated pixel).

That said, OpenCV insists on treating each channel separately, thus leading to my issue.

Each channel of a multi-channel image is processed independently.

Thanks crack, that’s exactly the approach I have in mind.

I already have the list of colors/palette in the photo, and I don’t even need to do a nearest-neighbor lookup. I can just map the colors to index values from a list, since I know there are exactly 12 colors, i.e. I can map (50,50,50) to 1, (25,65,80) to 2, etc.

My current issue is implementing this in numpy and mapping each pixel in a performant manner. Going pixel by pixel with a switch statement doesn’t seem very efficient.

Nonetheless, thanks for all the help!

that is precisely what I meant by nearest-neighbor lookup. that means finding the palette entry that is the best match for whatever pixel.

that isn’t done with a “switch” statement.

you can do that with numpy… it’s just a bunch of reshaping, broadcasting, argmin, … give it a try. I haven’t done this in a long time.
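
a sketch of that lookup (the palette array here is a stand-in for your 12 known BGR colors, and the function names are made up):

```python
import numpy as np

# stand-in: put your 12 known BGR colors here
palette = np.array([(50, 50, 50), (25, 65, 80)], dtype=np.int32)

def to_indices(img, palette):
    # (H*W, 1, 3) minus (K, 3) broadcasts to (H*W, K, 3)
    diff = img.reshape(-1, 1, 3).astype(np.int32) - palette
    # squared distance to every palette entry, closest entry wins
    dist = (diff * diff).sum(axis=2)
    return dist.argmin(axis=1).reshape(img.shape[:2]).astype(np.uint8)

def to_bgr(indices, palette):
    # inverse step: look every index up in the palette again
    return palette[indices].astype(np.uint8)
```

pixels that exactly match a palette entry end up at distance 0, so your case of exactly 12 known colors falls out of this automatically.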

@hbm

You are right, I confused median with average in my answer. OpenCV applies the median to each channel separately, which does not preserve the original colors, and that was exactly your original point.