LANCZOS4 downsampling

LANCZOS4 interpolation gives the expected results when upsampling, but downsampling looks as bad as nearest neighbor. I compared it with PIL’s LANCZOS: the upsampled images look identical, but there’s a huge difference between the downsampled ones.

I use cv2 with INTER_AREA as a workaround, but it would be great if I could use only LANCZOS4 for everything.


Is this expected or should I report a bug?

code and source data to replicate, please

downsampling Lena looks okay

import cv2 as cv

im = cv.imread(cv.samples.findFile("lena.jpg"))
scale = 0.5
out = cv.resize(im, None, fx=scale, fy=scale, interpolation=cv.INTER_LANCZOS4)
cv.imshow("out", out); cv.waitKey(); cv.destroyAllWindows()

[images: lena at scale 0.5 and at scale 0.25]

found your clock, it’s 2812 x 3692 pixels:

the picture is huge. OpenCV’s lanczos does nothing special, it just applies the sampling… PIL apparently downscales before applying lanczos.

you see, resampling has a certain “neighborhood” of source pixels.

  • NN: 1x1 pixels
  • linear: 2x2 pixels
  • cubic: 4x4 pixels
  • lanczos: 8x8 pixels

and if the downscale factor exceeds that neighborhood, source pixels between the sample points are simply ignored

[image: clock scaled to 1/16]

you can use pyrDown (repeatedly) until the data is within the range of lanczos

[images: pyrDown thrice then resize to 1/2; pyrDown twice then resize to 1/4]

I’m surprised it still does kinda badly even when the scale for LANCZOS is 1/4… but CUBIC at scale 1/4 looks the same, and only INTER_AREA makes it look okay

so… best to pyrDown until you’re ~one octave away.


thanks for the explanation. it’s best for me to continue using INTER_AREA for downsampling and LANCZOS4 for upsampling.

I believe applying lanczos or cubic merely gives you a sample at the given position… but it doesn’t take care of the aliasing that happens when you downsample with high-frequency content still present. to prevent that, downsampling always needs a lowpass step. I guess PIL does that implicitly, while OpenCV doesn’t. pyrDown contains a lowpass step before the decimation, but it’s fixed to a scale factor of 0.5.

I’m not into the math. you could apply a gaussian filter whose \sigma grows with the amount of downscaling… pyrDown (scale 0.5, \sigma \approx 1) suggests something like \sigma = 0.5 / \text{scale}.

pyrDown (i.e. scale 0.5) appears to use something that isn’t quite a gaussian, but approaches a gaussian with \sigma = 1: OpenCV: Image Filtering

here’s a gaussian for \sigma = 1 for comparison:

import numpy as np
import cv2 as cv

im = np.zeros((5,5))
im[2,2] = 1
out = cv.GaussianBlur(im, ksize=(5,5), sigmaX=1.0) * 256
# array([[ 3.04027,  6.81278, 11.23237,  6.81278,  3.04027],
#        [ 6.81278, 15.26638, 25.17   , 15.26638,  6.81278],
#        [11.23237, 25.17   , 41.49832, 25.17   , 11.23237],
#        [ 6.81278, 15.26638, 25.17   , 15.26638,  6.81278],
#        [ 3.04027,  6.81278, 11.23237,  6.81278,  3.04027]])

I investigated a bit and realized that although PIL’s resize method has some reduce (pyrDown-like) calls, they’re not invoked in my case.

The Python code calls the resample function in the C library with the full-sized image. The resample function doesn’t reduce the image either; instead it increases the filter size.

If I upsample the image, the filter (kernel) size is always (7, 7). If I downsample, however, the kernel size increases as the output size decreases.

For instance, a (61, 61) filter is used for 10× downsampling and a (121, 121) filter for 20×.
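Those numbers fit Pillow’s sizing rule: its LANCZOS filter has support 3.0, and the resample code stretches that support by the shrink factor before building the kernel. A sketch of the computation (my reading of Pillow’s Resample.c, with factor = in_size / out_size):

```python
import math

def pil_lanczos_ksize(factor):
    # the filter support is only stretched when shrinking (factor > 1)
    filterscale = max(factor, 1.0)
    support = 3.0 * filterscale        # Pillow's LANCZOS support is 3.0
    return int(math.ceil(support)) * 2 + 1

print(pil_lanczos_ksize(0.5))   # 7   (any upscale)
print(pil_lanczos_ksize(10))    # 61
print(pil_lanczos_ksize(20))    # 121
```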

Finally, if I make the kernel size a constant (7, 7), I get exactly the same image as the OpenCV version. Here it is:


So my takeaway is that OpenCV uses a constant kernel size, while PIL scales the kernel up when downsampling.

I’d verify it against the OpenCV code, but I really dislike C++.

Thanks again.

Edit: The code is actually pretty clear, my bad.
