Let’s say we have a 3840x2160 image, but it’s a low-quality image with lots of artifacts. If we downsampled it to, e.g., 960x540, it would look nearly identical, with no perceptible loss in quality.
On the other hand, if we had a very highly detailed image, downsampling it to a quarter of its size would lose a lot of detail, so we’d want to leave that image at its original size.
Is there a method to detect the “optimal” size of an image based on its quality?
I’ve tried gradually decreasing the image resolution and comparing the result to the original with SSIM until the score drops below a threshold. This works some of the time, but it’s not a generalised solution.
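Roughly, the loop I’ve been using looks like the sketch below (using OpenCV and scikit-image; the scale step, the 0.95 threshold, and the filename are just placeholder values for illustration):

```python
import cv2
from skimage.metrics import structural_similarity as ssim

def smallest_acceptable_size(img, threshold=0.95, step=0.05):
    """Shrink the image in steps and return the smallest size whose
    upscaled-back version still scores above `threshold` SSIM."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    best = (w, h)
    scale = 1.0 - step
    while scale > 0.1:
        new_w, new_h = int(w * scale), int(h * scale)
        small = cv2.resize(gray, (new_w, new_h), interpolation=cv2.INTER_AREA)
        # Scale back up so both images have the same shape for SSIM
        restored = cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)
        if ssim(gray, restored, data_range=255) < threshold:
            break
        best = (new_w, new_h)
        scale -= step
    return best

img = cv2.imread("input.png")  # placeholder filename
print(smallest_acceptable_size(img))
```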
You could do that with a Fourier transform. Plot the spectrum; it’s 2D, so you’ll get a picture. I’d recommend mapping the values logarithmically and “windowing” the displayed range (10^-6 to 10^0 or something, who knows, it depends on a lot of things).
I did that once with frames from some upscaled video (1920x1080); one could roughly tell that the source material was DVD quality. You won’t see this too clearly, but you can see it. You also see some effect in the vertical direction, which is due to interlacing, or rather bad _de_interlacing: it kinda looks like there’s a stronger half-, third-, or quarter-height resolution block (or worse) sitting on top of the full-height resolution.
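As a rough sketch of what I mean (numpy + matplotlib; the filename and the 10^-6 to 10^0 display window are just example values to tune):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from PIL import Image

# Load as grayscale; "frame.png" is a placeholder
img = np.asarray(Image.open("frame.png").convert("L"), dtype=float)

# 2D FFT, shift the zero frequency to the center, take the magnitude
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
spectrum /= spectrum.max()  # normalize so the peak is 1

# Display on a log scale, clipping ("windowing") the range to 1e-6 .. 1
plt.imshow(spectrum, norm=LogNorm(vmin=1e-6, vmax=1.0), cmap="gray")
plt.title("Log-magnitude spectrum")
plt.colorbar()
plt.show()
```

A genuinely sharp image tends to have energy spread out toward the edges of the spectrum; upscaled material tends to show a bright low-frequency block with very little beyond it.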
Thanks for the advice! I can’t seem to find any implementations for estimating image bandwidth. Are there any resources or research papers you could share on this?
I’ll be implementing the method in Python, so if there’s any code you could share or point me to, that would be awesome.
I don’t know of anything that explains precisely what you need. Just learn about the Fourier transform, spectra, etc., and play around. Start with the first step, which is loading the image and applying a 2D Fourier transform; then display the spectrum.
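Once you can see the spectrum and want to reduce it to a single number, one possible (purely illustrative) heuristic is to radially accumulate the spectral energy and find the radius that contains most of it; the 99% threshold below is arbitrary and would need tuning:

```python
import numpy as np
from PIL import Image

def effective_bandwidth(path, energy_fraction=0.99):
    """Rough estimate of the radius (0..~1, where 1 is the edge of the
    spectrum) containing `energy_fraction` of the spectral energy.
    Both the single-cutoff idea and the 0.99 value are heuristics."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    img -= img.mean()  # drop the DC component so it doesn't dominate

    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    h, w = power.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    # Radius of each frequency bin, normalized so the image edge is ~1.0
    r = np.sqrt(((y - cy) / (h / 2)) ** 2 + ((x - cx) / (w / 2)) ** 2)

    # Cumulative energy as a function of increasing radius
    order = np.argsort(r, axis=None)
    cumulative = np.cumsum(power.ravel()[order])
    cumulative /= cumulative[-1]

    cutoff = np.searchsorted(cumulative, energy_fraction)
    return r.ravel()[order][cutoff]

print(effective_bandwidth("input.png"))  # placeholder filename
```

If the returned radius is well below 1, most of the energy sits at low frequencies, which hints the image could be downsampled without losing much; how to map that to an actual resize factor is something you’d have to calibrate yourself.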