Hi,
I was trying to upsample some images with cv2.dnn_superres (DnnSuperResImpl_create() and its upsample() method).
I used the EDSR_x4 model on a 1024x768 image, but the process crashed with this error:
(-217:Gpu API call) out of memory in function 'ManagedPtr'
I’m using OpenCV built with CUDA support and an Nvidia RTX 3080 GPU with 10GB of memory. OpenCV is properly using the GPU for DNN inference, but I’m wondering whether the EDSR_x4 model really needs so much memory that 10GB isn’t enough to upsample a 1024x768 image.
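For reference, a CUDA-enabled build should expose the cv2.cuda module, which can report free device memory right before the call; a minimal sketch, assuming cv2.cuda.DeviceInfo is wrapped in your build:

import cv2
# Report free GPU memory (requires an OpenCV build with the CUDA modules)
if cv2.cuda.getCudaEnabledDeviceCount() > 0:
    info = cv2.cuda.DeviceInfo()  # defaults to the current device
    print('GPU memory: %.2f GB free of %.2f GB'
          % (info.freeMemory() / 1e9, info.totalMemory() / 1e9))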
Are there ways to automatically upsample an image in chunks with OpenCV?
I found this code that upsamples in chunks:
import cv2
import numpy as np
# Load the input image and make sure it was actually read
input_image = cv2.imread('input.jpg')
assert input_image is not None, 'failed to load input.jpg'
# Size of the input chunks as (width, height)
chunk_size = (200, 200)
# Upscaling factor of the EDSR_x4 model
scale = 4
# Create a DNN super-resolution object
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel('models/EDSR_x4.pb')
sr.setModel('edsr', scale)
sr.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
sr.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
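# If the GPU still runs out of memory, the CPU path
# (cv2.dnn.DNN_BACKEND_OPENCV with cv2.dnn.DNN_TARGET_CPU) should still
# work, just much more slowly.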
# The output size is the input size times the scale factor
new_size = (input_image.shape[1] * scale, input_image.shape[0] * scale)
# Create an empty output image with the same number of channels as the input image
output_image = np.zeros((new_size[1], new_size[0], input_image.shape[2]), dtype=np.uint8)
# Loop over the image in chunks
for y in range(0, input_image.shape[0], chunk_size[1]):
    for x in range(0, input_image.shape[1], chunk_size[0]):
        # Extract the chunk from the input image (border chunks may be smaller)
        chunk = input_image[y:y+chunk_size[1], x:x+chunk_size[0], :]
        # Upsample the chunk
        upsampled_chunk = sr.upsample(chunk)
        # Insert the upscaled chunk, using its actual size so that
        # smaller border chunks land in the right place
        output_image[y*scale:y*scale + upsampled_chunk.shape[0],
                     x*scale:x*scale + upsampled_chunk.shape[1], :] = upsampled_chunk
cv2.namedWindow('Upscaled image', cv2.WINDOW_NORMAL)  # the 4x output is large, allow resizing
cv2.imshow('Upscaled image', output_image)
cv2.imwrite('upscaled.jpg', output_image)
cv2.waitKey()
cv2.destroyAllWindows()
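One caveat with this naive split: the model sees no context across chunk borders, so visible seams can appear in the output. A variant that pads each chunk with a small overlap and keeps only the interior of the upsampled result should hide that. A rough sketch (the tile and overlap values are illustrative guesses, not tuned):

import cv2
import numpy as np

def upsample_tiled(sr, image, scale=4, tile=200, overlap=16):
    # Upsample in overlapping tiles: each tile is padded by `overlap`
    # input pixels on every side, and only its non-overlapping core is
    # copied into the output, so chunk borders get real image context.
    h, w = image.shape[:2]
    out = np.zeros((h * scale, w * scale, image.shape[2]), dtype=np.uint8)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # Padded tile bounds, clamped to the image
            y0, y1 = max(y - overlap, 0), min(y + tile + overlap, h)
            x0, x1 = max(x - overlap, 0), min(x + tile + overlap, w)
            up = sr.upsample(image[y0:y1, x0:x1])
            # Crop the upsampled tile back to its core region
            cy, cx = (y - y0) * scale, (x - x0) * scale
            ch = (min(y + tile, h) - y) * scale
            cw = (min(x + tile, w) - x) * scale
            out[y*scale:y*scale+ch, x*scale:x*scale+cw] = up[cy:cy+ch, cx:cx+cw]
    return out

# e.g. result = upsample_tiled(sr, input_image), reusing the sr object from above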