Should DNN backend and target default to CPU if no GPU found?

I’ve been using the following configuration:

net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)

Initially, I did not have OpenCV built properly, so it could not use the GPU and fell back to running on the CPU instead. This was on an AWS instance with an NVIDIA V100 attached.

I have now tried to run the same Python script in another environment, inside a container that has no access to any GPU.
It now fails with the following error:

cv2.error: OpenCV(4.5.1) /opencv_compile/opencv-4.5.1/modules/dnn/src/cuda4dnn/csl/memory.hpp:54: error: (-217:Gpu API call) no CUDA-capable device is detected in function ‘ManagedPtr’

So, I’ve had to force it to use the following:

net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

Can anyone help me understand why it previously just fell back to using the CPU, but now fails with this error instead?
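For what it's worth, I assume the only way to keep a single script working in both environments is to request CUDA, attempt a warm-up forward pass, and fall back to the CPU path if it throws; a rough sketch, where the model file and input shape are just placeholders:

import numpy as np
import cv2

net = cv2.dnn.readNet("model.onnx")  # placeholder model file

# requesting CUDA does not fail here, even when no GPU is present...
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)

# ...the allocation error only surfaces on the first forward pass,
# so run a dummy inference and drop back to the plain CPU path if it throws
net.setInput(np.zeros((1, 3, 224, 224), dtype=np.float32))  # adjust to the model's input shape
try:
    net.forward()
except cv2.error:
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)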

so in an environment with no GPU you cannot use net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)

@berak OK, as I understand it, the first time I used it a GPU was attached; OpenCV just wasn't built to use it.
Can you please explain how exactly the backend identifies whether a GPU is attached?

successfully using CUDA for the dnn depends on a few things:

  • the cv2.so must have been compiled on a CUDA-capable machine, with a CUDA SDK installed, and with CMake flags like WITH_CUDA=ON, WITH_CUDNN=ON, WITH_CUBLAS=ON, OPENCV_DNN_CUDA=ON

  • the current machine must actually have CUDA-capable hardware (and a working NVIDIA driver)
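if either of those is missing, the CUDA backend does not silently fall back, it just throws at the first forward(), as you saw. you can check both conditions from python before choosing the backend; a minimal sketch, assuming the standard cv2.cuda.getCudaEnabledDeviceCount() helper is available in your build:

import cv2

# shows what the build was compiled with (look for the NVIDIA CUDA section)
print(cv2.getBuildInformation())

# returns 0 when no CUDA-capable device is visible (or when the build has no CUDA support)
if cv2.cuda.getCudaEnabledDeviceCount() > 0:
    backend, target = cv2.dnn.DNN_BACKEND_CUDA, cv2.dnn.DNN_TARGET_CUDA_FP16
else:
    backend, target = cv2.dnn.DNN_BACKEND_OPENCV, cv2.dnn.DNN_TARGET_CPU

# then: net.setPreferableBackend(backend); net.setPreferableTarget(target)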