I’m currently building OpenCV from source and using the DNN module.
My end user would like us to deliver an application that requires only an inference script and a Conda environment.
Can someone explain the limitations of using opencv-python instead of building from source?
@crackwitz Thanks for the info.
As far as I understand, CUDA is required to accelerate the computation on the GPU.
Does that mean it will work but just not as fast for inference?
@crackwitz thanks for the info.
I’m installing opencv-python with pip, and my end user has an NVIDIA GPU (a V100).
I’ll try using OpenCL for the backend and see if it can use the GPU.
your end user should be willing to build OpenCV with CUDA on their system. there’s a ton of recipes out there, of various quality, but I’d stick with cmake-gui for all the configuration needs.
a dnn::Net can be told at runtime what backend and target to prefer. you say your_net.setPreferableBackend(cv.dnn.DNN_BACKEND_CUDA) if that’s available.
@crackwitz thanks again.
Ideally, yes, we would like them to build from source, and further down the line we will suggest that. The issue is that they already have a framework in place and are restricted in which software they can install.
For the purpose of testing our model we’ll have to use opencv-python initially. I’ve added an option to my script that first checks for a GPU, then checks whether it’s CUDA-enabled. If not, it should use OpenCL for the backend, and if no GPU is found at all it should fall back to the CPU.