I am working with Emgu.CV (4.5.3.4725) DnnInvoke.ReadNetFromDarknet, based on OpenCV 4.5.3. I create several Net instances from DnnInvoke.ReadNetFromDarknet that use Nvidia CUDA GPUs. My concern is that each instance uses at least 1 GB of operating system memory but only 0.4 GB of dedicated GPU memory. I would like to know if there is a way to use only GPU memory, with 0 GB of operating system memory. I think that would also improve the response time.
Sorry, maybe I explained myself wrong. I know that Emgu.CV is not part of OpenCV, but it is a cross-platform .NET wrapper for the OpenCV image processing library. I asked them about this topic and they basically directed me to the OpenCV group.
Can you answer the exact question with all the Emgu.CV references removed? Is it possible to create Net instances from cv::dnn::readNetFromDarknet, using OpenCV (not Emgu) compiled for CUDA, that use only GPU memory instead of OS memory? If that is the case, could you tell me how?
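For reference, this is roughly how each instance is created (a minimal sketch; the model file names are placeholders, not my actual files). It loads a Darknet net and selects the CUDA backend and target, but as far as I can tell there is no option in this API to keep the allocations purely in GPU memory:

```cpp
#include <opencv2/dnn.hpp>
#include <string>

int main()
{
    // Placeholder paths; the real config/weights files differ.
    const std::string cfg     = "yolov4.cfg";
    const std::string weights = "yolov4.weights";

    // Each Net instance is created like this.
    cv::dnn::Net net = cv::dnn::readNetFromDarknet(cfg, weights);

    // Ask the dnn module to run inference on the GPU via the CUDA backend.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);

    // Inputs are still prepared as host-side cv::Mat blobs before forward();
    // I don't see a setting here that would avoid the ~1 GB of host memory
    // I observe per instance.
    return 0;
}
```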
host memory is cheaper than device memory, so nobody will think much of keeping a host-side allocation.
it might be necessary to keep copies of some of that data on the host side.
it might be general behavior of the dnn module or just the cuda backend you seem to be using… so if it’s possible to save memory, it might be generally beneficial.
the source code for that module or even just the one backend is probably not trivial. consider if you’d be willing to spend time investigating this… or if it’s not worth the trouble.
if you’re lucky, you might get a response from someone with actual experience with the code of the cuda backend and dnn module.