DnnInvoke.ReadNetFromDarknet GPU Memory Only

Hi,

I am working with Emgu.CV (4.5.3.4725) DnnInvoke.ReadNetFromDarknet, based on OpenCV 4.5.3. I create several Net instances from DnnInvoke.ReadNetFromDarknet that use NVIDIA CUDA GPUs. My concern is that each instance uses at least 1 GB of operating-system (host) memory but only 0.4 GB of dedicated GPU memory. I would like to know if there is a way to use only GPU memory, with 0 GB of operating-system memory. I think that would also improve the response time.

Best,

just to “dampen your expectations”:

emgu is not maintained by opencv, so whatever API problems you’re facing
– we probably cannot help!

Sorry,
Maybe I explained myself badly. I know that Emgu.CV is not part of OpenCV, but it is a cross-platform .NET wrapper around the OpenCV image processing library. I asked them about this topic and they basically directed me to the OpenCV group.

Can you answer the same question with all the Emgu.CV references removed? Is it possible to create Net instances from cv::dnn::readNetFromDarknet, using OpenCV (not Emgu) compiled for CUDA, that use only GPU memory instead of OS memory? If that is the case, could you tell me how?

don’t expect anything.

  1. host memory is cheaper than device memory, so nobody will think much of keeping a host-side allocation.

  2. it might be necessary to keep copies of whatever data on the host side.

it might be general behavior of the dnn module or just the cuda backend you seem to be using… so if it’s possible to save memory, it might be generally beneficial.

the source code for that module or even just the one backend is probably not trivial. consider if you’d be willing to spend time investigating this… or if it’s not worth the trouble.

if you’re lucky, you might get a response from someone with actual experience with the code of the cuda backend and dnn module.

Thanks @crackwitz for your answer!

Could you help me reach someone from OpenCV with the required readNetFromDarknet coding experience who can answer these questions? Also, I suppose that using only dedicated GPU memory should be possible from OpenCV, because I have had that experience with GitHub - AlexeyAB/darknet: YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet). I don’t remember it using so much operating-system memory, but I do remember it using more than 2 GB of dedicated GPU memory.

Best,