Hi! How do I apply blobFromImages to a GpuMat?
There is no CUDA-specific function for this. Just set the target and backend:
dnn::Net net = dnn::readNet("hed_deploy.prototxt", "hed_pretrained_bsds.caffemodel");
net.setPreferableTarget(dnn::DNN_TARGET_CUDA);
net.setPreferableBackend(dnn::DNN_BACKEND_CUDA);
Mat inp = dnn::blobFromImage(frame);  // frame is your input image; preprocessing (resize/scale/mean) as your model needs
net.setInput(inp);
Mat out = net.forward();
Then you can run a benchmark. On my configuration:
Time cuda: 12.2785 ms
Time opencv: 360.703 ms
Time inference: 95.4307 ms
Time vino: 232.38 ms
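For reference, a forward pass can be timed roughly like this with cv::TickMeter (the warm-up and the iteration count are arbitrary, and net is the network configured above):

#include <iostream>
#include <opencv2/core/utility.hpp>  // cv::TickMeter

// Warm up once so lazy initialisation (CUDA kernels, graph setup) is not measured.
net.forward();

cv::TickMeter tm;
tm.start();
for (int i = 0; i < 100; ++i)
    net.forward();
tm.stop();
std::cout << "average forward time: " << tm.getTimeMilli() / 100 << " ms" << std::endl;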
My model is in a different format. blobFromImages is just a preprocessing function.
I don’t understand.
You do whatever you want with your data and call setInput().
But to run inference on the GPU, it is setPreferableBackend() and setPreferableTarget().
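For example, something along these lines does by hand what blobFromImage does (the 500x500 size and the mean values are placeholders for whatever your model expects, and net is the network from above):

#include <cstring>
#include <vector>
#include <opencv2/opencv.hpp>

// Manual preprocessing: resize, convert to float, mean subtraction, HWC -> NCHW.
cv::Mat frame = cv::imread("input.jpg");
cv::Mat resized, f32;
cv::resize(frame, resized, cv::Size(500, 500));
resized.convertTo(f32, CV_32FC3);
cv::subtract(f32, cv::Scalar(104.0, 117.0, 123.0), f32);

// Pack the interleaved image into a 1x3xHxW blob.
int sizes[] = {1, 3, f32.rows, f32.cols};
cv::Mat blob(4, sizes, CV_32F);
std::vector<cv::Mat> channels;
cv::split(f32, channels);
for (int c = 0; c < 3; ++c)
    std::memcpy(blob.ptr<float>(0, c), channels[c].ptr<float>(),
                channels[c].total() * sizeof(float));

net.setInput(blob);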
I meant that I don't use dnn for network parsing and inference. In my pipeline I need to preprocess a GPU frame with blobFromImages (or find some similar function) and then feed it to TensorRT inference.
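Roughly, this is what I am after, assuming the opencv_contrib CUDA modules (cudawarping, cudaarithm) are available; the input size and the mean values are placeholders for whatever the TensorRT engine expects:

#include <vector>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>   // cv::cuda::subtract, cv::cuda::split
#include <opencv2/cudawarping.hpp>  // cv::cuda::resize

// Build a contiguous NCHW (N=1) float blob on the device from a BGR GpuMat,
// i.e. what blobFromImage does on the CPU, but without leaving the GPU.
cv::cuda::GpuMat makeNchwBlob(const cv::cuda::GpuMat& bgrFrame, cv::Size inputSize)
{
    // Resize, convert to float and subtract the mean, all on the device.
    cv::cuda::GpuMat resized, f32;
    cv::cuda::resize(bgrFrame, resized, inputSize);
    resized.convertTo(f32, CV_32FC3);
    cv::cuda::subtract(f32, cv::Scalar(104.0, 117.0, 123.0), f32);

    // One continuous buffer holding the three channel planes back to back.
    cv::cuda::GpuMat blob;
    cv::cuda::createContinuous(3 * inputSize.height, inputSize.width, CV_32F, blob);

    // HWC -> CHW: split into channel planes and copy each into its slice of the blob.
    std::vector<cv::cuda::GpuMat> channels;
    cv::cuda::split(f32, channels);
    for (int c = 0; c < 3; ++c)
    {
        cv::cuda::GpuMat plane = blob.rowRange(c * inputSize.height, (c + 1) * inputSize.height);
        channels[c].copyTo(plane);
    }

    // blob.ptr<float>() is the contiguous device pointer I would bind as the
    // TensorRT input buffer.
    return blob;
}

Is there a ready-made OpenCV function that does this on the GPU, or is writing it out like this the expected way?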
TensorRT inference? That's not an OpenCV function.
Is your question about OpenCV? → Google is your friend.