Using the same model, isn’t C++ inference with OpenCV DNN supposed to be faster than OpenCV DNN in Python?
I ran some tests with blobFromImage and blobFromImages, and at best the results were about equal, or Python was a little bit faster. I know this could turn into an abstract discussion about the implementation in both languages, but am I somehow testing the wrong way? Or am I expecting a result that doesn’t match the reality of the two languages?
blobzin = cv2.dnn.blobFromImages(images, scalefactor=1.0/255, size=(104, 104), mean=0, swapRB=False)
modelONNXPeB.setInput(blobzin)
start = time.perf_counter()
output = modelONNXPeB.forward()
end = time.perf_counter()
Mean result (Python): ~1.4 ms
cv::Mat blobzin = cv::dnn::blobFromImages(img2Vec, 1.0 / 255, cv::Size(104, 104), 0.0, false, false, CV_32F);
pModelONNXPeB.setInput(blobzin);
auto start_time = std::chrono::high_resolution_clock::now();
auto outputPeB = pModelONNXPeB.forward();
auto end_time = std::chrono::high_resolution_clock::now();
Mean result (C++): ~1.6 s
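In case the measurement itself is part of the question, this is roughly how I would structure the timing to rule out one-time costs: the first forward() typically includes layer/backend initialization, so it should be excluded and the remaining runs averaged. A minimal, generic sketch (pure Python, no OpenCV dependency; the usage line referring to modelONNXPeB is just illustrative):

```python
import time

def bench_ms(fn, warmup=3, runs=50):
    """Return the mean wall-clock time of fn() in milliseconds.

    A few warm-up calls are discarded first, since the first forward()
    of a DNN usually pays one-time initialization costs that should not
    be mixed into the steady-state mean.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1000.0

# Illustrative usage with the Python snippet above:
#   mean_ms = bench_ms(modelONNXPeB.forward)
```

The same warm-up-then-average structure applies to the C++ version with std::chrono.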
OpenCV version: 4.8-dev
Note: I can upload the model if necessary.