Time inference difference between C++ and Python

Using the same model, isn’t C++ inference with OpenCV DNN supposed to be faster than OpenCV DNN in Python?
I ran some tests with blobFromImage and blobFromImages, and the best results I got were roughly equal, or even slightly faster in Python. I know this could turn into an abstract discussion about the implementation in both languages, but am I somehow testing the wrong way? Or am I expecting a result that doesn’t match the reality of both languages?
Code examples:

blobzin = cv2.dnn.blobFromImages(images, scalefactor=1.0/255, size=(104,104), mean=0, swapRB=False)
modelONNXPeB.setInput(blobzin)  # bind the blob as network input before timing

start = time.perf_counter()
output = modelONNXPeB.forward()
end = time.perf_counter()

Mean result: ~1.4 ms

cv::Mat blobzin = cv::dnn::blobFromImages(img2Vec, 1.0 / 255, cv::Size(104, 104), 0.0, false, false, CV_32F);
pModelONNXPeB.setInput(blobzin);  // bind the blob as network input before timing

auto start_time = std::chrono::high_resolution_clock::now();
auto outputPeB = pModelONNXPeB.forward();
auto end_time = std::chrono::high_resolution_clock::now();

Mean result: ~1.6 ms
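One common pitfall with single-call timings: the first forward() typically pays one-time costs (graph setup, memory allocation), so a lone measurement mostly captures initialization rather than steady-state inference. A hedged sketch of a warm-up-then-median harness (`benchmark` and the lambda below are illustrative names, not from the original code):

```python
import statistics
import time

def benchmark(fn, warmup=5, runs=50):
    """Time fn() after warm-up calls; warm-up excludes one-time
    initialization (e.g. the first DNN forward pass) from the samples."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Usage with the snippet above (hypothetical):
# median_s = benchmark(lambda: modelONNXPeB.forward())
```

The median is usually more stable than the mean here, since occasional scheduler hiccups skew the mean upward.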
OpenCV version: 4.8-dev
Note: I can upload the model, if it’s necessary.

no, it isn’t.

the language you use doesn’t matter.

you just call OpenCV functions from either language. how you call them is irrelevant; how they’re implemented, that matters.

opencv’s DNN module uses one of several backends; they’re all optimized for their respective device types (CPU, GPU).
