When I use cv::dnn::blobFromImage() as pre-processing, then run inference, then do post-processing, the pipeline is serial. I feed in one image, measure the running time of cv::dnn::blobFromImage(), repeat this for 1000 rounds, and keep the data from the last 700 rounds. To make the problem easier to see, I call cv::dnn::blobFromImage() 5 times per round, but this does not change the behavior.
The same code was run on both Linux and Windows. The OpenCV version and CPU model for each platform are listed below; the Linux OS is Ubuntu 18.04. Over the last 700 rounds, the running time of cv::dnn::blobFromImage() on Windows ranges from a minimum of 20.40 ms to a maximum of 69.56 ms, while on Linux it ranges from 7.80 ms to 8.61 ms. Judging by the CPU models, the Windows machine has the more powerful CPU, yet cv::dnn::blobFromImage() runs slower and much less consistently there than on Linux. So my questions are: why is the running time on Windows so unstable, why is there such a big difference in the speed of cv::dnn::blobFromImage() between the two platforms, and how can I solve this problem? Thank you~
It should be noted that the speed is relatively stable when cv::dnn::blobFromImage() is run alone on Windows; the problem above only occurs when it is combined with the later stages, i.e. inference with ONNX Runtime and post-processing.
OpenCV version: 4.5.5 (built without CUDA)
Windows CPU: Intel® Core™ i7-10700K CPU @ 3.80GHz
Linux CPU: Intel® Core™ i7-9700K CPU @ 3.60GHz