Speed improvements from using OpenVINO

Hello,

I have some questions regarding OpenCV’s support for OpenVINO. First of all, as far as I understand, it requires an OpenVINO installation at the time OpenCV is compiled, and OpenCV’s build configuration has to be set up accordingly (via the WITH_INF_ENGINE CMake option, if I understand correctly). Assuming this is done, it is not clear to me what the actual advantage at runtime is, and surprisingly I found neither documentation nor tutorials that answer my questions, so I would like to ask here. The most informative tutorial I found is this article.

But still, taking a closer look at this article raises more questions than it answers. First, only the G-API and the DNN module depend on OpenVINO, provided support for it was enabled while compiling OpenCV. I don’t know exactly which parts of G-API are improved (if somebody can clarify this, that would be helpful as well), but accelerating the DNN module is presumably the main aim of integrating OpenVINO into OpenCV, as the article also explains. In particular, the workflow with OpenVINO can be summarized as follows: export the model to the Inference Engine format, which creates a .bin and an .xml file, and load these two files into OpenCV using the readNetFromModelOptimizer() function. The article presents a short speed comparison showing up to a 1.8x speed improvement, depending on the actual task.
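
For concreteness, this is how I understand that workflow in code (just a sketch; the file names, input size, and scale factor are placeholders that depend on the concrete model):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    // Load the model from the Inference Engine (IR) format produced by the
    // Model Optimizer: the .xml file describes the topology, the .bin file
    // holds the weights. File names here are placeholders.
    cv::dnn::Net net = cv::dnn::readNetFromModelOptimizer("model.xml", "model.bin");
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

    cv::Mat img = cv::imread("input.jpg");
    CV_Assert(!img.empty());

    // Input size and scaling depend on the concrete model.
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224));
    net.setInput(blob);
    cv::Mat out = net.forward();
    return 0;
}
```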

However, looking at the presented code shows that it does not load a model in the Inference Engine format (.bin and .xml files) but a regular Caffe model instead. Here, I would like to understand which parts are accelerated by OpenVINO and which are not. Therefore, my first question is: does merely enabling OpenVINO at compile time and activating DNN_BACKEND_INFERENCE_ENGINE at runtime already yield a speed improvement, even for models that are not in the Inference Engine format? Even though this seems to be the conclusion of the article (because the model is not loaded from the Inference Engine format), there could be other explanations. For example, OpenVINO uses TBB, so if OpenCV itself was not compiled with TBB support, the whole improvement could simply come from the TBB inside OpenVINO, and enabling TBB directly in OpenCV might already yield the same performance…?
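
To make the question concrete, this is roughly how I would measure it (a sketch; deploy.prototxt and model.caffemodel are placeholders, and the 224x224 input is just an assumption):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/utility.hpp>   // cv::TickMeter
#include <opencv2/dnn.hpp>
#include <iostream>

// Time the average forward pass of the same Caffe model for a given backend.
static double benchmark(int backend, const cv::Mat& blob)
{
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "model.caffemodel");
    net.setPreferableBackend(backend);
    net.setInput(blob);
    net.forward();                      // warm-up run, excluded from timing

    cv::TickMeter tm;
    tm.start();
    for (int i = 0; i < 100; ++i)
        net.forward();
    tm.stop();
    return tm.getTimeMilli() / 100.0;   // average time per inference in ms
}

int main()
{
    // Dummy gray image stands in for real input data.
    cv::Mat blob = cv::dnn::blobFromImage(cv::Mat(224, 224, CV_8UC3, cv::Scalar::all(127)));
    std::cout << "default backend:          "
              << benchmark(cv::dnn::DNN_BACKEND_OPENCV, blob) << " ms\n";
    std::cout << "inference engine backend: "
              << benchmark(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE, blob) << " ms\n";
    return 0;
}
```

If both backends performed equally once OpenCV itself uses TBB, that would support the explanation above.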

Next, I would like to understand the Inference Engine format (.bin and .xml files) itself. Is it just another format for exporting models to the DNN module (besides Caffe, TensorFlow ProtoBuf, ONNX, etc.), or does using the Inference Engine format in combination with OpenVINO yield better performance than exporting the same model to any other format? I have already verified that I cannot load a model in the Inference Engine format if OpenCV was not compiled with OpenVINO; however, this does not say anything about the respective performance.
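
In other words, would a comparison like the following show a difference (again just a sketch; model.onnx, model.xml, and model.bin are assumed to be the same network, once in its original format and once converted to IR)?

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/utility.hpp>   // cv::TickMeter
#include <opencv2/dnn.hpp>
#include <iostream>

// Average forward-pass time with the Inference Engine backend,
// independent of which format the network was loaded from.
static double averageForwardMs(cv::dnn::Net net, const cv::Mat& blob)
{
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setInput(blob);
    net.forward();                      // warm-up
    cv::TickMeter tm;
    tm.start();
    for (int i = 0; i < 100; ++i)
        net.forward();
    tm.stop();
    return tm.getTimeMilli() / 100.0;
}

int main()
{
    cv::Mat blob = cv::dnn::blobFromImage(cv::Mat(224, 224, CV_8UC3, cv::Scalar::all(127)));
    // Same network in two formats: original ONNX export vs. converted IR.
    cv::dnn::Net onnxNet = cv::dnn::readNetFromONNX("model.onnx");
    cv::dnn::Net irNet   = cv::dnn::readNetFromModelOptimizer("model.xml", "model.bin");
    std::cout << "ONNX + IE backend: " << averageForwardMs(onnxNet, blob) << " ms\n";
    std::cout << "IR   + IE backend: " << averageForwardMs(irNet, blob) << " ms\n";
    return 0;
}
```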

I hope that somebody here can shed some light on these questions. Thanks in advance!