G-API and TensorRT and ARM #22342

Hey,

Following OpenCV Webinar 9: OpenCV 4.5 Graph API - YouTube, there is support for running NN inference within a graph node using the OpenVINO backend. Is the same operation supported in non-Intel environments, i.e. on ARM architectures? Moreover, can the “inference operation” use “TensorRT” as the inference engine of a specific “node”?

Note: G-API looks amazing!

Thanks

Hi @Oak,

Yes, inference (in any form – we have four right now) is an Operation, which means it can be implemented (provided) by various backends.

Today’s community version of OpenCV hosts three backends supporting inference:

- OpenVINO™ Toolkit (the original one, Intel-specific);
- ONNX Runtime (cross-platform, with various execution providers);
- OAK (to run inference on OAK camera modules).

There were also some proprietary backends supporting inference which haven’t been published.
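For illustration, here is a minimal sketch of what “inference as an operation” looks like with the ONNX Runtime backend; the model path, input image, and network tag are placeholders of my own:

```cpp
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/infer.hpp>
#include <opencv2/gapi/infer/onnx.hpp>
#include <opencv2/imgcodecs.hpp>

// Declare the network as a G-API typed entity: one input, one output.
G_API_NET(Classifier, <cv::GMat(cv::GMat)>, "sample.classifier");

int main() {
    // Inference is just another operation in the graph:
    cv::GMat in;
    cv::GMat out = cv::gapi::infer<Classifier>(in);
    cv::GComputation graph(cv::GIn(in), cv::GOut(out));

    // The backend is selected via the network parameters, not the graph;
    // swapping cv::gapi::onnx::Params for another backend's Params
    // re-targets the same graph to a different inference engine.
    auto net = cv::gapi::onnx::Params<Classifier>{ "model.onnx" };

    cv::Mat input = cv::imread("input.jpg"), result;
    graph.apply(cv::gin(input), cv::gout(result),
                cv::compile_args(cv::gapi::networks(net)));
    return 0;
}
```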

can the “inference operation” use “TensorRT” as the inference engine of a specific “node”?

Yes, if a TensorRT backend is implemented. If you’re interested in developing one, you may want to check the ONNX Runtime backend source to figure out the structure (which is pretty basic there). You’ll need:

- a public header exposing a Params<> class to configure the backend (opencv2/gapi/infer/onnx.hpp is the ONNX Runtime counterpart);
- a backend implementation that maps G-API’s inference operations onto TensorRT calls (see modules/gapi/src/backends/onnx/ for how the ONNX Runtime backend does it).
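To make that concrete, here is a purely hypothetical sketch of the public Params side of such a backend, modeled on the ONNX Runtime one. Nothing under cv::gapi::trt exists in OpenCV today; all of these names are placeholders:

```cpp
// Hypothetical header, e.g. opencv2/gapi/infer/trt.hpp -- this only
// mirrors the layout of the real opencv2/gapi/infer/onnx.hpp.
#include <string>
#include <opencv2/gapi/gkernel.hpp>   // cv::gapi::GBackend
#include <opencv2/gapi/util/any.hpp>  // cv::util::any

namespace cv { namespace gapi { namespace trt {

// Provided by the backend implementation (the gonnxbackend.cpp counterpart):
cv::gapi::GBackend backend();

template<typename Net> struct Params {
    explicit Params(const std::string &engine_path)
        : m_engine_path(engine_path) {}

    // The triple G-API uses to route a network to its backend:
    std::string        tag()     const { return Net::tag(); }
    cv::util::any      params()  const { return cv::util::any{ m_engine_path }; }
    cv::gapi::GBackend backend() const { return cv::gapi::trt::backend(); }

private:
    std::string m_engine_path;  // path to a serialized TensorRT engine
};

}}} // namespace cv::gapi::trt
```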

Also, note that adding a new infer backend alone is only the first step. Normally, an efficient execution path assumes vertical integration, where input, preprocessing, inference, and post-processing all work together as efficiently as they can. For example, for the OpenVINO backend we added support for Intel’s oneVPL for decode (as input) and for GPU preprocessing, to build a full GPU pipeline. You can find this example here: opencv/modules/gapi/samples/onevpl_infer_single_roi.cpp at 4.6.0 · opencv/opencv · GitHub
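As a rough illustration of that vertical integration (without oneVPL), here is a hedged sketch of a streaming pipeline where decode, preprocessing, and inference all live in one compiled graph; the file names and network tag are my own placeholders:

```cpp
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/imgproc.hpp>
#include <opencv2/gapi/infer.hpp>
#include <opencv2/gapi/infer/onnx.hpp>
#include <opencv2/gapi/streaming/cap.hpp>  // cv::gapi::wip::GCaptureSource

G_API_NET(Detector, <cv::GMat(cv::GMat)>, "sample.detector");

int main() {
    // One graph covers the whole path: input -> preprocess -> infer.
    cv::GMat in;
    cv::GMat pre = cv::gapi::resize(in, cv::Size(640, 480));
    cv::GMat out = cv::gapi::infer<Detector>(pre);

    auto pipeline = cv::GComputation(cv::GIn(in), cv::GOut(out))
        .compileStreaming(cv::compile_args(cv::gapi::networks(
            cv::gapi::onnx::Params<Detector>{ "detector.onnx" })));

    // The oneVPL sample linked above swaps this OpenCV capture source
    // for a oneVPL one, keeping decode on the GPU as well.
    pipeline.setSource<cv::gapi::wip::GCaptureSource>("video.mp4");
    pipeline.start();

    cv::Mat result;
    while (pipeline.pull(cv::gout(result))) {
        // post-process `result` here
    }
    return 0;
}
```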

Glad you liked G-API!
Dmitry
