I’ve implemented a facial detection + recognition pipeline on the Jetson Nano. The detection part uses MediaPipe 0.8.5 (BlazeFace model) and runs on the GPU of the Jetson Nano, while the recognition part uses dlib’s face_recognition library and runs on the CPU.
With the recognition part of the code commented out, detection alone runs at about 30 fps. But when I run detection + recognition together, the overall frame rate drops to about 5 fps.
I need help figuring out how to avoid this drastic drop in the overall frame rate.
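For reference, here is a stripped-down sketch of the pipeline I’m running (the camera index, confidence threshold, and known-encodings list are placeholders, not my exact code):

```python
import cv2
import face_recognition
import mediapipe as mp

# Known faces: in my actual code these are loaded from disk; placeholders here.
known_encodings = []   # list of 128-d dlib face encodings
known_names = []       # parallel list of names

cap = cv2.VideoCapture(0)              # placeholder camera source
mp_fd = mp.solutions.face_detection

with mp_fd.FaceDetection(min_detection_confidence=0.5) as detector:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        h, w = rgb.shape[:2]

        # --- Detection: MediaPipe BlazeFace (runs on the GPU) ---
        results = detector.process(rgb)
        locations = []
        if results.detections:
            for det in results.detections:
                box = det.location_data.relative_bounding_box
                left = max(int(box.xmin * w), 0)
                top = max(int(box.ymin * h), 0)
                right = min(int((box.xmin + box.width) * w), w)
                bottom = min(int((box.ymin + box.height) * h), h)
                # face_recognition expects (top, right, bottom, left)
                locations.append((top, right, bottom, left))

        # --- Recognition: dlib via face_recognition (runs on the CPU) ---
        # Commenting out this block is what gets me ~30 fps.
        for encoding in face_recognition.face_encodings(rgb, known_face_locations=locations):
            matches = face_recognition.compare_faces(known_encodings, encoding)
            # ... look up known_names for the matching entry ...

        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```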
OK cool, but this is the OpenCV forum. You might get answers here, but if you expect them, you should find a more fitting place to ask.
That is not bad for a Jetson Nano. In my experience, 10 fps is achievable with models suited to your task. Pay attention to the architecture of your model, e.g. if it is a ResNet50, look for a solution based on ResNet18.
Thank you for the reply. However, I’ve since been able to get an overall frame rate of about 23 fps with detection using MediaPipe (on GPU) and recognition using dlib’s face_recognition library (on CPU) on the Jetson Nano.
There are really just two factors here: model size and GPU usage. I would recommend compiling OpenCV 4.9, which supports the MediaPipe models with GPU support, and coding an analogous solution to your MediaPipe one using the ONNX equivalents from the OpenCV Zoo. Then you can compare apples with apples.
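For example, something roughly along these lines with the YuNet detector and SFace recognizer from the OpenCV Zoo (a sketch only: the model file names are the ONNX files published in the zoo, the image path and camera index are placeholders, and the CUDA backend/target assume OpenCV was built with CUDA enabled):

```python
import cv2

# ONNX models from the OpenCV Zoo (https://github.com/opencv/opencv_zoo); paths are placeholders.
DET_MODEL = "face_detection_yunet_2023mar.onnx"
REC_MODEL = "face_recognition_sface_2021dec.onnx"

# Assumes a CUDA-enabled OpenCV build; otherwise fall back to the default backend/target.
backend, target = cv2.dnn.DNN_BACKEND_CUDA, cv2.dnn.DNN_TARGET_CUDA

detector = cv2.FaceDetectorYN.create(DET_MODEL, "", (320, 320),
                                     score_threshold=0.9, nms_threshold=0.3,
                                     top_k=5000, backend_id=backend, target_id=target)
recognizer = cv2.FaceRecognizerSF.create(REC_MODEL, "", backend_id=backend, target_id=target)

# Reference embedding of one known face (placeholder image, assumed to contain a face).
ref = cv2.imread("known_face.jpg")
detector.setInputSize((ref.shape[1], ref.shape[0]))
_, ref_faces = detector.detect(ref)
ref_feat = recognizer.feature(recognizer.alignCrop(ref, ref_faces[0]))

cap = cv2.VideoCapture(0)  # placeholder camera source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    detector.setInputSize((frame.shape[1], frame.shape[0]))
    _, faces = detector.detect(frame)   # N x 15 array, or None if nothing was found
    if faces is not None:
        for face in faces:
            feat = recognizer.feature(recognizer.alignCrop(frame, face))
            score = recognizer.match(ref_feat, feat, cv2.FaceRecognizerSF_FR_COSINE)
            # accept as a match above a cosine-similarity threshold (see the OpenCV face tutorial)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

That way both detection and recognition go through the same cv::dnn backend, so the comparison with your MediaPipe + dlib numbers is fair.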