Hi there.
I’m enjoying the YuNet face detector from here: opencv_zoo/models/face_detection_yunet at main · opencv/opencv_zoo · GitHub
I noticed that the raw .onnx file has an input shape of 640x640, but in demo.py the input shape is set to 320x320. Moreover, at line 98 of demo.py the input size is reset to the frame size. I’m struggling to understand what image size the inference actually runs at.
I tried to read the C++ code here: opencv/modules/objdetect/src/face_detect.cpp at 4.x · opencv/opencv · GitHub, but to my shame I find the preprocessing and postprocessing steps quite difficult to follow.
Could you please clarify what’s going on during YuNet inference? What are the preprocessing and postprocessing steps?
Thank you so much!