Hello, I am working on a project that uses the LBF landmark detector, with the sample landmark_demo.py as a starting point.
The detector successfully detects faces on the 512 x 512 Lena image as well as on a custom 640 x 477 image. However, when I run the detector on a 320 x 240 image, I get the error below.
The input is a uint8 image:

```
img = array([[[171, 169, 163], [170, 170, 164], [170, 170, 164], ..., [160, 160, 161], ...[151, 152, 148], ..., [142, 143, 167], [143, 144, 168], [144, 144, 164]]], dtype=uint8)
```

and it is passed to this function:

```python
def get_face_landmarks(img):
    faces = FACE_DETECTOR.detectMultiScale(
        img,
        scaleFactor=1.3,
        minNeighbors=4,
        minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE,
    )
    _, landmarks = FACE_LANDMARK_DETECTOR.fit(img, faces=faces)
```

which fails with:

```
cv2.error: OpenCV(4.8.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\matrix.cpp:1193: error: (-15:Bad number of channels) The total width is not divisible by the new number of channels in function 'cv::Mat::reshape'
```
I suspect it has something to do with my image resolution, but I am unsure how to diagnose it further.
My questions are:
- What is causing this exception?
- Is there a parameter I can set in the facemark `fit` function to avoid it?
- Is there any way to resolve this issue without resizing the image? (I want low latency, and resizing would probably add computational delay.)