Trouble inputting image to ONNX net

I have an ONNX net that should be able to receive an image of a person’s face and output the facial landmarks. When I try to feed images to it, I get the following error messages:

[ERROR:0@0.044] global /home/ian/Downloads/opencv-git/src/opencv/modules/dnn/src/net_impl.cpp (1168) getLayerShapesRecursively OPENCV/DNN: [Permute]:(onnx_node!StatefulPartitionedCall/landmark/conv2d/BiasAdd__6): getMemoryShapes() throws exception. inputs=1 outputs=0/1 blobs=0
[ERROR:0@0.044] global /home/ian/Downloads/opencv-git/src/opencv/modules/dnn/src/net_impl.cpp (1174) getLayerShapesRecursively     input[0] = [ 399 432 ]
[ERROR:0@0.044] global /home/ian/Downloads/opencv-git/src/opencv/modules/dnn/src/net_impl.cpp (1184) getLayerShapesRecursively Exception message: OpenCV(4.6.0) /home/ian/Downloads/opencv-git/src/opencv/modules/dnn/src/layers/permute_layer.cpp:161: error: (-215:Assertion failed) (int)_numAxes == inputs[0].size() in function 'getMemoryShapes'

terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.6.0) /home/ian/Downloads/opencv-git/src/opencv/modules/dnn/src/layers/permute_layer.cpp:161: error: (-215:Assertion failed) (int)_numAxes == inputs[0].size() in function 'getMemoryShapes'

Aborted (core dumped)

Below is some code that demonstrates what I am trying to do and reproduces the error.

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/dnn/dnn.hpp>
using namespace std;
using namespace cv;
using namespace cv::dnn;
int main() {
    auto image = imread("face.jpg");
    auto blob = blobFromImage(image, 1.0, Size(128, 128));
    auto net = readNetFromONNX("net.onnx");
    net.setInput(image);
    auto output = net.forward(String("dense_1"));
    cout << output << endl;
}

Any idea why this doesn’t work?

Here is some more information, just in case:

  • OpenCV version: 4.6.0
  • Operating system: Manjaro Linux

As for the neural net, I got it from here: GitHub - yinguobing/head-pose-estimation: Head pose estimation by TensorFlow and OpenCV. Then, using the following command, I converted it to ONNX so I could get OpenCV to load it:

python3 -m tf2onnx.convert --saved-model assets/pose_model --output net.onnx

It probably should be blob in setInput(), not image. However, you won’t get far even with that, since the model expects NHWC input, while the blob is in NCHW order.
I checked different input shapes:

(128,128,3) // what you do above
(-215:Assertion failed) (int)_numAxes == inputs[0].size() in function 'getMemoryShapes'

(1,3,128,128) // like dnn blob
(-2:Unspecified error) Number of input channels should be multiple of 3 but got 128 in function 'getMemoryShapes'

(1,128,128,3) // like main.py (this should be the correct shape, imo)
(-215:Assertion failed) start <= (int)shape.size() && end <= (int)shape.size() && start <= end in function 'total'

In the end, it’s a long way from TF to OpenCV. saved_models from TF2 are not properly supported yet (you need to make a 1.x GraphDef from them first), so this might need some TF-side preprocessing before you can export an ONNX model.
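One avenue worth trying before giving up: tf2onnx has an --inputs-as-nchw option that transposes the named inputs to NCHW at export time, which is the layout OpenCV’s DNN module expects. The input tensor name below (input_1:0) is a placeholder; you would need to check the actual name of your saved model’s input first.

```shell
# Sketch only: transpose the (hypothetical) input "input_1:0" to NCHW
# during conversion so the ONNX graph matches OpenCV's blob layout.
python3 -m tf2onnx.convert --saved-model assets/pose_model \
    --inputs-as-nchw input_1:0 --output net.onnx
```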

Personally, I’d put this on hold and look for an alternative landmark model; OpenCV already has a lot to offer here:

Awesome! That should be much easier. Thank you.