ONNX loading error after conversion from Tensorflow 2 - Inconsistent shape for ConcatLayer in function 'getMemoryShapes'

Hi, I converted the TensorFlow 2 model found here: arbitrary-image-stylization-v1 | Kaggle
I unzipped the file into a folder.
Using the tf2onnx.convert utility found here: GitHub - onnx/tensorflow-onnx: Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX
I ran this command line: python -m tf2onnx.convert --saved-model ./magenta_arbitrary-image-stylization-v1-256_2 --output model.onnx --opset 11
The ONNX file was created.
Using this simple code:

cv::String model = "../models/model.onnx";
cv::dnn::Net net = cv::dnn::readNetFromONNX(model);
net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

I get this error:

[ERROR:0@9,927] global onnx_importer.cpp:1061 handleNode DNN/ONNX: ERROR during processing node with 3 inputs and 1 outputs: [Concat]:(onnx_node!StatefulPartitionedCall/InceptionV3/Mixed_6a/concat) from domain='ai.onnx'
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.8.0-dev) /media/Data-pI7/Install/OpenCV/opencv/modules/dnn/src/onnx/onnx_importer.cpp:1083: error: (-2:Unspecified error) in function 'handleNode'

Node [Concat@ai.onnx]:(onnx_node!StatefulPartitionedCall/InceptionV3/Mixed_6a/concat) parse error: OpenCV(4.8.0-dev) /media/Data-pI7/Install/OpenCV/opencv/modules/dnn/src/layers/concat_layer.cpp:109: error: (-201:Incorrect size of input array) Inconsistent shape for ConcatLayer in function 'getMemoryShapes'

Any ideas?

I think the input size is not fixed; you need to fix it:

python -m onnxruntime.tools.make_dynamic_shape_fixed --dim_param unk__5237 --dim_value 1 "model.onnx" model.onnx
python -m onnxruntime.tools.make_dynamic_shape_fixed --dim_param unk__5238 --dim_value 480 "model.onnx" model.onnx
python -m onnxruntime.tools.make_dynamic_shape_fixed --dim_param unk__5239 --dim_value 640 "model.onnx" model.onnx

python -m onnxruntime.tools.make_dynamic_shape_fixed --dim_param unk__5240 --dim_value 1 "model.onnx" model.onnx
python -m onnxruntime.tools.make_dynamic_shape_fixed --dim_param unk__5241 --dim_value 480 "model.onnx" model.onnx
python -m onnxruntime.tools.make_dynamic_shape_fixed --dim_param unk__5242 --dim_value 640 "model.onnx" model.onnx

Found this a few minutes ago; it's the same issue: Opencv loads models of dynamic input types · Issue #19347 · opencv/opencv · GitHub

Thanks for this, I will try when I get back home from work.

Fixed sizes are a serious limitation in this particular case, because I wanted to be able to vary the size of the input style image.

The shape must be fixed in OpenCV. Yes, it is a limitation, but in practice OpenCV is used to process video or images in an application, and you don't change the camera or image size all the time.

So, I tried your suggestion… that's not it. It gives me the same error, exact same text.

Could it be instead that the model needs two inputs? Do you think OpenCV supports that?

I tried and it works, but not for inference: there is a new error with the padding layer.
I downloaded the model here

cv2.error: OpenCV(4.8.0-dev) C:\lib\opencv\modules\dnn\src\layers\padding_layer.cpp:154: error: (-213:The function/feature is not implemented) Only spatial reflection padding is supported. in function 'cv::dnn::PaddingLayerImpl::forward'

OK, thanks. I guess I’ll have to open a bug report to maybe make it work later…

By the way, how do you pass the inputs? There are two of them (one for the content image, one for the style). blobFromImages?

net = cv.dnn.readNet("model.onnx")
image = cv.imread(cv.samples.findFile("classroom__rgb_00283_1024.png"))
image_mod = cv.imread(cv.samples.findFile("starry_night.jpg"))
paramSAMEncoder = cv.dnn.Image2BlobParams()
paramSAMEncoder.datalayout = cv.dnn.DNN_LAYOUT_NHWC
paramSAMEncoder.ddepth = cv.CV_32F
paramSAMEncoder.mean = (0, 0, 0)
paramSAMEncoder.scalefactor = (1, 1, 1)
paramSAMEncoder.size = (480, 640)
paramSAMEncoder.swapRB = True
paramSAMEncoder.paddingmode = cv.dnn.DNN_PMODE_NULL
blob_opencv = cv.dnn.blobFromImageWithParams(image, paramSAMEncoder)
blob_opencv2 = cv.dnn.blobFromImageWithParams(image_mod, paramSAMEncoder)
net.setInput(blob_opencv, "placeholder")
net.setInput(blob_opencv2, "placeholder_1")
blob = net.forward()

Thanks for that, but I didn't even get past the loading step.
The place you downloaded the files from has the same md5 as the first link I gave, so it doesn't change anything for me.
I just reproduced step 1: convert to ONNX with the command line I provided.
Then I updated the input shapes with the commands you gave.
I still have the same problem loading this model. I can't reproduce your example.
Is there anything else you did but didn't mention?
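For reference, the md5 comparison mentioned above can be done with a small stdlib-only sketch (function name is my own, not from this thread):

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```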

Yes, it does not work; I ran my commands again and the variable names seem to change between conversions.
I ran this command (on Windows):

python -m tf2onnx.convert --saved-model . --output model0.onnx --opset 11

In Netron, when I click on the placeholder node of model0.onnx, I can see that the variable names are unk__5235, unk__5236, and so on.

python -m onnxruntime.tools.make_dynamic_shape_fixed --dim_param unk__5235 --dim_value 1 model0.onnx model1.onnx

Ah, now I understand better; I could reproduce everything you gave me.
Thank you so much, I learned a lot from you.

By the way, no need to install netron, there’s an online version here: https://netron.app

Now I have to ask for support of the unimplemented padding feature!

Here is the code in C++:

    // load ONNX model
    cv::String model = "../models/arbitrary-image-stylization-fixed-1024x768.onnx";
    cv::dnn::Net net = cv::dnn::readNetFromONNX(model);
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
    // prepare images
    cv::Mat content = cv::imread("../content.png", cv::IMREAD_COLOR);
    cv::Mat style = cv::imread("../style.png", cv::IMREAD_COLOR);
    // parameters for blobs
    cv::dnn::Image2BlobParams paramSAMEncoder;
    paramSAMEncoder.datalayout = cv::dnn::DNN_LAYOUT_NHWC;
    paramSAMEncoder.ddepth = CV_32F;
    paramSAMEncoder.mean = cv::Scalar(0,0,0);
    paramSAMEncoder.scalefactor = cv::Scalar(1, 1, 1);
    paramSAMEncoder.size = cv::Size(1024, 1024);
    paramSAMEncoder.swapRB = true;
    paramSAMEncoder.paddingmode = cv::dnn::DNN_PMODE_NULL;
    // get blob for content - fixed param 1024x1024
    cv::Mat blobContent = cv::dnn::blobFromImageWithParams(content, paramSAMEncoder);
    // get blob for style - fixed param 768x768
    paramSAMEncoder.size = cv::Size(768, 768);
    cv::Mat blobStyle = cv::dnn::blobFromImageWithParams(style, paramSAMEncoder);
    // feed blobs to inputs
    net.setInput(blobContent, "placeholder"); // have to use netron to get this name
    net.setInput(blobStyle, "placeholder_1");
    // inference
    cv::Mat output = net.forward();

My question was answered; what are the rules here to indicate that the subject is closed/resolved? I can't find a button for that, and I also read the FAQ.

No rule. You can add [solved] to the question title.