Errors when running a deep neural network exported to ONNX format

Hello

I have encountered a problem with inference on my trained deep neural network. I did transfer learning on a YOLOv3 object detection model in MATLAB to detect a single category, 'person'. The resulting network works perfectly, and I successfully exported the trained model to ONNX using the exportONNXNetwork function.

For deployment purposes, I tried to run this .onnx model using OpenCV:

model = cv2.dnn.readNetFromONNX("…\\TrainingObjectdetection\\DeployOnnx\\Yolov3FHD600.onnx")

Running the model fails and returns this error:

File "…\DeployOnnx\deployonnximage.py", line 16, in <module>
model = cv2.dnn.readNetFromONNX("…\\TrainingObjectdetection\\DeployOnnx\\Yolov3FHD600.onnx")
cv2.error: OpenCV(4.5.3) C:\OpencvInstall\opencv-4.5.3\modules\dnn\src\onnx\onnx_importer.cpp:2146: error: (-2:Unspecified error) in function 'cv::dnn::dnn4_v20210608::ONNXImporter::handleNode'
> Node [Conv]:(conv2d_2) parse error: OpenCV(4.5.3) C:\OpencvInstall\opencv-4.5.3\modules\dnn\src\layers\convolution_layer.cpp:96: error: (-213:The function/feature is not implemented) Unsupported asymmetric padding in convolution layer in function 'cv::dnn::BaseConvolutionLayerImpl::BaseConvolutionLayerImpl'

After checking my ONNX network in Netron, I found asymmetric padding [1 1 0 0] in many convolution layers: conv2d_2, conv2d_5, conv2d_10, conv2d_27 and conv2d_44.

Using the protoc command found on GitHub, I converted my ONNX file to a text file, edited the padding of these convolution layers to [1 1 1 1], and then converted the modified network back to ONNX.

The modified network runs, but the bounding boxes are completely wrong. During training I resized the images to the network input size and scaled the pixel values to [0, 1]. I respected that when I prepared the blob with cv2.dnn.blobFromImage.

OpenCV 4.5.3, built from source with extra modules and CUDA support
Windows 10

import cv2
import numpy as np
# load class names
with open("…\\TrainingObjectdetection\\DeployOnnx\\AIMxproject.txt") as f:
    class_names = f.read().split('\n')

# load the DNN model
model = cv2.dnn.readNetFromONNX("…\\TrainingObjectdetection\\DeployOnnx\\Yolov3FHD600_mod.onnx")
# read the image from disk
image = cv2.imread("…\\TrainingObjectdetection\\testpeddetect\\img002000.jpg")
image_height, image_width, _ = image.shape
print(image_height,image_width)

# create blob from image
blob = cv2.dnn.blobFromImage(image=image, size=(608, 608), scalefactor=1.0/255,
                             mean=(73, 70, 70), swapRB=True, ddepth=cv2.CV_32F)
# also tried: scalefactor=1.0, mean=(73, 70, 70), mean=(0.28, 0.27, 0.27), ddepth=cv2.CV_8U
 
# set the blob as the network input
model.setInput(blob)
# forward pass through the model to carry out the detection
output = model.forward()

# loop over each of the detection
for detection in output[0, 0, :, :]:
    # extract the confidence of the detection
    confidence = detection[2]
    # draw bounding boxes only if the detection confidence is above...
    # ... a certain threshold, else skip
    if confidence > .5:
        # get the class id
        class_id = detection[1]
        print(class_id)
        # map the class id to the class
        class_name = 'person' #class_names[int(class_id)-1]
        #print(class_name)
        color = (255,0,0)
        # get the bounding box coordinates
        box_x = detection[3] * image_width
        box_y = detection[4] * image_height
        print(detection[3],detection[4],detection[5],detection[6])
        # get the bounding box width and height
        box_width = detection[5] * image_width
        box_height = detection[6] * image_height
        # draw a rectangle around each detected object
        cv2.rectangle(image, (int(box_x), int(box_y)), (int(box_width), int(box_height)), color, thickness=2)
        # put the class name above the box
        cv2.putText(image, class_name, (int(box_x), int(box_y - 5)), cv2.FONT_HERSHEY_SIMPLEX, 1, color, 2)

cv2.imshow('image', image)
cv2.imwrite("…\\TrainingObjectdetection\\DeployOnnx\\image_result.jpg", image)
cv2.waitKey(0)
cv2.destroyAllWindows()

In the link below you can find the exported DNN, the modified version of it, and some test images.

Any help is very appreciated.
Thanks in advance


@KamalLagh

This is what I understand: you made a model in MATLAB, and it works. The exported ONNX model doesn't work in OpenCV. You found one error in the padding (could there be more?), corrected it, and repacked it into an ONNX file by hand. The result kind of works, but not well.

The problem is that there are so many untraceable points in this process that I would try another approach.

For example, try exporting your MATLAB model to another OpenCV-compatible format, one you can verify works.

You can even test the exported file by opening it again in MATLAB.