A question about converting Python code into C++ code when calling an ONNX model in OpenCV DNN

Thanks to the administrator for helping to convert u2net into ONNX format; I have successfully called it from Python using OpenCV (see Integrate u2net into model zoo · Issue #13 · opencv/opencv_zoo · GitHub). But I have a new problem when rewriting it in C++, specifically this part:

    # Norm
    pred = normPred(d0[:, 0, :, :])
    # Save
    save_output('test_imgs/sky1.jpg', pred)

I need to translate this into C++, where the two helper functions are:


def normPred(d):
    ma = np.amax(d)
    mi = np.amin(d)
    return (d - mi)/(ma - mi)

def save_output(image_name, predict):
    img = cv.imread(image_name)
    h, w, _ = img.shape
    predict = np.squeeze(predict, axis=0)
    img_p = (predict * 255).astype(np.uint8)
    img_p = cv.resize(img_p, (w, h))
    print('{}-result-opencv_dnn.png-------------------------------------'.format(image_name))
    cv.imwrite('{}-result-opencv_dnn.png'.format(image_name), img_p)

I hope you can give me some help and advice, thank you!

what is your network doing?
what is the output shape in c++?
may we see the c++ code you have so far?

U2Net is a new network structure based on U-Net; it also uses an encoder-decoder design. Drawing on FPN and U-Net, the authors propose a new module, RSU (ReSidual U-block). In my tests it achieves impressive results at segmenting foreground objects from the background, and it has good real-time performance as well: the forward pass on a P100 takes only 18 ms (56 fps).

https://codeload.github.com/NathanUA/U-2-Net/zip/master

The Python code is:

import os
import argparse

from skimage import io, transform
import numpy as np
from PIL import Image
import cv2 as cv

parser = argparse.ArgumentParser(description='Demo: U2Net Inference Using OpenCV')
parser.add_argument('--input', '-i')
parser.add_argument('--model', '-m', default='u2net_human_seg.onnx')
args = parser.parse_args()

def normPred(d):
    ma = np.amax(d)
    mi = np.amin(d)
    return (d - mi)/(ma - mi)

def save_output(image_name, predict):
    img = cv.imread(image_name)
    h, w, _ = img.shape
    predict = np.squeeze(predict, axis=0)
    img_p = (predict * 255).astype(np.uint8)
    img_p = cv.resize(img_p, (w, h))
    print('{}-result-opencv_dnn.png-------------------------------------'.format(image_name))
    cv.imwrite('{}-result-opencv_dnn.png'.format(image_name), img_p)

def main():
    # load net
    net = cv.dnn.readNet('saved_models/sky_split.onnx')
    input_size = 320 # fixed
    # build blob using OpenCV
    img = cv.imread('test_imgs/sky1.jpg')
    blob = cv.dnn.blobFromImage(img, scalefactor=(1.0/255.0), size=(input_size, input_size), swapRB=True)
    # Inference
    net.setInput(blob)
    d0 = net.forward('output')
    # Norm
    pred = normPred(d0[:, 0, :, :])
    # Save
    save_output('test_imgs/sky1.jpg', pred)

if __name__ == '__main__':
    main()

My C++ code so far:

#include "opencv2/dnn.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
#include "opencv2/objdetect.hpp"
using namespace cv;
using namespace std;
using namespace cv::dnn;
int main(int argc, char ** argv)
{
    Net net = readNetFromONNX("E:/template/sky_split.onnx");
    if (net.empty()) {
        printf("read  model data failure...\n");
        return -1;
    }
    // load image data
    Mat frame = imread("e:/template/sky1.jpg");
    Mat blob;
    blobFromImage(frame, blob, 1.0 / 255.0, Size(320, 320), cv::Scalar(), true);
    net.setInput(blob);
    Mat prob = net.forward();   
    // ... (post-processing still to be translated)
    return 0;
}

I have encountered problems translating the Python code into C++ code. I hope to get some advice. Thanks!
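
As a side note, the output shape asked about above is easy to check directly; a minimal sketch (the helper name printBlobShape is just for illustration), to be called with the prob Mat returned by net.forward():

#include <opencv2/core.hpp>
#include <iostream>

// print the shape of an N-dimensional dnn blob, e.g. "1x1x320x320"
static void printBlobShape(const cv::Mat& prob)
{
    for (int i = 0; i < prob.dims; i++)
        std::cout << prob.size[i] << (i + 1 < prob.dims ? "x" : "\n");
}

Since the Python code slices d0[:, 0, :, :] and then squeezes axis 0, the blob for this model should come back as 1x1x320x320 (NCHW).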

The answer to this issue is:

assuming there's a single image in your prob array:

    Mat prob = net.forward();
    Mat slice(prob.size[2], prob.size[3], prob.ptr<float>(0,0));
    normalize(slice, slice, 0, 255, NORM_MINMAX, CV_8U);
    resize(slice, slice, frame.size());   
    imwrite('test_imgs/sky1.jpg', slice);

that’s it :wink:
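
In case it is not obvious where normPred went: normalize(..., NORM_MINMAX, CV_8U) folds normPred and the (predict * 255).astype(np.uint8) step into a single call. A minimal sketch of the manual equivalent, assuming slice is the CV_32F view from the snippet above (the name mask8u is just for illustration):

    // manual version of normPred() followed by (predict * 255).astype(np.uint8)
    double mi, ma;
    minMaxLoc(slice, &mi, &ma);                        // np.amin / np.amax
    Mat mask8u;
    slice.convertTo(mask8u, CV_8U, 255.0 / (ma - mi),  // scale:  255 / (ma - mi)
                    -mi * 255.0 / (ma - mi));          // offset: -mi * 255 / (ma - mi); guard ma != mi in real code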

Thank you!
Below is my result:

#include "opencv2/dnn.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"

#include <iostream>

#include "opencv2/objdetect.hpp"

using namespace cv;
using namespace std;
using namespace cv::dnn;

int main(int argc, char ** argv)
{
	Net net = readNetFromONNX("E:/template/sky_split.onnx");

	if (net.empty()) {
		printf("read  model data failure...\n");
		return -1;
	}
	// load image data
	Mat frame = imread("e:/template/sky14.jpg");
	Mat blob;
	blobFromImage(frame, blob, 1.0 / 255.0, Size(320, 320), cv::Scalar(), true);
	net.setInput(blob);
	Mat prob = net.forward("output");  
	Mat slice(prob.size[2], prob.size[3], CV_32FC1, prob.ptr<float>(0, 0));
	normalize(slice, slice, 0, 255, NORM_MINMAX, CV_8U);
	resize(slice, slice, frame.size());

	return 0;
}
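
One step from the Python version is still missing here: nothing is written to disk after the resize. A minimal addition just before return 0, with the output path chosen only as an example:

	// write the normalized, resized mask to disk (output path is just an example)
	imwrite("E:/template/sky14-result-opencv_dnn.png", slice);
	// or preview it on screen
	imshow("mask", slice);
	waitKey(0);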