I converted a TensorFlow model to ONNX and then loaded it with OpenCV DNN. The model expects a [-1, 100, 100, 1] input and produces a [-1, 100, 100, 1] output.
I tested it in Python with a dummy input, and I got exactly what I expected: a 100x100 output.
...
params = cv2.dnn.Image2BlobParams()
params.mean = (0,0,0)
params.scalefactor = (1,1,1)
params.size = (100, 100)
params.ddepth = cv2.CV_32F
params.datalayout = cv2.dnn.DNN_LAYOUT_NHWC
params.paddingmode = cv2.dnn.DNN_PMODE_NULL
...
batch_size = 1
h = 100
w = 100
c = 1
x = torch.rand(batch_size, h, w, c, requires_grad=False)  # dummy NHWC input
x_npy = x.detach().numpy()
x_opencv = x_npy.reshape(h, w, 1)  # single HxWx1 image for blobFromImageWithParams
blobPB = cv2.dnn.blobFromImageWithParams(x_opencv, params)
modelopenCV.setInput(blobPB)
output = modelopenCV.forward()
print(output.shape) # = (1, 100, 100, 1)
But I’m having trouble accessing the output elements in C++: the cv::Mat I get from model.forward() has -1 rows and -1 cols. Any idea how I can access these elements? And why this difference between the Python and C++ results? I know that OpenCV's Python bindings use np.arrays instead of cv::Mat for data storage, but I would expect the array to have the same dimensions as the cv::Mat, given an input with the correct dimensions and the same model… no?
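From what I've read in the cv::Mat documentation, a Mat with more than 2 dimensions always reports rows and cols as -1, and the real shape lives in dims and size[]. If that's right, I'd expect something like this sketch to work (a hypothetical helper, assuming the output really is a [1, 100, 100, 1] CV_32F blob):

#include <opencv2/core.hpp>

// Sketch: read one value out of a 4-D CV_32F DNN output.
// Assumes output shape is {batch, height, width, channels} = {1, 100, 100, 1}.
float readOutputAt(const cv::Mat& output, int row, int col)
{
    CV_Assert(output.dims == 4 && output.type() == CV_32F);
    // rows/cols are -1 by design when dims > 2; use size[] instead:
    CV_Assert(row < output.size[1] && col < output.size[2]);
    int idx[4] = {0, row, col, 0}; // {batch, y, x, channel}
    return output.at<float>(idx);
}

Is that the intended way to index it, or am I missing something?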
...
params.scalefactor = 1.0 / 255;
params.size = cv::Size(100, 100);
params.mean = 0.0;
params.ddepth = CV_32F;
params.datalayout = cv::dnn::DNN_LAYOUT_NHWC;
params.paddingmode = cv::dnn::DNN_PMODE_NULL;
cv::dnn::blobFromImageWithParams(imgpb, blobTest, params);
/* trust me, imgpb has the correct dimensions: imgpb.rows == 100, imgpb.cols == 100, imgpb.channels() == 1 */
pModelONNXPeB.setInput(blobTest);
cv::Mat outputsPB1 = pModelONNXPeB.forward();
outputsPB1 has 4 dimensions, and its rows and cols are both -1.
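If the -1 values are just how OpenCV reports N-dimensional Mats, a workaround might be to reinterpret the blob as a plain 100x100 Mat. A sketch, assuming outputsPB1 is continuous (which, as far as I know, Mats returned by forward() are) and that the reshape overload taking a shape vector is available (OpenCV >= 3.4):

// Sketch: view the 4-D [1, 100, 100, 1] output as a 2-D 100x100 Mat.
// reshape() only rewrites the Mat header; no data is copied.
CV_Assert(outputsPB1.isContinuous());
cv::Mat out2d = outputsPB1.reshape(1, {100, 100}); // 1 channel, new shape {100, 100}
std::cout << out2d.rows << " x " << out2d.cols << std::endl; // should print 100 x 100
std::cout << out2d.at<float>(0, 0) << std::endl;

Is this safe, or is there a more idiomatic way to get at the elements?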