We have a 3-channel input image of size 128x128 on which we want to run an object detection model (in ONNX format).

On printing the `inputImg` Mat in Java, we get:

*// Mat [ 128 x 128 x CV_8UC3, isCont=true, isSubmat=false, nativeObj=0x76e1a34500, dataAddr=0x76e28ba400 ]*

Then we scale the intensity values into the range (0, 1):

```java
inputImg.convertTo(inputImg, CvType.CV_32F, 1.0 / 255.0);
```

On printing `inputImg` again, we get:

// *Mat [ 128 x 128 x CV_32FC3, isCont=true, isSubmat=false, nativeObj=0x76e1a34500, dataAddr=0x76e226ca00 ]*

Now we want to convert this image into a blob for inference through the object detection model using the OpenCV DNN module:

```java
Mat blob = Dnn.blobFromImage(inputImg, 1.0,
        new org.opencv.core.Size(128, 128),
        new Scalar(0, 0, 0), false, false, CvType.CV_32F);
System.out.println(blob);
// Mat [ 1 x 3 x 128 x 128 x CV_32FC1, isCont=true, isSubmat=false, nativeObj=0x76eca921c0, dataAddr=0x764bab8840 ]
```
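As we understand it, `blobFromImage` lays the data out in NCHW order: for a blob of shape (N, C, H, W), the value for batch n, channel c, row y, column x sits at flat index ((n*C + c)*H + y)*W + x, so each channel occupies one contiguous 128x128 plane. A small plain-Java sketch of that index math (illustrative only, no OpenCV dependency):

```java
public class NchwIndex {
    // Flat index of element (n, c, y, x) in an NCHW-ordered buffer.
    static int nchwIndex(int n, int c, int y, int x, int C, int H, int W) {
        return ((n * C + c) * H + y) * W + x;
    }

    public static void main(String[] args) {
        int C = 3, H = 128, W = 128;
        // Channel 1 starts one full 128x128 = 16384-element plane into the buffer.
        System.out.println(nchwIndex(0, 1, 0, 0, C, H, W)); // 16384
        System.out.println(nchwIndex(0, 2, 0, 0, C, H, W)); // 32768
    }
}
```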

Now we reshape this blob Mat because our model expects channels-last input of shape 1x128x128x3 (NHWC) instead of 1x3x128x128 (NCHW):

```java
int[] new_blobShape = {1, 128, 128, 3};
Mat reshapeBlob = blob.reshape(1, new_blobShape);
System.out.println(reshapeBlob);
// Mat [ 1 x 128 x 128 x 3 x CV_32FC1, isCont=true, isSubmat=false, nativeObj=0x7649b27d20, dataAddr=0x764a5c2980 ]
```
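Our understanding is that `reshape` only reinterprets the existing buffer without moving any data, which is part of what we want to confirm. For comparison, actually converting NCHW data to NHWC would require a per-element permutation, as in this plain-Java sketch (illustrative only, no OpenCV dependency):

```java
import java.util.Arrays;

public class NchwToNhwc {
    // Physically permute an NCHW float buffer (single batch) into NHWC order.
    static float[] nchwToNhwc(float[] src, int C, int H, int W) {
        float[] dst = new float[src.length];
        for (int c = 0; c < C; c++)
            for (int y = 0; y < H; y++)
                for (int x = 0; x < W; x++)
                    dst[(y * W + x) * C + c] = src[(c * H + y) * W + x];
        return dst;
    }

    public static void main(String[] args) {
        // Tiny 2x2 image, 3 channels: in NCHW the channel planes are contiguous.
        float[] nchw = {
            0, 1, 2, 3,       // channel 0 plane
            10, 11, 12, 13,   // channel 1 plane
            20, 21, 22, 23    // channel 2 plane
        };
        // A reshape would leave this buffer untouched; true NHWC instead
        // interleaves the three channel values of each pixel:
        System.out.println(Arrays.toString(nchwToNhwc(nchw, 3, 2, 2)));
        // [0.0, 10.0, 20.0, 1.0, 11.0, 21.0, 2.0, 12.0, 22.0, 3.0, 13.0, 23.0]
    }
}
```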

Then we set `reshapeBlob` as the input to the ONNX model and run a forward pass:

```java
net.setInput(reshapeBlob);
Mat detections = net.forward();
```

We have tested this blob as input to the same model in Python and it works (the detection output Mat has shape 1 x 896 x 140), but in Java this blob does not give the expected result: the output has shape 1 x 896.

My question is: is this the correct way to reshape the blob, and does `Mat [ 1 x 128 x 128 x 3 x CV_32FC1 ]` represent a blob of shape (1, 128, 128, 3)?

Kindly help with this issue.