I found the issue. If I resize the image directly with blobFromImage, the aspect ratio is distorted, but if I use the strategy described in the following link Detecting objects with YOLOv5, OpenCV, Python and C++ | by Luiz doleron | MLearning.ai | Medium, the confidence jumps from 25% to 79%. The strategy is:
- Create a square numpy array whose side is the maximum side of the image (height or width)
- Copy the image exactly as it was captured (16:9) into the numpy array
- This way the numpy array is a square in which the image sits at the top
- Since the images I'm working with are 16:9 at 4K, the height is only 2160, so the rest of the square stays black (np.zeros)
- Now that I have the square image (original image + black pixels), I can resize it to 1280x1280
The code that helped is the following:
import numpy as np

# source is the original BGR frame, e.g. 3840x2160
row, col, _ = source.shape                     # shape returns (height, width, channels)
_max = max(row, col)
resized = np.zeros((_max, _max, 3), np.uint8)  # black square canvas
resized[0:row, 0:col] = source                 # paste the image into the top-left corner
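For reference, here is a minimal sketch of how that square image can then be passed to blobFromImage at 1280x1280. The 1/255 scale factor and swapRB=True are the usual YOLOv5 preprocessing settings, and net stands for a YOLOv5 model already loaded with cv2.dnn.readNet; both are my assumptions, not part of the snippet above:

import cv2

# assumption: the YOLOv5 ONNX model has already been loaded, e.g.
# net = cv2.dnn.readNet("yolov5s.onnx")
blob = cv2.dnn.blobFromImage(resized, 1/255.0, (1280, 1280), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward()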
I still don’t know why there is a remaining 5% discrepancy in confidence, but I’m on the right track.
Anyway, thank you for your help. Believe it or not, the word you wrote, “outdated”, helped me a lot, as it led me to look for a more recent tutorial on the code.
Regards