I’m trying to use MatchTemplate, but all six modes output unintelligible results. I set up a demonstration like the one in the documentation, but I can’t get similar results. I wrote this in Go using the cgo bindings “gocv.io/x/gocv”, and I get the same behavior when I implement it in Python.
I was expecting to see a greyscale map similar to the one in the tutorial for each mode, but resembling my images. Instead, some of the results came back all black or all white; the ccoeff modes were the only ones not entirely black or white, but there was still no way to make sense of where the template matched in the image. I’m aware of the MinMaxLoc step, but it doesn’t make sense to do it yet because the template match data itself doesn’t make any sense.
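For reference, here’s a pared-down sketch of my setup (the file names and the single mode shown are placeholders; my real code loops over all six modes):

```go
package main

import "gocv.io/x/gocv"

func main() {
	// placeholder file names; loaded as greyscale like in the tutorial
	img := gocv.IMRead("scene.png", gocv.IMReadGrayScale)
	defer img.Close()
	tpl := gocv.IMRead("template.png", gocv.IMReadGrayScale)
	defer tpl.Close()

	result := gocv.NewMat()
	defer result.Close()
	mask := gocv.NewMat() // no mask
	defer mask.Close()

	// only one of the six modes shown; I tried all of them
	gocv.MatchTemplate(img, tpl, &result, gocv.TmCcoeffNormed, mask)

	// result is a single-channel CV_32F score map; this is what comes out
	// looking all black or all white when I view it
}
```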
did you expect the result to be scaled into the range of 0 to 255? it’s not. it’s scaled (or not scaled) according to the equations given in the documentation.
if you need scaling to a specific range, you do that explicitly.
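for example, continuing from a `result` Mat like the one in the sketch above, the explicit scaling could look roughly like this (just to show the idea):

```go
// result is the CV_32F Mat filled by gocv.MatchTemplate
display := gocv.NewMat()
defer display.Close()

// stretch whatever range the scores happen to be in to 0..255,
// then convert to 8-bit so it can be viewed like a normal image
gocv.Normalize(result, &display, 0, 255, gocv.NormMinMax)

display8 := gocv.NewMat()
defer display8.Close()
display.ConvertTo(&display8, gocv.MatTypeCV8U)
gocv.IMWrite("result_scaled.png", display8)
```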
I did expect the outputs to be between 0 and 255; I guess I didn’t see where in the docs a range was specified, and I should have gotten that from the equations. When I multiplied each pixel by 255 I got an output I can recognize. Thank you for pointing that out, I’ve resolved the problem <3
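For anyone finding this later, the scaling is only needed for viewing; the MinMaxLoc step I mentioned works on the raw scores. A rough sketch of what I ended up with (assumes `result`, `tpl`, and `img` from the first sketch, plus `fmt`, `image`, and `image/color` imports):

```go
// find the best score and its location in the raw result
_, maxVal, _, maxLoc := gocv.MinMaxLoc(result)
fmt.Printf("best score %v at %v\n", maxVal, maxLoc)

// for the ccoeff/ccorr modes the maximum is the best match
// (for the sqdiff modes you'd use the minimum instead)
rect := image.Rect(maxLoc.X, maxLoc.Y, maxLoc.X+tpl.Cols(), maxLoc.Y+tpl.Rows())
gocv.Rectangle(&img, rect, color.RGBA{R: 255, A: 255}, 2)
gocv.IMWrite("matched.png", img)
```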