However, I want this function to take video input instead of just an image and I’m struggling with the implementation. I have tried something but keep getting an error message (see end of post for details). I don’t really know what I’m doing so if anyone could link me to some resources or explain how to do this, it would be really appreciated!
def operation(self):
    cap = cv2.VideoCapture(path)
    searchFor = cv2.imread('sampleIMG1.png', cv2.IMREAD_UNCHANGED)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            print("Can't receive frame (stream end?). Exiting ...")
            break
        result = cv2.matchTemplate(cap, searchFor, cv2.TM_CCOEFF_NORMED)
        min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
        print(max_loc)
        print(max_val)
        if max_val > 0.9:
            print("Found it")
        if cv2.waitKey(1) == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
Here is the error message which I get:
File "main.py", line 75, in operation
result = cv2.matchTemplate(cap, searchFor, cv2.TM_CCOEFF_NORMED)
cv2.error: OpenCV(4.5.3) :-1: error: (-5:Bad argument) in function 'matchTemplate'
> Overload resolution failed:
> - image is not a numpy array, neither a scalar
> - Expected Ptr<cv::UMat> for argument 'image'
hey @Shocks, as a rule of thumb:
everyone hates noobs posting images of text anywhere.
please replace with, uhmmm, TEXT, thank you
(and no fear, we’ll help with the formatting)
Yes, would make sense to include them, sorry. Here they are but I will also include them in the main post now!
File "main.py", line 75, in operation
result = cv2.matchTemplate(cap, searchFor, cv2.TM_CCOEFF_NORMED)
cv2.error: OpenCV(4.5.3) :-1: error: (-5:Bad argument) in function 'matchTemplate'
> Overload resolution failed:
> - image is not a numpy array, neither a scalar
> - Expected Ptr<cv::UMat> for argument 'image'
Yes that works! Thank you so much for your help, I really appreciate it.
Just for my understanding, how is it that I can use matchTemplate on color images, but once I pass in frames from a video, I have to grayscale both the input and ‘the needle’?
oh, interesting, both being 3-channel actually works… I just tried it.
anyway, both haystack and needle have to have the same number of channels. you specified IMREAD_UNCHANGED, which reads the file exactly as stored: a grayscale PNG stays single-channel, and a PNG with alpha keeps all four channels, so the needle’s channel count may not match the 3-channel frames.
it has nothing to do with where the frames came from, only what shape and type they are.
look at the .shape and .dtype of both the arguments you supplied originally (before cvtColor)
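For example (synthetic arrays standing in for a decoded frame and a needle that IMREAD_UNCHANGED loaded with its alpha channel; the shapes are illustrative):

```python
import numpy as np

# stand-in for a decoded video frame: always 3-channel BGR
frame = np.zeros((480, 640, 3), dtype=np.uint8)
# stand-in for a needle read with IMREAD_UNCHANGED from a PNG with alpha:
# it keeps all 4 channels (BGRA)
needle = np.zeros((32, 32, 4), dtype=np.uint8)

print(frame.shape, frame.dtype)    # (480, 640, 3) uint8
print(needle.shape, needle.dtype)  # (32, 32, 4) uint8 -- channel mismatch

# drop the alpha channel (or cv2.cvtColor both to grayscale) so the
# channel counts match before calling matchTemplate
needle_bgr = needle[:, :, :3]
print(needle_bgr.shape)            # (32, 32, 3)
```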
It sounds like you want to read the frames from the file faster than the framerate of the encoded video file, but when you call cap.read() it blocks until enough time has elapsed since the last read? I haven’t used a cv2.VideoCapture object for reading video from a file, but maybe you can set the FPS on the video device (and override the file’s encoded FPS)? Or maybe you can set the position in the sequence explicitly and then call read()? (I’m hoping the read call would return immediately in this case.)
impossible. OpenCV doesn’t look at FPS when reading video files. there is no mechanism that would “pace” the reading because nobody wants that. I’m sure you’ll get an error code when you even try to set such a property on a VideoCapture of a file.
the only backend that is even capable of causing such shenanigans is gstreamer and even then one would need to do something explicitly and specifically wrong. I saw one instance of such an issue in the past 5+ years. nobody else ever reported that again. probably because the person back then wrote their own gstreamer pipeline and made a mistake there.
the more likely explanation is that OP generated a hypothesis that’s close enough to reality (video only appears to “play back” close to realtime), then latched onto the hypothesis without considering alternatives.
especially when you haven’t used VideoCapture on files before, please be careful about generating more unsupported hypotheses, or believing and feeding the first unsupported hypothesis. it doesn’t help anybody, only adds confusion for both of you and for any passers-by who find this later while looking for answers.