So I’ve read the documentation for d3dshot (thanks crackwitz!).
Am I correct in understanding that, in order to do object detection on a video, we would need a buffer (which captures 60-100 frames), convert those frames into numpy arrays (or whatever format), and then compare them against the existing numpy arrays from our previous images? Is that generally correct?
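To make sure I’m describing the pipeline right, here is roughly what I have in mind, just an untested sketch. The `template.npy` file, the 0.8 threshold, and the use of `cv2.matchTemplate` are my own assumptions, not something from the d3dshot docs:

```python
import time

import cv2
import numpy as np
import d3dshot

# capture frames straight into numpy arrays
d = d3dshot.create(capture_output="numpy")
d.capture(target_fps=60)   # fills d3dshot's internal frame buffer in a background thread
time.sleep(1)              # give the capture thread a moment to produce frames

frame = d.get_latest_frame()                         # H x W x 3, should be RGB
frame_bgr = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)   # OpenCV expects BGR

# "database" image of the object, saved earlier as a numpy array (assumed BGR uint8)
template = np.load("template.npy")

# compare the current frame against the stored object
result = cv2.matchTemplate(frame_bgr, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
if max_val > 0.8:          # arbitrary threshold I picked
    print("object found at", max_loc)

d.stop()
```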
What happens if the numpy array we have in our database contains the object at a certain size/resolution? Does the object in the screen capture have to be exactly the same size as the object in our “database”? Sorry if I’m using the wrong nomenclature, I’m kinda new to this library.
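If the sizes do have to match, I’m guessing the workaround is to resize the template over a range of scales and keep the best score? Something like this (again just a guess on my part, the scale range and helper name are made up):

```python
import cv2
import numpy as np

def match_multi_scale(frame_bgr, template, scales=np.linspace(0.5, 1.5, 11)):
    """Try the template at several scales and return (score, location, scale) of the best match."""
    best = (-1.0, None, 1.0)
    for s in scales:
        resized = cv2.resize(template, None, fx=s, fy=s)
        # skip scales where the resized template is bigger than the frame
        if resized.shape[0] > frame_bgr.shape[0] or resized.shape[1] > frame_bgr.shape[1]:
            continue
        result = cv2.matchTemplate(frame_bgr, resized, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best[0]:
            best = (max_val, max_loc, s)
    return best
```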