Is there a recommended framework for evaluating and tuning detection models?

Is there a recommended framework for conducting evaluation/testing and parameter tuning in OpenCV, similar to what other frameworks such as scikit-learn provide?

I’m building an object detection framework that may involve some tracking of multiple objects in order to reduce false positive detections (i.e. ignore a detection unless it is present in multiple frames, ignore it unless it is growing in the foreground, etc.). I am testing multiple background subtraction methods.
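For the multi-frame idea, something like the small persistence filter below is what I have in mind: it only confirms a detection once it has been seen in several consecutive frames. The class name and threshold here are just placeholders, not something from an existing API:

    class PersistenceFilter:
        """Report a detection only after it has been seen in n_frames consecutive frames."""

        def __init__(self, n_frames=5):  # assumed threshold, to be tuned
            self.n_frames = n_frames
            self.streak = 0

        def update(self, found_this_frame):
            # grow the streak on a hit, reset it on a miss
            self.streak = self.streak + 1 if found_this_frame else 0
            return self.streak >= self.n_frames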

I would like to loop through multiple parameter inputs, as well as different background subtractors that take different parameters, and produce a final score, most likely a precision metric. Something like this pseudocode, using scikit-learn techniques and methods:


    import cv2 as cv
    from sklearn.metrics import precision_score
    from sklearn.model_selection import ParameterGrid

    # frame-by-frame labeled ground truth: presence or absence of the detected object
    y_true = [True, False, True, False]  # ..... etc.

    def detect_something(gray, **kwargs):
        # background subtraction, find contours, draw boxes, etc.
        # should return (found, annotated_image)
        pass

    parameters = dict(
        param1=(1, 2, 5),
        param2=(500, 700, 900),
        param3=(400, 500, 600),
    )

    scores = []
    for param in ParameterGrid(parameters):
        cap = cv.VideoCapture('vtest.avi')
        # maybe initialize something like a background subtractor here
        y_pred = []
        while cap.isOpened():
            ret, frame = cap.read()
            # if the frame is read correctly, ret is True
            if not ret:
                print("Can't receive frame (stream end?). Exiting ...")
                break
            gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)

            found, img = detect_something(gray, **param)
            # record a prediction for every frame so y_pred stays aligned with y_true
            y_pred.append(bool(found))

            cv.imshow('frame', img)
            if cv.waitKey(1) == ord('q'):
                break
        cap.release()
        cv.destroyAllWindows()
        # truncate y_true in case the loop exited early
        score = precision_score(y_true[:len(y_pred)], y_pred)
        scores.append(score)

So, how do you detect objects?

Also, evaluation will certainly require ground-truth data. What’s your plan here?

Yes, I made that deliberately vague, as I’m evaluating several detection methods that use various background subtraction approaches (custom, MOG2, KNN, etc.).

My ground truth data is a hand-labeled, frame-by-frame array for the video I’m testing, where each frame is labeled True/False depending on whether my object is present. So I’m trying to see whether there are any best practices for parameter grid searching, similar to other frameworks such as scikit-learn. An example would be scikit-learn’s GridSearchCV, which uses ParameterGrid under the hood:

GridSearchCV (sklearn)
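For reference, ParameterGrid can be used on its own to enumerate every combination of a dict of parameter ranges, which is what I’m leaning on above. A minimal sketch (the parameter names are just placeholders; 'history' and 'varThreshold' happen to be MOG2 constructor arguments):

    from sklearn.model_selection import ParameterGrid

    # illustrative ranges only; the grid is not tied to any particular subtractor
    grid = ParameterGrid({
        'subtractor': ['MOG2', 'KNN'],
        'history': [100, 200, 400],
        'varThreshold': [16, 32],
    })

    for params in grid:
        print(params)  # e.g. {'history': 100, 'subtractor': 'MOG2', 'varThreshold': 16}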

ideas:

  • A video-handler decorator that takes a filename and a pointer to the ground truth data for that video as input. This decorator instantiates the background subtractor progressively with its range of parameters (e.g. cv.createBackgroundSubtractorMOG2 vs. KNN, history = (100, 200, 400), etc.).
  • A parameter grid search that runs the test video over multiple loops and then logs the precision scores; a rough sketch of what I mean follows this list.
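Everything in this sketch (make_subtractor, evaluate_on_video, the parameter ranges, the CSV logging) is just illustrative of the second point, not an existing API; it assumes a simple contour-area test on the foreground mask as the detection rule:

    import csv
    import cv2 as cv
    from sklearn.metrics import precision_score
    from sklearn.model_selection import ParameterGrid

    def make_subtractor(kind, history):
        # hypothetical factory: build a background subtractor from grid parameters
        if kind == 'MOG2':
            return cv.createBackgroundSubtractorMOG2(history=history)
        return cv.createBackgroundSubtractorKNN(history=history)

    def evaluate_on_video(filename, y_true, params, min_area=500):
        """Run one parameter combination over a video and return its precision."""
        subtractor = make_subtractor(params['subtractor'], params['history'])
        cap = cv.VideoCapture(filename)
        y_pred = []
        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                break
            mask = subtractor.apply(frame)
            contours, _ = cv.findContours(mask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
            # predict "object present" if any contour is big enough
            y_pred.append(any(cv.contourArea(c) >= min_area for c in contours))
        cap.release()
        return precision_score(y_true[:len(y_pred)], y_pred)

    # loop over the grid and log one precision score per parameter combination
    grid = ParameterGrid({'subtractor': ['MOG2', 'KNN'], 'history': [100, 200, 400]})
    y_true = [True, False, True]  # hand-labeled ground truth, one label per frame
    with open('scores.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['subtractor', 'history', 'precision'])
        for params in grid:
            writer.writerow([params['subtractor'], params['history'],
                             evaluate_on_video('vtest.avi', y_true, params)])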

So, “detect_something” is deliberately abstract, but it could be 10 different approaches, all with different background subtractors, trackers, min_area filters, etc. It is part of a framework for testing multiple approaches and logging their test scores. I’m looking for whether such a best practice or framework already exists.
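One way I’m considering to keep those approaches pluggable is a simple registry of named detection functions, so the approach name becomes just another grid parameter. The names and signatures below are made up purely for illustration:

    import cv2 as cv

    # hypothetical registry: maps an approach name to a detection callable with
    # the signature detect(frame, **params) -> bool (object present or not)
    DETECTORS = {}

    def register(name):
        def decorator(fn):
            DETECTORS[name] = fn
            return fn
        return decorator

    @register('mog2_contours')
    def detect_mog2_contours(frame, subtractor, min_area=500, **_):
        # one candidate approach: MOG2 foreground mask + minimum contour area
        mask = subtractor.apply(frame)
        contours, _ = cv.findContours(mask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
        return any(cv.contourArea(c) >= min_area for c in contours)

    # 'detector' then becomes just another entry in the parameter grid, e.g.
    # {'detector': ['mog2_contours', ...], 'min_area': [300, 500, 800]}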

To my knowledge, there is no scaffolding in OpenCV that would support you in searching through hyperparameters.