First few frames of KNN background subtraction mostly foreground

I followed the example here:

https://www.ccoderun.ca/programming/doxygen/opencv/tutorial_background_subtraction.html

I create the subtractor using:

    backSub = cv.createBackgroundSubtractorKNN(detectShadows=False)

I read from a file, loop through the frames, and pass each one to:

    fgMask = backSub.apply(frame)
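
In full, the loop is roughly the sketch below (the file name is just a placeholder for my test clip, and cv is cv2 imported under that name):

    import cv2 as cv

    cap = cv.VideoCapture("sample.mp4")   # placeholder path to my test video
    backSub = cv.createBackgroundSubtractorKNN(detectShadows=False)

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        fgMask = backSub.apply(frame)   # first few masks come back mostly white
        # ... downstream processing of fgMask ...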

My sample video has nothing moving for the first minute or so, but the first four fgMask outputs still come back marked almost entirely as foreground. After that, it works great.

I’m guessing the algorithm takes a few frames to build a stable background estimate. Is there a way to handle this correctly? Even just skipping the first N frames would be fine for me (sketched below), but I don’t know whether 4 is the right value of N. It’s obvious that the first frame can’t yield a usable result, and I searched for guidance, but I couldn’t find anything in the documentation.
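
For what it’s worth, the frame-skipping workaround I have in mind is just a warm-up pass before the loop above (continuing from that sketch); the warm-up count is exactly the number I don’t know how to choose:

    WARMUP_FRAMES = 4   # a guess; this is the value I can't justify

    for _ in range(WARMUP_FRAMES):
        ret, frame = cap.read()
        if ret:
            backSub.apply(frame)   # feed the model but discard the mask

    # ... then run the normal read/apply loop from here on ...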

I’m certain that my first few frames are good.

I also tried MOG2. That was a little worse: I got some mostly white frames at the start, it settled for a few frames, then there was another burst of mostly white frames around 40 frames in before it finally settled down and gave good results.

I’d recommend a combination of hysteresis and dead-time.

Hysteresis: watch the fraction of pixels predicted as “foreground” in your known-static scene, and mark the point at which it falls below some noise threshold.

Dead-time: treat a few frames after that point as still suspect, and discard them as well.
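
A rough sketch of both ideas in the same OpenCV/Python setup as your question. The noise threshold, the dead-time count, and the file name are placeholder values you would tune on your own footage, not anything prescribed by OpenCV:

    import cv2 as cv
    import numpy as np

    NOISE_THRESHOLD = 0.02   # assumed: under 2% foreground pixels counts as "settled"
    DEAD_TIME = 10           # assumed: discard this many extra frames after settling

    cap = cv.VideoCapture("sample.mp4")   # placeholder path
    backSub = cv.createBackgroundSubtractorKNN(detectShadows=False)

    settled = False            # hysteresis flag: flips once the mask quiets down
    frames_after_settling = 0

    while True:
        ret, frame = cap.read()
        if not ret:
            break

        fgMask = backSub.apply(frame)
        fg_fraction = np.count_nonzero(fgMask) / fgMask.size

        if not settled:
            # Hysteresis: wait for the foreground fraction of the (known static)
            # scene to fall below the noise threshold before trusting anything.
            if fg_fraction < NOISE_THRESHOLD:
                settled = True
            continue

        if frames_after_settling < DEAD_TIME:
            # Dead-time: the first few frames after settling are still suspect.
            frames_after_settling += 1
            continue

        # From here on, treat fgMask as a usable foreground mask.
        # ... your normal processing goes here ...

This avoids hard-coding a magic "skip 4 frames" number: the loop waits until the mask itself says the model has converged, then throws away a small safety margin on top.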