Slight differences in output of MOG2 Background Subtraction in Python vs Swift (iOS)

Hello,

I have developed some code in Python that uses the MOG2 background subtractor and then ported that code to Swift for use on iOS. Ideally the port would be exact and the output of the Python code would match the output of the Swift code exactly. However, I have discovered that the MOG2 background subtractor (and KNN as well) produces slightly different output in the Swift version, which in turn changes the input to the rest of my code. Visually, the Swift output appears more blurred. Am I wrong to expect the background subtractor to produce the same output in both Python and Swift? Any help would be greatly appreciated.
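
For what it's worth, the way I am quantifying "different" is by comparing saved foreground masks pixel for pixel. A rough sketch of that comparison is below; the file names are just placeholders for masks exported from each platform, not my actual paths:

    import cv2
    import numpy as np

    # Placeholder file names: a foreground mask exported from the Python run and
    # the corresponding mask exported from the Swift/iOS run.
    mask_py = cv2.imread("mask_python.png", cv2.IMREAD_GRAYSCALE)
    mask_ios = cv2.imread("mask_ios.png", cv2.IMREAD_GRAYSCALE)

    # Per-pixel absolute difference between the two masks
    diff = cv2.absdiff(mask_py, mask_ios)
    print("Differing pixels:", np.count_nonzero(diff))
    print("Max difference:", diff.max())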

In my Python environment I am on a MacBook and installed OpenCV with pip install --upgrade opencv-python; the installed opencv-python package version is 4.6.0.66.
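
For reference, this is how I confirm which build is actually loaded (and which video I/O backends it was compiled with):

    import cv2

    # Confirm which OpenCV build Python is actually loading
    print(cv2.__version__)

    # The "Video I/O" section of the build info lists which backends
    # (FFMPEG, AVFoundation, GStreamer, ...) this build uses to decode video.
    print(cv2.getBuildInformation())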

I am initializing the Python version:

    bgs_obj = cv2.createBackgroundSubtractorMOG2(history=10,
        varThreshold=25, detectShadows=False)
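
The subtractor is then applied frame by frame; a stripped-down sketch of that loop (with a placeholder video path instead of my real pipeline) looks like:

    import cv2

    bgs_obj = cv2.createBackgroundSubtractorMOG2(history=10,
        varThreshold=25, detectShadows=False)

    # "input.mp4" is a placeholder; in my code the frames come from my own pipeline.
    video = cv2.VideoCapture("input.mp4")
    while True:
        ret, frame = video.read()
        if not ret:
            break
        # Foreground mask for this frame; this is what feeds the rest of my code.
        fg_mask = bgs_obj.apply(frame)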

For iOS/Swift, I pulled the source code from the Git repo and compiled it into the OpenCV framework, version 4.6.0.
I am initializing the Swift version:

    var backgroundSubtractor = Video.createBackgroundSubtractorMOG2(
        history: 10,
        varThreshold: 25.0,
        detectShadows: false
    )

Show us the data. Maybe your phone screen is blurring the data. Don’t blame OpenCV until you’ve narrowed it down.

Hi Crackwitz, thanks for the response. You were right, it was too early to blame MOG2 specifically. What I have found is that loading a video in Python and in Swift produces different images. Even different versions of OpenCV in Python produce slightly different images. For example:

If I use the OpenCV build I compiled from source on macOS (4.6.0-dev), load the first frame of the video, convert it to grayscale, and calculate the average pixel value, I get Average: 122.1070775462963. If I run the same exact code on the same exact video clip using the version installed from pip, I get Average: 120.2203303433642.

Code I am using:

    import cv2
    import numpy as np

    # Open the video file
    video = cv2.VideoCapture(test_case.video_fpath)
    # Check that the video file opened successfully
    if not video.isOpened():
        print("Error opening video stream or file")
        return output

    # Read the first frame, convert it to grayscale, and report the mean pixel value
    ret, frame = video.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print("Average: " + str(np.mean(gray)))