Slight differences in output of MOG2 Background Subtraction in Python vs Swift (iOS)

Hi Crackwitz, thanks for the response. You were right: it was too early to blame MOG2 specifically. What I have found is that loading the same video in Python and Swift produces different decoded images. Even different versions of OpenCV in Python produce slightly different images. For example:

Using the OpenCV build I compiled from source on macOS (4.6.0-dev), I load the first frame, convert it to grayscale, and compute the mean pixel value, and I get Average: 122.1070775462963. If I run the exact same code on the exact same video clip with the version installed from pip, I get: Average: 120.2203303433642

Code I am using:

    import cv2
    import numpy as np

    # Open the video file
    video = cv2.VideoCapture(test_case.video_fpath)
    if not video.isOpened():
        print("Error opening video stream or file")
        return output

    # Read the first frame and verify it was decoded successfully
    ret, frame = video.read()
    if not ret:
        print("Error reading first frame")
        return output

    # Convert to grayscale and report the mean pixel value
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print("Average: " + str(np.mean(gray)))