I developed some code in Python that uses the MOG2 background subtractor and then ported that code to Swift for use on iOS. Ideally the port would be exact, and the output of the Python code would match the output of the Swift code exactly. However, I have discovered that the MOG2 background subtractor (and KNN as well) produces slightly different output in the Swift version, which changes the input to the rest of my code. Visually, the Swift output appears more blurred. Am I wrong to expect the background subtractor to produce the same output on both Python and Swift? Any help would be greatly appreciated.
In my Python environment I am on a MacBook and installed OpenCV with pip install --upgrade opencv-python; Python version: 126.96.36.199
I am initializing the Python version:
bgs_obj = cv2.createBackgroundSubtractorMOG2(history=10, varThreshold=25, detectShadows=False)
For iOS/Swift, I pulled the source code from the git repo and compiled it into the OpenCV framework, version 4.6.0.
I am initializing the Swift version:
var backgroundSubtractor = Video.createBackgroundSubtractorMOG2(history: 10, varThreshold: 25.0, detectShadows: false)
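For reference, this is roughly how I quantify the mismatch between the two platforms' masks once both are exported (the arrays below are placeholder stand-ins for the real Python-side and Swift-side masks):

```python
import numpy as np

# Placeholder masks standing in for masks loaded from each platform.
py_mask = np.zeros((64, 64), dtype=np.uint8)     # Python-side mask
swift_mask = np.zeros((64, 64), dtype=np.uint8)  # Swift-side mask
swift_mask[10:20, 10:20] = 255                   # pretend the port disagrees here

# Count disagreeing pixels and express them as a fraction of the frame.
differing = np.count_nonzero(py_mask != swift_mask)
fraction = differing / py_mask.size
```

In my real data the fraction is small but nonzero, which is what changes the input to the rest of my pipeline.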