Generally: Are all background subtraction algorithms essentially different ways to ‘learn’ the background image, so that once that ‘Golden Background Image’ is established they effectively perform a straight pixel subtraction against that single learned, ordinary RGB image? Or do they also apply algorithmic logic in the subtraction step itself, such that the model is not ‘just’ a single ordinary RGB image?
Specifically: Put another way, if I were to save the ‘GetBackgroundImage’ function result as a typical 3-channel RGB ‘single image’ (or whatever type it actually is), and then feed it back into a fresh background subtraction object with a 100% learning rate, would I end up with an identical background subtraction object producing identical results to one trained on the original ‘n’ images?