General question on OpenCV Background Subtraction implementations

Generally: Are all the background subtraction algorithms essentially different ways to ‘learn’ a background image, so that once that ‘golden background image’ is established they effectively do a straight pixel subtraction against that single learned RGB image? Or do they also apply algorithmic logic at the subtraction step itself, such that the model is not ‘just’ a single ordinary RGB image?

Specifically: Put another way, if I were to save the result of ‘getBackgroundImage’ as a typical 3-channel RGB image (or whatever type it is) and then load it back into a background subtraction object with a 100% learning rate, would I end up with an identical background subtraction object producing identical results to one trained on the original ‘n’ images?
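
For concreteness, this is roughly the experiment I have in mind. It is only a sketch, assuming the Python bindings and MOG2; the image file names are placeholders:

```python
import cv2 as cv
import numpy as np

# Placeholder paths for the 'n' training frames and a test frame.
frames = [cv.imread(p) for p in ["bg_000.png", "bg_001.png", "bg_002.png"]]
test = cv.imread("test_frame.png")

# Model A: trained normally from the n frames.
sub_a = cv.createBackgroundSubtractorMOG2(detectShadows=False)
for f in frames:
    sub_a.apply(f)

# Save the background image the model exposes.
cv.imwrite("background.png", sub_a.getBackgroundImage())

# Model B: a fresh subtractor fed only the saved background image,
# with learningRate=1.0 (the "100% learning rate" idea above).
sub_b = cv.createBackgroundSubtractorMOG2(detectShadows=False)
sub_b.apply(cv.imread("background.png"), learningRate=1.0)

# Compare the two models on a test frame without updating them
# (learningRate=0 means "do not adapt the model on this frame").
mask_a = sub_a.apply(test, learningRate=0)
mask_b = sub_b.apply(test, learningRate=0)
print("identical masks:", np.array_equal(mask_a, mask_b))
```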

[quote=“nicholas, post:1, topic:19168”]
Are all the background subtraction algorithms essentially different ways to ‘learn’ a background image
[/quote]

No, they’re all about learning a background Model; any background Image you get from it is purely synthetic
(nice for human visualization, but useless for the actual process or for serialization).

No matter what you do with that image, you still MUST (re)train the full model.
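
A minimal sketch of what “retrain the full model” means in practice (again assuming the Python bindings and MOG2; frame paths are placeholders): you replay the original frames through apply(), because the learned per-pixel statistics live only inside the subtractor object, not in the image it can render for you.

```python
import cv2 as cv

# Placeholder paths for the original training frames.
frame_paths = ["frame_000.png", "frame_001.png", "frame_002.png"]

subtractor = cv.createBackgroundSubtractorMOG2(detectShadows=False)
for p in frame_paths:
    frame = cv.imread(p)
    subtractor.apply(frame)  # each call updates the internal per-pixel model

# getBackgroundImage() only renders a picture of that model for inspection;
# it does not contain the model itself.
cv.imwrite("background_preview.png", subtractor.getBackgroundImage())
```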