I have a video from a relatively static camera. I let a user pick certain frames from the video for further processing, such as blending them (custom logic, not the point of this question). The camera can move slightly between these frames, so before letting the user blend them I need to align the frames so that they all share the same view (an aligned background). I believe this is essentially what video stabilization does, and I'm wondering how I can achieve it with OpenCV, since I'm already using OpenCV for image processing.
For small transformations, use ECC refinement. Video stabilization itself uses other approaches as well, such as relying on IMU sensor data.
Thank you for your quick reply, and sorry it took me a while to respond. My transformations are small; from frame to frame the change is relatively small (1-5%). I have tried all the motion type transformations and they all gave unsatisfactory results: the aligned image was aligned incorrectly, often shifted in the opposite direction of the expected alignment. I used the sample code provided in the article. The reason is probably that my video frames capture not only the slightly moving static background but also a moving object (a person) that can take up a significant part of the frame (30-40%). Is there a better way to align these kinds of frames? Should I perhaps detect the moving object first and mask it out before aligning the frames?
I'm going to have to see the data.
Here are two frames from a single video, with a slight displacement and a moving object (me).
I was able to use feature matching to align the frames of my video. It works really well on the data I tested, even without ECC refinement, and I'm sure adding ECC on top would give even better results. Thanks for the help!