Global adjustment to avoid accumulating error in image stitching

I am working on an image stitching app. The camera is fixed along the Z axis, so there is only movement in X and Y, and the scale of the input images is constant.

I am using feature matching (ORB) to get the matches and, from them, the shifts of the camera over time.

I am getting very good results, but some misalignments appear after scanning for a while.

I think this happens because every pairwise match carries a small error, and those errors accumulate into the global shifts over time.
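The effect is easy to reproduce: each pairwise measurement is only slightly off, but integrating the shifts turns those errors into a random walk, so the global position drifts even though every individual match looks good. A minimal numpy sketch with synthetic numbers (noise level and step size are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
n_frames = 500
true_step = np.array([10.0, 0.0])          # camera moves 10 px in x per frame

# Each measured shift is the true step plus small zero-mean noise (~0.2 px).
measured = true_step + rng.normal(0.0, 0.2, size=(n_frames, 2))

true_pos = np.cumsum(np.tile(true_step, (n_frames, 1)), axis=0)
est_pos = np.cumsum(measured, axis=0)       # integrating pairwise shifts

drift = np.linalg.norm(est_pos - true_pos, axis=1)
# Per-pair error stays ~0.2 px, but the integrated position drifts far more.
print(f"final drift after {n_frames} frames: {drift[-1]:.2f} px")
```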

I know I need some kind of global refinement of the matches to correct this accumulated error, but I can't figure out how to do it.

Bundle adjustment looks promising, but I can't see how to apply it in this context: I don't have a calibrated camera, and I only have a single camera and 2D points.

How can I correct my matches globally so that everything fits as expected, without accumulating error?


I’m not sure I can help but I’d like to ask: have you calibrated your camera intrinsics, including lens distortion?

if the scene is flat, you could get away with affine transformations or less, instead of perspective transformations.

you should use the stitching module and its pipeline. it’s very advanced. more so than doing feature extraction, matching, and compositing “manually”.

bundle adjustment is a term from Structure-from-Motion. you aren’t doing a 3D reconstruction so the term in its specific meaning shouldn’t apply here. you do seem to need some kind of optimization that adjusts all pictures’ transformations.
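in the translation-only case, that optimization can be plain linear least squares: the unknowns are the global (x, y) positions of the frames, and each measured pairwise shift (including matches between non-consecutive frames that overlap) is one constraint. a sketch, with an illustrative helper name `global_adjust`:

```python
import numpy as np

def global_adjust(n_frames, pair_shifts):
    """Least-squares global positions from pairwise shift measurements.
    pair_shifts: list of (i, j, dx, dy) meaning pos[j] - pos[i] ~= (dx, dy).
    Frame 0 is fixed at the origin to remove the gauge freedom."""
    rows = len(pair_shifts) + 1
    A = np.zeros((rows, n_frames))
    bx = np.zeros(rows)
    by = np.zeros(rows)
    for r, (i, j, dx, dy) in enumerate(pair_shifts):
        A[r, i], A[r, j] = -1.0, 1.0
        bx[r], by[r] = dx, dy
    A[-1, 0] = 1.0          # anchor frame 0 at (0, 0)
    x = np.linalg.lstsq(A, bx, rcond=None)[0]
    y = np.linalg.lstsq(A, by, rcond=None)[0]
    return np.stack([x, y], axis=1)
```

note that the constraints between non-consecutive frames are what actually removes the drift; with only consecutive pairs, the solution just reproduces the accumulated chain.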

Thanks for your answer. I am not using the stitching module pipeline because I need more "control" over the displacements, to move the camera in real time at more than 30 FPS.

Yes, it seems I need some optimization that adjusts my transformations and avoids propagating the error.

look into Simultaneous localization and mapping (SLAM). they have similar problems when fusing point clouds. not the same but related.
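one SLAM idea transfers directly: when the scan revisits an area, re-match against the earlier frame (a "loop closure") and spread the resulting residual over the whole trajectory instead of only fixing the last frame. a naive sketch that distributes the residual linearly along the chain (a simplified stand-in for a real pose-graph optimizer; the function name is illustrative):

```python
import numpy as np

def distribute_loop_closure(positions, anchor_idx, anchor_pos):
    """positions: (N, 2) integrated frame positions.
    anchor_idx / anchor_pos: a frame re-matched against a known old location.
    The residual is spread linearly over frames 0..anchor_idx; later
    frames get the full correction."""
    positions = np.asarray(positions, dtype=float).copy()
    residual = anchor_pos - positions[anchor_idx]
    weights = np.linspace(0.0, 1.0, anchor_idx + 1)
    positions[: anchor_idx + 1] += weights[:, None] * residual
    positions[anchor_idx + 1:] += residual
    return positions
```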
