Possibility of using cv.findHomography and cv.warpPerspective in the stitcher_detail pipeline

Hello OpenCV community!

Recently, I have run into an intriguing problem whereby the stitcher_detail pipeline is able to stitch together images of different scale, but not when there is a difference in rotation between the overlapping images. I supplied two images with a 90-degree rotational difference between them, and this error popped up: “(-4:Insufficient memory) Failed to allocate 20002997842208 bytes in function ‘OutOfMemoryError’”.

The error occurred on this line of code:

corner, image_wp = warper.warp(images[idx], K, cameras[idx].R, cv.INTER_LANCZOS4, cv.BORDER_REFLECT)

I suspect the problem lies with the transformation matrix that was generated. All this while I have been using a homography matrix to stitch two images together, but the camera intrinsic matrix in this pipeline is rather complicated and may be corrupted. Use case: I am trying to stitch 2D robot maps together, and most of the time it is just the scale and the rotation that differ between the images.

Would converting the stitcher_detail pipeline to accommodate cv.findHomography be feasible? The only issue I see is how I am supposed to put everything into a single coordinate frame like the pipeline does. I am also not sure how to extract the corners now that cv.warpPerspective is being used instead of cv.PyRotationWarper.warp.
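My rough idea so far for recovering the corners is to push the source image's corner points through the homography with cv.perspectiveTransform (just a sketch, untested; warp_with_corner is a name I made up):

import cv2 as cv
import numpy as np

# sketch: given a 3x3 homography H mapping img into the reference frame,
# recover the same (corner, image) pair that warper.warp() returns
def warp_with_corner(img, H):
    h, w = img.shape[:2]
    # the four corners of the source image, shaped for perspectiveTransform
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    warped = cv.perspectiveTransform(corners, H)
    x_min, y_min = np.int32(warped.min(axis=0).ravel())
    x_max, y_max = np.int32(warped.max(axis=0).ravel())
    # shift the result so it starts at (0, 0) instead of being clipped
    T = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], dtype=np.float64)
    out = cv.warpPerspective(img, T @ H, (int(x_max - x_min), int(y_max - y_min)))
    return (x_min, y_min), out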

Since there is little to no documentation for the functions in the stitcher_detail pipeline, it seems pretty infeasible to migrate to a ‘newer’ form of stitching similar to the two-image stitcher code in (OpenCV panorama stitching - PyImageSearch).

you don’t need a homography, not even a full affine, just euclidean/similarity

the stitching module should be able to handle that. its SCANS mode claims to do a full affine transformation.
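untested sketch of the high-level API in SCANS mode (file names are placeholders):

import cv2 as cv

# SCANS mode estimates affine transforms instead of a rotating-camera model
imgs = [cv.imread("map1.png"), cv.imread("map2.png")]  # placeholder paths
stitcher = cv.Stitcher_create(cv.Stitcher_SCANS)
status, pano = stitcher.stitch(imgs)
if status == cv.Stitcher_OK:
    cv.imwrite("stitched.png", pano)
else:
    print("stitching failed, status:", status)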

if you have issues with that, please present a minimal reproducible example.

the bad thing about pyimagesearch is that these articles show low-hanging fruit with hardly any theoretical background. they focus only on the code, and they don’t develop the code but rather walk through a finished script line by line, and the script contains a lot of clutter that distracts from the core ideas. questionable teaching. I guess the primary goal isn’t teaching. it’s a blog.

the bad thing about this article, for the ton of newbies (his primary audience) who want “real” stitching, is that the “manual” approach using warpPerspective can’t handle panoramas that span nearly 180 degrees or more. it’s tolerable for composing “wide”-angle (under 180 degrees) shots. it also doesn’t show how to use the stitching module, which is capable of those things. and it’s from 2016.

if you wanted to ditch the stitching module and DIY instead, you would need feature matching, estimateAffinePartial2D, and warpAffine
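roughly like this (untested sketch; file names and ORB parameters are placeholders, and the output canvas is naively sized to the first image):

import cv2 as cv
import numpy as np

img1 = cv.imread("map1.png", cv.IMREAD_GRAYSCALE)  # placeholder file names
img2 = cv.imread("map2.png", cv.IMREAD_GRAYSCALE)

# feature detection and matching
orb = cv.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# point correspondences: img2 -> img1
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# similarity transform (rotation + uniform scale + translation), RANSAC-filtered
M, inliers = cv.estimateAffinePartial2D(src, dst, method=cv.RANSAC)

# warp img2 into img1's coordinate frame
h, w = img1.shape[:2]
aligned = cv.warpAffine(img2, M, (w, h))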