Hello OpenCV community!
Recently I ran into an intriguing problem: the stitching_detailed pipeline can stitch images that differ in scale, but fails when there is a rotation difference between the overlapping images. I supplied two images with a 90-degree rotational difference between them, and this error popped out: “(-4:Insufficient memory) Failed to allocate 20002997842208 bytes in function ‘OutOfMemoryError’”.
The error occurred on this line of code:
corner, image_wp = warper.warp(images[idx], K, cameras[idx].R, cv.INTER_LANCZOS4, cv.BORDER_REFLECT)
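For reference, this is a rough sketch of how the predicted warped size could be checked just before that call, assuming the usual stitching_detailed.py variable names (warper, images, cameras, idx). Since warpRoi only computes the bounding box, a bad K or R should show up here as an absurd width/height rather than an allocation failure:

```python
import numpy as np

h, w = images[idx].shape[:2]
K = cameras[idx].K().astype(np.float32)
R = cameras[idx].R
# Predict the bounding box of the warped image without allocating it.
x, y, roi_w, roi_h = warper.warpRoi((w, h), K, R)
print("K =\n", K, "\nR =\n", R, "\npredicted warped size:", roi_w, "x", roi_h)
```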
I was thinking that the problem lies in the transformation matrices being generated. All this while I have been using a homography matrix to stitch two images together, but the camera intrinsic matrix estimated here is rather complicated and may be corrupted. Use case: I am trying to stitch 2D robot maps together, and most of the time it is just the scale and the rotation that differ between the images.
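To make the “may be corrupted” suspicion concrete, I would expect the estimated intrinsics to look like an ordinary pinhole K and the estimated R to be a proper rotation; the matrix below is just an illustration with made-up values, not output from my run:

```python
import numpy as np

# Roughly what a sane estimated intrinsic matrix should look like for a
# 1000x800 image: focal lengths on the diagonal, principal point near the centre.
K_sane = np.array([[800.0,   0.0, 500.0],
                   [  0.0, 800.0, 400.0],
                   [  0.0,   0.0,   1.0]], dtype=np.float32)

# And the estimated R should be close to a true rotation matrix:
# R @ R.T ≈ I and np.linalg.det(R) ≈ 1.
```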
Would converting the stitching_detailed pipeline to use cv.findHomography be feasible? The only issue I see is how I would put everything into a single coordinate frame the way the pipeline does. I am not too sure how to extract the corners once cv.warpPerspective is used instead of cv.PyRotationWarper.warp.
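To make the question concrete, this is roughly what I imagine the cv.findHomography route would look like for two images; img1, img2, pts1 and pts2 (already-matched keypoint coordinates) are just placeholder names, and I am not sure this is the right way to recover the corners that the rotation warper normally returns:

```python
import cv2 as cv
import numpy as np

# H maps img2 coordinates into img1's frame.
H, mask = cv.findHomography(pts2, pts1, cv.RANSAC, 5.0)

h1, w1 = img1.shape[:2]
h2, w2 = img2.shape[:2]

# Footprint of each image in img1's frame -- this would replace the
# 'corner' values the rotation warper returns.
corners1 = np.float32([[0, 0], [w1, 0], [w1, h1], [0, h1]]).reshape(-1, 1, 2)
corners2 = np.float32([[0, 0], [w2, 0], [w2, h2], [0, h2]]).reshape(-1, 1, 2)
all_corners = np.concatenate((corners1, cv.perspectiveTransform(corners2, H)))

# Bounding box of both footprints = the single coordinate frame.
x_min, y_min = np.floor(all_corners.min(axis=0).ravel()).astype(int)
x_max, y_max = np.ceil(all_corners.max(axis=0).ravel()).astype(int)

# Shift everything so the top-left corner of the panorama is (0, 0).
pano_w, pano_h = int(x_max - x_min), int(y_max - y_min)
T = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], dtype=np.float64)
pano = cv.warpPerspective(img2, T @ H, (pano_w, pano_h))
pano[-y_min:h1 - y_min, -x_min:w1 - x_min] = img1
```

Would this translate-and-warp approach be a reasonable substitute for the corners/sizes bookkeeping in the original pipeline?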
Since there is little to no documentation for the functions in the stitching_detailed pipeline, it seems pretty infeasible to migrate to a ‘newer’ form of two-image stitching like the one described here (OpenCV panorama stitching - PyImageSearch).