Hi, I’m trying to do real-time video stitching.
Currently I have an issue where the 2nd image wraps and flips from the right edge over to the left edge of the canvas. I expect it to simply extend the canvas to the right instead.
The image above is the stitched output; the image below shows the matched features.
The stitched image may lag the matching view by a few frames, as the two are processed in separate threads.
Unfortunately my code is very messy and I most likely cannot share much of it, so I will explain my approach instead (with a simplified sketch after the list):
- Using CUDA ORB to detect features
- Using CUDA BFMatcher (NORM_HAMMING) with knnMatch
- Filter the matches further with cv::findFundamentalMat
- Further filter with cv::findHomography
- Warp the 2nd image with cv::cuda::warpPerspective
- Copy the warped image onto the 1st image
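
To make the approach concrete, here is a stripped-down sketch of the pipeline in OpenCV C++ (not my actual code, which I cannot post). The function name, feature count, ratio-test threshold, RANSAC thresholds, 2x-width canvas and the final paste order are placeholder choices for illustration only:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/cudafeatures2d.hpp>
#include <opencv2/cudaimgproc.hpp>
#include <opencv2/cudawarping.hpp>

// Stitch gpuImg2 onto gpuImg1, extending the canvas to the right.
// All numeric parameters below are placeholders, not my real settings.
cv::Mat stitchPair(const cv::cuda::GpuMat& gpuImg1, const cv::cuda::GpuMat& gpuImg2)
{
    // 1) CUDA ORB features on grayscale copies of both frames
    cv::Ptr<cv::cuda::ORB> orb = cv::cuda::ORB::create(4000);
    cv::cuda::GpuMat gray1, gray2, gpuKeys1, gpuKeys2, desc1, desc2;
    cv::cuda::cvtColor(gpuImg1, gray1, cv::COLOR_BGR2GRAY);
    cv::cuda::cvtColor(gpuImg2, gray2, cv::COLOR_BGR2GRAY);
    orb->detectAndComputeAsync(gray1, cv::noArray(), gpuKeys1, desc1);
    orb->detectAndComputeAsync(gray2, cv::noArray(), gpuKeys2, desc2);
    std::vector<cv::KeyPoint> keys1, keys2;
    orb->convert(gpuKeys1, keys1);
    orb->convert(gpuKeys2, keys2);

    // 2) CUDA BFMatcher (Hamming) + knnMatch, filtered by Lowe's ratio test
    cv::Ptr<cv::cuda::DescriptorMatcher> matcher =
        cv::cuda::DescriptorMatcher::createBFMatcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher->knnMatch(desc2, desc1, knn, 2);   // query = 2nd image, train = 1st image
    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : knn) {
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) {
            pts2.push_back(keys2[m[0].queryIdx].pt);
            pts1.push_back(keys1[m[0].trainIdx].pt);
        }
    }

    // 3) Keep only inliers of the fundamental matrix
    std::vector<uchar> fMask;
    cv::findFundamentalMat(pts2, pts1, cv::FM_RANSAC, 3.0, 0.99, fMask);
    std::vector<cv::Point2f> in1, in2;
    for (size_t i = 0; i < fMask.size(); ++i)
        if (fMask[i]) { in1.push_back(pts1[i]); in2.push_back(pts2[i]); }
    if (in2.size() < 4)
        return cv::Mat();   // not enough inliers for a homography

    // 4) Homography mapping 2nd-image points into the 1st image's frame
    cv::Mat H = cv::findHomography(in2, in1, cv::RANSAC, 3.0);

    // 5) Warp the 2nd image onto a canvas extended to the right of the 1st
    cv::cuda::GpuMat warped;
    cv::Size canvasSize(gpuImg1.cols * 2, gpuImg1.rows);
    cv::cuda::warpPerspective(gpuImg2, warped, H, canvasSize);

    // 6) Composite: paste the 1st image over the left part of the canvas.
    //    (In my real code the copy/blend direction differs; this is only
    //    to show the structure. The wrap-around shows up in this canvas.)
    cv::cuda::GpuMat leftRoi = warped(cv::Rect(0, 0, gpuImg1.cols, gpuImg1.rows));
    gpuImg1.copyTo(leftRoi);

    cv::Mat result;
    warped.download(result);
    return result;
}
```

The wrap/flip I described shows up in the canvas produced by the last two steps: instead of the warped 2nd image extending to the right of the 1st, part of it reappears at the left edge.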
I would appreciate it if anyone could point me in the right direction or tell me what I am missing.