I stitched two images together in scans mode. To better understand the process, I based my work on `stitching_detailed.cpp`. My question is how to map points from the original images to the stitched image, and vice versa. The information needed to reproduce the issue is provided below.
- I pass the following arguments when I run `stitching_detailed.cpp` (the configuration is intended to simulate `scans` mode, since the default is `panorama`, I assume):

  ```
  --features surf --matcher affine --estimator affine --match_conf 0.6 --conf_thresh 0.6 --ba affine --ba_refine_mask xxxxx --wave_correct no --warp affine 0.JPG 1.JPG
  ```
- I use the snippet below to map a point from the stitched image back to the first original image:

  ```cpp
  cv::Point2f p(300, 300);              // an example point from the stitched image
  cv::Mat K;
  cameras[0].K().convertTo(K, CV_32F);  // intrinsics of the first image
  cv::Point2f p2 = warper->warpPointBackward(p, K, cameras[0].R);  // warp the point back to the first input image
  p2 = cv::Point2f(p2.x - corners[0].x, p2.y - corners[0].y);      // subtract the corner offset of the first image
  ```
However, the result is not what I expected: the mapped point does not land on the same feature in the original image and the stitched image. The images are attached below for review; as I couldn't upload multiple images, I concatenated them into one.