The reconstruction of my scene looks very strange.
The idea is to create 3 stereo pairs with 3 cameras.
Each camera is rotated by 120° relative to the next and tilted 71° down from the XY-plane toward the ground (all cameras point at the same object centre).
So I calculated the rotation matrices as:
R1:
[-1, -2.5351817e-06, 0;
2.5351817e-06, -1, 0;
0, 0, 1]
R2:
[0.50000072, 0.86602503, 0;
-0.86602503, 0.50000072, 0;
0, 0, 1]
R3:
[0.50000072, 0.81884217, 0.28195187;
0.86602503, -0.47275963, -0.16278531;
0, 0.32557014, -0.9455179]
and the translation vectors to:
tvec1:{0,293,-82.11}
tvec2:{-253.75,-146.5,-82.11}
tvec3:{253.75,-146.5,-82.11}
The origin is located at the centre of the mounting.
After calculating the extrinsic and intrinsic parameters, the following steps are performed:
1. Calculating the projection matrices for all 3 positions with:
cv::sfm::projectionFromKRt(K, rotation3, tvec_3, P3);
2. Undistort the images with:
cv::undistort(all_images[0], Img_1_undist, K, distCoeff, cv::noArray());
and detect keypoints in every undistorted image with:
akaze->detectAndCompute(undist_images[i], cv::noArray(), keypoints.at(i), descriptors.at(i));
3. Matching the keypoints for every image pair:
cv::BFMatcher matcher = cv::BFMatcher(cv::NORM_HAMMING);
std::vector<cv::DMatch> match1 = matchWithRatioTest(matcher, descriptors[0], descriptors[1], Match_Ration_Threshold);
std::vector<cv::DMatch> match2 = matchWithRatioTest(matcher, descriptors[1], descriptors[2], Match_Ration_Threshold);
std::vector<cv::DMatch> match3 = matchWithRatioTest(matcher, descriptors[2], descriptors[0], Match_Ration_Threshold);
// Match in the reverse direction as well, for the reciprocal check in step 4
std::vector<cv::DMatch> match1Rcp = matchWithRatioTest(matcher, descriptors[1], descriptors[0], Match_Ration_Threshold);
std::vector<cv::DMatch> match2Rcp = matchWithRatioTest(matcher, descriptors[2], descriptors[1], Match_Ration_Threshold);
std::vector<cv::DMatch> match3Rcp = matchWithRatioTest(matcher, descriptors[0], descriptors[2], Match_Ration_Threshold);
4. Keep only reciprocal matches (cross-check):
for (const cv::DMatch& dmrecip : match1Rcp)
{
    for (const cv::DMatch& dm : match1)
    {
        // Keep the match only if it was also found in the opposite direction
        if (dmrecip.queryIdx == dm.trainIdx && dmrecip.trainIdx == dm.queryIdx)
        {
            merged1and2.push_back(dm);
            txt_report << dm.queryIdx << " , " << dm.trainIdx << "\n";
            break;
        }
    }
}
Showing the matched keypoints with:
cv::drawMatches(img_1, keypoints.at(0), img_2, keypoints.at(1), merged1and2, img_matches1to2);
The Result looks good to me.
5. Collect the matched keypoints from the DMatch indices:
for (size_t i = 0; i != merged1and2.size(); i++)
{
    // queryIdx/trainIdx index directly into the keypoint vectors of the
    // first and second image, so no inner search loop is needed.
    keypoints_matched_12.push_back(keypoints[0][merged1and2[i].queryIdx]);
    keypoints_matched_21.push_back(keypoints[1][merged1and2[i].trainIdx]);
}
6. Converting the keypoints to cv::Mat:
for (uint32_t i = 0; i < keypoints_matched_12.size(); i++) // '<', not '<=': the last valid index is size()-1
{
    pointmatrix12.at<float>(0, i) = keypoints_matched_12[i].pt.x;
    pointmatrix12.at<float>(1, i) = keypoints_matched_12[i].pt.y;
}
7. Triangulate the points:
cv::triangulatePoints(P1,P2,pointmatrix12,pointmatrix21,points3D_12);
cv::triangulatePoints(P2,P3,pointmatrix23,pointmatrix32,points3D_23);
cv::triangulatePoints(P3,P1,pointmatrix31,pointmatrix13,points3D_31);
8. Write to a PLY file:
for (int i = 0; i < points3D_23.size().width; i++) // '<', not '<=': the last valid column is width-1
{
    ply_file_3d << points3D_23.at<float>(0,i) / points3D_23.at<float>(3,i) << " "
                << points3D_23.at<float>(1,i) / points3D_23.at<float>(3,i) << " "
                << points3D_23.at<float>(2,i) / points3D_23.at<float>(3,i) << "\n";
}