I have different videos and want to find out which feature detector and descriptor combination would be best for my use case. Is there a way to compare the quality of my detected keypoints? It may be important to know that I am matching first and then filtering inliers with the mask returned by the recoverPose function.
Can you show us what you are doing here?
I will show everything I do here:
I am working on stereo visual odometry. This means I detect keypoints in a left and a right image and match them. I then take the keypoints of those matches and match the current left image with the previous left image, and the current right image with the previous right image. The resulting matches are used to find the essential matrix between the current and previous left images, as well as between the current and previous right images.
The essential matrix is used to recover the pose. Next I convert the homogeneous coordinates to Cartesian. The mask I get from the recoverPose function is then used to filter the inliers among the 3D points.
//1. color conversion
cv::Mat grayL, grayR;
cv::cvtColor(currImageLeft, grayL, cv::COLOR_RGB2GRAY);
cv::cvtColor(currImageRight, grayR, cv::COLOR_RGB2GRAY);
//2. remapping
cv::Mat remappedL, remappedR;
cv::remap(grayL, remappedL, leftStereoMap1, leftStereoMap2, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar());
cv::remap(grayR, remappedR, rightStereoMap1, rightStereoMap2, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar());
//3. feature detection and description
//these methods just call the chosen detector or descriptor based on my choice
std::vector<cv::KeyPoint> keyPointVectorLeft, keyPointVectorRight;
cv::Mat descriptorLeft, descriptorRight;
detectFeatures(remappedL, remappedR, keyPointVectorLeft, keyPointVectorRight, selectedFeatureDetector);
describeFeatures(remappedL, remappedR, descriptorLeft, descriptorRight, keyPointVectorLeft, keyPointVectorRight, selectedFeatureDescriptor);
//4. Matching (left & right, left(t-1) & left(t), right(t-1) & right(t))
//Here I will only show how I match the descriptors.
//This matching is for ORB, BRIEF and BRISK
cv::Ptr<cv::BFMatcher> matcher = cv::BFMatcher::create(cv::NORM_HAMMING, true);
matcher->knnMatch(descriptorOne, descriptorTwo, matches, 1);
for(size_t i = 0; i < matches.size(); i++)
{
    //with crossCheck enabled, knnMatch can return empty inner vectors,
    //so check for that instead of looking at the distance
    if(!matches[i].empty())
        goodMatches.push_back(matches[i][0]);
}
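For float descriptors such as SIFT I would use Lowe's ratio test (knnMatch with k = 2) instead of cross-checking. A minimal, OpenCV-free sketch of just the acceptance criterion, with the two candidate distances represented as plain floats (the function name and the 0.75 ratio are my own illustrative choices, not from the code above):

```cpp
#include <cassert>

// Lowe's ratio test: keep a match only if the best distance is clearly
// smaller than the second-best distance; 0.75 is a commonly used ratio.
bool passesRatioTest(float bestDist, float secondBestDist, float ratio = 0.75f)
{
    return bestDist < ratio * secondBestDist;
}
```

A distinctive match (e.g. distances 10 vs. 40) passes, while an ambiguous one (30 vs. 31) is rejected.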
//5. find essential matrix
cv::Mat E, mask;
E = cv::findEssentialMat(prevMatches, currMatches, camMat, cv::RANSAC, 0.999, 1.0, mask);
//6. recover pose
cv::Mat R, t, points3D_homogeneous;
int inliers = cv::recoverPose(E, prevMatchesGood, currMatchesGood, camMat, R, t, 50, mask, points3D_homogeneous);
//7. convert to cartesian
cv::Point3f point;
std::vector<cv::Point3f> points3D;
for(int i = 0; i < points3D_homogeneous.cols; i++)
{
    point.x = points3D_homogeneous.at<double>(0, i) / points3D_homogeneous.at<double>(3, i);
    point.y = points3D_homogeneous.at<double>(1, i) / points3D_homogeneous.at<double>(3, i);
    point.z = points3D_homogeneous.at<double>(2, i) / points3D_homogeneous.at<double>(3, i);
    points3D.push_back(point);
}
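The division above assumes the w component is never zero. As a sketch in plain C++ (the function name and the tolerance are my own assumptions), the conversion with a guard against points at infinity could look like this:

```cpp
#include <array>
#include <cmath>
#include <optional>

// Convert one homogeneous point (x, y, z, w) to Cartesian coordinates.
// Returns no value when w is numerically zero, i.e. a point at infinity.
std::optional<std::array<double, 3>> toCartesian(double x, double y,
                                                 double z, double w)
{
    const double eps = 1e-12; // tolerance for "w is zero" (assumed value)
    if (std::fabs(w) < eps)
        return std::nullopt;
    return std::array<double, 3>{x / w, y / w, z / w};
}
```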
//8. filter inliers
for(int i = 0; i < int(keyPointsPrevious.size()); i++){
    //the mask is CV_8U, so read it as uchar rather than bool
    if(mask.at<uchar>(i) != 0){
        prevMatchesIn.push_back(keyPointsPrevious[i]);
        currMatchesIn.push_back(keyPointsCurrent[i]);
        points3DInliers.push_back(points3D[i]);
    }
}
My goal is to calculate the camera position from the 3D points I find, using solvePnPRansac. Before that I have to decide which combination of detector and descriptor to use. For this purpose I wanted to compare the quality of my keypoints after the matching in step 4. Do you know a good way to compare the quality of keypoints?
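One proxy I could compute per detector/descriptor combination is the inlier ratio: the number of inliers reported by recoverPose divided by the number of matches fed in. A self-contained sketch, assuming the mask has been copied into a std::vector<unsigned char> (the function name is mine, not OpenCV's):

```cpp
#include <cstddef>
#include <vector>

// Inlier ratio of a RANSAC mask: the fraction of non-zero entries.
// A higher ratio suggests the detector/descriptor pair produced more
// geometrically consistent matches on this data.
double inlierRatio(const std::vector<unsigned char>& mask)
{
    if (mask.empty())
        return 0.0;
    std::size_t inliers = 0;
    for (unsigned char m : mask)
        if (m != 0)
            ++inliers;
    return static_cast<double>(inliers) / static_cast<double>(mask.size());
}
```

For example, a mask {1, 0, 1, 1} would give a ratio of 0.75. This is only a proxy, of course; it measures geometric consistency after RANSAC, not the localization accuracy of the keypoints themselves.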
Kind regards