I have a template and an image. I want to find the location and orientation of the template inside the image. I am using SIFT to find features and compute descriptors.
The problem is that only one feature match is consistently correct. Homography requires at least 4 correspondences to work, so I get:
error: (-28:Unknown error code -28) The input arrays should have at least 4 corresponding point sets to calculate Homography in function 'cv::findHomography'
Since I am working with 2D images at the same scale, the position and rotation of even one correct feature match should be enough to recover the location and orientation of the template in the image.
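To illustrate what I mean, here is a minimal sketch of that idea, assuming a single trusted match m pairing kp1[m.queryIdx] in the template with kp2[m.trainIdx] in the image (the function name is mine):
import math

def pose_from_single_match(kp_t, kp_s):
    # kp_t: cv.KeyPoint in the template, kp_s: its match in the scene.
    # Assumes equal scale, so the mapping is rotation + translation only.
    theta = math.radians((kp_s.angle - kp_t.angle) % 360.0)  # KeyPoint.angle is in degrees
    c, s = math.cos(theta), math.sin(theta)
    xt, yt = kp_t.pt
    xs, ys = kp_s.pt
    # scene = R(theta) @ template + t  =>  t = scene - R(theta) @ template
    tx = xs - (c * xt - s * yt)
    ty = ys - (s * xt + c * yt)
    return tx, ty, math.degrees(theta)  # template-origin translation and rotation in degrees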
From the OpenCV docs, Introduction to SIFT (Scale-Invariant Feature Transform):
OpenCV also provides cv.drawKeyPoints() function which draws the small
circles on the locations of keypoints. If you pass a flag,
cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS to it, it will draw a circle
with size of keypoint and it will even show its orientation.
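For reference, that call looks like this (the actual function name in the Python bindings is cv.drawKeypoints; img1 and kp1 are from my code below):
img_kp = cv.drawKeypoints(img1, kp1, None,
                          flags=cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)  # circle radius = keypoint size, with an orientation line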
However, the image I am working with is too low-resolution to actually see the circles, and I need numbers that can be compared.
All the other examples of finding orientation that I can find on the internet use edge detection. However, there is no straight edge in my template whose slope could easily be calculated.
I have looked for tutorials, books, and documentation on how to crunch the numbers in 'keypoints' and 'descriptors', but I could not find any.
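The closest I have found is that each cv.KeyPoint exposes its numbers directly as attributes, and the descriptors come back as a NumPy array (kp1 and des1 are from my code below):
for kp in kp1:
    print(f"pos=({kp.pt[0]:.1f}, {kp.pt[1]:.1f})  size={kp.size:.1f}  "
          f"angle={kp.angle:.1f} deg  response={kp.response:.4f}  octave={kp.octave}")
print(des1.shape)  # (number of keypoints, 128) -- one 128-dim SIFT descriptor per keypoint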
Perhaps I should use SURF, which is faster for 2D, same-color images, but it is not available in the latest OpenCV version.
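(If I understand correctly, SURF is patented and lives in the contrib package, so it needs an OpenCV build with the non-free modules enabled; something like:)
# Requires OpenCV built with the contrib modules and OPENCV_ENABLE_NONFREE=ON
surf = cv.xfeatures2d.SURF_create(hessianThreshold=400)
kp, des = surf.detectAndCompute(img1, None)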
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt

# img1: the template, img2: the image to search, both loaded as grayscale
sift = cv.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
print(des1)
# BFMatcher with default params
bf = cv.BFMatcher()
matches = bf.knnMatch(des1,des2,k=2)
# Apply ratio test
good = []
good_match = []
for m, n in matches:
    if m.distance < 0.5 * n.distance:
        good.append([m])
        good_match.append(m)
print('good matches are')
print(good)
print(good_match)
# cv.drawMatchesKnn expects list of lists as matches.
img3 = cv.drawMatchesKnn(img1,kp1,img2,kp2,good,None,flags=cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
plt.imshow(img3),plt.show()
src_pts = np.float32([kp1[m.queryIdx].pt for m in good_match]).reshape(-1, 1, 2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in good_match]).reshape(-1, 1, 2)
# this is the line that raises error -28 when fewer than 4 good matches survive
M, mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC, 5.0)
matchesMask = mask.ravel().tolist()
h,w = img1.shape
pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
dst = cv.perspectiveTransform(pts,M)
img2 = cv.polylines(img2,[np.int32(dst)],True,255,3, cv.LINE_AA)
draw_params = dict(matchColor=(0, 255, 0),  # draw matches in green color
                   singlePointColor=None,
                   matchesMask=matchesMask,  # draw only inliers
                   flags=2)
# cv.drawMatches expects a flat list of DMatch, so pass good_match, not good
img3 = cv.drawMatches(img1, kp1, img2, kp2, good_match, None, **draw_params)
plt.imshow(img3, 'gray'),plt.show()
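As an aside, since only rotation and translation are expected, would cv.estimateAffinePartial2D be a better fit than findHomography? It estimates a similarity transform and needs only 2 correspondences instead of 4. A sketch, reusing src_pts and dst_pts from above:
import math

if len(good_match) >= 2:
    M2, inliers = cv.estimateAffinePartial2D(src_pts, dst_pts, method=cv.RANSAC)
    if M2 is not None:
        theta = math.degrees(math.atan2(M2[1, 0], M2[0, 0]))  # recovered rotation
        tx, ty = M2[0, 2], M2[1, 2]                           # recovered translation
        print(f"rotation = {theta:.1f} deg, translation = ({tx:.1f}, {ty:.1f})")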