I want to compute a similarity measure between two images (if the images are totally different then similarity = 0; if they are exactly the same then similarity = 1). I am trying to approach this problem using feature matching. However, I think I am not doing it the correct way. This is my actual code:
def get_similarity_from_desc(approach, query_desc, corp_desc):
    if approach == 'sift':
        # BFMatcher with euclidean distance (float descriptors like SIFT)
        bf = cv.BFMatcher()
    else:
        # BFMatcher with hamming distance (binary descriptors like ORB)
        bf = cv.BFMatcher(cv.NORM_HAMMING)
    matches = bf.knnMatch(query_desc, corp_desc, k=2)
    # Apply ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append([m])
    return len(good) / len(matches)
I am just wondering how I can get this similarity measure between two images in an appropriate way using image descriptors.
Thanks in advance.
there’s nothing much wrong with your code, but trying to measure similarity with feature matching is just a bad idea.
what is this for? which problem are you trying to solve? what’s the “use-case”?
hm in CBIR such features are used but they’re usually clustered in one way or another, so either database lookup is cheap (indexable data) or pairwise comparisons are cheap.
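To make the CBIR point concrete, here is a stdlib-only sketch of the bag-of-visual-words idea: once descriptors are quantized against a fixed vocabulary (cluster centres, typically from k-means over a training set), each image reduces to a small word-frequency vector, and pairwise comparison becomes a cheap vector distance. The function names and the tiny vocabulary here are purely illustrative.

```python
def nearest(desc, vocab):
    """Index of the vocabulary word closest to a descriptor (euclidean)."""
    return min(range(len(vocab)),
               key=lambda i: sum((d - v) ** 2 for d, v in zip(desc, vocab[i])))

def bow_vector(descriptors, vocab):
    """Normalized word-frequency vector for one image's descriptors."""
    counts = [0] * len(vocab)
    for d in descriptors:
        counts[nearest(d, vocab)] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]
```

With real data, the vocabulary would come from clustering SIFT/ORB descriptors, and the resulting vectors can also be put in an index for cheap database lookup.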
The way you approach it is going to depend greatly on what you mean by similar. For example, if you had two identical images except one was a 180-degree rotation of the other, what similarity score would you want? Or one is color and the other grayscale? Or one is a picture of a rainforest and the other a picture of a pine forest? The same scene with different lighting, the same scene from a different perspective, etc.
I second the “what is the use case” question. For example, I had a situation where I wanted to know when the auto-exposure algorithm of a video feed had “settled” after a lighting change. For this situation I computed a similarity score by doing a histogram comparison between successive images. Once the score stabilized or met a threshold value, I concluded the auto-exposure was done adjusting. This was simple and effective for my situation, but it’s probably not what you are looking for. More detail on what you are trying to accomplish will probably get more helpful suggestions.
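A stdlib-only sketch of that histogram-comparison idea, assuming 8-bit grayscale images given as flat lists of pixel values. In practice cv2.calcHist plus cv2.compareHist with HISTCMP_CORREL does the same job; the helper names here are illustrative.

```python
def histogram(pixels, bins=32):
    """Normalized intensity histogram of an 8-bit grayscale image."""
    h = [0] * bins
    for p in pixels:
        h[p * bins // 256] += 1
    total = len(pixels)
    return [c / total for c in h]

def correlation(h1, h2):
    """Pearson correlation between two histograms (what HISTCMP_CORREL computes)."""
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = (sum((a - m1) ** 2 for a in h1) *
           sum((b - m2) ** 2 for b in h2)) ** 0.5
    return num / den if den else 1.0
```

For the auto-exposure case, you would compute correlation(histogram(frame_n), histogram(frame_n1)) for successive frames and wait for the value to stabilize near 1.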
It is for comparing clothes. Basically, if the shape (whether it is a sneaker, shirt, sweatpants, sweatshirt, etc.) and the color are similar, then the images are similar.
All the images have the same size and the same grey background. At the moment, I read them in grayscale for computing the features.
Apart from feature matching I have seen other approaches, like labelling the images, then computing features, getting representative features by clustering, and then training an SVM model or similar.
I think that approach won’t work in this case since I only have 110 images (not enough for all the possible labels).
So perhaps instead of computing keypoints there is a way to get color features, then get shape features, and then find a way to combine both. After combining, measure similarity using Euclidean distance or similar.
Could you provide a simple code example (or a link) about how this works?
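A stdlib-only sketch of that colour+shape idea, assuming a coarse RGB histogram for colour and crude bounding-box statistics for shape; the feature choices and names are illustrative (cv2.HuMoments would be a stronger shape descriptor), and the distance-to-similarity mapping is just one possible choice.

```python
def color_hist(pixels, bins=4):
    """pixels: list of (r, g, b) tuples; returns a normalized joint histogram."""
    h = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins // 256) * bins * bins
               + (g * bins // 256) * bins
               + (b * bins // 256))
        h[idx] += 1
    total = len(pixels)
    return [c / total for c in h]

def shape_feats(mask, width):
    """mask: flat list of 0/1 foreground flags; crude area + aspect ratio."""
    rows = [i // width for i, m in enumerate(mask) if m]
    cols = [i % width for i, m in enumerate(mask) if m]
    if not rows:
        return [0.0, 0.0]
    area = len(rows) / len(mask)
    height = max(rows) - min(rows) + 1
    box_width = max(cols) - min(cols) + 1
    return [area, box_width / height]

def similarity(f1, f2):
    """Map euclidean distance between feature vectors to (0, 1]."""
    d = sum((a - b) ** 2 for a, b in zip(f1, f2)) ** 0.5
    return 1.0 / (1.0 + d)
```

The combined feature vector is just color_hist(...) + shape_feats(...); since the backgrounds are uniform, the foreground mask can come from a simple threshold against the grey background.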
c++ sample here
(however, it looks like the multi-image version won’t work properly from python)
((but i haven’t tried it, so far))