Measuring image similarity with OpenCV

At my company we print out documents, make changes to those documents, and scan them back in. Sometimes the scans are subtly rotated and I use OpenCV to align the two images. Here’s the code I use to do this:

import sys
import cv2
import numpy as np

if len(sys.argv) != 4:
  print('usage: python3 %s ref.png new.png output.png' % sys.argv[0])
  sys.exit(1)


def filter_matches(kp1, kp2, matches, ratio = 0.75):
  mkp1, mkp2 = [], []
  for m in matches:
    if len(m) == 2 and m[0].distance < m[1].distance * ratio:
      m = m[0]
      mkp1.append( kp1[m.queryIdx] )
      mkp2.append( kp2[m.trainIdx] )
  p1 = np.float32([kp.pt for kp in mkp1])
  p2 = np.float32([kp.pt for kp in mkp2])
  kp_pairs = zip(mkp1, mkp2)
  return p1, p2, list(kp_pairs)

def alignImages(im1, im2):
  detector = cv2.AKAZE_create()
  FLANN_INDEX_LSH = 6  # LSH index, suited to binary descriptors like AKAZE's
  flann_params = dict(algorithm = FLANN_INDEX_LSH,
    table_number = 6, # 12
    key_size = 12,     # 20
    multi_probe_level = 1) #2
  matcher = cv2.FlannBasedMatcher(flann_params, {})

  kp1, desc1 = detector.detectAndCompute(im1, None)
  kp2, desc2 = detector.detectAndCompute(im2, None)

  raw_matches = matcher.knnMatch(desc1, trainDescriptors = desc2, k = 2)
  p1, p2, kp_pairs = filter_matches(kp1, kp2, raw_matches)
  if len(p1) < 4:
    print('%d matches found, not enough for homography estimation' % len(p1))
    sys.exit(1)

  H, matches = cv2.findHomography(p1, p2, cv2.RANSAC, 5.0)
  print(str(len(matches.ravel().tolist())))  # number of match pairs fed to RANSAC

  height, width = im2.shape

  imResult = cv2.warpPerspective(im1, H, (width, height))

  return imResult

refFilename = sys.argv[1]
imFilename = sys.argv[2]
outFilename = sys.argv[3]

imRef = cv2.imread(refFilename, cv2.IMREAD_GRAYSCALE)
im = cv2.imread(imFilename, cv2.IMREAD_GRAYSCALE)

imNew = alignImages(im, imRef)

cv2.imwrite(outFilename, imNew)

The problem is that sometimes the documents are a complete mismatch, e.g.:

When I run the above script on two completely mismatched pages I get something like this:

My question is: is there a way I can detect these completely mismatched pages? Is there a way to score two pages by similarity? I had hoped that `print(str(len(matches.ravel().tolist())))` might do the trick (it's in the code), but in my experience two mismatched pages can sometimes produce more matches than correctly matched pages. If memory serves, some of the image alignment / object recognition methods score each match, so maybe I could score the pages by the ratio of matches with a confidence > 20% to matches with a confidence < 20%? But if that were a viable strategy, how would I do it with AKAZE?
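One crude score that doesn't depend on raw match counts: `cv2.findHomography` returns a 0/1 inlier mask as its second value (the `matches` variable in the script above), and the *fraction* of ratio-test matches that survive RANSAC as inliers tends to collapse for mismatched pages even when the raw match count is high. A minimal sketch, shown with a synthetic mask standing in for the real `findHomography` output (the 0.3 cutoff mentioned in the usage note is a guess you'd tune on known-good scans):

```python
import numpy as np

def match_score(mask, n_raw_matches):
    """Fraction of ratio-test matches that survive RANSAC as
    homography inliers. `mask` is the Nx1 0/1 array returned as the
    second value of cv2.findHomography."""
    if n_raw_matches == 0:
        return 0.0
    inliers = int(np.asarray(mask).ravel().sum())
    return inliers / n_raw_matches

# Synthetic example: 100 ratio-test matches, 72 RANSAC inliers
mask = np.array([1] * 72 + [0] * 28, dtype=np.uint8)
print(match_score(mask, len(mask)))  # 0.72
```

In the script above this would be `match_score(matches, len(p1))`; flagging pages where the score falls below some tuned threshold (say 0.3) is one way to reject mismatches.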


Analyze the transform. It should not even be perspective if you use actual scanner hardware rather than photos. It should have an expected scale, near-zero rotation, no shearing of note, and the translation should be within some expected range as well.
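That check can be sketched by decomposing the 3×3 homography directly: the upper-left column gives scale and rotation, and the bottom-row entries carry the perspective distortion. The thresholds below are placeholders, not recommendations; tune them on scans you know are good.

```python
import math
import numpy as np

def homography_is_plausible(H, max_rotation_deg=2.0,
                            scale_range=(0.95, 1.05),
                            max_perspective=1e-4):
    """Reject homographies a flatbed scanner could not have produced.
    Thresholds are illustrative only."""
    H = np.asarray(H, dtype=float)
    H = H / H[2, 2]                        # normalize so H[2,2] == 1
    scale = math.hypot(H[0, 0], H[1, 0])   # length of first column
    rotation = math.degrees(math.atan2(H[1, 0], H[0, 0]))
    perspective = max(abs(H[2, 0]), abs(H[2, 1]))
    return (scale_range[0] <= scale <= scale_range[1]
            and abs(rotation) <= max_rotation_deg
            and perspective <= max_perspective)

# Near-identity transform: plausible for a scanner
print(homography_is_plausible(np.eye(3)))  # True

# A 30-degree rotation: not something a scanner does
c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
print(homography_is_plausible([[c, -s, 0], [s, c, 0], [0, 0, 1]]))  # False
```

A translation-range check (on `H[0, 2]` and `H[1, 2]`) could be added the same way once you know your page dimensions.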

You can throw OCR at both docs and compare. Extracted text is a different kind of feature.