Image difference after Image registration and alignment

I am aligning two images with OpenCV and NumPy in Python, following the logic from the link below.

The alignment works, but when I try to find the image differences with cv2.absdiff, every object is flagged as different even though there are only a few actual differences.



Can someone please help me find the reason?

Code and output too, please.

import cv2
import numpy as np
import imutils

MAX_FEATURES = 500         # constants not shown in the post; typical values, adjust as needed
GOOD_MATCH_PERCENT = 0.15

im1 = cv2.imread("Image1.png")
im2 = cv2.imread("Image2.png")

im1Gray = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
im2Gray = cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY)

# Detect ORB features and compute descriptors
orb = cv2.ORB_create(MAX_FEATURES)
keypoints1, descriptors1 = orb.detectAndCompute(im1Gray, None)
keypoints2, descriptors2 = orb.detectAndCompute(im2Gray, None)

matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
matches = matcher.match(descriptors1, descriptors2, None)

matches = sorted(matches, key=lambda x: x.distance)

numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
matches = matches[:numGoodMatches]

imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None)
cv2.imwrite("matches.jpg", imMatches)

points1 = np.zeros((len(matches), 2), dtype=np.float32)
points2 = np.zeros((len(matches), 2), dtype=np.float32)

for i, match in enumerate(matches):
  points1[i, :] = keypoints1[match.queryIdx].pt
  points2[i, :] = keypoints2[match.trainIdx].pt

# The homography maps points in im1 to points in im2, so warping im1 with it puts im1 into im2's frame
h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)

height, width, channels = im2.shape
im1Reg = cv2.warpPerspective(im1, h, (width, height))

cv2.imwrite("AlignedImage.png", im1Reg);

# Find the differences between the aligned image and the reference image

imageA = cv2.imread("AlignedImage.png")   # aligned version of Image1 written above
imageB = cv2.imread("Image2.png")         # reference frame the alignment was computed against

grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY)

diff = cv2.absdiff(grayA, grayB)

# Differences are bright in the absdiff image, so keep them white with THRESH_BINARY
thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)

for c in cnts:
  (x, y, w, h) = cv2.boundingRect(c)
  cv2.rectangle(imageA, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("bailey-Difference.png", imageA)

ok, so the result isn't subpixel-exact. that is to be expected: you've got a PDF drawn on a screen at an unknown, arbitrary scale and translation.

have you ever just displayed the difference image? what magnitudes do you see?
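
for instance, something like this (a rough sketch, assuming the aligned image and the reference were saved as AlignedImage.png and Image2.png as in the code above):

import cv2

# recompute the raw difference from the saved images (filenames assumed from the post)
grayA = cv2.cvtColor(cv2.imread("AlignedImage.png"), cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(cv2.imread("Image2.png"), cv2.COLOR_BGR2GRAY)
diff = cv2.absdiff(grayA, grayB)

# a clean alignment should leave mostly small residuals
print("max diff:", diff.max(), "mean diff:", diff.mean())

# stretch the residuals so faint differences become visible when you view the image
cv2.imwrite("diff_stretched.png", cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX))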

otsu will find something if there is anything at all. you should work with fixed thresholds here.
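
e.g. along these lines (sketch only; the threshold of 25 is an arbitrary starting point to tune, and the filenames are taken from the code above):

import cv2

grayA = cv2.cvtColor(cv2.imread("AlignedImage.png"), cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(cv2.imread("Image2.png"), cv2.COLOR_BGR2GRAY)
diff = cv2.absdiff(grayA, grayB)

# fixed threshold: only flag pixels that differ by more than ~25 grey levels
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
cv2.imwrite("diff_mask.png", mask)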

consider using morphology operations.
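
roughly like this (a sketch that cleans up the binary mask written by the previous snippet; kernel size and iteration count are guesses to tune):

import cv2

mask = cv2.imread("diff_mask.png", cv2.IMREAD_GRAYSCALE)

# opening removes isolated speckles, closing merges fragments that belong to the same change
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
clean = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel, iterations=3)
cv2.imwrite("diff_mask_clean.png", clean)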

consider swapping findContours for connectedComponentsWithStats. that’ll give you bounding boxes directly, and you can use the label map to colorize gently.
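
something like this sketch, continuing from the cleaned mask above (the minimum area of 20 is an arbitrary cutoff):

import cv2

clean = cv2.imread("diff_mask_clean.png", cv2.IMREAD_GRAYSCALE)
vis = cv2.imread("AlignedImage.png")

# label 0 is the background; each stats row is [x, y, w, h, area]
num, labels, stats, centroids = cv2.connectedComponentsWithStats(clean, connectivity=8)
for x, y, w, h, area in stats[1:]:
    if area < 20:  # ignore tiny specks
        continue
    cv2.rectangle(vis, (int(x), int(y)), (int(x + w), int(y + h)), (0, 0, 255), 2)
cv2.imwrite("differences_boxed.png", vis)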

you use warpPerspective without a specific interpolation mode, so you get the default bilinear interpolation. for content like this it's worth asking for a higher-quality mode explicitly, e.g. INTER_CUBIC or INTER_LANCZOS4.
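
e.g. (assuming im1, im2 and the homography h from your alignment code):

# explicitly request a higher-quality interpolation for the warp
height, width = im2.shape[:2]
im1Reg = cv2.warpPerspective(im1, h, (width, height), flags=cv2.INTER_CUBIC)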

you only rely on keypoint matches and findHomography. you should run ECC refinement on that homography too.
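
a rough sketch of that refinement, assuming im1, im1Gray, im2Gray and the feature-based homography h from your code (the exact findTransformECC signature differs a little between OpenCV versions; some builds also want inputMask and gaussFiltSize arguments):

import cv2
import numpy as np

# ECC wants a 3x3 float32 matrix for MOTION_HOMOGRAPHY; start from the feature-based estimate
warp = h.astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)

try:
    # template = im1Gray, input = im2Gray keeps the same direction as h (im1 coords -> im2 coords)
    cc, warp = cv2.findTransformECC(im1Gray, im2Gray, warp, cv2.MOTION_HOMOGRAPHY, criteria)
except cv2.error:
    pass  # ECC can fail to converge; fall back to the unrefined homography

height, width = im2Gray.shape
im1Reg = cv2.warpPerspective(im1, warp, (width, height), flags=cv2.INTER_CUBIC)
cv2.imwrite("AlignedImage.png", im1Reg)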

Thanks for the inputs. Will try with these.

Below is the difference image I get with the cv2.absdiff method.

I have two images that are similar, but one has a lighter shade of color than the other. When I compute the difference between them, everything is flagged as different even though the objects only differ in their shade of gray. Is there any way to ignore the color/shade difference when finding the differences?

Code:

import cv2
import imutils

imageA = cv2.imread("Image1.png")
imageB = cv2.imread("Image2.png")

grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY)

diff = cv2.absdiff(grayA, grayB)

# Keep the bright (different) pixels white with THRESH_BINARY
thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)

for c in cnts:
  (x, y, w, h) = cv2.boundingRect(c)
  cv2.rectangle(imageA, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("result.png", imageA)

Appreciate the help.