Aligning 2 images perfectly before subtraction

I am trying to subtract two grayscale x-ray images. The first image shows an object before some structures have been deposited on it by a manufacturing process. The second image shows the same object after the structures have been deposited. The purpose of the subtraction is to eliminate some circular features within the object that are present both before and after the manufacturing process. These features interfere with the analysis of the object and of the manufacturing process, which is why I am pursuing this subtraction method; my hope is that I can subtract them out. My understanding is that there are parallels to this approach in the medical imaging industry.
But before I can do the subtraction, I need to make sure the two images are very well aligned, perfectly pixel to pixel if possible, so that I can subtract the intensities of corresponding pixels of the two grayscale images.
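For the subtraction step itself, I am assuming something as simple as the following once the images are aligned (the file names are placeholders; the conversion to a signed type is only there to avoid 8-bit underflow):

```python
import cv2
import numpy as np

# Hypothetical file names for the already-aligned grayscale images
before = cv2.imread("before.png", cv2.IMREAD_GRAYSCALE)
after_aligned = cv2.imread("after_aligned.png", cv2.IMREAD_GRAYSCALE)

# Subtract in a signed type so negative differences are not clipped to zero
diff = before.astype(np.int16) - after_aligned.astype(np.int16)

# Or, if only the magnitude of the change matters:
abs_diff = cv2.absdiff(before, after_aligned)
```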
The features in the images that I am trying to subtract out are present in both the “before” image and the “after” image; I can see them with my naked eye, and I want to use them for aligning the images as well. Automatic feature matching using OpenCV functions has failed very badly (I used cv2.ORB_create() and cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)).
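For reference, this is roughly the pipeline I used (a minimal sketch; the file names and parameter values are placeholders, and my actual preprocessing differs slightly):

```python
import cv2
import numpy as np

# Load the "before" and "after" radiographs as grayscale
before = cv2.imread("before.png", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("after.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and descriptors in both images
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(before, None)
kp2, des2 = orb.detectAndCompute(after, None)

# Brute-force matching with cross-check, then keep the best matches
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

# Estimate a homography from the best matches and warp "after" onto "before"
src = np.float32([kp2[m.trainIdx].pt for m in matches[:50]]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches[:50]]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(after, H, (before.shape[1], before.shape[0]))
```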
So I can’t use that anymore. I want to specify the matching coordinates to OpenCV manually, but I fear that I may inadvertently introduce human error when determining them. Even though I can see the matching features with the naked eye in both images, determining their exact coordinates and providing them to OpenCV as matches will be difficult, because the images have many pixels and are complex, with many shades and other features of varying grayscale intensity. Also, the matching features are circular in shape and do not have any sharp corners.
However, I am confident that I can supply to OpenCV the exact neighborhoods of the features to match, for image alignment.
So, is there a way in OpenCV to get the best of both worlds, where I provide my best estimate of the matching neighborhoods or coordinates, and OpenCV takes it from there, confirming and determining the exact matching pixels?
I feel that, to get the best results from the subtraction, I should match the two images perfectly, pixel by pixel. Any advice, please? I am very new to all this. Thank you in advance.
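To make the question concrete, the workflow I am imagining is something like the sketch below, where my rough manual clicks are refined automatically. The point coordinates and window sizes are made up, and using cv2.matchTemplate for the refinement is just my guess at how it could work:

```python
import cv2
import numpy as np

before = cv2.imread("before.png", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("after.png", cv2.IMREAD_GRAYSCALE)

# Rough, manually estimated coordinates of the same circular features
# in each image (hypothetical values, just to show the shapes involved)
pts_before = np.float32([[412, 310], [1280, 295], [415, 990], [1275, 1005]])
pts_after_guess = np.float32([[430, 330], [1301, 312], [436, 1008], [1294, 1021]])

half = 40  # half-size of the template window around each point

def refine(pt_src, pt_guess, search=60):
    """Refine a guessed correspondence by template matching a small
    patch from 'before' inside a search window of 'after'."""
    x0, y0 = int(pt_src[0]), int(pt_src[1])
    template = before[y0 - half:y0 + half, x0 - half:x0 + half]
    xg, yg = int(pt_guess[0]), int(pt_guess[1])
    win = after[yg - half - search:yg + half + search,
                xg - half - search:xg + half + search]
    res = cv2.matchTemplate(win, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    # Convert the best-match location back to full-image coordinates
    return (xg - half - search + max_loc[0] + half,
            yg - half - search + max_loc[1] + half)

pts_after = np.float32([refine(p, g) for p, g in zip(pts_before, pts_after_guess)])

# Fit a similarity transform from the refined matches and warp "after"
M, _ = cv2.estimateAffinePartial2D(pts_after, pts_before)
aligned = cv2.warpAffine(after, M, (before.shape[1], before.shape[0]))

diff = cv2.absdiff(before, aligned)
```

The idea is that I would only need to click somewhere near each circular feature, and the local template matching would pin down the exact pixel offset before the transform is fitted. Is something along these lines reasonable, or is there a more standard mechanism in OpenCV for refining user-supplied matches?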

please, add a pair of before/after images, ty :wink:

I am not authorized to publish the images on a public website. Can I please get some suggestions or thoughts on my approach? Any references to code examples or articles to read?