Using SIFT for Local Feature Extraction on Stars

Hey everyone!

I’m looking for some advice on how to improve my approach, as I’m fairly new to CV.

The project involves aligning astronomical images nonlinearly. There is a package out there called astroalign, which uses triangulations to detect pattern matches between two images, and that is a great technique. However, I wanted to take global distortions into account, which led me to try local invariant descriptors instead.

Here is my routine:

  1. Find the brightest stars (100 stars for now) in the image
  2. Do feature extractions on these bright stars using SIFT
  3. Brute-force match the SIFT descriptors
  4. RANSAC to filter out bad matches
  5. Use Thin Plate Spline (TPS) interpolation for the nonlinear part

I don’t know if SIFT is the best method for doing feature extraction, but it’s one I have implemented.
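In case it helps, here is a rough sketch of what steps 2–5 look like with OpenCV’s Python bindings. The star lists `stars1`/`stars2`, the function name, and the keypoint size of 16 px are placeholders I made up for the example, and I haven’t pinned down the TPS warp direction, so read it as an outline rather than finished code:

```python
import cv2
import numpy as np

def align_sketch(img1, img2, stars1, stars2):
    """Rough outline of steps 2-5. img1/img2 are 8-bit grayscale images,
    stars1/stars2 are (N, 2) arrays of star centroids from step 1
    (detection itself not shown here)."""
    sift = cv2.SIFT_create()

    # Step 2: describe the provided star locations (compute only, no detect).
    kp1 = [cv2.KeyPoint(float(x), float(y), 16) for x, y in stars1]
    kp2 = [cv2.KeyPoint(float(x), float(y), 16) for x, y in stars2]
    kp1, des1 = sift.compute(img1, kp1)
    kp2, des2 = sift.compute(img2, kp2)

    # Step 3: brute-force matching with cross-check.
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = bf.match(des1, des2)

    # Step 4: RANSAC on a homography model to drop outlier matches.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = [m for m, ok in zip(matches, mask.ravel()) if ok]

    # Step 5: TPS on the inlier correspondences for the nonlinear residual.
    src_in = np.float32([kp1[m.queryIdx].pt for m in inliers]).reshape(1, -1, 2)
    dst_in = np.float32([kp2[m.trainIdx].pt for m in inliers]).reshape(1, -1, 2)
    dmatches = [cv2.DMatch(i, i, 0) for i in range(len(inliers))]
    tps = cv2.createThinPlateSplineShapeTransformer()
    # Note: the estimate direction may need swapping depending on which way
    # you want to warp; I haven't verified this part.
    tps.estimateTransformation(dst_in, src_in, dmatches)
    warped = tps.warpImage(img1)
    return H, inliers, warped
```

The keypoint size (16 here) is just something I picked; it controls how big a patch SIFT describes around each star.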

I do have a few questions:

  1. Am I using SIFT properly here to account for global distortion? I give SIFT points that I detect myself rather than letting the algorithm find them. It is supposed to be “scale-invariant” after all.
  2. Should I be finding a way to grab more stars from the outer edges of the images, or are the 100 brightest fine? Again, I am trying to account for global distortions.
  3. How is OpenCV using the size of the keypoint? Is there an equation I can look up to see how it computes scale/sigma from it?

Any advice is appreciated. Thanks!

to “generic” feature descriptors like SIFT, all the stars will look the same, especially if you have any lens/aperture effects (diffraction spikes).

expect that you won’t be able to distinguish stars that easily.

That is what I thought as well. But I noticed it did pretty well in the matching stage; I’ll post a picture of it.

Do you have any recommendations for an alternative I could switch to that would help me handle global distortion?

that picture does surprise me. if it works for you, keep using it.

the scale invariance comes from SIFT estimating the scale of the feature (laplacian pyramid and all that), which should correlate with the scaled appearance of a physical feature (of a certain size) in a camera picture.

SIFT then considers a local neighborhood around the point that is proportional to its scale (as figured from the octave in the laplacian pyramid and some intermediate levels between octaves), and as such is invariant/robust to the scaling of the appearance of the physical feature.
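a quick toy sketch (nothing official, just to illustrate): if you hand SIFT your own keypoints, the `size` you put in the KeyPoint is what decides how big a patch gets described, so the same star described at two different sizes gives different descriptors.

```python
import cv2
import numpy as np

# toy illustration: same location, two keypoint sizes -> different descriptors,
# because the described patch scales with kp.size.
# "starfield.png" and the coordinates are made-up placeholders.
img = cv2.imread("starfield.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
x, y = 512.0, 384.0
_, d_small = sift.compute(img, [cv2.KeyPoint(x, y, 8.0)])
_, d_large = sift.compute(img, [cv2.KeyPoint(x, y, 32.0)])
print(np.linalg.norm(d_small - d_large))  # generally nonzero
```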

the 100 brightest have no reason not to be distributed randomly, so that’ll be good enough. you could go to the extra trouble of collecting features in the corners of the view, but I wouldn’t worry about that unless there’s evidence of the homography “wiggling” or not sufficiently aligning the two views.

you can always run a pass of ECC refinement. that’ll use both pictures in their entirety to refine the homography.
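roughly like this (a sketch, assuming single-channel 8-bit or float32 inputs and an initial 3x3 homography from findHomography; the function name is made up):

```python
import cv2
import numpy as np

def refine_with_ecc(img_ref, img_mov, H):
    # refine an initial homography with ECC; this uses the full image
    # intensities, not keypoints. check that your H maps in the direction
    # ECC expects before trusting the result.
    warp = np.asarray(H, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    cc, warp = cv2.findTransformECC(
        img_ref, img_mov, warp, cv2.MOTION_HOMOGRAPHY, criteria, None, 5)
    return warp, cc  # refined homography and the final correlation coefficient
```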

keypoint “sizes” are mostly implementation-defined, i.e. specific to the feature detection/description algorithm you’re using (SIFT, AKAZE, …). you can expect to find details in the papers of the respective algorithms, but opencv’s implementation might differ. hard to say. opencv docs might also not go into that much detail.
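you can at least peek at what opencv’s own detector fills in for those fields, e.g.:

```python
import cv2

# "starfield.png" is a placeholder path
img = cv2.imread("starfield.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
for kp in sift.detect(img, None)[:5]:
    # size is documented as the diameter of the meaningful neighborhood;
    # for SIFT, octave packs octave/layer info (implementation detail).
    print(kp.pt, kp.size, kp.octave, kp.response)
```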

That makes sense to me. Apologies in advance, I’m going to ask a slightly different question here:

Although it does “work” for now, what you mentioned about SIFT’s scale invariance is true. However, I don’t really give it the chance to detect anything; I only call compute.

I’ve read about Delaunay triangulation as a method for handling local structures in images. Given that my project involves global distortions, I’m curious about whether combining Delaunay triangulation with SIFT would be a good approach. What do you think?

delaunay or not…

the graphs have a good chance of differing in parts. if you take the 100 brightest stars, that might happen to not be the same set in both pictures.

and then you’ve got yourself the problem of “graph isomorphism”, which, I hear, is an active and current field of research at top universities. there seem to be some useful heuristics for such problems, which you can use.

you could use a star’s “local neighborhood” in such a graph as a feature vector. then you’ll have to think about “feature engineering”, i.e. what information to extract and how to represent it such that the feature vector is invariant/robust to some things (rotation, scaling, slight variations in appearance/neighborhood).
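for example (just a sketch of the idea, not battle-tested, and the function name is made up): take each star’s k nearest neighbours (or its delaunay neighbours) and use the sorted distances normalized by the nearest one. ratios of distances don’t change under rotation or uniform scaling:

```python
import numpy as np
from scipy.spatial import cKDTree

def neighborhood_descriptor(points, k=6):
    # points: (N, 2) star positions. returns one (k,) vector per star:
    # distances to the k nearest neighbours, sorted and divided by the
    # nearest distance. rotation- and scale-invariant by construction.
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)  # column 0 is the point itself
    d = np.sort(dists[:, 1:], axis=1)       # drop the zero self-distance
    return d / d[:, :1]

# matching could then be nearest-neighbour search in this descriptor space,
# followed by the same RANSAC/TPS steps as before.
```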