SIFT with FLANN: results differ over iterations on same images

Hey !
I’m trying to create a Python script to find a homography in an image from a template, but I’m stuck on a problem. If I run my function in a loop, the results evolve over the iterations. It’s not completely random, though, because each time I run the script I get the same results at each loop index.

Here’s the code, then the output of the 3 prints :

import boto3
import cv2
import numpy as np

def lambda_handler(event, context):
    s3 = boto3.client('s3')

    body = event

    # img1 & img 2 declaration here

    sift = cv2.SIFT_create(nOctaveLayers=3)
    # find the keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1,None)
    kp2, des2 = sift.detectAndCompute(img2,None)

    FLANN_INDEX_KDTREE = 1
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)

    # NOTE: without "global initialFlann" this always raises
    # UnboundLocalError, so a fresh matcher is built on every call
    try:
        flann = initialFlann
    except UnboundLocalError:
        flann = cv2.FlannBasedMatcher(index_params, search_params)
        initialFlann = flann.clone(True)

    matches = flann.knnMatch(des1,des2,k=2)


    # ratio test as per Lowe's paper
    good = []
    for m, n in matches:
        if m.distance < 0.7 * n.distance:
            good.append(m)

    minInliers = 20

    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)


    if len(src_pts) < 4 or len(dst_pts) < 4:
        return False

    try:
        M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 3.0)
        matchesMask = mask.ravel().tolist()
        inliers = int(mask.sum())  # count RANSAC inliers
    except Exception:
        return False


    if inliers >= minInliers:
        return True
    else:
        print("Not enough matches are found - {}/{}".format(inliers, minInliers))
        return False

for x in range(50):
    lambda_handler(None, None)

Here are the results:

I was wondering whether there was some kind of cache, as if some training were being performed and saved, but if that’s the case, I can’t find a way to get rid of it. For now I’ve tried calling flann.clear() after the matcher has run, but it seems to have no effect at all.

Does any of you have any idea? Thanks!



Maybe you already found the reason: FLANN is random. It speeds up matching by working on a random subset, giving approximate results.

This question arises from time to time; I believe it would be good to have a post on this, so many users will find their answer before asking.

Thanks for your answer. I guessed it had something to do with that, but isn’t it strange that I get the same set of results each time I run it? Every time I run the script, I get the same result on loop 1, on loop 2, etc. That makes me think it is not that random; I really felt like it was some kind of cached training or something like that.

@Alejandro_Silvestri Getting back to you about this.

How could this be random if I get the same results for each loop every time I run my script? It looks like there’s something not that random in all this…

Thanks again,

My mistake. “FLANN is random” is an oversimplification.
FLANN is a library with many algorithms; not all of them are deterministic.
The usual first step in FLANN is to automatically pick an algorithm based on the data.

Okay thanks!

Is there a way for me to get the same result for each iteration? Like forcing the algorithm which will be used?

thanks again,


Sorry, I can’t help you with this. I recommend looking at the reference documentation for such a characteristic in OpenCV’s FLANN-based algorithms.

I’m facing this same problem — did you find a solution?