Random changes in the processing time of matching two images

I am currently using OpenCV in Python to compare two images.

I am using FAST to detect the keypoints and BRIEF to compute the descriptors, then a Brute-Force matcher to determine whether the images match.

To reduce the processing time, I compare a selected area of interest in one image to the same area of interest in the second image. To test the method, I compared an image against itself while selecting progressively bigger areas of interest (all selected areas start from the same centre coordinates).

Theoretically, as the selected area gets bigger, more keypoints and feature matches are detected, so the processing time should increase with the dimensions of the selected area.

This is true when I test an image with only printed text. However, when I test an “art” image (an image of flowers), the processing time varies between 75 ms and 90 ms with no relation whatsoever to the dimensions of the selected area.

My initial guess was that this could be due to the CPU running other programs. However, if this were the case, the same fluctuations should be observed for the printed text image. I also attempted to test the art image in grayscale, but it yielded the same scattered results.

Does anyone have an idea of why the processing time might be varying randomly? Is it due to any limitation of the algorithms used?

I would really appreciate the help 🙂

that’s one scheduler time slice on Windows, i.e. the granularity of some time functions.
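you can check this yourself. a stdlib-only sketch (not your matching code): spin until the clock ticks and report the size of one step. on Windows, `time.time()` often steps in ~15.6 ms increments (the default scheduler tick), which is exactly the size of the jitter you're seeing, while `time.perf_counter()` steps at sub-microsecond granularity.

```python
import time


def timer_step(clock):
    """Spin until the given clock changes value and return the
    size of that single step, in seconds."""
    start = clock()
    while True:
        now = clock()
        if now != start:
            return now - start


# time.time() is typically much coarser than time.perf_counter(),
# especially on Windows (~15.6 ms scheduler tick).
print('time.time()         step:', timer_step(time.time))
print('time.perf_counter() step:', timer_step(time.perf_counter))
```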

present your methodology. your numbers mean nothing without that.

Here is my code:

import time

import cv2
import numpy as np

# fast, brief, bf, test, test_comp, ratio, minmatch and cfdval are
# initialised elsewhere in the full script.


# Function to select the region of interest
def imcal_ROI():
    global h, w, h_2, w_2, x, y, calikp, calides

    im_crop = None

    while im_crop is None:
        y = 689  # centre of image (text: 565, art: 689)
        x = 458  # centre of image (text: 384, art: 458)
        dimensions = np.empty(shape=[0, 2])
        im = cv2.imread(test)

        for i in range(10, 690, 10):  # until height of 1380
            h_2 = i
            h = i * 2
            w_2 = i
            w = i * 2

            if w_2 > 460:  # Set limit as taller than wider
                w_2 = 458
                w = 458 * 2

            dimensions = np.append(dimensions, [[h, w]], axis=0)

            im_crop = im[y - h_2:y + h_2, x - w_2:x + w_2]  # coordinates from centre
            cv2.imshow("Cropped Image", im_crop)
            cv2.waitKey(100)

            cali = im_crop
            calikp = fast.detect(cali, None)
            calikp, calides = brief.compute(cali, calikp)

            n_kp = len(calikp)
            print('Number of keypoints {}'.format(n_kp))
            if n_kp >= 2:  # need a minimum of k=2 for the bf knnMatch
                analysis_orb()

# Comparison Capture And Calculation
def analysis_orb():

    cv2.destroyAllWindows()  # Closes all currently open windows

    sec2 = time.time()

    cv2.imread(test_comp)  # clear buffer
    comp_src = cv2.imread(test_comp)

    # Crop the comparison image to the same dimensions as the calibration image
    comp = comp_src[y - h_2:y + h_2, x - w_2:x + w_2]

    # Determination of keypoints and corresponding binary descriptors
    compkp = fast.detect(comp, None)
    compkp, compdes = brief.compute(comp, compkp)

    # 2-nearest-neighbour matching for the ratio test
    matches = bf.knnMatch(compdes, calides, k=2)

    # Application of Lowe's ratio test to filter down to good results
    good = []
    for m, n in matches:
        if m.distance < ratio * n.distance:
            good.append(m)

    cfd = 0  # Confidence
    if len(good) > minmatch:  # Ensures that RANSAC is passed enough matches

        # Extracting keypoints from 'good' matches
        compgmp = np.float32([compkp[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        caligmp = np.float32([calikp[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

        # Homography is found using RANSAC
        M, mask = cv2.findHomography(compgmp, caligmp, cv2.RANSAC, 5.0)

        cfd = np.sum(mask) / len(mask)  # Confidence metric (inlier ratio)

        if cfd > cfdval:  # Determines if confidence is higher than threshold
            print('Match - Confidence is ' + str(round(cfd * 100, 2)) + '%')  # Data output for report
            match = 'pass'
        else:
            print('No Match - Insufficient Confidence: ' + str(round(cfd * 100, 2)) + '%')
            match = 'fail'
    else:
        print('No Match - Insufficient Confidence: not enough data points.')
        match = 'fail'

    sec3 = time.time()

    Totaltime = str(round((sec3 - sec2) * 1000, 7))
    print('Processing time: ' + Totaltime + ' ms')

I saw your post, and while fixing up the styling, the forum made it disappear. it’s probably in some moderation/spam queue. I can’t do anything about that.

the parts of it I did see did not show timing measurements. I hope I just missed them… make sure the code contains timing measurements.

yes, please try again !

I believe that my code is now visible again.

time.time() and timing a single iteration.

yeah, no.

what I said earlier is now certain to apply.

don’t “wing” this. do it right. learn how. starting point:

https://docs.python.org/3/library/timeit.html
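a sketch of the pattern those docs recommend, with a dummy workload standing in for your matching code: run the measured code many times per trial, repeat several trials, and take the minimum — the trial least contaminated by scheduler noise.

```python
import timeit


def workload():
    # stand-in for the matching step being measured
    return sum(i * i for i in range(10_000))


# Run the workload 50 times per trial, 5 trials; report the best trial.
times = timeit.repeat(workload, number=50, repeat=5)
best_ms = min(times) / 50 * 1000
print(f'best per-call time: {best_ms:.3f} ms')
```

`timeit` also disables garbage collection during the measurement by default, which removes one more source of random variation.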

related: python - Random changes in the processing time of matching two images - Stack Overflow

That is not the issue… I do not care about improving that part of the code. Any changes on how I record the time will not answer my question.

I know it’s related, it is the exact same post uploaded by me.

you could ask for clarification. I believe I stated that how you measure is “causing” the issue you complain about, and the issue is purely a measurement artefact.

maybe someone else will take the time to convince you that you need to pay attention. you didn’t think I was worth paying attention to.

I do not think it is because with any other picture I do not encounter this issue. I did not pay attention to your comment because of the level of disrespect.

related: python - Random changes in processing time of matching two images - Stack Overflow