Help with feature detection in an image (finding the centre circle on a soccer pitch)

Hi everyone, after watching a YouTube video about ‘Extracting Player Tracking Data - Using Non-Stationary Cameras & Computer Vision Techniques’, I thought I would try to apply a similar technique to soccer, to help find video clips to share with my son’s soccer team, which I coach.

It’s my first time using OpenCV, so I’m very much learning as I go.

I’ve created a Python function that takes a still from a video, finds the largest green area (assumed to be the pitch), masks it, and then finds the edges to highlight the lines on the pitch. I’ve attached an example before and after image. I hope I’m not celebrating too soon, but this felt relatively successful. My first aim is to work out what part of the pitch the image is looking at; player and ball detection I plan to tackle later.

But since then I’ve struggled to write a python function that finds my reference / template image inside the black&white edge images I’ve created. I thought I would start simple and just try to find the centre circle on the pitch, but I haven’t even managed to crack that.

Below is my current code and also attached is my template / reference image, which is just a simple circle with a double-line around the outside, which I was hoping could be found inside the edges image.

I may have jumped to the wrong conclusion, but I was hoping that OpenCV would be able to find my 2D reference image inside my B&W edge images, even though the scene is viewed from a different perspective?

Note: I’ve had to merge all the images into a single file because as a new member of the forum, I’m limited to only one attachment per post 🙂

Any thoughts and ideas would be much appreciated.

Many thanks in advance,


import cv2 as cv

def find_centre_circle(edge_image):

    template = cv.imread("centre-circle.png", cv.IMREAD_COLOR)

    template_gray = cv.cvtColor(template, cv.COLOR_BGR2GRAY)
    image_gray = edge_image

    # Detect ORB keypoints and descriptors in both images
    orb = cv.ORB_create()
    keypoints1, descriptors1 = orb.detectAndCompute(template_gray, None)
    keypoints2, descriptors2 = orb.detectAndCompute(image_gray, None)

    # ORB descriptors are binary, so match with Hamming distance
    bf = cv.BFMatcher(cv.NORM_HAMMING)
    matches = bf.knnMatch(descriptors1, descriptors2, k=2)

    # Lowe's ratio test to keep only distinctive matches
    good_matches = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good_matches.append(m)

    if len(good_matches) > 10:
        print("matches found")
        # process the match
    else:
        print("Not enough matches found")

why does everyone suddenly seem to want to do this?

here’s one of many existing discussions of this:

I can tell you right now that your approach is doomed. feature matching on an ellipse does not work. it has no features.
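Since the circle has no texture for a feature detector to latch onto, a geometry-based alternative is to fit ellipses directly to the edge contours with `cv.fitEllipse`. A sketch (the `min_points` threshold here is an arbitrary stability guard, not an OpenCV requirement beyond the 5-point minimum):

```python
import numpy as np
import cv2 as cv

def find_ellipses(edge_image, min_points=50):
    # Fit an ellipse to each sufficiently long edge contour
    contours, _ = cv.findContours(edge_image, cv.RETR_LIST, cv.CHAIN_APPROX_NONE)
    ellipses = []
    for c in contours:
        if len(c) < min_points:  # fitEllipse needs >= 5 points; require more for stability
            continue
        (cx, cy), (w, h), angle = cv.fitEllipse(c)
        ellipses.append(((cx, cy), (w, h), angle))
    return ellipses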

real Video Assistant Referee systems are a LOT more complicated. they also work with more reliable information than just the video feed.
