Struggling with Processing this Image for Detection

This image is the major culprit. I'm not sure how to go about processing it, and I should note that I'm still pretty new to the OpenCV library. I've tried binary thresholding, but that still leaves some really large gaps. I then found some interesting code that tries to fill out the objects, but that hasn't been working too well in my case. Any tips or ideas?

#include &lt;opencv2/opencv.hpp&gt;
#include &lt;iostream&gt;

void fillHoles(cv::Mat& img)
{
    if (img.channels() > 1)
    {
        std::cout << "fillHoles !!! Image must be single channel" << std::endl;
        return;
    }

    // Flood-fill the background from the top-left corner; any pixel that is
    // still 0 afterwards is a hole inside an object, so paint it white.
    cv::Mat holes = img.clone();
    cv::floodFill(holes, cv::Point2i(0, 0), cv::Scalar(1));

    for (int i = 0; i < (img.rows * img.cols); i++)
        if (holes.data[i] == 0)
            img.data[i] = 255;
}

int main()
{
    // Load input image
    cv::Mat input = cv::imread("testedit2.png");
    if (input.empty())
    {
        std::cout << "!!! Failed to open image" << std::endl;
        return -1;
    }

    // Convert it to grayscale
    cv::Mat gray;
    cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);

    // Threshold the grayscale image for segmentation purposes
    cv::Mat thres;
    cv::threshold(gray, thres, 110, 255, cv::THRESH_BINARY);
    //cv::imwrite("threshold.jpg", thres);
    cv::imshow("threshold", thres);

    // Dirty trick to join nearby segments (closing bridges small gaps)
    cv::Mat element = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
    cv::morphologyEx(thres, thres, cv::MORPH_CLOSE, element);
    //cv::imwrite("morph.jpg", thres);
    cv::imshow("morph", thres);

    // Fill the holes inside the segments
    cv::bitwise_not(thres, thres);
    fillHoles(thres);
    //cv::imwrite("filled.jpg", thres);
    cv::imshow("filled", thres);

    cv::waitKey(0);
    return 0;
}

what are you trying to detect ?

also, can you try to change the lighting, so you get fewer "specular highlights"?

I'm trying to detect those spheres within the channel, and yes, I have been working on fixing the lighting. It'd help a great deal, I'm sure.

each “sphere” reflects that light ring from around the camera.

take a crop of one such instance, but not too closely cropped, with enough area around it of course.

matchTemplate should be able to find all the instances.

That's a good idea, but I was thinking about using frame-by-frame image subtraction instead, since I'll want to do this for every frame of the video. I'm worried that the template would require some pretty aggressive calibration in the practical application. I've been testing subtracting two Mats based on a frame difference, but that hasn't been giving me much so far.