How to measure exact horizontal and vertical circle diameters?

Hi all, I’m successfully using HoughCircles to locate an indentation in an image of a piece of metal, but I also need to validate whether the indentation is sufficiently circular (according to a defined criterion). I’ve observed that HoughCircles also detects indentations that are not sufficiently circular, so a second validation step is needed.
What I have tried…
a) Using the output from fitEllipse to fit an ellipse to the same shape that HoughCircles detected as a circle. Because fitEllipse returns width and height, the idea was to use these to assess circularity. This approach was nearly successful but not quite: there are 2 types of camera image to support, and fitEllipse only detected the indentation in the images from one of them, in spite of extensive testing of different parameters.
b) I also tried the SimpleBlobDetector, but this only picked up very small blobs in the image, not the huge one right in the middle.

Any suggestions on where I might go from here?

In essence, it seems like a simple thing. We already have the centre point of the candidate circle and the radius, which may vary a bit depending on which point of the circumference the measurement is taken from. So, is there a straightforward way to locate the circle edge and thereby measure the radius in each of the N/S/E/W directions? If there is not, then I’m guessing I might need to…

  • Either run the Canny edge detector (invoked previously to test fitEllipse) and process the output somehow (?) to detect the circle edges.
  • Or, perhaps copy and modify the HoughCircles function to achieve the same result?
  • Or ???

please add some example images to your question
it would be important to see, in which exact manner your blobs differ from a perfect circle

Here’s a source image (after applying blur), plus the Canny output and resulting ellipses. I’ve been experimenting with fitEllipse, fitEllipseAMS and fitEllipseDirect. There is some difference between them, but it’s not significant. As you can see, I’m getting noise and not the bit I’m interested in. Adding enough blur removes the noise, but the indentation still isn’t detected; and similarly with the other configurables.

I don’t have a ‘real’ image that deviates sufficiently, so I generated some by drawing various degrees of deviation onto a bitmap, invoking HoughCircles on each, and then testing varying configurations of HoughCircles param1/param2 to see if there was a configuration that satisfied the criterion (no more than 1% deviation in width/height). I guess there was never going to be more than an outside chance in this respect!

EDIT - Just removed the canny and ellipses images. Apparently because I am a new user to the forum, it won’t let me add more than one image…

first things first.

you should change your scene to remove that shadow:

you should want that dent to be very nicely contrasting. see if you can play with the lighting to make the rest of the surface appear very differently from that dent, and the dent to look very uniform.

when you have that, getting a hold of that dent shape will be quite easy. findContours. you can do all kinds of statistics on a contour, including various measures of circularity.

no need for Hough Transform and Canny. those are nonsense.

@crackwitz - The shadow is surprisingly not created by external lighting, as it is in fact a feature of the image capture device. There is probably a good reason why this is the case and it’s unlikely I can change it directly, but I’ll try doing some pre-processing of the image to remove it that way and see if it helps.
Thanks for your reply though - I’ll try using findContours directly (without Canny) and see what I can get.

Could you elaborate on your comment on Hough Transform and Canny? Was that a general comment on those features, or are they just useless for what I’m trying to achieve?

both. for one, the image, as it is, is hard to work with, and those algorithms enhance features you DON’T want enhanced, at all.

for another, they’re newbie traps. Canny, Hough Transforms, matchTemplate. they do something that looks useful to a newbie, but really isn’t, not in the slightest. they have very narrow use cases and newbies almost always use them for everything else, because those are the only algorithms they see in random blogs and “video tutorials”. flashy nonsense makes for good blog content, which causes lots of views, which rakes in the adsense bucks. those blogs rarely teach good judgment.

it should be your heuristic to stay away from those algorithms.

and even before that, it should be your heuristic to get the best possible image as early as possible (in the pipeline). it’s like project management. fixing issues after the deliverable has been installed at the customer’s factory costs you a thousand times more than fixing it in the ideas stage. same here, fix the scene/camera/lighting, don’t wait until post-production.

“best” is a judgment newbies don’t have the experience to make yet. generally you’d have some high level (i.e. vague) plan how to get from picture to solution, and the operations you’d plan to use (or try) would inform in what aspects the picture can be judged to be better or worse.

forgive me if I don’t believe that this shadow is due to the camera/image capture device. it looks very much like a lighting artefact. I think you could only convince me of that if I saw the setup.

another heuristic newbies should have: preferring to show pictures/videos rather than using words, and sharing LOTS of data. 90% of the time people show up and don’t post any picture at all. 9% of the time they post just the picture from their camera (“inside” look), which is often impossible to make sense of. 1% of the time (honestly, never) they show what the whole setup looks like, as if a human were to inspect it from the outside.


Maybe on an image like this, the Laplacian operator would give better gradients, as it extracts the high-frequency component.
If you have a candidate center of your circle, you could iterate in several directions from this point, and check the distance to the maximum gradient value. If this distance is constant in each direction, then it’s a circle.
To begin, you can take the horizontal and vertical directions (e.g. (x,y),(x+1,y)…(x+n,y) for the horizontal component), then add the diagonals, and if needed, other directions, too.


@crackwitz

Many thanks for the helpful overview. Yes I am a newbie at OpenCV :slight_smile:

I understand your scepticism, but the camera / image / lighting is an aspect of Brinell Hardness Testing kit and is designed in such a way that the camera is embedded in a probe which sits recessed and in direct contact with the surface, obscuring ambient light. I verified this by a) manually covering the surrounding area, and b) rotating the entire rig. In both cases, it made no difference whatsoever to the captured image. Of course if the system had to work in all manner of varying ambient lighting conditions, then it would be much more difficult to get consistent results from the image analysis, hence the ‘half moon’ lighting you see in the image being one method that is used.

@kbarni - Thanks that’s great. Applying the laplacian operator first looks to be improving the accuracy of the HoughCircles detection, which was already pretty good for these images, but could be a bit intermittent (see subsequent posts below). That is a considerable bonus. But, could you clarify your suggestion:

iterate in several directions from this point, and check the distance to the maximum gradient value…

Now this seems to be the crux of what is needed. Are you suggesting a pixel-level iteration? e.g. One of the following methods
https://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html
If doing pixel iteration, then looking at the output of the laplacian operator below, I’d be nervous about the bit at the top of the circle where there is a ‘disconnect’. i.e. When exactly to stop? Is there a different type of iterator that might be more appropriate for this purpose?

Laplacian output:

Initial Hough detection (without laplacian):

Hough detection after Laplacian:

It can be something like (C++):

// we got cx,cy,r (center, radius) from HoughCircles
// let's check the horizontal distance to the left and right
int r_left_measured, r_right_measured, r_up_measured, r_down_measured;
int tolerance=20; //20 pixel tolerance
float maxl=0, maxr=0;
for(int x=r-tolerance; x<r+tolerance; x++){
    if(image.at<float>(cy,cx+x)>maxr){ //check the image type and replace <float> if needed!
        maxr=image.at<float>(cy,cx+x);
        r_right_measured=x;
    }
    if(image.at<float>(cy,cx-x)>maxl){
        maxl=image.at<float>(cy,cx-x);
        r_left_measured=x;
    }
}
//same for y => image.at<float>(cy+y,cx)
//check the distances:
if((r_left_measured==r_right_measured)&&(...check the other directions too...))
    printf("Hooray! perfect circle");
else printf("Ellipse");

For the decision part this is an over-simplified solution (those values will never be equal); it’s better to compute the average and std. deviation of the values.


@kbarni - Many thanks! I’m sure I can make that work for this application.

For interest, I fleshed it out a bit for my test harness which seems to do the job on basic testing, although I’ll need to turn it into something in C# along with likely further refinements in due course.

template<typename ELEM> void verify_circle(Mat image, int cx, int cy, int r, int range, float tolerance)
{
    // we got cx,cy,r (center, radius) from HoughCircles
    // let's check the distances to the left, right, top and bottom
    int r_left_measured = 0, r_right_measured = 0, r_up_measured = 0, r_down_measured = 0;
    ELEM maxl = 0, maxr = 0, maxt = 0, maxb = 0;
    for (int x = r - range; x < r + range; x++)
    {
        if (image.at<ELEM>(cy, cx + x) > maxr)
        {
            maxr = image.at<ELEM>(cy, cx + x);
            r_right_measured = x;
        }
        if (image.at<ELEM>(cy, cx - x) > maxl)
        {
            maxl = image.at<ELEM>(cy, cx - x);
            r_left_measured = x;
        }
        if (image.at<ELEM>(cy - x, cx) > maxt)
        {
            maxt = image.at<ELEM>(cy - x, cx);
            r_up_measured = x;
        }
        if (image.at<ELEM>(cy + x, cx) > maxb)
        {
            maxb = image.at<ELEM>(cy + x, cx);
            r_down_measured = x;
        }
    }
    const int width = r_left_measured + r_right_measured;
    const int height = r_up_measured + r_down_measured;
    const float err = (float)abs(width - height) / 2 / r;
    printf("verify dia=%d, left=%d, right=%d, top=%d, bottom=%d, width=%d, height=%d, err=%f\n", 
        r * 2, r_left_measured, r_right_measured, r_up_measured, r_down_measured, width, height, err);
    if (err <= tolerance) printf("Hooray! perfect circle\n");
    else printf("Ellipse :(\n");
}