Circle Detection Issues

I have an image I'll leave below; it's one of the few remaining issues I have been having so far.

I have tried HoughCircles, SimpleBlobDetector, and the Canny edge detector, with varying results. So far SimpleBlobDetector has been the most accurate, but it still produces outliers. In the exact image below, for example, it only sometimes recognizes and outlines the circle around the center post, and the detection constantly jumps back and forth, so it isn't very accurate.

I wanted to see if there are any better options someone can point me toward, specifically for the center post. I have gone as far as almost training my own dataset for this model alone, but that feels pointless if there is an easier way. The object will ALWAYS have the same lighting, background, and position as shown, so any ideas are welcome!

I need to find the center of the post as X and Y coordinates, because the machine needs to center over it; that is why I opted for blob detection.

(image)

Current results using blob detection:
(image)
It's not easy to see from a still image, but on a live feed the center circle is constantly jumping left, right, up, and down sporadically.

Problems I've run into:
1.) The background is almost pure white. It is slightly out of focus on the camera to keep it from causing excess noise, but it still does slightly.
2.) The center post isn't fully defined: it has two black pits in the plastic ring around it, which has caused me issues.
3.) When I finally find something that works, the live video feed causes some sporadic movement of the outlined circle, usually jumping up to 10 pixels away from the center post.

Bump. Anyone have any ideas?

Just one idea: I think you should focus as exactly as you can, as the sharpness of the input image gives you the most information to work with, and the best possibilities for testing any parameters, for example the strength of denoising, if any.


stay away from Hough and Canny. they are newbie traps.

there are some algorithms to locate circular features. some of them are Hough-based (among them the “radial symmetry transform” family). others are not.

resolution looks marginal.

if you can, work with absolute contrast instead of edges: change lighting, scene, anything physical, such that background is white/black and foreground is black/white.

don’t try to locate two circles for the same object. I don’t see any point in that.

you want optical inspection, right? then you need to approach this more like Machine Vision.

that means fixing the position of your object, so it has very little variation. then you can locate it precisely with very little effort. typical MV applies 1D line sampling on the picture (from multiple sampling lines arranged like rays in a circle) and then extracts relevant features from those 1D signals, such as maximum gradients. then those points are passed to a circle fitting function, which runs quickly because it’s just a few points.
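a minimal sketch of that ray-sampling idea (the function name and rough-center inputs are placeholders; assumes a grayscale image and a coarse initial estimate of the circle location):

```python
import numpy as np

def locate_circle_radial(gray, cx, cy, r_min, r_max, n_rays=64):
    """Sample 1D intensity profiles along rays from a rough center,
    take the strongest gradient on each ray as an edge point, then
    least-squares fit a circle to those points."""
    pts = []
    radii = np.arange(r_min, r_max, 0.5)
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        xs = cx + radii * np.cos(theta)
        ys = cy + radii * np.sin(theta)
        ok = (xs >= 0) & (xs < gray.shape[1]) & (ys >= 0) & (ys < gray.shape[0])
        profile = gray[ys[ok].astype(int), xs[ok].astype(int)].astype(np.float32)
        if len(profile) < 3:
            continue
        g = np.gradient(profile)          # 1D gradient along the ray
        i = int(np.argmax(np.abs(g)))     # strongest edge on this ray
        pts.append((xs[ok][i], ys[ok][i]))
    if len(pts) < 3:
        return None
    pts = np.asarray(pts)
    # algebraic circle fit: x^2 + y^2 = c0*x + c1*y + c2
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    b = pts[:, 0] ** 2 + pts[:, 1] ** 2
    c0, c1, c2 = np.linalg.lstsq(A, b, rcond=None)[0]
    fx, fy = c0 / 2, c1 / 2
    return fx, fy, np.sqrt(c2 + fx ** 2 + fy ** 2)
```

since it only samples a few dozen rays and fits a handful of points, this runs far faster than scanning the whole image.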

Is the feature you are trying to find (the center post) always in the same relative position compared to the outer circle, or do those move relative to each other? I ask because the outer circle seems to provide a lot of contrast (not to mention that it is larger / has more samples to fit a circle to). If you can find the large/high contrast circle and use that to determine the location of the center feature, you might have better luck.

You mention that the object isn’t well focused and that this is an attempt to reduce noise. If it’s sensor / imaging noise you are trying to address, you might consider using a lower gain / “iso” setting (which will require a longer exposure time). Or possibly a different lens (if that’s an option). If you don’t have control over the exposure settings, you could try averaging multiple (3? 5?) images together.
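If you go the averaging route, a minimal sketch (assuming an opened cv2.VideoCapture):

```python
import cv2
import numpy as np

def averaged_frame(cap, n=5):
    """Grab n frames from an opened cv2.VideoCapture and return their
    pixel-wise mean, which knocks down random sensor noise by ~sqrt(n)."""
    frames = []
    for _ in range(n):
        ok, frame = cap.read()
        if ok:
            frames.append(frame.astype(np.float32))
    return np.mean(frames, axis=0).astype(np.uint8)
```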

I had to solve a similar problem and tried to use the circle finding routines in OpenCV, but ended up making my own routine. It is slow*, but provides very accurate results for my use case.

  1. run canny edge detector**
  2. findContours
  3. filter contours based on aspect ratio, arclength, minEnclosingCircle size - the idea here is that you might get a lot of false positives from noise etc., so filter them early if possible.
  4. Fit a circle to the contour points - I used the “Modified Least Squares” method from this paper: http://www.cs.bsu.edu/homepages/kerryj/kjones/circles.pdf Equations II.8 - II.15

*Slow depends on many factors, and in my case I'm running on fairly slow (compared to a desktop CPU) embedded hardware, have variable lighting, and am using high resolution images. If you have a good estimate of where the circle will appear in the image, you can speed up processing time by using a small ROI instead of the full image. If your lighting / contrast is consistent, you might be able to tune your algorithm to your specific use case. I was able to make significant improvements by using an ROI and also by doing an initial pass on a downsampled image (which then gave a very good estimate for my full-res ROI).

**I used canny and it produces good results, but I probably could have gotten away with a simpler edge detector. The big problems with Canny: It’s slow, and it has a magic parameter that’s rather sensitive. Since I have variable lighting, I have to try a range of canny magic numbers and use the one that generates enough edges/contours, but not thousands of contours. It’s not particularly efficient (not a big deal for my use case), but it does provide very good results (which is important for my use case).
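Roughly, in code (a simplified sketch, not my production code; the thresholds are placeholders, and I've substituted a generic algebraic least-squares circle fit for the paper's Modified Least Squares equations):

```python
import cv2
import numpy as np

def fit_circle(pts):
    """Algebraic least-squares circle fit: x^2 + y^2 = c0*x + c1*y + c2."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones(len(x))])
    c0, c1, c2 = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
    cx, cy = c0 / 2, c1 / 2
    return cx, cy, np.sqrt(c2 + cx ** 2 + cy ** 2)

def find_circles(gray, canny_hi=120, min_len=50):
    edges = cv2.Canny(gray, canny_hi // 2, canny_hi)   # the "magic" parameter
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    found = []
    for c in contours:
        if cv2.arcLength(c, False) < min_len:          # drop short/noisy bits
            continue
        x, y, w, h = cv2.boundingRect(c)
        if not 0.5 < w / float(h) < 2.0:               # roughly 1:1 aspect
            continue
        (ex, ey), er = cv2.minEnclosingCircle(c)
        if er < 10:                                    # too small to matter
            continue
        found.append(fit_circle(c.reshape(-1, 2).astype(np.float64)))
    return found
```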

So here's the deal on the object: the outer circle should always stay consistent at around 10.5 mm diameter, so I am using that for a px-per-mm conversion.

The center post can vary, as it's not super stable; it can move from the center by about 0.5 mm in any direction (which is why I was trying to grab its location inside of the outer circle).

I was using this to locate the largest gap between the center post and the outer ring, to find the ideal spot to insert a filling needle, which in most cases has only about 0.3 mm of room for error before crashing into the center post or the outer edge.
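For illustration, here's a rough sketch of that gap geometry (the helper name is made up; everything is in pixels, converted afterward with the px-per-mm scale):

```python
import numpy as np

def needle_target(outer_c, outer_r, post_c, post_r):
    """Pick the insertion point in the widest part of the annular gap:
    midway between the post wall and the outer wall, on the side the
    post has drifted away from. All units are pixels."""
    O, P = np.asarray(outer_c, float), np.asarray(post_c, float)
    d = O - P
    n = np.linalg.norm(d)
    u = d / n if n > 1e-6 else np.array([1.0, 0.0])  # centered post: any side
    edge_post = P + post_r * u       # post wall on the widest side
    edge_outer = O + outer_r * u     # outer wall on the same side
    target = (edge_post + edge_outer) / 2.0
    clearance = np.linalg.norm(edge_outer - edge_post) / 2.0
    return target, clearance         # divide by px-per-mm for real units
```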

So far I took most people's advice and used the outer edge to calculate how far to move. This is working, but about 2 times out of 10 it will hit the outer edge or inner post, causing a scary moment where the needle bends and then finally slips into the fill area. That isn't ideal, because I don't want to worry about the needle crashing, damaging the object, or missing it when filling.

I have also been told I need a more stable jig to hold everything in a fixed position. Currently everything sits in styrofoam trays, which allows even more play; that is good for when the needle crashes but bad for correct alignment.

I have tried to optimize everything I can control, such as lighting, distance from the object, and focus, but it seems a bit more complicated with a mostly white scene: when converted to grayscale, most things in my ROI fall in the 50-200 range, so it's hard to separate exactly what I want from what I don't.

There is clearly a lot about this specific project that I don’t know, so I’m just shooting from the hip. Things I notice:

  1. the resolution is low - can you use a different camera? A different lens with a narrower field of view?
  2. The image appears compressed - is this the image you are processing, or is this just for show and tell? If you are operating on a compressed image, you would be better off using an uncompressed image.
  3. You said that the circle in the video feed "bounces around" - is the average position of the bouncing circle pretty close to what you want? If so, consider using an average / median position of the bouncing circle (see the sketch after this list).
  4. Is the position of the center post always left/right of center? (along the axis of the black rectangular "hole") If so, maybe you would do better to find the black regions and choose the one with the larger area to position your needle.
  5. Is your camera calibrated with respect to the workpiece (and the positioning system)? By this I mean, if you can identify the position in the image where you want to position the needle, can you determine precisely how to command the positioning system? Even small unknown rotations or translations can be problematic.
  6. Can you modify the part in any way? Make the center post a different color? Augment the center post with a small centered circle.
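Regarding point 3, a minimal smoothing sketch (the class name is made up; a median over the last several detections rejects single-frame jumps, at the cost of a little lag - the window size trades stability against how quickly the reported center follows real motion):

```python
from collections import deque
import numpy as np

class CenterSmoother:
    """Median of the last n detected centers; rejects single-frame jumps."""
    def __init__(self, n=9):
        self.history = deque(maxlen=n)

    def update(self, cx, cy):
        self.history.append((cx, cy))
        mx, my = np.median(np.asarray(self.history), axis=0)
        return mx, my
```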

Currently I am running the camera at 1280x720, 60 fps, using this camera: https://www.amazon.com/ELP-Camera-Megapixel-Windows-Android/dp/B00KA7WSSU
I'm sure I can get different lenses, as I know the stock lens is a wide-angle one; I just wasn't sure what would be correct for this project.

I'm currently using a calibration YAML file to undistort the image; here are the normal and undistorted images side by side.


(Distorted Cap Frame)

(Cap without distortion)
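For reference, the undistort step looks roughly like this (a sketch, not my exact code; the YAML node names and file names depend on how the calibration file was written):

```python
import cv2

# Node names and file names are assumptions; adjust to match your YAML.
fs = cv2.FileStorage("calibration.yaml", cv2.FILE_STORAGE_READ)
K = fs.getNode("camera_matrix").mat()
dist = fs.getNode("distortion_coefficients").mat()
fs.release()

frame = cv2.imread("cap_frame.png")
h, w = frame.shape[:2]
# alpha=0 keeps only valid pixels, avoiding the curved black borders
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 0, (w, h))
undistorted = cv2.undistort(frame, K, dist, None, new_K)
```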

It's pretty close. The problem I am having is getting something consistent: I even tried raising the "min repeatability" on blob detection to 10-20, and it seems 7 was the sweet spot for consistency without losing detection altogether.
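For context, the setup looks something like this (a sketch, not my exact code; minRepeatability = 7 is the sweet spot mentioned above, and the other thresholds are placeholders):

```python
import cv2

params = cv2.SimpleBlobDetector_Params()
params.minThreshold = 10
params.maxThreshold = 200
params.minRepeatability = 7            # raising this rejects unstable blobs
params.filterByArea = True
params.minArea = 100                   # px^2, tuned to the post's size
params.filterByCircularity = True
params.minCircularity = 0.7
detector = cv2.SimpleBlobDetector_create(params)

gray = cv2.imread("undistorted_roi.png", cv2.IMREAD_GRAYSCALE)  # assumed input
for kp in detector.detect(gray):
    print("center:", kp.pt, "diameter:", kp.size)
```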

The center post can lean in any direction through 360 degrees. It's pretty rugged, but it does have give if it has been bumped previously.

Currently this is set up on an Ender 3 V2 3D printer with minimal modifications: an added camera, lighting for the camera, and a Raspberry Pi controlling it through G-code commands. By pulling the px-per-mm scale from the outer circle, it consistently seems to perform accurate movements on the X and Y axes. Z is fully controlled by the program independently, as it always has to return to the correct height for the camera to be in the right position.
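The movement side is basically a px-to-mm conversion pushed out as G-code, something like this sketch (the port name, feed rate, and axis signs are assumptions that depend on how the camera is mounted relative to the printer axes):

```python
import serial  # pyserial; assumes the Pi talks to the printer over USB serial

def center_over(dx_px, dy_px, px_per_mm, port="/dev/ttyUSB0"):
    """Convert a pixel offset (target minus current image center) into a
    relative G-code move. Port, feed rate, and axis signs are assumptions."""
    dx_mm = dx_px / px_per_mm
    dy_mm = dy_px / px_per_mm
    with serial.Serial(port, 115200, timeout=2) as s:
        s.write(b"G91\n")  # relative positioning
        s.write("G0 X{:.3f} Y{:.3f} F1200\n".format(dx_mm, dy_mm).encode())
        s.write(b"G90\n")  # back to absolute
```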

Honestly I would rather not modify the part physically, but if you have any ideas I'm all ears. I consider myself beginner to intermediate in a lot of programming; I learn what I find relevant to my projects, learn it as well as I can, then apply it and learn from my mistakes. I would post my code, but from what I said above you can imagine it's a one-loop wonder... not very professional by any means, but it does work, which is what I am happy about so far.

To add a little more info, I am trying to make a budget DIY build of this: Vape-Jet | Fully-Automatic Cannabis Cartridge Filling Machine
The cost of one of these is $35,000, and I feel like there are better DIY options at much cheaper prices, but it seems no one has taken on the feat of making one without a high cost.

I tested that camera (or something equivalent), but it wasn't suitable for my application, so I don't know much about it specifically. (Not suitable doesn't necessarily mean not good; I had some specific needs that most vendors couldn't support.) I use an AR0330-based camera, which has similar SNR and DR, but it looks like the OV2710 might have better low-light sensitivity. I'm not familiar with that lens, but from the full image you provided it looks like it has a lot of vignetting, so it might not really be appropriate for that sensor (which appears to be fairly large at 5.8mm x 3.3mm, 1/2.7" format; the pixels are 3.0um, which is huge by today's standards).

I suspect you could get away with a much narrower FOV (longer focal length) which would give you much higher resolution in the area you care about. Probably something in the 4.0-6.0mm range, if not even larger.

How did you get the calibration data? Is it a file the manufacturer provided, or something you generated from a calibration procedure? Frankly it's not great, assuming the rows/columns of cartridges are linear in the world. It might not matter much, but it could be a source of error for your positioning system.

I have done something very similar to this, and I’m able to get very accurate positioning - something on the order of 0.025mm / 0.001"

My suggestions:

  1. Get a different lens that is suitable for your sensor. The image circle for your sensor is fairly big (1/2.7" format) and there are a lot of good lenses that won't cover the whole sensor (or will have vignetting), so you might be stuck with a 1/3" format lens. Probably OK for your use case since (I think) you care about the central part of the image and not the edges. Depending on how far away the camera is (or can it be adjusted?) from the parts, you might consider getting a much narrower FOV lens and placing the camera further away. This will give you a better DOF so more can be in focus, if that's a factor. You should be able to find an F2.0 or faster lens - I'm mentioning this because you said noise was a factor, so more light gathering might be good for you (although since you can control the lighting and appear to have a sensor with good sensitivity, this probably shouldn't be a big concern).

  2. Get good intrinsic calibration for your camera.

  3. You say the X,Y movements are pretty accurate, but I’d want to calibrate the camera (extrinsics) to the XY motion somehow. If you have calibrated intrinsics you can do something like:

  • Put a single circle target on the machine bed somewhere, at the same height as the workpieces. Move (with the XY motion system) the camera over the circle so it is imaged in various locations.
  • Using the relative positions of the XY and the corresponding (accurately undistorted) image locations, compute a homography that maps between the two spaces. Basically you are using one fixed circle and your XY positioning system to create a calibration target with multiple points aligned to your XY control axes.
  • Use this H to compute the motion you need so the detected circle ends up at the desired location in the image.

(This might sound unnecessary / overkill, but based on your positioning tolerances I suspect you will want to do this. The ability to accurately move to something you see in the image is important (and would allow you to mount the camera with arbitrary rotations / translations))
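A sketch of what that could look like (the coordinates are placeholders; replace img_xy with the dot centers you actually detect, and let cv2.findHomography / cv2.perspectiveTransform do the heavy lifting):

```python
import cv2
import numpy as np

# One fixed dot on the bed, imaged from several commanded XY stage positions.
stage_xy = np.array([[0, 0], [10, 0], [20, 0],
                     [0, 10], [10, 10], [20, 10],
                     [0, 20], [10, 20], [20, 20]], np.float32)   # mm moves
img_xy = np.array([[640, 360], [412, 358], [185, 356],
                   [642, 588], [414, 586], [187, 584],
                   [644, 816], [416, 814], [189, 812]], np.float32)  # placeholder detections (undistorted px)

# Homography mapping image px -> stage mm. The camera moves while the dot
# stays fixed, so the dot's image position shifts opposite to the stage.
H, _ = cv2.findHomography(img_xy, stage_xy, cv2.RANSAC)

def image_to_stage(pt_px):
    """Map an undistorted image point to a stage XY position (mm)."""
    p = cv2.perspectiveTransform(np.array([[pt_px]], np.float32), H)
    return p[0, 0]

print(image_to_stage((413, 472)))   # roughly the middle of the sampled grid
```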

What you want to achieve is totally doable, and you might be able to get away with something less involved, but I suspect you’ll have better success if you get a well calibrated system to work with.

-Steve

I used the circle-grid calibration to get a calibration file. It is noticeably better when looking at straight lines: with calibration the straight edges are straight, but without it they're very obviously curved.

I do believe a smaller FOV will be beneficial, as I am currently cropping the frames down to just the ROI I need, which is a LOT smaller than what I'm currently capturing.

Any references or tutorials on this, I'd be willing to try. Having the camera mounted to the machine seems to limit how accurate a calibration I can get without removing it first, as I am limited to the workspace area of the Ender 3 V2 it's mounted to.

This seems quite advanced but interesting. Currently I do not think I have much room to make it work: with a full tray on the bed I am left with about 2" of movement space on any side of the tray. I was using the cartridge opening as a reference for the mm-per-px conversion, which is working well, and when I need to advance to the next cartridge I use the previously stored variable to make the move with G-code, which has been pretty accurate so far.

Of course, I believe this fully. I am by no means a camera expert, so I have little understanding of the actual technical side, such as focal lengths and lenses, but I feel like this project may help me understand these better than any YouTube video could, which is the goal; I love to learn by doing. Honestly, I was going to order a set of M12 lenses, as they go for fairly cheap, and was told this may be the best way to just test and see what works for what I'm trying to get out of the camera. I'm not usually one for brute-forcing parts; usually I like to diagnose and figure out exactly what I need, but this seems to be more of a trial-and-error type of problem without a deep understanding of these parts.

Forgot to mention: the distance can be adjusted, but due to travel times I have a practical min/max distance of about 35-45 mm from the part.

Took your advice and went ahead and got another lens, as the current one does seem to be causing most of my detection problems. Now that I have reconfigured the system to bring the camera a little closer (the distance mentioned above), I cannot seem to get the camera to focus decently at all with the wide-angle lens, so I have ordered this one. The only major difference I see is that it is a 1.3 MP lens and my sensor is 2.0 MP, but from a quick study online it seems 1.3 MP will work fine with 720p video as long as I don't plan on viewing in 1080p:

After hours of trying to learn how lenses work with sensors (most info online pertains to large sensors like those of photography cameras), I came to the conclusion, with the info you provided, that this is probably the best option to try. I'm just not sure about the working distance, as it isn't mentioned in the specs, but using an online calculator with my sensor and lens specs I came up with this info at a 40 mm working distance, which seems fine as my ROI is about a 1" x 1" area at 40 mm.
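(For reference, the thin-lens estimate behind those calculators is simple enough to run directly; a rough sketch only, using the 5.8 x 3.3 mm sensor size quoted earlier in the thread:)

```python
# Thin-lens sanity check for lens choice. Sensor size is the 5.8 x 3.3 mm
# (1/2.7") figure quoted earlier; everything else is an input.
SENSOR_W_MM, SENSOR_H_MM = 5.8, 3.3

def fov_at(distance_mm, focal_mm):
    """Approximate imaged area (width, height) in mm at a working distance.
    At distances this short the approximation is rough - treat it as a
    ballpark, not a guarantee."""
    m = focal_mm / (distance_mm - focal_mm)   # thin-lens magnification
    return SENSOR_W_MM / m, SENSOR_H_MM / m

print(fov_at(40, 6))   # a 6 mm lens at 40 mm: roughly 33 x 19 mm of scene
```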


Let me know if you see any errors in my ideas. Thank you!

I don’t have experience with that particular lens, but some comments:

  1. It’s a very fast lens (f1.4) which is great for light gathering, but such a large (small numerical) aperture will reduce your depth of field, so maybe that’s why you are having focusing problems (particularly if your object has some depth to it)
  2. I don’t see a minimum object distance in the specs for that lens, but in my experience 40mm would be very close. Most of the lenses I use have a MOD of 10cm or more. If I understand correctly you were having trouble focusing at 40mm distance with your wide angle (original) lens - this doesn’t surprise me. I also think you might have trouble at 40mm with the lens you ordered (again, I didn’t see that listed in the specs)
  3. The 1.3MP lens might be fine for you, especially since you are using 720p resolution, but in my experience higher optical resolution is almost always better. First of all the manufacturers tend to overstate the optical resolution - maybe picking the central area of the lens for their test, and not telling you that it’s much worse at the edges. I use a 3MP sensor and can tell a significant difference between the 5MP lens we use for some products and the 12MP lens we use for others. They are similar FOV, but the 12MP lens produces a much better and more consistent image. It’s not just the optical resolution, it’s a better designed / more sophisticated lens. It’s also about 6x the cost ($25 vs $4 in volume)
  4. This is a secondary consideration, but you should keep it on your radar. The chief ray angle varies from lens to lens - roughly speaking it is the angle of incidence of the light on the sensor - this is important because sensors are designed to work with certain CRA ranges, and if you get a bad mismatch your image quality will suffer. I'm not an expert on it, but I think a mismatch will result in vignetting, color shifts, and reduced sharpness - probably affecting the perimeter more than the central part of the image.
  5. If you have trouble finding a lens that will focus at 40mm, consider moving the camera further away and either accepting the larger imaged area, or selecting a lens with an even longer focal length.

The best supplier I have found is AICO in China. They do sell small quantity samples, but with shipping etc. the prices might be high. Also they are oriented at volume buyers, so they might not work with you:

They are pretty expensive as “lenses from China” go, but they are the best I have found.

If I get a chance I will look through my lens library and see if I can recommend something. Are you able to change the mount on the camera? Is it a plastic mount with two mounting screws? 18mm spacing? 20? 22? I ask because the lenses I have used with a close focusing distance are M8 lenses, so they require a different mount.

Edit: The OV2710 appears to have a CRA of 23.6 deg (I didn’t see a CRA spec for the lens you ordered)

Why chief ray angle matters: https://www.opticsdan.com/post/__cra

For depth, I'm mainly trying to get the top of the object in focus, so depth of field isn't a huge concern as long as I can get the highest points in focus.

I'm not sure either; this is new to me, as I thought it just came down to seeing more or less of the object / FOV when closer to or further from it. It does make sense that there should be a minimum distance, which I guess will be determined when I receive the lens in a day or so.

Hmm, I will have to see how bad it is then. I understand this as similar to running videos at 4K resolution even on a 1080p monitor, because you can still see a difference in quality.

I'm not even going to act like I know what any of this means, such as CRA ranges; I have no clue, but it sounds like a pain to deal with. I guess I'll cross that road when the new lens arrives and see how different it is; again, I'm shooting from the hip on this one.

This is possible, but it would increase job time a good amount, as the Z axis (up and down) does not move very fast on the Ender 3 V2. Having to move higher each time it plunges will drastically increase job times unless I can find a way to increase the Z-axis motor speed without causing other problems, which I'm positive is possible; I just haven't experimented with how fast I can actually make it go.

I'm not 100% sure on the spacing of the mount holes at the moment, but yes, it does seem to use two screws with a plastic mount. I was also worried the mount may not be tall enough to use the 6 mm lens with enough clearance to adjust it closer to or further from the sensor; I'm not familiar with how that works, but I'm assuming I should have enough travel for it to work decently. If not, that will be the next part I invest in.

I reviewed this but don't 100% understand it. It seems the only reason this would be a problem is if the lens were offset from the sensor a fairly decent amount; in the diagram it seems that as long as it's lined up or centered reasonably, CRA shouldn't be a problem, based off that graphic alone.

I will also check out that AICO company. I may have to convince them I need to test some samples to get a few that may work for my needs without putting in a volume order; kind of lame to do that, but worth it for me if they're as good as they sound, since the website I ordered from seems to be the only online retailer I found that doesn't charge $150+ for a single lens... sort of like Edmund Optics, whose lenses seem ridiculously priced, but then again I'm no expert and they're probably way more involved than I know.

Maybe that link I provided wasn't clear or didn't have sufficient context, but it isn't about the lens being centered over the sensor (the diagrams in that linked article are of microlenses situated above individual pixels - they are offset intentionally to help account for a specific CRA range.)

The issue is that as the light passes through the lens and is focused on the sensor, the incident angle varies depending on the pixel location and the lens design. The rays hitting the center pixel of the sensor will arrive perpendicular to the sensor, but for pixels at the edges, the rays will not be perpendicular. The problem with non-perpendicular rays is that the physical structure of the sensor array reduces the amount of light that reaches the actual pixel as the angle of incidence increases (the more it deviates from perpendicular to the sensor, the less light the pixel sees). To account for this the sensors have a microlens array on top of the sensor - the position of each microlens (one per pixel) is shifted a varying amount depending on the pixel location within the sensor. This intentional shift is designed to account for a specific CRA value, and tends to work for some range of values. If the CRA of your lens doesn't match the CRA that the sensor is designed to correct, then your image can suffer.

As I mentioned this is a secondary consideration - if you stick with fairly normal lenses, you will probably be OK. But if you end up struggling with bad vignetting or other imaging issues, it might be good to compare the CRA of the lens to the sensor.

OK, yeah, I completely misread what the diagram was actually showing; I took "microlens" as the actual lens I was using. Definitely something I'll consider from now on as a precautionary compatibility check; definitely not something I even knew existed until now.