Adaptive threshold parameter estimation

My project requires choosing the right cv2.adaptiveThreshold() parameters (blockSize and C) for many images. The data consists of 21 directories with approximately 100 frames each, and every frame is unique. After thresholding, the result should be a binary image in which the crystals (the white spots) are separated from the background (black). This thresholded image is then used to find contours.
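For reference, the pipeline looks roughly like this (the filename and the blockSize/C values are placeholders; those two parameters are exactly what I have to tune by hand):

```python
import cv2

# placeholder filename; every set has ~100 frames like this
img = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)

# blockSize must be odd; these values are just one hand-tuned guess
binary = cv2.adaptiveThreshold(
    img, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    cv2.THRESH_BINARY,
    blockSize=51,
    C=-5,
)

# the binary image is then used to find the crystal contours
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```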

Right now, I manually search for the right combination of blockSize and C and apply those parameters to all frames of a single set. This means endlessly fiddling with the parameters, once for each of the 21 sets.

I can imagine this would be possible using gold-standard (ground-truth) images and searching for the best similarity, but drawing the gold standards myself is time-consuming.

Is there a way to quickly calculate the right blockSize and C for every single image (or, failing that, for one image per set) without resorting to gold-standard similarity?

See below for one frame to give you a feel for the data. To get rid of the vignette, I cropped the image below using cv2.selectROI() so that only the middle part is used for analysis.

welcome.

why do you do a threshold? what’s the goal? question your chosen solution. because we will. because that’s the most common issue people have, especially when they don’t know it.

I’m sorry if that was not clear; I’ve edited the post.

Basically, I need to track the contours of the white crystals, and to find those contours with cv2.findContours() I first need a thresholded image. Finding the contours works, but for that I need proper threshold parameters for cv2.adaptiveThreshold().

your picture shows a “vignette”.

that’s gonna be a problem.

what can you do to remove that before light hits the camera’s sensor?

During data acquisition this vignette is minimized by applying enough immersion oil. New data acquisition sessions to reduce vignetting even further are not an option.

That said, for now I cropped the image using cv2.selectROI() so that only the middle part is used for further analysis.
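The crop step is just something like this (window name and filename are placeholders):

```python
import cv2

img = cv2.imread("frame_0001.png")              # placeholder filename
x, y, w, h = cv2.selectROI("select ROI", img)   # drag a box, press ENTER
roi = img[y:y + h, x:x + w]                     # keep only the middle part
```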

I’d recommend a “difference of gaussians” to remove the vignette, where one of the two gaussians can be skipped (just use the source image itself).

calculate a strong lowpass (gaussian, median, box blur, anything) with σ much greater than 10, then subtract that from the source image. values will vary around 0, so work with floats or signed integers.

this will let you catch those faint contrasts away from the center.

this method is basically an “adaptive threshold”.
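a minimal sketch, assuming grayscale input; the sigma and the final threshold value are guesses you’d tune per dataset:

```python
import cv2
import numpy as np

# placeholder filename; work in float so values can go negative
img = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# strong lowpass estimates the vignette/background; sigma much greater than 10,
# ksize=(0, 0) lets OpenCV derive the kernel size from sigma
background = cv2.GaussianBlur(img, (0, 0), sigmaX=51)

# subtract the background; result varies around 0, hence the floats
highpass = img - background

# anything clearly above the local background becomes foreground
# (the value 5 is a guess, not a tuned parameter)
mask = (highpass > 5).astype(np.uint8) * 255

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```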

it’s hard to make out in your picture what’s going on. there are some seemingly bright pieces, then some reflections, …

plainly said, the picture is bad. magic (post-processing) will not save it. garbage in, garbage out. the acquisition needs to be rethought.


Instead of doing a binary threshold, look into edge-detection methods (Canny, Sobel, etc.). Once you have the edges, mask them with the original input image to get the crystals. A single threshold value will probably not work for all the frames in this case.
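A rough sketch of that idea (the Canny hysteresis thresholds and the closing-kernel size are guesses you’d need to tune):

```python
import cv2

# placeholder filename
img = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)

# edge detection; the two hysteresis thresholds are guesses
edges = cv2.Canny(img, 50, 150)

# close small gaps so the crystal outlines become connected regions
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# mask the original image with the edge regions to isolate the crystals
crystals = cv2.bitwise_and(img, img, mask=closed)
```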