Increase difference between two poorly distinguishable colors

Hello everyone,

my task is to segment metallic objects on a red background (left image).

My approach so far is to convert the image to HSV color space (middle image) and cluster it using k-means (right image). Then I remove the two background clusters (colors 1 & 2).

Unfortunately, the area of the screw head is often assigned to the background cluster due to the small color difference and thus deleted.

Do you have a suggestion on how to make the colors more distinguishable within preprocessing? Or perhaps you have a better approach to segmenting the screw while maintaining its shape?

Thank you for your time and best regards,

lukaskofler

img

If you already know the background color, there is no need for automatic (k-means) clustering. You can set the thresholds manually (if the lighting is constant).

The best way to increase the difference between object and background is to take better photos. Try to make sure that the object doesn’t reflect the background.
If you can control the scene, you can put the screw on translucent glass and use a backlight, to get a black object on a white background.

are you looking at saturation at all? because red (background) is clearly different from gray (screw head)

show your code, make your problem debuggable

Backlighting on a glass surface is a great suggestion. If that’s not an option, maybe change the background to something black and matte. Illuminate it from multiple angles? Can you illuminate it with a different color light (green, maybe?) to provide more contrast / reduce reflections?

Thank you all for your replies and your suggestions.
I apologize for being too vague about my problem. Unfortunately, it is not possible to change the camera angle, camera quality, or background.

The metal objects are to be segmented from many similar images with slightly varying red tones in the background, but also with significantly different lighting conditions (see image). The lighting conditions cannot be controlled.

Because of this, the hue and saturation thresholds for segmentation differ between images, which is why I tried the k-means approach to obtain comparable clusters across images (with the problem of the screw heads being assigned to background clusters).

As long as the rough shape and the most important edges of the screw (incl. head) are preserved, it would not matter if finer details disappear.

Would you have any suggestions on how to achieve robust segmentation even under varying lighting conditions?

@crackwitz
Apart from the trivial call to the k-means function, I don’t have any other relevant code, so there is nothing to share, sorry.

img

that right there is a bad picture because the orange-ish background is reflected in one of the faces of the screw’s head. you can’t do anything with that picture except extract a somewhat correct mask/contour/outline, which will be good around the threads and bad around the head.

nothing can solve that picture… except maybe AI, and even that will struggle.

tell your customer/boss, given the constraints, the task is impossible in the mathematical sense. likely your customer/boss then realizes that their wishlist is a wishlist and not actual constraints. it’s a negotiation.

saturation:

image

that should help with segmentation.