Using OpenCV for greenhouse crop detection & infestation

If I understand correctly, do I need to make the training image backgrounds black, or the testing image backgrounds?

No, I’m suggesting that you use color analysis as the primary process.

Then use GLCM as a -secondary- layer to sort out ambiguous results.

If you read my previous post with this in mind, it may become more clear.

As a start, how about trying to figure out which pixels are background, and replace them with black. See if you can get that far for now.
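As a rough sketch of that first step, assuming the background is near-white as in these greenhouse shots (the 190 threshold is a guess you would tune per image):

```python
import numpy as np

def black_out_background(rgb, thresh=190):
    """Replace near-white background pixels with black.
    A pixel counts as background when all three channels are bright."""
    img = rgb.copy()
    background = (img >= thresh).all(axis=-1)
    img[background] = 0
    return img

# Synthetic 2x2 image: one white background pixel, three leaf-ish pixels
demo = np.array([[[255, 255, 255], [40, 120, 30]],
                 [[60, 140, 50], [90, 70, 20]]], dtype=np.uint8)
out = black_out_background(demo)
```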

I understand now. Do I need to turn the training images' backgrounds to black?

You’re getting a bit ahead. Just see if you can identify background vs foreground in any image. Things have a way of coming into focus after you figure out the first steps.

For the color histogram:

https://www.pyimagesearch.com/2014/01/22/clever-girl-a-guide-to-utilizing-color-histograms-for-computer-vision-and-image-search-engines/

I probably need deep learning to remove the background of these kinds of images, I think.

Jeffin, If I thought that was necessary, I would have advised you to go that route. Deep learning / CNNs have a rather steep learning curve, and most apps require a lot of training data. On top of that, you’ve got to make sure that the CNN’s notion of features is the same as yours. Your images are not distinct, like elephant vs giraffe, and they are rather amorphous. So there won’t be as much for a CNN to work with feature-wise. And it’s a black box. If it fails, it’s often very difficult to tell why.

And finally: The process you’re referring to is called ‘segmentation’. There are two primary forms of segmentation in deep learning: Semantic segmentation, and Instance segmentation. The notable example of the latter is called “Mask RCNN”. Guess what it requires for training images? It needs a copy of your main images with all the background pixels blacked out. That’s the ‘mask’ part. So you’re back to the process that I suggested anyway.

Having said all of that, I don’t want to discourage you. If you’ve got time, then try it. The two books that I’d recommend are “Hands-On Machine Learning” by Aurelien Geron, ISBN 1492032646, and “Deep Learning with Python” by Francois Chollet, the author of the Keras library for TensorFlow. (Check Manning.com for news on the 2nd edition.) PyTorch is also cool. You’ll find info on Mask RCNN for either platform.

Once your model is trained and working under Keras/Tensorflow or PyTorch, you could save off a file with coefficients that can be loaded in OpenCV’s DNN module. But aside from that, you’ll probably need to follow up on a deep learning forum.

Good luck!

Uff, I’m totally confused. What I mentioned above is that I probably need AI to remove the image backgrounds and replace them with black; that’s what I meant. I’ll go with what you said.

“AI” is not synonymous with deep learning. DL, in the case of images, usually implies a convolutional neural net. And in the context of CNNs, segmentation (which is what you’re asking about) requires a lot of training data. You’re not just asking whether there is a dog or cat somewhere in the image, like the simple neural net demos.

BTW, ‘segmentation’ just refers to separation of foreground (stuff you’re interested in), and background (stuff you’re not). It can be accomplished in a lot of ways without neural nets. The method that I recommended should be one of the quickest to code: Find pixels that have the range of colors that you’re interested in (green to brown). All other pixels are background. It’s kind of like the opposite of using ‘green screen’ in video effects.

If you’re confused about the deep learning side, pick up the books that I recommended. They cover the ground. There are some great books on OpenCV that cover segmentation without using neural nets.

But also re-read the replies in this thread. There’s enough there for you to make sense of this.

Okay, let me work on it. Thank you.


Result of the converted image. I just replaced the white pixels with black. The black dots on the leaves are not a good sign, because the GLCM takes them for mites or something.

What’s your opinion about GrabCut?

Debug:
What was the process used to achieve that?
What is it that caused the black specks?
How did the bright blue escape (top left)?

I simply replaced white pixels with black. Do I need to use image matting or GrabCut?

Don’t worry about the small artifacts yet. Rather than replacing white pixels, replace ‘everything that is not green’. See what happens. Then artifacts can be addressed by a number of methods: Median filtering, Bilateral filtering, Morphological ops (open, close) etc.
But you need to see what you’re working with first.
And post the ‘before’ pics as well as the ‘after’.

before


after

Here is the code

from PIL import Image

img = Image.open('D:/ai training/aphids/126APPLE/IMG_6105.JPG')
img = img.convert("RGB")

new_image_data = []
for r, g, b in img.getdata():
    # treat bright near-white pixels as background and replace them with black
    if r >= 190 and g >= 190 and b >= 190:
        new_image_data.append((0, 0, 0))
    else:
        new_image_data.append((r, g, b))

# update image data
img.putdata(new_image_data)

# save new image
img.save("D:/test_image_altered_background.jpg")

# show image in preview
img.show()

Here is a situation that is definitely going to happen: the image in my hand is of a plant infested with mites. They actually make white textures like this, so when I turned the white colors to black, the mites’ white texture was gone.

@StringTheory I think I’ve reached some point.

No idea why flesh tones would be escaping your filter. I mean, there’s such a thing as ‘having a green thumb’, but those are your fingers. :)

I’m going to let you take this for a while until you can figure out the ‘Not-Green’ filter. But here’s something to work on:

The R, G, and B values in each pixel can be regarded as dimensions, like width, height, depth. So consider that as a three-dimensional array that looks like a cube. The cube has 256 values (indexes) per side. So the value of Red from one of your pixels may indicate a height (Y index), Green as width (X index), and Blue as depth (Z index).

This means that every pixel is a 3D index into a point within the color cube. Look here to visualize: Color cube

You need to figure out which points in that color cube that you want to accept as valid colors for leaves. Turn all the other pixels black. The flesh tones in your fingers won’t be anywhere near the green corner of that cube. Those pixels should be changed to black.

I’ll leave it to your ingenuity to figure out how to ‘map’ the pixels to your color cube, and how to figure out which parts of the cube should be valid pixels.
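One possible ‘map pixels to the color cube’ sketch, treating the green corner of the cube as ‘green dominates both red and blue’; the 20-level margin is an illustrative assumption, not a tuned value:

```python
import numpy as np

def in_green_region(rgb):
    """Each (R, G, B) triple indexes a point in the 256^3 color cube.
    Accept a pixel when its point lies toward the green corner,
    i.e. green exceeds both red and blue by a margin."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (g > r + 20) & (g > b + 20)

pixels = np.array([[[40, 150, 30],      # leafy green -> accept
                    [210, 170, 150]]],  # flesh tone  -> reject
                  dtype=np.uint8)
mask = in_green_region(pixels)
```

Flesh tones sit far from the green corner, so a test like this rejects them without any extra rule.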

Don’t worry about introducing black specks (like the mites). There are ways to deal with that.


BTW, there are projects that detect humans by detecting flesh tones. You may be able to find OpenCV code for something like that on Github. Then adapt the code to look for green rather than flesh colors.

What I did is this: I masked green, yellow, and brown; that’s the reason the fingers didn’t turn black.
import cv2

# img is the BGR input image; convert to HSV before thresholding
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# find the green color
mask_green = cv2.inRange(hsv, (36, 0, 0), (86, 255, 255))

# find the brown color
mask_brown = cv2.inRange(hsv, (8, 60, 20), (30, 255, 200))

# find the yellow color in the leaf
mask_yellow = cv2.inRange(hsv, (21, 39, 64), (40, 255, 255))

# combine the three masks and black out everything else
mask = mask_green | mask_brown | mask_yellow
result = cv2.bitwise_and(img, img, mask=mask)