Using OpenCV for greenhouse crop and infestation detection

Thanks for your suggestion.

It’s tough to say how that would compare with deep learning approaches. I don’t think it would be difficult to try the non-DL approach though. The first thing that registered for me was that the dead leaves seemed to occupy a distinct chroma range. You’ve no doubt seen a lot of the camera output, so you’d be able to tell if some locations have objects that would conflict (same brown-gold color range).

The other factor would be whether lighting is consistent, or whether you’d need to compensate. That’s often a problem with DL systems as well though (ref to the famous ‘enemy tank identification’ stories).

If you went the DL route, you’d need to find a way to outline the dead leaves, etc. Unless you’re just looking to indicate that there are ‘some dead leaves present.’
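Just to sketch the non-DL idea in code: isolate pixels in a brown-gold chroma range and see how much of the frame they cover. The HSV bounds and the filename below are placeholders that you would tune against your own camera output (lighting, white balance, etc.).

```python
import cv2
import numpy as np

image = cv2.imread("frame.jpg")                     # BGR image from the camera
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Hypothetical brown/gold range: low hue, moderate-to-high saturation and value.
lower = np.array([10, 60, 60])
upper = np.array([30, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Fraction of the frame that falls in the "dead leaf" color range.
dead_fraction = cv2.countNonZero(mask) / mask.size
print(f"Possible dead-leaf coverage: {dead_fraction:.1%}")
```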


Thanks for your suggestion. Let me work on it, and I will definitely post the results here!


Unfortunately, we cannot help with joblib or any other arbitrary Python library ;(

(I also think that using Haralick or GLCM features is a dead end for this problem. You should not discard the color information, imho.)

Why? Any suggestions? I saved the trained models as .pkl files; what I need is to load and call the .pkl file.
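(As an aside, loading a model saved with joblib usually looks something like the sketch below. It assumes the file was written with joblib.dump and that the same scikit-learn version is installed; the filename and feature shape are placeholders.)

```python
from joblib import load
import numpy as np

# Assumes the classifier was saved with joblib.dump(model, "model.pkl").
model = load("model.pkl")

# The feature vector must match whatever features (e.g. Haralick/GLCM) the model
# was trained on, in the same order and scaling.
feature_vector = np.zeros((1, 13))   # placeholder shape for 13 Haralick features
prediction = model.predict(feature_vector)
print(prediction)
```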

Because it is no longer an OpenCV-related problem, and thus off-topic.

Okay :frowning_face:

I did some examples with Haralick, and I have a question: do Haralick textures detect color variation in a picture? For example, a good healthy leaf is green and a nutrient-deficient leaf is yellow. Does Haralick detect the difference in color, or only texture? @berak @Alejandro_Silvestri @StringTheory


[image: nutrient deficient leaf]

Again, we do not see your code, but you probably work on grayscale images – so, no.

That’s what I would think.

It’s nice that you try out all those ideas, but this might be another “dead end” ;(

Re GLCM: I had originally suggested that to augment color analysis. GLCM stands for ‘gray level co-occurrence matrix’, so you’d need to use color as another dimension.
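One simple way to bring color in as another dimension is to compute Haralick/GLCM features on each color channel separately and concatenate them, rather than on a single grayscale image. This is just a sketch assuming you are using mahotas for the Haralick features (the filename is a placeholder):

```python
import cv2
import numpy as np
import mahotas

image = cv2.imread("leaf.jpg")                  # BGR

features = []
for channel in cv2.split(image):                # B, G, R planes
    # haralick() returns a 4x13 matrix (one row per direction); average the directions.
    features.append(mahotas.features.haralick(channel).mean(axis=0))

feature_vector = np.concatenate(features)       # 3 channels x 13 features = 39 values
```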

I have done something similar in some analyses, using a combination of various methods to generate helpful clues for identifying objects in images. GLCM can be useful for segmenting images that do not have well-defined edges. I suspect that the aerial analysis guys use it in a similar fashion, along with color analysis. In your case, you’d probably want to remove areas with less texture, like flat planes and solid background areas.

Did you get anywhere with pure color analysis?

Jeffin, is this representative of one of your whole images? I had the impression that your cameras were at a higher elevation.

I took this photo using my iPhone. My goal is to use a Raspberry Pi camera; will the Raspberry Pi cam do the job well?

You’re converting the color images to gray-scale in your code, so it’s no doubt keying on something else.

From your initial post, I thought that there were only two categories. You’ve got five. So I’m guessing that color range alone won’t be sufficient. But you should look at the RGB range (actually BGR in OpenCV) of the pixels that you’re interested in. You should be able to figure out a range, or multiple ranges, that represent the leaves. For example, you don’t need any of the black, white/gray, or spectral blue pixels from your cell phone image.
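A quick way to find those ranges is to crop a patch that contains only leaf pixels and look at the per-channel statistics. The crop coordinates and filename below are placeholders for a region you would pick by hand:

```python
import cv2

image = cv2.imread("phone_photo.jpg")           # OpenCV loads images as BGR
leaf_patch = image[200:300, 400:500]            # hypothetical leaf-only region

for name, channel in zip("BGR", cv2.split(leaf_patch)):
    print(name, channel.min(), channel.max(), channel.mean())

# From a few such patches you can build one or more cv2.inRange() filters that
# keep leaf pixels and drop black, white/gray, and blue background pixels.
```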

Also, I think that scaling will be important. IOW, distance from your camera to the objects. Again, I thought that the camera was going to be mounted higher above. Your test images should be representative of the camera distance, lighting, etc.

Yes, the camera will be mounted higher above. Okay, I got it; I need to train on images taken from higher above, at that distance. Will do that.

So you are telling me there is no need to convert the image to gray, right? I changed the code from gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) to gray = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).
It would be great if you could give me a simple explanation of what I need to change in the code and what I need to search for.

And can you tell me what kind of images are better for training? I mean, what image size? Do I need to take images from a higher distance, or do I only need to take images like the ones I showed above?

You train on the views that you get during inference. Same views.

Which views give better results might lead you to change how you mount your camera… unless other constraints already require the camera to have this particular view.

Re: Need to convert to gray: Yes, of course you need to convert to gray for GLCM. I was simply saying that if your GLCM code -appeared- to be generating results that reflected the health of the leaves, then GLCM was keying on some textural quality rather than color. That’s a good thing.

But this definitely does not preclude the need for color analysis. That will be your most valuable tool for the initial process.

I suggest that you try to eliminate all pixels that do not tell you anything about the health of your plants. In other words, anything in the background, like pipes, etc. In the images that you’ve shown, these show up as flatter areas with colors that are distinct from your plants. You could turn all background pixels to black (0, 0, 0). Then do a histogram on the remaining pixels to find out the color spectra. This should give you valuable info. Your color histograms of healthy plants should give sharp spikes in green spectra. Less healthy would be more brown. That should be simple.
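Here is a rough sketch of that suggestion: keep only plant-like pixels, black out the rest, then histogram the hue of what remains. The HSV bounds and filename are guesses that you would tune on your own images.

```python
import cv2
import numpy as np

image = cv2.imread("greenhouse.jpg")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Keep anything from brown/yellow through green; everything else is background.
plant_mask = cv2.inRange(hsv, np.array([10, 40, 40]), np.array([90, 255, 255]))
foreground = cv2.bitwise_and(image, image, mask=plant_mask)   # background -> (0, 0, 0)

# Hue histogram over the masked pixels only.
hue_hist = cv2.calcHist([hsv], [0], plant_mask, [180], [0, 180])
hue_hist /= hue_hist.sum()

# A healthy plant should show a sharp peak around green hues (roughly 45-90 on
# OpenCV's 0-179 hue scale); unhealthy material shifts the mass toward yellow/brown.
```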

Then use the same image (with blacked-out background pixels) to generate your gray-scale image to send to the GLCM function. The combination of the color histogram + GLCM should get you part way to where you want to go.
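Putting the two together might look something like the sketch below: a normalized hue histogram over the masked pixels plus a few GLCM properties from the masked grayscale image, concatenated into one feature vector for a classic classifier (SVM, random forest, …). The mask bounds are the same placeholder values as above, and scikit-image is assumed for the GLCM part.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # greycomatrix/greycoprops in older scikit-image

image = cv2.imread("greenhouse.jpg")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
plant_mask = cv2.inRange(hsv, np.array([10, 40, 40]), np.array([90, 255, 255]))

# 1) Color part: normalized hue histogram over the masked pixels.
hue_hist = cv2.calcHist([hsv], [0], plant_mask, [32], [0, 180]).flatten()
hue_hist /= (hue_hist.sum() + 1e-6)

# 2) Texture part: GLCM properties on the grayscale image with the background zeroed.
#    (Note: the zeroed pixels still enter the co-occurrence counts; a more careful
#    version would crop to the plant region or ignore pairs involving zero.)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray[plant_mask == 0] = 0
glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
texture = [graycoprops(glcm, p).mean()
           for p in ("contrast", "homogeneity", "energy", "correlation")]

# 3) One feature vector per image.
feature_vector = np.concatenate([hue_hist, texture])
```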

It would be good to keep image size consistent if possible, but it probably doesn’t need to be perfect. More important will be the lighting. Can you maintain some degree of consistency in lighting between your various camera locations?

BTW, some CNNs (convolutional neural nets) require a consistent image size. But that’s not even always the case any more. And you can crop or scale your images – within reason.