Can I find contours in a thresholded image?

I have

threshold(region2, region2, mediana(normalizehistogram), 255, THRESH_BINARY);
erode(region2, region2, getStructuringElement(MORPH_ERODE, Size(3, 3)));
dilate(region2, region2, getStructuringElement(MORPH_DILATE, Size(3, 3)));

vector<std::vector<cv::Point> > contours;

findContours(region2, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);

//Draw the contours
Scalar colors[3];
colors[0] = Scalar(255, 0, 0);
colors[1] = Scalar(0, 255, 0);
colors[2] = Scalar(0, 0, 255);
for (size_t idx = 0; idx < contours.size(); idx++) {
	cv::drawContours(region2, contours, idx, colors[idx % 3]);
}

but the result is that no contours are drawning in my image, it’s like my code thresholds the contours too

that’s a funny typo :wink:

  • your median func calculated a histogram bin index, but you need an intensity value for thresholding (the value of that bin, not the index)
  • try THRESH_OTSU instead of the median (see the sketch after this list)
  • try to visualize the intermediate results (e.g. after the threshold)
  • findContours needs white objects on a black background
  • and damn, show us an image!
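
For the Otsu suggestion, a minimal sketch, assuming region2 is a single-channel 8-bit Mat (the inversion heuristic and the window name are only illustrative):

// Otsu picks the threshold from the histogram automatically;
// the 0 passed as the threshold value is ignored
double otsuVal = threshold(region2, region2, 0, 255, THRESH_BINARY | THRESH_OTSU);

// findContours wants white objects on a black background;
// if the screws came out black, flip the polarity, e.g.:
// if (countNonZero(region2) > region2.total() / 2) bitwise_not(region2, region2);

// look at the intermediate result before calling findContours
imshow("after threshold", region2);
waitKey(0);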

Sorry my friend, I am an OpenCV rookie here hahahah, here I go.

My work consists of detecting and counting railway screws. I thought the best way to do it was to threshold a grayscale image region by region (the blue rectangles are analyzed one by one) and then find contours.

I don’t know what you were expecting.

the source image is already quite useless. it’s noisy, no contrast to work with. might only be good for throwing a neural net at.

those blue lines on top of it are also a problem.

you’re applying algorithms without understanding what their purpose is, how they work, what they are good/bad at, what is required of the source data.

once you realize that, and discard the approaches you didn’t have the experience to choose in the first place, you can ask useful questions.

but first, I’d very strongly recommend that you post source data, as unaltered as possible. if you think you can put those blue lines on the picture, I haven’t made myself clear.

Man, I notice a certain pride in your words. I’ve already said I’m a rookie here. I just wanted help or advice for my school work. I am 15, man, I am learning, I am not a professional.

I’d very strongly recommend that you post source data, as unaltered as possible.

and you should be willing to discuss approaches rather than insisting on making the approach you chose work.

Yeah, I totally agree, but you didn’t propose different approaches; I am open to other ideas. The blue boxes divide the image into different regions to get each region’s histogram and calculate its median in order to obtain a threshold, each one with a different value. Do you need anything else (image, code, etc.) to propose another way?

@100367230

Hi Jorge, I can see the image doesn’t have good contrast between bolts and background, so thresholding won’t give you direct results. In the same way, the “color palettes” are the same for both, so histograms won’t give you direct results either.

For a neural network approach, you will need a bunch of annotated images (a dataset); some say 1,000, others say you can start with 50, but as few as a dozen is out of the question. As this is school work, I don’t believe you want to go this way.

If you want to continue experimenting with threshold, I recommend:

  • threshold the whole image, with a scroll bar to control the threshold value so you can adjust it manually (see the sketch after this list)
  • drawing on images is called annotation. Don’t annotate images you are going to process. For example, you can put those blue lines on a copy for displaying purposes, leaving the original intact.
  • you can try to detect circles with Hough circle detection on the grayscale image
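
For the first point, a minimal sketch of a threshold window driven by a trackbar; "via.jpg" is just a placeholder image name, and the globals are only there to keep the example short:

#include <opencv2/opencv.hpp>
using namespace cv;

Mat gray, binary;
int threshValue = 128;

// re-apply the threshold every time the slider moves
void onThreshold(int, void*) {
	threshold(gray, binary, threshValue, 255, THRESH_BINARY);
	imshow("threshold", binary);
}

int main() {
	gray = imread("via.jpg", IMREAD_GRAYSCALE);   // or a frame from your video
	namedWindow("threshold");
	createTrackbar("value", "threshold", &threshValue, 255, onThreshold);
	onThreshold(0, nullptr);                      // show the initial result
	waitKey(0);
	return 0;
}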

I’m curious, if you don’t mind me asking: what country are you from? Is this school project something you chose, or did your professor assign it to everybody?

About @crackwitz’s point on “understanding and applying algorithms”: it is a good idea to experiment with each algorithm, showing intermediate results and tuning them manually; you’ll begin to understand how they work and whether they suit your problem.


Hi Alejandro thank you so much!

you are totally right hahahaha

I have done it, but the uneven illumination makes it not work when applied to the whole image; that’s why I divide each video frame into those boxes.

Yeah, I was going to try this on my “findContours” image (you can see it above), but I cannot get clear circles.

I’m from Badajoz, Spain. I’m in an artificial vision summer course and this is my final project :grinning:


That’s a very good use of your summer!

I didn’t know that; I’m going only by the images you posted.

  1. Use the original image, not the one with the blue lines; they interfere with the circle detection
  2. Use grayscale image, without threshold
  3. Annotate detected circles, so you can see if it is working or at least approximating
  4. Read the documentation, the explanation of each parameter, and experiment with them

You can easily measure the approximate radius in pixels by hand, so you can set minRadius, maxRadius and minDist; they help to discard ridiculous circles.

dp starts at 1.0 and you can go up to 10.0; the greater the value, the faster the detection, but with less precision.

param1 is the upper Canny edge threshold and param2 is the accumulator threshold; the first is usually larger than the second (often twice or more). Use them with trackbars and experiment.
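
Putting those parameters together, a rough sketch; gray is assumed to be the un-annotated grayscale image, and the radii and thresholds below are only starting guesses to replace with your own measurements:

vector<Vec3f> circles;
int p1 = 100, p2 = 50;               // tune these two with trackbars
HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
	1.0,                         // dp: start at 1.0
	gray.rows / 8,               // minDist between circle centers
	p1, p2,                      // param1 (upper Canny threshold), param2 (accumulator)
	20, 40);                     // minRadius, maxRadius measured by hand in pixels

// annotate the detections on a copy so the source image stays clean
Mat display;
cvtColor(gray, display, CV_GRAY2BGR);
for (size_t i = 0; i < circles.size(); i++) {
	Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
	circle(display, center, cvRound(circles[i][2]), Scalar(0, 0, 255), 2);
}
imshow("detected circles", display);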

Please post some images with the results, good or bad.


@100367230, it looks promising. There is room for improvement.

Put code as text, so it is easy to read and quote.

Not sure if you need normalizehistogram, or even if it improves the image for detection. Try commenting out that section. Histogram normalization doesn’t generate new information, so the computer doesn’t see any better. In my opinion it is generally good for humans only: for visualization, not for processing.

GaussianBlur needs a kernel; the Hough circles documentation gives an example of a 7x7 kernel with sigma 1.5. You can play with sigma, looking for image smoothness. Size(1, 1) does nothing.
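
A quick sketch of that change, assuming region2 is the grayscale region passed to HoughCircles (the 7x7 kernel and sigma 1.5 are just the documentation’s starting point):

// an actual smoothing kernel; Size(1, 1) leaves the image untouched
GaussianBlur(region2, region2, Size(7, 7), 1.5);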

And don’t forget to play with param1 and param2.

I calculated normalizehistogram for an old version. And GaussianBlur does not help with circle detection; that’s why Size(1, 1). My actual code is:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <math.h>
#include "opencv2/video/background_segm.hpp"
#include "opencv2/video/tracking.hpp"
#include "opencv\cv.hpp"
#include <Windows.h>
#include

using namespace std;
using namespace cv;

int main(int argc, const char** argv) {
	VideoCapture video("burgos.avi");
	//Mat image = imread("via.jpg");
	if (!video.isOpened()) {
		cout << "Error" << endl;
		return(0);
	}

	Mat image0;
	char Esq = 0;
	int fps = 0;

	while (Esq != 27 && video.isOpened()) {	// until Esc is pressed or the video ends
		bool nextframe = video.read(image0);
		fps = fps + 1;
		Mat image;
		resize(image0, image, Size(720, 420));
		String text = format("Frame %d", fps);
		int base = 0;
		Size size = getTextSize(text, FONT_HERSHEY_PLAIN, 1.0, 1, &base);
		putText(image, text, Point(10, size.height + 10), FONT_HERSHEY_PLAIN, 1.0, Scalar(255, 255, 255), 1, LINE_AA);

		float slidingrows = image.rows;			// I want to move this outside the while, but if I do I get an error
		float slidingcolumns = (image.cols / 2);

		int Round(float n);
		//cout << n << endl;
		int Round(float slidingrows);
		//cout << slidingrows << endl;
		int Round(float slidincolumns);
		//cout << slidingcolumns << endl;
		int Round(float step);
		//cout << step << endl;
		//cout << image0.rows << endl;

		Mat sliding = image.clone();
		Mat SWimage = image.clone();
		Mat grayimage;
		cvtColor(sliding, grayimage, CV_BGR2GRAY);

		Rect ventana(slidingcolumns, 0, slidingcolumns, slidingrows);
		rectangle(grayimage, ventana, Scalar(255, 255, 255), 1, 8, 0);
		rectangle(SWimage, ventana, Scalar(0, 0, 0), 1, 8, 0);
		//imshow("Step by Step", sliding);
		Mat region = image(ventana);
		//imshow("Region de interés", region);
		int filas = (slidingrows);
		//int Round(float filas);
		//rectangle(sliding, ventana2, (0, 0, 255), 1, 8, 0);
		Mat region2 = grayimage(ventana);

		//imshow("Region de interés", region2);
		//GaussianBlur(grayimage(ventana), region2, Size(3, 3), 0);	// blur effect

		int tamaño = 256;
		float rango[] = { 0, 256 };
		const float* rangos[] = { rango };

		//threshold(region2, region2, mediana(normalizehistogram), 255, ADAPTIVE_THRESH_MEAN_C);
		GaussianBlur(region2, region2, Size(1, 1), 0);
		vector<Vec3f> v3fCircles;
		HoughCircles(region2, v3fCircles, CV_HOUGH_GRADIENT, 2, region2.rows / 8, 75, 50, 40, 45);
		for (int i = 0; i < v3fCircles.size(); i++) {
			circle(region2,
				Point(v3fCircles[i][0], v3fCircles[i][1]),
				v3fCircles[i][2],
				Scalar(255, 255, 0),
				2);
		}
		imshow("BYN", grayimage);
		//imshow("Bordes", region2);
		//imshow("Result", SWimage);
		//imshow("LBP", lbpImage);
		waitKey(0);
	}

	Esq = waitKey(1);

	destroyAllWindows();
}

@100367230 ,

Good! Now you can figure out a test inside the circle to confirm bolts and discard rocks.

What kind of test do you recommend? I’ve never done it before.

@100367230

Well, as it appears to be the only image with a bolt, you can go to the original color image and compare color histograms, or test something you know is different on the bolt than on the rocks.
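
One way to sketch such a test: crop each detected circle from the original color frame, build a hue histogram for it, and compare it against a reference histogram taken from a known bolt. The function below is only an illustration; boltReference and the 0.7 similarity cut-off are assumptions you would have to build and tune yourself:

// normalized hue histogram of a color ROI
Mat hueHist(const Mat& bgrRoi) {
	Mat hsv, hist;
	cvtColor(bgrRoi, hsv, CV_BGR2HSV);
	int histSize = 30;
	float range[] = { 0, 180 };
	const float* ranges[] = { range };
	int channels[] = { 0 };
	calcHist(&hsv, 1, channels, Mat(), hist, 1, &histSize, ranges);
	normalize(hist, hist, 0, 1, NORM_MINMAX);
	return hist;
}

// for each detected circle (cx, cy, r), compare against the known-bolt histogram:
// Rect box(cx - r, cy - r, 2 * r, 2 * r);   // clamp to the image borders first
// double sim = compareHist(boltReference, hueHist(frame(box)), CV_COMP_CORREL);
// if (sim > 0.7) { /* probably a bolt, not a rock */ }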

Meanwhile, you can also try to improve the circle detection using the parameters.

I must leave you here. I hope you can finish soon; you are almost there.
