Hey OpenCV Community!
I am currently working on a hardware reverse engineering project and have the following problem: I want to skeletonize the tracks of a chip, but when I use the thinning function (an implementation of the Zhang-Suen skeletonization algorithm), some important edges get lost.
Sadly I can only embed one picture, so I chose picture 2, but I will try to describe picture 3 (the end result).
1. First I converted the image to grayscale:
img0 = cv2.imread('originalimage.tif')
img = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
2. Afterwards I used a median blur to remove some noise and a Sobel operator for edge detection:
imgBlurred = cv2.medianBlur(img, 5)
grad_x = cv2.Sobel(imgBlurred, cv2.CV_64F, 1, 0, ksize=3)
grad_y = cv2.Sobel(imgBlurred, cv2.CV_64F, 0, 1, ksize=3)
abs_grad_x = cv2.convertScaleAbs(grad_x)
abs_grad_y = cv2.convertScaleAbs(grad_y)
grad2 = cv2.addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0)
3. Then I used the thinning (skeletonization) operation. Only some of the circle-shaped objects are left and all other edges disappeared:
grad2Skeletonized = cv2.ximgproc.thinning(grad2)
Do I need to use some type of "filling" algorithm so that the other edges are used as well? And are the edges sharp enough for that, or do I need to implement "my own" erosion and dilation to improve the results? I think that if I just "fill" the areas bounded by the edges, it will probably paint the whole picture white, because the edges are not sharp enough.
are you aware that OpenCV comes with the morphological operations erode, dilate, open, close? they work on grayscale images too, not just binary.
I don’t see why you’d use thinning/skeletonization at all. that leaves you with only a skeleton… and it requires a binary image. if you give it a grayscale image, behavior is undefined.
what is the goal? what do you need out of this picture, in this step? just the pads/holes?
Hey, I realise I could have been clearer.
I think just experimenting with erode, dilate, open and close is the right call. (I was aware that they exist, but I thought that since the algorithm I used already builds on similar operations, I would not need them anymore; that was probably wrong, because just because the algorithm uses similar functions does not mean I cannot use them myself as well.) I want a skeletonized picture as the end result. Each of the pads should then become one small line (if the circles get lost, it doesn't matter).
I think i will try the following:
- Make the edges sharper by closing gaps
- Fill the areas that are bounded by edges
- Make the image binary
- Use skeletonization
“skeletonization” may sound like something you want, but it’s probably not. if you wanted to skeletonize the traces themselves, you’d need the picture before sobel.
it appears as if you want the edges of the punches however, likely just an abstract description of each hole (center, diameter). skeletonization is not an edge enhancement operation. it’s a morphological operation.
it pains me to say it, because the thing is usually a newbie trap, but try the “Canny” operator. it will also look like something you want, but that can deceive. it will not give you closed lines/contours in general, just “enhanced” gradients, if they are strong enough already.
how about you explain (or better show!) the next steps, the goal of what you’re currently asking for? I can guess a lot but that’s tedious.
The result should be a line that ideally goes right through the middle between edges that lie opposite each other (picture 2), because then I could build "perfect" tracks afterwards (picture 3). However, if I understood skeletonization correctly, I basically need a "filled" object, not just the edges, to achieve this.
That's why I wanted to close every gap in the edges, to make sure the filling does not just paint the whole picture white. However, it is also pretty important that I do not create new edges by dilating too intensely.
The squares in the picture correspond to the dots in my picture. I think I will detect the dots in a different step and mark them afterwards.
I also might have to switch my approach: calculate the distance between the edges and paint a line in the middle whenever the approximate distance is inside the threshold.
Currently I used closing to get rid of basically all the noise, which is already nice for when I make the picture binary. However, the quality of the picture has already suffered a decent amount (which might increase the chance of edges overlapping where they should not). That is why I will experiment a bit with different kernels for the closing operation.
Abbildung 2.3 (Figure 2.3) of the diploma thesis implies that pictures can be had of layer 3, layers 2+3, and layers 1+2+3, with all layers being somewhat transparent. is that correct? in that case, it should be possible to subtract them (in some way) to get individual layers even if they are ordinarily occluded.
Yes, the layers are rather thin, and that's why you can sometimes see the other layers when you do not want to (for example, you photograph M2 but M1 "shines through"). This can become a problem if the layers are not perfectly distinguishable in the pictures. You can probably exploit the fact that you know what might shine through. Luckily, however, the photos I received rarely have this problem, and a simple median filter usually already filters it out.
My problem is rather that the edges of the current layer are sometimes really close to each other, so when I threshold (currently only binary) and skeletonize, they connect even though that is not wanted (red square). And at other places the actual edge is not fully connected (green square).
I think one can probably use the fact that the lines that are not supposed to be connected are usually parallel to each other.
skeletonization is the wrong approach here. I don’t know how to dissuade you from using it for this.
Yes, skeletonization should probably not be used here. I was too focused on getting "better" results with it. I thought that if I saw no edge gaps I could start to fill it (before skeletonization). But the whole "fill" approach is probably misguided, as I should probably just use the picture before Sobel, as you said.
I will now try to get rid of the noise again by using opening on the picture before Sobel. Afterwards I can hopefully turn it to binary and skeletonize.
Just to make sure that I understand skeletonization (and your criticism) correctly:
Would skeletonization give me approximately this result? (edit: I forgot 2 lines at the top left)
Edit: Thanks for your patience btw.
this looks like an xray photograph.
I would use a median filter instead of morph ops to remove noise.
then I would simply threshold this to get copper vs background. multiple thresholds if you need to distinguish vias from traces.
yes, skeletonization would give approximately that, but the lines will look stranger. I would still advise against skeletonization. the operation does nothing useful for your use case.
feel free to post some unfiltered source data.
I think I cannot post the real source data of one photo, since it is already 15 MB.
Here is another screenshot of an unfiltered photo; I hope that helps already.
I have already tried a median blur and then different threshold methods (Gaussian, mean, binary and Otsu). The thresholding did not work as I wanted, but maybe combining multiple thresholds might help.
I want to detect both vias and tracks, but I thought that separating the two processes could make it easier, because then I could, for example, focus on the tracks without worrying that the vias disappear, and vice versa.