The break is in the place where the laser light splits on a wooden brick. In other words, the first part of the line is at a different height than the second part of the line.

Here is the image before the threshold:

What do you think about using “Hough Line Transform” algorithm?

that’s the context I needed to understand the situation. it looked like some kind of broken nail but now I know it’s not.

here’s a step towards a solution: a profile line that, for every column, gives the y coordinate of mean brightness. it’s not perfect, but neither is the picture. you can run this on the thresholded picture as well. might even give you cleaner results.

you can do a derivative (np.gradient) on that profile and find extrema in its slope. those will be where the break/step is.
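a sketch of that idea on a synthetic profile (the step position and values are made up as a stand-in for the real data):

```python
import numpy as np

# synthetic profile with a step at x = 50
profile = np.concatenate([np.full(50, 100.0), np.full(50, 120.0)])

# slope of the profile; the break shows up as an extremum in the slope
slope = np.gradient(profile)
x_break = int(np.argmax(np.abs(slope)))  # column of the steepest change
```

np.gradient uses central differences, so the steepest slope lands on the columns right at the step.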

import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

im = cv.imread("de4d1c40c99620fbc9c813f111cb8767c87fcca3.jpeg", cv.IMREAD_GRAYSCALE)
h, w = im.shape
gy, gx = np.mgrid[0:h, 0:w]
# weighted sum: mean y coordinate per column, weighted by brightness
profile = np.sum(im * gy, axis=0) / np.sum(im, axis=0)
# negated to make it look like the picture
# I can't be bothered to mess with the axes, so positive y goes down instead of up
plt.plot(-profile)
plt.show()

Thank you @crackwitz. Your solution works very well.
By the way, I got better results on the thresholded image.

How can I get the value of x for the smallest y? I tried np.min(gradient, axis=0), but I got nan. When I print gradient, all the values are nan.

In the above example, we assume that the line is horizontal. I understand that if the line is vertical, I need to calculate the x coordinate of mean brightness.
How can I deal with a line that is diagonal (or has an unknown angle)?

How to deal with multiple lines in different directions?

you can guess what I’ll ask of you next: to present precisely what you did. there’s no way to help you without that information. you should anticipate this.

I’d also recommend argmin, which gives the index of the smallest element.
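a sketch, assuming the nans come from columns where the thresholded image sums to zero (0/0 in the profile division); numpy's nan-aware variants skip them:

```python
import numpy as np

# assumed: profile has nan wherever a column contained no lit pixels (0/0)
profile = np.array([np.nan, 3.0, 1.0, np.nan, 2.0])

# nanargmin ignores the nans; plain argmin/min would propagate them
x_smallest = int(np.nanargmin(profile))
print(x_smallest)  # 2
```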

do you actually have to handle directions other than horizontal? instead you could tell the user to take the picture in the right orientation. there are ways to estimate orientation but there’s insufficient information to pick one yet.

you should present the entire problem thoroughly. playing 20 questions and guessing what you need is counter-productive.

for i in range(0, len(threshold_img)):
    for j in range(0, len(threshold_img[i])):
        profile[i][j] = sum(threshold_img[i][j] * gy[i][j] / profilesum[j])

I made a simple image like this one:

I examined what the profile variable returns, and I “feel” that this is what I need, but I don’t understand how it is created and why we use gx (which is [[0, 0, 0, …], [1, 1, 1, …] …]). Could you please elaborate on this topic or send me a link to material from which I can learn this?

no. profile is a 1-dimensional array. your interpretation accesses it like a 2-dimensional array.

I’ll explain my code:

profilesum = np.sum(threshold_img, axis=0)
# weighted sum, calculates the "center of mass" for every column
profile = np.sum(threshold_img * gy, axis=0) / profilesum

we are calculating a weighted sum or weighted average. the goal is to figure out the “center of mass” of the pixels for every column.

threshold_img is the image.

np.sum along axis 0 means every column is collapsed into the sum of its values. you get a row vector that contains something proportional to the number of pixels in every column of the image. it would be exactly the number of pixels if the pixel values were 0 or 1, but they’re 0 and 255, so you get a weight of 255 in almost everything. I’m trying to explain this as simply as possible, so I will only explain this for binary images.

gy merely gives us the y-coordinate for every pixel. gx is not used. I only gave it a name because it is part of the data returned from np.mgrid

threshold_img * gy leaves in every pixel either that coordinate or 0.

if we want the average y coordinate of those pixels, we have to divide the sum by the number of pixels. that’s what the whole expression does.
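on a tiny made-up binary image, the values work out like this:

```python
import numpy as np

# 3x3 test image: column 0 empty, column 1 lit at y=0 and y=2,
# column 2 lit at y=1 and y=2
img = np.array([
    [0, 255,   0],
    [0,   0, 255],
    [0, 255, 255],
], dtype=float)

gy, gx = np.mgrid[0:3, 0:3]
profilesum = np.sum(img, axis=0)           # [0, 510, 510]
with np.errstate(invalid="ignore"):        # column 0 gives 0/0
    profile = np.sum(img * gy, axis=0) / profilesum
# profile: [nan, 1.0, 1.5] -- the mean y coordinate per column
```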

you have already proposed a small test picture to investigate. now look at the values inside of all the arrays.

Going back to the problem (which isn’t very well defined, as I’m playing with OpenCV rather than solving the real problem).

Let’s assume that on every image I have only one line, the line can only be vertical or horizontal, and the crack in the line can be either perpendicular or at an angle (different than 90 degrees, which I understand as perpendicular). To sum up, we can have 4 situations:

horizontal line, perpendicular crack (as in the image that we already discussed)

horizontal line, crack at angle 0-180

vertical line, perpendicular crack (as in the image that we already discussed)

vertical line, crack at angle 0-180

In the case of vertical lines, one should create a “profile” based on rows instead of columns. That way we can reduce the problem to two cases: a perpendicular crack and a crack at an angle 0-180, the first of which is well examined.
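The row-based profile for a vertical line would be the same computation with the axes swapped, roughly (synthetic image, assuming the same np.mgrid setup as before):

```python
import numpy as np

# synthetic stand-in: binary image with a vertical line at x = 2
h, w = 4, 4
threshold_img = np.zeros((h, w), dtype=float)
threshold_img[:, 2] = 255

gy, gx = np.mgrid[0:h, 0:w]
# mean x coordinate per row: sum along axis=1 and weight by gx instead of gy
with np.errstate(invalid="ignore"):
    profile = np.sum(threshold_img * gx, axis=1) / np.sum(threshold_img, axis=1)
# profile[i] == 2.0 for every row
```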

How to get the angle of crack and “point of crack” if it isn’t perpendicular to the line?

I attached an image of a horizontal line with a crack at an angle of about 45 degrees.

too little data to estimate the angle of the break. I see you have full resolution data. do post that instead of low resolution versions.

next you might tell me that the laser line can move across the scene? I hate to guess.

why is the angle of the break even interesting? don’t say that someone just wants to know. that’s no motivation.

I expected an explanation that includes perhaps a picture of the scene that is not in darkness (and a picture that is not from the camera that does the “measuring”, i.e. a view from the outside), or a claim that the camera is fixed (relative to the scene) and the laser line moves. I’m looking through a keyhole here. it’s frustrating.

you need more data. and I can’t speculate on how to get it because I’m literally tapping in the dark.

What I sent is just a crop of the whole image; there is no more data. The rest of the image is almost black.

Theoretically, it can move, but while the picture is being taken it stays in one place. Why is it important?

I am trying to find out how much information about the position of a flat object (a plywood rectangle) I can get from image analysis alone. Depending on the result, I will decide whether to use it in my project, in which I use laser light to mark positions on a stage.

The camera is fixed relative to the scene, and the laser lines are also fixed (they can be moved, but we can assume that there are horizontal and vertical lines). I will send a picture on Wednesday.

based on the latest picture, I guess an angle could be estimated somehow, very approximately. the laser line is very narrow. there isn’t much to work with. I’m gonna leave it at that. I’ll be more reserved from now on.

there is very little data in that picture and I think it’s not enough to answer questions like “what’s the orientation of the break”. I think it’s somewhat doable but the result will be guesswork and I suspect the task is made needlessly difficult by the poor picture quality and your other stated constraints, for which there was no justification given.

if you want to test the limits of feasibility, that’s a challenge you give yourself.