How can I reliably detect edges in images with shadows or poor lighting?
I’m trying to identify object boundaries with edge detection, but I run into problems when images have shadows, poor lighting, or low resolution.
Here is a sample photo.
I use edge detection and sharpening with this code:
import io

import cv2
import numpy as np
from PIL import Image


def sharpen_edges(binary_data):
    # Decode the image bytes and make sure we have 3-channel RGB
    image = Image.open(io.BytesIO(binary_data)).convert("RGB")
    image_np = np.array(image)

    # Convert to grayscale for edge detection
    gray_image = cv2.cvtColor(image_np, cv2.COLOR_RGB2GRAY)

    # Apply Canny edge detection
    edges = cv2.Canny(gray_image, threshold1=50, threshold2=200)

    # Convert edges to RGB so they can be overlaid on the original image
    edges_rgb = cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)

    # Increase the contrast of edges by blending them with the original image
    sharpened_np = cv2.addWeighted(image_np, 1.0, edges_rgb, 1.5, 0)

    # Optional: apply a slight Gaussian blur to soften the edges a bit
    sharpened_np = cv2.GaussianBlur(sharpened_np, (3, 3), 0)

    # Convert back to a PIL image and save to an in-memory PNG buffer
    sharpened_image = Image.fromarray(sharpened_np)
    buffer = io.BytesIO()
    sharpened_image.save(buffer, "PNG")
    return buffer.getvalue()
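In case it helps, this is roughly how I call the function; "sample.jpg" is just a placeholder name for my test photo:

# Minimal usage sketch (assumed setup): read the photo bytes, run the
# function above, and write the result next to the input file.
if __name__ == "__main__":
    with open("sample.jpg", "rb") as f:
        original_bytes = f.read()

    sharpened_bytes = sharpen_edges(original_bytes)

    with open("sample_sharpened.png", "wb") as f:
        f.write(sharpened_bytes)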
I’d appreciate any advice on this issue. I’m an OpenCV newbie, so please bear with me as I try to understand what’s happening.