I see Canny. that’s a newbie trap: there is no solution in that direction. discard it here and avoid it in general.
this looks like a picking task for a robot with suction cups.
if you want industrial performance, you’ll need to deploy ML/DL/AI; nothing less will do. boards like these are hard to make out with plain image processing, or with anything that doesn’t have some amount of ML in it.
get some neural network to learn to point out the boards. you’ll get the best results if you model it as instance segmentation.
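to make that concrete, here’s a toy sketch (pure python, made-up masks) of the post-processing you’d bolt onto such a model: each instance mask becomes one suction pick point at its centroid. the segmentation network itself is assumed, not shown.

```python
def centroid(mask):
    """Centroid (row, col) of the True/1 cells in a binary mask (list of lists)."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

# two toy instance masks, shaped like what an instance-segmentation
# model would emit (one binary mask per detected board)
board_a = [[1, 1, 1, 1],
           [1, 1, 1, 1]]
board_b = [[0, 0, 1],
           [0, 0, 1]]

# one suction target per board
picks = [centroid(m) for m in (board_a, board_b)]
```

in practice you’d also check that the centroid actually lies on the mask (it can fall off an L-shaped or occluded board) and that there’s enough flat area around it for the cups.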
that task should be fairly easy to generate abundant synthetic training data for. boards are just textured boxes lying on top of each other. get someone to poke a 3D engine (unreal or unity or blender) to generate convincingly lit scenes with those things “strewn” around.
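the domain-randomization side of that is just sampling scene parameters. here’s a minimal sketch of a scene-description generator; all the ranges and texture names are made-up placeholders, and the actual rendering (and ground-truth mask export) would happen in the 3D engine, which is not shown here.

```python
import random

def random_scene(n_boards, seed=None):
    """Toy scene spec: each board is a box with random size, pose and stack
    layer. A 3D engine would consume something like this to render images
    plus per-instance ground-truth masks. All ranges are placeholders."""
    rng = random.Random(seed)
    scene = []
    for i in range(n_boards):
        scene.append({
            "id": i,
            "size_m": (rng.uniform(1.0, 4.0),    # length
                       rng.uniform(0.10, 0.30),  # width
                       rng.uniform(0.02, 0.05)), # thickness
            "pos_m": (rng.uniform(-1.0, 1.0),    # x, y on the pile
                      rng.uniform(-1.0, 1.0)),
            "yaw_deg": rng.uniform(0.0, 180.0),  # in-plane rotation
            "layer": rng.randint(0, 2),          # boards stacked on each other
            "texture": rng.choice(["pine", "oak", "spruce"]),
        })
    return scene

scene = random_scene(5, seed=1)
```

randomize lighting, camera pose and textures too; the whole point is that the network never sees the same pile twice.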
if you wanted to cheat, your first step would be to instruct workers to slap those labels in the middle of each board, and then to pick the boards up by their labels. the labels are probably easier to detect and locate than the boards themselves: I see barcodes, and the label form has a fairly rigid layout.
another approach I’ve seen uses a laser scanner to obtain 3D surface information. planes are then extracted from that point cloud (likely with traditional algorithms, not ML). you would hope that each plane corresponds to one board… but if boards butt up against each other, multiple boards can merge into a single plane, which would put the picking center on the seam between two boards.
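the classic traditional algorithm for that plane extraction is RANSAC. here’s a self-contained toy version (pure python, no numpy, made-up thresholds) that fits one dominant plane to a point cloud; a real pipeline would run it repeatedly, removing inliers each round, to peel off one plane per board.

```python
import random

def fit_plane_ransac(points, iters=200, tol=0.02, seed=0):
    """Toy RANSAC plane fit: repeatedly take 3 random points, form the plane
    through them, and keep the plane with the most inliers (points within
    `tol` of it). Returns (unit normal, point on plane, inliers)."""
    rng = random.Random(seed)
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cross = lambda a, b: (a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0])
    best = (None, None, [])
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        n = cross(sub(p2, p1), sub(p3, p1))
        norm = dot(n, n) ** 0.5
        if norm < 1e-9:
            continue  # degenerate sample (collinear points)
        n = tuple(c / norm for c in n)
        inliers = [p for p in points if abs(dot(n, sub(p, p1))) < tol]
        if len(inliers) > len(best[2]):
            best = (n, p1, inliers)
    return best

# toy point cloud: a flat board face at z = 0.5 plus two stray outliers
cloud = [(x * 0.1, y * 0.1, 0.5) for x in range(10) for y in range(10)]
cloud += [(0.3, 0.3, 1.7), (0.8, 0.1, -0.9)]
normal, _, inliers = fit_plane_ransac(cloud)
```

note the failure mode from above is visible here: if two boards are coplanar and touching, their points all satisfy the same plane equation, so RANSAC happily returns them as one surface. you need edges, texture, or the labels to split them.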
an ML/DL/AI approach should benefit from having 3D information in addition to the visual information. just don’t rely on pure geometry (point clouds) alone.