humans call “arrow” a variety of different shapes. an arrow can vary in the thickness and length of the shaft, the angle of the tip, whether and how much the sides of the tip curve, whether the back of the tip is straight, curved, or nonexistent (the sides are just lines and the tip isn’t solid)…
if you have a limited number of shapes, and you only have scaled (and translated) instances of them, I see a few options:
- you could get away with a cascade classifier/detector. that’s very old technology but fairly well understood
- you could also perform shape matching. that uses contours
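as a sketch of the shape-matching route: OpenCV’s cv2.matchShapes compares Hu moment invariants of two contours. here’s the same idea in plain NumPy on binary masks, so it’s self-contained. the arrow rasterizer and the guards in the distance are my own illustrative choices, not anything canonical:

```python
import numpy as np

def hu_moments(mask):
    """the 7 Hu moment invariants of a binary mask (translation/scale invariant)."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)
    x, y = xs - xs.mean(), ys - ys.mean()          # centered coordinates
    def eta(p, q):                                  # scale-normalized central moment
        return np.sum(x**p * y**q) / m00 ** ((p + q) / 2 + 1)
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        e20 + e02,
        (e20 - e02)**2 + 4*e11**2,
        (e30 - 3*e12)**2 + (3*e21 - e03)**2,
        (e30 + e12)**2 + (e21 + e03)**2,
        (e30 - 3*e12)*(e30 + e12)*((e30 + e12)**2 - 3*(e21 + e03)**2)
          + (3*e21 - e03)*(e21 + e03)*(3*(e30 + e12)**2 - (e21 + e03)**2),
        (e20 - e02)*((e30 + e12)**2 - (e21 + e03)**2)
          + 4*e11*(e30 + e12)*(e21 + e03),
        (3*e21 - e03)*(e30 + e12)*((e30 + e12)**2 - 3*(e21 + e03)**2)
          - (e30 - 3*e12)*(e21 + e03)*(3*(e30 + e12)**2 - (e21 + e03)**2),
    ])

def shape_distance(a, b, eps=1e-12):
    """log-scaled Hu distance, in the spirit of cv2.matchShapes CONTOURS_MATCH_I1."""
    d = 0.0
    for va, vb in zip(hu_moments(a), hu_moments(b)):
        if abs(va) < eps or abs(vb) < eps:
            continue                                # skip near-degenerate invariants
        ma = np.sign(va) * np.log10(abs(va))
        mb = np.sign(vb) * np.log10(abs(vb))
        if ma == 0 or mb == 0:
            continue
        d += abs(1 / ma - 1 / mb)
    return d

def arrow_mask(size, shaft_w, head_w):
    """rasterize a simple right-pointing arrow: rectangular shaft + triangular head."""
    img = np.zeros((size, size), bool)
    c, head_start = size // 2, int(size * 0.6)
    img[c - shaft_w:c + shaft_w, size // 10:head_start] = True
    for i, col in enumerate(range(head_start, size - size // 10)):
        half = head_w - int(head_w * i / (size - size // 10 - head_start))
        img[c - half:c + half + 1, col] = True
    return img

# a scaled instance of the same arrow matches closely; a different shape won't
small = arrow_mask(100, shaft_w=4, head_w=15)
big = arrow_mask(200, shaft_w=8, head_w=30)
```

note the limitation: this only recognizes whole shapes you already have instances of, which is why it fits the “limited number of shapes” case and not the general one.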
if you don’t have a limited number of shapes, it is unlikely that such low-level methods alone can solve the problem. I also think that skeletonization won’t lead to any useful information.
edge information seems useful.
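to make that concrete: an edge map is usually the first step. in practice you’d call cv2.Canny or cv2.Sobel; here’s the gradient-magnitude idea in plain NumPy (the test image is an illustrative stand-in). on an arrow, this picks out the long parallel shaft edges and the converging tip edges:

```python
import numpy as np

def sobel_edges(img):
    """gradient magnitude via 3x3 Sobel kernels (cross-correlation, edge-padded)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    p = np.pad(img.astype(float), 1, mode="edge")   # replicate borders
    for dy in range(3):
        for dx in range(3):
            win = p[dy:dy + h, dx:dx + w]           # shifted view of the image
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    return np.hypot(gx, gy)

# flat regions give zero gradient; the boundary of a filled square lights up
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
mag = sobel_edges(img)
```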
an arrow typically has a long shaft and a pointy tip, and they’re arranged relative to each other just so.
at the very least you’ll need a “parts model”. that already enters the realm of machine learning.
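just to make “parts model” concrete at the toy level: score a blob by elongation (long shaft, via PCA of the pixel cloud) and by mass asymmetry along the major axis (the solid head loads one end). this hand-rolled heuristic is purely illustrative and not a real parts model (those are learned, e.g. deformable part models):

```python
import numpy as np

def arrow_score(mask):
    """toy 'parts' check for a binary blob (illustrative heuristic):
    - elongation: ratio of principal-axis variances (is there a long shaft?)
    - asymmetry: |skewness| of mass along the major axis (the solid head
      puts extra mass at one end)
    returns (elongation, asymmetry)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=1, keepdims=True)          # center the pixel cloud
    cov = pts @ pts.T / pts.shape[1]
    evals, evecs = np.linalg.eigh(cov)              # ascending eigenvalues
    major = evecs[:, -1]                            # direction of the shaft
    t = major @ pts                                 # coordinates along the shaft
    elongation = evals[-1] / max(evals[0], 1e-9)
    asymmetry = abs(np.mean(t**3)) / np.mean(t**2) ** 1.5
    return elongation, asymmetry

# an arrow is elongated AND asymmetric; a plain bar is elongated but symmetric
arrow = np.zeros((100, 100), bool)
arrow[46:54, 10:60] = True                          # shaft
for i, col in enumerate(range(60, 90)):             # triangular head
    half = 15 - i // 2
    arrow[50 - half:50 + half + 1, col] = True
bar = np.zeros((100, 100), bool)
bar[46:54, 10:90] = True
```

a real solution would learn the parts and their spatial arrangement instead of hard-coding two statistics, which is exactly where machine learning comes in.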
you might even need deep learning (or maybe not); it’s a possible solution with its own tradeoffs.