Coordinates of a touch on an image projected from behind (touchwall)

As a newcomer, greetings to everyone on the forum! Please help me out!

I would like advice on how I could implement the following with OpenCV:

  • An optical image is rear-projected onto a 2 m x 2 m transparent screen.
  • A person touches the canvas with their hand.
  • Two cameras sit above the plane of the screen, in the x and y (rectangular) positions.

– The desired output is the coordinates of the touch point.
– The required accuracy is 10-15 cm.

What do you suggest?
Thanks for your help!

Maybe you have some example images?

Thank you for your attention!
I quickly drew a sketch of the idea. :slight_smile:

Use triangulation.

You'll probably have to fiddle with the formula, but basically you have to solve the triangle CxCyT (CamX, CamY, Touch), where you know the angles, the positions of the cameras and the distance between them.
To get the relative angle of a point to the camera's optical axis, you need to know the FOV (field of view angle) of the camera: A = (2*P/W - 1) * FOV/2 (P = point position in pixels; W = image width).
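
A minimal sketch of that triangulation, under assumptions that are mine and not from this thread: camera 1 at (0, 0) and camera 2 at (BASELINE, 0) along the top edge of the screen, optical axes at known angles AXIS1/AXIS2 to the baseline, and the pixel-to-angle step done with the tangent form of the formula above (a bit more accurate than the linear version for wide lenses). All constants are placeholders you would have to measure.

import math

BASELINE = 2.0             # distance between the two cameras in metres (assumed, measure it)
HFOV = math.radians(90)    # horizontal field of view of each camera (assumed, measure it)
AXIS1 = math.radians(45)   # angle of camera 1's optical axis w.r.t. the baseline (assumed)
AXIS2 = math.radians(135)  # angle of camera 2's optical axis w.r.t. the baseline (assumed)

def pixel_to_angle(p, w, hfov=HFOV):
    # relative angle of image column p to the optical axis (tangent form of A = (2*P/W - 1)*FOV/2)
    return math.atan((2.0 * p / w - 1.0) * math.tan(hfov / 2.0))

def triangulate(p1, p2, w):
    # angle of each ray measured from the baseline; flip the sign of pixel_to_angle()
    # if your cameras are mounted mirrored
    a = AXIS1 + pixel_to_angle(p1, w)               # interior angle at camera 1
    b = math.pi - (AXIS2 + pixel_to_angle(p2, w))   # interior angle at camera 2
    # intersect the two rays: x along the baseline from camera 1, y into the screen
    x = BASELINE * math.tan(b) / (math.tan(a) + math.tan(b))
    y = x * math.tan(a)
    return x, y

# example: hand seen at column 320 in camera 1 and column 480 in camera 2, 640 px wide images
print(triangulate(320, 480, 640))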

Thanks kbarni!
I am a 57-year-old automation engineer. The geometry part of the task is not a problem, but I have no experience with OpenCV; I am looking for help because I need to solve this problem quickly.

As for the computer vision part: to make things easier, use a dark background.

Crop the part of the image that is close to the canvas (the white part of the camera view in your drawing). Threshold the crop to get the pixels of the hand, then use image moments to get the center of the hand:

import cv2 as cv

gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)        # src = current camera frame
gray_crop = gray[10:20, :]                        # crop rows 10-20 (the band next to the canvas)
_, t = cv.threshold(gray_crop, thresh_val, 255, cv.THRESH_BINARY)  # you need to measure thresh_val
m = cv.moments(t, binaryImage=True)
if m["m00"] < 10:                                 # fewer than 10 pixels active
    print("no hand detected")
else:
    hand_x = m["m10"] / m["m00"]                  # x centroid = column of the hand in the crop
    print(f"hand coordinate: {hand_x}")

You can probably refine the approach, but this is likely the fastest solution.

Thanks!
I'll take a look and give it a try.

If you want to register actual touch, that has been done with an acrylic pane: lights shine into the glass from the edges, and a camera behind it sees the fingers illuminated where they touch.

You'll find material on this via “FTIR” and “multitouch”.

Going by your sketches, if you use cameras looking in from the side, you'll see fingers approaching the surface. That information, i.e. how far away a finger or hand is, can enable even more interesting effects.

I would make sure the camera’s optical axis lies precisely on the surface of the glass.

As kbarni demonstrated, you'd start by thresholding.

I would then find the first scanline containing any white pixels (the one closest to the glass), because that's the tip of the nearest object; see the sketch below.
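
A minimal sketch of that scan, assuming t is a thresholded frame (as in kbarni's snippet) and that the glass edge corresponds to row 0 of the image; if the glass is at the bottom of the frame, scan from the last row instead.

import numpy as np

rows_with_white = np.flatnonzero(t.any(axis=1))       # indices of rows containing at least one white pixel
if rows_with_white.size == 0:
    print("no object near the glass")
else:
    tip_row = rows_with_white[0]                      # first such scanline = tip of the nearest object
    tip_col = int(np.flatnonzero(t[tip_row]).mean())  # rough column of the tip within that row
    print(f"tip at row {tip_row}, column {tip_col}")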


Hi Crackwitz!

Very useful help, thank you!