Ball position detection in a padel game

Hi.
This is my first post and if there is something wrong, please let me know.
I’d like to build a project that detects and analyzes a padel game (the ball is similar to a tennis ball).
But I’m still not sure whether it is possible to detect the ball position (x, y, z) and the ball’s elevation when it reaches the net.
The reason I worry about this is that all the videos (my data source) are recorded from a single camera at a single fixed position.
One source video looks like this.

I have many videos, but all of them are recorded with the same camera at the same point.
Looking forward to hearing from you.
Thanks.

single camera? exceedingly difficult, near impossible to get accurate readings.

“Video assistant referee” systems don’t have a single camera. in pro sports broadcasts, they may show you one or a few perspectives that the meatbags want to watch but they have more cameras for the technical analysis.

a camera, geometrically, is similar triangles. if you know the diameter of the ball, and how wide it appears in the picture, you can figure out how far away it is. same principle as with augmented reality markers (those squares with bits in them).
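
to make that concrete, here’s a tiny sketch of the similar-triangles calculation. the ball diameter and focal length below are made-up values, not anything from your footage; the focal length would normally come from calibrating your camera:

```python
# distance from apparent size via similar triangles (illustrative numbers only)
BALL_DIAMETER_M = 0.040    # assumed ball diameter, roughly 40 mm
FOCAL_LENGTH_PX = 1200.0   # assumed focal length in pixels, from calibration

def distance_from_apparent_size(apparent_diameter_px: float) -> float:
    """Estimate the distance Z (metres) along the optical axis."""
    return FOCAL_LENGTH_PX * BALL_DIAMETER_M / apparent_diameter_px

print(distance_from_apparent_size(10.0))  # a ball 10 px wide -> about 4.8 m away
```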

since that involves a division, the further away the ball is, the more any measurement uncertainty affects the estimate of its distance (Z), in the camera coordinate frame.
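
as a rough illustration (same made-up ball size and focal length as above), the same 1 px error in the measured diameter barely matters up close but swamps the estimate at the far end of the court:

```python
# how a 1 px error in the apparent diameter propagates into the Z estimate
BALL_DIAMETER_M = 0.040
FOCAL_LENGTH_PX = 1200.0

for true_z in (2.0, 5.0, 10.0, 20.0):                       # true distances in metres
    true_px = FOCAL_LENGTH_PX * BALL_DIAMETER_M / true_z     # apparent diameter at that distance
    z_if_one_px_small = FOCAL_LENGTH_PX * BALL_DIAMETER_M / (true_px - 1.0)
    print(f"{true_z:5.1f} m: {true_px:5.1f} px wide; measuring 1 px narrower gives {z_if_one_px_small:5.1f} m")
```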

you can pin it down in X and Y fairly well though.

this means that you would require at least two perspectives, each giving you a “ray” for the object. then you can “intersect” those, with tolerance, because such rays would always narrowly miss each other.
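
here’s a minimal sketch of that “intersection with tolerance”: take the closest points on the two rays and use their midpoint. the camera origins and ray directions are placeholder numbers; in reality they come from calibrating each camera against the court:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + t1*d1 and o2 + t2*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b               # near zero when the rays are almost parallel
    t1 = (b * e - c * d) / denom        # parameter of the closest point on ray 1
    t2 = (a * e - b * d) / denom        # parameter of the closest point on ray 2
    p1, p2 = o1 + t1 * d1, o2 + t2 * d2
    return (p1 + p2) / 2.0, np.linalg.norm(p1 - p2)   # position estimate, miss distance

# placeholder cameras looking roughly at a ball near (0, 0, 1)
pos, miss = triangulate_midpoint(np.array([-5.0, 0.0, 1.0]), np.array([5.0, 0.0, 0.0]),
                                 np.array([0.0, -5.0, 1.0]), np.array([0.0, 5.0, 0.1]))
print(pos, miss)   # estimated position and how far the two rays missed each other
```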

one valuable perspective would be a camera sitting in the plane of the net, e.g. above it or to the side of it. another two would overlook the far half of the court from each end.

and then you need to think about how to detect/locate the ball. visible light isn’t the only option. you could work with various ranges of infrared. you could use a ball with retroreflective surface, and use a ring light around each camera. you need it to be the brightest thing in some range of light.
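
and a very rough sketch of the “brightest thing in frame” idea with OpenCV, assuming a retroreflective ball lit by a ring light; the filename and threshold are placeholders you’d have to tune:

```python
import cv2

cap = cv2.VideoCapture("padel_clip.mp4")     # placeholder filename
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)   # keep only very bright pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)        # largest bright blob = ball candidate
        (x, y), radius = cv2.minEnclosingCircle(c)
        print(f"candidate at ({x:.0f}, {y:.0f}), apparent radius {radius:.1f} px")
cap.release()
```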

Thanks for your reply.
Could you explain this part in more detail?

since that involves a division, the further away the ball is, the more any measurement uncertainty affects the estimate of its distance (Z), in the camera coordinate frame.

What exactly makes it difficult to estimate the distance? Could you give me some examples?
With this single-camera system, how much accuracy can I expect?
If there are two cameras, is this project possible?
Looking forward to hearing from you.
Thanks.