Event detection in videos for elderly - Eating, taking pills, falling... Is OpenCV the right tool?

I’m researching what tools are available to detect certain habits in video files.

This is about elderly care and the habits/events would be:

  • Did they eat?
  • Did they bathe?
  • Did they fall?
  • Did they take their medicines today?

Is OpenCV the right tool for this?

Thanks!

how would you try to detect ‘did they bathe?’ in video files?

(hopelessly naive questions, imho. not solvable with today’s action recognition, opencv or not)

Entered the bathroom + stayed for at least X seconds + left the bathroom with different clothes and a different skin tone?

Talking out of ignorance, I am a total newbie in this… :slight_smile:

different skin tone? from a shower? does a shower change skin pigmentation or was the patient THAT filthy? I guess if someone showers so hot that they come out looking like a lobster, someone should be notified that they’ve been cooking themselves.

all that would require state-of-the-art AI. nothing less.

any conceivable shortcuts would degrade the value of the data.

all of that requires varying degrees of invasion of privacy, up to 24/7 surveillance in all corners of their home, which raises ethical concerns of societal scope.

if a patient can’t take care of themselves, human care is required anyway.
if a patient is able, but forgetful or unwilling, then human care is required just the same.

some of your listed conditions could be less difficult to detect. human pose estimation exists. you just have to have a model evaluate what that pose means. crawling around/lying on the bed is different from crawling around/lying on the floor.
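to make the pose idea concrete, here’s a minimal sketch. it assumes you already ran some pose estimator (MediaPipe, OpenPose, whatever) and got 2D keypoints back; the keypoint names and the orientation heuristic are my own illustrative assumptions, not any library’s API:

```python
# Hedged sketch: classify "upright" vs "lying" from 2D pose keypoints.
# Assumes keypoints come from some pose estimator as (x, y) image
# coordinates; "shoulder"/"hip" here mean torso midpoints and are
# purely illustrative names.

def posture_from_keypoints(kp):
    """kp: dict with 'shoulder' and 'hip' midpoints as (x, y) tuples."""
    sx, sy = kp["shoulder"]
    hx, hy = kp["hip"]
    dx, dy = abs(hx - sx), abs(hy - sy)
    # Torso roughly vertical -> standing/sitting; roughly horizontal -> lying.
    # A real system would also need floor/bed context, not just orientation.
    return "lying" if dx > dy else "upright"
```

a real fall detector would of course combine this with where the person is (floor vs bed) and how they got there, which is exactly the hard part.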

keeping track of medicine intake could require specifically watching the patient take the stuff, and actually making sure the stuff ingested is the stuff prescribed. that’s a lot harder than spotting a human lying on the ground.

if you’d aim for a system that detects emergency situations, that’d probably be received a lot better, with a lot less concern. patient unconscious/motionless or unable (but trying) to get up, that might be achievable. add in audio (reacting to keywords) and that could be competition to those emergency call pendants advertised to the elderly.
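the “motionless” part of that is actually one of the more tractable pieces. a crude sketch using plain frame differencing on grayscale frames (numpy arrays); the thresholds are made-up placeholders that would need tuning per camera and scene:

```python
import numpy as np

# Hedged sketch: flag "no motion for N consecutive frames" via simple
# frame differencing. diff_thresh and pixel_frac are illustrative
# placeholders, not tuned values.

def motionless_frames(frames, diff_thresh=10, pixel_frac=0.001):
    """Yield True for each consecutive frame pair with negligible change."""
    for prev, cur in zip(frames, frames[1:]):
        changed = np.abs(cur.astype(int) - prev.astype(int)) > diff_thresh
        yield changed.mean() < pixel_frac

def alarm_needed(frames, max_still=3):
    """True if max_still consecutive frame pairs show no motion."""
    still = 0
    for is_still in motionless_frames(frames):
        still = still + 1 if is_still else 0
        if still >= max_still:
            return True
    return False
```

note this only says “nothing moved”, not “person unconscious” — an empty room looks exactly the same, so you’d need person detection on top of it.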

really, not everything has to be solved with the most complex solutions (AI, vision). basic vital signs can be monitored with wrist-worn devices (oxygen saturation, pulse rate) or chest straps (actual breathing, some ECG).

even unresponsiveness can be handled like that. “dead man switch”. if the wrist thing thinks the patient’s vitals are concerning (including no motion or seizure-like motion), it can prompt for a button to be pressed.
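the dead-man-switch logic itself is trivial compared to the vision approaches. a sketch, where the vitals flags and the acknowledge callback are assumptions standing in for whatever the wearable actually reports:

```python
# Hedged sketch of the "dead man switch" idea: if vitals look concerning,
# prompt the wearer; escalate only if nobody acknowledges within a grace
# window. grace_checks and the acknowledged() callback are illustrative.

def escalate(vitals_concerning, acknowledged, grace_checks=3):
    """
    vitals_concerning: iterable of bools, one per monitoring tick.
    acknowledged: callable returning True if the wearer pressed the button.
    Returns True if an alert should go out to a caregiver.
    """
    pending = 0
    for concern in vitals_concerning:
        if concern and not acknowledged():
            pending += 1
            if pending >= grace_checks:
                return True
        else:
            pending = 0
    return False
```

the grace window matters: one spurious sensor reading shouldn’t page anyone, but a few in a row with no button press should.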


OpenCV can “run” neural networks (that’s its `dnn` module). that’s called inference. other libraries can do that too.

OpenCV can’t train DNNs. it’s not made for that. it’s got a basic machine learning module (`ml`) though. other tools (PyTorch, TensorFlow, …) exist to train DNNs and they’re made for that.