I came across a challenging problem while reading an article published on Doc-Ok.org about hacking the Oculus Rift DK2.
Source: Hacking the Oculus Rift DK2, part II | Doc-Ok.org
Is there any way to label the LEDs on the device without relying on the blinking pattern of each LED’s intensity?
In my opinion, one drawback of that approach is having to wait for the LEDs to change their intensity over 10 consecutive frames, which wastes time. So I am looking for a new, robust approach to the problem.
It might be some kind of mapping between the 3D feature points and the 2D feature points detected in each image frame.
I am not sure whether this is an AI problem or a computer vision one. If there is a way to solve it, what is it exactly?
I appreciate any help.
is this a continuous video method or a single-frame method?
if you have video, you can track these dots over time (anonymous identity). that, PLUS determining the name of a dot from some temporal pattern, should be quite sufficient, no?
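To make the "track these dots over time (anonymous identity)" idea concrete, here is a minimal sketch of frame-to-frame nearest-neighbor tracking. The function name, the `max_dist` gating threshold, and the data layout are my own illustrative choices, not anything from the article:

```python
from math import hypot

def track_nearest(prev_tracks, detections, max_dist=20.0):
    """Carry anonymous track IDs forward one frame by greedy
    nearest-neighbor matching. Detections farther than max_dist
    from every existing track are left unmatched.
    prev_tracks: {track_id: (x, y)}, detections: [(x, y), ...]."""
    assigned, used = {}, set()
    for tid, (px, py) in prev_tracks.items():
        best_j, best_d = None, max_dist
        for j, (x, y) in enumerate(detections):
            if j in used:
                continue
            d = hypot(x - px, y - py)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            assigned[tid] = detections[best_j]
            used.add(best_j)
    return assigned

prev = {0: (10.0, 10.0), 1: (50.0, 50.0)}
print(track_nearest(prev, [(12.0, 11.0), (49.0, 52.0)]))
# → {0: (12.0, 11.0), 1: (49.0, 52.0)}
```

In practice you would add a motion model (e.g. a Kalman filter per dot) so fast movement between frames does not break the nearest-neighbor assumption.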
the basic problem is an “assignment problem”. all dots look the same (in a single picture) but for 3d-2d pose estimation you need matchings/assignments.
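to illustrate the assignment problem: given the 2D projections of the known 3D model points (from a pose guess) and the detected blobs, pick the matching that minimizes total distance. this is a brute-force sketch for small n, with made-up coordinates; for many dots you would use the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`) instead:

```python
from itertools import permutations
from math import hypot

def assign_leds(projected, detected):
    """Brute-force assignment: match each projected model LED to a
    detected blob so that the total 2D distance is minimized.
    O(n!) — only for illustration; use the Hungarian algorithm
    for realistic LED counts."""
    n = len(projected)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(hypot(projected[i][0] - detected[j][0],
                         projected[i][1] - detected[j][1])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return {i: j for i, j in enumerate(best)}

projected = [(10, 10), (50, 50), (90, 10)]   # model points reprojected
detected  = [(51, 49), (9, 11), (91, 11)]    # blobs found in the image
print(assign_leds(projected, detected))  # → {0: 1, 1: 0, 2: 2}
```

once you have such an assignment, the matched 2D-3D pairs feed directly into a PnP solver (e.g. OpenCV's `solvePnP`) to recover the pose.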
you are asking for a novel solution to a problem that has been solved already. I would recommend that you conduct a review of scientific literature. look at some published books for a start.
First, thanks for your reply.
As you’ve seen in the article, yes, it is a continuous video method. But as the author points out, fast movement and occlusion make accurate labelling difficult. That is why he used the blinking pattern: the device (Oculus Rift DK2) can sometimes leave the camera’s field of view and re-enter at a different position and orientation, so tracking alone cannot recover the labels.
I would like to know more about the literature and solutions you mentioned in your reply. Any keywords, names, or topics would be a great help, and I would appreciate it.
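For reference, the blinking-pattern idea boils down to reading an ID out of a tracked blob's brightness over consecutive frames. This is a hypothetical sketch of such a decoder; the function name, the simple mean threshold, and the MSB-first bit order are my assumptions, and the real DK2 modulation differs in detail:

```python
def decode_blink_id(brightness, n_bits=10):
    """Hypothetical decoder: threshold a tracked blob's brightness over
    n_bits consecutive frames around its mean, then pack the bits
    (MSB first) into an integer LED ID. Illustrative only — the actual
    DK2 encoding is more involved."""
    samples = brightness[:n_bits]
    mean = sum(samples) / len(samples)
    bits = [1 if b > mean else 0 for b in samples]
    led_id = 0
    for bit in bits:
        led_id = (led_id << 1) | bit
    return led_id

# bright/dim samples over 10 frames → bits 1011001001 → 713
print(decode_blink_id([200, 80, 200, 200, 80, 80, 200, 80, 80, 200]))
# → 713
```

This also shows why the method costs time: the ID only becomes available after the full 10-frame window, which is exactly the delay you want to avoid.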
Thanks again for your time and consideration.