Behavioural Analysis

With more and more cameras embedded in new TVs, gesture control and user-behaviour tracking are of increasing importance in the field. To anticipate the future of cameras embedded in SmartTVs, the UMONS partner uses a Microsoft Kinect (Xbox) sensor, which provides both classical RGB images and a depth map relative to the camera. The depth map opens up far more possibilities than a classical RGB camera alone, and as such sensors keep getting cheaper, they are likely to become standard within a few years.

The UMONS partner is developing and testing two explicit interaction technologies, both based on Kinect data, for use in LinkedTV scenarios as an innovative extension of the current possibilities for interacting with enrichment content.
The first is a module that uses hand gestures to interact with an interface. This HTML5-based test interface currently simulates a timeline with media items (text, videos, …) linked to specific moments on the timeline. It could be developed further to allow control of the timeline component in the main-screen LinkedTV UI in a way far more intuitive than repeated button presses on the remote control.

The gesture-analysis module can be separated from this test interface: it sends its information (hand position, clicks, people in the interaction area, …) over the network (UDP) to the LinkedTV interface, where it can be mapped to the scenario interactions.
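As a rough illustration of this decoupling, one frame of gesture data could be serialised and pushed over UDP as below. The JSON field names, port, and message layout are assumptions for the sketch; the actual wire protocol used by the UMONS module is not described in the post.

```python
import json
import socket

def make_gesture_message(hand_x, hand_y, click, users_in_area):
    """Pack one frame of gesture data (hypothetical layout) as JSON bytes."""
    return json.dumps({
        "hand": {"x": hand_x, "y": hand_y},   # normalised hand position
        "click": click,                        # hand-click gesture detected?
        "users_in_area": users_in_area,        # people in the interaction area
    }).encode("utf-8")

def send_gesture(sock, payload, host="127.0.0.1", port=9000):
    """Fire-and-forget UDP send; as with any UDP stream, delivery is not guaranteed."""
    sock.sendto(payload, (host, port))

if __name__ == "__main__":
    msg = make_gesture_message(0.42, 0.77, click=False, users_in_area=1)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        send_gesture(sock, msg)
```

Using UDP rather than TCP fits this kind of per-frame state stream: a lost packet is simply superseded by the next frame, so no retransmission logic is needed on the interface side.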

The figure shows a snapshot of the gesture analysis. On the left, the selected user is shown with the interaction area, automatically attached to his shoulder, highlighted in yellow. This interaction area is shoulder-centred and spherical so as to better fit natural hand motion and reduce arm fatigue during long interactions. A video demonstrating the approach is available at http://vimeo.com/49277396
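A shoulder-centred spherical area can be turned into cursor coordinates by working in angles around the shoulder rather than in raw positions. The sketch below assumes Kinect-style 3D joint coordinates in metres; the radius and angular spans are illustrative values, not the ones used by UMONS.

```python
import math

RADIUS = 0.55              # assumed arm reach bounding the interaction sphere (m)
MIN_DIST = 0.2             # assumed minimum reach before the hand counts as active (m)
H_FOV = math.radians(60)   # horizontal angular span mapped to screen width (assumed)
V_FOV = math.radians(45)   # vertical angular span mapped to screen height (assumed)

def hand_to_cursor(shoulder, hand, width=1920, height=1080):
    """Map a hand position on the shoulder-centred sphere to screen pixels.

    shoulder, hand: (x, y, z) tuples with z pointing from the sensor toward
    the user. Returns (px, py), or None when the hand is outside the area.
    """
    dx = hand[0] - shoulder[0]
    dy = hand[1] - shoulder[1]
    dz = hand[2] - shoulder[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist < MIN_DIST or dist > RADIUS:        # retracted or over-extended
        return None
    azimuth = math.atan2(dx, -dz)               # left/right angle around shoulder
    elevation = math.asin(max(-1.0, min(1.0, dy / dist)))  # up/down angle
    px = (azimuth / H_FOV + 0.5) * width
    py = (0.5 - elevation / V_FOV) * height
    return (round(px), round(py))
```

Because the cursor depends only on the direction of the arm, the user can keep the elbow low and relaxed instead of holding the hand on a flat plane in front of the body, which is what makes the spherical area less tiring over long sessions.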

The second explicit interaction technology uses real-world objects to interact with an interface. These can be everyday objects, like the mug used in the tests here, or, even better, a dedicated LinkedTV object such as a red plastic cube that could be shipped with the set-top box. The object is detected and tracked on the table or on any other flat surface; its position is known to the system and can be mapped to any command in the LinkedTV interface. Again, this data can be sent over the network (UDP) to the interface.
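One simple way to map the tracked position to commands is to divide the table surface into zones, each bound to an interface action. The zone layout and command names below are assumptions for illustration; the real mapping would depend on the LinkedTV scenario.

```python
# Zones in normalised table coordinates: (x_min, y_min, x_max, y_max).
# Three side-by-side strips are purely an example layout.
ZONES = {
    "play":  (0.00, 0.0, 0.33, 1.0),
    "pause": (0.33, 0.0, 0.66, 1.0),
    "info":  (0.66, 0.0, 1.00, 1.0),
}

def position_to_command(x, y):
    """Return the command whose zone contains the tracked object, or None."""
    for command, (x0, y0, x1, y1) in ZONES.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return command
    return None
```

Richer mappings are possible with the same data, e.g. using the object's rotation as a dial or its distance from a reference point as a continuous slider.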

Left image: a real mug on a table. Right image: the tracked mug (blue and pink dots) while it is manipulated by a hand.
