Online demos

Demo categories


Scenarios

Hyperlinked Documentary

Description

This scenario, developed by the Netherlands Institute for Sound and Vision, is focused on cultural heritage. It uses material from the program Tussen Kunst & Kitsch (similar to the BBC’s Antiques Roadshow), courtesy of Dutch public broadcaster AVRO. In the show, people bring in art objects to be assessed by an expert. The objects brought in make it possible to add relevant information on questions like Who, When, What, Where and Subject, all related to art history and cultural heritage.

Demos

All Demo videos

Responsible partner

Beeld en Geluid (Sound and Vision)

Contact person

Lotte Belice Baltussen


Interactive News

Description

The basic idea of RBB’s scenario is to enrich the local news program according to the needs and interests of the individual viewer. In some cases this may mean simply watching the daily news show as it is; in other cases the viewer may prefer certain topics among the news items and want to learn more about the topic in question, or about one specific aspect of it. The result will be a personalised, TV-based on-demand service which directly links content concepts to online sources displayed in the LinkedTV service.

Demo

Responsible partner

RBB

Contact person

Nicolas Patz


LinkedTV News prototype

Description

LinkedTV News is a second screen application for tablets that acts as a companion to viewers when watching news broadcasts. Its main goal is to enrich television newscasts by integrating them with other media, concentrating the different activities related to staying informed about the news into one interactive, multi-screen and potentially mobile experience. It is designed to accommodate two viewing modes in terms of interaction: a lean-back mode and a lean-forward mode.

The news application is the basis for the realisation of the LinkedTV scenario “Interactive News” (with the partner RBB). This is the current prototype UI and is not fully functional.

Homepage

LinkedTV News

Responsible partner

CWI

Contact person

Michiel Hildebrand


Interactive Video Player

Multiscreen Toolkit

Description

The Multiscreen Toolkit enables rapid prototyping of multiscreen applications, allowing developers and designers to focus on their concept ideas rather than having to deal with synchronization and communication between screens. Support and default solutions are provided for sharing and notifications between screens, and functionalities are available for different interface options such as touch screens and traditional remotes.

The toolkit is used in LinkedTV for prototyping and implementing a 2nd screen application, which enables viewing and exploring the enrichments related to a TV program on a touchscreen tablet. The application also supports social interaction between viewers while watching a program.
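
The toolkit itself is JavaScript-based; purely to illustrate the broadcast pattern it abstracts away, here is a minimal Python sketch of a relay that keeps screens in sync. It assumes the third-party websockets package (>= 10.1) and is not part of the toolkit.

    # Minimal relay: every state update from one screen is forwarded to all
    # other screens in the session. Requires: pip install websockets.
    import asyncio
    import websockets

    CONNECTED = set()  # all screens currently attached

    async def relay(websocket):
        CONNECTED.add(websocket)
        try:
            async for message in websocket:
                # e.g. message = '{"action": "seek", "time": 42.0}'
                for peer in CONNECTED - {websocket}:
                    await peer.send(message)
        finally:
            CONNECTED.discard(websocket)

    async def main():
        async with websockets.serve(relay, "localhost", 8765):
            await asyncio.Future()  # run until cancelled

    if __name__ == "__main__":
        asyncio.run(main())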

Responsible partner

Noterik BV

Contact person

Daniel Ockeloen

Demo Video

Screencast Set Up Demo

Screencast Remote API


NERD video viewer

Description

The NERD video viewer demonstrates the functionality of the NERD tool by performing entity extraction on a given YouTube or DailyMotion video and showing the results in a Web interface. Entities are highlighted in the video transcript and are linked to explanatory information from the Web.

Homepage

NERD video viewer

Responsible partner

EURECOM

Contact


Gesture recognition interface

Description

A set of predefined gestures (play, pause, next, previous, etc.) can be recognized in the interactive video player through this gesture recognition interface.

Demo video

Responsible partner

UMONS

Contact person

Matei Mancas


Media Analysis

Keyword Extraction Tool

Description

This online demo performs keyword extraction on German and Dutch text. The screen is split in two parts (for German and Dutch respectively). The form lets users fill a text box with some text, indicate a file name, and submit the file for analysis. The file is then indexed (uploading and indexing may take a while). After indexing, the top 20 keywords extracted from the file are displayed at the top of the screen. For the keyword extraction, the algorithm employs dedicated part-of-speech taggers for German and Dutch, which also makes the identification of key-phrases feasible.

Under the form, a list of already uploaded and indexed files is shown. Clicking a file name displays the keywords extracted from that file along with its text. Users can also update the text of existing files and re-submit them for analysis, or delete uploaded files by clicking the cross symbol [X] at the end of each filename. The preloaded documents were built from content provided by the LinkedTV partners RBB (RBB Aktuell) and Sound & Vision (Tussen Kunst & Kitsch, AVRO).
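
As a rough illustration of the frequency-plus-POS core of such a tool (a stand-in, not the tool's actual algorithm), here is a minimal sketch using spaCy's German model in place of the tool's own taggers:

    # POS-filtered keyword extraction: keep nouns and proper nouns, count
    # lemmas, return the 20 most frequent. Requires: pip install spacy and
    # python -m spacy download de_core_news_sm.
    from collections import Counter
    import spacy

    nlp = spacy.load("de_core_news_sm")

    def top_keywords(text, n=20):
        doc = nlp(text)
        candidates = [tok.lemma_.lower() for tok in doc
                      if tok.pos_ in {"NOUN", "PROPN"} and not tok.is_stop]
        return Counter(candidates).most_common(n)

    print(top_keywords("Die Sendung berichtet über Kunst und Kultur in Berlin."))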

Homepage

Keyword Extraction Tool
(The system is free to use, subject to user registration; contact Tomas Kliegr.)

Demo Video

Responsible partner

University of Economics Prague

Contact person

Ivo Lašek


Shot Segmentation

Description

This video demo presents the results of the shot segmentation algorithm on one video of the news show scenario (Rundfunk Berlin-Brandenburg; RBB Aktuell) and one video of the documentary scenario (Sound & Vision; Tussen Kunst & Kitsch, AVRO). The objective of this algorithm is to segment a video into shots, i.e., sequences of consecutive frames captured without interruption by a single camera, by performing shot boundary detection. The transition between two successive shots can be abrupt (one frame belongs to a shot and the following frame belongs to the next shot) or gradual (two shots are combined using chromatic, spatial or spatial-chromatic production effects which gradually replace one shot by another). The algorithm performs both abrupt and gradual transition detection. However, for the videos of the news show scenario only abrupt transitions have been considered, since gradual transitions are rarely used there. In contrast, for the videos of the documentary scenario, where production effects (e.g., fade in/out, dissolve, wipe) are common, gradual transition detection has also been performed. The results are presented as subtitles in the videos, indicating the starting point of each detected shot.
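
As an illustration of abrupt-transition detection in general (a stand-in, not the LinkedTV algorithm itself), a minimal histogram-difference sketch with OpenCV could look like this; the input filename is hypothetical:

    # Flag an abrupt cut whenever the colour histograms of two consecutive
    # frames are poorly correlated. Requires: pip install opencv-python.
    import cv2

    def detect_cuts(path, threshold=0.5):
        cap = cv2.VideoCapture(path)
        cuts, prev_hist, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                # Low correlation between consecutive histograms -> cut.
                score = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
                if score < threshold:
                    cuts.append(idx)  # frame index starting a new shot
            prev_hist, idx = hist, idx + 1
        cap.release()
        return cuts

    print(detect_cuts("rbb_aktuell.mp4"))  # hypothetical input file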

The video shot segmentation and concept detection demonstrator was developed by CERTH-ITI as part of the MediaMixer EU FP7 CSA project (http://www.mediamixer.eu), using video analysis algorithms developed in LinkedTV. In this demo, the LinkedTV analysis algorithms are applied to lecture videos from the videolectures.net collection. The videos are automatically segmented into shots, and then 37 concept detectors are applied to each shot, revealing the shots’ visual content. These analysis results enable the user to search by concept and to access the lecture videos at the shot level.

Homepage

http://multimedia.iti.gr/mediamixer/demonstrator.html

Demo Video

Responsible partner

CERTH

Contact person

Vasileios Mezaris


Object Re-detection

Description

This video demo presents the results of the object re-detection algorithm on a video from the documentary scenario (Sound & Vision; Tussen Kunst & Kitsch, AVRO). Object re-detection aims at finding occurrences of specific objects in a single video or a collection of still images and videos. The algorithm takes as input a picture (query image) of an object of interest manually specified by the user, who marks this object on one frame of the video with a bounding box. This picture is then compared against consecutive or non-consecutive frames of the video, and the instances of the depicted object are automatically detected and marked with a bounding box. In this video demo, the detected re-occurrences of the object of interest are indicated by a green rectangle around them. The object re-detection algorithm is robust against a range of scale and rotation changes and partial occlusion. However, in some cases, extremely different viewing conditions (due to major changes in scale and/or rotation) significantly alter the visual information of the object’s re-appearance and thus lead to detection failure.
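
A minimal sketch of the general local-feature approach to object re-detection (ORB matching plus a RANSAC homography, which is what tolerates moderate scale and rotation changes) might look as follows; it is a stand-in, not the LinkedTV algorithm, and the filenames are hypothetical:

    # Match the query image against one frame; a homography with enough
    # inlier matches counts as a re-detection. Requires opencv-python.
    import cv2
    import numpy as np

    def redetect(query_path, frame_path, min_matches=10):
        query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
        frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(query, None)
        kp2, des2 = orb.detectAndCompute(frame, None)
        if des1 is None or des2 is None:
            return None  # no usable features
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        if len(matches) < min_matches:
            return None  # object not re-detected in this frame
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # The homography maps the query's bounding box into the frame.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H

    print(redetect("art_object.png", "frame_0421.png"))  # hypothetical files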

Demo Video

Responsible partner

CERTH

Contact person

Vasilieos Mezaris


LinkedTV REST Service

Description

This web-based REST service integrates the LinkedTV techniques for audio, visual and textual analysis of multimedia content. Specifically, the service performs Automatic Speech Recognition (ASR) and Speaker Identification on the audio channel; Shot Segmentation, Concept Detection, Object Re-detection, and Face Detection and Tracking on the visual channel; and Keyword Extraction on the video’s subtitles or metadata, or on the output of the ASR analysis.
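
The service's actual routes are not documented here, so the following is a purely hypothetical sketch of how a client might submit a video and poll for results; every endpoint, URL and field name below is invented for illustration:

    # Hypothetical client for an analysis REST service of this shape.
    import requests

    BASE = "http://example.org/linkedtv-analysis"  # placeholder URL

    job = requests.post(f"{BASE}/jobs", json={
        "video": "http://example.org/media/rbb_aktuell.mp4",
        "tasks": ["asr", "shot_segmentation", "concept_detection",
                  "face_detection", "keyword_extraction"],
    }).json()

    result = requests.get(f"{BASE}/jobs/{job['id']}").json()
    print(result["status"])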

Demo Video

Responsible partner

Fraunhofer IAIS

Contact person

Daniel Stein


Face Detection

Description

This video demo presents the results of the face detection algorithm applied to a Sound & Vision video. When a face is detected, the algorithm demarcates it with a bounding box. Face detection is performed by applying Haar-like cascade classifiers, combined with skin color detection, to every frame of the video sequence. This method performs well on images, and we adapted it to videos in order to create face tracks: we use spatio-temporal information to link matching faces, and perform a linear interpolation to smooth the results.
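
Since the description names Haar-like cascade classifiers, a minimal OpenCV sketch of that core step (without the skin color check and the track smoothing) could look like this; the input filename is hypothetical:

    # Run OpenCV's stock frontal-face Haar cascade on every frame.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture("tussen_kunst_en_kitsch.mp4")  # hypothetical file
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Each detection is a bounding box (x, y, width, height).
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            print(frame_idx, x, y, w, h)
        frame_idx += 1
    cap.release()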

Demo Video

Responsible partner

EURECOM

Contact person

Mathilde Sahuguet


Media annotation

LinkedTV Editor Tool

Description

The Editor Tool is developed in LinkedTV to visualise the annotations and enrichments generated for a video, and to allow their manual correction and completion within the Web browser.

Responsible partner

Sound and Vision

Contact person

Jaap Blom


Linked media (media interlinking)

NERD Platform

Description

NERD aggregates several named entity recognition services into a single API and Web interface. It is used in LinkedTV to process the annotations generated by the Video Analysis step and extract named entities, which are identified unambiguously using Semantic Web URIs (Linked Data). In this demo, we show (a client sketch follows the list):

  • Named entity recognition applied to any text, in different languages including Dutch and German
  • Named entity recognition applied to timed text, with temporal re-alignment of the named entities in the video and a video player showcasing the results
  • A personalized dashboard where logged-in users can monitor their NERD activity
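
As a flavour of the API usage, here is a hypothetical client call; the route, parameters and response shape below are assumptions for illustration only and do not document the real NERD API:

    # Hypothetical extraction request against a NERD-style endpoint.
    import requests

    resp = requests.post("http://nerd.eurecom.fr/api/extract",  # assumed route
                         data={"text": "Angela Merkel sprak in Amsterdam.",
                               "extractor": "combined"})
    for entity in resp.json():          # assumed: a list of entity dicts
        print(entity.get("label"), entity.get("uri"), entity.get("nerdType"))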

Presentation

Homepage

NERD Platform

Responsible partner

EURECOM

Contact person

Raphaël Troncy


SemiTags

Description

SemiTags performs Named Entity Recognition on Dutch and German text. It has been incorporated into the NERD interface (see above).

Homepage

SemiTags

Responsible partner

University of Economics Prague

Contact person

Ivo Lašek


Targeted Hypernym Discovery (THD)

Description

THD performs named entity and common entity recognition and classification on English, Dutch and German text, and disambiguates the entities to Wikipedia articles. Entities are also assigned types from the DBpedia and YAGO ontologies, providing semantic interoperability. In addition to DBpedia and YAGO, the system uses the Linked Hypernyms Dataset as an underlying knowledge base, which makes THD produce results complementary to those of wikifiers based only on DBpedia or YAGO. A unique feature of THD is the possibility to extract the type of an entity from live Wikipedia using on-demand hypernym discovery.
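
To illustrate the idea behind on-demand hypernym discovery (not THD's actual implementation, which uses trained taggers over Wikipedia text), here is a naive sketch that pulls an entity's live Wikipedia summary and greps the "is a/was a …" pattern:

    # Fetch the entity's live Wikipedia summary and take the phrase after
    # "is a"/"was a" as a crude hypernym candidate; the regex is a toy.
    import re
    import requests

    def naive_hypernym(title):
        url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + title
        extract = requests.get(url).json().get("extract", "")
        m = re.search(r"\b(?:is|was) an? ([\w -]+?)[.,;]", extract)
        return m.group(1) if m else None

    print(naive_hypernym("Rembrandt"))  # e.g. "Dutch Golden Age painter"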

Homepage

Targeted Hypernym Discovery

Screencast

THD Screencast

Responsible partner

University of Economics Prague

Contact person

Tomas Kliegr


Metadata Conversion Tool

Description

The Metadata Conversion Tool is the primary component for generating the RDF-based semantic descriptions of the media. It uses other components such as NERD (see above) to process the different legacy metadata it receives (including the outputs of the EXMARaLDA tool), and outputs an RDF description conforming to the LinkedTV ontology (http://www.linkedtv.eu/ontology), in which fragments of the annotated video are linked to Semantic Web URIs (Linked Data). In this demo, we show (an example query follows the list):

  • Automatic conversion into RDF of legacy metadata attached to video content, while keeping provenance information
  • Automatic conversion into RDF of WP1 analysis results performed on this video content, while keeping provenance information
  • Automatic interlinking of common resources with LOD resources
  • Automatic push of the resulting metadata in the LinkedTV Platform
  • Useful SPARQL queries to show what can then be retrieved
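
As a flavour of the last point, here is an illustrative query in the style such metadata supports; the prefixes are real vocabularies (Open Annotation and the W3C Media Ontology), but the endpoint URL, the media URI and the exact modelling are assumptions:

    # List annotation bodies attached to fragments of one media resource.
    import requests

    QUERY = """
    PREFIX oa: <http://www.w3.org/ns/oa#>
    PREFIX ma: <http://www.w3.org/ns/ma-ont#>
    SELECT ?fragment ?body WHERE {
      ?annotation oa:hasTarget ?fragment ;
                  oa:hasBody   ?body .
      ?fragment ma:isFragmentOf <http://example.org/media/12345> .
    } LIMIT 10
    """

    resp = requests.get("http://example.org/sparql",  # placeholder endpoint
                        params={"query": QUERY, "format": "json"})
    for row in resp.json()["results"]["bindings"]:
        print(row["fragment"]["value"], row["body"]["value"])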

Homepage

Metadata Conversion Tool

Responsible partner

EURECOM

Contact person

Raphaël Troncy


Live Topic Generation from Event Streams

Description

Social platforms constantly record streams of heterogeneous data about human activities, feelings, emotions and conversations, opening a real-time window on the world. Trends can be computed, but making sense of them is extremely challenging due to the heterogeneity of the data and its dynamics, which often make trends short-lived phenomena. We developed a framework which collects microposts shared on social platforms that contain media items matching a query, for example a trending event. It automatically creates different visual storyboards reflecting what users have shared about this particular event. More precisely, it leverages: i) visual features from media items for near-duplicate detection, and ii) textual features from status updates to interpret, cluster and visualize media items. The prototype is publicly available at http://mediafinder.eurecom.fr.

Homepage

Live Topic Generation from Event Streams

Demo Video

Responsible partner

EURECOM

Contact person

Raphaël Troncy


Tracking and Analyzing The 2013 Italian Election

Description

Social platforms open a window onto what is happening in the world right now: fragmented pieces of heterogeneous data, such as (micro-)posts and media items, are posted by people sharing their feelings or activities related to events. This information is worth analyzing in order to get the big picture of an event from the crowd’s point of view. Here we present a general framework to capture and analyze microposts containing media items relevant to a search term. We describe the results of an experiment that consists of collecting fresh social media posts (posts containing media items) from numerous social platforms in order to generate the story of the “2013 Italian Election” from the crowd’s point of view. Items are grouped into meaningful time intervals that are further analyzed through deduplication, clustering and visual representation. The final output is a storyboard that provides a satirical summary of the election as perceived by the crowd. The system is publicly available at http://mediafinder.eurecom.fr/story/elezioni2013

Homepage

Tracking and Analyzing The 2013 Italian Election

Demo Video

Responsible partner

EURECOM

Contact person

Raphaël Troncy


Grab your Favorite Video Fragment: Interact with a Kinect and Discover Enriched Hypervideo

Description

In this demonstration, we propose an approach for enriching the user experience when watching television using a second screen device. The user can control the video program being watched using a Kinect and can grab, at any time, a fragment from this video. Then, we perform named entity recognition on the subtitles of this video fragment in order to spot relevant concepts. Entities are used to gather information from the Linked Open Data cloud and to discover what the vox populi says about this program. This generates media galleries that enrich the seed video fragments grabbed by the user who can then navigate this enriched content on a second screen device.

Demo Video

Responsible partner

EURECOM

Contact person

Raphaël Troncy


Linked Services Infrastructure

Description

LSI makes use of Web APIs of online media platforms such as Flickr or YouTube, defining mapping rules between the semantic query (in terms of a Linked Data resource) and the structural API query to the non-semantic Web API. The transformation from semantic to structural query is called “lowering”, while the transformation of the structured result (usually JSON or XML) into a semantic result (RDF) is called “lifting”. Thanks to the mapping rules, media can be retrieved from a source repeatedly (provided the Web API does not change) while the mapping only needs to be defined once. Media resource matches from different APIs are collected in parallel, and a local store of metadata about media resources relevant to known, expected concepts has been added to improve retrieval speed. LSI returns a list of matching media resources in RDF, with additional metadata for each media resource that can be used in subsequent ranking and selection of the most relevant matches.
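
A minimal sketch of lowering and lifting around a Flickr-style API may help; the mapping rule below (label of the Linked Data resource used as keyword query) and the produced triples are simplified assumptions, not LSI's actual rules, and YOUR_KEY is a placeholder:

    # Lowering: Linked Data resource -> structural API parameters.
    # Lifting:  structured JSON result -> RDF-style triples.
    import requests

    def lower(resource_uri):
        label = resource_uri.rstrip("/").rsplit("/", 1)[-1].replace("_", " ")
        return {"method": "flickr.photos.search", "text": label,
                "format": "json", "nojsoncallback": 1, "api_key": "YOUR_KEY"}

    def lift(photo):
        uri = f"http://www.flickr.com/photos/{photo['owner']}/{photo['id']}"
        return [(uri, "rdf:type", "ma:MediaResource"),
                (uri, "ma:title", photo.get("title", ""))]

    params = lower("http://dbpedia.org/resource/Berlin")
    data = requests.get("https://api.flickr.com/services/rest/",
                        params=params).json()
    for photo in data["photos"]["photo"][:5]:
        print(lift(photo))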

Homepage

Linked Services Infrastructure

Responsible partner

STI

Contact person

Lyndon Nixon


Personalisation

Content and Concept Filtering Demonstrator

Description

This web demonstrator serves as the entry point for the content and concept filtering services provided by the f-PocketKRHyper reasoner, developed by CERTH-ITI for LinkedTV. Functionalities supported by this demonstrator include creating or updating a preference profile in a designated ontology formalization, receiving recommended content from the items available to the system based on that profile, and receiving recommended concepts based on the propagation of the user’s interests across the LinkedTV personalization concept space (LUMO). Additionally, the user may review the content available to the system and upload or update the semantic descriptions of content items. The web demo is accompanied by a video presentation of its functionalities.

Homepage

To be announced soon

Demo video

Responsible partner

CERTH-ITI

Contact person

Dimitrios Panagiotou, Dorothea Tsatsou, Vasileios Mezaris


LinkedTV User Model Editor (LUME)

Description

The LinkedTV User Model Editor (LUME) provides an intuitive user interface for end users of LinkedTV to build and manage their user models. It is implemented as a web application and is accessible over the Web (with authentication). It also provides RESTful web services for integration with other LinkedTV components, in particular the LinkedTV video player.

Homepage

LUME
(For the user name and password, please contact Fraunhofer IAIS.)

Responsible partner

Fraunhofer IAIS

Contact person

Rüdiger Klein


LinkedTV Semantic Filtering

Description

LSF is the implementation of LinkedTV Semantic Filtering. It provides an efficient and scalable system to filter media fragment annotations (MFA) and enriched media content (eMFA) using personalized user models in a context-sensitive way. The core of LSF is a graph matching algorithm which correlates the active user interest model (aUIM) with the (enriched) media fragment annotations (eMFA).
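
As a simplified illustration of the correlation step, the sketch below reduces both the active user interest model and the fragment annotations to weighted concept sets; the real LSF matches full ontology graphs, and the lumo: concept names here are made up:

    # Rank fragments by the sum of the user's interest weights over the
    # concepts that annotate each fragment.
    def filter_fragments(user_model, fragments, threshold=0.5):
        ranked = []
        for fragment_id, concepts in fragments.items():
            score = sum(user_model.get(c, 0.0) for c in concepts)
            if score >= threshold:
                ranked.append((fragment_id, score))
        return sorted(ranked, key=lambda p: p[1], reverse=True)

    user = {"lumo:Politics": 0.9, "lumo:Sports": 0.1, "lumo:Berlin": 0.6}
    frags = {"frag1": {"lumo:Politics", "lumo:Berlin"},
             "frag2": {"lumo:Sports"}}
    print(filter_fragments(user, frags))  # frag1 ranks first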

Homepage

LUME
(For the user name and password, please contact Fraunhofer IAIS.)

Responsible partner

Fraunhofer IAIS

Contact person

Rüdiger Klein


General Analytics INterceptor (GAIN)

Description

GAIN is a stack of web applications and services for capturing and preprocessing user interactions with semantically described content. GAIN outputs a set of instances in tabular form (fixed-length vectors) suitable for further processing with generic machine-learning algorithms.
Within LinkedTV, GAIN is a component of a “SMART-TV” recommender system. Content that users interact with is automatically described with DBpedia types using the THD Named Entity Recognition (NER) system, and user interest is determined from collected interest clues.
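
A minimal sketch of the aggregation into fixed-length vectors, with an invented feature vocabulary and invented interest-clue weights (GAIN's own features and weighting are not reproduced here):

    # Turn interaction events into one fixed-length instance per item.
    from collections import defaultdict

    TYPES = ["dbo:Person", "dbo:Place", "dbo:Organisation"]  # fixed columns

    def to_instances(events):
        per_item = defaultdict(lambda: defaultdict(float))
        for e in events:
            weight = {"play": 0.5, "bookmark": 1.0, "skip": -1.0}[e["action"]]
            for t in e["entity_types"]:
                per_item[e["item"]][t] += weight
        # One vector per item, in a stable column order.
        return [[per_item[item][t] for t in TYPES] for item in sorted(per_item)]

    events = [
        {"item": "news1", "action": "play", "entity_types": ["dbo:Person"]},
        {"item": "news1", "action": "bookmark", "entity_types": ["dbo:Place"]},
        {"item": "news2", "action": "skip", "entity_types": ["dbo:Person"]},
    ]
    print(to_instances(events))  # [[0.5, 1.0, 0.0], [-1.0, 0.0, 0.0]]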

Homepage

General Analytics INterceptor

Screencast

Screencast

Demo video

Responsible partner

University of Economics Prague

Contact person

Jaroslav Kuchař


EasyMiner

Description

EasyMiner is a web-based rule learning system producing decision rules and association rules. Its user interface resembles a web search engine: the user poses a query in the form of a rule pattern. Data are uploaded as a CSV file or accessed as a remote database table. Users can rely on the automatic data preprocessing facility or define the preprocessing manually. EasyMiner can work with multi-valued attributes, supports negations and conjunctions in rules, and offers multiple interest measures that can be used as constraints, including support, confidence, lift and chi-square. The discovered rules can be exported to a business rules system (GUHA AR PMML or Drools DRL format). EasyMiner also has built-in reporting.
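
For reference, the three most common of these interest measures can be computed as follows; the toy transactions are invented:

    # Support, confidence and lift for a rule "antecedent => consequent"
    # over a list of transactions (each a set of items).
    def rule_measures(transactions, antecedent, consequent):
        n = len(transactions)
        n_a = sum(antecedent <= t for t in transactions)   # antecedent holds
        n_c = sum(consequent <= t for t in transactions)   # consequent holds
        n_ac = sum((antecedent | consequent) <= t for t in transactions)
        support = n_ac / n
        confidence = n_ac / n_a if n_a else 0.0
        lift = confidence / (n_c / n) if n_c else 0.0
        return support, confidence, lift

    data = [{"news", "politics"}, {"news", "sports"},
            {"news", "politics", "berlin"}, {"sports"}]
    # Rule news => politics: support 0.5, confidence ~0.67, lift ~1.33
    print(rule_measures(data, {"news"}, {"politics"}))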

Homepage

EasyMiner

Screencast

Screencast

Responsible partner

University of Economics Prague

Contact person

Tomas Kliegr


Context Detection

Description

Contextual features (for the moment, the number of people present) are extracted using an RGBD camera. These features are then sent to GAIN through the player server, which identifies the video ID and the video time at which the change in the number of people occurs.

Demo video

Responsible partner

UMONS

Contact person

Matei Mancas


Attention Tracker

Description

The viewer’s head direction is extracted using an RGBD camera and sent to GAIN through the player server, which identifies the video ID and the video time at which the change in attention occurs. If the viewer is looking towards the TV screen, the approximate screen coordinates (+/- 10 cm) are also sent along with the user ID.
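
Purely to illustrate the kind of message such a tracker could send, here is a hypothetical event payload; the endpoint and all field names are invented and do not document the GAIN API:

    # Hypothetical attention event posted via the player server to GAIN.
    import requests

    event = {
        "user_id": "viewer-42",
        "video_id": "rbb_aktuell_2013-06-01",
        "video_time": 123.4,          # seconds into the program
        "looking_at_screen": True,
        "gaze_xy_cm": [51.0, 28.5],   # approximate (+/- 10 cm) screen coords
    }
    requests.post("http://example.org/gain/events", json=event)  # placeholder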

Demo video

Responsible partner

UMONS

Contact person

Matei Mancas

