Online demos

Demo categories


Scenarios

LinkedCulture

Description

This scenario, developed by the Netherlands Institute for Sound and Vision, focuses on cultural heritage. It uses material from the program Tussen Kunst & Kitsch (similar to the BBC’s Antiques Roadshow), courtesy of Dutch public broadcaster AVROTROS. In the show, people bring in art objects to be assessed by an expert. The objects brought in make it possible to add relevant information on questions such as Who, When, What, Where and Subject, all related to art history and cultural heritage.

Demos

All Demo videos

Responsible partner

Beeld en Geluid (Sound and Vision)

Contact person

Lotte Belice Baltussen


LinkedNews

Description

The basic idea of RBB’s scenario is to enrich the local news program according to the needs and interests of the individual viewer. In some cases this may simply mean watching the daily news show as it is; in others, the viewer may be particularly interested in certain topics in some of the news items and want to learn more about the topic in question or explore one specific aspect. The result is a personalised, TV-based on-demand service that directly links content concepts to online sources displayed in the LinkedTV service.

Demos

All Demo videos

Responsible partner

RBB

Contact person

Nicolas Patz


LinkedTV News application (international version)

Description

LinkedTV News is a second screen application for tablets that acts as a companion for viewers watching news broadcasts. Its main goal is to enrich television newscasts by integrating them with other media, thus concentrating the different activities involved in getting informed about the news into one interactive, multi-screen and potentially mobile experience. It is designed to accommodate two interaction modes: a lean-back mode and a lean-forward mode.

This is an internationalized version of the LinkedNews scenario application above.

Demos

Homepage

LinkedTV News

Responsible partner

CWI

Contact person

Michiel Hildebrand


SocialDocumentary

Description

Visitors are invited to interact with the installation by choosing keywords through the manipulation of three cubes. Each cube represents a class of keywords (people, action, emotion). As the videos are cut into segments and tagged with the same keywords, the system automatically chooses the most relevant video according to the position of the cubes. A Kinect-based system allows us to track the faces of the two closest visitors and to infer their interest in the video being played. The more interested the visitors are, the higher the probability that this video will be shown to the next visitors, as in the recommendation systems of video platforms. In this way, the installation evolves with its public, and some segments emerge from the others as the “most popular”.
This video presents the public exhibit that concluded a one-week workshop held in August 2014 at the BüyükAyi atelier in Istanbul.
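
A minimal sketch of the popularity-weighted selection idea, in Python (segment names and interest scores are hypothetical; the installation’s actual logic is richer):

    import random

    # Hypothetical interest scores accumulated from past visitors' reactions.
    segment_interest = {
        "segment_01": 0.9,  # very popular with previous visitors
        "segment_02": 0.3,
        "segment_03": 0.6,
    }

    def pick_segment(candidates):
        """Pick one of the segments matching the current cube configuration,
        biased towards segments that interested previous visitors most."""
        weights = [segment_interest[s] for s in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

    # Segments whose tags match the cubes' current positions (hypothetical).
    print(pick_segment(["segment_01", "segment_03"]))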

Demos

Homepage

SocialDocumentary (source code at Github)

Responsible partner

University of Mons

Contact person

Fabien Grisard


LinkedTV Platform

LinkedTV Platform

Description

The LinkedTV Platform provides an all-in-one solution for content owners to ingest, analyse, annotate and enrich their video materials.

Responsible partner

Condat

Contact person

Jan Thomsen

Demo Video


LinkedTV Applications

Multiscreen Toolkit

Description

The Multiscreen Toolkit enables rapid prototyping of multiscreen applications, allowing developers and designers to focus on their concept ideas rather than having to deal with synchronization and communication between screens. Support and default solutions are provided for sharing and notifications between screens, and functionalities are available for different interface options such as touch screens and traditional remotes.

The toolkit is used in LinkedTV for prototyping and implementing a 2nd screen application, which enables viewing and exploring the enrichments related to a TV program on a touchscreen tablet. The application also supports social interaction between viewers while watching a program.

Responsible partner

Noterik BV

Contact person

Daniel Ockeloen

Demo Video

Screencast Set Up Demo

Screencast Remote API


NERD video viewer

Description

The NERD video viewer demonstrates the functionality of the NERD tool by performing entity extraction on a given YouTube or DailyMotion video and showing the results in a Web interface. Entities are highlighted in the video transcript and are linked to explanatory information from the Web.

Homepage

NERD video viewer

Responsible partner

EURECOM

Contact person

Contact


Gesture recognition interface

Description

A set of predefined gestures (play, pause, next, previous, etc.) can be recognized in the interactive video player through this gesture recognition interface.
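
A minimal Python sketch of dispatching recognized gestures to player commands (the gesture names and the player interface are hypothetical):

    # Hypothetical mapping from recognized gestures to player commands.
    GESTURE_COMMANDS = {
        "swipe_left": "previous",
        "swipe_right": "next",
        "palm_open": "pause",
        "palm_closed": "play",
    }

    def handle_gesture(gesture, player):
        """Translate a recognized gesture into a call on the video player."""
        command = GESTURE_COMMANDS.get(gesture)
        if command is not None:
            getattr(player, command)()  # e.g. player.pause()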

Demo video

Responsible partner

UMONS

Contact person

Matei Mancas


Media Analysis

Keyword Extraction Tool

Description

This online demo performs keyword extraction on German and Dutch text. The screen is split into two parts (for German and Dutch respectively). The form enables users to fill a text box with some text, indicate a file name, and submit this file for analysis. The file is then indexed (uploading and indexing may take a while). After indexing, the top 20 keywords extracted from the file are displayed at the top of the screen. For the keyword extraction, the algorithm employs dedicated part-of-speech taggers for German and Dutch, which also make the identification of key phrases feasible.

Below the form, a list of already uploaded and indexed files is shown. Clicking on a file name displays the keywords extracted from that file along with its text. Moreover, the user can update the text of existing files and re-submit them for analysis. Uploaded files can be deleted by clicking the cross symbol [X] at the end of each file name. The preloaded documents have been built from content provided by the LinkedTV partners RBB (RBB Aktuell) and Sound & Vision (Tussen Kunst & Kitsch, AVRO).
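
The tool itself relies on dedicated part-of-speech taggers for German and Dutch; the following Python snippet is only a minimal frequency-based sketch of the top-20-keywords idea (the stop-word list is illustrative and far from complete):

    import re
    from collections import Counter

    # A tiny, incomplete Dutch stop-word list, for illustration only.
    STOPWORDS = {"de", "het", "een", "en", "van", "in", "is", "op", "dat"}

    def top_keywords(text, n=20):
        """Return the n most frequent non-stop-word tokens as naive keywords."""
        tokens = re.findall(r"[a-zäöüéè]+", text.lower())
        words = [t for t in tokens if t not in STOPWORDS and len(t) > 2]
        return [word for word, _ in Counter(words).most_common(n)]

    print(top_keywords("De expert taxeert een schilderij en het schilderij blijkt echt."))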

Homepage

Keyword Extraction Tool (the system is free to use, subject to user registration; contact Tomas Kliegr)

Demo Video

Responsible partner

University of Economics Prague

Contact person

Ivo Lašek


Shot Segmentation

Description

This video demo presents the results of the shot segmentation algorithm on one video from the documentary scenario (Sound & Vision; Tussen Kunst & Kitsch, AVRO). The objective of this algorithm is to segment a video into shots, i.e. sequences of consecutive frames captured without interruption by a single camera, by performing shot boundary detection. The transition between two successive shots can be abrupt (one frame belongs to a shot and the following frame belongs to the next shot) or gradual (two shots are combined using chromatic, spatial or spatio-chromatic production effects that gradually replace one shot with another). The algorithm detects both abrupt and gradual transitions by assessing the visual similarity of neighboring frames, through the extraction and matching of both global and local visual features. The results are presented as subtitles in the video, indicating the starting point of each detected shot.

The video shot segmentation and concept detection demonstrator was developed by CERTH-ITI as part of the MediaMixer EU FP7 CSA project (http://www.mediamixer.eu), using video analysis algorithms developed in LinkedTV. In this demo, the LinkedTV analysis algorithms are applied to lecture videos from the videolectures.net collection. The videos are automatically segmented into shots, and 37 concept detectors are then applied to each shot, revealing the shots’ visual content. These analysis results enable the user to search by concept and to access the lecture videos at the shot level.
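
A minimal sketch of abrupt-cut detection via frame-histogram comparison with OpenCV in Python (the LinkedTV algorithm additionally matches local visual features and detects gradual transitions; the input file name is hypothetical):

    import cv2

    def detect_cuts(path, threshold=0.5):
        """Report frame indices where the colour-histogram correlation between
        consecutive frames drops, signalling a likely abrupt shot change."""
        cap = cv2.VideoCapture(path)
        prev_hist, cuts, i = None, [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                                [0, 256, 0, 256, 0, 256])
            hist = cv2.normalize(hist, hist).flatten()
            if prev_hist is not None:
                sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
                if sim < threshold:
                    cuts.append(i)
            prev_hist, i = hist, i + 1
        cap.release()
        return cuts

    print(detect_cuts("episode.mp4"))  # hypothetical input video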

Homepage

http://multimedia.iti.gr/mediamixer/demonstrator.html

Demo Video

Responsible partner

CERTH

Contact person

Vasileios Mezaris


Object Re-detection

Description

This video demo presents the results of the object re-detection algorithm on a video from the documentary scenario (Sound & Vision; Tussen Kunst & Kitsch, AVRO). Object re-detection aims at finding occurrences of specific objects in a single video or in a collection of still images and videos. The algorithm takes as input a query image of an object of interest, which the user specifies manually by marking it on one frame of the video with a bounding box. This picture is then compared against consecutive or non-consecutive frames of the video, and instances of the depicted object are automatically detected and marked with a bounding box. In this video demo, the detected re-occurrences of the object of interest are indicated by a green rectangle around them. The object re-detection algorithm is robust against a range of scale and rotation changes and against partial occlusion. However, in some cases extremely different viewing conditions (due to major changes in scale and/or rotation) under which the object re-appears alter the visual information significantly, leading to detection failure.
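
A minimal sketch of the matching idea using ORB features and homography estimation in OpenCV (the LinkedTV algorithm combines global and local features and is considerably more robust):

    import cv2
    import numpy as np

    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def object_present(query_img, frame, min_matches=15):
        """Return True if enough geometrically consistent ORB matches relate
        the query image to the frame, i.e. the object likely re-appears."""
        kp1, des1 = orb.detectAndCompute(query_img, None)
        kp2, des2 = orb.detectAndCompute(frame, None)
        if des1 is None or des2 is None:
            return False
        matches = matcher.match(des1, des2)
        if len(matches) < min_matches:
            return False
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H is not None and int(mask.sum()) >= min_matches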

Homepage

http://multimedia.iti.gr/object_redetection/demonstrator.html

Demo Video

Responsible partner

CERTH

Contact person

Vasileios Mezaris


LinkedTV REST Service for Multimedia Analysis

Description

This web-based REST Service integrates the LinkedTV techniques for audio, visual and textual analysis of multimedia content. Its communication with the LinkedTV platform is fully automatic, while the analysis is performed by three interconnected services that communicate via established synchronous and asynchronous channels. Specifically, the audio analysis sub-service performs Automatic Speech Recognition (ASR) and Speaker Identification on the audio channel; the visual analysis sub-service performs Shot Segmentation, Concept Detection, Chapter Segmentation, and Face Detection and Tracking on the visual channel; and the text analysis sub-service performs Keyword Extraction on the video’s subtitles or metadata, or on the output of the audio analysis (i.e. the ASR transcripts).
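
A hedged Python sketch of how a client might submit a video for analysis over such a REST interface; the endpoint URL and JSON fields below are assumptions for illustration, not the documented LinkedTV API:

    import requests

    # Hypothetical endpoint and payload; consult the LinkedTV documentation
    # for the actual REST interface.
    SERVICE_URL = "http://example.org/linkedtv/analysis"

    payload = {
        "video_url": "http://example.org/media/episode.mp4",
        "tasks": ["asr", "shot_segmentation", "concept_detection",
                  "keyword_extraction"],
    }

    response = requests.post(SERVICE_URL, json=payload, timeout=30)
    response.raise_for_status()
    print(response.json())  # e.g. a job id to poll for the analysis results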

Homepage

Concept detection demo

Demo Video

Responsible partner

CERTH

Contact person

Evlampios Apostolidis, Vasileios Mezaris


Face Detection

Description

This video demo presents the results of the face detection algorithm applied to a Sound & Vision video. When a face is detected, the algorithm demarcates it with a bounding box. Face detection is performed by applying Haar-like cascade classifiers, combined with skin color detection, to every frame of the video sequence. This method performs well on images, and we adapted it to videos in order to create face tracks: spatio-temporal information is used to link matching faces, and linear interpolation smooths the results.
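
A minimal sketch of the Haar-cascade step with OpenCV in Python (the demo additionally combines skin color detection and temporal smoothing to build face tracks; the input file name is hypothetical):

    import cv2

    # Frontal-face Haar cascade shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture("episode.mp4")  # hypothetical input video
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            print(f"frame {frame_idx}: face at ({x}, {y}), size {w}x{h}")
        frame_idx += 1
    cap.release()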

Demo Video

Responsible partner

EURECOM

Contact person

Mathilde Sahuguet


Media annotation

LinkedTV Editor Tool

Description

The Editor Tool is developed in LinkedTV to allow visualisation of the annotations and enrichments generated for a video, as well as their manual correction and completion within the Web browser.

Homepage

LinkedTV Editor Tool Free Trial

Responsible partner

Sound and Vision

Contact person

Jaap Blom


Linked media (media interlinking)

NERD Platform

Description

NERD aggregates several named entity recognition services into a single API and Web interface. It is used in LinkedTV to process the annotations generated by the Video Analysis step and to extract named entities, which are identified unambiguously using Semantic Web URIs (Linked Data). In this demo, we show (a sketch of a client call follows the list):

  • Applying named entity recognition to any text, in different languages including Dutch and German
  • Applying named entity recognition to timed text, with temporal re-alignment of the named entities in the video and a video player showcasing the results
  • A personalized dashboard that enables a logged-in user to monitor his or her NERD activity
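
A hedged Python sketch of calling a NERD-style extraction endpoint; the URL, parameters and response shape below are assumptions for illustration, not the documented NERD API:

    import requests

    # Hypothetical endpoint and parameters; see the NERD homepage for the real API.
    NERD_URL = "http://nerd.eurecom.fr/api/annotation"  # assumed endpoint
    params = {
        "key": "YOUR_API_KEY",  # NERD requires user registration
        "text": "Angela Merkel besucht Amsterdam.",
        "extractor": "combined",  # assumed parameter
    }

    resp = requests.post(NERD_URL, data=params, timeout=30)
    resp.raise_for_status()
    for entity in resp.json():  # assumed: a list of entity annotations
        print(entity.get("label"), entity.get("uri"))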

Presentation

Homepage

NERD Platform

Responsible partner

EURECOM

Contact person

Raphaël Troncy


Targeted Hypernym Discovery (THD)

Description

THD performs recognition and classification of named entities and common entities in English, Dutch and German text, and disambiguates the entities to Wikipedia articles. Entities are also assigned types from the DBpedia and YAGO ontologies, providing semantic interoperability. In addition to DBpedia and YAGO, the system uses the Linked Hypernyms Dataset as an underlying knowledge base, which makes THD produce results complementary to those of wikifiers based only on DBpedia or YAGO. A unique feature of THD is the possibility to extract an entity’s type from live Wikipedia using on-demand hypernym discovery.
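
A hedged Python sketch of querying a THD-style REST endpoint; the URL, parameters and response shape are assumptions for illustration, not the documented THD API:

    import requests

    # Hypothetical endpoint and parameters; see the THD homepage for the real API.
    THD_URL = "http://example.org/thd/extraction"  # assumed endpoint
    params = {
        "text": "Rembrandt painted The Night Watch in Amsterdam.",
        "language": "en",
    }

    resp = requests.get(THD_URL, params=params, timeout=30)
    resp.raise_for_status()
    for entity in resp.json():  # assumed: entities with type and Wikipedia link
        print(entity.get("label"), entity.get("type"))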

Homepage

Targeted Hypernym Discovery

Screencast

THD Screencast

Responsible partner

University of Economics Prague

Contact person

Tomas Kliegr


Metadata Conversion Tool

Description

The Metadata Conversion Tool is the primary component for generating the RDF-based semantic descriptions of the media. It uses other components such as NERD (see above) to process the different legacy metadata it receives (including the outputs of the EXMARaLDA tool), and outputs an RDF description conforming to the LinkedTV ontology (http://www.linkedtv.eu/ontology), in which fragments of the annotated video are linked to Semantic Web URIs (Linked Data). In this demo, we show:

  • Automatic conversion into RDF of legacy metadata attached to video content, while keeping provenance information
  • Automatic conversion into RDF of WP1 analysis results performed on this video content, while keeping provenance information
  • Automatic interlinking of common resources with LOD resources
  • Automatic push of the resulting metadata in the LinkedTV Platform
  • Useful SPARQL queries to show what can then be retrieved (a sketch follows this list)
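
As an illustration of the last point, a hedged Python sketch of a SPARQL query over such RDF descriptions; the endpoint URL and the exact graph shape are assumptions, although the LinkedTV ontology does build on the Open Annotation model:

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Hypothetical endpoint; the actual LinkedTV SPARQL endpoint may differ.
    sparql = SPARQLWrapper("http://example.org/linkedtv/sparql")
    sparql.setQuery("""
        PREFIX oa: <http://www.w3.org/ns/oa#>
        SELECT ?fragment ?body WHERE {
            ?annotation oa:hasTarget ?fragment ;
                        oa:hasBody ?body .
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["fragment"]["value"], row["body"]["value"])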

Homepage

Metadata Conversion Tool

Responsible partner

EURECOM

Contact person

Raphaël Troncy


Personalisation

Content and Concept Filtering Demonstrator

Description

This web demonstrator serves as the entry point for the content and concept filtering services provided by the f-PocketKRHyper LiFR reasoner, developed by CERTH-ITI for LinkedTV. The functionalities supported by this demonstrator include creating or updating a user preference profile in a designated ontology formalization, receiving content recommendations, based on this profile, from the pool of content items available to the system, and receiving concept recommendations based on the propagation of the user’s interests across the LinkedTV personalization concept space (LUMO). Additionally, the user may review the content available to the system and upload or update the semantic descriptions of content items. The web demo is accompanied by a video presentation of its functionalities.
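
A minimal conceptual sketch in Python of profile-based filtering (the LiFR reasoner performs actual ontological matching; the concept names and weights below are hypothetical):

    # Hypothetical user profile: LUMO-style concepts with interest weights
    # (negative weights denote disinterest).
    profile = {"lumo:Painting": 0.9, "lumo:Politics": -0.6, "lumo:Music": 0.4}

    # Hypothetical semantic descriptions of the available content items.
    items = {
        "item_A": {"lumo:Painting": 0.8, "lumo:Music": 0.2},
        "item_B": {"lumo:Politics": 0.9},
    }

    def score(item_concepts):
        """Naive degree of match between a content item and the profile."""
        return sum(w * profile.get(c, 0.0) for c, w in item_concepts.items())

    recommended = sorted(items, key=lambda i: score(items[i]), reverse=True)
    print(recommended)  # ['item_A', 'item_B']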

Homepage

http://multimedia.iti.gr:8080/reasoner/index.jsp

Demo video

Responsible partner

CERTH-ITI

Contact person

Georgios Lazaridis, Dorothea Tsatsou, Vasileios Mezaris


General Analytics INterceptor (GAIN)

Description

GAIN is a stack of web applications and services for capturing and preprocessing user interactions with semantically described content. GAIN outputs a set of instances in tabular form (fixed-length vectors) suitable for further processing with generic machine-learning algorithms.
Within LinkedTV, GAIN serves as a component of a “SMART-TV” recommender system. Content that the user interacts with is automatically described with DBpedia types using the THD named entity recognition (NER) system, and user interest is determined from collected interest clues.
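
A minimal Python sketch of the vectorization idea: aggregating a user’s interest clues over DBpedia types into one fixed-length instance (the type vocabulary, interactions and clue values are hypothetical):

    # A fixed vocabulary of DBpedia types defines the vector length.
    TYPES = ["dbo:Person", "dbo:Place", "dbo:Artwork", "dbo:Event"]

    # Hypothetical interaction log: (types found in the content, interest clue).
    interactions = [
        (["dbo:Person", "dbo:Place"], 1.0),  # e.g. the user bookmarked the item
        (["dbo:Artwork"], 0.5),              # e.g. the user played the item
        (["dbo:Person"], -1.0),              # e.g. the user skipped the item
    ]

    def to_vector(log):
        """Aggregate interest clues into one fixed-length instance."""
        vec = [0.0] * len(TYPES)
        for types, clue in log:
            for t in types:
                vec[TYPES.index(t)] += clue
        return vec

    print(to_vector(interactions))  # [0.0, 1.0, 0.5, 0.0]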

Homepage

General Analytics INterceptor

Screencast

Screencast

Demo video

Responsible partner

University of Economics Prague

Contact person

Jaroslav Kuchař


EasyMiner

Description

EasyMiner is a web-based rule learning system producing decision rules and association rules. Its user interface resembles a web search engine: the user poses a query in the form of a rule pattern. The data are uploaded via a CSV file or accessed as a remote database table. The user can rely on the automatic data preprocessing facility or define the preprocessing manually. EasyMiner can work with multi-valued attributes, supports negations and conjunctions in rules, and offers multiple interest measures that can be used as constraints, including support, confidence, lift and chi-square. The discovered rules can be exported to a business rules system (in GUHA AR PMML or Drools DRL format). EasyMiner also has built-in reporting.
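
A worked Python example of the listed interest measures on a toy transaction table: support, confidence and lift for the rule {bread} → {butter} (the data are made up for illustration):

    # Toy transactions; each row is one shopping basket.
    transactions = [
        {"bread", "butter"},
        {"bread", "butter", "milk"},
        {"bread"},
        {"milk"},
    ]

    n = len(transactions)
    antecedent, consequent = {"bread"}, {"butter"}

    support = sum((antecedent | consequent) <= t for t in transactions) / n
    confidence = support / (sum(antecedent <= t for t in transactions) / n)
    lift = confidence / (sum(consequent <= t for t in transactions) / n)

    print(f"support={support:.2f} confidence={confidence:.2f} lift={lift:.2f}")
    # support=0.50 confidence=0.67 lift=1.33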

Homepage

EasyMiner

Screencast

Screencast

Responsible partner

University of Economics Prague

Contact person

Tomas Kliegr


Context Detection

Description

Contextual features (for the moment, the number of people in front of the screen) are extracted using an RGBD camera. These features are then sent to GAIN through the player server, which identifies the video ID and the video time at which the change in the number of people occurs.
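
A hedged Python sketch of reporting such a contextual change to GAIN; the endpoint and payload fields are hypothetical, not the documented GAIN interface:

    import requests

    # Hypothetical GAIN-style tracking endpoint and event payload.
    GAIN_URL = "http://example.org/gain/listener"

    event = {
        "type": "context",
        "userId": "user-42",
        "videoId": "episode-2014-08-01",   # identified via the player server
        "videoTime": 734.2,                # seconds into the video
        "attributes": {"peopleCount": 2},  # extracted from the RGBD camera
    }

    requests.post(GAIN_URL, json=event, timeout=10).raise_for_status()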

Demo video

Responsible partner

UMONS

Contact person

Matei Mancas


Attention Tracker

Description

The viewer’s head direction is extracted using an RGBD camera and sent to GAIN through the player server, which identifies the video ID and the video time at which the change in attention occurs. If the viewer is looking towards the TV screen, the approximate gaze coordinates (+/-10 cm) are also sent along with the user ID.

Demo video

Responsible partner

UMONS

Contact person

Matei Mancas

