
RE-USING MEDIA ON THE (SEMANTIC) WEB

LEARN HOW TO ANNOTATE AND RE-USE ONLINE MEDIA FRAGMENTS FOR NEW APPLICATIONS

at the International Semantic Web Conference (ISWC) 2014, Riva del Garda, Trentino, Italy
on October 20 (afternoon), 2014, as a HALF DAY tutorial.

DESCRIPTION

The Web is developing not just into a more Semantic Web but also into a much richer Multimedia Web. While a layer of semantics is being developed on top of web pages and textual documents via structured data markup, and more and more Linked Data datasets are published, the rapidly growing mass of online media – audio, image, video – is nowhere near as integrated into this body of web-wide knowledge. Media annotation does take place within archives and repositories, but even the “semantic” annotation is typically disconnected from the Web and its semantics layer. Linked Data-based annotation of media resources published on the Web could drive new applications for media retrieval and re-use, to the benefit of both media owners and consumers.

This tutorial will look at tools and services to semantically annotate online media and use those annotations for online retrieval and re-use, based on a number of emerging web specifications and technologies. We will focus on means to annotate spatial and temporal fragments of media assets with Linked Data concepts, on using those annotations to discover relevance relations between distinct media assets, and on developing applications that use the discovered links between annotated media to provide enhanced user services.

STRUCTURE

The main topics of the tutorial are:

  • Media description in terms of fragments and creation of those fragments;
  • Media annotation models and semantic description based on Linked Data;
  • Extraction of semantic annotations using existing metadata, media analysis and named entity recognition (NER);
  • Publication of annotations and use in online multimedia search and retrieval;
  • Applications of re-use of annotated online media (our main example is an HTML5-based multi-screen enrichment of a TV news programme, http://linkedtv.project.cwi.nl/).

SCHEDULE

    • Session 1 (1400-1500): Media fragment specification and semantics
      Speaker: Raphaël Troncy (EURECOM)
      Summary: In this session we will introduce the W3C Media Fragments URI specification, highlighting how media fragments can be incorporated into known media description schemas, with a focus on the W3C Ontology for Media Resources and the Open Annotation Model. We will also discuss extensions to these ontologies to more richly link media fragments to the concepts they represent, re-using Linked Data as a Web-wide knowledge graph about concepts. We will briefly demonstrate various approaches to visual, audio and textual analysis in order to generate meaningful media fragments out of a media resource, as well as look at available annotation tools for semantically describing online media. Finally, we will show how existing text around media (subtitles, transcripts) can be used for fragment annotation through Named Entity Recognition services (NERD), and a combined approach for generating a semantic description of media from analysis, metadata and entity recognition (TV2RDF).
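To make the fragment syntax concrete, here is a minimal sketch of a Media Fragments URI parser in Python. It is deliberately simplified: it covers only bare NPT seconds for the temporal dimension (`t`) and pixel values for the spatial dimension (`xywh`); the full specification also allows `npt:`, `smpte` and `clock` time formats, percent-based spatial fragments, and `track`/`id` dimensions, none of which are handled here. The example URI is hypothetical.

```python
from urllib.parse import urlsplit, parse_qs

def parse_media_fragment(uri):
    """Parse the fragment part of a Media Fragments URI (simplified).

    Returns a dict with optional keys:
      't'    -> (start, end) in seconds; end is None for open-ended fragments
      'xywh' -> (x, y, w, h) in pixels
    """
    fragment = urlsplit(uri).fragment
    params = parse_qs(fragment)  # the fragment uses name=value pairs joined by '&'
    result = {}
    if "t" in params:
        start, _, end = params["t"][0].partition(",")
        result["t"] = (float(start or 0), float(end) if end else None)
    if "xywh" in params:
        result["xywh"] = tuple(int(v) for v in params["xywh"][0].split(","))
    return result

print(parse_media_fragment("http://example.org/video.mp4#t=10,20&xywh=160,120,320,240"))
# → {'t': (10.0, 20.0), 'xywh': (160, 120, 320, 240)}
```

Because the fragment dimensions reuse query-string syntax, the standard library's `parse_qs` is enough for this simplified case.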

    • Session 2 (1500-1630 with 30 min break): Linked Media: An approach to online media annotation and re-use
      Speaker: Lyndon Nixon (MODUL University)
      Summary: The second session looks at how applying Linked Data principles to the annotation, publication and retrieval of media fragments (“Linked Media”) can enable online media fragment re-use:
  • Introducing the Linked Media principles
  • Publishing Linked Media using dedicated multimedia RDF repositories
  • Retrieval of media resources that illustrate linked data concepts
  • Using the Linked Data graph to find relevant links between distinct media assets (examples with SPARQL)
  • Retrieval of links between annotated media to enable topical browsing (using the TVEnricher service)
  • Examples of Linked Media at scale: VideoLyzard and HyperTED
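The linking step in the list above can be sketched in a few lines. The following is a toy, in-memory illustration of the idea only: the fragment URIs and concept URIs are hypothetical, and plain Python sets stand in for an RDF repository. Two media fragments are considered linked when their annotations share a Linked Data concept; in practice this is expressed as a SPARQL query over the published annotations, not as Python code.

```python
from itertools import combinations

# Toy annotation store: media fragment URI -> set of Linked Data concept URIs.
# In a real deployment these annotations would live in an RDF repository.
annotations = {
    "http://example.org/news.mp4#t=10,20": {"http://dbpedia.org/resource/Berlin"},
    "http://example.org/doc.mp4#t=40,55":  {"http://dbpedia.org/resource/Berlin",
                                            "http://dbpedia.org/resource/Reichstag"},
    "http://example.org/clip.mp4#t=0,12":  {"http://dbpedia.org/resource/Paris"},
}

def linked_fragments(store):
    """Yield (fragment_a, fragment_b, shared_concepts) for every pair of
    distinct fragments whose annotations share at least one concept."""
    for (frag_a, c_a), (frag_b, c_b) in combinations(store.items(), 2):
        shared = c_a & c_b
        if shared:
            yield frag_a, frag_b, shared

for a, b, shared in linked_fragments(annotations):
    print(a, "<->", b, "via", shared)
```

Here the news fragment and the documentary fragment are linked because both are annotated with the Berlin concept, while the Paris clip remains unlinked; this shared-concept join is exactly what a SPARQL query over the annotation graph would compute.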

    • Session 3 (1630-1730): User experience driven design of Linked Media applications
      Speaker: Michiel Hildebrand (CWI)
      Summary: The final session looks at the development of applications on top of Linked Media. We focus on applications that enrich the experience of users watching TV programmes by providing background and related information from the Web. What kinds of user experience are suited to a specific TV programme? Which requirements do the desired user experiences pose on the algorithms that select and link media fragments? How do we build the applications that support these experiences by re-using material available on the Web? We answer these questions using example applications in the news and cultural heritage domains. We discuss the results of user studies in these domains and present the tool set developed in the LinkedTV project. The LinkedTV platform supports the annotation of media assets and the generation of links between media using the approaches presented in the previous tutorial parts, and exposes the generated RDF via a SPARQL endpoint as well as a dedicated REST service. An open-source multi-screen HTML5 toolkit supports the development of applications in which related media can be displayed or shared across different screens while the TV programme remains on a ‘main’ screen. The development process for the client applications will be presented, with particular focus on user interface aspects: how sets of related media are organised and displayed based on how they relate to one another. The applications themselves (LinkedNews and LinkedCulture) will be demoed, giving a concrete idea of how the annotations and links between media can be used to build new media re-use applications on the (Semantic) Web.

MOTIVATION FOR THE TUTORIAL

We welcome all Linked Data and Semantic Web researchers and practitioners who are interested in how semantic approaches may be applied to non-textual media on the Web.

Semantic Web and Linked Data research has reached a mature point with regard to text and hypertext documents, both in terms of the specification of their semantic annotations (RDF, RDFa) and the extraction of those semantic annotations (e.g. Named Entity Recognition). For non-textual assets, the community has so far failed to converge on an approach to semantically annotate media and re-use those annotations beyond individual, siloed systems. As the Web becomes largely non-textual in content, the Semantic Web cannot remain relevant if it cannot incorporate non-textual resources. Solutions to semantic media annotation on the Web and to subsequent media retrieval and re-use online are emerging, both through the application of Linked Data principles to media (“Linked Media”) and through W3C specifications for online media and its semantics (Media Fragments URIs, Ontology for Media Resources, Open Annotation Model).

These specifications and technologies are still very much at an “early adoption” stage, even within the research community let alone the commercial industry, so it is important to communicate their existence, explain their use and point to existing tools and services handling them. This is part of the goal of the MediaMixer (http://www.mediamixer.eu/) and LinkedTV (http://www.linkedtv.eu/) projects which support this tutorial.

PRESENTERS

RAPHAËL TRONCY

Assistant professor at EURECOM (Sophia Antipolis, France), leading the Multimedia Semantics research group. Raphaël Troncy is also co-chair of the W3C Media Fragments Working Group and of the W3C Multimedia Semantics Incubator Group. He is involved in many national (ACAV, Datalift) and European (K-Space, Petamedia, ALIAS, LinkedTV, OpenSEM) projects dealing with multimedia analysis and Semantic Web technologies in social media. Raphaël Troncy is an expert in audiovisual metadata and in combining existing metadata standards (such as MPEG-7) with current Semantic Web technologies. More details and publications at http://www.eurecom.fr/en/people/troncy-raphael

LYNDON NIXON

Dr Nixon is Assistant Professor in the New Media Technology group at MODUL University Vienna. He is responsible for the EU projects LinkedTV (www.linkedtv.eu) – as Scientific Coordinator – and MediaMixer (www.mediamixer.eu) – as Project Coordinator. He also teaches an MBA course on Media Asset Management and Re-use. His research domain since 2001 has been semantic technology and multimedia, with a focus on automated media interlinking, and he has co-authored 73 refereed papers and co-organised 31 workshops or conference tracks to date. More details and publications at https://sites.google.com/site/lyndonnixon/home

MICHIEL HILDEBRAND

Researcher at CWI in the Information Access group and Linked Data specialist at the search engine startup Spinque. His research focuses on human-centred design of interactive information systems for Linked Data and media. He leads the work package on hypervideo information interfaces in the EU FP7 project LinkedTV. He will present the latest results in the application design of the LinkedNews application from that work package. More details and publications at https://www.cwi.nl/people/2001

SUPPORTED BY LINKEDTV

 
