9 May 2015
As the EU-funded R&D project LinkedTV began its work in October 2011, bringing together 12 European research and industry experts from 8 countries, its stated goal was the “seamless interlinking of TV and the Web”. Convergence of TV and the Web was just beginning, and the visionary group, led scientifically by Dr Lyndon Nixon, now Assistant Professor at the New Media Technology Group in MODUL University Vienna, foresaw a near-term future in which watching TV or browsing Web content would become essentially the same experience, with consumers moving between TV and Web content as easily as Web users then followed hyperlinks between Web pages. TV viewing as a passive experience would be perfectly complemented by the active browsing of additional information and content on the Web that it could trigger.
The group could not have anticipated how quickly and significantly technology in the market would shift in this direction: increasing Internet bandwidth and device capabilities made TV and video streaming over the Internet available to all, broadcasters launched TVoD and catch-up TV services for their viewers, and SmartTV sales meant that TV viewing in the living room now also had an Internet backchannel available. The utopian world of merged TV and Web seemed close; however, TV programme viewers were not getting their information needs met through Web-connected apps on their TVs. Rather than full-screen Facebook on the TV, they were looking for information on things in the TV shows: background to the news story, more examples of the art visible in the background of a scene – and are there paintings by that artist in the nearby museum?
Today’s TV viewers are multitaskers – they have a laptop, tablet or smartphone to hand while consuming TV on the big screen, and they turn to that second device to use the Web for additional information and content. However, companion applications are not well synchronised to this search – largely, their “knowledge” of what’s on is limited to identifying the programme and linking to social Web conversation around it or a list of cast members. What they don’t know, and today can’t know, is what is INSIDE the TV programme at the moment the user is viewing it and could be interested in. So viewers’ real needs for Web and TV convergence are still not answered today – how do you find out more about something you see in a TV programme if you don’t know what it’s called, for example?
LinkedTV has been working on the solution. By bringing together R&D experts across Europe who could provide the right tools to enable the envisioned interlinking of TV and the Web, we have produced a number of demonstrators where the experience of watching news – from the German broadcaster RBB – or a cultural heritage programme – the Dutch version of Antiques Roadshow from AVROTROS – is enhanced by additional information and content at the viewer’s fingertips, whether through remote control actions on an HbbTV-supporting SmartTV or through a Web application on their tablet or laptop. This information goes far beyond the programme descriptions or cast details of today’s offerings – LinkedTV enables linking to information about concepts such as persons, places and organisations inside the news story, links to background or related stories, or even browsing similar art objects in European collections while watching the discussion about another art object on screen.
“Years of collaboration, knowledge and technology transfer, implementation, prototyping and evaluation have brought us to this point, where we can offer an integrated set of services and software to content owners who would like to enrich their video with links to related information”, summarises Dr Nixon, who initiated the LinkedTV idea following his PhD on multimedia enrichment in 2007 and acted as scientific coordinator of the LinkedTV project work, which finished in March 2015. “Trials with RBB and AVROTROS viewers have shown that they appreciate the ability to easily access further information about what they see in the TV programme when they want it. Viewers end up more satisfied and engaged with the content. The broadcasters also stand to gain by offering LinkedTV enrichments as an added-value service alongside selected content, as it can both attract new viewers and retain existing ones, and promote their archived and long-tail content with a new viewer experience.”
LinkedTV results – software, services, demos and reports – are published publicly at http://www.linkedtv.eu
LinkedTV products and demonstrators for media organisations are presented at http://showcase.linkedtv.eu
LinkedCulture demonstrator: https://vimeo.com/108891238
LinkedNews demonstrator: https://vimeo.com/119107849
Technology consultancy and proof-of-concept creation are available. Organisations interested in LinkedTV enrichments for their content can contact Modul Technology GmbH c/o Lyndon Nixon.
Europeana Labs has created an Application Showcase to help developers find the tools they need to process Europeana data and create innovative and inspiring new data services. The LinkedTV Editor Tool is an open-source, Web-based video annotation tool designed to simplify the complex task of semantic media annotation by plugging in video analysis (for shot and scene fragmentation), named entity extraction and content linking (for related information) services. In the context of the LinkedCulture scenario, the tool was used for annotating art objects in TV programmes (using a dedicated metadata model) and linking these annotations to related art objects via the Europeana API. For more information and contact, see
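For readers curious about the content linking step, the short Python sketch below shows how related art objects might be fetched for a label produced during annotation. It is an illustration only, not code from the Editor Tool itself: it assumes Europeana's public Search API (v2) with a registered API key, and the key, query term and image filter are placeholders.

```python
import requests

EUROPEANA_SEARCH = "https://api.europeana.eu/record/v2/search.json"
API_KEY = "YOUR_EUROPEANA_API_KEY"  # placeholder; free keys are issued by Europeana


def related_art_objects(annotation_label, rows=10):
    """Query the Europeana Search API for objects related to an annotated art object.

    `annotation_label` stands in for a label produced by the annotation step,
    e.g. the artist or object type identified in the TV programme.
    """
    params = {
        "wskey": API_KEY,
        "query": annotation_label,
        "qf": "TYPE:IMAGE",  # restrict to image records (assumed filter)
        "rows": rows,
    }
    response = requests.get(EUROPEANA_SEARCH, params=params, timeout=10)
    response.raise_for_status()
    data = response.json()
    # Keep a title and a portal link per result, as returned by the Search API.
    return [
        {"title": (item.get("title") or [""])[0], "link": item.get("guid")}
        for item in data.get("items", [])
    ]


if __name__ == "__main__":
    for obj in related_art_objects("Delftware vase"):  # hypothetical query term
        print(obj["title"], "-", obj["link"])
```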
For the fourth year, there will be a Linked Media workshop. In 2016, it takes place during ESWC 2016 (Crete, end of May 2016). The Linked Media workshops are a unique researcher and practitioner meeting for those working in the innovative area of media description using Linked Data concepts and the re-use of those descriptions and the Linked Data graph to build new services and applications in media retrieval, browsing, enrichment and linking. While the LinkedTV project has contributed greatly to this area, we know there is much more work to be done! Your papers and demos are invited; the deadline is March 11.
More information at the LiME 2016 Workshop webpage.
Congratulations to José Luis Redondo-García, Giuseppe Rizzo and Raphaël Troncy (all from LinkedTV partner EURECOM), who won the best paper award with their work on “The Concentric Nature of News Semantic Snapshots: Knowledge Extraction for Semantic Annotation of News Items”.
Single news articles on the Web often give a limited picture of the story being reported, yet there is sufficient information available on the Web to enrich the article and provide a broader picture of that story. The authors propose a concentric approach to representing the context of a news item. Representative entities are collected via named entity recognition and entity expansion. These entities are then harmonised into a single model and arranged along different dimensions such as frequency (the core), informativeness, semantic connectivity and popularity (the crust).
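As a rough illustration of the concentric idea – and emphatically not the authors' implementation – the toy Python snippet below puts the most frequently recognised entities in the core and treats entities obtained via expansion as the crust. The entity lists, the core size and the ordering choices are invented for the example.

```python
from collections import Counter


def semantic_snapshot(recognised_entities, expanded_entities, core_size=5):
    """Toy sketch of a concentric news snapshot.

    Frequent entities extracted from the news item itself form the core;
    entities found via expansion over related Web documents form the crust.
    """
    freq = Counter(recognised_entities)
    core = [entity for entity, _ in freq.most_common(core_size)]
    # Crust: expansion entities not already in the core (input order kept here;
    # the paper ranks them along further dimensions such as popularity).
    crust = [entity for entity in expanded_entities if entity not in core]
    return {"core": core, "crust": crust}


# Invented example data for illustration.
mentions = ["Merkel", "Berlin", "Merkel", "EU", "Merkel", "Berlin", "Greece"]
expansion = ["Tsipras", "Eurozone", "Greece", "Brussels"]
print(semantic_snapshot(mentions, expansion, core_size=3))
```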
The work in the paper draws on LinkedTV R&D in semantically enriching news broadcasts with related entities and news articles from the Web, which was used in the scenario and demonstrator called LinkedNews. The technology for that scenario is part open source, part commercial – see the list of LinkedTV tools & services as well as the LinkedNews showcase!
Paper description courtesy of the KCAP 2015 trip report by Martine de Vos.
We would like to announce the availability of a new online video analysis service with an interactive user interface, developed by LinkedTV partner CERTH-ITI. This web-based service lets you upload videos in various formats, runs visual analysis algorithms on them (shot segmentation, scene segmentation and visual concept detection), and allows you to navigate through the analysis results with the help of an interactive user interface.
The service supports videos in mp4, webm, avi, mov, wmv, ogv, mpg, flv and mkv format. A few hundred visual concept detectors are evaluated for each video shot. The complete service runs at high speed (i.e., analysis is several times faster than real-time video processing). The results interface allows viewing the video structure (shots, scenes), viewing the concept detection results for each shot, and searching by concept within the set of detected shots.
You can access our web service and try it on your own videos at: http://multimedia2.iti.gr/onlinevideoanalysis/service/start.html
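For those wondering what shot segmentation involves, the short Python/OpenCV sketch below illustrates the basic principle: detecting cuts where the colour histograms of consecutive frames diverge. It is a simplified stand-in, not the CERTH-ITI algorithm behind the service, and the threshold and file name are placeholders.

```python
import cv2


def detect_shot_boundaries(video_path, threshold=0.5):
    """Rough shot-boundary detection via colour-histogram comparison
    between consecutive frames (illustrative only)."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Low correlation between consecutive histograms suggests a hard cut.
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                boundaries.append(frame_idx)
        prev_hist, frame_idx = hist, frame_idx + 1
    cap.release()
    return boundaries


print(detect_shot_boundaries("example.mp4"))  # placeholder file name
```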
LinkedTV is pleased to participate as part of the 2nd Screen CC (Content Convergence) mini-cluster in this year’s IBC Exhibition.
The mini-cluster is a collaborative initiative of EU-funded research projects focused on developing innovative solutions for the production, delivery and consumption of 2nd Screen content and ecosystems for its monetisation. It is composed of 4 leading EU projects with a combined investment of more than €20M, and the participation of leading business stakeholders and research institutions in the media area, including the BBC, Deutsche Welle, Fraunhofer IAIS, Huawei, RAI, RBB, Surrey University, TPVision and NEC, to name just a few.
Our customer code will allow you to register for the IBC Exhibition for FREE, even though early-bird registration has closed. With your registration pass, you will also receive a travel pass for the show, which you can pick up from Information Points onsite. Just register through https://ibc.itnint.com/IBC15/RegOnline/CreateAccount.aspx using the code 20755.
You can then locate us in the Future Zone (Hall 8) at booth 8.F03 and learn more about LinkedTV.
Find below a description of the demonstrators of the different projects:
See you in Amsterdam!!!
Original announcement at the mini-cluster page.
16 July 2015
42 months of R&D in an international EU-funded project covering media analysis, annotation, linking, personalisation and UIs, a software architecture for all of this on server and client side, and three distinct scenarios covering the domains of News, Cultural Heritage and Media Arts… how to explain the results of all of this?
LinkedTV is pleased to publish its final project report publicly online; it covers the technical outcomes of all the project work packages across the scientific research areas mentioned above. In summarised form, the reader can find the main innovations and outputs achieved by the project partners in each topic and, to get at the technology itself, can check out the individual software and services (much of it open source) or the LinkedTV Showcase to enquire about a commercial solution.
Given the increasing business interest in the LinkedTV technologies and the need to conform to industry standards, we are pleased to announce on behalf of the media analysis partner CERTH-ITI that the LinkedTV media analysis service, which during the project output the results of all the individual media analysis algorithms* in an aggregated format known as Exmaralda (EXB), will now additionally export analysis results in the EBU/ISO MPEG-7 AVDP (Audio Visual Description Profile) format. The LinkedTV Platform now also stores media analysis results in this format. More information about the MPEG-7 AVDP format:
* The LinkedTV aggregated media analysis service, available for external use, covers shot and scene segmentation, concept detection, object re-detection, keyword extraction, ASR and face detection, according to client configuration.
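To give a feel for what an MPEG-7 temporal decomposition looks like, here is a minimal Python sketch that serialises detected shots as VideoSegments. It follows the general MPEG-7 structure only; a conforming AVDP export contains further mandatory descriptors that are omitted here, and the element and namespace choices are illustrative assumptions rather than the service's actual output.

```python
import xml.etree.ElementTree as ET

MPEG7_NS = "urn:mpeg:mpeg7:schema:2004"  # namespace used by recent MPEG-7 profiles
XSI_NS = "http://www.w3.org/2001/XMLSchema-instance"


def shots_to_mpeg7(shots):
    """Build a minimal MPEG-7-style document describing shots as VideoSegments
    inside a TemporalDecomposition.

    `shots` is a list of (start, duration) strings in MPEG-7 media time
    notation, e.g. ("T00:00:00", "PT5S").
    """
    ET.register_namespace("", MPEG7_NS)
    ET.register_namespace("xsi", XSI_NS)
    root = ET.Element(f"{{{MPEG7_NS}}}Mpeg7")
    desc = ET.SubElement(root, f"{{{MPEG7_NS}}}Description",
                         {f"{{{XSI_NS}}}type": "ContentEntityType"})
    content = ET.SubElement(desc, f"{{{MPEG7_NS}}}MultimediaContent",
                            {f"{{{XSI_NS}}}type": "VideoType"})
    video = ET.SubElement(content, f"{{{MPEG7_NS}}}Video")
    decomp = ET.SubElement(video, f"{{{MPEG7_NS}}}TemporalDecomposition")
    for i, (start, duration) in enumerate(shots):
        segment = ET.SubElement(decomp, f"{{{MPEG7_NS}}}VideoSegment", id=f"shot_{i}")
        time = ET.SubElement(segment, f"{{{MPEG7_NS}}}MediaTime")
        ET.SubElement(time, f"{{{MPEG7_NS}}}MediaTimePoint").text = start
        ET.SubElement(time, f"{{{MPEG7_NS}}}MediaDuration").text = duration
    return ET.tostring(root, encoding="unicode")


print(shots_to_mpeg7([("T00:00:00", "PT5S"), ("T00:00:05", "PT12S")]))
```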
10 July 2015
The Mundaneum museum in Mons, Belgium, re-opened in summer 2015 with a new exhibition, “Mapping Knowledge: Understanding the World through Data”, as part of the European Capital of Culture 2015 events. LinkedTV is pleased to be present in this exhibition through the “Mundaneum Documentary”, an adapted version of the project’s ‘Social Documentary’ multimodal interactive video exhibit. The media art installation allows visitors to explore Belgian video material along different facets through physical interaction with the tabletop interface. It leads us to rethink how we consume and interact with audiovisual content outside of traditional television and streaming models.
Mundaneum Documentary can be visited this year at the Mundaneum, Mons, Belgium.
News item in French (Numediart)
We are pleased to announce an extended version of the LinkedTV tv2avd service (a.k.a. the “Aggregated multimodal media analysis service”). This new version integrates software from our partner CERTH for video scene segmentation, which is applicable to any type of content, unlike the chapter segmentation techniques developed earlier, which are strictly adapted to the analysis requirements of the LinkedTV scenarios (since they were based on algorithms that rely on the detection/re-detection of specific visual cues that appear only in the AVROTROS and RBB content).
In other words, the updated and extended version of the service provides a higher-level temporal segmentation (above the shot level) of any video, not just the material used in the project scenarios – useful now that the LinkedTV project has finished and new clients may wish to analyse other types of content (e.g. for demonstration purposes).
See a video demo explaining the existing functionalities.
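To make the shot/scene relationship concrete, the toy Python sketch below groups consecutive shots into scenes when their representative colour histograms are similar. This is only a simplified illustration of the principle of building scenes on top of shots; the CERTH scene segmentation algorithm itself is considerably more elaborate, and the similarity measure and threshold here are arbitrary choices.

```python
import numpy as np


def group_shots_into_scenes(shot_histograms, similarity_threshold=0.7):
    """Group consecutive shots into scenes by histogram similarity (toy sketch).

    `shot_histograms` is a list of 1-D numpy arrays, one representative
    colour histogram per shot; the return value is a list of scenes, each
    a list of shot indices.
    """
    scenes, current = [], [0]
    for i in range(1, len(shot_histograms)):
        a, b = shot_histograms[i - 1], shot_histograms[i]
        # Cosine similarity between consecutive shot histograms.
        sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
        if sim >= similarity_threshold:
            current.append(i)       # same scene continues
        else:
            scenes.append(current)  # scene boundary between shots i-1 and i
            current = [i]
    scenes.append(current)
    return scenes


# Invented histograms: two similar shots, then a visually different one.
hists = [np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.0]), np.array([0.0, 0.1, 0.9])]
print(group_shots_into_scenes(hists))  # e.g. [[0, 1], [2]]
```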
With the LinkedTV project over, our tools and services – many of which are open source – can also be applied in other new media and data contexts. LinkedTV partners will be happy to support you in tool set-up or access, according to the respective licenses and terms of use. Discover in our list: