Posted by Ben Clark
First up: Zoe, Tristan, Mathieu and the New News Team won the BBC News Award for “Best Digital innovation” for the New News project! But what have the rest of IRFS been up to?
The Stories team have been thinking about how to make better use of the assets created for The Inspection Chamber. They want to expose the content to smart speakers from different vendors (such as Google Home and Amazon Alexa) through a single API, rather than exporting the content into each vendor's ecosystem, and are using this as a way of thinking about standards and standardisation for experiences which use voice interfaces.
Also in the Stories team, Henry and our students from Queen Mary University of London have continued work on a new audio augmented reality prototype using Bose Frames. The prototype is intended to be used outside in a wide open space. Henry has been learning Unity so he can use data from the accelerometers built into the frames to detect when users move small distances. The students have been working on the narrative for their prototype, settling on a multi-person experience built around mini-games.
Finally, the team have been user-testing their next voice experience for mobile phones and smart speakers in the usability testing lab, and are planning a film describing how they made it. They had a visit from the fanSHEN theatre company, and Nicky gave a talk at the TVX2019 workshop on Interactive Radio Experiences.
Over in the Internet team, Kristine has been playing with an application based on Databox. Databox is a "Privacy-Aware Data Analytics Platform" which gives users control over their private data. Kristine is trying to build an application which will use your friends' playlists to expand your music recommendations.
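The recommendation idea can be sketched in a few lines. This is a minimal illustration, not Kristine's actual Databox application: the function name and the track identifiers are made up, and real playlists would come from the Databox platform rather than plain lists.

```python
from collections import Counter

def recommend_from_friends(my_tracks, friends_playlists, n=5):
    """Suggest tracks that appear in friends' playlists but not our own,
    ranked by how many friends have them (hypothetical helper)."""
    mine = set(my_tracks)
    counts = Counter(
        track
        for playlist in friends_playlists
        for track in set(playlist)  # count each friend at most once per track
        if track not in mine
    )
    return [track for track, _ in counts.most_common(n)]

# Made-up track identifiers, purely for illustration.
mine = ["a", "b"]
friends = [["b", "c", "d"], ["c", "d"], ["c", "e"]]
print(recommend_from_friends(mine, friends, n=2))  # → ['c', 'd']
```

In a privacy-aware setting like Databox, the appeal is that this computation can run locally over data the friends have chosen to share, rather than shipping everyone's listening history to a central service.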
Chris attended the 8th FOKUS Media Web Symposium, organised by Fraunhofer in Berlin, on behalf of the W3C. He ran a joint session between the W3C and DASH-IF and presented the latest W3C work in the media space, including low-latency streaming, Media Capabilities, the Picture-in-Picture API and the Media Session API.
In the Data team, preparations are afoot for the hackday with BBC Datalab on 3rd July. Alex and Matt have been improving bbc-data, a Python package for sharing and distributing datasets. Misa has been preparing a talk on visualising convolutional neural networks. Matt and Ben have been refactoring the content analysis toolkit code to use Python 3 and Ubuntu 18.04 (Bionic). Chris has extended the Vox text classifier to handle regression as well as classification.
Alex has also improved the performance of our Speaker ID tool by taking a majority vote across utterances from the same speaker, which improves results particularly for longer programmes. Ollie has been using SyncNet to detect when people are speaking in video, to help build an automated pipeline for creating Speaker ID training data. Denise has evaluated an algorithm that detects when a speaker is talking based on lip movement (from the paper “‘Hello! My name is... Buffy’ – Automatic Naming of Characters in TV Video” by Everingham et al.).
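The majority-vote step can be sketched as follows. This is an illustrative reconstruction, not the team's actual code: it assumes per-utterance predictions arrive as (cluster, predicted speaker) pairs, where a cluster groups utterances believed to come from one person, and collapses each cluster to its most frequent label.

```python
from collections import Counter, defaultdict

def majority_vote(per_utterance):
    """per_utterance: iterable of (cluster_id, predicted_speaker) pairs,
    one per utterance. Returns one speaker label per cluster, chosen by
    majority vote over that cluster's utterances."""
    votes = defaultdict(Counter)
    for cluster, speaker in per_utterance:
        votes[cluster][speaker] += 1
    return {cluster: counts.most_common(1)[0][0]
            for cluster, counts in votes.items()}

# Hypothetical predictions: one noisy 'bob' gets outvoted.
preds = [("c1", "alice"), ("c1", "alice"), ("c1", "bob"), ("c2", "carol")]
print(majority_vote(preds))  # → {'c1': 'alice', 'c2': 'carol'}
```

The intuition matches the observation about longer programmes: the more utterances a speaker has, the more likely occasional per-utterance misclassifications are outvoted.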
Mathieu and Andrew have redesigned the search interface for the Content Analysis Toolkit demo. Holly's user research found that the original search controls were not intuitive: users expected to search for relevant content by typing into a search box. The redesigned interface has a single search box with autocomplete, which suggests people and entities appearing in the media. Holly has also been interviewing radio programme makers and showing them the demo.
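The autocomplete behaviour described above amounts to matching the typed prefix against a known list of people and entities. A minimal sketch, assuming a simple case-insensitive prefix match (the entity names below are invented and the demo's real suggestion backend is not shown here):

```python
def autocomplete(entities, prefix, limit=5):
    """Return up to `limit` known entities whose names start with `prefix`,
    case-insensitively, in alphabetical order (illustrative only)."""
    p = prefix.lower()
    return sorted(e for e in entities if e.lower().startswith(p))[:limit]

# Made-up entity list for illustration.
entities = ["David Attenborough", "David Dimbleby", "Doctor Who",
            "Desert Island Discs"]
print(autocomplete(entities, "da"))  # → ['David Attenborough', 'David Dimbleby']
```

A production interface would typically rank suggestions by relevance or frequency in the archive rather than alphabetically, and might match mid-word as well as at the start.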
This post is part of the Internet Research and Future Services section