Posted by Libby Miller on
The latest sprint notes from the IRFS team: a spot sound effects workshop, good feedback for Tellybox, Newnews and The Next Episode, a new project on Text Mining, and much more.
For Emotional Machines, Emma ran a very enjoyable and interesting workshop in Maida Vale’s radio drama studio on sound and emotions, using the studio's "Spot Sound" (Foley) tools. Al Overdrive also ran a very useful workshop for us on colours and emotions. Together with David McGoran’s workshop on gestures and robotics, and the data from her survey, Emma is now ready to synthesise all this information into a prototype.
In our other graduate trainee project, Smart Speaker Pairing, Oscar started working on a core implementation of pairing based on a numeric code.
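As a rough illustration of what numeric-code pairing involves, here is a minimal sketch in Python. The class and function names are hypothetical, not Oscar's actual implementation: the speaker announces a short code, and the companion device completes pairing by confirming it.

```python
import secrets

def generate_pairing_code(digits=4):
    """Generate a zero-padded numeric pairing code, e.g. '0482'."""
    return str(secrets.randbelow(10 ** digits)).zfill(digits)

class PairingSession:
    """Hypothetical flow: the speaker displays or speaks a code,
    and the companion device confirms it to complete pairing."""

    def __init__(self, digits=4):
        self.code = generate_pairing_code(digits)
        self.paired = False

    def confirm(self, entered_code):
        # compare_digest avoids timing side-channels on the comparison
        self.paired = secrets.compare_digest(self.code, entered_code)
        return self.paired

session = PairingSession()
print(session.confirm(session.code))  # True when the entered code matches
```

Using `secrets` rather than `random` matters even for short codes, since the code is the only thing authenticating the pairing attempt.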
Oscar also attended BVE (Broadcast Video Expo) with the other graduates, where he learned about different fibre connectors, talked to the people at Vidrovr, who are doing similar things to the IRFS Data team, and attended talks on machine learning and on attracting younger audiences with eSports.
Alicia has also been working on a React-based demo for the W3C Presentation API, as part of our Standards work.
Tellybox was also featured in the Times!
For Talking With Machines, the initial prototype for The Next Episode was finalised and trialled in a pilot user test with students from the National Film and Television School. Their reaction was overwhelmingly positive, and they gave some great feedback on what we should change before the usability test.
Newnews got results back from some remote user testing of the live Incremental pilot, via UserZoom. There were 10 participants, each of whom looked at 3 stories, and the results were very positive: 90% said they now had a better understanding of the story, 93% of responses said it was a good way of explaining the story, and all participants wanted to see more stories like this.
Over in the Discovery team, Chris and Andrew have been discussing a new project on Text Mining with Professor Sophia Ananiadou and EPSRC Fellow Chrysoula Zerva at the University of Manchester, School of Computer Science, National Centre for Text Mining. The project is part of our Data Science Research Partnership and will study the relationship between the interpretation of scientific publications concerning health and the subsequent news coverage.
Meanwhile, Chris has been collating all the BBC's editorial segmentation data and attempting to marry it with the corresponding TV subtitles or speech-to-text transcripts from radio. The goal is to create a substantial dataset that can be used to develop and evaluate segmentation algorithms for text streams.
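The core of that marrying step is a time-overlap join between editorial segment boundaries and timed subtitle cues. Here is a minimal sketch, assuming simple dictionaries with `start`/`end` timestamps in seconds; the field names and data shapes are illustrative, not the actual BBC data model.

```python
def overlaps(seg, cue):
    """True if a subtitle cue's time range intersects a segment's."""
    return cue["start"] < seg["end"] and cue["end"] > seg["start"]

def text_for_segments(segments, cues):
    """Attach to each segment the concatenated text of overlapping cues."""
    return [
        {**seg, "text": " ".join(c["text"] for c in cues if overlaps(seg, c))}
        for seg in segments
    ]

segments = [
    {"id": "s1", "start": 0.0, "end": 60.0},
    {"id": "s2", "start": 60.0, "end": 120.0},
]
cues = [
    {"start": 2.0, "end": 5.0, "text": "Hello and welcome."},
    {"start": 61.0, "end": 64.0, "text": "Now the weather."},
]
print(text_for_segments(segments, cues))
```

In practice, cues that straddle a segment boundary need a policy (split, or assign to the segment with the larger overlap), which is where most of the real alignment work lies.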
This sprint the Data team (especially Mathieu and Andrew) have been working on their demonstrator application to show how the tools they've been working on can help production staff find people in audio and video content. Matt and Misa have been looking at improving the Voice Activity Detection by training with a larger dataset of speech and noise. Ben and Alexandros have started to look at new speaker change detection methods. Denise is working on detecting people speaking in video. Ollie has been doing some machine learning courses and has been getting to grips with SyncNet.
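For context on the Voice Activity Detection work: before trained models, the classical baseline is a short-term energy threshold, which labels a frame as speech when its energy exceeds a floor. This is a sketch of that baseline only, not the team's actual method, and the frame length and threshold values are arbitrary assumptions.

```python
import math

def frame_energies(samples, frame_len=160):
    """Mean squared energy per non-overlapping frame."""
    return [
        sum(x * x for x in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def energy_vad(samples, frame_len=160, threshold=0.01):
    """Label each frame as speech (True) or non-speech (False)."""
    return [e > threshold for e in frame_energies(samples, frame_len)]

# Synthetic signal: 320 samples of silence, then a louder 440 Hz tone
silence = [0.0] * 320
tone = [0.5 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(320)]
print(energy_vad(silence + tone))  # → [False, False, True, True]
```

Energy thresholds fall over in real broadcast audio (music and background noise also carry energy), which is exactly why training on a larger dataset of speech and noise, as Matt and Misa are doing, is the better approach.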
This post is part of the Internet Research and Future Services section