
Updates from the last two weeks of work in the Internet Research & Future Services team of BBC R&D

A "cultural probe", in the form of a postcard, about memories

Voices and audio

As reported last time, we're developing some synthetic voices to learn which ones people prefer. This sprint was mostly project planning: with a deadline of December, it's time to go from a technology exploration to making the thing. We've been thinking about which accents would be representative of the UK - probably about 7 regions with a few different voices in each, so maybe 20 synthetic voices in all. We've started recording the voices and accents we need using a makeshift recording studio; each voice takes about 2-3 hours of reading a specific text. So far Andrew and Libby have been recorded and synthesised.

Tom and Henry, after testing version 1 with some NFTS students, have been planning the next phase of "Looking for Nigel" - an outdoor audio augmented-reality (AR) story experience. We're also aiming to refine the "Not a Robot" AR experience, developed by our QMUL interns. Rosie has started her PhD placement and Rishi's time with us concluded with a talk on his audio archive explorer.

Machine learning and algorithms

The team have reached three milestones in their planned work. First, a new neural-net architecture for Kaldi that makes it twice as fast, three times more memory-efficient and more accurate than competing systems - the team deservedly treated themselves to a cake. The second milestone was a minimum viable product of an evaluation dashboard, and the third was porting the package to Python.

Alex has been working on the synthetic voice work described above - he's tidied up the git repository and documented the work. And Ollie, who's just finished his placement with us, is off on a mini-placement with the News team in Delhi.

Our QMUL interns, Saba and Taner, have been exploring multimodal recommendations (combining different techniques to make better recommendations), including extracting text features from programme subtitles - there's a rough sketch of the idea below. The team is also planning a hack week on recommendations and micro-genres. Chris has created a Welsh version of our Starfruit text tagging system, but it's lacking training data, so he's investigating that problem; if he can solve it, we could then tackle other minority languages.
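
To give a feel for the subtitle-feature idea, here's a minimal sketch of one way text features could be blended with a second modality for recommendations. Everything in it - the programme names, subtitle snippets, the made-up "audio" feature vectors and the blending weight - is invented for illustration, and it isn't the pipeline Saba and Taner are actually building.

```python
# Sketch only: blend subtitle text features with a second modality
# to rank "similar programmes". All data below is made up.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy subtitle text for three programmes (stand-ins for real subtitle files)
subtitles = {
    "prog_a": "tonight we investigate the history of the railways",
    "prog_b": "a gentle look at gardens and wildlife in spring",
    "prog_c": "the railways transformed victorian britain forever",
}
titles = list(subtitles)

# Text modality: TF-IDF vectors over the subtitle text
tfidf = TfidfVectorizer(stop_words="english")
text_matrix = tfidf.fit_transform(subtitles[t] for t in titles)
text_sim = cosine_similarity(text_matrix)

# Second modality: pretend per-programme feature vectors (e.g. from audio)
audio_features = np.array([
    [0.9, 0.1],
    [0.2, 0.8],
    [0.8, 0.2],
])
audio_sim = cosine_similarity(audio_features)

# Multimodal score: a simple weighted blend of the two similarity matrices
alpha = 0.6
combined = alpha * text_sim + (1 - alpha) * audio_sim

# Recommend the most similar other programme for each title
for i, title in enumerate(titles):
    scores = combined[i].copy()
    scores[i] = -1  # don't recommend the programme itself
    print(title, "->", titles[int(scores.argmax())])
```

In practice the second modality might be audio embeddings, viewing behaviour or image features, and the blend would be learned or tuned rather than fixed.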

Alicia and Kristine have been working on playlists - using machine learning to extract features that might make musical sense, like loudness, dissonance, tempo or key. They're now building a demo to show what they've done.
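
For the curious, this is roughly what that kind of feature extraction looks like in code - a hedged sketch using the open-source librosa library on a synthetic test tone, not the team's actual pipeline. Real programme audio would be loaded from file, and dissonance needs a more specialised model than anything shown here.

```python
# Sketch only: extract playlist-style features (loudness, tempo,
# a crude key estimate) from audio. Runs on a synthetic tone so it
# needs no audio files; real audio would come from librosa.load(path).
import numpy as np
import librosa

sr = 22050
# Stand-in audio: a few seconds of an A4 tone plus a little noise
y = librosa.tone(440.0, sr=sr, duration=5.0) + 0.01 * np.random.randn(5 * sr)

# Loudness: frame-wise RMS energy, summarised in decibels
rms = librosa.feature.rms(y=y)[0]
loudness_db = float(np.mean(librosa.amplitude_to_db(rms, ref=1.0)))

# Tempo: beat tracking (not meaningful on a steady tone, fine on music)
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
tempo_bpm = float(np.atleast_1d(tempo)[0])

# Key: a very crude estimate - the pitch class with the strongest average chroma
chroma = librosa.feature.chroma_stft(y=y, sr=sr)
pitch_classes = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
key_guess = pitch_classes[int(np.argmax(chroma.mean(axis=1)))]

# Dissonance would need a dedicated roughness model, so it's left out here.
print({"loudness_db": round(loudness_db, 1),
       "tempo_bpm": round(tempo_bpm, 1),
       "key_guess": key_guess})
```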

Elsewhere

David and Holly are designing cultural probes - activities and questions that people can respond to - for a diary study on memories with 18-25 year-olds. One of the postcards they've designed for this is shown at the top of this post. This sprint we were also supposed to be working on fact-checking, but the timing slipped a little; Libby has been contacting BBC journalists and we're interviewing them next week about what they might need from a fact-checking tool.

Lastly, at this week's team meeting Henry reflected on his visit to Ars Electronica, a festival of art, culture, technology and society in Linz, Austria. It was much bigger than he expected - "4 floors of art, with a giant art bunker underneath". Its theme was "the midlife crisis of the digital revolution". He reported that there were some good AR installations, lots of wibbly-wobbly GAN videos, swarming robots and immersive sound. Overall it was a good spread of technology R&D and art.

Henry at Ars Electronica

Recommended reading

An interactive murder mystery, but you have to learn SQL to solve it
How men and women use FixMyStreet differently
"How can we develop transformative tools for thought?" is a great essay on tools for thinking and a new mnemonic format for learning.

