Posted by Tristan Ferne

These are weekly notes from the Internet Research & Future Services team in BBC R&D where we share what we do. We work in the open, using technology and design to make new things on the internet. You can follow us on Twitter at @bbcirfs.

This week we watched Tony Hall's speech on the future of the BBC. It was interesting to see that some of R&D's work might have influenced it - from early inspirations for Playlister, to personalisation and user data, to coding and hardware.

A poster showing how the World Service prototype creates and improves metadata

We kicked off a new collaborative project, MediaScape. Chris Needham, Sean, Libby and Chris G travelled to the first meeting. We'll be developing multi-device media applications on the Web, with our tech work focusing on device discovery, pairing, user authentication and synchronisation.

On Comma, Matt H, James and Chris Needham are using the Amazon SWF API to prototype a distributed media processing workflow. James has been getting a dummy algorithm to run through it. And Yves has been improving his speech recognition system, using new acoustic models from our partners in the Natural Speech Technology project to bring word error rates down from 50% to 25%.
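Word error rate, the metric behind those 50% and 25% figures, is the word-level edit distance between the recogniser's output and a reference transcript, divided by the length of the reference. A minimal sketch of the standard calculation (illustrative only, not Yves's actual evaluation code):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with a standard Levenshtein edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on a hat"))
# → 0.3333... (2 errors over 6 reference words)
```

Note that WER can exceed 100% if the recogniser inserts many spurious words, which is why early systems on noisy archive audio can look so bad.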

Andrew N, Gareth and team have been sketching out a set of tech prototypes for a second-screen application needed for VistaTV.

We're very close to releasing our library for audio waveforms in HTML. Chris F, Thomas, and Chris Needham have been getting everything ready. And Andrew N released Frankenpins, a small open source library that makes it easy to program physical buttons on the Raspberry Pi. Gareth has been extending it to virtual buttons.


  • Denise has been reducing dimensions of a mood dataset.
  • Barbara has been trying to get an ad in a national newspaper.
  • Rob has been setting up projects with UCL students to develop the aforementioned waveform library.
  • Tom showed us how he's scaled speaker recognition tech to the whole of the World Service archive. It's got 54% recall (for a given person speaking it finds about half of all the programmes they were in) and 95% precision (where it does identify a person speaking in a programme it's correct almost all of the time).
  • Michael's been meeting people about people. And drama (with Zillah).
  • Zillah's been prototyping an archive exploration with TouchCast and preparing to run a panel.
  • Olivier's been at the Paris Web conference, speaking about "The Rusty Web". He's been working on the presentation with his co-presenter in the open and in French.
  • Yves has been preparing for the International Semantic Web Conference where he's talking and running a workshop.
  • Chris Newell, Gareth and Dominic have been demonstrating things around the BBC.
  • Sean and the EBU's Cross Platform Authentication group are assessing authentication technologies including WebID, Persona and OpenID Connect.
  • Matt H and Yves were somewhat reluctantly filmed describing the tools they're contributing to this week's News Hack event.
  • Matt H was also trained in DVB and seems to have volunteered to build a Freeview box. Jana's learning about Hadoop admin and Matt P now knows more about Ceph storage.
  • And finally Andrew N and Dan have been inducted into UCL's Institute of Making. Here's a video of the institute making something from a cuttlefish.
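Tom's precision and recall figures measure complementary things, as his parenthetical glosses suggest. A toy illustration with made-up programme IDs (not real archive data), using the standard definitions:

```python
# Speaker retrieval for one person, with hypothetical programme IDs.
actual = {"p1", "p2", "p3", "p4"}   # programmes the speaker really appears in
retrieved = {"p1", "p2", "p9"}      # programmes the system flagged for them

true_positives = actual & retrieved
precision = len(true_positives) / len(retrieved)  # when it flags, how often is it right?
recall = len(true_positives) / len(actual)        # how much of the truth does it find?

print(precision, recall)  # → 0.666..., 0.5
```

At Tom's reported 95% precision and 54% recall, the system misses roughly half of a speaker's programmes but is almost never wrong about the ones it does return - a sensible trade-off for surfacing archive content.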


Some links:

(from Andrew N) "I loved this physical Lego calendar that uses computer vision to sync with a digital counterpart. A "flip-flop" perhaps?"

(also from Andrew N) "A good read about the shift to a design-led Google in How Google Taught Itself Good Design"

Yves found Pocket Sphinx - speech recognition in the browser and Ruby Band - data mining and machine learning in JRuby.

And I was reminded of Science/Engineering/Art/Design. As Joi says, there's a whole book on this by Rich Gold.