BBC R&D

Posted by Matt Haynes

The Experiences team have been user testing their new prototype, "The Next Episode"; the Discovery team continues its work on recommender systems; and the Data team is testing a new speaker diarization algorithm.

Talking with Machines

After much preparation and a series of internal pilot tests, it was time for the external user testing of The Next Episode. Nicky, Ant, Andrew, Henry, and Oscar spent two days putting the prototype through its paces with a range of participants from the intended audience. Initial feedback was positive, and the sessions also drew our attention to plenty of future improvements and possible changes that could further the experience.

Audio AR

Henry, Tim, Kristine and Emma attended a performance of Berberian Sound Studio and experienced the powerful and immersive sound design created by a team we’re working with on our early Audio AR experiments…

Recommendations

The team has been focused primarily on recommendations:

  • Kristine has been doing background research on different approaches to music recommendations and how the algorithms can be shaped by users
  • Chris revisited earlier work on A/V recommendations to explore how we could build hybrid collaborative / metadata-based models (a rough sketch of the idea follows this list)
  • David met with the iPlayer recommendations team to discuss their findings on user research
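
As a loose illustration of what a hybrid recommender means in practice, here is a minimal sketch that blends a collaborative-filtering score with a metadata (genre) similarity score. Everything in it is made up for illustration: the toy factors, the genre tags and the `alpha` weight are assumptions, not taken from the team's actual models.

```python
import numpy as np

# Toy data: user/item latent factors stand in for a trained
# collaborative-filtering model; one-hot genre tags stand in for metadata.
rng = np.random.default_rng(0)
user_factors = rng.normal(size=(5, 8))     # 5 users, 8 latent dims
item_factors = rng.normal(size=(10, 8))    # 10 items
item_genres = rng.integers(0, 2, size=(10, 4)).astype(float)  # 4 genre tags

def cf_scores(user_id):
    """Collaborative score: dot product of user and item factors."""
    return item_factors @ user_factors[user_id]

def metadata_scores(watched):
    """Metadata score: cosine similarity between each item's genres and
    the mean genre profile of the items the user has already watched."""
    profile = item_genres[watched].mean(axis=0)
    norms = np.linalg.norm(item_genres, axis=1) * np.linalg.norm(profile)
    return (item_genres @ profile) / np.where(norms == 0, 1, norms)

def hybrid_recommend(user_id, watched, alpha=0.7, k=3):
    """Blend the two signals; alpha weights collaborative vs metadata."""
    scores = alpha * cf_scores(user_id) + (1 - alpha) * metadata_scores(watched)
    scores[watched] = -np.inf  # never re-recommend items already seen
    return np.argsort(scores)[::-1][:k]

print(hybrid_recommend(user_id=0, watched=[1, 4]))
```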

Speaker Identification and Segmentation

During this sprint, the Data team were joined by Holly, our new Senior UX Researcher. Holly is joining us from the Government Digital Service.

Ben has been testing Google’s Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) as an online speaker diarization system, comparing its performance to the current LIUM segmenter and the more recent Kaldi x-vectors implementation.
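
For a flavour of what driving UIS-RNN looks like, here is a minimal sketch following the public API of Google's open-source uis-rnn library (https://github.com/google/uis-rnn). The random arrays stand in for d-vector speaker embeddings, which in a real pipeline come from a separately trained embedding network; this shows the shape of the API, not the setup Ben is evaluating.

```python
import numpy as np
import uisrnn

# Library-provided argument parsing gives model/training/inference settings.
model_args, training_args, inference_args = uisrnn.parse_arguments()
training_args.train_iteration = 100  # keep this toy run short
model = uisrnn.UISRNN(model_args)

# Training: one continuous sequence of embeddings with per-frame
# speaker labels (cluster IDs). Random data here, purely illustrative.
train_sequence = np.random.rand(1000, model_args.observation_dim)
train_cluster_id = np.array(['A'] * 400 + ['B'] * 350 + ['A'] * 250)
model.fit(train_sequence, train_cluster_id, training_args)

# Inference: the model emits a speaker label per embedding, creating new
# speakers on the fly, which is what makes it usable as an online system.
test_sequence = np.random.rand(200, model_args.observation_dim)
predicted_labels = model.predict(test_sequence, inference_args)
print(predicted_labels[:10])
```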

Misa and Alexandros are looking into an updated version of the VGGVox neural network for the speaker identification and discrimination stage of the system. They are exploring the use of “utterance-level embeddings” as described in the VoxCeleb2 paper from our friends at Oxford University.
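
The idea behind utterance-level embeddings is to pool a variable-length sequence of frame-level features into one fixed-size vector per utterance, so two utterances can be compared directly. Below is a rough sketch of that comparison step: mean pooling stands in for the pooling layer inside the VGGVox-style network, and the threshold is an arbitrary illustrative value.

```python
import numpy as np

def utterance_embedding(frame_features):
    """Collapse frame-level features (T x D) into one fixed-size
    utterance-level embedding by average pooling, then L2-normalise.
    In the real system this aggregation happens inside the network."""
    emb = frame_features.mean(axis=0)
    return emb / np.linalg.norm(emb)

def same_speaker(features_a, features_b, threshold=0.75):
    """Verification: cosine similarity between the two utterance
    embeddings, thresholded. Returns (similarity, decision)."""
    a = utterance_embedding(features_a)
    b = utterance_embedding(features_b)
    return float(a @ b), float(a @ b) >= threshold

rng = np.random.default_rng(1)
utt1 = rng.normal(size=(300, 512))  # e.g. 3s of frames, 512-dim features
utt2 = rng.normal(size=(250, 512))
print(same_speaker(utt1, utt2))
```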

Tellybox

Libby has been putting together a page of press coverage and links for further Tellybox promotion.

Standards

Chris is preparing for the upcoming W3C Advisory Committee meeting, and planning next year’s standards budget.

With Chris’s help, Alicia created a React component which uses the Presentation API web standard to display web content from a browser on devices such as projectors and televisions via Chromecast. At the moment the prototype only displays a static HTML page, but the next step is to enable interaction by modifying the HTML from the browser. Chris sent some feedback on the Presentation API spec.

Graduate Projects

Emma has been creating graphs explaining the results of the Voice survey. She and Libby got a basic ‘voice robot’ working, self-contained on a Raspberry Pi using TensorFlow, rasa_nlu and Radiodan-neue, based on work by Anthony.
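
For a flavour of how the rasa_nlu piece of such a robot fits in, here is a minimal sketch of the intent-parsing step, assuming a model has already been trained and persisted to disk. The model path and transcript are hypothetical, and the speech-to-text (TensorFlow) and playback (Radiodan) sides are left out.

```python
# Intent parsing on the Pi with the classic rasa_nlu package, assuming a
# trained model saved under ./models/current (hypothetical path).
from rasa_nlu.model import Interpreter

interpreter = Interpreter.load('./models/current')

# 'transcript' stands in for the output of the speech-recognition step.
transcript = "play the news on radio 4"
result = interpreter.parse(transcript)

# rasa_nlu returns a dict with the recognised intent and any entities,
# which the robot can then map onto actions (e.g. Radiodan playback).
print(result['intent']['name'], result['intent']['confidence'])
print(result['entities'])
```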

Oscar spent some time trying to get sonic pairing working, combining the backend spike with existing libraries, without success for the time being.
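
For context, sonic pairing generally means transmitting a short pairing code between nearby devices as audio tones. The sketch below is a generic illustration of that idea using simple frequency-shift keying with NumPy; the frequencies, tone length and alphabet are arbitrary choices, and this is not the approach Oscar's spike used.

```python
import numpy as np

SAMPLE_RATE = 44100
TONE_MS = 120
BASE_FREQ = 17500.0   # near-ultrasonic, unobtrusive on most speakers
STEP = 100.0          # Hz per symbol

def encode_code(code, alphabet="0123456789abcdef"):
    """Encode each character of a pairing code as a short sine tone;
    playing the buffer lets a nearby device 'hear' the code."""
    t = np.arange(int(SAMPLE_RATE * TONE_MS / 1000)) / SAMPLE_RATE
    tones = [np.sin(2 * np.pi * (BASE_FREQ + alphabet.index(c) * STEP) * t)
             for c in code]
    return np.concatenate(tones)

def decode_tone(tone):
    """Recover a symbol index by finding the dominant frequency via FFT."""
    spectrum = np.abs(np.fft.rfft(tone))
    freqs = np.fft.rfftfreq(len(tone), 1 / SAMPLE_RATE)
    peak = freqs[np.argmax(spectrum)]
    return round((peak - BASE_FREQ) / STEP)

audio = encode_code("4f2a")
n = int(SAMPLE_RATE * TONE_MS / 1000)
print([decode_tone(audio[i * n:(i + 1) * n]) for i in range(4)])
# -> [4, 15, 2, 10], the alphabet indices of '4', 'f', '2', 'a'
```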

Other stuff

  • Vino and Libby ran a fun workshop with Goldsmiths’ Interaction Design Studio (the makers of MyNatureWatch) to see if there are ways we can collaborate.
  • Tristan gave a remote talk at the OsloMet Digital Journalism Symposium (“Digital Journalism, Platforms, Professions and Coordination”) on how to prototype news with a multi-disciplinary team. He’s also been planning for various reviews of our work with R&D management.
  • Libby spent some 10% time getting a heat sensor working with MyNatureWatch.
  • Oscar, with much appreciated help from Chris, got the Whereabouts LED ring working, now showing availability for the sit-stand desks.
  • Libby arranged and ran the regular Sounds / R&D meeting.

This post is part of the Internet Research and Future Services section