Posted by Libby Miller

It's my turn to write weeknotes, so here's something about the purpose and philosophy behind two projects I run: Tellybox and Better Radio Experiences.

Tellybox is a project about on-demand TV experiences on large screens. We started by asking people why choosing what to watch can be such a dispiriting experience. Some of the reasons we found were:

  • I don't care what I watch but I want something on and I don't want to spend time choosing
  • I want to watch with someone but we tend to argue about it
  • I've got a certain amount of time and want to fill it
  • I want something to match my mood

These very different needs led to the insight that there could be many different iPlayer experiences. Alicia and I have been making working implementations, and running them on a Raspberry Pi with a remote control to emulate a set-top box. The idea of having them in a box is that they feel realistic for testing and are interesting to use, so that people want to help us evaluate them.

Better Radio Experiences is a similar idea for radio. There is no ecosystem for apps on radio devices like there is for TV sets and "sticks". The closest that currently exists is probably the Alexa skill store, but that's limited in functionality - there are many more ways of processing sound than the restricted set available on Alexa, and many ways of interfacing with devices other than voice. In Better Radio Experiences we've taken some users' needs, including:

  • I want to hear and see the lyrics so I can immerse myself in the music more fully
  • I want to skip music and other things I don't like
  • I want to hear my kind of music, but I'm also interested in the other things that artists are doing in their lives

We used these as a springboard to come up with ideas that use some of the many sound-based technologies created within R&D to meet those needs, implemented as app-like experiences on a physical device, with various interesting control and signalling interfaces.

The two projects are underpinned by four principles:

First, we start with people. We use in-depth interviews, diary studies and workshops to work out what people are really trying to do when they watch TV or listen to the radio or music, or whatever else it is we are researching, using their experiences as a springboard for ideas, and always keeping a voice on the team for those users as we develop prototypes.

Second, we make things. Ideas are cheap, but figuring out what to build is hard. Ideas embodied as physical objects give us the best, most engaged evidence about which ideas to develop further and which to discard.

Third, we evaluate many ideas. If we polish our ideas too soon and bet on just one, we risk cutting off whole branches of potential.

Fourth, we get different opinions. Showing the results to people like ourselves doesn't tell us anything we don't already know.

Testing lots of ideas as physical objects only works if we can make lots of things quickly. We're using web technologies (JavaScript, HTML) combined with the Raspberry Pi, giving us the thousands of libraries, ease of use and widely available expertise of the Web, together with the physical flexibility and large community of the Pi. It's an incredibly powerful combination.

And with that, here are the notes from the rest of IRFS.

Standards

Chris, with Jason Williams and Edd Yerburgh from D&E, attended the ECMA TC39 meeting, held at Imperial College in London. TC39 is the standardisation group that defines the ECMAScript language.

Chris has also started a discussion in the Media & Entertainment Interest Group at W3C about use cases and requirements for a new version of the Media Source Extensions spec.

Tellybox

Alicia's been improving the "Children's" app (pictured above) - a timer-based choice mechanism which suggests children's shows to fill a specific amount of time. The next step is to add playlists to the results page. She and Libby also spent an afternoon swearing at Bluetooth in an attempt to get the voice remote working consistently. It didn't work.
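As a sketch of how a timer-based picker might work - purely illustrative, in Python rather than the web stack the real app uses, with made-up programme data and names:

```python
# Illustrative sketch of a timer-based programme picker; not the actual
# Tellybox code. Greedily fills the viewer's available time, longest first.
from dataclasses import dataclass

@dataclass
class Programme:
    title: str
    minutes: int

def fill_slot(programmes, slot_minutes):
    """Pick programmes whose total running time fits the available slot."""
    playlist, remaining = [], slot_minutes
    for p in sorted(programmes, key=lambda p: p.minutes, reverse=True):
        if p.minutes <= remaining:
            playlist.append(p)
            remaining -= p.minutes
    return playlist

shows = [Programme("Show A", 20), Programme("Show B", 11), Programme("Show C", 7)]
print([p.title for p in fill_slot(shows, 30)])  # ['Show A', 'Show C']
```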

Better Radio Experiences

In the past couple of weeks we've been preparing for a hackday we're running for R&D in May - bug-fixing the platform and apps, planning, ordering hardware, and making sure people know about it.


We also held a workshop (pictured above) to make our prototypes more tactile and interesting. Richard Sewell facilitated, and Andrew N, Kate, and Joanna came along, and we were also joined by Alicia, and by Spencer Marsden from the Blueroom. The prototypes are starting to look amazing.

Talking With Machines

It's been a busy sprint on the design research side of the project: Andrew and Joanna, working with Stevey from the BBC Voice team, ran a large user testing study of the Orator GUI, interviewing 10 producers, designers and developers about their experience with the tool, the improvements that could be made, and what they'd like the whole Orator system to do in the future.

We also released the results and recommendations from The Inspection Chamber user testing study to the BBC Voice team and other interested folk around the BBC.

Autonomous Cars Media Experiences

The freelancer we hired has been conducting a user study with external participants to get a better understanding of media consumption patterns in cars, and of the current dynamics between driver and passengers in relation to media. Barbara and Joanna reviewed the draft analysis report.

Newnews

The team have been wrapping up Phase 2 (Personalisation) of the project - further iterations on a couple of the prototypes, making a demo video of the voicebot, writing a presentation for stakeholder talks, and writing a blog post for Medium about our 'combining media' phase.

We met Pietro from News Labs who showed us some of his work related to personalisation.

We discussed themes and target audience for the next phase of Newnews.

Zoe, Thomas and Mathieu attended the iDocs interactive documentary festival in Bristol.

Public Service Personalised Radio

This sprint, David, Jakub, Todd and Tim have been getting ready for the user test of our 'Public Service Personalised Radio' project, alongside Rhiannon Barrington, a UX Researcher who's joined us from the BBC's UX&D department as part of an informal 'exchange programme' we're doing. Rhiannon's been a great asset, bringing extra rigour and creativity to our research processes, and we've learnt loads from her.

Quote Attribution

Matt Spendlove has joined us for a few weeks, working on a prototype which uses Chris's Citron quote attribution tool to provide a searchable database of all the quotations cited in BBC news articles over the last few years. We're going to use this to demonstrate the value of the API for journalists and audiences - creating tools that allow people to understand the types of claims made in the news media and their provenance.
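To illustrate the "searchable database of quotations" idea - a toy inverted index in Python, not Citron's actual design or API:

```python
# Toy inverted index over extracted quotations; illustrative only.
from collections import defaultdict

quotes = [
    {"id": 1, "speaker": "Example MP", "text": "We will invest in local journalism"},
    {"id": 2, "speaker": "Another Speaker", "text": "Local journalism matters"},
]

# Map each word to the set of quote ids containing it.
index = defaultdict(set)
for q in quotes:
    for word in q["text"].lower().split():
        index[word].add(q["id"])

def search(term):
    hits = index.get(term.lower(), set())
    return [q for q in quotes if q["id"] in hits]

for q in search("journalism"):
    print(q["speaker"], "-", q["text"])
```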

Local News Hackday

Tim attended the 'Local News Data' hackday in Birmingham and worked with a team from BBC News and Marketing and Audiences to use the Citron API to build a prototype which helps audiences see what their election candidates have said on a particular topic.

Acoustic Model Neural Network

This week Ben and Misa have been writing Python bindings for the Kaldi 'Acoustic Model Neural Network', an important part of the online Kaldi speech-to-text pipeline which we described in previous weeknotes.
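For a flavour of what bindings like this involve (Kaldi itself is C++): one common route is a thin C wrapper called from Python via ctypes. The wrapper library, its function names and the dimensions below are all assumptions for the sake of illustration, not Ben and Misa's actual code.

```python
import ctypes
import numpy as np

# Hypothetical: assumes a thin C wrapper exposing the nnet3 acoustic model.
lib = ctypes.CDLL("libkaldi_am_wrapper.so")  # made-up wrapper library
lib.am_load.restype = ctypes.c_void_p
lib.am_load.argtypes = [ctypes.c_char_p]
lib.am_compute.argtypes = [
    ctypes.c_void_p,                 # model handle
    ctypes.POINTER(ctypes.c_float),  # input features, frames x dims, row-major
    ctypes.c_int, ctypes.c_int,      # num_frames, feat_dim
    ctypes.POINTER(ctypes.c_float),  # output per-frame log-likelihoods
]

model = lib.am_load(b"final.mdl")
feats = np.random.randn(100, 40).astype(np.float32)  # 100 frames of features
out = np.empty((100, 3000), dtype=np.float32)        # one row per frame
lib.am_compute(
    model,
    feats.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
    feats.shape[0], feats.shape[1],
    out.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
)
```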

Kaldi Phonetic Dictionary

Matt has been working with Elizabeth, a phonetician, who is helping us map between several different phonetic dictionaries. This will allow us to use the BBC phonetic dictionary to provide pronunciations of new words to Kaldi.
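For a flavour of what such a mapping involves, here's an illustrative fragment converting IPA-style symbols into the ARPAbet-style phones many Kaldi recipes use; the table is a made-up subset, not the actual BBC dictionary mapping.

```python
# Illustrative subset of an IPA-to-ARPAbet phone mapping; not the real table.
IPA_TO_ARPABET = {
    "iː": "IY", "ɪ": "IH", "e": "EH", "æ": "AE",
    "ɑː": "AA", "uː": "UW", "ʊ": "UH",
    "t": "T", "d": "D", "k": "K", "ʃ": "SH",
}

def convert(pron):
    """Map a pronunciation (a list of IPA symbols) to ARPAbet phones,
    failing loudly on any symbol the table doesn't cover yet."""
    return [IPA_TO_ARPABET[p] for p in pron]

print(convert(["ʃ", "iː", "t"]))  # ['SH', 'IY', 'T']
```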

Neural Network for Speaker Discrimination

Nick has trained a neural network to produce feature vectors from speech, then used support vector machines on those vectors to discriminate between speakers, with good results.
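A minimal sketch of that embeddings-plus-SVM pattern, with synthetic vectors standing in for the network's real output:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in embeddings: 200 utterances each from two speakers, 128-dim,
# with slightly shifted means so the classes are separable.
X = np.vstack([rng.normal(0.0, 1.0, (200, 128)),
               rng.normal(0.5, 1.0, (200, 128))])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("speaker accuracy:", clf.score(X_te, y_te))
```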

SVM Algorithm

Denise is processing the BBC ground-truth data to test her Multiple-Instance SVM algorithm against our content.
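For context, in multiple-instance learning the labels attach to bags of instances rather than to single examples, and a bag is usually scored by its best-scoring instance. A toy sketch of that bag-level step, with stand-in data and an ordinary linear SVM in place of Denise's algorithm:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
# Ten bags (e.g. programme segments), each holding a few 16-dim instances.
bags = [rng.normal(0, 1, (int(rng.integers(3, 8)), 16)) for _ in range(10)]
X = np.vstack(bags)
y = rng.integers(0, 2, len(X))  # stand-in instance labels

clf = LinearSVC().fit(X, y)  # stands in for the trained MI-SVM
bag_scores = [clf.decision_function(b).max() for b in bags]  # best instance wins
bag_labels = [int(s > 0) for s in bag_scores]
print(bag_labels)
```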

This post is part of the Internet Research and Future Services section