BBC R&D

Posted by Tristan Ferne on

In R&D’s Internet Research & Futures Services (IRFS) section we organise our work in 2-week sprints. At the end of the sprint, on a Thursday, we share what we’ve done with the rest of the team. We also publish a subset of this in our “weeknotes” - we think it’s important to work in the open where possible, and we’ve been doing it for over nine years! Here are our sprint notes for the 10th to the 21st of June from the three teams in our section.

Stories team

Ant’s been working on the Inspection Chamber open access proof-of-concept, starting by extracting the “intents” it uses. This work is about making an API or reference implementation so that third parties can test their voice interfaces with our landmark interactive drama.

Nicky and Barbara kicked off “Pears and Plums” (don’t ask, I’ve no idea) yesterday. It’s a research collaboration with the Voice+AI team, the BBC Science unit and Salford University. Around 90 years ago the BBC interviewed thousands of people to determine the best BBC “voice”. Now we’re thinking about what the BBC should sound like in the 21st century with voice assistants and the like. We’re planning some kind of online test to get insights into how people perceive synthetic voices. Next up is refining and planning the project and thinking about what kind of voice properties we want to explore.

Henry and Nicky had a workshop with sound designer Ben Ringham to plot out their interactive piece for audio augmented reality devices (i.e. smart headphones/hearables/EarPods and their ilk). Barbara’s been continuing her diversity and inclusion work and had a catch-up with the team working on orchestrated audio. Ant learnt Final Cut Pro on a course.

Next up for the Stories team: sourcing synthetic voices, better defining the team scope (and name), a new intern from QMUL and a new developer joining the team.

Data team

Matt, Misa and Alex are preparing for our joint hackday with the BBC Datalab team. They’ve been refactoring their tool for managing and sharing datasets and putting together some datasets, including subtitles from YouTube.

For our Content Analysis Toolkit we’ve been porting code to a new version of Ubuntu and working out how much the services will cost. Mathieu and Andrew are working on the homepage and content for the demonstrator. Denise is wrapping up work on fusing different algorithms. Ollie is evaluating SyncNet (an automated lip-sync detection algorithm) and Holly’s been interviewing radio producers on how they might use the tools, while trying to track down some sports producers to interview too.

In text analysis tools Chris has continued development of the emotional reaction classifier, learning from emoji reactions to news stories on Facebook, and it is now working reasonably well. He also made our Starfruit and Mango APIs available at the News Labs hackday.
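The core idea of the emotional reaction classifier, using reaction signals as weak labels for the text of news stories, can be sketched in miniature. This is purely illustrative: the headlines, labels and function names below are invented, and the real classifier presumably trains a proper model on far more Facebook reaction data.

```python
from collections import Counter

# Invented training data: (headline, dominant emoji reaction) pairs.
TRAINING = [
    ("community raises funds to save local library", "love"),
    ("puppy reunited with family after two years", "love"),
    ("factory closure puts hundreds of jobs at risk", "sad"),
    ("floods leave families homeless", "sad"),
    ("council approves controversial parking charges", "angry"),
    ("report finds misuse of public money", "angry"),
]

def tokenize(text):
    return text.lower().split()

# Build one bag-of-words profile per reaction label.
centroids = {}
label_counts = Counter()
for text, label in TRAINING:
    centroids.setdefault(label, Counter()).update(tokenize(text))
    label_counts[label] += 1

def classify(text):
    """Pick the label whose word profile best overlaps the input text."""
    tokens = tokenize(text)
    def score(label):
        profile = centroids[label]
        return sum(profile[t] for t in tokens) / label_counts[label]
    return max(centroids, key=score)

print(classify("storm damage leaves village families homeless"))  # "sad"
```

A real version would replace the word-overlap scoring with a trained text classifier, but the weak-supervision shape (reactions in, emotion labels out) is the same.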

The team have also been thinking about where to go next. After an ideas session and some mind mapping they have some themes, but no decisions yet. The themes include continuing speech-to-text & face/voice recognition work, more recommendations, mood and sentiment analysis, programme & audience data analysis and automatic highlight generation.

Better Internet team

The newest of the teams, and this is their first full sprint together. They all worked on the news mood filter; their goal for the sprint was to get to a really strong case and a compelling demo. It connects with several of their themes (digital wellbeing, privacy and personalisation).

Hypothesis: Because we think that young people avoid all news because it is depressing and makes them feel helpless, we think that giving them more control over what's visible to them will mean that they are more likely to read some news.

The idea came out of a hackday: in the browser extension that Alicia built you can mute words, which then hides news stories mentioning those terms. The team did desk research around mental health, news and young people, tweaked the language and design of the prototype, and Alicia, Holly, Kristine and David took it out for guerrilla user testing. Most people understood the interface but there was a mixed response to the concept, particularly around whether news should be filtered or personalised.
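The mute-words mechanism itself is simple enough to sketch. This is a minimal, hypothetical version of the filtering logic only (the muted terms, headlines and function names here are invented; the real extension works on live news pages in the browser):

```python
# Invented example muted terms a user might have chosen.
MUTED = {"crash", "attack"}

def is_visible(headline, muted_terms=MUTED):
    """A story stays visible only if none of its words is a muted term."""
    words = headline.lower().split()
    return not any(term in words for term in muted_terms)

stories = [
    "Train crash investigation continues",
    "Local bakery wins national award",
    "Heart attack risk linked to diet",
]
visible = [s for s in stories if is_visible(s)]
print(visible)  # ["Local bakery wins national award"]
```

In the extension the same check would run over story elements in the page, hiding rather than discarding the matching ones so the user can unmute terms later.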

Next is a reading and organising week; analysing the research, thinking about what could be improved and planning what’s next.

Elsewhere

Emma came back to visit us; she’s now working on BBC Box and object-based media in the Salford lab. And finally we have enough DVI cables for all our monitors.

Finally, here are some interesting things I saw on the Internet recently…

If anyone remembers our Mythology Engine or Peaky Blinders Story Explorer projects providing the how, what, who and why for TV series, this seems to be Netflix's version for one of their dramas.

Why books don’t work as well as they could for learning.

"To be sure, there are many experts who are doing important security research to make the detection of fake media easier in the future. This is important and worthwhile. But on its own, it is unclear that this would help fix the deep-seated social problem of truth decay and polarization that social media platforms have played a major role in fostering." https://www.theguardian.com/commentisfree/2019/jun/24/deepfakes-facebook-silicon-valley-responsibility

“Researchers at the University of Washington and Facebook have developed an algorithm that can “wake up” people depicted in still images (photos, drawings, paintings) and create 3D characters that can “walk out” of their images.” https://kottke.org/19/06/photo-wake-up

I’ll be having nightmares about that Picasso...

This post is part of the Internet Research and Future Services section