Research & Development

Posted by Libby Miller on

After a short hiatus, weeknotes is back!

The Internet and Society team have been working on our current theme: making machine learning understandable and controllable. As always, we're user-centred, so for each project we're trying to understand whether it's needed and, if so, for whom and for what. We think it's a very important problem, but we're not sure whether end users care, so we've run a survey to find out and are now analysing the results.

Our first project was about images. To explore that, Jess and Ben built a machine learning model and API for bird identification, based on work by Rob Dawes and his team. Alicia built an interface, designed by David with text by Tristan, that lets you upload a picture and identify the bird. The system uses Misa's FlashTorch as a component to understand what the model "sees". This was a really interesting project for exploring the nature of biased or unbalanced training datasets - ours (from iNaturalist) was largely ducks - and a good test of explaining to a non-technical person what is going on, which is complex and often unclear even to an expert.

Screenshot of a prototype for bird identification.
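For a rough sense of what FlashTorch-style saliency does: it measures how sensitive the model's class score is to each input pixel and highlights the most influential ones. The sketch below is a toy, dependency-free illustration of that idea, not the project's actual code - the linear "model", the image and the finite-difference method are all assumptions for illustration; FlashTorch computes the same gradients analytically through a real PyTorch network.

```python
def class_score(image, weights):
    """Toy stand-in for a network's score for one class: a single
    linear layer over a 2D 'image' (lists of lists of floats)."""
    return sum(p * w
               for img_row, w_row in zip(image, weights)
               for p, w in zip(img_row, w_row))

def saliency_map(image, weights, eps=1e-4):
    """Finite-difference estimate of |d(score)/d(pixel)| per pixel.
    High values mark the pixels the 'model' is most sensitive to -
    the same quantity FlashTorch obtains via backpropagation."""
    base = class_score(image, weights)
    sal = [[0.0] * len(image[0]) for _ in image]
    for i in range(len(image)):
        for j in range(len(image[0])):
            bumped = [row[:] for row in image]
            bumped[i][j] += eps
            sal[i][j] = abs((class_score(bumped, weights) - base) / eps)
    return sal

# A 'model' that only attends to the top-left 2x2 patch: the saliency
# map should light up exactly there, whatever the image content.
weights = [[1.0, 1.0, 0.0, 0.0],
           [1.0, 1.0, 0.0, 0.0],
           [0.0, 0.0, 0.0, 0.0],
           [0.0, 0.0, 0.0, 0.0]]
image = [[0.2, 0.7, 0.1, 0.9],
         [0.5, 0.3, 0.8, 0.4],
         [0.6, 0.1, 0.2, 0.7],
         [0.9, 0.4, 0.5, 0.3]]
sal = saliency_map(image, weights)
```

In practice the gradients come from a single backward pass rather than one forward pass per pixel, but the picture it paints - which parts of the image drive the prediction - is the same.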

Our next project was around music. To explore understandability in this area we were able to use some data from BBC Introducing. Our users in this case were people from the BBC Sounds mixes curation team, who were kind enough to give us some of their time to explore the problems they face when finding relevant Introducing tracks to use in their mixes. Thanks to (ex-R&D) Alex for arranging all that for us. We explored various visual metaphors that might help curators find useful tracks, using low-level music analysis tools (Essentia) with machine learning on top, including Kristine's work on matching tracks with similar radio stations and mixes.

A sample visual music pattern with key.
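One simple way to surface "similar tracks" from low-level analysis: represent each track as a vector of descriptors (the kind of features Essentia extracts - tempo, energy and so on) and rank candidates by cosine similarity. This is a hedged sketch, not the team's actual pipeline; the track names and feature values are made up, and a real system would use many more descriptors or learned embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical per-track feature vectors, pre-scaled to comparable ranges,
# e.g. [tempo, energy, brightness] - placeholders for Essentia descriptors.
tracks = {
    "upbeat_guitar": [0.90, 0.80, 0.10],
    "upbeat_synth":  [0.85, 0.75, 0.15],
    "slow_ambient":  [0.10, 0.20, 0.90],
}

def most_similar(query, catalogue):
    """Rank every other track in the catalogue by similarity to `query`."""
    qv = catalogue[query]
    scored = [(name, cosine_similarity(qv, vec))
              for name, vec in catalogue.items() if name != query]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

ranking = most_similar("upbeat_guitar", tracks)
```

Scaling the features first matters: if raw tempo (around 120 BPM) sits next to energy (0 to 1), the tempo dimension dominates the cosine and every fast track looks "similar" regardless of its other qualities.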

We're now about to explore understandability with respect to recommendations, yet another, different aspect of the problem, in collaboration with the Data team.

Meanwhile, with Henry's help, Tristan has been running a series of workshops about stock photos and "AI" (machine learning), trying to understand what visual representations make sense and accurately convey some aspect of what machine learning is (i.e. not blue brains and embodied robots).

Stock photo ideas.

Other things - Chris has been actively participating in the W3C's remote TPAC (all-groups meeting). Holly and David have been continuing to work with Todd and Hannah on workshops for their user research study "Life Under Lockdown". There'll be some blog posts about that soon. Holly, David and Miranda have started the second part of our collaboration with Full Fact. We said a sad but temporary goodbye to Hannah as she joins the sustainability team in BCS for a year.

The Data Team have finished their month-long Review and Planning process and are about to start work on their first new release for Year 2. For the next 12 months they will be continuing their work in Speech-to-text and Sentiment Analysis, giving their Recommendations work (which has recently been integrated into iPlayer) a more solid footing, and moving up a gear with their Data Science Research Partnership efforts.

The biggest change from last year is the introduction of Natural Language Processing (NLP) as a stand-alone workstream. There are lots of business problems that a more in-depth understanding of NLP could help with - how to better moderate BBC forums, how to automatically segment our programmes so we can deliver personalised content to users, even how to better understand what our users are saying about the BBC and its programmes on Twitter. They're looking forward to getting started on this exciting portfolio of work.

The Interaction and Prototyping team (formerly Anansi) has started exploring how to create a better shared experience across devices with multiple participants. As a first pass, Mathieu and Andrew have been looking at how to apply a board game design approach to multi-screen online shared experiences. They have been exploring and creating interaction patterns to test with casual gamers. The aim is to eventually create a format that is flexible and scalable, so it can be applied to different types of experiences - using game mechanics, but not just for games. The idea is to bring people together and hopefully enable meaningful social interaction even when they're not physically together.

I&P is also involved in a collaboration project with TS&A and UX&D on voice-first experiences, for which Anthony has been making some fantastic prototypes. Henry has also been working with Jasmine and me on an upcoming workshop for R&D colleagues on building a physical Internet radio (based on the Radiodan work - remember that?).

This post is part of the Internet Research and Future Services section