Research & Development

Posted by Caroline Alton

It had never been done before, and the idea would make many a broadcaster’s eyes go wide with wonder. Use artificial intelligence (AI) to help create two nights of experimental programming on BBC Four, by delving into the treasures of the vast BBC Archive. And then dig deeper, and use AI and machine learning to create mini-BBC Four style segments in a programme presented by Hannah Fry, to illustrate how the technology works.

So why were we doing it? Here at R&D we’re already exploring how AI and machine learning technologies could transform the future of media production, and we’re always looking to work with programme makers to support and future-proof the BBC.

The BBC has one of the largest broadcast archives in the world, and manually searching millions of programme hours is near impossible. It’d take decades for one person to watch them all.

The increasing digitisation of the archive gives us the opportunity to develop AI tech to help filmmakers and schedulers find hidden gems - programmes and content that might otherwise be overlooked or never seen again. And we are able to utilise the speed of machines to help us, as a computer’s gaze is much faster than ours and can scan a year’s worth of TV in a couple of days.

R&D have previously worked with BBC Four, so it was great to collaborate again, and we agreed with BBC Four Editor Cassian Harrison that artificial intelligence, an emerging trend, would be a great choice. Two ideas were finally selected that could make their working lives easier and complement their schedule best. And so BBC 4.1 was born.

First, we would use machine learning to pick out likely shows for an evening’s entertainment on BBC Four from the vast BBC Archive. The channel's scheduler would then use this list to select what would be broadcast.

Now BBC Four is quite a distinctive channel, so we’ve had to create the new technology ourselves, drawing on the work of developers around the world. It’s been done in-house by R&D and can’t be bought off the shelf. It’s in its early stages and we’ve been working with academics to refine it, including one from Mexico who is working on a PhD about teaching computers to enjoy television.

The AI examined the programmes that BBC Four had shown in the past and their attributes, analysing their descriptions and subjects – be it music, history or science. Computers trawled through more than 270,000 programmes across the archive that were available in digital form, ranking the top 150 most relevant factual ones by what you could call their 'BBC Four-ness'. Schedulers then used the list to select those to show across two nights, the 4th and 5th September 2018.
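The post doesn’t describe the model itself, but ranking by 'BBC Four-ness' can be framed as a text-similarity problem: score each archive programme’s description against the descriptions of programmes BBC Four has already broadcast. Here is a minimal sketch in plain Python, assuming descriptions are available as strings; the TF-IDF approach and all function names are illustrative assumptions, not the production system.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Turn a list of description strings into TF-IDF weighted term vectors."""
    tokenised = [doc.lower().split() for doc in docs]
    df = Counter(term for doc in tokenised for term in set(doc))
    n = len(docs)
    vectors = []
    for doc in tokenised:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dictionaries."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_by_four_ness(archive, reference):
    """Rank archive descriptions by similarity to past BBC Four descriptions."""
    vecs = tfidf_vectors(archive + reference)
    archive_vecs, ref_vecs = vecs[:len(archive)], vecs[len(archive):]
    # Centroid of the reference (previously-broadcast) descriptions.
    centroid = Counter()
    for v in ref_vecs:
        for t, w in v.items():
            centroid[t] += w / len(ref_vecs)
    scores = [(cosine(v, centroid), desc) for v, desc in zip(archive_vecs, archive)]
    return sorted(scores, reverse=True)
```

In this sketch a programme described as "classical music history special" would outrank "football match highlights" against a reference corpus of music and science documentaries; the real system would of course work with richer metadata than raw description text.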

Woman looking through the shelves of television archives

Now to the second idea, and really challenging part of the project, to use AI to help make TV in Made by Machine: When AI met the Archive. Our aim was to highlight how the technology worked, and using it to create mini-BBC Four style compilations from these top 150 programmes seemed like the perfect way to do it. But how would the machine 'watch' them? What would it see? And could it be trained to select and edit video in the beautiful way people do?

The aim was not only for the AI to create segments, but to let people 'see' inside the mind of the machine as it did so, albeit in a simplified way. Viewers could watch its processes and also spot the limits of its learning.

How did it do that? It broke each of those top 150 BBC Four-like shows into bite-size chunks (more than 15,000 in total), and then chained them back together.
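The post doesn’t say how the chunk boundaries were chosen, but one plausible sketch is to merge detected shot changes into chunks that respect the duration bounds mentioned later in this piece (25 seconds to just under two minutes). Everything below - the shot-boundary input and the greedy merging - is an illustrative assumption:

```python
def chunk_programme(shot_boundaries, duration, min_len=25.0, max_len=119.0):
    """Greedily merge consecutive shots into chunks within [min_len, max_len].

    shot_boundaries: sorted shot-change timestamps in seconds.
    duration: total programme length in seconds.
    Returns a contiguous list of (start, end) pairs covering the programme.
    """
    cuts = [t for t in shot_boundaries if 0.0 < t < duration] + [duration]
    chunks, start = [], 0.0
    for cut in cuts:
        while cut - start > max_len:  # a run of shots too long: force a cut
            chunks.append((start, start + max_len))
            start += max_len
        if cut - start >= min_len or cut == duration:
            chunks.append((start, cut))
            start = cut
    # Fold an over-short final chunk into its predecessor.
    if len(chunks) >= 2 and chunks[-1][1] - chunks[-1][0] < min_len:
        a, b = chunks[-2], chunks.pop()
        chunks[-1] = (a[0], b[1])
    return chunks
```

Cutting at shot changes keeps chunks visually coherent; the forced cut in the middle handles long uninterrupted takes that would otherwise exceed the two-minute ceiling.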

And to show a range of different ways computers could analyse video and then try to link it together to form some sort of story or narrative, R&D technologists devised four techniques:

  • Object & Scene Recognition: where the AI learns to identify what a scene consists of, including the type of landscape, identifying objects, whether people are featured and what they might be wearing. You then see how it attempts to create a compilation where each scene follows on from the last in some way.
  • Subtitle Analysis: the AI uses natural language processing as it scans the subtitles of archive programmes and then looks for connections between words, topics and themes as it pieces footage together.
  • Visual Energy (or Dynamism): the AI analyses video frame by frame to try to detect whether there’s a lot of activity on screen (high energy) or not (low energy). It then tries to create a compilation with shifting pace: starting slow, building up energy, then dropping back for a breather before shifting to a climax.
  • Finally the AI draws on what it’s learnt using all three techniques to create a new piece of content.
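As a toy illustration of the Subtitle Analysis idea, a greedy chainer might pick, at each step, the unused clip whose subtitles share the most keywords with the current one. This is a sketch under our own assumptions - crude whitespace tokenisation and invented function names - not the production code:

```python
def subtitle_words(subtitle):
    """Crude keyword set: lowercase words of four or more letters."""
    return {w for w in subtitle.lower().split() if len(w) >= 4}

def chain_clips(clips):
    """Greedily order clips so consecutive subtitles share the most keywords.

    clips: list of (clip_id, subtitle_text) pairs. Starts from the first
    clip and repeatedly appends the unused clip with the largest overlap."""
    remaining = {cid: subtitle_words(text) for cid, text in clips}
    order = [clips[0][0]]
    current = remaining.pop(clips[0][0])
    while remaining:
        nxt = max(remaining, key=lambda cid: len(current & remaining[cid]))
        order.append(nxt)
        current = remaining.pop(nxt)
    return order
```

Even this toy version shows the failure mode described below: once the chain latches onto a theme with many matching clips, it can keep returning to the same programme.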

For each we tried to show on screen the decisions being made, in the form of simplified text generated by the AI. The compilations in the programme are virtually as the AI made them: it selected all the clips and did almost all of the editing, with minimal human intervention.
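The Visual Energy technique lends itself to a similar toy sketch: estimate each clip’s motion from frame-to-frame pixel differences, then arrange the clips into the slow / build / breather / climax shape described above. Again, this is purely illustrative - frames are assumed to be flat greyscale pixel lists, and the arc-building heuristic is our own:

```python
def frame_energy(prev, frame):
    """Mean absolute pixel difference between two consecutive greyscale frames."""
    return sum(abs(a - b) for a, b in zip(prev, frame)) / len(frame)

def clip_energy(frames):
    """Average frame-to-frame motion across a clip."""
    diffs = [frame_energy(p, f) for p, f in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs)

def arc_order(clips):
    """Order clips into a slow start, a build, a breather, then a climax.

    clips: list of (clip_id, frames) pairs. Sorts by motion energy, then
    holds one quiet clip back as the breather before the energetic finale."""
    ranked = [cid for cid, frames in sorted(clips, key=lambda c: clip_energy(c[1]))]
    if len(ranked) < 3:
        return ranked
    breather = ranked.pop(1)  # second-quietest clip, saved for late on
    return ranked[:-1] + [breather, ranked[-1]]
```

A production system would estimate energy with optical flow or learned features rather than raw pixel differences, but the pacing logic - sort by energy, then reshape the order - stays the same.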

We tested each of these techniques by creating compilations of up to 15 minutes. It didn’t always go to plan. The machine didn’t always get it right, and sometimes got stuck in its own data loops. At one point it had a particular fascination with buses; another test run focused almost solely on clips from one documentary about the garden of an English country house, with barely a mention of the other 149 programmes. But there were fantastic moments of unexpected juxtaposition, and wonderful snippets of archive programmes which our team had never seen before and which we’d now like to watch in their entirety!

The AI reflects its training data, so any bias in its training was replicated: it failed to spot 'mobile phones' but picked up 'cell phones' (incorrectly in some cases). And the machine's choices were guided by our engineers. For example, we opted for the bite-size chunks of programme to have durations between 25 seconds and just under two minutes. It also found some programmes that wouldn’t usually be picked up for scheduling due to complicated contractual and rights issues, so we filtered those out. Multiple versions were made using each technique, and then one of each was chosen by BBC Four to go into Made by Machine: When AI met the Archive.

These sections were incorporated into the programme together with presenter Dr Hannah Fry and a virtual host, 'BBC 4.1', as well as colleagues across the BBC, including the team at BBC Archives in West London. As experimental as the programme was, we didn’t have the timeframe or resources for everything to be created by AI and machine learning. The virtual host 'BBC 4.1’ was made by BBC Creative, in the spirit of BBC Four and the programme. We also worked closely with our colleagues in BBC Television & Media Operations on the programme’s production, with graphics designed by Made in Colour.

Made by Machine: When AI met the Archive was broadcast at 9pm on BBC Four on 5th September 2018 as part of BBC 4.1 AI TV.


BBC R&D - Using Artificial Intelligence to Search the Archive

TVB Europe - BBC Four experiments with AI for night of programming

BBC R&D - AI in Production

BBC R&D - AI Opportunities: Transforming Coverage of Live Events

BBC R&D - Artificial Intelligence in Broadcasting

BBC R&D - The Business of Bots

BBC Technology + Creativity Blog - Fighting bias in the machines

BBC Technology + Creativity Blog - AI for Common Good

BBC News Labs - What we talk about when we talk about fair AI

BBC R&D - BBC RemArc - how our archives can help people with dementia

Machine Learning and Artificial Intelligence Training and Skills from the BBC Academy including:

The history of machine learning

Get ready to work with AI

What does AI mean for the BBC?

AI at the BBC: Hi-tech hopes and sci-fi fears

This post is part of the Internet Research and Future Services section
