We investigate how artificial intelligence techniques, including machine learning and computer vision, can be applied to video processing to create new and innovative production tools.
Project from - present
What we're doing
We investigate and develop tools to process, analyse and understand video – normally in real time. We aim to take the latest academic research and industrial techniques and translate them across to solve problems in the world of broadcasting.
In the past, we’ve used various computer vision techniques to investigate camera and object tracking, scene geometry and image analysis. These tools were used in Piero, our sports graphics system which won a Queen’s Award for Enterprise. They have also featured as part of our Biomechanics project and other sports analysis tools we’ve developed. You’ll see some of these tools at work whenever you watch the analysis on Match of the Day and we continue to improve and support them with additional features and developments.
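At the heart of tools like these is scene geometry: relating points in the camera image to points on the pitch. A common way to do this for a planar surface is a 3x3 homography. The sketch below is a hypothetical illustration of that mapping step (the matrix values and point coordinates are made up for the example), not code from Piero itself.

```python
# Hypothetical sketch: applying a planar homography, the basic geometric
# step behind tying on-screen graphics to a flat surface such as a pitch.
def apply_homography(H, x, y):
    """Map an image point (x, y) through a 3x3 homography H,
    returning the point in the target (e.g. pitch) coordinate frame."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    # Homogeneous coordinates: divide through by the third component.
    return u / w, v / w

# An identity homography leaves points unchanged.
I = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(I, 320.0, 240.0))  # → (320.0, 240.0)
```

In practice the homography would be estimated from matched features (for example with a library routine such as OpenCV's `cv2.findHomography`) rather than written out by hand.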
Over recent years, rapid improvements in the field of artificial intelligence, and machine learning in particular, have revolutionised computer vision, and much of our current work takes advantage of these developments. Recently we have been experimenting with ML-based “TwinNetwork” object trackers, developing tools to recognise animals in images and to classify the type of activity taking place in a video. We have also been collaborating with our CloudFit Production team to see if we can use our tools to process and analyse media that they record and manage.
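The idea behind a twin-network (often called “Siamese”) tracker is that the target template and each candidate image patch are passed through the same embedding network, and the candidate whose embedding is most similar to the template is chosen as the new target location. The sketch below illustrates only that final matching step, with made-up embedding vectors standing in for real network outputs; it is an assumption-laden illustration, not our implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(template_embedding, candidate_embeddings):
    """Return the index of the candidate patch whose embedding is
    most similar to the target template's embedding."""
    scores = [cosine_similarity(template_embedding, c)
              for c in candidate_embeddings]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy embeddings: in a real tracker these would come from the shared network.
template = [1.0, 0.0, 0.5]
candidates = [[0.0, 1.0, 0.0],    # dissimilar patch
              [0.9, 0.1, 0.4],    # close match to the template
              [-1.0, 0.0, -0.5]]  # opposite direction
print(best_match(template, candidates))  # → 1
```

Real twin-network trackers typically score a dense grid of candidate locations with a cross-correlation layer rather than looping over patches, but the matching principle is the same.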
We work closely with partners both inside and outside the BBC to develop our tools. These broadcast companies and production teams help us to crystallise technical innovations into practical tools that will be genuinely useful for them.
Why it matters
Many production teams are increasingly stretched by time and budget constraints. There is pressure to produce programmes that provide value for money for broadcasters, while at the same time there is demand to create ever more content for new digital platforms and more innovative content for audiences. Yet much of a production team’s effort can be spent on relatively low-level, time-consuming tasks such as logging rushes and transcribing interviews, rather than on the more creative tasks needed to tell great stories.
Over the last few years, developments in computer vision, and now machine learning, have made it much quicker and easier to apply these techniques to media. Our work investigates how we might take advantage of this to aid the production process. We look to help with current production processes, developing tools to speed them up and free staff for higher-level work – but we also seek to enhance existing workflows with new tools that enable new creative options.
There are opportunities to assist production teams to work faster and better; to help them offer the audience more without requiring extra effort.
This project is part of the Immersive and Interactive Content section
This project is part of the Visual Computing for production work stream