Posted by Tristan Ferne

These are weekly notes from the Internet Research & Future Services team in BBC R&D where we share what we do. We work in the open, using technology and design to make new things on the internet. You can follow us on Twitter at @bbcirfs.

We've just moved into our new lab, but here's what we managed to do last week while packing up.

On Tuesday, Rob, on his quest for the metadata holy grail*, headed to Oxford University with Jana to catch up on some research projects...

(*for another time...)

They looked at four areas of research that we think could be of real use to BBC programme makers and researchers:

1. Retrieval of still images from footage, specifically finding paintings in arts and news documentaries.
2. Automated tagging of objects in paintings (horses and boats, for example) based on object models built up from photo collections.
3. More general object, scene and face recognition in images and video.
4. Text recognition in video footage, which could be useful for extracting data from captions in documentaries and news programmes (sketched below).
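As a taste of how approachable that last area is to prototype, here's a rough sketch of pulling caption text from footage. It assumes off-the-shelf OpenCV and Tesseract (via pytesseract); the function and sampling interval are our own invention, and a real system would need to find and crop the captions first.

```python
# Rough sketch: sample frames from a video and OCR them for caption text.
# Assumes OpenCV (cv2) and pytesseract are installed; everything here is
# illustrative, not a production caption extractor.
import cv2
import pytesseract

def captions_from_video(path, every_n_frames=25):
    """Return (frame_number, text) pairs for frames where OCR finds text."""
    cap = cv2.VideoCapture(path)
    frame_no, found = 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % every_n_frames == 0:
            grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            text = pytesseract.image_to_string(grey).strip()
            if text:
                found.append((frame_no, text))
        frame_no += 1
    cap.release()
    return found
```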

Thursday was crowdsourcing day. Our friends from CWI in Amsterdam were visiting and showed us their work on Waisda, a crowdsourced game for tagging a popular Dutch TV programme. And Chris Lintott from Galaxy Zoo talked to us in the afternoon about his experience of citizen science. Our World Service project is more of an online archive with crowdsourcing features, but we're considering trying a Galaxy Zoo-style task-based approach so we can compare the two.

And on Friday we finished packing up and throwing away old paperwork. Some of us have now moved into our new lab in Euston, shared with the BBC Connected Studio team and some of the UCL Computer Science department, as part of our partnership with UCL. The move went remarkably smoothly, mainly thanks to Matt P, Chris G, Justin and Akua. Just the normal teething troubles: blocked sinks, missing bins and a broken coffee machine.

On to the project updates...

On COMMA (building a general-purpose cloud platform and marketplace for media analysis)

Yves, Matt H, Chris Needham and James have been auditing algorithms to understand the breadth of what we need to support, with in-depth case studies of a few. They've also been investigating the available cloud platforms and how to share processing and data across different providers.

On the World Service archive prototype (using algorithms and crowdsourcing to put a massive radio archive online)

We've been making more of the metadata editable: Gareth and Thomas have been adding a full edit history and rollback. Anthony's been adding another filter to the search, and Chris L and Andrew N have been adding more data to the stats dashboard.
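In case you're wondering what "edit history and rollback" looks like in miniature, here's a hedged sketch. None of these names come from the prototype's actual code; it's just the shape of the idea:

```python
# Toy versioned metadata field: every edit is appended with who and when,
# so any value can be rolled back without losing the audit trail.
# All names here are made up for illustration.
import time

class VersionedField:
    def __init__(self, value, user="importer"):
        self.history = [(time.time(), user, value)]

    @property
    def value(self):
        return self.history[-1][2]

    def edit(self, new_value, user):
        self.history.append((time.time(), user, new_value))

    def rollback(self, steps=1):
        # Re-append the old value rather than deleting newer entries,
        # so even the rollback itself stays auditable.
        _, _, old_value = self.history[-1 - steps]
        self.history.append((time.time(), "rollback", old_value))
```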

And we've just released diarize-jruby under the AGPLv3 licence. It's a Ruby toolkit for speaker segmentation and identification from audio. You can see the results of segmenting speakers in the prototype.
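For a feel for what such toolkits do, here's a toy version of the classic Bayesian Information Criterion (BIC) test often used for speaker-change detection. To be clear, this is an illustration of the general technique, not diarize-jruby's actual code:

```python
# Toy speaker-change test: model two adjacent windows of acoustic features
# (e.g. MFCCs) as Gaussians and compare against one Gaussian over both.
# A positive delta-BIC hints that the speaker changed between the windows.
# Illustrative only; windows need more frames than feature dimensions.
import numpy as np

def delta_bic(x, y, penalty=1.0):
    """x, y: (frames, dims) feature arrays for two adjacent windows."""
    z = np.vstack([x, y])
    n, d = z.shape

    def logdet_cov(a):
        return np.linalg.slogdet(np.cov(a, rowvar=False))[1]

    separation_gain = 0.5 * (n * logdet_cov(z)
                             - len(x) * logdet_cov(x)
                             - len(y) * logdet_cov(y))
    complexity = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
    return separation_gain - penalty * complexity  # > 0 suggests a change
```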

On metadata research

In our metadata research stream, Jana has been talking to a number of groups across BBC R&D about fingerprinting audio and video. We're currently using it for many things, from synchronisation and matching rushes to edits, to detecting re-use in the archive (a toy sketch of the idea is below). Denise has been implementing the University of Surrey's object recognition software and trying to reproduce their results, and she's also been working with Chris Newell, Theo, Penny and Sam on the evaluation of a TV programme recommender based on mood data.
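As promised, here's a toy sketch of the fingerprinting idea: boil each short audio window down to a hash of its dominant frequencies, then compare hash sequences between two recordings. Real fingerprinters are far more robust; this is just the shape of the technique:

```python
# Toy audio fingerprint: hash the dominant frequency bins of each window,
# then score two recordings by how many aligned hashes match.
# Illustrative only; real systems survive noise, codecs and time offsets.
import numpy as np

def fingerprint(samples, window=2048, hop=1024, peaks=4):
    hashes = []
    for start in range(0, len(samples) - window, hop):
        spectrum = np.abs(np.fft.rfft(samples[start:start + window]))
        top_bins = np.argsort(spectrum)[-peaks:]  # dominant frequencies
        hashes.append(hash(tuple(sorted(top_bins))))
    return hashes

def similarity(a, b):
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(1, min(len(a), len(b)))
```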

Meanwhile elsewhere...

In James' working-on-the-bus time he has been developing his OpenOB project, a software alternative to expensive Outside Broadcast (OB) links. This week he was "...adding support for split streaming (send via two different network paths, receive both at the other end and combine/deduplicate in the jitter buffer) for highly redundant links (eg, venue internet and 3G or two 3G modems without bonding, or dual wired WANs for STLs)."
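To make the combine/deduplicate step concrete, here's a minimal sketch (in Python, OpenOB's language) of a jitter buffer that accepts the same numbered packets from two paths and keeps whichever copy lands first. It's not OpenOB's actual code:

```python
# Minimal dedup jitter buffer: packets arrive from two network paths with
# the same sequence numbers; keep the first copy of each and hand packets
# to playout in order. Not OpenOB's real implementation, just the idea.
import heapq

class DedupJitterBuffer:
    def __init__(self):
        self.heap = []        # pending packets, ordered by sequence number
        self.seen = set()     # sequence numbers already accepted
        self.next_seq = None  # next sequence number due for playout

    def push(self, seq, payload):
        if seq in self.seen:
            return  # duplicate copy from the other path; drop it
        self.seen.add(seq)
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        """Yield payloads in order, as far as the sequence is contiguous."""
        while self.heap and (self.next_seq is None
                             or self.heap[0][0] == self.next_seq):
            seq, payload = heapq.heappop(self.heap)
            self.next_seq = seq + 1
            yield payload
```

A real implementation would also age out old sequence numbers and cope with wrap-around, but first-copy-wins is the heart of it.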

Chris L has been building on his work with the W3C and Web Audio by writing about how to build a polyphonic synth in your browser. Squelch squelch dahhhhhhhh.....

Yves is interested in deep learning: many-layered neural networks that are capable of learning both low-level and high-level concepts. He found an interesting article from Google about how they used deep neural networks for their new image search, automatically tagging images with Freebase concepts. He says "Given the way DNNs have been getting massively popular for all sorts of tasks over the last year (automated classification, but also speech recognition - Microsoft is claiming fantastic results using them, and our NST partners have improved their results a lot by replacing one of their components by a DNN), perhaps worth investigating?"
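For anyone new to the idea, the "many-layered" part is easy to state in code: each layer is a learned linear map followed by a nonlinearity, stacked so later layers can build on what earlier ones extract. Here's a toy forward pass with random stand-in weights (nothing trained, and nothing to do with Google's system):

```python
# Toy deep network forward pass: stacked linear maps with ReLU in between.
# Weights are random placeholders; a real model learns them from data.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [256, 128, 64, 10]  # e.g. image features in, concept scores out
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)  # hidden layers: linear map + ReLU
    return x @ weights[-1]          # output layer: raw scores over tags

scores = forward(rng.standard_normal(256))
```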

And finally, Chris F was on secondment to News, where he built this visualisation of the cats' journeys from Horizon's The Secret Life of the Cat (also featuring cat-cams developed by R&D).

Interesting links

A report from the W3C's Open Data on the Web workshop

The news storyline ontology, which we helped develop

Using face recognition to do TV recommendations for groups
