
Posted by Ben Clark

These are the weeknotes from Internet Research and Future Services, a team in BBC R&D. This week: Data processing pipelines, lots of collaboration, blog posts and a new W3C Working Group.

Above: The Discovery Team at their joint hackday with Radio & Music, under the watchful eye of Lord Reith

We work in three main areas: Discovery, Experiences and Data.

The Discovery Team had a successful hackday with Radio and Music. They have also been talking to lots of people across the BBC, including BBC Monitoring and the Events and Arts team in Glasgow. On the engineering side, they've completed some early work deploying the Artifactory repository manager. Finally, they published their first blog post, entitled "Understanding Editorial Decisions".

In the Experiences Team, Barbara, Tim, Ant, Chris, Lei, Lara and Tom H have been getting the Atomized News app ready for its launch on BBC Taster.

Henry helped to organise a workshop in Salford with Becky G-C from Connected Studio on Virtual Reality and 360 Video. Zillah, Tom H and Andrew W also attended.

The Peaky Blinders Story Explorer was launched on Taster, and is getting good ratings. Tristan has been writing a blog post about it, whilst Andrew has been creating a promo video for the prototype.

Alan has been working with Casualty script data, with the aim of creating a demo where a user can search for scenes by character and then view the scripts for those scenes.
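For illustration, here's a minimal sketch of how that kind of scene search could work. The scene structure and field names are invented for the example, not the actual format of the Casualty script data:

```python
from collections import defaultdict

# Hypothetical scene records; the real script data won't look like this.
scenes = [
    {"id": "ep1-sc04", "characters": ["Charlie", "Duffy"], "script": "..."},
    {"id": "ep1-sc09", "characters": ["Charlie", "Connie"], "script": "..."},
]

# Build an inverted index from character name to the scenes they appear in.
scenes_by_character = defaultdict(list)
for scene in scenes:
    for character in scene["characters"]:
        scenes_by_character[character.lower()].append(scene)

def find_scenes(character):
    """Return every scene featuring the named character."""
    return scenes_by_character.get(character.lower(), [])

for scene in find_scenes("Charlie"):
    print(scene["id"], scene["script"])
```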

On the "Tellybox" project, Libby has been working on parsing BARB data with help from Andrew McParland, to see who watches with who. Calliope and Joanne have both been interviewing people about their TV habits.

Chris chaired the first conference call of the new W3C TV Control Working Group to discuss the current status of the API specification, and plans for developing the spec on the W3C’s Recommendation Track. He also worked with colleagues from the EBU and RTS to start developing a production-ready CPA and OAuth 2.0 compatible authorization provider in Node.js, based on our existing reference implementation.
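For readers unfamiliar with the underlying protocol, the sketch below shows a toy OAuth 2.0 token endpoint implementing the client credentials grant. It's written in Python/Flask purely for brevity rather than Node.js, it omits client registration, scopes, token expiry enforcement and everything CPA adds on top, and it is not based on the reference implementation mentioned above:

```python
import secrets
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical pre-registered client credentials.
CLIENTS = {"demo-client": "demo-secret"}
issued_tokens = set()

@app.route("/token", methods=["POST"])
def token():
    # OAuth 2.0 token requests are form-encoded POSTs.
    if request.form.get("grant_type") != "client_credentials":
        return jsonify(error="unsupported_grant_type"), 400
    client_id = request.form.get("client_id")
    client_secret = request.form.get("client_secret")
    if CLIENTS.get(client_id) != client_secret:
        return jsonify(error="invalid_client"), 401
    # Issue an opaque bearer token for the authenticated client.
    access_token = secrets.token_urlsafe(32)
    issued_tokens.add(access_token)
    return jsonify(access_token=access_token,
                   token_type="Bearer",
                   expires_in=3600)

if __name__ == "__main__":
    app.run()
```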

Joanne is attending CHI, a Human-Computer Interaction conference in San Jose, with some of our North Lab colleagues.

In the Data Team, Ben and Jana met with colleagues in Design and Engineering to discuss how to process videos from the Jupiter News Video Pipeline into video fingerprints. The aim is to demonstrate how video fingerprinting could help editorial staff using Jupiter find videos containing the same footage.
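To give a flavour of what a video fingerprint can be, here is a sketch of one common approach, per-frame average hashes compared by Hamming distance. This is illustrative only, not necessarily the method being explored here, and it assumes OpenCV is installed:

```python
import cv2

def frame_hash(frame, size=8):
    """64-bit average hash of a single frame."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(grey, (size, size))
    mean = small.mean()
    bits = 0
    # One bit per pixel: brighter than the frame average or not.
    for value in small.flatten():
        bits = (bits << 1) | int(value > mean)
    return bits

def fingerprint(path, every_n_frames=25):
    """Sample a hash every n frames to get a compact video signature."""
    capture = cv2.VideoCapture(path)
    hashes = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            hashes.append(frame_hash(frame))
        index += 1
    capture.release()
    return hashes

# Two videos sharing footage will share runs of near-identical hashes;
# comparing by Hamming distance tolerates re-encoding artefacts.
```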

Tom has been writing up his work on speech/music discrimination as a technote and gave a presentation to the section summarising his results. Matt has been looking at the voice activity detection part of our Kaldi speech-to-text install, checking how well it performs and how that influences the error rate. He's also been thinking about integrating Tom's work and recent developments in Kaldi.
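For context, word error rate is the usual metric for this kind of evaluation: the edit distance between the recognised words and a reference transcript, divided by the reference length. A minimal implementation:

```python
def word_error_rate(reference, hypothesis):
    """WER: word-level edit distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the cat sat on the mat",
                      "the cat sat on mat"))  # 1 error / 6 words ≈ 0.17
```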

Links

Long-Form Reading Shows Signs of Life in Our Mobile News World

Pre-Touch Sensing for Mobile Interaction

What is a Product Manager Anyway?

Conversational UIs