
Posted by Dan Nuttall

These are weekly notes from the Internet Research & Future Services team in BBC R&D where we share what we do. We work in the open, using technology and design to make new things on the internet. You can follow us on Twitter at @bbcirfs.

Hello, my name is Dan and these are the weeknotes. Although everyone's been busy, this week's stand-out contribution has come from Gareth and his trip report from the Google I/O conference. But first, let's dip into what the UK-based members of the team have been up to.

COMMA

This new two-year TSB-funded project kicked off in earnest this week. Yves, Matt, Rob & Chris Needham have been working with our project partners to establish what needs to be done over the next few Work Packages.

James, Matt and I have been looking into software to manage an elastic set of virtual machines that would allow us to scale resources at a moment’s notice. One of the highlights of the week was James' attempt to explain to us what Apache’s ZooKeeper actually does. Matt’s been taking a look at AMQP and I’ve been looking into OpenNebula as the platform base.
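To give a flavour of the messaging side, here's a minimal AMQP sketch in Python using the pika client: a producer publishes a job onto a queue that worker machines could consume from. The broker address, queue name and job payload are all invented for illustration – this isn't the project's actual setup.

```python
# Minimal AMQP publish example using the pika client (illustration only).
# Assumes a RabbitMQ broker on localhost and a hypothetical "analysis_jobs" queue.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare the queue so it exists even before any worker has started.
channel.queue_declare(queue="analysis_jobs", durable=True)

# A made-up job description; workers on the elastic VMs would pick this up.
job = {"task": "generate_waveform", "source": "audio/example.wav"}
channel.basic_publish(
    exchange="",
    routing_key="analysis_jobs",
    body=json.dumps(job),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```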

World Service Archive Prototype

Chrises Lowis and Finch put together a dashboard to keep track of user activity such as programmes played, tags voted on, etc. This is especially pertinent since we recently re-opened registration to the prototype. Pete’s UX designs for improvements to the site are coming together nicely, with dynamic search results, a date filter for search and image carousel mockups looking especially good.

Round Up

Chris Needham has been experimenting with a C++ program that reads an input audio file and generates a PNG image showing the waveform at a given zoom level, using code extracted from the Audacity audio editor. Waveforms as a service, anyone?
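The tool itself is C++, but the core idea is easy to sketch: group the samples into buckets according to the zoom level, take a min/max pair per bucket, and draw one vertical line per pixel column. Here's a rough Python equivalent (using the soundfile and Pillow libraries, with invented file names and sizes), not the actual program:

```python
# Rough sketch of min/max waveform rendering (the real tool is C++).
import soundfile as sf
from PIL import Image, ImageDraw

def render_waveform(audio_path, png_path, samples_per_pixel=256, height=128):
    samples, _rate = sf.read(audio_path)
    if samples.ndim > 1:                       # mix multi-channel audio down to mono
        samples = samples.mean(axis=1)

    width = len(samples) // samples_per_pixel  # zoom level sets the image width
    image = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(image)

    for x in range(width):
        chunk = samples[x * samples_per_pixel:(x + 1) * samples_per_pixel]
        lo, hi = chunk.min(), chunk.max()          # one min/max pair per pixel column
        y_top = int((1 - hi) / 2 * (height - 1))   # map [-1, 1] onto image rows
        y_bottom = int((1 - lo) / 2 * (height - 1))
        draw.line([(x, y_top), (x, y_bottom)], fill="steelblue")

    image.save(png_path)

render_waveform("example.wav", "waveform.png", samples_per_pixel=512)
```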

Libby ventured into further experiments on the Vista TV code base - this time visualising tweets tagged with #bbcqt and attempting to identify the subject of the discussion / vitriol.

James and Matt showed great courage when they rescued a live machine from a dependency hell that took down an SSH server, in theory removing all remote access to the machine. Thanks to our use of Puppet to provision and maintain machines, they were able to roll out a fix that restored our access without any downtime.

Jana started to work on video fingerprinting and extracting representations that allow us to match video clips even if the content has been slightly altered, compressed or re-encoded. There’s an MPEG-7 standard for this, which we’re currently evaluating on rushes from The Bottom Line. Long term goals are to aid production if they lose track of where a clip comes from, and to establish reuse in the archive.
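The MPEG-7 descriptors are more involved, but a toy version of the general idea is to reduce sampled frames to small perceptual hashes and compare clips by Hamming distance, which tolerates mild compression and re-encoding. Here's a rough Python sketch using OpenCV – not our actual pipeline and not the MPEG-7 scheme:

```python
# Toy frame-fingerprinting sketch (not the MPEG-7 descriptors we're evaluating).
# Each sampled frame becomes a 64-bit "difference hash"; two clips are compared
# by the average Hamming distance between aligned frame hashes.
import cv2
import numpy as np

def frame_hash(frame):
    """Reduce a frame to 64 bits: 9x8 grayscale, compare neighbouring pixels."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (9, 8), interpolation=cv2.INTER_AREA)
    bits = small[:, 1:] > small[:, :-1]        # 8x8 boolean matrix
    return np.packbits(bits).tobytes()

def fingerprint(path, every_nth=25):
    """Hash roughly one frame per second, assuming ~25 fps footage."""
    capture, hashes, index = cv2.VideoCapture(path), [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_nth == 0:
            hashes.append(frame_hash(frame))
        index += 1
    capture.release()
    return hashes

def distance(fp_a, fp_b):
    """Mean Hamming distance over the overlapping frames of two fingerprints."""
    pairs = list(zip(fp_a, fp_b))
    diffs = [bin(int.from_bytes(a, "big") ^ int.from_bytes(b, "big")).count("1")
             for a, b in pairs]
    return sum(diffs) / len(pairs)
```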

Joanne and Andrew Wood are conducting audience trials for Snippets’ radio-based features. They are planning user research to help define the requirements around how users can navigate, identify and snip radio content.

Chris Newell has started working with Denise on the use of mood metadata for making personalised recommendations. They’ve been working on how to compare the mood properties of different programmes, which are expressed as vectors. For their initial investigations they’ve been using the Euclidean distance, which has given meaningful results, but they’ll be working to improve on these in the coming weeks.
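As a tiny worked example of that comparison (with entirely made-up mood dimensions and values), the Euclidean distance between two programmes' mood vectors looks like this:

```python
# Illustration only: the mood dimensions and values below are invented.
import math

# Each programme's mood is a vector over the same set of dimensions.
programme_a = {"happy": 0.8, "tense": 0.2, "serious": 0.4}
programme_b = {"happy": 0.3, "tense": 0.7, "serious": 0.5}

def euclidean_distance(a, b):
    """Smaller distance = more similar moods."""
    return math.sqrt(sum((a[dim] - b[dim]) ** 2 for dim in a))

print(euclidean_distance(programme_a, programme_b))  # ≈ 0.71
```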

Google I/O

Here’s Gareth’s field report from the US of A.

I was lucky enough to be able to attend this year’s Google I/O. I spent the 3 days watching talks, trying code and talking to people demoing various products. There were a couple of themes I spotted in the programme that I tried to follow:

Google are getting more heavily into the Knowledge Graph. There were various talks on how YouTube are using Freebase (which Google acquired in 2010) to help automatically annotate videos by topic, as well as how you can apply Schema.org schemas to your HTML emails so that data can be extracted from them. A demo during the keynote showed how they’re improving voice interaction with Google Now to help surface information about, for example, your schedule; I imagine being able to mine structured emails would allow them to make that service an order of magnitude more useful. (On that note, the new Google Maps – which they’ve rebuilt from the ground up – is pretty sweet to use.)

I was also really interested in Google’s “Web Components” proposal, which is a set of four draft specifications geared around encapsulating web presentation and functionality. I’m not saying much about that now because I’m planning to demo some examples to the team and show how my mind was blown at the conference. I may also follow that up with a blog post on the subject.

Finally, I got to check out some third-party demos being exhibited around the conference. The Leap Motion is a great bit of kit and really responsive; we only had a simple map demo to play with, but it felt really intuitive to use. I’m skeptical about gesture interfaces for the same reason that large touchscreens aren’t taking over the world, but the Leap’s form factor definitely makes sense embedded in a laptop base. They also had an Oculus Rift hooked up to the demo, which was fun to try, but that particular demo suffered from lag, and not being able to see the sensor made it a lot trickier to navigate.

Also check out Chrome Racer – an interesting mobile version of Scalextric – and a motion-sensing skydiving demo developed by the same agency that built the Google I/O launch experiment.

I managed to get five minutes with Google Glass, but I don’t think that’s long enough to get used to it. Walking around with the HUD always there felt pretty intrusive; it would need a great application to get over that feeling (although the walking directions did look good).

Links!