Research & Development

Posted by Libby Miller

User testing, embodied prototyping, W3C Technical Plenary, Atomised News, video clip search and speech-to-text - what the IRFS team has been doing this week.

I'm writing on the train up to Salford, where I'll be technical support for Joanne as she runs user testing of "EuroMeme", our Eurovision gif-clipping synchronised second screen application. I've taught myself some basic React.js so I can tweak the code to improve performance and add some more views illustrating typical ways the application might be used. I now have the server running on a Raspberry Pi 2, to make it more portable for demonstrations and tests. We are running three sessions with three groups of people, aiming to understand how the concept fits into the TV show and whether the technical aspects are understandable.

While we're in Salford, Joanne and Malin are running a workshop with the User Experience team, who are based there. The workshop is about 'Embodied Prototyping', a method we devised in the MediaScape project based on initial work from NoTube, and which we've been gradually improving thanks to Joanne's workshops. It's used to gain a deep, thorough and cross-disciplinary understanding of the technical and user aspects of an interactive system by role-playing the devices, people and servers involved. The method is a work in progress, and this workshop aims to improve it further.

Back at IRFS base in Euston, Tristan has invented a new way of writing stories using Trello. It's a natural fit for our Atomised News project: each card is a self-contained event that can also include mixed media and link to other cards in the same section.
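To make that concrete, here's a minimal sketch of what one of those self-contained cards might look like as a data structure. The field names (headline, media, related) are illustrative assumptions, not the actual Atomised News or Trello schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: these field names are assumptions,
# not the actual Atomised News or Trello data model.
@dataclass
class StoryCard:
    card_id: str
    headline: str                                     # one self-contained event
    body: str                                         # short, standalone text
    media: List[str] = field(default_factory=list)    # image/video URLs
    related: List[str] = field(default_factory=list)  # ids of cards in the same section

def as_timeline(cards: List[StoryCard]) -> List[str]:
    """Render a section's cards as a simple chronological summary."""
    return [f"{c.headline}: {c.body}" for c in cards]
```

Because each card stands alone, the same cards could in principle be recombined into timelines, short summaries or full articles.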

The Content team has also started thinking about the best ways to test our applications with users. Barbara continued investigating how we can test native mobile apps at a larger scale, and Tristan has been exploring ways we could test the content separately from the UI and app features.

Andrew met with Alia and Maxine from the North Lab to discuss our research objectives in the Virtual Reality space - and how we could collaborate. He also attended 'With You Shortly', BBC North's first short-form festival. He has started to think about the next iteration of the Story Explorer - this time for TV - and begun desk research into appropriate BBC GEL patterns for a more white-label product approach.

Meanwhile, Chris and Olivier are in Sapporo, Japan, attending the W3C Technical Plenary (TPAC) - a week-long series of meetings for all the Working Groups and Interest Groups at the W3C. Chris and Olivier have both been attending the Web and TV Interest Group meetings; Chris has also been at the Second Screen Working Group. Olivier is also attending the Web Annotation Working Group, the Audio Working Group, the Advisory Committee and the Web of Things Interest Group. It's plenary day on Wednesday, with breakouts on video processing, AMP, policy/rights expression and HDR graphics. Beyond IRFS, two more BBC colleagues are at TPAC: Matt Paradis, chairing the Audio Working Group, and Nigel Megitt, chairing the Timed Text Working Group.

The Discovery team has been working on the pipeline of its related-topic discovery prototype, making sure it is robust enough to be demoed to our partners in production. Talking to production users like this is invaluable; they can tell us how useful the topics and techniques are for their day-to-day jobs. For instance, we already know that topics are a great tool when it comes to navigating online content, but on their own they can operate at too specific a level for a user who wants to browse something quickly. Someone looking for a great article on fashion might baulk at having to search all the topics within it, e.g. ‘london fashion week’, ‘housecoats’, ‘eyebrows’ and so on. Conversely, an article referencing the identifier ‘David Beckham’ could fit into sport or fashion or, if he’d done something particularly bad (e.g. those hairbands), even (fashion) crime. We need categories to identify the type of content we are after, so part of our work is to explore which ones work best.
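As a toy illustration of that categories problem, here's a minimal sketch assuming a hand-curated lookup from fine-grained topics to broader categories; the table and category names are invented, and the real prototype may derive them quite differently.

```python
# Toy illustration of rolling fine-grained topics up into categories.
# The lookup table is invented for this example; the prototype's real
# data and method may differ.
TOPIC_TO_CATEGORIES = {
    "london fashion week": {"fashion"},
    "housecoats": {"fashion"},
    "eyebrows": {"fashion"},
    "david beckham": {"sport", "fashion"},
}

def categories_for(topics):
    """Map a document's topics to the broader categories they suggest."""
    cats = set()
    for topic in topics:
        cats |= TOPIC_TO_CATEGORIES.get(topic.lower(), set())
    return cats

print(categories_for(["London Fashion Week", "David Beckham"]))
# -> {'fashion', 'sport'}
```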

The Data team are just finishing off the user trials of the CODAM demonstrator and technologies (CODAM is a tool for fingerprinting and matching video segments). Ben and Jana have spoken to media managers from News, Information and Archives, and Multi-platform Archives, where they've had a positive response and found a few new use cases. They hope to get further test data from these teams to explore their requirements in more depth.
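CODAM's actual algorithm isn't described here, so the following is only a generic sketch of the fingerprint-and-match idea: reduce each frame to a small fingerprint, then look for long runs of consecutive matches between two videos. The hashing, frame rate and threshold are all assumptions.

```python
import hashlib

def frame_fingerprint(frame_pixels: bytes) -> str:
    """Fingerprint one frame (assumed already downscaled and normalised)."""
    return hashlib.md5(frame_pixels).hexdigest()[:16]

def matching_segments(fps_a, fps_b, min_run=25):
    """Find runs of >= min_run consecutive matching fingerprints
    (roughly one second of video at 25 fps)."""
    index = {}
    for j, fp in enumerate(fps_b):
        index.setdefault(fp, []).append(j)
    matches = []
    for i, fp in enumerate(fps_a):
        for j in index.get(fp, []):
            run = 0
            while (i + run < len(fps_a) and j + run < len(fps_b)
                   and fps_a[i + run] == fps_b[j + run]):
                run += 1
            if run >= min_run:
                matches.append((i, j, run))  # (pos in A, pos in B, length)
    return matches
```

A real system would use perceptual fingerprints that tolerate re-encoding and small visual changes rather than exact byte hashes, but the matching structure is similar.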

They have also been specifying and ordering a multi-GPU machine to train the neural networks needed for the Kaldi speech-to-text engine. It will also come in useful for their forthcoming evaluation of the image/text recognition software from Oxford University. They have released an updated version of the Kaldi software to the News, Rewind and R&D Audio teams, and at the same time have started testing IBM's Watson Speech to Text API. Christina has finished off the packaging and interface work for the LIUM diarization toolkit, which we can plug into Kaldi too.
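To show why diarization plugs into speech-to-text, here's a sketch that labels time-stamped words from a recogniser with the speaker segments a diarization tool produces. The data formats are invented for illustration; neither Kaldi's nor LIUM's actual output looks exactly like this.

```python
# Combine diarization output (who spoke when) with recogniser output
# (which word was said when). Formats are invented for illustration.
def label_words(words, segments):
    """words: [(word, start_sec)]; segments: [(speaker, start_sec, end_sec)]."""
    labelled = []
    for word, t in words:
        speaker = next((s for s, a, b in segments if a <= t < b), "unknown")
        labelled.append((speaker, word))
    return labelled

segments = [("spk1", 0.0, 4.2), ("spk2", 4.2, 9.0)]
words = [("hello", 0.3), ("world", 1.1), ("thanks", 5.0)]
print(label_words(words, segments))
# -> [('spk1', 'hello'), ('spk1', 'world'), ('spk2', 'thanks')]
```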

And finally: heroic James worked late into the night at the weekend to migrate our internal tools from some prehistoric servers to the R&D cloud infrastructure. Thanks James!

Links this week are from Ant:

Five Years of Building Instagram 

AMP and Responsive Web Design 

Delivering high scroll performance