Posted by Libby Miller on
A hackday, a workshop, a bug, and salience versus relevance in this week's IRFS weeknotes
In the Experiences team, Libby, Tim and Joanne have been writing up the Tellybox qualitative research and design fictions as a paper. We're now thinking about how to get the best possible feedback on our nine prototypes.
For Atomised Media, Tim, Joanne and Barbara have been working on summarising the results of the user study for NewsBeat Explains. Looking further ahead, Chris has been working with Lei on the news search tool as part of the "Small, Medium, Large" video summaries concept, while Thomas has been designing the requirements and draft experience for a test to validate it. They are now going to pick up Alan’s initial build to create the minimal version we need to validate the data.
Preparation for Barbara's panel at SXSW about Atomised Media is under way!
On the Talking with Machines project, Henry, Joanne and Andrew have been preparing materials and activities for an upcoming voice user interface workshop with BBC Children's. Tom, Ant and Sacha have been helping them pilot the materials ahead of the workshop in Salford.
Tom has been experimenting with mycroft.ai, an open source effort to clone the Alexa stack.
Henry has been meeting with various people from around the BBC to talk about the work we’ve been doing so far and what the corporation could and should be doing next with voice UI, as well as recording a panel episode of the Academy podcast on the subject.
Chris has been working on rechartering the TV Control Working Group, as well as the ongoing design changes to the API itself. He has also joined a new Community Group to discuss support for wide gamut and high dynamic range colour on the Web.
Tim and Henry ran a Music hackday across two BBC sites, working with Radio and Music and Entertainment Syndication, about using acoustic and editorial metadata to build personalised playlists (pictured above).
In other work, Chris also made a few updates to the OMRI Android API for radio in smartphones, to support reporting of services found during channel scanning. Chris has released a new version of audiowaveform, to work around a bug in libstdc++ and fix a long-standing issue with encoding delays in MP3 files (see details). We're pleased that the package is now available in Arch Linux, thanks to Carey Metcalfe.
Together with all R&D's 2016 trainees, Kristian, Tim and Chris continue studying Digital Signal Processing, looking at sampling theory, the z-transform and the Fourier transform. Chris and Tim have also been preparing talks for our regular R&D engineers' meeting, Chris on SOLID principles of object-oriented design, and Tim on functional programming.
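As a small illustration of the sampling-theory material mentioned above (this is our own toy example, not part of the trainees' coursework), here's a minimal Python sketch that samples a sine wave and uses NumPy's FFT to recover its frequency from the magnitude spectrum:

```python
import numpy as np

# Sample a 5 Hz sine wave at 100 Hz for exactly one second
fs = 100                      # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)   # 100 sample instants
x = np.sin(2 * np.pi * 5 * t)

# Discrete Fourier transform of the real signal
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

# The peak of the magnitude spectrum sits at the signal frequency
peak = freqs[np.argmax(spectrum)]
print(peak)  # → 5.0
```

Because the signal period divides the one-second window exactly, the energy lands in a single FFT bin; with a non-integer number of cycles it would spread across neighbouring bins (spectral leakage), which is one of the things a sampling-theory course covers.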
This sprint the Discovery team started the migration process of our architecture onto Cosmos - the BBC’s toolset for cloud deployment. Manish, Josh, Frankie and Katie held an architecture workshop to get Josh up to speed and to help with drawing up new diagrams.
Good progress has been made on the stream builder. David and Matt iterated the design, following feedback from Olivier, by replacing auto-complete with a search interface (to find entities and feeds). It still requires a little more work and we will soon be able to share this with our colleagues across the BBC. Matt also updated the API and interface to cope with tags (all known DBpedia entities outside of our People, Place, Organisation set) to aid the migration of existing legacy streams to our intermediate production infrastructure. Olivier helped with the deployment of the new tool - mainly by scripting the migration of the old, hand-cranked “search recipes” into the new system.
Our work on the Salience algorithm has been successfully integrated into Mango and now replaces the previous Relevance score. Fionntán: "I found salience and relevance perform similarly at predicting important tags, but salience is much better at predicting when tags are unimportant. As most tags are unimportant, this is a very helpful update."
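To see why better rejection of unimportant tags matters so much when unimportant tags dominate, here's a hypothetical worked example (the numbers are purely illustrative, not from the study):

```python
# Hypothetical corpus: 100 tags, of which only 10 are important.
# Suppose both scorers correctly accept 8 of the 10 important tags,
# but "salience" correctly rejects far more of the 90 unimportant ones.
total_tags = 100
found_important = 8          # same for both scorers
relevance_rejected = 60      # unimportant tags correctly rejected
salience_rejected = 85

relevance_accuracy = (found_important + relevance_rejected) / total_tags
salience_accuracy = (found_important + salience_rejected) / total_tags
print(relevance_accuracy, salience_accuracy)  # → 0.68 0.93
```

Because the unimportant class is nine times larger, even a modest improvement in rejecting it moves overall accuracy far more than the same improvement on the important class would.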
Katie has been working with NewsLabs, CPS-Vivo, and Barbara on an internal Hackday that we are co-hosting in February. They are initially focusing on the pre-meet and plans for lightning talks to familiarise everyone with the teams and their tech.
Frankie also started exploring an additional tool to make it easier for developers to use our content analysis pipeline. One of the current difficulties is that because the analysis happens asynchronously, developers must submit a ‘callback URL’ to receive the metadata back, which means maintaining an internet-accessible server (fine for production, but an extra hurdle for quick hacks). To make this easier he's built a very simple server application which both submits URLs for analysis and saves the results into S3, enabling clients to poll S3 for the results. An open question is whether we simply make this code available for use, with good instructions, or additionally provide a ‘hosted’ version.
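The submit-then-poll pattern the tool enables looks roughly like this. A minimal sketch, assuming hypothetical `submit`/`poll` helpers; an in-memory dictionary stands in for the S3 bucket, and in the real setup the pipeline's callback (not the submit call itself) would write the result:

```python
import time
import uuid

# In-memory store standing in for the S3 bucket the tool writes to.
results_store = {}

def submit(url):
    """Hypothetical submit endpoint: returns a job id the client can poll on."""
    job_id = str(uuid.uuid4())
    # The asynchronous pipeline would call back some time later;
    # here we simulate it completing immediately.
    results_store[job_id] = {"url": url, "tags": ["example-entity"]}
    return job_id

def poll(job_id, timeout=5.0, interval=0.1):
    """Poll the store until the result appears or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if job_id in results_store:
            return results_store[job_id]
        time.sleep(interval)
    raise TimeoutError(f"no result for job {job_id}")

job = submit("https://www.bbc.co.uk/news/example")
result = poll(job)
print(result["tags"])  # → ['example-entity']
```

The appeal for quick hacks is that the client needs nothing but outbound HTTP: no publicly reachable callback server, just a loop checking whether the result object has appeared yet.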
And finally: some good news regarding Starfruit – a 3 month trial with News is currently in preparation. Starfruit is our codename for a new semantic tagging system. Unlike our earlier work in tagging and NLP, Starfruit is trained on a data set of how journalists manually tag articles. It therefore performs much better as a tag suggestion system than a more generic automated tagging engine would.
Finally: Bulletin, a bespoke internet-connected radio. Every fifteen minutes it airs a short collection of news, tweets and updates based on things Russell Davies is interested in. It looks beautiful and also uses our Radiodan platform!