Posted by Libby Miller on

This week: standardised publishing and pulling Kaldi datasets, a hackathon, evaluating offline recommendations, two Audio AR projects, and the shape of playlists.

Data Team

The Data team have been improving Kaldi in various ways - refactoring it so that it can publish and pull datasets in a standardised way. Ollie has finished training a new voice model on SyncNet and is busy writing up his tech note before the end of his placement.
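One common way to standardise publishing and pulling datasets is a checksummed manifest. This is only an illustrative sketch - the function names and manifest layout are assumptions, not the team's actual scheme:

```python
# Hypothetical sketch of a standardised dataset manifest: describe every
# file with a sha256 checksum so a consumer can pull the dataset and
# verify its integrity. Illustrative only, not the team's real format.
import hashlib
import json
from pathlib import Path

def build_manifest(dataset_dir: str, name: str, version: str) -> dict:
    """List every file in the dataset with its sha256 digest."""
    root = Path(dataset_dir)
    files = []
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            files.append({"path": str(path.relative_to(root)), "sha256": digest})
    return {"name": name, "version": version, "files": files}

# "Publishing" is then just writing the manifest alongside the data, e.g.:
# Path(dataset_dir, "manifest.json").write_text(json.dumps(manifest, indent=2))
```

A puller can re-hash each file and compare against the manifest before training on it.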

Misa has been at a week-long hackathon organised by the Turing Institute and the University of Bristol, using ML to solve real-world problems: five or six challenge owners, each with their own datasets. She worked with a group predicting protein folding patterns, and will be writing up the results with them.

Chris has finished the offline evaluation of recommendations he started last sprint. He’s using four diversity metrics, and it’s been really interesting to see how those change across different algorithms. Random and most-popular are his baselines to beat.
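One widely used diversity metric is intra-list diversity - the mean pairwise cosine distance between the items in a recommendation list. A minimal sketch with toy item vectors (the actual four metrics and item features are Chris's own, not shown here):

```python
# Sketch of intra-list diversity: mean pairwise cosine distance between
# item feature vectors in one recommendation list. Toy vectors only.
import numpy as np

def intra_list_diversity(item_vectors: np.ndarray) -> float:
    """Mean pairwise cosine distance over a recommendation list."""
    v = item_vectors / np.linalg.norm(item_vectors, axis=1, keepdims=True)
    sim = v @ v.T                    # pairwise cosine similarities
    iu = np.triu_indices(len(v), k=1)  # count each pair once
    return float(np.mean(1.0 - sim[iu]))

# A most-popular list tends to draw on similar content, so its diversity
# is typically lower than a random list's - which is why they make
# useful baselines to compare algorithms against.
popular_list = np.array([[1.0, 0.1], [0.9, 0.2], [1.0, 0.0]])
random_list = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.5]])
```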

Saba is joining us for three months as an intern from QMUL, working on multi-modal content similarity - fusing text and audio to find better recommendations or better content classification - using metadata, subtitles, descriptions, starfruit tags etc.
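One simple approach to multi-modal similarity is late fusion: normalise each modality's embedding, concatenate them with weights, and compare the results with cosine similarity. This is a generic sketch under assumed placeholder embeddings, not the intern project's actual models:

```python
# Illustrative late fusion of text and audio embeddings for content
# similarity. The embeddings and weights are placeholders.
import numpy as np

def fuse(text_emb: np.ndarray, audio_emb: np.ndarray,
         w_text: float = 0.5, w_audio: float = 0.5) -> np.ndarray:
    """L2-normalise each modality, then concatenate with weights."""
    t = text_emb / np.linalg.norm(text_emb)
    a = audio_emb / np.linalg.norm(audio_emb)
    return np.concatenate([w_text * t, w_audio * a])

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

With equal weights, two items sharing one modality but not the other land halfway between identical and unrelated, which is the intuition behind fusing the signals.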

Team Anansi

The bulk of the work this sprint has been on the two augmented audio projects: “I am not a robot” and “Searching for Nigel”, both using the Bose audio sunglasses, which play spatial audio over the top of the real-world audio you’re hearing.

“I am not a robot” is a project by Anna Nolda Nagele and Valentin Bauer from QMUL. It’s an audience choreography project - a group of people, directed to do something together. The team refined lots of ideas down to a series of games for testing user interactions, and brought in Emergency Chorus to help - they did a performance and we talked with them about ideas.

The team then built something (see picture!) in Unity. Networking was the most painful bit: four sets of glasses, each paired to a phone, with the glasses all networked together.

The latest sprint has been about user testing the four mini-games. They had no overarching narrative and so needed a bit of UX handholding, and there were network problems, but we got lots of data. People liked it, found it novel and interesting, and liked the technology itself.

The second audio AR prototype is a story we tell you as you walk around a park. It’s told as a story about a place you can’t see but can hear, which makes it good for testing voice, body gestures, and audio AR. We’re working with a theatre sound designer who has done similar work before.

The idea is that there are parallel universes, and you have to find all the parallel Nigels to go to the party, travelling between them using the glasses.

Searching for Nigel illustration

Better Internet team

Our current sprint, run by AlexR and Kristine, is called ‘The shape of playlists’ - using machine learning to understand the ingredients and flow of Sounds Music Mixes. Our goals are to interview the curators who make the Sounds mixes about how they make them, to analyse third-party playlists, and to see if we can make satisfying playlists based on the shapes of other things.

On Tuesday we met with the six curators to see if there was a way to support what they do, given the various requirements and considerations for building their mixes. These are very complex: they include a number of rules, such as when music was last played on the networks and the length of mixes, alongside artistic considerations, such as the flow of a mix, retaining interest, and having a journey through it. We got hold of a few hundred of their mix sound files, and Kristine and Tim are figuring out how to analyse them.
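One very simple way to start looking at the "shape" of a mix is its energy envelope over time. This toy sketch computes a root-mean-square envelope on a synthetic signal - the team's actual analysis of the mix files may well use richer features:

```python
# Toy sketch: the RMS energy envelope of an audio signal, one value per
# non-overlapping frame. A mix that builds and then fades would show a
# rising-then-falling envelope. Synthetic data only.
import numpy as np

def energy_envelope(signal: np.ndarray, frame_len: int = 2048) -> np.ndarray:
    """Root-mean-square energy per non-overlapping frame."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt(np.mean(frames ** 2, axis=1))
```

Plotting this envelope for a few hundred mixes would give a first, crude picture of their shared shapes.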

We also finished the Media Session API prototype changes and have useful feedback for the working group and for Sounds. Chris is currently preparing for TPAC, the W3C’s all-groups meeting.

Finally, we have been writing our manifesto, to create a set of values for evaluating projects against our research interests and ethical concerns, as well as a set of decision-making criteria for topics and projects. We started by using Tricyder to identify everyone’s interests - anyone could suggest a clause for the manifesto and then vote on it. Then we did a collaborative writing exercise to flesh out the components. We’re nearly there!

This post is part of the Internet Research and Future Services section