Posted by Anthony Onumonu on
In our previous issue of weeknotes, we mentioned that we had carried out some external user testing of our still-to-be-appropriately-titled experience 'The Next Episode'. Our objective for the test was to evaluate the overall proposition. To carry out the analysis we decided to use the Wizard of Oz method from the field of Human-Computer Interaction: users interact with parts of a system they believe to be autonomous, when in fact those parts are operated by a team member.
Our system consists of three parts: a story runner, a story operator, and a mobile web app. The runner plays the story audio and dispatches story events to the mobile app via WebSockets. The mobile app displays these events as text messages in a custom interface, and the user can send messages back. All messages are consumed by the story operator, which provides a log of a user's journey through the story. The operator also serves as a controller for specific decision points in the story. These points required sentiment analysis functionality, which we did not implement in order to save time; instead it was simulated by the wizard, in our case a team member. To keep the test realistic, the participants were not told about the parts operated by us. We conducted the tests in a lab with a one-way mirror so we could observe how each user interacted with the application. We could have enhanced the experience by making it possible to send text messages from the operator to the mobile app, perhaps an emoji in response.
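To make the message flow concrete, here is a minimal sketch of the runner/operator side in Python. The event shape, the field names, and the `OperatorLog` and `choose_branch` helpers are illustrative assumptions rather than our actual implementation; in the real system the events travel over WebSockets, which is omitted here.

```python
import json

def make_story_event(event_type, payload, story_time):
    # Hypothetical event shape dispatched by the story runner
    # to the mobile app; field names are assumptions.
    return json.dumps({
        "type": event_type,    # e.g. "message" or "choice"
        "payload": payload,    # text shown in the chat interface
        "time": story_time,    # seconds into the story audio
    })

class OperatorLog:
    """Consumes every message in both directions (runner -> app
    and app -> runner) and records the user's journey path."""
    def __init__(self):
        self.journey = []

    def consume(self, sender, raw_message):
        event = json.loads(raw_message)
        self.journey.append((sender, event["type"], event["payload"]))

# A Wizard-of-Oz decision point: instead of running sentiment
# analysis, the operator (the "wizard") judges the user's reply
# by hand and picks the branch the story should take next.
def choose_branch(operator_judgement):
    return "positive_path" if operator_judgement == "positive" else "alternate_path"
```

In use, the operator interface would feed every message through `OperatorLog.consume` and, at each decision point, call `choose_branch` with the wizard's judgement of the user's reply.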
Feedback from the users has been positive. We did have one failure on the operator side, when I accidentally sent a user down the wrong path after hearing their response. Our Dorothy moment came when one of the participants asked if there was someone behind the mirror. We are currently analysing the results and will come back with a report at a later date.
Emma and Oscar continued with their Digital Signal Processing training and also met with teams in the North Lab to explore potential projects for their next rotation on the R&D Graduate Scheme.
Nicky has been giving some talks overseas on the Talking with Machines work, first presenting at Radio Days Europe in Lausanne, where she discussed interactive audio alongside people from The Guardian Voice Labs and the Financial Times. She then headed over to Ireland for Hearsay, the International Audio Arts festival, where she demoed R&D's voice experiences and discussed our upcoming interactive audio work.
Ben has been experimenting with the Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) developed by Google, in the context of online speaker segmentation. He has also ported the facial recognition system into our pipeline, benchmarked it, and written up documentation for it.
Misa and Alexandros have been working on integrating the VoxCeleb2 speaker embedding into our pipeline. It was published by the Visual Geometry Group at the University of Oxford and is trained on ~10⁶ utterances from ~6×10³ identities. They completed the work, and it significantly improved the performance of the speaker identification system.
Tim gave a talk in New York at Theorizing the Web, an annual conference that considers the interrelationships between the Web and society.
This post is part of the Internet Research and Future Services section