Posted by Matt Haynes
Latest sprint notes from the IRFS team: our new Alexa skill is released, the team gets arty at Somerset House, plus much more.
This sprint saw the release of our Alexa skill The Unfortunates, now available on the Alexa skill store and on BBC Taster. Much of our time this sprint was spent gearing up for the release: getting Taster images and copy ready, and getting the relevant approvals.
Tim, Henry and Jakub were asked by the London Music Hackspace to assist with the creation of an installation at Somerset House Studios entitled ‘Climatotherapy’, by artists Nozomu Matsumoto and Nile Koetting, which featured an Amazon Alexa periodically giving out robotic health and well-being advice from within a futuristic therapy room. The installation was part of the ‘Assembly’ festival at Somerset House, which featured lots of interesting work at the intersection of technology and sound art, and grew out of our earlier ‘Singing with Machines’ work investigating how smart speakers could be used for musical performance.
Chris has begun building a dataset reflecting people’s engagement with and emotional response to news articles, based on the emoticons used with BBC News articles on social media channels. The dataset will be used to build a classifier to predict the emotional response, which can then be used to explore issues around well-being.
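One way such a dataset could be assembled is by collapsing per-article reaction counts into coarse emotion labels that a classifier can be trained against. The sketch below is purely illustrative: the reaction names and emotion categories are assumptions, not the actual taxonomy used by the team.

```python
from collections import Counter

# Hypothetical mapping from social-media reactions to coarse emotion
# labels; the real project's categories may well differ.
REACTION_TO_EMOTION = {
    "like": "positive",
    "love": "positive",
    "haha": "amusement",
    "wow": "surprise",
    "sad": "sadness",
    "angry": "anger",
}

def emotion_profile(reaction_counts):
    """Collapse per-article reaction counts into emotion-label counts,
    ignoring any reactions the mapping does not cover."""
    profile = Counter()
    for reaction, count in reaction_counts.items():
        label = REACTION_TO_EMOTION.get(reaction)
        if label is not None:
            profile[label] += count
    return dict(profile)

# Example: an article drawing mostly sad and angry reactions
print(emotion_profile({"like": 12, "sad": 40, "angry": 25}))
```

Profiles like these, paired with article text, would form the labelled examples needed to train an emotion classifier.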
The Data team have been testing the impact of adding new words and pronunciations to the Speech to Text system. They have been reusing pronunciations from the BBC’s Pronunciation Unit database and have increased overall system accuracy by adding many new proper nouns and in some places improving the recorded pronunciation for existing words.
Along the way the team had to map between the BBC Pronunciation Unit’s ‘Modified Spelling’ system and the ARPABET system commonly used in STT. They came across some issues in the initial mapping and have been diagnosing where the bugs lie by building frequency distributions to identify mispronounced words.
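The debugging approach described above can be sketched as follows. The symbol table here is a tiny invented fragment, not the Pronunciation Unit’s actual mapping: the idea is simply that any symbols the map fails to cover are collected, and a frequency distribution over those gaps points at the most common sources of mispronunciation.

```python
from collections import Counter

# Illustrative fragment of a Modified Spelling -> ARPABET symbol map;
# the real mapping is far larger and these pairings are assumptions.
MODIFIED_TO_ARPABET = {
    "k": "K",
    "ee": "IY",
    "sh": "SH",
    "uh": "AH",
    "n": "N",
}

def to_arpabet(modified_symbols):
    """Convert a Modified Spelling pronunciation to ARPABET symbols,
    collecting any symbols the mapping does not cover."""
    arpabet, unmapped = [], []
    for sym in modified_symbols:
        if sym in MODIFIED_TO_ARPABET:
            arpabet.append(MODIFIED_TO_ARPABET[sym])
        else:
            unmapped.append(sym)
    return arpabet, unmapped

# Build a frequency distribution of unmapped symbols across a (toy) lexicon.
lexicon = {
    "cushion": ["k", "uu", "sh", "uh", "n"],  # "uu" is missing from the map
    "keen": ["k", "ee", "n"],
}
gaps = Counter()
for word, pron in lexicon.items():
    _, unmapped = to_arpabet(pron)
    gaps.update(unmapped)
print(gaps.most_common())  # most frequent unmapped symbols first
```

Ranking the gaps by frequency lets the most impactful mapping bugs be fixed first.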
Alicia has been continuing to test Tellybox on different browsers and fix bugs; Ant and Chris have been setting up the infrastructure to deploy the prototypes in public, while Libby has been doing the necessary paperwork.
Chris met with Nigel Earnshaw to plan our next steps on the Open Screen Protocol, focusing on security and authentication, and made some changes towards finalising the use case and requirements document for media timed events. We expect to be able to start spec development soon, and also to tighten up the timing requirements for text track cue events in HTML. Chris also met with Phil Lee from WS2020 and Richard Ishida, who leads W3C’s internationalisation activities, to talk about issues of language support on the web.
Chris is helping the TDA organise its next peer review meeting, with the Cognitus project. The TDA group also had a workshop last Wednesday afternoon to go through topics such as the engineering principles it is defining, the agenda for peer review meetings, and which workstreams should be in scope.
- Chris has released a new version of audiowaveform, which supports creating multi-channel waveform files.
- Tristan’s wrapping up the newnews project - a phone interview, some analytics, and a wrap-up blog post.
- Barbara is working with Laura and Maxine preparing a survey for women in R&D as part of the BBC’s Diversity and Inclusion strategy.
- Barbara also spent some of her 10% time on server setup (Apache, Let’s Encrypt, WebRTC) with Tim, Libby and Chris’ help. Quote: “It’s hard!! I feel for our engineers!”
- Henry worked with Tim to set up the installation at Somerset House as part of the talking/singing with machines work. He also attended the R&D branding workshop.
- Libby and Vinoba had a very interesting meeting with Bill Gaver and the Interaction Research Studio at Goldsmiths. They made a timelapse of it, using their mynaturewatch homemade animal camera.
This post is part of the Internet Research and Future Services section