Posted by Tim Cowlishaw
In June, Jakub Fiala and I had the amazing opportunity to show some of our work at Sonar+D, the technology and creativity conference attached to the Sonar Festival in Barcelona. It was a chance to present at a venue very different from the places we usually appear, and to gather feedback from a very diverse audience.
Sonar Festival has been running for 25 years, and is a major event in Barcelona’s cultural calendar. Split over two sites and three days, it attracts performers and festival-goers from around the world, and showcases some of the most exciting new music being produced today. Sonar+D has been running alongside it for the last five years, offering a forum for collaboration between artists, businesses, and research institutions investigating and imagining the future of creativity. We were privileged to be invited to give a talk on the work of BBC R&D’s Internet Research and Future Services section, and also to deliver a version of our ‘Singing with Machines’ workshop, teaching musicians and sound artists to use ‘smart speaker’ devices in new and imaginative ways.
Our talk centred on the role of creativity in our work - how, in the service of our mission to imagine the BBC services of the five-to-ten-year future, we use a strategy of ‘operationalised weirdness’ to imagine and prototype the widest possible range of futures and respond to them in an innovative manner. This shapes the things we make, the way in which we make them, and the people we make them with. More specifically, we focus on ‘half-resolution’ prototypes (often physical objects rather than on-screen prototypes) which express a single idea or feature as fully as possible, rather than striving for completeness of functionality. We find that this allows us both to explore the possibilities afforded by the technologies we’re working with more comprehensively, and to produce prototypes which help us elicit specific, actionable feedback on the ideas we’re testing from the widest variety of people.
The people we work with to achieve this are also key to our approach. By working participatively with a great variety of potential users or collaborators (and focusing specifically on diversity and on examining use cases which might currently be considered to be edge-cases or ‘outliers’), we are able to imagine future use cases and opportunities which might otherwise have gone unexamined.
One great example of this is our ‘Singing with Machines’ project, which was the focus of our second session at Sonar. A spin-off of our Talking With Machines work, which aims to investigate new experiences using ‘smart speaker’ devices such as the Amazon Echo and Google Home, ‘Singing with Machines’ is concerned specifically with how these devices can be used for new ways of creating, enjoying, and disseminating music and sound art. We’ve been working collaboratively with musicians and sound artists to explore this, most notably our recent workshop with Wesley Goatley, Natalie Sharp, Graham Dunning and Lia Mice, as well as a collaboration with Lizzie Wilson and Jorge del Bosque, two students on Queen Mary University’s Media and Arts Technology PhD programme.
This work has two main purposes. Firstly, we believe that smart speakers provide an exciting platform for innovative and accessible new musical experiences, through their connectedness, growing ubiquity, comparatively low cost, reasonable sound quality and opportunities for interaction. The BBC has been at the forefront of innovation in sound and music since the Radiophonic Workshop, and we hope to continue this tradition in some small way with our present work. Secondly, we believe that this work has more far-reaching positive effects on our other projects using smart-speaker devices; in particular, more creative and speculative work such as this can act as a material exploration of the technologies and platforms on which these devices operate, giving us insights into how they work which can be more generally useful.
At Sonar, we condensed our findings from our work to date into a three-hour workshop for sound artists and musicians, introducing them to the technologies and tools used to create interactive voice experiences, and allowing them to explore how these devices could be used within their own practice. It was great to present our approach to a lively and creative community of people with whom we’d previously been unfamiliar. Participants came away enthused and informed, and generated lots of interesting, creative ideas for uses of this technology that we hadn’t anticipated - allowing us to broaden our own exploration of the possibilities of the medium. We have gained many new ideas and opportunities for future work and collaboration, for ‘Singing With Machines’ and other projects, and we look forward to sharing these developments with you in the near future!
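To give a flavour of the kind of building block such a workshop covers - this is not the workshop's actual material, just an illustrative sketch - interactive voice experiences on devices like the Amazon Echo are typically driven by a web service that returns a structured response containing speech markup (SSML), which can also embed audio clips. Below is a minimal, hypothetical Python helper that constructs an Alexa-style skill response playing a sound after a spoken introduction; the audio URL is a placeholder.

```python
def build_audio_response(text, audio_url, end_session=True):
    """Build a JSON-serialisable, Alexa-style skill response whose SSML
    output speaks `text` and then plays the clip at `audio_url`.
    (Illustrative sketch only - names and structure are our assumption
    of a typical smart-speaker response, not code from the workshop.)"""
    # SSML lets a spoken phrase and an embedded audio clip share one response.
    ssml = f'<speak>{text}<audio src="{audio_url}"/></speak>'
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            "shouldEndSession": end_session,
        },
    }

# Example: introduce and play a (hypothetical) clip made in a session.
response = build_audio_response(
    "Here is a short piece made in the workshop.",
    "https://example.com/clip.mp3",
)
```

In practice a handler like this would sit behind a cloud function and be invoked by the device platform whenever a user speaks to the skill; the interesting creative territory the workshop explored lies in what you put inside that response.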
This post is part of the Internet Research and Future Services section