Posted by Frank Melchior
Last month we invited programme makers, technologists and researchers to come together in Broadcasting House, London, to discuss, experience and explore the future of sound in broadcasting. The event was hosted by the sparkling and inquisitive LJ Rich from BBC Click - one cannot think of anyone better suited to the job.
Sound: Now and Next was the logical next step after four successful years of the BBC Audio Research Partnership. With this, the first of BBC R&D’s research partnerships, now maturing, it is important for us to enable our audience to benefit from the new technologies and research on sound. We therefore brought programme makers, academics and technologists together at Broadcasting House to experience the latest technologies and outcomes of our research projects through a technology fair and various listening demonstrations. This four-minute video will give you a taste of the event.
After an energetic, motivating and inspiring kick-off to the event from Andy Conroy (Controller, BBC R&D) and Alan Davey (Controller, BBC Radio 3), Chris Watson’s keynote took us to the origins of silence and quietness, bringing us some of the most astonishing sounds one can imagine. His talk about ‘A Journey South’ demonstrated how to capture sounds that most of us will never hear in real life, such as the inside of a glacier or under the Antarctic sea. Some of these sounds revealed surprising similarities to early computer music. Listen to them yourself as part of Chris’s talk here.
On the first day, we started the four themed sessions with live broadcast sound. This session provided a lot of practical insights from three professionals working on sound for large-scale events. After an introduction to microphone techniques and some general thoughts on immersive audio from Bill Whiston, Nuno Duarte impressed the audience by explaining the challenges and logistics of providing sound for the Olympic Games. Given the statistics and the years of advance planning involved, it became clear that such events only work by building on the experience of the professionals who came before. Or, in Nuno’s words: “We cannot decide where to put the audience microphones because there is no roof to hang them from yet, [however] we have to order the microphone cable for Rio now”. I believe the key message to all our researchers is that we have to start doing now what we think might be coming next for the audience, because it will take one or two cycles to make new technology work at the required scale.
BBC Radio 1’s Andy Rogers took us through an event of a different type. Mud and MADI were the two themes of the history of BBC live broadcast from Glastonbury. Besides the technological developments in BBC outside broadcasting, which were fascinating in themselves, the relationship between the artist and the broadcaster was a non-technological but highly relevant theme in his talk. Pressure from the music industry means that most artists desire a sound that is very close to the stereo album. The BBC helps to build trust with the artists by installing familiar studio equipment in its outside broadcast vans. However, the challenge remains: if the broadcaster can offer new technologies like 3D sound or even surround sound but the music industry is not on board, how can these be successfully applied in high-pressure environments like live broadcast?
On the other hand there are always artists who are on the forefront of technology. During the breaks, one of the most popular demo topics was virtual reality (VR) technology combined with binaural audio. BBC R&D has had the chance to work with Björk, amongst others, on her latest VR music video. This 360-degree video exploits the strengths of head-tracked VR technology and demonstrates very impressively how producers and musicians can develop new forms of creative expression when working with novel technology.
For the lovers of classical music, we presented a recording made in collaboration between the BBC Philharmonic orchestra and BBC R&D, demonstrating what it might look and sound like to sit in the middle of an orchestra. Applications like these could be highly valuable for educational purposes but are another example of where we can potentially deliver experiences to our audience that are very hard to get in real life.
Binaural audio was also the big overarching theme in our session on immersive sound. Martyn Harries started with some reflections on how to define immersion, beyond technology – especially underlining the difference between immersive and enveloping sound. A sound system can be enveloping, but an immersive experience can be delivered in mono with touching stories that capture one’s imagination.
Isabel Platthaus and Achim Fell then showed an excellent example of how immersion and envelopment can work together. Their project ‘39’ is a binaural radio drama which has a free smartphone app to complement the experience. The app is an audio-only game, enabling the listener to explore additional parts of the environment where the story takes place, while also offering a different perspective on the story as it unfolds. All in all, it was a nice example of cross-media, immersive and massively engaging storytelling. It is maybe also worth mentioning that the app was top-rated in the app store, which has helped promote the radio drama further. As Isabel put it, “We all have to move with radio, beyond radio”.
The session was rounded off by Varun Nair, who brought us back to the remaining technological challenges in delivering binaural audio to a mass audience. However, I believe it is fair to say that some of the key elements, such as tracking, sufficient processing power per listener and good headphones, are becoming more and more widespread.
Nick Ryan kicked off day two of Sound: Now and Next. Exploring the point where creativity meets technology, he took us on a ride into reactive music, synaesthesia and finally reminded the audience that we should not forget that sound is magic. I couldn’t agree more.
This is especially true when the sound interacts with its environment and, even better, is responsive to the listener. The first session of the day therefore took us into the world of reactive and interactive sound. Steve Jackson gave some fascinating insights into the early years, when interactive games were played using dial-operated telephones and the sound of creatures in the games had to be designed using cabbages and swordplay.
Jumping to projects which might be next in radio, Werner Bleisteiner elaborated on the concept of a multi-dimensional radio play: binaural, multi-lingual and built on the concepts of object-based audio, demonstrating four key potentials: personalisation, immersion, interactivity and accessibility.
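To make the object-based idea more concrete, here is a minimal, purely illustrative sketch (the class and function names are my own, not from any of the projects mentioned): each audio object carries metadata, and a renderer selects and adapts objects per listener instead of delivering one fixed mix. This is how personalisation and accessibility fall out of the same mechanism.

```python
from dataclasses import dataclass, replace
from typing import Optional

# Hypothetical sketch of object-based audio: objects carry metadata,
# and the "mix" is assembled per listener at playback time.

@dataclass(frozen=True)
class AudioObject:
    name: str
    language: Optional[str]  # None = language-neutral (music, effects)
    role: str                # e.g. "dialogue", "effects", "music"
    gain_db: float = 0.0

def personalise(objects, listener_language, dialogue_boost_db=0.0):
    """Select objects for one listener: keep language-neutral objects,
    keep dialogue only in the listener's language, and optionally
    boost dialogue level for accessibility."""
    mix = []
    for obj in objects:
        if obj.language not in (None, listener_language):
            continue  # drop dialogue in other languages
        if obj.role == "dialogue":
            obj = replace(obj, gain_db=obj.gain_db + dialogue_boost_db)
        mix.append(obj)
    return mix

scene = [
    AudioObject("narrator_en", "en", "dialogue"),
    AudioObject("narrator_de", "de", "dialogue"),
    AudioObject("atmosphere", None, "effects"),
]
mix = personalise(scene, "en", dialogue_boost_db=6.0)
print([o.name for o in mix])  # -> ['narrator_en', 'atmosphere']
```

The same metadata could equally drive interactivity (the listener changes objects mid-play) or immersion (positional metadata feeding a binaural renderer).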
Finally, our own Matthew Brooks revealed the magic behind a radio programme whose length can be varied individually by each audience member. See his full talk here or a quick intro from last year’s IBC here.
After the session, we had another round of demos which included:
• Dolby on immersive audio for the home
• Fairlight and their multi-format spatial audio authoring system
• Blue Ripple Sound and their HOA based solutions for games and VR
• DTS with new codecs and headphone reproduction
• Fraunhofer with MPEG-H based solutions and VR sound
• BBC R&D’s live demonstration of object-based audio over a real-time link to our new reference listening room in Salford. The details of this demo will be available in a separate blog post from Robert Wadge and his colleagues soon. But sound-wise it ultimately revealed what a giant elephant-camel with webbed feet might sound like.
The last session of these two packed days looked into new production tools. For the final kick, Tim Exile explained and, even more impressively, demonstrated “Flow Machine Two” - his live musical improvisation tool. He also shared insights into human computer interaction (HCI) for live, creative environments, which is highly relevant for our colleagues working on audio editing, and video editing too.
Mark Boas then took us on a journey through his projects which showed how linking semantic audio and editing tools enables impressive editing and retrieval capabilities. Please watch his talk and discover the power of a text editor which automatically edits the audio and video files in the background. You can also find out more in this blog post.
The last talk of the day, and the event, was delivered by Jörn Loviscach, who helped us all see beyond traditional waveform displays and opened our eyes to much more user-friendly and useful forms of displaying audio content for editing. Having seen this last session, the key question from the panel afterwards has stuck in my mind: Why is this still not part of every audio editor?
All in all, two days packed with talks and experiences to provoke thoughts and imagination on what might be next, and reflection on where we are now – the starting point to delivering the next generation of audio to you, our audience.