Posted by Henry Cooke
In R&D right now, a few of us are becoming very interested in interactive spatial audio and sound-led augmented reality. In part, this builds on previous work in R&D – the excellent work by our colleagues on binaural audio, for instance, or The Turning Forest (which started life as a sound-only experience). It’s also a response to trends we see in the hardware market - the arrival of consumer-level playback and recording hardware like Bose Frames, Sennheiser AMBEO and Google Buds suggests that 3D audio and smarter headsets are on the rise. I had a demo of the Frames at SXSW last year, was very impressed by the sound quality and got excited about the possibilities presented by the form factor – and I wasn’t the only one.
I’ve had a long-standing bugbear with visual AR as it exists so far (delivered, in the main, by phones and tablets): it imposes itself on the world, forcing a screen between you and what you can see. That screen is usually a lot smaller than your field of view, so you end up viewing the augmentations through a fairly small window. It’s not particularly immersive, and it diverts your attention to the screen rather than your surroundings - it’s supposed to augment, but ends up becoming the focus.
To me, this seems to short-sell the promise of AR - that we can add a layer that blends into and enhances your experience of the world.
In some ways, audio is a much better medium for this. Ambience delivered through headphones really does complement what you’re seeing, without getting in the way, and all the rendering is done in the participant’s imagination. Which is much better and cheaper than any visuals we can generate - theatre of the mind and all that.
Doing this inside the BBC is interesting because we can view it as a creative inheritor of radio - and so tap into all the production skill and experience available in the organisation.
Some very early prototyping - and past experience - point to sound walks as something which could work very well on an audio AR device, mixing ambient, spatial and geolocated audio elements with a participant’s aural perception of the world. So, this will be the focus of our research for the next little while.
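To make the geolocated part of that mix concrete, here’s a minimal sketch of one way a sound walk might decide which audio cues to play: compare the listener’s GPS position against a list of cue points, each with a trigger radius. Everything here - the coordinates, radii and clip names - is invented for illustration, not taken from any real production.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geolocated cues: (lat, lon, trigger radius in metres, clip name)
CUES = [
    (51.5196, -0.0600, 25, "street_ambience"),
    (51.5230, -0.0555, 15, "narration_02"),
]

def cues_in_range(lat, lon):
    """Return the clip names whose trigger zone contains the listener."""
    return [name for (clat, clon, radius, name) in CUES
            if haversine_m(lat, lon, clat, clon) <= radius]
```

In a real experience the triggered clips would then be mixed with any ambient and spatialised layers already playing; this sketch only covers the “am I standing in the right place?” step.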
In our next post, we’ll go for a walk around Whitechapel, London with Janet Cardiff’s The Missing Voice (Case Study B).
This post is part of the Internet Research and Future Services section