Posted by BBC Research and Development
In our first post on audio-led AR, R&D's Henry Cooke discussed why we’re interested in the technology. After going on a soundwalk, we decided that an interesting next step would be to port our existing geolocated audio experiment, Alluvial Sharawadji, to our Bose Frames devkit. Here, Jakub Fiala - until recently a Software Engineer at BBC R&D - describes the port and what he learned by making it.
Alluvial Sharawadji is a “crowdsourced soundwalk” piece made for the Eufónic festival in Catalonia in the summer of 2018. For the piece, Tim & I built a mobile-friendly web app that let participants record sounds, each saved along with the participant’s geolocation at the moment of recording. We then used Google Street View and the Web Audio API to present a “virtual soundwalk” around the town.
The Sharawadji soundwalks are represented as remotely stored sound files with associated latitude and longitude coordinates. The coordinate mapping and the sound files are downloaded when the soundwalk starts.
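To make that concrete, here is a minimal sketch of what the stored representation and the initial download might look like. The manifest field names are our own invention for illustration, not the production format:

```ts
// Hypothetical manifest entry: one remotely stored sound file plus the
// coordinates it is anchored to.
interface SoundwalkEntry {
  url: string; // where the audio file lives
  lat: number; // latitude in degrees
  lng: number; // longitude in degrees
}

// Fetch the coordinate mapping, then download and decode every sound
// file up front, so the walk can begin with all audio in memory.
async function loadSoundwalk(
  manifestUrl: string,
  ctx: AudioContext
): Promise<{ entry: SoundwalkEntry; buffer: AudioBuffer }[]> {
  const entries: SoundwalkEntry[] = await (await fetch(manifestUrl)).json();
  return Promise.all(
    entries.map(async (entry) => {
      const data = await (await fetch(entry.url)).arrayBuffer();
      return { entry, buffer: await ctx.decodeAudioData(data) };
    })
  );
}
```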
For this rebuild, we use an iPhone’s GPS to position sounds around the listener, and calculate the volume level of each sound based on its distance from the listener. We then use the Resonance Audio (Google VR) library to render the sounds spatially in real time, giving us a dynamic soundscape that changes in response to the listener’s position in the real world.
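As a sketch of how those pieces fit together, the positioning step might look like the following, using the web build of Resonance Audio for illustration (the library ships untyped, hence the ambient declaration). The equirectangular projection and the axis mapping are our own simplification, not necessarily what the demo does:

```ts
// A rough sketch, not BBC R&D's actual code. Assumes the web build of
// Google's Resonance Audio library is loaded as a global.
declare const ResonanceAudio: any;

const ctx = new AudioContext();
const resonance = new ResonanceAudio(ctx);
resonance.output.connect(ctx.destination);

const EARTH_RADIUS_M = 6371000;
const DEG = Math.PI / 180;

// Project the offset from listener to sound into flat east/north metres.
// An equirectangular approximation is fine at soundwalk scale; one degree
// of longitude shrinks by cos(latitude) away from the equator.
function offsetMetres(
  listener: { lat: number; lng: number },
  sound: { lat: number; lng: number }
): { x: number; z: number } {
  const x =
    (sound.lng - listener.lng) * DEG * Math.cos(listener.lat * DEG) * EARTH_RADIUS_M;
  // Resonance uses an OpenGL-style frame (x east, y up, -z forward), so
  // we map north onto -z here.
  const z = -(sound.lat - listener.lat) * DEG * EARTH_RADIUS_M;
  return { x, z };
}

// One spatialised source per recording, re-positioned on every GPS fix.
const placed: { coords: { lat: number; lng: number }; source: any }[] = [];

navigator.geolocation.watchPosition((fix) => {
  const listener = { lat: fix.coords.latitude, lng: fix.coords.longitude };
  for (const { coords, source } of placed) {
    const { x, z } = offsetMetres(listener, coords);
    source.setPosition(x, 1.5, z); // place sounds at roughly head height
  }
});
```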
The first soundwalk we tested was composed of sounds recorded by various members of our team with their phones around our office in White City, London. Most of these were fairly quiet ambient recordings, full of traffic, construction noise and passers-by. At this point, we were using the “rolloff” mechanism from the Resonance Audio library – “rolloff” being the gradual attenuation of a sound’s volume as the listener moves away from its source.
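In the web build of Resonance Audio, that mechanism is configured per source, along these lines. The distance values are illustrative only, and we reuse the `ctx` and `resonance` objects from the sketch above:

```ts
// Attach one recording to a spatialised source, leaving distance
// attenuation ("rolloff") to the library's built-in logarithmic curve.
function placeSound(buffer: AudioBuffer): any {
  const source = resonance.createSource();
  source.setRolloff('logarithmic'); // the library's default curve
  source.setMinDistance(1);   // rolloff operates between these two
  source.setMaxDistance(500); // distances (values illustrative)

  const player = ctx.createBufferSource();
  player.buffer = buffer; // a decoded recording from the manifest
  player.loop = true;     // ambient beds loop indefinitely
  player.connect(source.input);
  player.start();
  return source;
}
```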
We found that ambient sounds, especially ones recorded in the same environment where they’re played back, tend to be drowned out by real-world noise, and so don’t provide enough immersion. The built-in rolloff mechanism in Resonance Audio also proved unsuitable for our coordinate system: one degree of longitude in London corresponds to roughly 69 km (43 miles), so a listener walking tens of metres moves through only a few ten-thousandths of a degree. Fed raw coordinates, the library saw almost no change in distance, and there was no perceptible rolloff effect at all.
For the second test, we “placed” three pieces of music at roughly 150-metre intervals along Wood Lane. We also ported the Inverse Square Law-based volume attenuation function from the Sharawadji webapp into our demo and switched off the Resonance Audio rolloff. The custom attenuation worked very well, producing well-localised sounds: at about 5-10 m from a sound’s location we could hear a faint hint of the music, and by following the direction it seemed to come from, we arrived at the spot where it played loudest.
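We don’t reproduce the webapp’s exact attenuation function here, but an inverse-square gain stage along these lines captures the approach, computing distance in real-world metres rather than degrees and driving the source gain manually. It reuses names from the earlier sketches, and the reference distance is an arbitrary choice:

```ts
// Great-circle distance in metres between two lat/lng points (haversine),
// so attenuation works in real-world units rather than degrees.
function distanceMetres(
  a: { lat: number; lng: number },
  b: { lat: number; lng: number }
): number {
  const dLat = (b.lat - a.lat) * DEG;
  const dLng = (b.lng - a.lng) * DEG;
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(a.lat * DEG) * Math.cos(b.lat * DEG) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
}

// Inverse Square Law: perceived intensity falls with the square of the
// distance. Clamp so gain never exceeds 1 inside the reference radius.
function inverseSquareGain(distance: number, refDistance = 1): number {
  return (refDistance / Math.max(distance, refDistance)) ** 2;
}

// Per GPS fix: bypass the library's rolloff and set the gain ourselves.
source.setRolloff('none');
source.setGain(inverseSquareGain(distanceMetres(listener, coords)));
```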
We are very excited about the possibilities of audio AR technology, and we’re keen to run more experiments with it. From our tests so far it seems that with the right sound material and adjustments to the placement and attenuation of sounds, it could prove an interesting platform for new localised sound experiences.