Posted by Chris Baume
It's been a full year since our last Audio Research Partnership update, and a lot has been happening. In addition to securing £5.4m from the EPSRC for the S3A project, new research has started on diffuse audio objects, audio visualisation, personalised compression and automatic podcast creation. Below is a selection of updates from the partners on some of the things that have been going on.
BBC R&D are hiring!
BBC Research and Development are looking to recruit a talented R&D engineer to join their audio research team. This is an excellent opportunity to contribute to developing the next generation of broadcast audio. You can find out more and apply by following this link. Applications close on 13th May.
Audio production conference
Following the success of the annual partnership event, BBC R&D are currently organising a two-day audio technology conference, focussing on broadcast production. It will take place at BBC Broadcasting House in London on 19th and 20th May 2015. Save the date and we will send out more information shortly.
S3A: Future Spatial Audio for Immersive Listener Experience at Home
The S3A five-year EPSRC Programme Grant, worth £5.4 million, was launched in December 2013. It is a major new UK research collaboration between internationally leading experts in 3D audio and visual processing at the Universities of Surrey, Salford and Southampton, together with BBC R&D and UK industry.
S3A's goal is to enable listeners to experience the sense of "being there" at a live event, such as a concert or football match, from the comfort of their living room through delivery of immersive 3D sound to the home using object-based content delivery. Although visual content in film, TV and games has undergone a transformation from 2D to 3D in the past decade, enabling greater visual engagement and ultimately higher cinema box office sales, the development of 3D audio content is less evolved. Existing technology does not meet the needs of the majority of consumers, who want to experience live events without having to set up loudspeakers in precise locations around their living rooms or control their room acoustics.
The partnership aims to unlock the creative potential of 3D sound to provide immersive experiences to the general public at home or on the move. S3A will pioneer a radical new listener-centred approach to 3D sound production that can dynamically adapt to the listener's environment and location to create a sense of immersion. Current 3D sound systems rely upon fixed loudspeaker arrangements and acoustically treated rooms that are not practical for home use. S3A will change the way audio is produced and delivered to enable practical, high-quality 3D sound reproduction based on listener perception.
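Object-based delivery sends each sound as a separate signal plus metadata (such as its position), leaving the final mix to a renderer that knows the listener's setup. As a rough illustration only (the function and its constant-power stereo panner are our own sketch, not S3A's rendering method), an object renderer might look like:

```python
import numpy as np

def render_objects(objects, n_samples):
    """Mix audio objects to stereo using constant-power panning.

    `objects` is a list of (signal, azimuth) pairs, where azimuth is in
    degrees (-90 = hard left, +90 = hard right).  Illustrative sketch of
    object-based rendering only, not S3A's actual system.
    """
    out = np.zeros((n_samples, 2))
    for signal, azimuth in objects:
        # Map azimuth to a pan angle in [0, pi/2]; cos/sin gains keep
        # total power constant as the object moves across the image.
        theta = (azimuth + 90.0) / 180.0 * (np.pi / 2)
        gain_l, gain_r = np.cos(theta), np.sin(theta)
        out[: len(signal), 0] += gain_l * signal
        out[: len(signal), 1] += gain_r * signal
    return out

# A centred object splits its energy equally between the two channels.
sig = np.ones(4)
mix = render_objects([(sig, 0.0)], 4)
```

Because the mix is made at the receiver, the same objects could equally be rendered to headphones, a soundbar or an irregular room layout by swapping the panning law.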
New research project on diffuse audio objects
Many components of an auditory scene may be effectively described as sound objects that represent simple, point-like sound sources with precise spatial location. This model may not apply to other elements of the sound scene, such as the sound field generated by a multitude of distributed sources (rain, traffic noise, etc.) or, as another example, the reverberant field of a closed environment (concert hall, room, etc.). In the context of audio production, broadcasting and reproduction, these sound entities may be represented as diffuse audio objects. But what exactly is a diffuse sound object? What is the minimum amount of information required to describe it accurately (in a psychoacoustical sense) but efficiently (in signal processing terms)? How can an audio object be recorded, transmitted and reproduced?
In order to answer these questions, the University of Southampton and the BBC have embarked on a new 3.5-year research project with the aim of developing strategies to capture, represent and reproduce diffuse audio objects. The project is being undertaken in the framework of the EPSRC "Industrial CASE" scheme and is jointly supported by the BBC and the Engineering and Physical Sciences Research Council. Michael Cousins started his PhD project on this topic in March, under the supervision of Dr Filippo Fazi and Dr Frank Melchior.
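One candidate answer to the representation question, offered purely as an illustrative sketch (this approach is our own assumption, not the project's method), is to transmit a single signal flagged as diffuse and decorrelate it across loudspeakers at the receiver, for example by randomising the spectral phase of each copy:

```python
import numpy as np

def decorrelate(signal, n_channels, seed=0):
    """Spread one mono signal over `n_channels` mutually decorrelated
    copies by randomising each copy's spectral phase.  This keeps the
    transmitted data compact (one signal plus a 'diffuse' flag) while
    avoiding the point-source impression a single loudspeaker gives.
    Illustrative only; not the project's actual representation.
    """
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(signal)
    out = np.empty((n_channels, len(signal)))
    for ch in range(n_channels):
        phases = np.exp(1j * rng.uniform(0, 2 * np.pi, len(spectrum)))
        phases[0] = 1.0   # keep the DC bin real
        phases[-1] = 1.0  # keep the Nyquist bin real (even-length signal)
        out[ch] = np.fft.irfft(spectrum * phases, n=len(signal))
    return out

# Example: spread a burst of noise over four loudspeaker feeds.
mono = np.random.default_rng(1).standard_normal(1024)
feeds = decorrelate(mono, 4)
```

Because only phase is altered, each feed preserves the magnitude spectrum, and hence the energy, of the original signal.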
Good Recording Project
We have a new web experiment running about the perception of handling noise, called Donk! The test experiments with moving away from more common psychoacoustic tests on quality, where listeners have a sole focus on listening for defects. Instead, people listen to a speech podcast and answer questions about the content, so that their attention is directed towards the foreground speech. Listeners only judge the effects of the handling noise when it is sufficient to distract them from the speech task. The test is running both in the lab and online to compare results.
We've also launched an iPhone recording app, which alongside the normal VU meter includes a meter to warn recordists about wind noise. The wind noise detector uses decision trees to detect wind noise on the microphone signal being recorded. The team are now looking to publish the machine learning method as an open source toolbox for use by others.
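The published method itself isn't reproduced here, but the idea behind decision-tree wind detection can be sketched: extract simple spectral features per frame and let the tree threshold them. The feature, threshold and depth-one "tree" below are all illustrative assumptions, not the team's actual detector:

```python
import numpy as np

def lf_energy_ratio(frame, sample_rate=44100, cutoff_hz=100.0):
    """Fraction of a frame's spectral energy below `cutoff_hz`.

    Wind noise on a microphone is dominated by low-frequency energy,
    so this single feature already separates many windy frames.
    """
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = spectrum.sum()
    return spectrum[freqs < cutoff_hz].sum() / total if total > 0 else 0.0

def is_windy(frame, sample_rate=44100, threshold=0.5):
    # A decision "tree" of depth one (a stump); a real detector would
    # learn deeper trees over several features from labelled recordings.
    return lf_energy_ratio(frame, sample_rate) > threshold

# Synthetic check: a 20 Hz rumble versus a 1 kHz tone, 0.1 s frames.
t = np.arange(4410) / 44100.0
rumble = np.sin(2 * np.pi * 20 * t)
tone = np.sin(2 * np.pi * 1000 * t)
```

Running the classifier per frame is what lets an app-style meter warn the recordist in real time rather than after the take.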
Enhanced Acoustic Modelling for Auralisation using Hybrid Boundary Integral Methods
This EPSRC-funded project is ongoing at the University of Salford, in collaboration with the Department of Applied Mathematics at the University of Reading, and progress is being made on the development of novel acoustic prediction models. In addition, post-doctoral researcher Dr Jonathan Hargreaves presented some new ideas developed in the project to a spatial audio audience at the recent EAA Joint Symposium on Auralization and Ambisonics in Berlin. His paper, entitled "An energy interpretation of the Kirchhoff-Helmholtz boundary integral equation and its application to sound field synthesis", was selected as one of ten papers to be included in a special edition of Acta Acustica united with Acustica, which will be available later this year.
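For readers unfamiliar with it, the Kirchhoff-Helmholtz boundary integral equation named in the paper's title relates the pressure at a point inside a source-free volume to the pressure and its normal derivative on the enclosing surface S (sign and normal-direction conventions vary between texts):

$$
p(\mathbf{x}) = \oint_S \left[ G(\mathbf{x}|\mathbf{y})\,\frac{\partial p(\mathbf{y})}{\partial n} - p(\mathbf{y})\,\frac{\partial G(\mathbf{x}|\mathbf{y})}{\partial n} \right] \mathrm{d}S(\mathbf{y})
$$

where $G(\mathbf{x}|\mathbf{y}) = e^{-jk|\mathbf{x}-\mathbf{y}|}/(4\pi|\mathbf{x}-\mathbf{y}|)$ is the free-field Green's function. Sound field synthesis exploits the fact that driving a surface of secondary sources according to these boundary terms can reconstruct a target field inside the volume.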
ICoSOLE test shoot
Last month the ICoSOLE project gathered in Salford for a test shoot to capture source material. Ten musicians from the BBC Philharmonic orchestra performed for us in their rehearsal studio at Dock10 studios in Manchester. More information about the research and the project can be found in this blog post, and this article from BBC News.
This post is part of the Immersive and Interactive Content section