Posted by Tom Nixon
Back in June 2018 we reported on the release of the EBU ADM Renderer (EAR), which is specified in EBU Tech 3388. This is a system for rendering the types of content defined by the Audio Definition Model (ADM) to any specified loudspeaker layout.
The EAR is part of our wider efforts to standardise open formats for working with so-called 'next-generation audio' (NGA), which aims to make audio experiences more accessible and immersive. In addition to the ADM as a way of representing NGA content, a renderer (such as the EAR) is an important piece of the puzzle: it defines what the parameters in the format mean in terms of the signals that are played out of the loudspeakers.
In July, the ITU published Recommendation ITU-R BS.2127-0, "Audio Definition Model renderer for advanced sound systems", which is based on the EAR algorithm. The ITU ADM Renderer also comes with a Python implementation, which lets developers try out the algorithm easily. While the Python implementation is great for experimental work, for understanding the algorithm and for validating ADM files, it can't be used for real-time applications. We have therefore developed a C++ library containing the core EAR functionality.
We developed libear in collaboration with IRT, as part of our work within the EBU. The library is available under the permissive Apache 2.0 licence. libear contains just the core parts of the EAR: the calculation of loudspeaker gains and some DSP components. We recommend pairing it with libbw64 and libadm (both developed by IRT) in applications which also need to read, write and process ADM content.
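To give a flavour of how the library is used, here's a minimal sketch of calculating panning gains for a single ADM object; it reflects our understanding of the public API (ear::getLayout, ear::GainCalculatorObjects and friends), so check the documentation for the exact signatures and the loudspeaker layouts available:

```cpp
#include <iostream>
#include <vector>
#include <ear/ear.hpp>

int main() {
    // Look up a standard BS.2051 loudspeaker layout by name.
    ear::Layout layout = ear::getLayout("0+5+0");

    // Gain calculator for object-based content on that layout.
    ear::GainCalculatorObjects gainCalc(layout);

    // Metadata for one object: straight ahead, at unit distance.
    ear::ObjectsTypeMetadata otm;
    otm.position = ear::PolarPosition(0.0, 0.0, 1.0);

    // One gain per loudspeaker; applying these to the object's
    // mono signal produces the loudspeaker feeds.
    std::vector<float> gains(layout.channels().size());
    gainCalc.calculate(otm, gains);

    for (float g : gains)
        std::cout << g << " ";
    std::cout << "\n";
}
```

Keeping the gain calculation separate from the audio processing like this means a host application can apply the gains in its own processing chain, using the library's DSP components for interpolation only where it needs them.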
There are several potential applications for a library like libear. It could be included in a Digital Audio Workstation (DAW) to render NGA content, either integrated directly or as a suite of plugins. It could be built into a stand-alone ADM monitoring system, or used to render ADM content to legacy formats before emission.
The current release of the library supports channel-based, scene-based and object-based audio, though some parameters are not yet supported; full details can be found in the documentation. The API is already complete, so there's no need to wait before starting integration work.
We'll continue to work on libear in the coming months, implementing the missing features and responding to feedback from users. We've got a few projects using libear already; we'll publish more details when they are released.