Google has announced an immersive audio software development kit called Resonance Audio. A tool that brings scalable performance for high-fidelity 3D sound in 360-degree video, VR, and AR, Resonance Audio is built to work on both mobile and desktop, and is compatible across a variety of platforms, from game engines like Unity to the web. It’s even available as a standalone VST plugin, meaning it can be used within multiple audio creation programs.
Delivering quality 3D audio in VR has historically been a challenge because the audio is interactive, changing based on both the movements of objects and the movements of the user. It’s a burden on hardware and requires lots of on-the-fly rendering, but Google says Resonance Audio is lightweight enough to deliver high-fidelity 3D audio in VR even on your phone.
3D audio is an important factor in believability when it comes to feeling immersed in a virtual world. Think about how even a walk down your block presents hundreds of sounds that give you a sense of space: a bird chirping above your head, a car approaching, zooming by your side, and then trailing off in the distance, a crack coming from below when you step on a twig, the rustle of leaves around your legs as wind pushes them around. It would be pretty odd if all those noises came at you just from the right and left at ear level, wouldn’t it? That’s what 3D audio solves.
Resonance Audio allows for spatialization of hundreds of sounds at once, without compromising quality
To accomplish this, Google’s Resonance Audio uses a full-sphere surround sound technique called Ambisonics combined with head-related transfer functions (HRTFs), filters that model how a sound is shaped by your head and outer ears before it reaches your eardrums. Together, they trick your brain into assigning position and distance to sounds, even over headphones, which have only two outputs for audio. They do this by replicating the cues our brains already use to perceive and place sounds in real life: the difference in time it takes a sound to reach your left and right ears, the difference in volume between the two ears, and the way a sound’s frequency content differs between them. All of these variances in how the left and right ears hear a sound let us judge things like distance, height, and where a sound is originating from.
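The timing cue in particular can be approximated with a simple spherical-head model. The sketch below is an illustration of the idea, not Resonance Audio’s actual implementation; the head radius is an assumed average:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average human head radius, in meters
SPEED_OF_SOUND = 343.0   # meters per second in air at roughly room temperature

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate ITD in seconds using Woodworth's spherical-head model.

    azimuth_deg: source angle off the median plane
    (0 = straight ahead, 90 = directly to one side).
    """
    theta = math.radians(azimuth_deg)
    # Extra path to the far ear: a straight-line segment plus the arc
    # the wavefront travels around the head.
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)

# A source directly to one side arrives about 0.66 ms earlier at the near ear.
print(f"{interaural_time_difference(90) * 1000:.2f} ms")
```

Sub-millisecond differences like this are enough for the brain to lateralize a sound, which is why a spatializer has to render them precisely.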
Resonance Audio uses “highly optimized digital signal processing algorithms” based on Ambisonics that allow for spatialization of hundreds of sounds at once without compromising quality. Among the tricks it uses to accomplish this are automatic rendering of near-field effects when a sound source gets close to the listener, and, in Unity, pre-rendering of CPU-heavy reverb that matches the acoustic properties of a room’s materials (marble pillars, for example, or the interior of a wood cabin).
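The efficiency argument behind an Ambisonics pipeline can be sketched in a few lines: each source is encoded into a shared first-order B-format bed with just four multiplies, and the expensive HRTF decode runs once on the summed bed rather than once per source. This is an illustrative sketch, not Resonance Audio’s code (which supports higher orders); the FuMa-style W scaling is one common convention:

```python
import math

def encode_first_order(sample: float, azimuth_deg: float, elevation_deg: float):
    """Encode one mono sample into first-order Ambisonics B-format (W, X, Y, Z).

    Per-source cost is only four multiplies; the costly binaural (HRTF)
    decode is applied once to the summed bed, not once per source, which
    is how Ambisonics-based engines scale to hundreds of sounds.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample * (1 / math.sqrt(2))           # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)  # front-back
    y = sample * math.sin(az) * math.cos(el)  # left-right
    z = sample * math.sin(el)                 # up-down
    return w, x, y, z

# Mix two sources into one shared bed; a decoder would then render
# the four bed channels to headphones in a single pass.
bed = [0.0, 0.0, 0.0, 0.0]
for sample, az, el in [(0.5, 30, 0), (0.25, -90, 45)]:
    for i, ch in enumerate(encode_first_order(sample, az, el)):
        bed[i] += ch
```

Adding a hundredth source only adds another cheap encode; the decode cost stays flat.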
Google says Resonance Audio is also more powerful than traditional 3D spatialization because it can control not only where a sound comes from, but also how it spreads out from its point of origin. In the example below, Resonance Audio is adjusting a guitar’s direction of sound, the shape in which the sound emanates, and how wide the sound travels. This means that if the guitar is strummed, it should sound different depending on whether you’re standing in front of it or behind it, and whether you’re facing it or turned away.
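Source directivity like this is commonly modeled by blending an omnidirectional pattern with a figure-eight dipole and raising the result to a sharpening exponent. The sketch below is a generic version of that idea, not Resonance Audio’s exact code; the parameter names and defaults are assumptions:

```python
import math

def directivity_gain(theta_deg: float, alpha: float = 0.5,
                     sharpness: float = 1.0) -> float:
    """Gain heard at angle theta from the direction the source is facing.

    alpha blends omnidirectional (0.0) toward a figure-eight dipole (1.0);
    alpha = 0.5 yields a cardioid. sharpness narrows the lobe.
    """
    theta = math.radians(theta_deg)
    return abs((1.0 - alpha) + alpha * math.cos(theta)) ** sharpness

# A cardioid-like guitar body: full level in front, fading to near
# silence directly behind.
front = directivity_gain(0)      # full level facing the source
behind = directivity_gain(180)   # effectively silent behind it
```

Sweeping `alpha` and `sharpness` is one simple way to get the “shape” and “width” controls the guitar demo shows.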
Should Resonance Audio prove to be as processor-friendly as Google claims, it could mean wider adoption of 3D audio to go along with the ever-growing library of VR and 360-degree experiences. At the very least, the fact that it’s available for every platform, from Android to iOS to Linux, means it could be an attractive option for game developers, audio engineers, musicians, and more to give it a shot. If you’re interested in checking out Resonance Audio for yourself, it’s available on Google’s developers website.