Nemisindo awarded an EPIC MegaGrant for MetaSounds Plus

Nemisindo (from the Zulu word for sound effects) is a spin-out company from our research group, offering sound design services based around procedural audio technology. Nemisindo has been fairly quiet recently, developing new products and delivering some contract work, but expect to hear a lot from them in the near future, starting with this announcement.

A couple of years ago, Nemisindo was awarded a MegaGrant from Epic Games to support their contributions to the Unreal community. Among other things, it resulted in the delivery of several procedural audio plug-ins for the Unreal game engine: the Action Pack, the Nature Pack, and Adaptive Footsteps.

Nemisindo is very pleased to announce that it has been awarded a new Epic MegaGrant, titled MetaSounds Plus. MetaSounds is the Unreal Engine’s new high-performance procedural audio system. It gives sound designers complete control over a digital signal processing (DSP) graph for the generation of sound sources. MetaSounds has the potential to be a game-changer in the industry. It is a fascinating technology that combines the best aspects of audio-oriented graphical programming languages (such as Max/MSP and Pure Data) with the Unreal Engine workflow, allowing game developers to take their first steps into the world of procedural audio in an intuitive and seamless fashion.
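
To make the idea of procedural audio a little more concrete, here is a minimal sketch in Python (purely for illustration; this is not the MetaSounds API or any Nemisindo code): a tiny three-node DSP graph, consisting of a noise source, a slowly modulated one-pole low-pass filter and a gain stage, that synthesises a simple wind-like sound from scratch rather than playing back a recording.

```python
# A minimal, hypothetical sketch of procedural audio, NOT the MetaSounds API:
# a three-node DSP graph (noise source -> modulated low-pass filter -> gain)
# that generates a wind-like sound from scratch and writes it to a WAV file.
import wave

import numpy as np

SAMPLE_RATE = 48000          # samples per second
DURATION_S = 5.0             # length of the rendered sound
n = int(SAMPLE_RATE * DURATION_S)

# Node 1: white noise source.
rng = np.random.default_rng(seed=0)
noise = rng.uniform(-1.0, 1.0, n)

# Node 2: one-pole low-pass filter whose cutoff drifts slowly between
# roughly 50 Hz and 550 Hz, giving a simple gusting character.
t = np.arange(n) / SAMPLE_RATE
cutoff_hz = 300.0 + 250.0 * np.sin(2.0 * np.pi * 0.2 * t)
alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / SAMPLE_RATE)

filtered = np.zeros(n)
y = 0.0
for i in range(n):
    y += alpha[i] * (noise[i] - y)   # y[i] = y[i-1] + alpha * (x[i] - y[i-1])
    filtered[i] = y

# Node 3: gain stage, normalising to a comfortable level.
out = 0.8 * filtered / np.max(np.abs(filtered))

# Render the graph's output to 16-bit mono PCM.
with wave.open("wind.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes((out * 32767.0).astype(np.int16).tobytes())
```

In MetaSounds the same idea would be expressed visually, by wiring equivalent nodes together in the graph editor rather than writing code, which is what makes it so approachable for sound designers.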

The MetaSounds Plus project will greatly enhance Unreal Engine’s MetaSounds by delivering a suite of powerful nodes and assets. This benefits the Unreal community by providing developers with rich, intuitive tools to achieve great game sound design.

Here’s a video giving an overview of our proposal for the MetaSounds Plus project.

Nemisindo will soon be hiring an audio software engineer to support this project, so keep an eye out for more announcements. And keep an eye (and ear) out for some great new releases.

As always, if you’re interested in knowing more, please get in touch.

C4DM at recent Audio Engineering Society Conferences

Featuring contributions from Dave Moffat and Brecht De Man

As you might know, or can guess, we’re heavily involved in the Audio Engineering Society, which is the foremost professional organisation in this field. We had a big impact at two of their recent conferences.

The 60th AES Conference on Dereverberation and Reverberation of Audio, Music and Speech took place in Leuven, Belgium, 3-5 February 2016. The conference was based around a European-funded project of the same name (DREAMS – http://www.dreams-itn.eu/) and aimed to bring together expertise in reverberation and reverberation removal.

The conference started out with a fantastic overview by Vesa Välimäki of reverberation technology and how it has progressed over the past 50 years. The day then went on to present work on object-based coding and reverberation, and computational dereverberation techniques.

Day two started with Thomas Brand discussing sound spatialisation and how participants are much more tolerant of reverberation in binaural listening conditions. Further work was then presented on physical modelling approaches to reverberation simulation, user perception, and spatialisation of audio in the binaural context.

Day three began with Emanuël Habets presenting on the past 50 years of reverberation removal, noting that for as long as we have been modelling reverberation, we have also been trying to remove it from audio signals. Work was then presented on multichannel dereverberation and computational sound field modelling techniques.

The Audio Engineering group from C4DM were there in strength, presenting two papers and a demo session. David Moffat presented work on the impact dereverberation can make when combined with state-of-the-art source separation technologies. Emmanouil Theofanis Chourdakis presented a hybrid machine learning model that can intelligently apply reverberation to an audio track. Brecht De Man presented his latest research on the analysis of studio mixing practices, both as part of the demo session and in a plenary lecture, focusing on the perception of reverberation in multitrack mixes.

The following week was the AES Audio for Games conference in London. This was the fifth conference in the series, and we’ve been involved since its inception in 2009. C4DM researchers Dave Moffat, Will Wilkinson and Christian Heinrichs all presented work related to sound synthesis and procedural audio, which is becoming a big focus of our efforts (more to come!).

Brecht De Man put together an excellent report of the conference, where you can find out a lot more.