Nemisindo launches procedural audio plugins for Unreal Engine

Nemisindo is a spin-out company from our research group, offering sound design services based around procedural audio technology. Back in August we blogged about the launch of Nemisindo’s online service at https://nemisindo.com. Now, Nemisindo has a new launch, targeted specifically at game developers.

The Nemisindo team is pleased to introduce fully procedural audio plugins for Unreal Engine: the Nemisindo Action Pack. Nemisindo have brought our state-of-the-art models to Epic Games’ renowned game engine, enabling true procedural sound effects generation within Unreal projects.

Procedural audio refers to the real-time synthesis of sounds depending on specific input parameters. Much like how a landscape can be procedurally generated based on certain inputs, like “elevation”, “variation”, or “biome type”, a helicopter sound can be procedurally generated based on parameters like “rotor speed”, “engine volume”, or “blade length”. Procedural audio is the next generation of sound technology that creates realistic immersive soundscapes that are fully interactive, adaptive and dynamic.
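
To make the idea concrete, here’s a minimal sketch in Python. To be clear, this is not Nemisindo’s technology or the Unreal plugin, and the parameter names are invented for the example; it just shows how a rotor-like sound can be computed purely from parameters rather than played back from a recording.

```python
# Toy procedural audio example: a rotor-like sound generated entirely from
# parameters, with no recorded samples. Not Nemisindo's model; the parameter
# names here are made up for illustration.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, lfilter

def rotor_sound(rotor_speed_hz=6.0, num_blades=4, engine_level=0.3,
                duration_s=3.0, sr=44100):
    t = np.arange(int(duration_s * sr)) / sr
    # Low-pass filtered noise stands in for turbulence around the blades
    noise = np.random.randn(len(t))
    b, a = butter(2, 800 / (sr / 2), btype="low")
    noise = lfilter(b, a, noise)
    # Blade-pass frequency = rotor speed x number of blades; modulating the
    # noise at this rate gives the characteristic helicopter "chop"
    blade_pass = rotor_speed_hz * num_blades
    chop = (0.5 * (1 + np.cos(2 * np.pi * blade_pass * t))) ** 3
    # A low sine adds a crude engine hum
    engine = engine_level * np.sin(2 * np.pi * 55 * t)
    out = noise * chop + engine
    return (out / np.max(np.abs(out))).astype(np.float32)

wavfile.write("rotor.wav", 44100, rotor_sound(rotor_speed_hz=8.0))
```

The point is that every parameter can be changed at render time, so the sound can respond continuously to what is happening in the game, rather than being fixed at the moment of recording.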

The Nemisindo Action Pack includes 11 different sound classes: Explosion, Gunshot, Rifle, Helicopter, Jet, Propeller, Rocket, Alarm, Alert, Siren, and Fire. Each sound class can generate audio in real-time and comes with built-in presets for popular settings (such as ‘bomb’ and ‘thud’ for the Explosion model, or ‘emergency vehicle horn’ for the Siren model). The Nemisindo Action Pack plugin enables Unreal developers to:

  • Design sound exactly how you want it – directly inside the Unreal Editor
  • Link model parameters to in-game events via Blueprints
  • Add any model to any actor, instantly turning the actor into a sound generator
  • Easily implement adaptive audio without external software dependencies
  • Reduce disk space with zero-reliance on sound samples

The Nemisindo Action Pack is available in the Unreal Marketplace at: 

https://www.unrealengine.com/marketplace/product/nemisindo-action-pack .

And here’s a short video introducing the Action Pack and its features:

There’s another great video about it by The Sound Effects Guy here (we don’t know him personally and didn’t pay him for it or anything like that).

Nemisindo’s mission is to generate any sound effect procedurally, doing away with the need for sample libraries. Nemisindo’s technology can generate entire auditory worlds, create soundscapes based on sounds that have never been heard before, or enable every object in a VR/AR world to generate sounds that can adapt to changing conditions. These sound effects can also be shaped and crafted at the point of creation rather than via post-processing, breaking through the limitations of sampled sounds.

Named after the Zulu for “sounds/noise”, Nemisindo is an Epic MegaGrant recipient, awarded to support their contribution to procedural audio in the Unreal community.

Pitter-patter and tip-toe – will you do a footstep listening test?

Footstep sounds are one of the most widely used sound effects in film, TV and game sound design.
Great footstep sound effects are often needed, from the creeping, ominous footsteps in a horror film to the thud and clunk of an armoured soldier going into battle in a sci-fi action game.

But it’s not easy. As Andy Farnell pointed out in Designing Sound (which has a whole chapter on footstep synthesis), there are lots of issues with using recorded footstep samples in games. Some early games would use just one sample, making a character sound like he or she had two left (or two right) feet.
To get more realistic variation, you need several different samples for each character, for each foot, for each surface, at different paces. That quickly adds up to hundreds of stored footstep samples, and even then, repetition becomes a problem.

We have a procedural model for generating footstep sounds without the use of recorded samples at nemisindo.com: see https://nemisindo.com/models/footsteps.html.

And we have also been looking at a new approach to footstep synthesis, based on multi-layer neural networks.
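
For the curious, here’s a very rough sketch of what such a model can look like. This is not the network used in our study; it’s just an illustration (in PyTorch) of the general idea of a multi-layer network mapping a few control parameters and a noise seed to a short footstep waveform.

```python
# Illustrative sketch only: a small multi-layer network that maps footstep
# control parameters plus a noise vector to a short waveform. Not the model
# from the study; the parameters and sizes are invented for the example.
import torch
import torch.nn as nn

class FootstepMLP(nn.Module):
    def __init__(self, n_params=3, n_noise=64, n_samples=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params + n_noise, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, n_samples), nn.Tanh(),  # waveform in [-1, 1]
        )

    def forward(self, params, noise):
        return self.net(torch.cat([params, noise], dim=-1))

model = FootstepMLP()
params = torch.tensor([[0.8, 0.3, 0.5]])  # e.g. surface hardness, weight, pace
noise = torch.randn(1, 64)                # different noise -> a different step
waveform = model(params, noise)           # (1, 2048) audio snippet
```

The noise input is what keeps consecutive steps from sounding identical, which is exactly the repetition problem that sample playback struggles with.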

To investigate this, we have prepared a listening test comparing several different footstep synthesis approaches, as well as real recordings. The study consists of a short multi-stimulus listening test, preceded by a simple questionnaire. It takes place entirely online from your own computer. All that is needed to participate is:
• A computer with an internet connection and modern browser
• A pair of headphones
• No history of hearing loss
The duration of the study is roughly 10 minutes. We are very grateful for any responses. The study is accessible here: http://webprojects.eecs.qmul.ac.uk/mc309/FootEval/test.html?url=tests/ape_footsteps.xml

If you have any questions or feedback, please feel free to email Marco Comunità at m.comunita@qmul.ac.uk

Faders, an online collaborative digital audio workstation

Since you’re reading this blog, you probably saw the recent announcement about Nemisindo (https://nemisindo.com), our new start-up offering sound design services based on procedural audio technology. This entry is about another start-up in the audio space arising from academic research: Faders (https://faders.io). We aren’t involved in Faders, but we know the team behind it very well, and have worked with them on other projects.

Faders is a spin-out from Birmingham City University: an online, collaborative digital audio workstation (DAW) that’s free to use and has a range of intelligent features.

Faders is built around the idea of removing all barriers in audio production, without hiding the complexity or dumbing down the interface. The result is a user-friendly, powerful and smart DAW that helps you by labelling tracks, doing rough mixes, trimming silence from recordings, and more. It is also a collaborative web-based platform that can accommodate cutting-edge processors which don’t have to fit the single-track “plugin” model of traditional DAWs.

Synth and FX plugins are based on the open-source JSAP framework (https://github.com/nickjillings/JSAP), which we contributed to [1,2] and have talked about before. A Freesound browser (https://freesound.org/) is also integrated. Some of the tech in this platform is based on about a decade of academic research.

They’re looking for people to try it out and give feedback, and they have a forum at https://community.faders.io

They are especially keen to work with educators and will be releasing an education mode where one can share files, follow up on student projects, and even teach remotely with video, all inside Faders. They’re also exploring all kinds of partnerships, from software instruments and plugin development to cross-promotion.

[1] N. Jillings, Y. Wang, R. Stables and J. D. Reiss, “Intelligent audio plugin framework for the Web Audio API,” Web Audio Conference, London, 2017.

[2] N. Jillings, Y. Wang, J. D. Reiss and R. Stables, “JSAP: A Plugin Standard for the Web Audio API with Intelligent Functionality,” 141st Audio Engineering Society Convention, Los Angeles, USA, 2016.


Nemisindo, our new spin-out, launches online sound design service

We haven’t done a lot of blogging recently, but for a good reason: there’s an inverse relationship between how often we post blog entries and how busy we are trying to do something interesting. Now that we’ve done it, we can talk about it, and today we can launch it!

Procedural audio is a big area of research for us, which we have discussed in previous blog entries about aeroacoustics, whistles, swinging swords, propellers and thunder. This is sound synthesis, but with some additional requirements. It’s usually intended for use in interactive content (games), so it needs to generate sound in real-time and adapt to changing inputs.

There are some existing efforts to offer procedural audio. However, they usually focus on a few specific sounds, which means sound designers still need sound effect libraries for most sound effects. And some efforts still involve manipulating sound samples, which means they aren’t truly procedural. But if you can create any sound effect, then you can do away with the sample libraries (almost) entirely, and procedurally generate entire auditory worlds.

And we’ve created a company that aims to do just that. Nemisindo, named after the Zulu for “sounds/noise”, offer sound design services based on their innovative procedural audio technology. They are launching a new online service, https://nemisindo.com, that allows users to create sound effects for games, film and VR without the need for vast libraries of sounds.

The following video gives a taste of the technology and the range of services they offer.

Nemisindo’s new platform provides a browser-based service with tools to create sounds from over 70 classes (engines, footsteps, explosions…) and over 700 preselected settings (diesel generator engine, motorbike, Jetsons jet…). It can be used to create almost any sound effect from scratch, and in real-time, based on intuitive controls guided by the user.

If someone wants a ‘whoosh’ sound for their game, or footsteps, gunshots, a raging fire or a gentle summer shower, they just tell the system what they’re looking for and adjust the sound while it’s being created. And unlike other technologies that simply use pre-recorded sounds, Nemisindo’s platform generates sounds that have never been recorded: a dragon roaring, for instance, light sabres swinging, or space cannons firing. These sound effects can also be shaped and crafted at the point of creation by the user, breaking through the limitations of sampled sounds.

Nemisindo has already caught the attention of Epic Games, with the spin-out receiving an Epic MegaGrant to develop procedural audio for the Unreal game engine.

The new service from Nemisindo launches today (18 August 2021) and can be accessed at nemisindo.com. For the first month, Nemisindo is offering a free trial period allowing registered users to download sounds for free. After the trial period ends, the system is still free to use, but sounds can be downloaded at a low individual cost or with a paid monthly subscription.

We encourage you to register and check it out.

The Nemisindo team can be reached at info@nemisindo.com .

Death metal, green dance music, and Olympic sound design

This is an unusual blog entry, in that the three topics in the title, death metal, green dance music, and Olympic sound design, have very little in common. But they are all activities that the team here have been involved with recently, outside of our normal research, which are worth mentioning.

Angeliki Mourgela, whose work has been described in previous blog entries on hearing loss simulation and online listening tests is also a sound engineer and death metal musician. Her band, Unmother, has just released an album, and you can check it out on Bandcamp.

Eva Fineberg is a Masters student doing a project on improved thunder simulation, building on some work we did which showed that none of the existing thunder synthesis models were very good. Eva is one of the leaders of Berlin’s Clean Scene, a collective of industry professionals focused on making dance music greener. They have been investigating the environmental impacts of touring. They recently released a report, Last Night a DJ Took a Flight: Exploring the carbon footprint of touring DJs and looking towards alternative futures within the dance music industry, which found a rather stunning environmental impact from touring DJs. But it also went further and gave many recommendations to reduce this impact. It’s good to see initiatives like this in the music industry that bring research and action together.

Finally, I was asked to write an article for The Conversation about sound design in the Olympics. A quick search showed that there were quite a few pieces written about this, but they all focused on the artificial crowd noise. That’s of course the big story, but I managed to find a different angle. Looking back, the modern Olympics that perhaps did the most to revolutionise sound design was… the 1964 Olympics in Tokyo. The technical aspects of the sound engineering involved were published in the July 1965 issue of the Journal of the Audio Engineering Society. So there’s a good story there on innovation in sound design, from Tokyo to Tokyo. The article, 3,600 microphones and counting: how the sound of the Olympics is created, was published just as I started writing this blog entry.

The crack of thunder

Lightning, copyright James Insogna, 2011

The gaming, film and virtual reality industries rely heavily on recorded samples for sound design. This has inherent limitations since the sound is fixed from the point of recording, leading to drawbacks such as repetition, storage requirements, and a lack of perceptually relevant controls.

Procedural audio offers a more flexible approach by allowing the parameters of a sound to be altered and sound to be generated from first principles. A natural choice for procedural audio is environmental sounds. They occur widely in creative industries content, and are notoriously difficult to capture. On-location sounds often cannot be used due to recording issues and unwanted background sounds, yet recordings from sample libraries are rarely a good match to an environmental scene.

Thunder, in particular, is highly relevant. It provides a sense of the environment and location, but can also be used to supplement the narrative and heighten the tension or foreboding in a scene. There are a fair number of methods to simulate thunder, but no one has ever actually sat down and evaluated these models. That’s what we did in:

J. D. Reiss, H. E. Tez and R. Selfridge, ‘A comparative perceptual evaluation of thunder synthesis techniques,’ to appear at the 150th Audio Engineering Society Convention, 2021.

We looked at all the thunder synthesis models we could find, and in the end were able to compare five models and a recording of real thunder in a listening test. And here’s the key result:

This was surprising. None of the methods sound very close to the real thing. It didn’t matter whether it was a physical model, which type of physical modelling approach was used, or whether an entirely signal-based approach was applied. And yet there are plenty of other sounds where procedural audio can sound indistinguishable from the real thing; see our previous blog posts on applause and footsteps.

We also played around with the code. It’s clear that the methods could be improved. For instance, they all produced mono sounds (so we used a mono recording for comparison too), the physical models could be much, much faster, and most of the models used very simplistic approximations of lightning. So there’s a really nice PhD topic for someone to work on one day.
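
To give a flavour of how simple the signal-based family of approaches can be, here’s a toy Python sketch. It is emphatically not one of the five models we evaluated; it just shapes noise into a sharp crack followed by a low-frequency rumble.

```python
# Toy, signal-based thunder-like burst: shaped white noise for the initial
# crack plus low-pass filtered noise for the rumble. A simplification for
# illustration, not one of the evaluated synthesis models.
import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import wavfile

sr = 44100
t = np.arange(int(4.0 * sr)) / sr

# Sharp crack: white noise with a fast exponential decay
crack = np.random.randn(len(t)) * np.exp(-t / 0.08)

# Rumble: low-pass filtered noise with a much slower decay
b, a = butter(4, 120 / (sr / 2), btype="low")
rumble = lfilter(b, a, np.random.randn(len(t))) * np.exp(-t / 1.5)

thunder = crack + 8.0 * rumble
thunder /= np.max(np.abs(thunder))
wavfile.write("toy_thunder.wav", sr, thunder.astype(np.float32))
```

Real thunder is far richer than this (multiple strokes, tortuous lightning channels, atmospheric propagation), which is exactly why the published models go much further, and why improving them is such an attractive research problem.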

Besides showing the limitations of the current models, it also showed the need for better evaluation in sound synthesis research, and the benefits of making code and data available for others. On that note, we put the paper and all the relevant code, data, sound samples etc online at

And you can try out a couple of models at

Aural diversity

We are part of a research network that has just been funded, focused around Aural diversity.

Aural Diversity arises from the observation that everybody hears differently. The assumption that we all possess a standard, undifferentiated pair of ears underpins most listening scenarios. It’s the basis of many audio technologies, and has been a basis for much of our understanding of hearing and hearing perception. But the assumption is demonstrably incorrect, and taking it too far means that we miss out on many opportunities for advances in auditory science and audio engineering. We may well ask: whose ears are standard? Whose ear has primacy? The network investigates the consequences of hearing differences in areas such as music and performance, soundscape and sound studies, hearing sciences and acoustics, hearing care and hearing technologies, audio engineering and design, creative computing and AI, and indeed any field that has hearing or listening as a major component.

The term ‘auraldiversity’ echoes ‘neurodiversity’ as a way of distinguishing between ‘normal’ hearing, defined by BS ISO 226:2003 as that of a healthy 18-25 year-old, and atypical hearing (Drever 2018, ‘Primacy of the Ear’). This affects everybody to some degree. Each individual’s ears are uniquely shaped. We have all experienced temporary changes in hearing, such as when having a cold. And everybody goes through presbyacusis (age-related hearing loss) at varying rates after the teenage years.

More specific aural divergences are the result of an array of hearing differences or impairments which affect roughly 1.1 billion people worldwide (Lancet, 2013). These include noise-related, genetic, ototoxic, traumatic, and disorder-based hearing loss, some of which may cause full or partial deafness. However, “loss” is not the only form of impairment: auditory perceptual disorders such as tinnitus, hyperacusis and misophonia involve an increased sensitivity to sound.

And it’s been an issue in our research too. We’ve spent years developing automatic mixing systems that produce audio content like a sound engineer would (De Man et al. 2017, ‘Ten Years of Automatic Mixing’). But to do that, we usually assume that there is a ‘right way’ to mix, when of course it really depends on the listener, the listener’s environment, and many other factors. Our recent research has focused on developing simulators that allow anyone to hear the world as it really sounds to someone with hearing loss.

AHRC is funding the network for two years, beginning July 2021. The network is led by Andrew Hugill of the University of Leicester. The core partners are the Universities of Leicester, Salford, Nottingham, Leeds, Goldsmiths, Queen Mary University of London (the team behind this blog), and the Attenborough Arts Centre. The wider network includes many more universities and a host of organisations concerned with hearing and listening.

The network will stage five workshops, each with a different focus:

  • Hearing care and technologies. How the use of hearing technologies may affect music and everyday auditory experiences.
  • Scientific and clinical aspects. How an arts and humanities approach might complement, challenge, and enhance scientific investigation.
  • Acoustics of listening differently. How acoustic design of the built and digital environments can be improved.
  • Aural diversity in the soundscape. Includes a concert featuring new works by aurally diverse artists for an aurally diverse audience.
  • Music and performance. Use of new technologies in composition and performance.

See http://auraldiversity.org for more details.

Invitation to online listening study

We would like to invite you to participate in our study titled “Investigation of frequency-specific loudness discomfort levels, in listeners with migraine-related hypersensitivity to sound”.

Please note: You do not have to be a migraine sufferer to participate in this study, although if you are, please make sure to specify that when asked during the study (for more on the eligibility criteria, check the list below).

Our study consists of a brief questionnaire, followed by a simple listening test. The study is targeted towards listeners both with and without migraine headaches, and in order to participate you have to meet all of the following criteria:

1) Be 18 years old or older

2) Not have any history or diagnosis of hearing loss

3) Have access to a quiet room to take the test

4) Have access to a computer with an internet connection

5) Have access to a pair of functioning headphones

The total duration of the study is approximately 25 minutes. Your participation is voluntary but valuable: it could provide useful insight into the auditory manifestations of migraine, and help identify possible differences between participants with and without migraines, facilitating further research on sound adaptations for migraine sufferers.

To access the study please follow the link below:

https://golisten.ucd.ie/task/hearing-test/5ff5b8ee0a6da21ed8df2fc7

If you have any questions or would like to share your feedback on this study, please email a.mourgela@qmul.ac.uk or joshua.reiss@qmul.ac.uk

What we did in 2020 (despite the virus!)

So this is a short year in review for the Intelligent Sound Engineering team. I won’t focus on Covid-19, because that will be the focus of every other year in review. Instead, I’ll keep it brief with some highlights.

We co-founded two new companies, Tonz and Nemisindo. Tonz relates to some of our deep learning research and Nemisindo to our procedural audio research, though they’ll surely evolve into something greater. Keep an eye out for announcements from both of them.

I (Josh Reiss) was elected president of the Audio Engineering Society. It’s a great honour. But I become President-Elect on Jan 1st 2021, and then President on Jan 1st 2022, so it’s a slow transition into the role. I also gave a keynote at the 8th China Conference on Sound and Music Technology.

Angeliki Mourgela’s hearing loss simulator won the Gold Prize (first place) in the MATLAB VST plugin competition. This work was also used to present sounds as heard by a character with hearing loss in the BBC drama Casualty.

J. T. Colonel and Christian Steinmetz gave an invited talk at the AES Virtual Symposium: Applications of Machine Learning in Audio.

We are continuing our collaboration with Yamaha, and started new grants with support from InnovateUK (Hookslam), industry, EPSRC and others. There are more in various stages of submission, review or finalising acceptance, so hopefully I can make proper announcements about them soon.

Christian Steinmetz and Ilias Ibnyahya started their PhDs with the team. Emmanouil Chourdakis, Alessia Milo and Marco Martinez completed their PhDs. Lauren Edlin, Angeliki Mourgela, J. T. Colonel and Marco Comunità are all progressing well through various stages of the PhD. Johan Pauwels and Hazar Tez are doing great work in postdoc positions, and Jack Walters and Luke Brosnahan are working wonders while interning with our spin-out companies. I’m sure I’ve left a few people out.

So, though the virus situation meant a lot of things were put on pause or fizzled out, we actually accomplished quite a lot in 2020.

And finally, here are our research publications from this past year:

Hearing loss simulator – MATLAB Plugin Competition Gold Award Winner

Congratulations to Angeliki Mourgela, winner of the AES Show 2020 Student Competition for developing a MATLAB plugin. The aim of the competition was for students to ‘Design a new kind of audio production VST plugin using MATLAB Software and your wits’.

Hearing loss is a global phenomenon, with almost 500 million people worldwide suffering from it, a number only increasing with an ageing population. Hearing loss can severely impact the daily life of an individual, causing both functional and emotional difficulties and affecting their overall quality of life. Research efforts towards a better understanding of its physical and perceptual characteristics, as well as the development of new and efficient methods for audio enhancement, are an essential endeavour for the future.

Angeliki developed a real-time hearing loss simulator for use in audio production. It builds on a previous simulation, but is now real-time, low latency, and available as a stereo VST audio effect plug-in with more control and more accurate modelling of hearing loss. It offers the option of customising threshold attenuations on each ear, corresponding to the audiogram information. It also incorporates additional effects such as spectral smearing, rapid loudness growth and loss of temporal resolution.
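
As a rough illustration of the threshold-attenuation idea only (this is not Angeliki’s plugin, and the band layout and audiogram values are invented for the example), one could attenuate octave bands of a signal according to audiogram values for one ear:

```python
# Rough sketch (not the actual plugin): attenuate octave bands of a signal
# according to audiogram threshold shifts, for one ear.
import numpy as np
from scipy.signal import butter, sosfilt

def simulate_thresholds(x, sr, audiogram_db):
    """audiogram_db: hearing level in dB at 250, 500, 1k, 2k, 4k and 8k Hz."""
    centres = [250, 500, 1000, 2000, 4000, 8000]
    out = np.zeros_like(x)
    for fc, loss_db in zip(centres, audiogram_db):
        lo, hi = fc / np.sqrt(2), min(fc * np.sqrt(2), sr / 2 - 1)
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band = sosfilt(sos, x)
        out += band * 10 ** (-loss_db / 20)  # greater loss, more attenuation
    return out

sr = 44100
x = np.random.randn(sr)  # stand-in for a second of programme audio
left_ear = simulate_thresholds(x, sr, [10, 15, 25, 40, 55, 70])  # sloping loss
```

The real simulator goes well beyond this, adding spectral smearing, rapid loudness growth and reduced temporal resolution, and doing it all in real time with low latency.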

In effect, it allows anyone to hear the world as it really sounds to someone with hearing loss. And it means that audio producers can easily preview what their content would sound like to most hearing impaired listeners.

Here’s a video with Angeliki demonstrating the system.

Her plugin was also used in an episode of the BBC drama Casualty to let the audience hear the world as heard by a character with severe hearing loss.

You can download her code from the MathWorks file exchange and additional code on SoundSoftware.

Full technical details of the work and the research around it (in collaboration with myself and Dr. Trevor Agus of Queen’s University Belfast) were published in:

A. Mourgela, T. Agus and J. D. Reiss, “Investigation of a Real-Time Hearing Loss Simulation for Audio Production,” 149th Audio Engineering Society Convention, 2020.

Many thanks to the team from MathWorks for sponsoring and hosting the competition, and congratulations to all the other winners of the AES Student Competitions.