Aural diversity

We are part of a research network that has just been funded, focused on Aural Diversity.

Aural Diversity arises from the observation that everybody hears differently. The assumption that we all possess a standard, undifferentiated pair of ears underpins most listening scenarios. It’s the basis of many audio technologies, and it has shaped much of our understanding of hearing and hearing perception. But the assumption is demonstrably incorrect, and taking it too far means missing many opportunities for advances in auditory science and audio engineering. We may well ask: whose ears are standard? Whose ear has primacy? The network investigates the consequences of hearing differences in areas such as: music and performance, soundscape and sound studies, hearing sciences and acoustics, hearing care and hearing technologies, audio engineering and design, creative computing and AI, and indeed any field that has hearing or listening as a major component.

The term ‘auraldiversity’ echoes ‘neurodiversity’ as a way of distinguishing between ‘normal’ hearing, defined by BS ISO 226:2003 as that of a healthy 18- to 25-year-old, and atypical hearing (Drever 2018, ‘Primacy of the Ear’). This affects everybody to some degree. Each individual’s ears are uniquely shaped. We have all experienced temporary changes in hearing, such as when having a cold. And everybody goes through presbyacusis (age-related hearing loss) at varying rates after the teenage years.

More specific aural divergences are the result of an array of hearing differences or impairments which affect roughly 1.1 billion people worldwide (Lancet, 2013). These include noise-related, genetic, ototoxic, traumatic, and disorder-based hearing loss, some of which may cause full or partial deafness. However, “loss” is not the only form of impairment: auditory perceptual disorders such as hyperacusis and misophonia involve an increased sensitivity to sound, while tinnitus involves perceiving sound that has no external source.

And it’s been an issue in our research too. We’ve spent years developing automatic mixing systems that produce audio content the way a sound engineer would (De Man et al. 2017, ‘Ten Years of Automatic Mixing’). But to do that, we usually assume there is a ‘right way’ to mix, when of course it really depends on the listener, the listener’s environment, and many other factors. Our recent research has focused on developing simulators that allow anyone to hear the world as it really sounds to someone with hearing loss.

AHRC is funding the network for two years, beginning July 2021. The network is led by Andrew Hugill of the University of Leicester. The core partners are the Universities of Leicester, Salford, Nottingham, Leeds, Goldsmiths, Queen Mary University of London (the team behind this blog), and the Attenborough Arts Centre. The wider network includes many more universities and a host of organisations concerned with hearing and listening.

The network will stage five workshops, each with a different focus:

  • Hearing care and technologies. How the use of hearing technologies may affect music and everyday auditory experiences.
  • Scientific and clinical aspects. How an arts and humanities approach might complement, challenge, and enhance scientific investigation.
  • Acoustics of listening differently. How acoustic design of the built and digital environments can be improved.
  • Aural diversity in the soundscape. Includes a concert featuring new works by aurally diverse artists for an aurally diverse audience.
  • Music and performance. Use of new technologies in composition and performance.

See http://auraldiversity.org for more details.

Hearing loss simulator – MATLAB Plugin Competition Gold Award Winner

Congratulations to Angeliki Mourgela, winner of the AES Show 2020 Student Competition for developing a MATLAB plugin. The aim of the competition was for students to ‘Design a new kind of audio production VST plugin using MATLAB Software and your wits’.

Hearing loss is a global phenomenon, with almost 500 million people worldwide suffering from it, a number that is only increasing as the population ages. Hearing loss can severely impact an individual’s daily life, causing both functional and emotional difficulties and affecting their overall quality of life. Research efforts towards a better understanding of its physical and perceptual characteristics, as well as the development of new and efficient methods for audio enhancement, are an essential endeavour for the future.

Angeliki developed a real-time hearing loss simulator for use in audio production. It builds on a previous simulation, but is now real-time, low-latency, and available as a stereo VST audio effect plug-in with more control and more accurate modelling of hearing loss. It lets the user set the threshold attenuation for each ear to match audiogram information, and it incorporates additional effects such as spectral smearing, rapid loudness growth, and loss of temporal resolution.
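
To give a flavour of the simplest ingredient of such a simulation, here is a minimal MATLAB sketch of audiogram-based threshold attenuation. It is not Angeliki’s plugin: the audiogram values, the filter order and the use of fir2 are assumptions made purely for illustration.

    % Illustrative sketch only (not the published plugin): simulate the
    % threshold-attenuation part of a hearing loss with an FIR filter
    % whose magnitude response follows an example audiogram.
    fs = 44100;                                      % sample rate (Hz)
    audiogramFreqs = [250 500 1000 2000 4000 8000];  % audiogram test frequencies (Hz)
    hearingLossDb  = [10 15 20 35 50 65];            % assumed hearing levels (dB HL)

    gains = 10 .^ (-hearingLossDb / 20);             % dB loss -> linear attenuation

    f = [0 audiogramFreqs fs/2] / (fs/2);            % normalised frequency grid, 0..1
    m = [gains(1) gains gains(end)];                 % hold gains at the band edges
    b = fir2(512, f, m);                             % frequency-sampled linear-phase FIR

    x = 0.1 * randn(fs, 1);                          % one second of test noise
    y = filter(b, 1, x);                             % the signal 'heard' through the loss
    % soundsc(y, fs);                                % uncomment to listen

The actual plugin goes well beyond this, adding the spectral smearing, loudness growth and temporal-resolution effects described above, processing each ear separately, and running in real time.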

In effect, it allows anyone to hear the world as it really sounds to someone with hearing loss. And it means that audio producers can easily preview what their content would sound like to most hearing-impaired listeners.

Here’s a video with Angeliki demonstrating the system.

Her plugin was also used in an episode of the BBC drama Casualty to let the audience hear the world as heard by a character with severe hearing loss.

You can download her code from the MathWorks File Exchange and additional code on SoundSoftware.

Full technical details of the work and the research around it (in collaboration with me and Dr. Trevor Agus of Queen’s University Belfast) were published in:

A. Mourgela, T. Agus and J. D. Reiss, “Investigation of a Real-Time Hearing Loss Simulation for Audio Production,” 149th AES Convention, 2020.

Many thanks to the team from MathWorks for sponsoring and hosting the competition, and congratulations to all the other winners of the AES Student Competitions.