Congratulations Dr. Angeliki Mourgela!

Today one of our PhD student researchers, Angeliki Mourgela, successfully defended her PhD. The form of these exams, or vivas, varies from country to country, and even institution to institution, as we discussed previously. Here, it’s pretty gruelling: behind closed doors, with two expert examiners probing every aspect of the PhD.
Angeliki’s PhD was on ‘Perceptually Motivated, Intelligent Audio Mixing Approaches for Hearing Loss.’ Aspects of her research have been described in previous blog entries on hearing loss simulation and online listening tests. Angeliki is also a sound engineer and death metal musician.

As the population ages, hearing loss is becoming more and more of a concern. Yet mixing engineers, and sound engineers in general, rarely know how the content they produce would sound to listeners with hearing loss. Wouldn’t it be great if they could, with the click of a button, hear in real time how their mix would sound to listeners with different hearing profiles? And could the content be automatically remixed so that, to someone with hearing loss, the new mix sounds as close as possible to how the original mix sounds to someone with normal hearing? That was the motivation for this research.

Angeliki’s thesis explored perceptually motivated, intelligent approaches to audio mixing for listeners with hearing loss, using a hearing loss simulator as a referencing tool for manual and automatic mixing. She designed a real-time hearing loss simulation and tested its accuracy and effectiveness in listening studies with participants with real and simulated hearing loss. Audio engineering students and professionals then mixed content through the simulation, to see how engineers might counteract the effects of hearing loss.
The practices extracted from these sessions were then used to inform intelligent audio production approaches for hearing loss.
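
To give a flavour of what such a simulation involves, here is a minimal, hypothetical sketch in Python: it applies audiogram-based, frequency-dependent attenuation to a signal using an FIR filter. This is not Angeliki’s simulator, which also models effects such as loudness recruitment and spectral smearing and runs in real time; the audiogram values below are made up for illustration.

```python
# Minimal illustrative sketch of audiogram-based hearing loss simulation.
# NOT the thesis simulator: real simulators also model loudness recruitment
# and spectral smearing, and run in real time.
import numpy as np
from scipy.signal import firwin2, lfilter

def simulate_hearing_loss(x, fs, audiogram_freqs, audiogram_db):
    """Attenuate mono signal `x` according to an audiogram.

    audiogram_freqs : frequencies (Hz) at which thresholds are given
    audiogram_db    : hearing thresholds (dB HL); larger = more loss
    """
    nyq = fs / 2
    # Extend the audiogram to 0 Hz and Nyquist so firwin2 covers the full band.
    freqs = np.concatenate(([0.0], audiogram_freqs, [nyq])) / nyq
    loss = np.concatenate(([audiogram_db[0]], audiogram_db, [audiogram_db[-1]]))
    gains = 10.0 ** (-loss / 20.0)        # dB threshold shift -> linear attenuation
    fir = firwin2(1025, freqs, gains)     # linear-phase FIR approximation
    return lfilter(fir, [1.0], x)

# Example: a (hypothetical) sloping high-frequency loss.
fs = 44100
t = np.arange(fs) / fs
x = 0.1 * np.sin(2 * np.pi * 440 * t) + 0.1 * np.sin(2 * np.pi * 4000 * t)
y = simulate_hearing_loss(
    x, fs,
    audiogram_freqs=np.array([250, 500, 1000, 2000, 4000, 8000]),
    audiogram_db=np.array([10, 10, 20, 40, 60, 70]),
)
```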

Angeliki now works for RoEx Audio, a start-up company based partly on research done here. We discussed RoEx in a previous blog entry.

Here’s a video with Angeliki demonstrating an early version of her hearing loss simulator plugin.

The simulator won First Place in the MATLAB Student Plugin Competition at the 149th AES Convention, Oct. 2020. It was also used in an episode of the BBC drama Casualty to let the audience hear the world as heard by a character with severe hearing loss.

And finally, here are a few of her publications.

Many thanks also to Angeliki’s collaborators, especially Dr. Trevor Agus, who offered great advice and proposed research directions, Dr. Lorenzo Picinali, who collaborated on some recent evaluation of hearing loss simulators, and Matt Paradis and others from BBC, who supported this work.

Intelligent sound engineering for all

Signal processing challenges to rework music for those with a hearing loss

Intelligent sound engineering opens up the possibility of personalizing audio, for example processing and mixing music so the audio quality is better for someone with a hearing loss. People with a hearing impairment can experience problems when listening to music with or without hearing aids. 430 million people worldwide have a disabling hearing loss, and this number is increasing as the population ages. Poor hearing makes music harder to appreciate; for example, picking out the lyrics or melody becomes more difficult. This reduces the enjoyment of music, and can lead to disengagement from listening and music-making.

I work on the Cadenza project, which has just launched a series of open competitions to get experts in music signal processing and machine learning to develop algorithms to improve music for those with a hearing loss. Such open challenges are increasingly used to push forward audio processing. They’re free to enter, and we provide lots of data, software and support to help competitors take part.

The Cadenza Challenges are about improving the perceived audio quality of recorded music for people with a hearing loss.

What do we mean by audio quality? Imagine listening to the same music track in two different ways: first as a low-quality MP3 played on a cheap mobile, and then as a high-quality WAV over studio-grade monitors. The underlying music is the same in both cases, but the audio quality is very different.

Headphones

The first task you might tackle is our Task 1: listening over headphones. The figure below shows the software baseline that we are providing for you to build on. First the stereo music is demixed into VDBO (Vocals, Drums, Bass, Other) before being remixed into stereo for the listener to hear. At the remixing stage there is an opportunity for intelligent sound engineering to process the VDBO tracks and adjust the balance between them, to personalise and improve the music. We’re also hoping for improved demixing algorithms that allow for the hearing abilities of the listeners.

Baseline schematic for headphone task
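
To make the remixing stage concrete, here is a minimal sketch of how per-stem gains might be applied to already-demixed VDBO stems and the result summed back to stereo. This is not the Cadenza baseline code, which bundles its own demixing and hearing-related processing; the stem names and gain values below are hypothetical personalisation parameters.

```python
# Illustrative VDBO remixing step (a sketch, not the Cadenza baseline).
# Assumes the stems have already been produced by a source-separation model.
import numpy as np

def remix_vdbo(stems, gains_db):
    """stems:    dict of name -> (n_samples, 2) float arrays (stereo stems)
    gains_db: dict of name -> gain in dB, e.g. boost vocals for a listener
              who struggles to pick out lyrics."""
    mix = np.zeros_like(next(iter(stems.values())))
    for name, stem in stems.items():
        mix += stem * 10.0 ** (gains_db.get(name, 0.0) / 20.0)
    # Simple peak safeguard so the personalised remix does not clip.
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix

# Hypothetical per-stem balance for a listener with high-frequency loss,
# applied to dummy stems for illustration.
gains_db = {"vocals": +3.0, "drums": -2.0, "bass": 0.0, "other": -1.0}
n = 44100
stems = {k: 0.05 * np.random.randn(n, 2) for k in gains_db}
personalised = remix_vdbo(stems, gains_db)
```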

Car

The second task you could tackle is intelligent sound engineering in the presence of noise. Listening to music in the car against the rumble of car noise is really common. How would you tune a car stereo (the Enhancement box in the diagram below) so the processed music sounds best, allowing for both the noise and the simple hearing aid the driver is wearing?

Baseline schematic for car task
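
As a rough illustration of the kind of enhancement the car task invites (again, a sketch rather than the challenge baseline), one could estimate the long-term car-noise spectrum and boost the frequency regions where the music would otherwise be masked, before the signal reaches the listener’s hearing aid. The margin and boost limits below are arbitrary illustrative values.

```python
# Rough sketch of noise-aware enhancement for in-car listening
# (illustrative only; not the Cadenza car-task baseline). It boosts
# frequency bins where the long-term music level falls below the
# estimated car-noise level plus a target margin.
import numpy as np
from scipy.signal import stft, istft

def noise_aware_boost(music, noise, fs, margin_db=3.0, max_boost_db=10.0):
    """music, noise: mono float arrays sampled at `fs`."""
    f, _, M = stft(music, fs, nperseg=1024)
    _, _, N = stft(noise, fs, nperseg=1024)
    music_db = 20 * np.log10(np.mean(np.abs(M), axis=1) + 1e-12)
    noise_db = 20 * np.log10(np.mean(np.abs(N), axis=1) + 1e-12)
    # Per-frequency boost needed to sit `margin_db` above the noise, capped.
    boost_db = np.clip(noise_db + margin_db - music_db, 0.0, max_boost_db)
    M_enhanced = M * (10.0 ** (boost_db / 20.0))[:, None]
    _, y = istft(M_enhanced, fs, nperseg=1024)
    return y[: len(music)]
```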

Next steps

Both tasks are live now, with entries due in July 2023. Join us in trying to improve music for those with a hearing loss. Or let us know what you think below: for example, what do you think of the project idea and the scenarios we’ve chosen?

You’ll find lots more on the Cadenza project website, including a Learning Resources section that gives you background information on hearing, hearing loss, hearing aids and other knowledge you might need to enter the challenge. We also have a “find a team” page, if you want to get together with other experts to improve music for those with a hearing loss.

Aural diversity

We are part of a research network that has just been funded, focused on Aural Diversity.

Aural Diversity arises from the observation that everybody hears differently. The assumption that we all possess a standard, undifferentiated pair of ears underpins most listening scenarios. It’s the basis of many audio technologies, and of much of our understanding of hearing and auditory perception. But the assumption is demonstrably incorrect, and taking it too far means we miss out on many opportunities for advances in auditory science and audio engineering. We may well ask: whose ears are standard? Whose ears have primacy? The network investigates the consequences of hearing differences in areas such as music and performance, soundscape and sound studies, hearing sciences and acoustics, hearing care and hearing technologies, audio engineering and design, creative computing and AI, and indeed any field that has hearing or listening as a major component.

The term ‘auraldiversity’ echoes ‘neurodiversity’ as a way of distinguishing between ‘normal’ hearing, defined by BS ISO 226:2003 as that of a healthy 18-25 year-old, and atypical hearing (Drever 2018, ‘Primacy of the Ear’). This affects everybody to some degree. Each individual’s ears are uniquely shaped. We have all experienced temporary changes in hearing, such as when having a cold. And everybody goes through presbyacusis (age-related hearing loss) at varying rates after the teenage years.

More specific aural divergences are the result of an array of hearing differences or impairments which affect roughly 1.1 billion people worldwide (Lancet, 2013). These include noise-related, genetic, ototoxic, traumatic, and disorder-based hearing loss, some of which may cause full or partial deafness. However, “loss” is not the only form of impairment: auditory perceptual disorders such as tinnitus, hyperacusis and misophonia involve an increased sensitivity to sound.

And it’s been an issue in our research too. We’ve spent years developing automatic mixing systems that produce audio content the way a sound engineer would (De Man et al. 2017, ‘Ten Years of Automatic Mixing’). But to do that we usually assume there is a ‘right way’ to mix, when of course the right mix really depends on the listener, the listener’s environment, and many other factors. Our recent research has focused on developing simulators that allow anyone to hear the world as it really sounds to someone with hearing loss.

AHRC is funding the network for two years, beginning July 2021. The network is led by Andrew Hugill of the University of Leicester. The core partners are the Universities of Leicester, Salford, Nottingham, Leeds, Goldsmiths, Queen Mary University of London (the team behind this blog), and the Attenborough Arts Centre. The wider network includes many more universities and a host of organisations concerned with hearing and listening.

The network will stage five workshops, each with a different focus:

• Hearing care and technologies. How the use of hearing technologies may affect music and everyday auditory experiences.
• Scientific and clinical aspects. How an arts and humanities approach might complement, challenge, and enhance scientific investigation.
• Acoustics of listening differently. How acoustic design of the built and digital environments can be improved.
• Aural diversity in the soundscape. Includes a concert featuring new works by aurally diverse artists for an aurally diverse audience.
• Music and performance. Use of new technologies in composition and performance.

See http://auraldiversity.org for more details.