Cultural Influences on Mixing Practices

TL;DR: we are presenting a paper at the upcoming AES Convention in Milan on differences between mixes by engineers from different backgrounds, based on qualitative analysis of the mixers’ notes as well as the critical listening comments of others.

We recently reviewed research to be presented at the AES 144th Convention, with further blog entries on some of our own contributions: analogue-matched EQ and physically derived synthesis of edge tones. Here’s one more preview.

The mixing of multitrack music has been a core research interest of this group for the past ten years. In particular, much of the research in this area relates to the automation or streamlining of various processes which traditionally require significant time and effort from the mix engineer. To do that successfully, however, we need to have an excellent understanding of the process of the mix engineer, and of the impact of the various signal manipulations on the perception of the listener. Members of this group have worked on projects that sought to expand this understanding by surveying mix engineers, analysing existing mixes, conducting psychoacoustic tests to optimise specific signal processing parameters, and measuring the subjective response to different mixes of the same song. This knowledge has led to the creation of novel music production tools, but also simply to a better grasp of this exceedingly multidimensional and esoteric process.

At the upcoming Convention of the Audio Engineering Society in Milan, 23-26 May 2018, we will present a paper that builds on our previous work on the analysis of mix creation and evaluation. Whereas our previous analysis of contrasting mixes was mostly quantitative in nature, this work focuses on the qualitative annotation of mixes and of the documentation provided by their respective creators. Using these methods, we investigated which mix principles and listening criteria the participants shared, and what the impact of the available technology is (fully ‘in the box’ vs. with outboard processing available).

We found that the task order, balancing practices, and choice of effects were unique to each engineer, though some common trends were identified: starting the mix with all faders at 0 dB, creating subgroups, and changing levels and effect parameters for different song sections, to name a few. Notably, all mixes were made ‘in the box’, i.e. using only software, even when analogue equipment was available.

In addition, the large dataset we collected during the last few years (in particular as part of Brecht De Man’s PhD) allowed us to compare mixes from the subjects of this study – students of the Paris Conservatoire – to mixes by students from other institutions. To this end, we used one multitrack recording which has served as source material in several previous experiments. Quantitative analysis of level balancing practices showed no significant deviation between institutions – consistent with previous findings.

The paper is written by Amandine Pras, a collaborator from the University of Lethbridge who is, among other things, an expert on qualitative analysis of music production practices; Brecht De Man, previously a member of this group and now a Research Fellow with our collaborators at Birmingham City University; and Josh Reiss, head of this group. All will be present at the Convention. Do come say hi!

You can already read the paper here:

Amandine Pras, Brecht De Man and Joshua D. Reiss, “A Case Study of Cultural Influences on Mixing Practices,” AES Convention 144, May 2018.


Weird and wonderful research to be unveiled at the 144th Audio Engineering Society Convention


Last year, we previewed the 142nd and 143rd AES Conventions, which we followed with wrap-up discussions here and here. The next AES Convention is just around the corner, May 23 to 26 in Milan. As before, the Audio Engineering research team here aim to be quite active at the Convention.

These conventions have thousands of attendees, but aren’t so large that you get lost or overwhelmed. Away from the main exhibition hall is the Technical Program, which includes plenty of tutorials and presentations on cutting edge research.

So we’ve gathered together some information about the events that caught our eye as unusual or exceptionally high quality, that we’re involved in or attending, or that are just worth mentioning. And this Convention will certainly live up to the hype.

Wednesday, May 23

From 11:15 to 12:45 that day, there’s an interesting poster by a team of researchers from the University of Limerick titled Can Visual Priming Affect the Perceived Sound Quality of a Voice Signal in Voice over Internet Protocol (VoIP) Applications? This builds on work we discussed in a previous blog entry, where they did a perceptual study of DFA Faders, looking at how people’s perception of mixing changes when the sound engineer only pretends to make an adjustment.

As expected given the location, there’s lots of great work being presented by Italian researchers. The first one that caught my eye is the 2:30-4 poster on Active noise control for snoring reduction. Whether you’re a loud snorer, sleep next to someone who is a loud snorer or just interested in unusual applications of audio signal processing, this one is worth checking out.

Do you get annoyed sometimes when driving and the road surface changes to something really noisy? Surely someone should do a study and find out which roads are noisiest, so that we can put a bit of effort into better road design and better in-vehicle equalisation and noise reduction? Well, now it’s finally happened, with this paper in the same session on Deep Neural Networks for Road Surface Roughness Classification from Acoustic Signals.

Thursday, May 24

If you were to spend only one day this year immersing yourself in frontier audio engineering research, this is the day to do it.

How do people mix music differently in different countries? And do people perceive the mixes differently based on their different cultural backgrounds? These are the sorts of questions our research team here have been asking. Find out more in this 9:30 presentation by Amandine Pras. She led this Case Study of Cultural Influences on Mixing Practices, in collaboration with Brecht De Man (now with Birmingham City University) and myself.

Rod Selfridge has been blazing new trails in sound synthesis and procedural audio. He won the Best Student Paper Award at the 141st AES Convention and the Best Paper Award at Sound and Music Computing. He’ll give another great presentation at noon on Physically Derived Synthesis Model of an Edge Tone, which was also discussed in a recent blog entry.

I love the title of this next paper, Miniaturized Noise Generation System—A Simulation of a Simulation, which will be presented at 2:30pm by researchers from Intel Technology in Gdansk, Poland. This idea of a meta-simulation is not as uncommon as you might think; we do digital emulation of old analogue synthesizers, and I’ve seen papers on numerical models of Foley rain sound generators.

A highlight for our team here is our 2:45 pm presentation, FXive: A Web Platform for Procedural Sound Synthesis. We’ll be unveiling a disruptive innovation for sound design, aimed at replacing reliance on sound effect libraries. Please come check it out, and get in touch with the presenters or any members of the team to find out more.

Immediately following this is a presentation which asks Can Algorithms Replace a Sound Engineer? This is a question the research team here have also investigated a lot; you could even say it was the main focus of our research for several years. The team behind this presentation are asking it in relation to Auto-EQ. I’m sure it will be interesting, and I hope they reference a few of our papers on the subject.

From 9-10:30, I will chair a Workshop on The State of the Art in Sound Synthesis and Procedural Audio, featuring the world’s experts on the subject. Outside of speech and possibly music, sound synthesis is still in its infancy, but it’s destined to change the world of sound design in the near future. Find out why.

From 12:15 to 13:45 there’s a workshop related to machine learning in audio (a subject that is sometimes called machine listening): Deep Learning for Audio Applications. Deep learning can be quite a technical subject, and there’s a lot of hype around it, so a workshop on the subject is a good way to get a feel for it. See below for another machine listening related workshop on Friday.

The Heyser Lecture, named after Richard Heyser (we discussed some of his work in a previous entry), is a prestigious evening talk given by one of the eminent individuals in the field. This one will be presented by Malcolm Hawksford, a man who has had major impact on research in audio engineering for decades.


The 9:30 — 11 poster session features some unusual but very interesting research. A talented team of researchers from Ancona will present A Preliminary Study of Sounds Emitted by Honey Bees in a Beehive.

Intense solar activity in March 2012 caused some amazing solar storms here on Earth. Researchers in Finland recorded sounds associated with them, and some very unusual results will be presented in the same session with the poster titled Analysis of Reports and Crackling Sounds with Associated Magnetic Field Disturbances Recorded during a Geomagnetic Storm on March 7, 2012 in Southern Finland.

You’ve been living in a cave if you haven’t noticed the recent proliferation of smart devices, especially in the audio field. But what makes them tick? Is there a common framework, and how are they tested? Find out more at 10:45 when researchers from Audio Precision will present The Anatomy, Physiology, and Diagnostics of Smart Audio Devices.

From 3 to 4:30, there’s a Workshop on Artificial Intelligence in Your Audio. It follows on from a highly successful workshop we did on the subject at the last Convention.


A couple of weeks ago, John Flynn wrote an excellent blog entry describing his paper on Improving the Frequency Response Magnitude and Phase of Analogue-Matched Digital Filters. His work is a true advance on the state of the art, providing digital filters with closer matches to their analogue counterparts than any previous approaches. The full details will be unveiled in his presentation at 10:30.

If you haven’t seen Mariana Lopez presenting research, you’re missing out. Her enthusiasm for the subject is infectious, and she has a wonderful ability to convey the technical details, their deeper meanings and their importance to any audience. See her one hour tutorial on Hearing the Past: Using Acoustic Measurement Techniques and Computer Models to Study Heritage Sites, starting at 9:15.

The full program can be explored on the Convention Calendar or the Convention website. Come say hi to us if you’re there! Josh Reiss (author of this blog entry), John Flynn, Parham Bahadoran and Adan Benito from the Audio Engineering research team within the Centre for Digital Music, along with two recent graduates Brecht De Man and Rod Selfridge, will all be there.