Ten Years of Automatic Mixing


Automatic microphone mixers have been around since 1975. These are devices that lower the levels of microphones that are not in use, thus reducing background noise and preventing acoustic feedback. They’re great for things like conference settings, where there may be many microphones but only a few speakers should be heard at any time.
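To make the idea concrete, here is a minimal sketch of the gain-sharing principle behind Dugan-style automatic mixers, written in Python with NumPy. The function name and the block-based framing are my own for illustration; real devices add time smoothing, a last-mic-on hold, and many other refinements.

```python
import numpy as np

def gain_sharing_automix(frames, eps=1e-12):
    """Toy gain-sharing automixer (illustrative sketch).

    frames: array of shape (num_mics, num_samples) holding one
    short block of audio per microphone.
    """
    # Short-term level of each microphone over this block (RMS)
    levels = np.sqrt(np.mean(frames ** 2, axis=1)) + eps
    # Each mic's gain is its share of the summed level, so active
    # talkers pass through while idle mics are attenuated
    gains = levels / np.sum(levels)
    # Apply per-mic gains and sum to a mono output
    return np.sum(frames * gains[:, None], axis=0)
```

Because the gains always sum to one, the total system gain stays constant however many microphones are open, which is what limits the gain build-up that would otherwise cause feedback.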

Over the next three decades, various designs appeared, but the field didn’t really grow much beyond Dan Dugan’s original concept.

Enter Enrique Perez Gonzalez, a PhD student researcher and experienced sound engineer. On September 11th, 2007, exactly ten years before this blog post was published, he presented the paper “Automatic Mixing: Live Downmixing Stereo Panner.” With this work, he showed that it might be possible to automate not just fader levels in speech applications, but other tasks and other applications as well. Over the course of his PhD research, he proposed methods for autonomous operation of many aspects of the music mixing process: stereo positioning, equalisation, time alignment, polarity correction, feedback prevention, selective masking minimisation, and more. He also laid out a framework for further automatic mixing systems.
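To give a flavour of what such a cross-adaptive tool looks like, below is a toy stereo panner in Python that spreads tracks across the stereo field according to their spectral centroids, so that spectrally similar sources end up apart from one another. It is only an illustration of the cross-adaptive idea; the function and parameter names are mine, and this is not the algorithm from Enrique’s paper.

```python
import numpy as np

def centroid_based_pan(tracks, sr=44100, max_width=0.8):
    """Toy cross-adaptive stereo panner (illustrative sketch).

    tracks: array of shape (num_tracks, num_samples).
    Returns a (2, num_samples) stereo mix.
    """
    # Spectral centroid of each track: a rough brightness measure
    mags = np.abs(np.fft.rfft(tracks, axis=1))
    freqs = np.fft.rfftfreq(tracks.shape[1], 1.0 / sr)
    centroids = (mags * freqs).sum(axis=1) / (mags.sum(axis=1) + 1e-12)

    # Rank tracks by centroid and map rank to a pan position in
    # [-max_width, max_width], alternating sides so that tracks
    # with similar spectra are pushed to opposite channels
    order = np.argsort(centroids)
    pans = np.zeros(len(tracks))
    for rank, idx in enumerate(order):
        side = -1 if rank % 2 else 1
        pans[idx] = side * max_width * rank / max(len(tracks) - 1, 1)

    # Constant-power pan law: map pan in [-1, 1] to an angle in [0, pi/2]
    theta = (pans + 1) * np.pi / 4
    left = (np.cos(theta)[:, None] * tracks).sum(axis=0)
    right = (np.sin(theta)[:, None] * tracks).sum(axis=0)
    return np.vstack([left, right])
```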

Enrique established a new field of research, and it’s been growing ever since. People have used machine learning techniques for automatic mixing, applied auditory neuroscience to the problem, and explored where the boundaries lie between the creative and technical aspects of mixing. Commercial products have arisen based on the concept. And yet all this is still only scratching the surface.

I had the privilege of supervising Enrique and have many anecdotes from that time. I remember going with Enrique to a talk that Dan Dugan gave at an AES convention panel session, where one of us asked Dan about automating other aspects of the mix besides mic levels. He had a puzzled look and basically said that he’d never considered it. It was also interesting to see the hostile reactions from some (but certainly not all) practitioners, which brings up lots of interesting questions about disruptive innovations and the threat of automation.


Next week, Salford University will host the 3rd Workshop on Intelligent Music Production, which also builds on this early research. There, Brecht De Man will present the paper ‘Ten Years of Automatic Mixing’, describing the evolution of the field, the approaches taken, the gaps in our knowledge, and what appear to be the most exciting new research directions. Enrique, who is now CTO of Solid State Logic, will also be a panellist at the Workshop.

Here’s a video of one of the early Automatic Mixing demonstrators.

And here’s a list of all the early Automatic Mixing papers.

  • E. Perez Gonzalez and J. D. Reiss, “A real-time semi-autonomous audio panning system for music mixing,” EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 436895, pp. 1-10, 2010.
  • E. Perez Gonzalez and J. D. Reiss, “Automatic Mixing,” in DAFX: Digital Audio Effects, Second Edition (ed. U. Zölzer), John Wiley & Sons, Chichester, UK, pp. 523-550, 2011. doi: 10.1002/9781119991298.ch13
  • E. Perez Gonzalez and J. D. Reiss, “Automatic equalization of multi-channel audio using cross-adaptive methods,” 127th AES Convention, New York, October 2009.
  • E. Perez Gonzalez and J. D. Reiss, “Automatic gain and fader control for live mixing,” IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, New York, October 18-21, 2009.
  • E. Perez Gonzalez and J. D. Reiss, “Determination and correction of individual channel time offsets for signals involved in an audio mixture,” 125th AES Convention, San Francisco, USA, October 2008.
  • E. Perez Gonzalez and J. D. Reiss, “An automatic maximum gain normalization technique with applications to audio mixing,” 124th AES Convention, Amsterdam, Netherlands, May 2008.
  • E. Perez Gonzalez and J. D. Reiss, “Improved control for selective minimization of masking using interchannel dependency effects,” 11th International Conference on Digital Audio Effects (DAFx), September 2008.
  • E. Perez Gonzalez and J. D. Reiss, “Automatic Mixing: Live Downmixing Stereo Panner,” 10th International Conference on Digital Audio Effects (DAFx-07), Bordeaux, France, September 10-15, 2007.

C4DM at recent Audio Engineering Society Conferences

Featuring contributions from Dave Moffat and Brecht De Man

As you might know, or can guess, we’re heavily involved in the Audio Engineering Society, which is the foremost professional organisation in this field. We had a big impact at two of their recent conferences.

The 60th AES Conference on Dereverberation and Reverberation of Audio, Music and Speech took place in Leuven, Belgium, 3-5 February 2016. The conference was based around a European-funded project of the same name (DREAMS – http://www.dreams-itn.eu/) and aimed to bring together expertise in reverberation and reverberation removal.

The conference started out with a fantastic overview by Vesa Välimäki of reverberation technology and how it has progressed over the past 50 years. The day then went on to present work on object-based coding of reverberation and computational dereverberation techniques.

Day two started with Thomas Brand discussing sound spatialisation and how listeners are much more tolerant of reverberation in binaural listening conditions. Further work was then presented on physical modelling approaches to reverberation simulation, user perception, and spatialisation of audio in the binaural context.

Day three began with Emanuël Habets presenting on the past 50 years of reverberation removal, observing that for as long as we have been modelling reverberation, we have also been trying to remove it from audio signals. Work was then presented on multichannel dereverberation and computational sound field modelling techniques.

The Audio Engineering group from C4DM were there in strength, presenting two papers and a demo session. David Moffat presented work on the impact dereverberation can make when combined with state-of-the-art source separation technologies. Emmanouil Theofanis Chourdakis presented a hybrid model which, based on machine learning technologies, can intelligently apply reverberation to an audio track. Brecht De Man presented his latest research on analysis of studio mixing practices, focusing on the perception of reverberation in multitrack mixes, both in the demo session and again in a plenary lecture.

The following week was the AES Audio for Games conference in London. This is the fifth game audio conference they’ve had, and we’ve been involved in this conference series since its inception in 2009. C4DM researchers Dave Moffat, Will Wilkinson and Christian Heinrichs all presented work related to sound synthesis and procedural audio, which is becoming a big focus of our efforts (more to come!).

Brecht De Man put together an excellent report of the conference, where you can find out a lot more.