Ten Years of Automatic Mixing


Automatic microphone mixers have been around since 1975. These are devices that lower the levels of microphones that are not in use, thus reducing background noise and preventing acoustic feedback. They’re great for things like conference settings, where there may be many microphones but only a few speakers should be heard at any time.
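To make the idea concrete, here is a rough sketch of the gain-sharing principle behind Dugan-style automixers: each microphone's gain is set to its share of the total short-term level, so louder (active) mics come up while idle mics come down and the summed gain stays roughly constant. The function name, block-based processing and the `floor_db` parameter are illustrative, not taken from any particular product.

```python
import numpy as np

def gain_sharing_automix(frames, floor_db=-40.0):
    """Toy gain-sharing automix for one analysis block.

    frames: array of shape (num_mics, num_samples), one block per microphone.
    Returns one linear gain per microphone for this block.
    """
    # Short-term RMS level of each microphone (small offset avoids divide-by-zero)
    levels = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12
    # Each mic's gain is its share of the total level, so the gains sum to 1
    gains = levels / np.sum(levels)
    # Clamp very quiet mics to a floor rather than muting them completely
    floor = 10 ** (floor_db / 20.0)
    return np.maximum(gains, floor)
```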

Over the next three decades, various designs appeared, but the field didn’t really grow much beyond Dan Dugan’s original concept.

Enter Enrique Perez Gonzalez, a PhD student researcher and experienced sound engineer. On September 11th, 2007, exactly ten years before the publication of this blog post, he presented the paper “Automatic Mixing: Live Downmixing Stereo Panner.” With this work, he showed that it may be possible to automate not just fader levels in speech applications, but many other tasks, and for other applications. Over the course of his PhD research, he proposed methods for autonomous operation of many aspects of the music mixing process: stereo positioning, equalisation, time alignment, polarity correction, feedback prevention, selective masking minimisation, and more. He also laid out a framework for further automatic mixing systems.

Enrique established a new field of research, and it’s been growing ever since. People have used machine learning techniques for automatic mixing, applied auditory neuroscience to the problem, and explored where the boundaries lie between the creative and technical aspects of mixing. Commercial products have arisen based on the concept. And yet all this is still only scratching the surface.

I had the privilege of supervising Enrique and have many anecdotes from that time. I remember Enrique and me going to a talk Dan Dugan gave at an AES convention panel session, where one of us asked Dan about automating other aspects of the mix besides mic levels. He had a puzzled look and basically said that he’d never considered it. It was also interesting to see the hostile reactions from some (but certainly not all) practitioners, which raises lots of interesting questions about disruptive innovations and the threat of automation.


Next week, Salford University will host the 3rd Workshop on Intelligent Music Production, which also builds on this early research. There, Brecht De Man will present the paper ‘Ten Years of Automatic Mixing’, describing the evolution of the field, the approaches taken, the gaps in our knowledge and what appears to be the most exciting new research directions. Enrique, who is now CTO of Solid State Logic, will also be a panellist at the Workshop.

Here’s a video of one of the early Automatic Mixing demonstrators.

And here’s a list of all the early Automatic Mixing papers.

  • E. Perez Gonzalez and J. D. Reiss, “A real-time semi-autonomous audio panning system for music mixing,” EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 436895, pp. 1-10, 2010.
  • E. Perez Gonzalez and J. D. Reiss, “Automatic Mixing,” in DAFX: Digital Audio Effects, Second Edition (ed. U. Zölzer), John Wiley & Sons, Ltd, Chichester, UK, pp. 523-550, 2011. doi: 10.1002/9781119991298.ch13
  • E. Perez Gonzalez and J. D. Reiss, “Automatic equalization of multi-channel audio using cross-adaptive methods,” 127th AES Convention, New York, October 2009.
  • E. Perez Gonzalez and J. D. Reiss, “Automatic Gain and Fader Control For Live Mixing,” IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, New York, October 18-21, 2009.
  • E. Perez Gonzalez and J. D. Reiss, “Determination and correction of individual channel time offsets for signals involved in an audio mixture,” 125th AES Convention, San Francisco, USA, October 2008.
  • E. Perez Gonzalez and J. D. Reiss, “An automatic maximum gain normalization technique with applications to audio mixing,” 124th AES Convention, Amsterdam, Netherlands, May 2008.
  • E. Perez Gonzalez and J. D. Reiss, “Improved control for selective minimization of masking using interchannel dependency effects,” 11th International Conference on Digital Audio Effects (DAFx), September 2008.
  • E. Perez Gonzalez and J. D. Reiss, “Automatic Mixing: Live Downmixing Stereo Panner,” 10th International Conference on Digital Audio Effects (DAFx-07), Bordeaux, France, September 10-15, 2007.

Joshua D Reiss – Professor of Audio Engineering

Intelligent Sound Engineering is pleased to announce that our lead academic, Joshua D Reiss, has been promoted to Professor of Audio Engineering.

Professor Reiss holds degrees in Physics and Mathematics, and a PhD in Chaos Theory from the Georgia Institute of Technology. He has been an academic with the Centre for Digital Music in the Electronic Engineering and Computer Science department at Queen Mary University of London since 2003. His research focuses on state-of-the-art signal processing techniques for sound engineering; he has published over 160 scientific papers and co-authored the book “Audio Effects: Theory, Implementation and Application” with Andrew McPherson. Along with pioneering research into intelligent audio production technologies, Professor Reiss also works on state-of-the-art sound synthesis techniques.

Professor Reiss is a Visiting Professor at Birmingham City University, an Enterprise Fellow of the Royal Academy of Engineering, and has been a governor of the Audio Engineering Society since 2013.

Blogs, blogs, blogs

We’re collaborating on a really interesting project called ‘Cross-adaptive processing as musical intervention,’ led by Professor Øyvind Brandtsegg of the Norwegian University of Science and Technology. Essentially, this project involves cross-adaptive audio effects, where the processing applied to one audio signal is dependent on analysis of other signals. We’ve used this concept quite a lot to build intelligent music production systems. But in this project, Øyvind and his collaborators are exploring creative uses of cross-adaptive audio effects in live performance. The effects applied to one source may change depending on what and how another performer plays, so a performer may change what they play to overtly influence everyone else’s sound, thus taking the interplay in a jam session to a whole new level.
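As a simple illustration of what “the processing applied to one audio signal is dependent on analysis of other signals” can mean in practice, here is a minimal, hypothetical sketch of a cross-adaptive gain effect in Python: the short-term level of one performer’s signal (the controller) drives the attenuation applied to another (the target), a ducking-like behaviour. The function name and parameters (`frame`, `depth_db`) are illustrative only and not drawn from the project itself.

```python
import numpy as np

def cross_adaptive_gain(target, controller, frame=1024, depth_db=-12.0):
    """Toy cross-adaptive effect: attenuate `target` when `controller` is loud.

    Both inputs are mono arrays at the same sample rate. The analysis of one
    signal (per-frame RMS of the controller) determines the processing applied
    to the other (a frame-by-frame gain reduction).
    """
    out = np.asarray(target, dtype=float).copy()
    max_cut = 10 ** (depth_db / 20.0)                    # deepest attenuation, linear
    ref = np.sqrt(np.mean(controller ** 2)) + 1e-12       # overall controller level
    for start in range(0, len(out), frame):
        block = controller[start:start + frame]
        rms = np.sqrt(np.mean(block ** 2)) if block.size else 0.0
        # Louder controller -> stronger attenuation of the target, down to max_cut
        amount = min(rms / ref, 1.0)
        gain = 1.0 + amount * (max_cut - 1.0)
        out[start:start + frame] *= gain
    return out
```

In a live setting the same idea would run block by block on the incoming streams, and the control mapping could drive any effect parameter (delay time, filter cutoff, reverb amount) rather than just gain.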

One of the neat things they’ve done to get this project off the ground is create a blog, http://crossadaptive.hf.ntnu.no/ , which is a great way to get all the reports and reflections out there quickly and widely.

This got me thinking of a few other blogs that we should mention. First and foremost is Prof. Trevor Cox of the University of Salford’s wonderful blog, ‘The Sound Blog: Dispatches from Acoustic and Audio Engineering,’ available at https://acousticengineering.wordpress.com/ . This blog was one of the principal inspirations for our own blog here.

Another leading researcher’s interesting blog is https://marianajlopez.wordpress.com/ – Mariana is looking into aspects of sound design that I feel really don’t get enough attention from the academic community… yet. Hopefully, that will change soon.

There are plenty of blogs about music production. A couple of good ones are http://thestereobus.com/ and http://productionadvice.co.uk/blog/ . They are full of practical advice, insights and tutorials.

A lot of the researchers in the Audio Engineering team have their own personal blogs, which discuss their research, their projects and various other things related to their career or just cool technologies.

See:

http://brechtdeman.com/blog.html – Brecht De Man’s blog. He’s researching semantic and knowledge engineering approaches to music production systems (and a lot more).

https://auralcharacter.wordpress.com/ – Alessia Milo’s blog. She’s looking at (and listening to) soundscapes, and their importance in architecture.

http://davemoffat.com/wp/ – Dave Moffat is investigating evaluation of sound synthesis techniques, and how machine learning can be applied to synthesize a wide variety of sound effects.

https://rodselfridge.wordpress.com/ – Rod Selfridge is looking at real-time physical modelling techniques for procedural audio and sound synthesis.

More to come on all of them, I’m sure.

Let us know of any other blogs that we should mention, and we’ll update this entry or add new entries.

Welcome to our blog

Hi everyone, and welcome to our new blog, provisionally titled ‘Intelligent Sound Engineering.’ We are the Audio Engineering research team within the Centre for Digital Music at Queen Mary University of London.

This blog is for us to discuss anything of interest to us. It will touch on research subjects like audio effects, sound synthesis, music production, acoustics, psychoacoustics, intelligent systems design and more. But we’ll also chat about any interesting news items, what we (or our colleagues) have been doing, and what it is like to be engaged in academic research.

Don’t forget to check out our YouTube channel, IntelligentSoundEng, with around 50 videos.

Please get in touch.