C4DM at recent Audio Engineering Society Conferences

Featuring contributions from Dave Moffat and Brecht De Man



As you might know, or can guess, we’re heavily involved in the Audio Engineering Society, which is the foremost professional organisation in this field. We had a big impact at two of their recent conferences.

The 60th AES Conference on Dereverberation and Reverberation of Audio, Music and Speech took place in Leuven, Belgium, 3-5 February 2016. The conference was built around a European-funded project of the same name (DREAMS – http://www.dreams-itn.eu/) and aimed to bring together expertise in reverberation and reverberation removal.

The conference started out with a fantastic overview of reverberation technology, and how it has progressed over the past 50 years, by Vesa Välimäki. The day then went on to present work on object-based coding of reverberation and computational dereverberation techniques.

Day two started with Thomas Brand discussing sound spatialisation and how listeners are much more tolerant of reverberation in binaural listening conditions. Further work was then presented on physical modelling approaches to reverberation simulation, user perception, and spatialisation of audio in the binaural context.

Day three began with Emanuël Habets presenting on the past 50 years of reverberation removal, noting that for as long as we have been modelling reverberation, we have also been trying to remove it from audio signals. Work was then presented on multichannel dereverberation and computational sound field modelling techniques.

The Audio Engineering group from C4DM were there in strength, presenting two papers and a demo session. David Moffat presented work on the impact dereverberation can make when combined with state-of-the-art source separation technologies. Emmanouil Theofanis Chourdakis presented a hybrid model which, based on machine learning, can intelligently apply reverberation to an audio track. Brecht De Man presented his latest research, as part of the demo session and again in a plenary lecture, on the analysis of studio mixing practices, focused on the perception of reverberation in multitrack mixes.


The following week was the AES Audio for Games conference in London. This is the fifth game audio conference they’ve had, and we’ve been involved in this conference series since its inception in 2009. C4DM researchers Dave Moffat, Will Wilkinson and Christian Heinrichs all presented work related to sound synthesis and procedural audio, which is becoming a big focus of our efforts (more to come!).

Brecht De Man put together an excellent report of the conference, where you can find out a lot more.

A short history of graphic and parametric equalization

Early equalizers were fixed and integrated into the circuits of audio receivers or phonograph playback systems. The advent of motion picture sound saw the emergence of variable equalization. Notably, John Volkman’s external equalizer design from the 1930s featured a set of selectable frequencies with boosts and cuts, and is sometimes considered to be the first operator-variable equalizer.
Throughout the 1950s and 1960s, equalizers grew in popularity, finding applications in sound post-production and speech enhancement. The Langevin Model EQ-251A, an early program equalizer with slide controls, was a precursor to the graphic equalizer. One slider controlled a bass shelving filter, and the other provided peaking boost/cut with four switchable center frequencies; each filter used a 15-position slide switch to adjust the gain. Cinema Engineering introduced the first graphic equalizer, which could boost or cut each of six bands. However, with graphic equalizers, engineers were still limited to the constraints imposed by the number and location of the bands.
By 1967, Saul Walker had introduced the API 550A equalizer, whose bandwidth is inherently altered relative to the amount of signal boosted. This EQ, like others of its time, featured a fixed selection of frequencies, with variable boost or cut controls at those frequencies. In 1971, Daniel Flickinger invented an important tunable equalizer. His circuit, known as ‘sweepable EQ’, allowed arbitrary selection of frequency and gain in three overlapping bands.
In 1966, Burgess Macneal and George Massenburg began work on a new recording console. Macneal and Massenburg, who was still a teenager, conceptualized an idea for a sweep-tunable EQ that would avoid inductors and switches. Soon after, Bob Meushaw, a friend of Massenburg, built a three-band, frequency-adjustable, fixed-Q equalizer. When asked who invented the parametric equalizer, Massenburg stated “four people could possibly lay claim to the modern concept: Bob Meushaw, Burgess Macneal, Daniel Flickinger, and myself… Our (Bob’s, Burgess’ and my) sweep-tunable EQ was borne, more or less, out of an idea that Burgess and I had around 1966 or 1967 for an EQ… three controls adjusting, independently, the parameters for each of three bands for a recording console… I wrote and delivered the AES paper on Parametrics at the Los Angeles show in 1972… It’s the first mention of ‘Parametric’ associated with sweep-tunable EQ.”
  • Bohn, D.A. Operator adjustable equalizers: An overview. In Proc. Audio Eng. Soc. 6th Int. Conf.: Sound Reinforcement; 1988; pp. 369–381.
  • Reiss, J.D.; McPherson, A. Filter effects (Chapter 4). In Audio Effects: Theory, Implementation and Application; CRC Press: Boca Raton, FL, USA, 2015; pp. 89–124.
  • Flickinger, D. Amplifier system utilizing regenerative and degenerative feedback to shape the frequency response. U.S. Patent 3,752,928, 1973.
  • Massenburg, G. Parametric equalization. In Proc. Audio Eng. Soc. 42nd Conv.; 1972.
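The three independent controls per band that Massenburg describes — centre frequency, gain and Q — map directly onto the peaking filter found in modern digital parametric EQs. As a rough illustration (not any particular console's circuit), here is a sketch of one band using the widely used ‘Audio EQ Cookbook’ biquad formulas:

```python
import cmath
import math

def peaking_biquad(fs, f0, gain_db, q):
    """Coefficients for one parametric EQ band: a peaking biquad with
    independently adjustable centre frequency f0, gain (dB) and Q.
    Based on the well-known RBJ Audio EQ Cookbook formulas."""
    a_lin = 10 ** (gain_db / 40.0)          # sqrt of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * a_lin
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * a_lin
    a0 = 1 + alpha / a_lin
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / a_lin
    # normalise so the first denominator coefficient is 1
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def magnitude_db(b, a, fs, f):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))
```

For example, a band set to 1 kHz, +6 dB, Q = 1 measures exactly +6 dB at its centre frequency, and changing any one of the three parameters leaves the other two untouched — precisely the independence that distinguished the parametric design from fixed-frequency program EQs.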

Wah-wah wacka-wacka

The wah-wah audio effect is incredibly expressive. It’s associated with whole genres of music, and it can be heard on many of the most influential funk, soul, jazz and rock recordings of the past 50 years.

Jimi Hendrix would sometimes use the wah-wah effect while leaving the pedal in a fixed position, creating a distinctive filter effect that did not change over time. However, in ‘Voodoo Child (Slight Return)’, Hendrix muted the strummed strings while rocking the pedal, creating a percussive effect. The sweeping of the wah-wah pedal is more dramatic in the louder verses and the chorus, emphasizing the song’s blues styling.

The ‘wacka-wacka’ sound that Hendrix created soon became a trademark of a whole subgenre of 1970s funk and soul. Melvin ‘Wah-Wah Watson’ Ragin, a highly respected Motown session musician, is renowned for his use of the wah-wah pedal, especially on The Temptations’ ‘Papa Was A Rolling Stone’. This distinctive ‘wacka-wacka’ funk style soon became a feature of urban black crime dramas, such as in Isaac Hayes’ ‘Theme from Shaft,’ Bobby Womack’s score to ‘Across 110th Street’ and Curtis Mayfield’s ‘Superfly.’

Another unusual use of the wah-wah pedal can be heard on the Pink Floyd song ‘Echoes.’ Here, screaming sounds were created by plugging the pedal in back to front, that is, the amplifier was connected to the input and the guitar was connected to the pedal’s output.

Of course, the use of wah pedals is not reserved for guitar alone. Bass players have used wah-wah pedals on well-known recordings (Michael Henderson playing with Miles Davis, Cliff Burton of Metallica, …). John Medeski and Garth Hudson use the pedal with Clavinets. Rick Wright employed a wah-wah pedal on a Wurlitzer electric piano on the Pink Floyd song ‘Money,’ and Dick Sims used it with a Hammond organ. Miles Davis’s ensembles used it extensively, both on trumpet and on electric pianos. The wah-wah is also frequently used by electric violinists, such as Boyd Tinsley of the Dave Matthews Band. Wah-wah pedals applied to amplified saxophone feature on albums by Frank Zappa and David Bowie.
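Under the hood, a wah-wah is essentially a resonant filter whose centre frequency tracks the pedal position. A toy digital sketch, with a low-frequency oscillator standing in for the rocking pedal (the sweep range, rate and resonance values below are illustrative, not modelled on any real pedal):

```python
import math

def wah(x, fs, f_min=400.0, f_max=2000.0, rate_hz=2.0, q=4.0):
    """Very simple digital wah-wah: a state-variable filter whose
    centre frequency is swept between f_min and f_max by a sinusoidal
    LFO standing in for the pedal. Returns the bandpass output."""
    y = []
    low = band = 0.0
    for n, s in enumerate(x):
        # pedal position, 0..1, here driven by an LFO
        pos = 0.5 * (1 + math.sin(2 * math.pi * rate_hz * n / fs))
        fc = f_min + (f_max - f_min) * pos
        f = 2 * math.sin(math.pi * fc / fs)   # SVF tuning coefficient
        high = s - low - band / q             # highpass node
        band += f * high                      # bandpass node
        low += f * band                       # lowpass node
        y.append(band)
    return y
```

Rocking the pedal sweeps the resonant peak through the midrange, which is what produces the characteristic vowel-like ‘wah’; holding the pedal still, as Hendrix sometimes did, simply freezes `pos` at one value and leaves a fixed resonant colouration.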