Analogue matched digital EQ: How far can you go linearly?

(Background post for the paper “Improving the frequency response magnitude and phase of
analogue-matched digital filters” by John Flynn & Josh Reiss for AES Milan 2018)

Professional audio mastering is a field that is still dominated by analogue hardware. Many mastering engineers still favour their go-to outboard compressors and equalisers over digital emulations. As a practising mastering engineer myself, I empathise. Quality analogue gear has a proven sonic track record spanning about a century. Even though digital approximations of analogue tools have improved, particularly over the past decade, I too have tended to reach for analogue hardware. However, through my research at Queen Mary with Professor Josh Reiss, that is changing.

When modelling an analogue EQ, much of the focus has been on modelling distortion and other non-linearities; we chose instead to look at the linear component. Have we reached a ceiling in modelling an analogue prototype filter in the digital domain? Can we do better? We found that yes, there was room for improvement, and yes, we can do better.

The milestone of research in this area is Orfanidis’ 1997 paper “Digital parametric equalizer design with prescribed Nyquist-frequency gain”, the first major improvement over the bilinear transform, which has a renowned ‘cramped’ sound in the high frequencies. Basically, the bilinear transform is what all first-generation digital equalisers are based on. Its response towards 20kHz drops sharply, giving a ‘closed/cramped’ sound. Orfanidis, and later improvements by Massberg [9] and Gunness/Chauhan [10], give a much better approximation of an analogue prototype.
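The cramping is easy to see numerically. Here is a small sketch (my own illustrative example, not from the paper): a second-order analogue peaking boost is digitised with the bilinear transform, and the two responses are compared at the top of the audio band. The filter parameters are arbitrary choices for the demonstration.

```python
import numpy as np

# Illustrative example: bilinear-transform a 2nd-order analogue peaking EQ
# and compare its gain with the analogue prototype near Nyquist.
# Parameters (centre frequency, Q, gain) are arbitrary for the demo.
fs = 44100.0
fc, Q, gain_db = 10000.0, 0.7, 6.0
wc = 2 * np.pi * fc
g = 10 ** (gain_db / 20)

# Analogue prototype: H(s) = (s^2 + (g*wc/Q)s + wc^2) / (s^2 + (wc/Q)s + wc^2)
B = [1.0, g * wc / Q, wc**2]   # numerator coefficients of s^2, s, 1
A = [1.0, wc / Q, wc**2]       # denominator coefficients

# Bilinear transform: s -> K(1 - z^-1)/(1 + z^-1), with K = 2*fs
K = 2 * fs
def bilinear(c):
    c2, c1, c0 = c
    return np.array([c2 * K**2 + c1 * K + c0,
                     2 * (c0 - c2 * K**2),
                     c2 * K**2 - c1 * K + c0])
b, a = bilinear(B), bilinear(A)

def h_analogue(f):
    s = 1j * 2 * np.pi * f
    return (s**2 + B[1] * s + B[2]) / (s**2 + A[1] * s + A[2])

def h_digital(f):
    z = np.exp(1j * 2 * np.pi * f / fs)
    zv = np.array([1, z**-1, z**-2])
    return (b @ zv) / (a @ zv)

f_ny = fs / 2
print(20 * np.log10(abs(h_analogue(f_ny))))  # ~3.4 dB: the boost is still active
print(20 * np.log10(abs(h_digital(f_ny))))   # 0 dB: the 'cramped' digital response
```

At Nyquist the bilinear-transformed filter collapses to unity gain while the analogue prototype is still boosting by over 3 dB; this gap is the cramping that Orfanidis and later authors set out to remove.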


While [9] and [10] improve the magnitude response, they don’t capture analogue phase. Bizarrely, the bilinear transform performs reasonably well on phase. So we knew a good phase match was possible.

So the problem is: how do you get a more accurate magnitude match to analogue than [9] and [10], while also getting a good match to phase? Many attempts, including complicated iterative Parks–McClellan filter design approaches, fell flat. It turned out that Occam was right: in this case the simple answer was the better answer.

By combining a matched-z transform, frequency-sampling filter design and a little clever coefficient manipulation, we achieved excellent results: a match to the analogue prototype to an arbitrary degree of accuracy. At low filter lengths you get a filter that performs as well as [9] and [10] in magnitude but also matches analogue phase. With longer filter lengths, the match to analogue is extremely precise in both magnitude and phase (lower error is more accurate).



Since submitting the paper I have released the algorithm in a plugin with my mastering company, and have been getting informal feedback from other mastering engineers about how it sounds in use.


Overall the word back has been overwhelmingly positive, with one engineer claiming it to be “the best sounding plugin EQ on the market to date”. It’s nice to know that those long hours staring at decibel error charts have not been in vain.

Are you heading to AES Milan next month? Come up and say hello!



The edgiest tone yet…

As my PhD is coming to an end and the writing phase is getting more intense, it seemed about time I described the last of the aeroacoustic sounds I have implemented as a sound effect model. On May 24th at the 144th Audio Engineering Society Convention in Milan, I will present ‘Physically Derived Synthesis Model of an Edge Tone.’
The edge tone is the sound created when a planar jet of air strikes an edge or wedge. It is probably most often encountered as the means of excitation for flue instruments, such as the recorder, piccolo, flute and pipe organ. In a recorder, for example, air blown through the mouthpiece forms a planar jet that strikes a wedge. The forces generated couple with the tube body of the recorder, and a tone based on the dimensions of the tube is generated.


Mouthpiece of a recorder


The edge tone model I have developed is viewed in isolation rather than coupled to a resonator, as in the musical instrument examples. While researching the edge tone, it became clear to me that this tone has not had the same attention as the Aeolian tone I have previously modelled (here), but a volume of research and data was available to help understand and develop this model.

How does the edge tone work?

The most important process in generating the edge tone is the set up of a feedback loop from the nozzle exit to the wedge. This is similar to the process that generates the cavity tone which I discussed here. The diagram below will help with the explanation.


Illustration of jet of air striking a wedge


The air comes out of the nozzle and travels towards the wedge. A jet of air naturally has some instabilities, which are magnified as the jet travels and reaches the wedge. At the wedge, vortices are shed on opposite sides of the wedge and an oscillating pressure pulse is generated. The pressure pulse travels back towards the nozzle and reinforces the instabilities. At the correct frequency (wavelength) a feedback loop is created and a strong discrete tone can be heard.



To make the edge tone more complicated, if the air speed is varied, or the distance between the nozzle exit and the wedge is varied, different modes exist. The values at which the modes change also exhibit hysteresis – the mode changes up and down do not occur at the same airspeed or distance.

Creating a synthesis model

There are a number of equations defined by researchers in the fluid dynamics field, each unique but dependent on an integer mode number. Nowhere in my search did I find a method of predicting the mode number. Unlike previous modelling approaches, I decided to collate all the results I had where the mode number was given, from both wind tunnel measurements and computational simulations. These were then input to the Weka machine learning workbench and a decision tree was devised, which was then implemented to predict the mode number.
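The learned tree itself lives in the paper; the toy sketch below only shows the shape of such a predictor. The split thresholds, and the choice of jet speed and nozzle–wedge distance as inputs, are invented placeholders rather than the values learned in Weka from the collated data.

```python
def predict_mode(u, h):
    """Toy decision tree for the edge tone mode number.

    u: jet speed (m/s), h: nozzle-to-wedge distance (mm).
    The split values are invented placeholders, NOT the thresholds
    learned from the wind tunnel and simulation data in the paper;
    they only illustrate the structure of such a predictor.
    """
    if h < 5.0:
        return 1
    if u < 15.0:
        return 1 if h < 12.0 else 2
    if u < 30.0:
        return 2 if h < 20.0 else 3
    return 3

print(predict_mode(10.0, 4.0))   # close to the wedge: mode 1
print(predict_mode(20.0, 25.0))  # fast jet, far wedge: mode 3
```

A tree like this is cheap to evaluate per audio block, which is one reason it suits a real-time sound effect model.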


All the published prediction equations had a significant error compared to the measured and simulated results, so again the collated results were used to create a new equation to predict the frequency for each mode.
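The refitted equations themselves are in the paper, but Brown’s classic semi-empirical formula from the edge tone literature gives a feel for what these predictors look like: frequency as a function of jet speed, nozzle–wedge distance and a per-mode constant. (The formula below is Brown’s published one, quoted here only as an example of the kind of equation being compared against.)

```python
# Brown's (1937) semi-empirical edge tone formula, an example of the
# published predictors referred to above:
#   f = 0.466 * j * (U - 40) * (1/h - 0.07)
# with U the jet speed in cm/s, h the nozzle-to-wedge distance in cm,
# and j a mode constant: 1.0, 2.3, 3.8, 5.4 for modes 1-4.
MODE_CONSTANTS = {1: 1.0, 2: 2.3, 3: 3.8, 4: 5.4}

def brown_frequency(U_cms, h_cm, mode):
    j = MODE_CONSTANTS[mode]
    return 0.466 * j * (U_cms - 40.0) * (1.0 / h_cm - 0.07)

# e.g. a 15 m/s jet, 1 cm from the wedge, in mode 1:
print(brown_frequency(1500.0, 1.0, 1))  # ~633 Hz
```

Note the integer mode enters only through the constant j, which is why a separate mode predictor is needed before any frequency equation can be applied.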


With the mode predicted, and the frequency predicted from it, the actual sound synthesis was generated by noise shaping: a white noise source and a bandpass filter. The Q value for the filter was unknown but, as with the cavity tone, it is known that the more turbulent the flow, the smaller and more diffuse the vortices, and the wider the band of frequencies around the predicted edge tone. The Q value for the bandpass was set in proportion to this.
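A minimal sketch of that noise-shaping stage: white noise through a bandpass biquad (RBJ cookbook form, a standard choice rather than necessarily the model’s exact filter) centred on the predicted edge tone frequency, with Q standing in for the turbulence-dependent bandwidth. The frequency and Q values are illustrative.

```python
import numpy as np

fs = 48000
f0 = 1000.0   # predicted edge tone frequency (illustrative value)
Q = 50.0      # stands in for turbulence: more turbulent -> lower Q, wider band

# RBJ-cookbook constant-peak-gain bandpass biquad centred on f0
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
b = np.array([alpha, 0.0, -alpha]) / (1 + alpha)
a1 = -2 * np.cos(w0) / (1 + alpha)
a2 = (1 - alpha) / (1 + alpha)

# shape 1 s of white noise with the biquad (direct form I loop)
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)
y = np.zeros_like(x)
x1 = x2 = y1 = y2 = 0.0
for n, xn in enumerate(x):
    yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a1 * y1 - a2 * y2
    y[n] = yn
    x2, x1 = x1, xn
    y2, y1 = y1, yn

# the spectral peak of the shaped noise sits near the predicted tone
peak_hz = np.argmax(np.abs(np.fft.rfft(y))) * fs / len(y)
print(peak_hz)
```

Lowering Q widens the band around f0, giving the more diffuse, breathier tone associated with more turbulent flow.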

And what next…?

Unlike the Aeolian tone, where I was able to create a number of sound effects, the edge tone has not yet been implemented into a wider model. This is due to time rather than anything else. One area of further development which would be of great interest would be to couple the edge tone model to a resonator to emulate a musical instrument. Some previous synthesis models use a white noise source as an excitation, or a signal based on the residual between an actual sample and the model of the resonator.


Once a standing wave has been established in the resonator, the edge tone locks in at that frequency rather than the one predicted in the equation. So the predicted edge tone may only be present while a musical note is in the transient state but it is known that this has a strong influence over the timbre and may have interesting results.


For an analysis of whistles and how their design affects their sound, check out this article. The feedback mechanism described for the edge tone is also very similar to the one that generates the hole tone. This is the discrete tone generated by a boiling kettle, usually produced by a circular jet striking a plate with a circular hole, with a feedback loop established between them.


Hole tone from a kettle


A very similar tone can be generated by a vertical take-off and landing vehicle when the jets from the lift fans are pointing down to the ground or deck. These are both areas for future development and where interesting sound effects could be made.


Vertical take-off of a Harrier jet


Sound Synthesis – Are we there yet?

TL;DR: Yes.

At the beginning of my PhD, I began to read the sound effect synthesis literature, and I quickly discovered that there was little to no standardisation or consistency in the evaluation of sound effect synthesis models – particularly in relation to the sounds they produce. Surely one of the most important aspects of a synthesis system is whether it can artificially produce a convincing replacement for what it is intended to synthesize. We could have the most tractable and relatable sound model in the world, but if it does not sound anything like it is intended to, will any sound designers or end users ever use it?

There are many different methods for measuring how effective a sound synthesis model is. Jaffe proposed evaluating synthesis techniques for music based on ten criteria. However, only two of the ten criteria actually consider any sounds made by the synthesiser.

This is crazy! How can anyone know what synthesis method can produce a convincingly realistic sound?

So, we performed a formal evaluation study, where a range of different synthesis techniques were compared in a range of different situations. Some synthesis techniques are indistinguishable from a recorded sample, in a fixed-medium environment. In short – yes, we are there yet. There are sound synthesis methods that sound more realistic than high quality recorded samples. But there is clearly so much more work to be done…

For more information read this paper

Creative projects in sound design and audio effects

This past semester I taught two classes (modules), Sound Design and Digital Audio Effects. In both classes, the final assignment involves creating an original work using audio programming and concepts taught in class. But the students also have a lot of free rein to experiment and explore their own ideas.

The results are always great. Lots of really cool ideas, many of which could lead to a publication, or would be great to listen to regardless of the fact that it was an assignment. Here’s a few examples.

From the Sound Design class:

  • Synthesizing THX’s audio trademark, Deep Note. This is a complex sound, ‘a distinctive synthesized crescendo that glissandos from a low rumble to a high pitch’. It was created by the legendary James Moorer, who is responsible for some of the greatest papers ever published in the Journal of the Audio Engineering Society.
  • Recreating the sound of a Space Shuttle launch, with separate components for ‘Air Burning/Lapping’ and ‘Flame Eruption/Flame Exposing’ by generating the sounds of the Combustion chain and the Exhaust chain.
  • A student created a soundscape inspired by Marin Sorescu’s Romanian play ‘Jonah (A four scenes tragedy)’, published in 1968, when Romania was ruled by the communist regime. By carefully modulating the volume of filtered noise, she was able to achieve some great synthesis of waves crashing on a shore.
  • One student made a great drum and bass track, manipulating samples and mixing in some of his own recorded sounds. These included a nice ‘thud’ made by filtering the sound of a tightened towel, percussive sounds from shaking rice in a plastic container, and the sizzling sound of frying bacon for tape hiss.
  • Synthesizing the sound of a motorbike, including engine startup, gears and driving sound, gear lever click and indicator.
  • A short audio piece to accompany a ghost story, using synthesised and recorded sounds. What I really like is that the student storyboarded it.


  • A train on a stormy day, which had the neat trick of converting a footstep synthesis model into the chugging of a train.
  • The sounds of the London Underground, doors sliding and beeping, bumps and breaks… all fully synthesized.

And from the Digital Audio Effects class:

  • An autotune specifically for bass guitar. We discussed auto-tune and its unusual history previously.
  • Sound wave propagation causes temperature variation, but the speed of sound is a function of temperature. Notably, the positive half cycle of a wave (compression) causes an increase in temperature and velocity, while the negative half (rarefaction) causes a decrease in temperature and velocity, turning a sine wave into something like a sawtooth. This effect is only significant in high pressure sound waves. It’s also frequency dependent; high frequency components travel faster than low frequency components.
    Mark Daunt created a MIDI instrument as a VST Plug-in that generates sounds based on this shock-wave formation formula. Sliders allow the user to adjust parameters in the formula and use a MIDI keyboard to play tones that express characteristics of the calculated waveforms.

  • Synthesizing applause, a subject which we have discussed here before. The student has been working in this area for another project, but made significant improvements for the assignment, including adding presets for various conditions.
  • A student devised a distortion effect based on waveshaping in the form of a weighted sum of Legendre polynomials. These are interesting functions, and her resulting sounds are surprising and pleasing. It’s the type of work that could be taken a lot further.
  • One student had a bug in an implementation of a filter. Noticing that it created some interesting sounds, he managed to turn it into a cool original distortion effect.
  • There’s an Octagon-shaped room with strange acoustics here on campus. Using a database of impulse response measurements from the room, one student created a VST plug-in that allows the user to hear how audio sounds for any source and microphone positions. In earlier blog entries, we discussed related topics, acoustic reverberators and anechoic chambers.


  • Another excellent sounding audio effect was a spectral delay using the phase vocoder, with delays applied differently depending on frequency bin. This created a sound like ‘stars falling from the sky’. Here’s a sine sweep before and after the effect is applied.
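The wave-steepening behind the shock-wave project described above can be sketched with a simple, purely illustrative time-warp model (not the student’s formula): each sample of a sine wave arrives earlier in proportion to its own amplitude, and resampling onto a uniform grid steepens the wave towards a sawtooth.

```python
import numpy as np

fs = 48000
f = 100.0
t = np.arange(fs) / fs             # 1 s uniform time grid
u = np.sin(2 * np.pi * f * t)      # source waveform

# Toy model of nonlinear propagation: high-pressure (positive) samples
# arrive early, low-pressure samples late. The warp strength eps is an
# arbitrary illustrative value, kept below 1/(2*pi*f) so the arrival
# times stay monotonic.
eps = 0.8 / (2 * np.pi * f)
arrival = t - eps * u              # compression arrives earlier
warped = np.interp(t, arrival, u)  # resample onto the uniform grid

# the leading edges steepen: the maximum slope grows markedly
print(np.max(np.abs(np.diff(u))), np.max(np.abs(np.diff(warped))))
```

With the warp at 0.8 of the monotonicity limit, the steepest slope grows by roughly a factor of five, which is the sine-to-sawtooth tendency the project exploits.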

There were many other interesting assignments (plucked string effect for piano synthesizer, enhanced chorus effects, inharmonic resonator, an all-in-one plug-in to recreate 80s rock/pop guitar effects…). But this selection really shows both the talent of the students and the possibilities to create new and interesting sounds.