AES Berlin 2017: Keynotes from the technical program


The 142nd AES Convention was held last month in the creative heart of Berlin. The four-day program drew more than 2000 attendees and covered workshops, tutorials, technical tours and special events, all related to the latest trends and developments in audio research. But as much as scale, it’s attention to detail that makes AES special. There is as much emphasis on the research side of audio as there is on panels of experts discussing a range of provocative and practical topics.

3D Audio: Recording and Reproduction, Binaural Listening and Audio for VR were arguably the most popular topics among the workshops, tutorials, papers and engineering briefs. However, a significant portion of the program was also devoted to core audio topics such as digital filter design, live audio, loudspeaker design, recording, audio encoding, microphones and music production techniques, to name just a few.

So here at the Audio Engineering research team within C4DM, we bring you what we believe were the highlights: the key talks and most relevant topics of the convention.

The future of mastering

What better way to start AES than with a workshop of mastering experts discussing the future of the field?  Jonathan Wyner (iZotope) introduced the current challenges the discipline faces: demographic, economic and target-format issues that are constantly evolving due to advances in the music technology industry and changes in its consumers.

When discussing the future of mastering, the panel was sceptical of a fully automated future, but pointed out that the main challenge for assistive tools is to understand artistic intention and genre-based decisions without relying on the expert knowledge of the mastering engineer. They concluded that research efforts should go towards an intelligent assistant, able to function as a smart preset that gives mastering engineers a starting point.

Virtual analog modeling of dynamic range compression systems

This paper described a method to digitally model an analogue dynamic range compressor. Based on the analysis of processed and unprocessed audio waveforms, a generic model of dynamic range compression is proposed and its parameters are derived using iterative optimization techniques.

Audio samples were played back, demonstrating the quality of the audio produced by the digital model. However, the parameters of the digital compressor cannot be changed once fitted; addressing this would be an interesting path for future work, as would extending the approach to other audio effects such as equalizers or delay lines.
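To make the idea concrete, here is a minimal sketch of the general approach: a simple feed-forward compressor model (threshold, ratio, attack and release) whose parameters are fitted to a processed/unprocessed pair of recordings by iterative optimization. The model structure and the use of scipy's Nelder-Mead optimizer are our assumptions for illustration, not the paper's exact method.

```python
import numpy as np
from scipy.optimize import minimize

def compress(x, fs, threshold_db, ratio, attack_ms, release_ms):
    """Static-curve compressor with one-pole attack/release gain smoothing."""
    level_db = 20 * np.log10(np.abs(x) + 1e-10)
    gain_db = -np.maximum(level_db - threshold_db, 0.0) * (1.0 - 1.0 / ratio)
    a_att = np.exp(-1.0 / (fs * attack_ms * 1e-3))
    a_rel = np.exp(-1.0 / (fs * release_ms * 1e-3))
    smoothed = np.zeros_like(gain_db)
    g = 0.0
    for n, target in enumerate(gain_db):
        a = a_att if target < g else a_rel      # attack while gain reduction grows
        g = a * g + (1 - a) * target
        smoothed[n] = g
    return x * 10 ** (smoothed / 20)

def fit_compressor(dry, wet, fs):
    """Iteratively optimise the model parameters to match the measured output."""
    def loss(p):
        threshold, ratio, attack, release = p
        if ratio < 1.0 or attack <= 0.0 or release <= 0.0:
            return 1e9                           # keep the search in a sensible region
        return np.mean((compress(dry, fs, *p) - wet) ** 2)
    return minimize(loss, x0=[-20.0, 4.0, 10.0, 100.0], method="Nelder-Mead").x

if __name__ == "__main__":
    fs = 8000
    t = np.arange(fs) / fs
    dry = np.sin(2 * np.pi * 220 * t) * np.linspace(0.05, 1.0, fs)
    wet = compress(dry, fs, -12.0, 3.0, 5.0, 80.0)   # stand-in for the analogue unit
    print(fit_compressor(dry, wet, fs))
```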

Evaluation of alternative audio mixing interfaces

In the paper ‘Formal Usability Evaluation of Audio Track Widget Graphical Representation for Two-Dimensional Stage Audio Mixing Interface’, an evaluation of different graphical track visualization styles is proposed. The multitrack visualizations included text only, circles containing text or instrument icons under different colour conventions, circles with opacity mapped to audio features, and a traditional channel strip mixing interface.

Efficiency was tested, and it was concluded that subjects preferred the instrument icons as well as the traditional mixing interface. Taking into account the many works and proposals on alternative mixing interfaces (2D and 3D), there is still plenty of scope to explore how to build an intuitive, efficient and simple interface capable of replacing the well-known channel strip.

Perceptually motivated filter design with application to loudspeaker-room equalization

This tutorial was based on the engineering brief ‘Quantization Noise of Warped and Parallel Filters Using Floating Point Arithmetic’, in which warped parallel filters are proposed that aim to match the frequency resolution of the human ear.

Using Matlab, we explored various approaches to achieving this goal, including warped FIR and IIR filters, Kautz filters and fixed-pole parallel filters. This provides a very useful toolset for applications such as room EQ, physical modelling synthesis and, perhaps, improving existing intelligent music production systems.
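For a flavour of what "the frequency resolution of the human ear" means here, the sketch below computes the standard first-order allpass warping coefficient that approximately maps the frequency axis onto the Bark scale (Smith and Abel's closed-form fit) and shows where a few frequencies land on the warped axis. It only illustrates the warping itself, not the full warped or parallel filter designs discussed in the brief.

```python
import numpy as np

def bark_warping_coefficient(fs):
    """Allpass coefficient lambda for an approximately Bark-scaled frequency axis."""
    return 1.0674 * np.sqrt(2.0 / np.pi * np.arctan(0.06583 * fs / 1000.0)) - 0.1916

def warped_frequency(f, fs):
    """Frequency that an input frequency f (Hz) maps to on the warped axis."""
    lam = bark_warping_coefficient(fs)
    w = 2 * np.pi * f / fs
    w_warped = w + 2 * np.arctan(lam * np.sin(w) / (1 - lam * np.cos(w)))
    return w_warped * fs / (2 * np.pi)

if __name__ == "__main__":
    fs = 44100
    print(f"lambda = {bark_warping_coefficient(fs):.3f}")
    for f in (100, 1000, 10000):
        print(f"{f} Hz -> {warped_frequency(f, fs):.0f} Hz on the warped axis")
```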

Source Separation in Action: Demixing the Beatles at the Hollywood Bowl

Abbey Road’s James Clarke presented a great poster with the actual algorithm that was used for the remixed, remastered and expanded version of The Beatles’ album Live at the Hollywood Bowl. The method managed to isolate the crowd noise, separating everything that Paul McCartney, John Lennon, Ringo Starr and George Harrison played live in 1964 into clean tracks.

The results speak for themselves (audio comparison). Based on a Non-negative Matrix Factorization (NMF) algorithm, this work provides a great research tool for source separation and the reverse-engineering of mixes.
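As a rough illustration of the kind of processing involved (not Abbey Road's actual implementation), the sketch below factorises a magnitude spectrogram with multiplicative-update NMF and resynthesises one set of components with a soft mask. Which components correspond to the crowd and which to the band would in practice be decided by inspection or clustering; here it is just an assumed split.

```python
import numpy as np
from scipy.signal import stft, istft

def nmf(V, k, n_iter=200, eps=1e-9):
    """Factorise V (freq x time) into W (freq x k) @ H (k x time) with KL-style updates."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ (V / (W @ H + eps))) / (W.T.sum(axis=1, keepdims=True) + eps)
        W *= ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H

def separate(x, fs, k=30, crowd_components=None):
    """Reconstruct one source by soft-masking the mixture STFT."""
    f, t, X = stft(x, fs, nperseg=2048)
    V = np.abs(X)
    W, H = nmf(V, k)
    crowd = crowd_components or list(range(k // 2))   # which components are "crowd" (assumed)
    mask = (W[:, crowd] @ H[crowd, :]) / (W @ H + 1e-9)
    _, y = istft(mask * X, fs, nperseg=2048)
    return y

if __name__ == "__main__":
    fs = 22050
    t = np.arange(2 * fs) / fs
    mix = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.default_rng(1).standard_normal(t.size)
    print(separate(mix, fs).shape)
```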

Other keynotes worth mentioning:

Close Miking Empirical Practice Verification: A Source Separation Approach

Analysis of the Subgrouping Practices of Professional Mix Engineers

New Developments in Listening Test Design

Data-Driven Granular Synthesis

A Study on Audio Signal Processed by “Instant Mastering” Services

The rest of the paper proceedings are available in the AES E-library.

Applause, applause! (thank you, thank you. You’re too kind)

“You must be prepared to work always without applause.”
―  Ernest Hemingway, By-line

In a recent blog entry, we discussed research into the sound of screams. It’s one of those everyday sounds that we are particularly attuned to, but that hasn’t received much research attention. This got me thinking about other under-researched sounds. Applause certainly fits. We all know it when we hear it, and a quick search of famous quotes reveals that there are many ways to describe the many types of applause: thunderous applause, tumultuous applause, a smattering of applause, sarcastic applause, and of course, the dreaded slow hand clap. But from an auditory perspective, what makes it special?

Applause is nothing more than the sound of many people gathered in one place clapping their hands. Clapping your hands together is one of the simplest ways to approximate an impulse, or short broadband sound, without the need for any equipment. Impulsive sounds are used for rhythm, for tagging important moments on a timeline, and for estimating the acoustic properties of a room. Clappers and clapsticks are musical instruments, typically consisting of two pieces of wood that are clapped together to produce percussive sounds. In film and television, clapperboards have widespread use. The clapperboard produces a sharp clap noise that can be easily identified on the audio track, and the shutting of the clapstick at the top of the board can similarly be identified on the visual track. Thus, they are effective for synchronising sound and picture, as well as for designating the start of scenes or takes during production. And in acoustic measurement, if one can produce an impulsive sound at a given location and record the result, one can get an idea of the reverberation that the room will apply to any sound produced from that location.

But a hand clap is a crude approximation of an impulse. Hand claps do not have flat spectra, are not omnidirectional, have significant duration and do not carry much energy. Seetharaman and colleagues investigated the effectiveness of hand claps as impulse sources. They found that, with a small amount of additional but automated signal processing, claps can produce reliable acoustical measurements.
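Here is a minimal sketch of the room-acoustics use case: estimating a reverberation time from a single recorded clap using Schroeder backward integration and a T20-style line fit. Real measurements, including those in Seetharaman and Tarzia's paper, also need octave-band analysis and compensation for the clap's own spectrum, which this omits.

```python
import numpy as np

def rt60_from_clap(recording, fs):
    """Estimate RT60 from a clap recorded in the room of interest."""
    energy = recording.astype(float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]                  # Schroeder backward integral
    edc_db = 10 * np.log10(edc / edc.max() + 1e-12)
    # Fit a line between -5 dB and -25 dB and extrapolate to -60 dB (a "T20" estimate).
    idx = np.where((edc_db <= -5) & (edc_db >= -25))[0]
    slope, _ = np.polyfit(idx / fs, edc_db[idx], 1)      # dB per second
    return -60.0 / slope

if __name__ == "__main__":
    fs = 16000
    t = np.arange(int(1.5 * fs)) / fs
    # Synthetic stand-in for a recorded clap: exponentially decaying noise.
    fake_clap = np.random.default_rng(0).standard_normal(t.size) * np.exp(-t / 0.12)
    print(f"Estimated RT60: {rt60_from_clap(fake_clap, fs):.2f} s")
```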
Hanahara, Tada and Muroi exploited the impulse-like nature of hand claps to devise a means of human-robot communication. The hand claps and their timing are relatively easy for a robot to decode, and not that difficult for a human to encode. But why the authors completely dismissed Morse code and all other simple forms of binary encoding is beyond me. And as voice recognition and related technologies continue to advance, the need for hand clap-based communication diminishes.
So what does a single hand clap sound like? This whole field of applause and clapping studies originated with a well-cited 1987 study by Bruno Repp, “The sound of two hands clapping.” He distinguished eight hand clap positions:
Hands parallel and flat
P1: palm-to-palm
P2: halfway between P1 and P3
P3: fingers-to-palm

Hands held at an angle
A1: palm-to-palm
A2: halfway between A1 and A3
A3: fingers-to-palm
A1+: A1 with hands very cupped
A1-: A1 with hands fully flat

The figure below shows photos of these eight configurations of hand claps, excerpted from Leevi Peltola’s 2004 MSc thesis.

[Figure: the eight hand clap configurations]

Repp’s acoustic analyses and perceptual experiments mainly involved 20 test subjects who were each asked to clap at their normal rate for 10 seconds in a quiet room. The spectra of individual claps varied widely, but there was no evidence of influence of sex or hand size on the clap spectrum. He also measured his own clapping with the eight modes above. If the palms struck each other (P1, A1) there was a narrow frequency peak below 1 kHz together with a notch around 2.5 kHz. If the fingers of one hand struck the palm of the other hand (P3, A3) there was a broad spectral peak near 2 kHz.
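A small sketch of the kind of analysis behind these observations: estimate a clap's power spectrum and find its dominant peak, then apply a rough rule of thumb based on Repp's findings. The 1 kHz threshold is illustrative, not a figure from the paper.

```python
import numpy as np
from scipy.signal import welch

def dominant_peak_hz(clap, fs):
    """Locate the dominant peak in a clap's power spectral density."""
    f, pxx = welch(clap, fs, nperseg=512)
    return f[np.argmax(pxx)]

def guess_configuration(clap, fs):
    peak = dominant_peak_hz(clap, fs)
    return "palm-to-palm (P1/A1)" if peak < 1000 else "fingers-to-palm (P3/A3)"

if __name__ == "__main__":
    fs = 16000
    n = int(0.05 * fs)
    # Synthetic stand-in for a recorded clap: a short decaying noise burst.
    clap = np.random.default_rng(0).standard_normal(n) * np.exp(-np.arange(n) / (0.01 * fs))
    print(dominant_peak_hz(clap, fs), guess_configuration(clap, fs))
```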

Repp then tried to determine whether the subjects were able to extract information about the clapper from listening to the signal. Subjects generally assumed that slow, loud and low-pitched hand claps came from male clappers, and fast, soft and high-pitched hand claps from female clappers. But this was not the case. The speed, intensity and pitch were uncorrelated with sex, and so test subjects could identify the clapper’s sex only slightly better than chance. Perceived differences were attributed mainly to hand configuration rather than hand size.

So much for individuals clapping, but what about applause? That’s when some interesting physics comes into play. Néda and colleagues recorded applause from several theatre and opera performances. They observed that the applause begins with incoherent random clapping, but that synchronization and periodic behaviour develop after a few seconds. This transition can be quite sudden and very strong, and is an unusual example of self-organization in a large coupled system. Néda gives quite a clear explanation of what is happening, and why.
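Néda's model treats each clapper as an oscillator whose period and dispersion change between "fast" and "slow" clapping modes. A minimal way to see how order can emerge in a large coupled system is the classic Kuramoto model, sketched below; this illustrates the general synchronization phenomenon rather than reproducing Néda's specific model or parameters.

```python
import numpy as np

def kuramoto(n=200, coupling=2.0, steps=3000, dt=0.01, seed=0):
    """Simulate n coupled oscillators and return the order parameter over time."""
    rng = np.random.default_rng(seed)
    freqs = rng.normal(2 * np.pi * 2.0, 2 * np.pi * 0.1, n)   # roughly 2 claps per second
    phase = rng.uniform(0, 2 * np.pi, n)                      # start incoherent
    order = []
    for _ in range(steps):
        mean_field = np.exp(1j * phase).mean()
        r, psi = np.abs(mean_field), np.angle(mean_field)     # r = 0 incoherent, 1 in sync
        phase += dt * (freqs + coupling * r * np.sin(psi - phase))
        order.append(r)
    return np.array(order)

if __name__ == "__main__":
    r = kuramoto()
    print(f"order parameter: start {r[:100].mean():.2f} -> end {r[-100:].mean():.2f}")
```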

Here’s a nice video of the phenomenon.

The fact that sonic aspects of hand claps can differ so significantly, and can often be identified by listeners, suggests that it may be possible to tell a lot about the source by signal analysis. Such was the case in work by Jylhä and colleagues, who proposed methods to identify a person by their hand claps, or to identify the configuration (à la Repp’s study) of the hand clap. Christian Uhle looked at the more general question of identifying applause in an audio stream.

Understanding applause, beyond the synchronization phenomenon observed by Néda, is quite useful for encoding applause signals, which so often accompany musical recordings, especially those recordings considered worth redistributing! The important spatial and temporal aspects of applause signals are known to make them particularly tricky to encode and decode. As noted in research by Adami and colleagues, standard perceptual features like pitch or loudness do not do a good job of characterising grainy sound textures like applause. They introduced a new feature, applause density, which is loosely related to the overall clapping rate but derived from perceptual experiments. Just a month before this blog entry, Adami and co-authors published a follow-up paper which used density and other characteristics to investigate the realism of upmixed (mono to stereo) applause signals. In fact, talking with one of the co-authors was a motivation for me to write this entry.

Upmixing is an important problem in its own right. But the placement and processing of sounds for a stereo or multichannel environment can be considered part of the general problem of sound synthesis. Synthesis of clapping and applause sounds was covered in detail, and to great effect, by Peltola and co-authors. They presented physics-based analysis, synthesis and control systems capable of producing individual hand claps or mimicking the applause of a group of clappers. The synthesis models were derived from experimental measurements and built on the work of both Repp and Néda. Researchers here in the Centre for Digital Music’s Audio Engineering research team are trying to build on their work, creating a synthesis system that could incorporate cheering and other aspects of an appreciative crowd. More on that soon, hopefully.
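As a flavour of what such a synthesis system involves, here is a much simpler sketch than Peltola et al.'s physics-based model: a single clap is a short noise burst shaped by a resonator, and applause is many clappers with independent rates and start times. The centre frequencies, decay times and rates are illustrative values, and no coupling between clappers is modelled, so the Néda-style synchronization would need to be added on top.

```python
import numpy as np
from scipy.signal import lfilter

def single_clap(fs, centre_hz=1200.0, decay_s=0.015, rng=None):
    """One clap: a decaying noise burst through a second-order resonator."""
    rng = rng or np.random.default_rng()
    n = int(0.05 * fs)
    noise = rng.standard_normal(n) * np.exp(-np.arange(n) / (decay_s * fs))
    r, w = 0.97, 2 * np.pi * centre_hz / fs
    b, a = [1.0], [1.0, -2 * r * np.cos(w), r * r]
    return lfilter(b, a, noise)

def applause(fs, n_clappers=30, duration_s=5.0, seed=0):
    """Sum many independent clappers with random rates, timbres and start times."""
    rng = np.random.default_rng(seed)
    out = np.zeros(int(duration_s * fs))
    for _ in range(n_clappers):
        rate = rng.uniform(2.0, 4.0)                     # claps per second
        t = rng.uniform(0.0, 1.0 / rate)
        centre = rng.uniform(800.0, 2500.0)
        while t < duration_s - 0.06:
            start = int(t * fs)
            clap = single_clap(fs, centre, rng=rng)
            out[start:start + clap.size] += clap * rng.uniform(0.5, 1.0)
            t += 1.0 / rate + rng.normal(0.0, 0.02)      # slightly irregular timing
    return out / np.max(np.abs(out))

if __name__ == "__main__":
    print(applause(16000).shape)
```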

“I think that’s just how the world will come to an end: to general applause from wits who believe it’s a joke.”
― Søren Kierkegaard, Either/Or, Part I

And for those who might be interested, here’s a short bibliography of applause and hand-clapping references:

1. Adami, A., Disch, S., Steba, G., & Herre, J. ‘Assessing Applause Density Perception Using Synthesized Layered Applause Signals,’ 19th International Conference on Digital Audio Effects (DAFx-16), Brno, Czech Republic, 2016
2. Adami, A.; Brand, L.; Herre, J., ‘Investigations Towards Plausible Blind Upmixing of Applause Signals,’ 142nd AES Convention, May 2017
3. W. Ahmad and A. M. Kondoz, ‘Analysis and Synthesis of Hand Clapping Sounds Based on Adaptive Dictionary,’ ICMC, 2011.
4. K. Hanahara, Y. Tada, and T. Muroi, “Human-robot communication by means of hand-clapping (preliminary experiment with hand-clapping language),” IEEE Int. Conf. on Systems, Man and Cybernetics (ISIC-2007), Oct 2007, pp. 2995–3000.
5. Farner, Snorre; Solvang, Audun; Sæbo, Asbjørn; Svensson, U. Peter ‘Ensemble Hand-Clapping Experiments under the Influence of Delay and Various Acoustic Environments’, Journal of the Audio Engineering Society, Volume 57 Issue 12 pp. 1028-1041; December 2009
6. A. Jylhä and C. Erkut, “Inferring the Hand Configuration from Hand Clapping Sounds,” 11th International Conference on Digital Audio Effects (DAFx-08), Espoo, Finland, 2008.
7. Jylhä, Antti; Erkut, Cumhur; Simsekli, Umut; Cemgil, A. Taylan ‘Sonic Handprints: Person Identification with Hand Clapping Sounds by a Model-Based Method’, AES 45th Conference, March 2012
8. Kawahara, Kazuhiko; Kamamoto, Yutaka; Omoto, Akira; Moriya, Takehiro ‘Evaluation of the Low-Delay Coding of Applause and Hand-Clapping Sounds Caused by Music Appreciation’ 138th AES Convention, May 2015.
9. Kawahara, Kazuhiko; Fujimori, Akiho; Kamamoto, Yutaka; Omoto, Akira; Moriya, Takehiro ‘Implementation and Demonstration of Applause and Hand-Clapping Feedback System for Live Viewing,’ 141st AES Convention, September 2016.
10. Laitinen, Mikko-Ville; Kuech, Fabian; Disch, Sascha; Pulkki, Ville ‘Reproducing Applause-Type Signals with Directional Audio Coding,’ Journal of the Audio Engineering Society, Volume 59 Issue 1/2 pp. 29-43; January 2011
11. Z. Néda, E. Ravasz, T. Vicsek, Y. Brechet, and A.-L. Barabási, “Physics of the rhythmic applause,” Phys. Rev. E, vol. 61, no. 6, pp. 6987–6992, 2000.
12. Z. Néda, E. Ravasz, Y. Brechet, T. Vicsek, and A.-L. Barabási, “The sound of many hands clapping: Tumultuous applause can transform itself into waves of synchronized clapping,” Nature, vol. 403, pp. 849–850, 2000.
13. Z. Néda, A. Nikitin, and T. Vicsek. ‘Synchronization of two-mode stochastic oscillators: a new model for rhythmic applause and much more,’ Physica A: Statistical Mechanics and its Applications, 321:238–247, 2003.
14. L. Peltola, C. Erkut, P. R. Cook, and V. Välimäki, “Synthesis of Hand Clapping Sounds,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 3, pp. 1021–1029, 2007.
15. B. H. Repp. ‘The sound of two hands clapping: an exploratory study,’ J. of the Acoustical Society of America, 81:1100–1109, April 1987.
16. P. Seetharaman, S. P. Tarzia, ‘The Hand Clap as an Impulse Source for Measuring Room Acoustics,’ 132nd AES Convention, April 2012.
17. Uhle, C. ‘Applause Sound Detection’ , Journal of the Audio Engineering Society, Volume 59 Issue 4 pp. 213-224, April 2011

Cool stuff at the Audio Engineering Society Convention in Berlin

The next Audio Engineering Society convention is just around the corner, May 20-23 in Berlin. This is an event where we always have a big presence. After all, this blog is brought to you by the Audio Engineering research team within the Centre for Digital Music, so it’s a natural fit for a lot of what we do.

These conventions are quite big, with thousands of attendees, but not so big that you get lost or overwhelmed. The attendees fit loosely into five categories: the companies, the professionals and practitioners, students, enthusiasts, and the researchers. That last category is where we fit.

I thought I’d give you an idea of some of the highlights of the Convention. These are some of the events that we will be involved in or just attending, but of course, there’s plenty else going on.

On Saturday May 20th, 9:30-12:30, Dave Ronan from the team here will be presenting a poster on ‘Analysis of the Subgrouping Practices of Professional Mix Engineers.’ Subgrouping is a greatly understudied but important part of the mixing process. Dave surveyed 10 award-winning mix engineers to find out how and why they do subgrouping. He then subjected the results to detailed thematic analysis to uncover best practices and insights into the topic.

From 2:45 to 4:15 pm there is a workshop on ‘Perception of Temporal Response and Resolution in Time Domain.’ Last year we published an article in the Journal of the Audio Engineering Society on ‘A meta-analysis of high resolution audio perceptual evaluation.’ There’s a blog entry about it too. The research showed very strong evidence that people can hear a difference between high resolution audio and standard, CD quality audio. But this brings up the question of why. Many people have suggested that the fine temporal resolution of oversampled audio might be perceived. I expect that this workshop will shed some light on this as yet unresolved question.

Overlapping that workshop, there are some interesting posters from 3 to 6 pm. ‘Mathematical Model of the Acoustic Signal Generated by the Combustion Engine‘ is about synthesis of engine sounds, specifically for electric motorbikes. We are doing a lot of sound synthesis research here, and so are always on the lookout for new approaches and new models. ‘A Study on Audio Signal Processed by “Instant Mastering” Services‘ investigates the effects applied to ten songs by various online, automatic mastering platforms. One of those platforms, LandR, was a high tech spin-out from our research a few years ago, so we’ll be very interested in what they found.

For those willing to get up bright and early Sunday morning, there’s a 9 am panel on ‘Audio Education—What Does the Future Hold,’ where I will be one of the panellists. It should have some pretty lively discussion.

Then there are some interesting posters from 9:30 to 12:30. We’ve done a lot of work on new interfaces for audio mixing, so we will be quite interested in ‘The Mixing Glove and Leap Motion Controller: Exploratory Research and Development of Gesture Controllers for Audio Mixing.’ And returning to the subject of high resolution audio, there is ‘Discussion on Subjective Characteristics of High Resolution Audio’ by Mitsunori Mizumachi. Mitsunori was kind enough to give me details about his data and experiments in hi-res audio, which I then used in the meta-analysis paper. He’ll also be looking at what factors affect high resolution audio perception.

From 10:45 to 12:15, our own Brecht De Man will be chairing and speaking in a workshop on ‘New Developments in Listening Test Design.’ He’s quite a leader in this field, and has developed some great software that makes the setup, running and analysis of listening tests much simpler while remaining rigorous.

From 1 to 2 pm, there is the meeting of the Technical Committee on High Resolution Audio, of which I am co-chair along with Vicki Melchior. The Technical Committee aims for comprehensive understanding of high resolution audio technology in all its aspects. The meeting is open to all, so for those at the Convention, feel free to stop by.

Sunday evening at 6:30 is the Heyser lecture. This is quite prestigious: a big talk by one of the eminent people in the field. This one is given by Jörg Sennheiser of, well, Sennheiser Electronic.

Monday morning 10:45-12:15, there’s a tutorial on ‘Developing Novel Audio Algorithms and Plugins – Moving Quickly from Ideas to Real-time Prototypes,’ given by Mathworks, the company behind Matlab. They have a great new toolbox for audio plugin development, which should make life a bit simpler for all those students and researchers who know Matlab well and want to demo their work in an audio workstation.

Again in the mixing interface department, we look forward to hearing about ‘Formal Usability Evaluation of Audio Track Widget Graphical Representation for Two-Dimensional Stage Audio Mixing Interface‘ on Tuesday, 11-11:30. The authors gave us a taste of this work at the Workshop on Intelligent Music Production which our group hosted last September.

In the same session – which is all about ‘Recording and Live Sound‘ so very close to home – a new approach to acoustic feedback suppression is discussed in ‘Using a Speech Codec to Suppress Howling in Public Address Systems‘, 12-12:30. With several past projects on gain optimization for live sound, we are curious to hear (or not hear) the results!

The full program can be explored on the AES Convention planner or the Convention website. Come say hi to us if you’re there!

 

 

Sound Synthesis of an Aeolian Harp

Introduction

Synthesising the Aeolian harp is part of a project on synthesising sounds that fall into a class called aeroacoustics. The synthesis model operates in real time and is based on the physics that generates these sounds in nature.

The Aeolian harp is an instrument that is played by the wind. It is believed to date back to ancient Greece, and legend states that King David hung a harp in a tree to hear it being played by the wind. Aeolian harps became popular in Europe in the Romantic period, and they can be designed as garden ornaments, parts of sculptures or large-scale sound installations.

The sound created by an Aeolian harp has often been described as meditative and inspiring. A poem by Ralph Waldo Emerson describes it as follows:
 
Keep your lips or finger-tips
For flute or spinet’s dancing chips; 
I await a tenderer touch
I ask more or not so much:

Give me to the atmosphere.

[Photo: an Aeolian harp]

 
The harp in the picture is taken from Professor Henry Gurr’s website, which has an excellent review of the principles behind the design and operation of Aeolian harps.
Basic Principles

As air flows past a cylinder, vortices are shed at a frequency that is proportional to the speed of the air and inversely proportional to the cylinder diameter. This has been discussed in the previous blog entry on Aeolian tones. We now think of the cylinder as a string, like that of a harp, guitar or violin. When a string on one of these instruments is plucked it vibrates at its natural frequency. The natural frequency depends on the string’s tension, length and mass: it rises with tension and falls as the length or mass increases.

Instead of a pluck or a bow exciting the string, in an Aeolian harp it is the vortex shedding that stimulates the strings. When the frequency of the vortex shedding is in the region of the natural vibration frequency of the string, or one of its harmonics, a phenomenon known as lock-in occurs. While in lock-in, the string vibrates at the relevant harmonic frequency. Over a range of airspeeds the string vibration is then the dominant factor dictating the frequency of the vortex shedding; changing the air speed does not change the frequency of vortex shedding, hence the process is locked in.
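The sketch below shows the lock-in test at the core of this behaviour: compare the vortex shedding frequency for a string of a given diameter in a given wind speed with the string's natural harmonics, and report which harmonic, if any, captures it. The Strouhal number of 0.2 and the ±5% lock-in band are rough illustrative values rather than the exact figures used in our patch.

```python
import numpy as np

def shedding_frequency(air_speed, diameter, strouhal=0.2):
    """Vortex shedding frequency for a cylinder of a given diameter (Hz)."""
    return strouhal * air_speed / diameter

def string_harmonics(length, tension, mass_per_metre, n_harmonics=10):
    """Fundamental and harmonics of an ideal string (Hz)."""
    f1 = np.sqrt(tension / mass_per_metre) / (2 * length)
    return f1 * np.arange(1, n_harmonics + 1)

def locked_harmonic(air_speed, diameter, length, tension, mass_per_metre, band=0.05):
    f_v = shedding_frequency(air_speed, diameter)
    for n, f_n in enumerate(string_harmonics(length, tension, mass_per_metre), 1):
        if abs(f_v - f_n) <= band * f_n:
            return n, f_n            # lock-in: string vibrates at this harmonic
    return None, f_v                 # no lock-in: plain Aeolian tone at f_v

if __name__ == "__main__":
    # 1 m string, 0.8 mm diameter, 60 N tension, 0.5 g/m, 4 m/s wind (assumed values)
    print(locked_harmonic(4.0, 0.0008, 1.0, 60.0, 0.0005))
```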

While in lock-in, an FM-type acoustic output is generated, giving the harp its unique sound, described by the poet Samuel Taylor Coleridge as a “soft floating witchery of sound”.
Our Model 

As with the Aeolian tone model, we calculate the frequency of vortex shedding for given string dimensions and airspeed. We also calculate the fundamental natural vibration frequency and harmonics of a string given its properties.

There is a specific range of airspeeds that leads to the string vibration and vortex shedding locking in. This range is calculated, and the specific frequencies for the FM acoustic signal are generated. There is also a hysteresis effect on the vibration amplitude, depending on whether the airspeed is increasing or decreasing, which is implemented as well.

A user interface is provided that allows a user to select up to 13 strings, adjusting their length, diameter, tension, mass and the amount of damping (which reduces the vibration effects as the harmonic number increases). The interface, shown below, includes presets for a number of different string and wind configurations.

[Screenshot: the Aeolian harp user interface]
A copy of the Pure Data patch can be downloaded here. The video below was made to give an overview of the principles, the sounds generated and the variety of Aeolian harp constructions.

The Swoosh of the Sword

When we watch Game of Thrones or play the latest Assassin’s Creed the sound effect added to a sword being swung adds realism, drama and overall excitement to our viewing experience.

There are a number of methods for producing sword sound effects, from filtering white noise with a bandpass filter to solving the fundamental equations of fluid dynamics using finite volume methods. One method investigated by the Audio Engineering research team at QMUL was to use semi-empirical equations from the aeroacoustics community as an alternative to solving the full Navier-Stokes equations. Running in real time, these provide a computationally efficient way of achieving accurate results: we can model any sword, swung at any speed, and even adjust the model to replicate the sound of a baseball bat or golf club!

The starting point for these sound effect models is the Aeolian tone (see the previous blog entry – https://intelligentsoundengineering.wordpress.com/2016/05/19/real-time-synthesis-of-an-aeolian-tone/). The Aeolian tone is the sound generated as air flows around an object, in the case of our model a cylinder. In the previous blog we describe the creation of a sound synthesis model for the Aeolian tone, including a link to a demo version of the model.

For a sword, we take a number of the Aeolian tone models and place them at different positions along a virtual sword. This is shown below:

[Figure: compact source positions along the sword]

Each Aeolian tone model is called a compact source. It can be seen that more are placed towards the tip of the sword than towards the hilt. This is because the acoustic intensity is far higher for faster moving sources. There are 6 sources placed at the tip, spaced at a distance of 7 times the sword diameter. This distance is based on when the aerodynamic effects become de-correlated, although this is a simplification. One source is placed at the hilt and the final source is equidistant between the last tip source and the hilt.
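A minimal sketch of how such a layout can be computed is given below: six sources near the tip separated by seven blade diameters, one at the hilt, and one halfway between the hilt and the innermost tip source, with the blade diameter linearly interpolated between hilt and tip (as described further down). The example dimensions are placeholders, not taken from the presets.

```python
import numpy as np

def source_positions(blade_length, tip_diameter, hilt_diameter,
                     n_tip_sources=6, spacing_diameters=7):
    """Compact source positions measured from the hilt (0) towards the tip."""
    spacing = spacing_diameters * tip_diameter
    tip = [blade_length - i * spacing for i in range(n_tip_sources)]
    hilt = 0.0
    middle = (hilt + min(tip)) / 2.0                 # between hilt and innermost tip source
    positions = np.array(sorted([hilt, middle] + tip))
    # Linearly interpolate the blade diameter seen by each source.
    diameters = np.interp(positions, [0.0, blade_length],
                          [hilt_diameter, tip_diameter])
    return positions, diameters

if __name__ == "__main__":
    pos, dia = source_positions(blade_length=0.9, tip_diameter=0.003,
                                hilt_diameter=0.006)
    for p, d in zip(pos, dia):
        print(f"{p:.3f} m from hilt, diameter {d * 1000:.1f} mm")
```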

The complete model is presented in a GUI as shown below:

[Screenshot: the sword model GUI]

Referring to both of the previous figures, it can be seen that the user is able to move the observer position within a 3D space. The thickness of the blade can be set at the tip and the hilt, as well as the length of the blade. The thickness is then linearly interpolated over the blade length so that the diameter at each source can be calculated.

The azimuth and elevation of the sword pre- and post-swing can be set. The strike position is fixed at an azimuth of 180 degrees, and this is the point where the sword reaches its maximum speed. The user sets the top speed of the tip from the GUI. The Prime button makes sure all the variables are pushed through to the correct places in the equations, and the Go button triggers the swing.

It can be seen that there are 4 presets. Model 1 is a thin fencing-type sword and Model 2 is a thicker sword. To test the versatility of the model we decided to try and model a golf club; the preset PGA sets the model to implement this. The golf club model involves making the diameter of the source at the tip much larger, to represent the striking face of the club. It was found that those unfamiliar with golf did not identify the sound immediately, so a simple golf ball strike sound is synthesised as the club reaches top speed.

To test versatility further, we created a model to replicate the sound of a baseball bat: preset MLB. This is exactly the same model as the sword, with the dimensions adjusted to the length of a bat and its tip and hilt thickness. A video with all the preset sounds is given below. This includes two sounds created by a model with reduced physics, LoQ1 & LoQ2, which were created to investigate whether there is any difference in perception.

The demo model was connected to the animation of a knight character in the Unity game engine. The speed of the sword is mapped directly from the animation to the sound effect model, and the model’s observer position is set to the camera position. A video of the result is given below:

Real-Time Synthesis of an Aeolian Tone

Aeroacoustic sounds are generated by the movement of air around objects, and they form a distinctive group of sounds. Examples include a sword swooshing through the air, jet engines, propellers, and the wind blowing through cracks. The Aeolian tone is one of the fundamental aeroacoustic sounds; the cavity tone and edge tone are others. When designing these sound effects we want to model the fundamental sounds first; it should then be possible to build a wide range of sound effects from them. We want the sounds to be true to the physics generating them and to operate in real time. Completed effects will be suitable for use in video games, TV, film and virtual or augmented reality.

The Aeolian tone is the sound generated when air moves past a string, cylinder or similar object. It’s the whistling noise we may hear coming from a fence in the wind, or the swoosh of a sword. The Aeolian harp is a wind instrument that has been harnessing the Aeolian tone for hundreds of years. In fact, the word Aeolian comes from the Greek god of wind, Aeolus.

The physics behind this sound….

When air moves past a cylinder, spirals called vortices form behind it, moving away with the air flow. The vortices build up on both sides of the cylinder and detach in an alternating sequence. We call this vortex shedding, and the downstream trail of vortices a von Kármán vortex street. An illustration of this is given below:

[Figure: vortex shedding behind a cylinder, forming a von Kármán vortex street]

As a vortex sheds from each side there is a change in the lift force from one side to the other. It’s the frequency of this oscillating force that is the fundamental tone frequency. The sound radiates in a direction perpendicular to the flow. There is also a smaller drag force associated with each vortex shed. It is much smaller than the lift force, twice the frequency and radiates parallel to the flow. Both the lift and drag tones have harmonics present.

Can we replicate this…?

In 1878 Vincent Strouhal realized there was a relationship between the diameter of a string, the speed at which it travels through the air, and the frequency of the tone produced. The Strouhal number varies with the turbulence around the cylinder. Luckily, we have a parameter that represents the turbulence, called the Reynolds number. It is calculated from the viscosity and density of air, the velocity of the air, and the diameter of the string. From this we can calculate the Strouhal number and obtain the fundamental tone frequency.
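A minimal sketch of that calculation: compute the Reynolds number, apply an empirical Strouhal-Reynolds relation, and return the fundamental Aeolian tone frequency. The simple fit St ≈ 0.198 (1 - 19.7/Re) is a commonly quoted approximation for the subcritical range, not necessarily the exact relation used in our patch.

```python
def aeolian_tone_frequency(air_speed, diameter, kinematic_viscosity=1.5e-5):
    """Fundamental Aeolian tone frequency (Hz) and Reynolds number for a cylinder."""
    reynolds = air_speed * diameter / kinematic_viscosity
    strouhal = 0.198 * (1.0 - 19.7 / reynolds)       # empirical fit for the subcritical range
    return strouhal * air_speed / diameter, reynolds

if __name__ == "__main__":
    # 2 mm fence wire in a 10 m/s wind (illustrative values)
    f0, re = aeolian_tone_frequency(10.0, 0.002)
    print(f"Re = {re:.0f}, fundamental tone ~ {f0:.0f} Hz")
```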

This relationship is the heart of our model and was its launching point. Acoustic sound sources can often be represented by compact sound sources: monopoles, dipoles and quadrupoles. For the Aeolian tone the compact sound source is a dipole.

We have an equation for the acoustic intensity of the tone. This is proportional to the airspeed to the power of 6, and it also includes the relationship between the sound source and the listener. The bandwidth around the fundamental tone peak is proportional to the Reynolds number, and we calculate it from published experimental results.

The acoustic intensity of the vortex wake is also calculated. This is much lower than that of the tone dipole at low airspeeds, but it is proportional to the airspeed to the power of 8. There is little wake sound below the fundamental tone frequency, and above it the wake sound decreases in proportion to the frequency squared.

We use the graphical programming language Pure Data to realise the equations and relationships. A white noise source and bandpass filters generate the tone sounds and harmonics, while the wake noise is a brown noise source shaped by high-pass filtering. You can get the Pure Data patch of the model by clicking here.
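For readers who do not use Pure Data, here is a minimal sketch of the same signal chain in Python: bandpass-filtered white noise at the lift tone frequency, a weaker component at twice that frequency for the drag tone, and high-pass-shaped brown noise for the wake. The filter orders, bandwidths and relative gains are illustrative only.

```python
import numpy as np
from scipy.signal import butter, lfilter

def bandpass_noise(n, fs, centre, bandwidth, rng):
    """White noise shaped by a bandpass filter around a centre frequency."""
    low = max(centre - bandwidth / 2, 10.0)
    high = min(centre + bandwidth / 2, fs / 2 - 10.0)
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    return lfilter(b, a, rng.standard_normal(n))

def aeolian_tone(fs=44100, duration=2.0, f_lift=900.0, bandwidth=60.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    lift = bandpass_noise(n, fs, f_lift, bandwidth, rng)          # lift dipole tone
    drag = 0.1 * bandpass_noise(n, fs, 2 * f_lift, bandwidth, rng)  # weaker drag tone at 2x
    brown = np.cumsum(rng.standard_normal(n))
    brown /= np.max(np.abs(brown))
    b, a = butter(2, 100.0 / (fs / 2), btype="high")
    wake = 0.05 * lfilter(b, a, brown)                             # high-pass-shaped wake noise
    out = lift + drag + wake
    return out / np.max(np.abs(out))

if __name__ == "__main__":
    print(aeolian_tone().shape)
```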

Our sound effect operates in real time and is interactive. A user or game engine can adjust:

  • Airspeed
  • Diameter and length of the cylinder
  • Distance between observer and source
  • Azimuth and elevation between observer and source
  • Panning and gain

We can now use this sound source to build up further models. For example, an airspeed model that replicates the wind can reproduce the sound of wind through a fence. The swoosh of a sword is a row of sources, each with its speed adjusted according to the radius of its arc.

Model complete…?

Not quite. We can calculate the bandwidth of the fundamental tone but have no data for the bandwidths of the harmonics; in the current model we set them to the same value. The equation for the acoustic intensity of the wake is an approximation: it represents the physics but does not give an exact value, so we have to use our best judgement when scaling it relative to the acoustic intensity of the fundamental tone.

A string or wire has a natural vibration frequency, and there is an interaction between this and the vortex shedding frequency that can modify the sound heard significantly.