International Congress on Sound and Vibration (ICSV) London 2017

The International Congress on Sound and Vibration (ICSV) may not be the first conference you would think of for publishing the results of research into a sound effect, but that is exactly where we have just been. I presented our paper on the Real-Time Physical Model of an Aeolian Harp to a worldwide audience of top researchers in sound and vibration.


The Congress opened with a keynote from Professor Eric Heller discussing acoustics, resonance and formants, followed by a whole day of musical acoustics chaired by Professor Murray Campbell from Edinburgh University. One interesting talk was given by Stephen Dance of London South Bank University, describing a hearing study of music students. Their results showed that the hearing of the music students improved over the 3 years of their course, even though none of the students would wear ear protection while playing. The only degradation of hearing was experienced by oboe players. Possible reasons include the fast attack time of the instrument and the fact that the oboe players sit directly in front of the brass players when playing in an orchestra.


The opening day also had a talk titled Artificial neural network based model for the crispness impression of the potato chip sounds by Ercan Altinsoy from Dresden University of Technology. This research looked into the acoustical properties of food and the impression of freshness that listeners infer from them.


I presented my research on the Real-time physical model of an aeolian harp, describing the sound synthesis of this unusual musical instrument. The synthesis model captures the interaction between the mechanical vibration properties of each string and the vortices shed as the wind blows around them.


The session ended with Application of sinusoidal curves to shape design of chord sound plate and experimental verification by Bor-Tsuen Wang of the Department of Mechanical Engineering, National Pingtung University of Science and Technology, Pingtung, Taiwan. This work reviews the design concept of the chord sound plate (CSP), a uniform-thickness plate whose special curved shape is designed with a Bézier curve (B-curve) method. The CSP generates a percussion sound containing three tones that correspond to the note frequencies of a triad chord.


A presentation from Gaku Minorikawa, Hosei University, Department of Mechanical Engineering, Faculty of Science and Engineering, Tokyo, Japan, titled Prediction for noise reduction and characteristics of flow induced noise on axial cooling fan, discussed his research into the reduction of noise from fans – highly relevant to audio engineers who want studio computers to be as quiet as possible.


There was an interesting session on the noise experienced in open plan offices and how other noise sources are introduced to apply acoustic masking to certain areas. The presentation by Charles Edgington, Practical considerations and experiences with sound masking’s latest technology, illustrated practical implementations of such masking and the considerations that have to be made.


The testing of a number of water features within an open plan office was presented in Audio-visual preferences of water features used in open-plan offices by Zanyar Abdalrahman from Heriot-Watt University, School of Energy, Geoscience, Infrastructure and Society, Edinburgh. Here a number of water feature constructions were examined.


The difficulty of understanding the speech of participants in both rooms of a video conference was researched by Charlotte Hauervig-Jørgensen from the Technical University of Denmark in Subjective rating and objective evaluation of the acoustic and indoor climate conditions in video conferencing rooms. Moving away from office acoustics to house construction, I saw a fascinating talk by Francesco D’Alessandro, University of Perugia, titled Straw as an acoustic material, which investigates the acoustic properties of straw bale constructions.


One session was dedicated to Sound Field Control and 3D Audio, with a total of 18 papers presented on this topic. Filippo Fazi from the University of Southampton presented A loudspeaker array for 2 people transaural reproduction, which introduced a signal processing approach for performing two-listener transaural reproduction using a combination of two single-listener cross-talk cancellation (CTC) beamformers, so that the CTC is maximised at one listener position while the beamformer side-lobes radiate little energy towards the other listening position.


Another session running was Thermoacoustics research in a gender-balanced setting, in which female and male speakers alternated in presenting their work on thermoacoustics. Francesca Sogaro from Imperial College London presented her work on Sensitivity analysis of thermoacoustic instabilities. Presenting Sensescapes facilitating life quality, Frans Mossberg of the Sound Environment Center at Lund University, Sweden, examined research into what can be done to raise awareness of the significance of sense- and soundscapes for health, wellbeing and communication.


The hearing aid is a complex yet common device used to assist those suffering from hearing loss. In their paper on Speech quality enhancement in digital hearing aids: an active noise control approach, Somanath Pradhan (Indian Institute of Technology Gandhinagar) attempted to overcome the limitations of noise reduction techniques by introducing a reduced-complexity integrated active noise cancellation approach, along with noise reduction schemes.


Through a combination of acoustic computer modelling, network protocol, game design and signal processing, the paper Head-tracked auralisations for a dynamic audio experience in virtual reality sceneries proposes a method for bridging acoustic simulations and interactive technologies, i.e. fostering a dynamic acoustic experience for virtual scenes via VR-oriented auralisations. This was presented by Eric Ballesteros, London South Bank University.


The final day also included a number of additional presentations from our co-author, Dr Avital, including Differences in the Non Linear Propagation of Crackle and Screech and Aerodynamic and Aeroacoustic Re-Design of Low Speed Blade Profile. The conference’s final night concluded with a banquet at the Sheraton Park Lane Hotel in its Grade II listed ballroom. The night included a string quartet, awards and Japanese opera singing. Overall, this was a conference with a vast number of presentations from many different fields.

SMC Conference, Espoo, Finland

I have recently returned from the 14th Sound and Music Computing Conference, hosted by Aalto University in Espoo, Finland. All 4 days were full of variety and quality, ensuring there was something of interest for all. There were also live performances during an afternoon session and on 2 evenings, as well as the banquet on Hanasaari, a small island in Espoo. This provided a friendly framework for all the delegates to interact, making new connections or renewing old ones.
The paper presentations were the main content of the programme, with presenters from all over the globe. Papers that stood out for me included Explorations with Digital Control of MIDI-enabled Pipe Organs by Johnty Wang et al., where I heard the movements of an unborn child control the audio output of a pipe organ. I also became aware of the Championship of Standstill, in which participants are challenged to stand still while a number of musical pieces are played – The Musical Influence on People’s Micromotion when Standing Still in Groups.
Does Singing a Low-Pitch Tone Make You Look Angrier? Well, it certainly looked like it in this interesting presentation! A social media music app was presented in Exploring Social Mobile Music with Tiny Touch-Screen Performances, where users can interact with others by layering 5 second clips of sound to create a collaborative mix.
Analysis and synthesis were well represented, with a presentation on Virtual Analog Simulation and Extensions of Plate Reverberation by Silvan Willemson et al. and The Effectiveness of Two Audiovisual Mappings to Control a Concatenative Synthesiser by Augoustinos Tiros et al. The paper Virtual Analog Model of the Lockhart Wavefolder explained a method of modelling a West Coast style analogue synthesiser.
Automatic mixing was also represented. Flavio Everard’s paper, Towards an Automated Multitrack Mixing Tool using Answer Set Programming, cited at least 8 papers from the Intelligent Audio Engineering group at C4DM.
In total 65 papers were presented orally or in the poster sessions with sessions on Music performance analysis and rendering, Music information retrieval, Spatial sound and sonification, Computer music languages and software, Analysis, synthesis and modification of sound, Social interaction, Computer-based music analysis and lastly Automatic systems and interactive performance. All papers are available at http://smc2017.aalto.fi/proceedings.html.
Having been treated to a wide variety of live music, technical papers and meetings with colleagues from around the world, it was an added honour to be presented with one of the Best Paper Awards for our paper on Real-Time Physical Model for Synthesis of Sword Sounds. The conference closed with a short presentation from the next host… SMC2018 – Cyprus!

Sound Synthesis of an Aeolian Harp

Introduction

Synthesising the Aeolian harp is part of a project on synthesising sounds that fall into a class known as aeroacoustics. The synthesis model operates in real-time and is based on the physics that generate the sounds in nature.

The Aeolian harp is an instrument that is played by the wind. It is believed to date back to ancient Greece; legend states that King David hung a harp in a tree to hear it being played by the wind. Aeolian harps became popular in Europe in the Romantic period, and they can be designed as garden ornaments, parts of sculptures or large-scale sound installations.

The sound created by the Aeolian harp has often been described as meditative and inspiring. A poem by Ralph Waldo Emerson describes it as follows:
 
Keep your lips or finger-tips
For flute or spinet’s dancing chips; 
I await a tenderer touch
I ask more or not so much:

Give me to the atmosphere.

[Image: an Aeolian harp]

The harp in the picture is taken from Professor Henry Gurr’s website, which has an excellent review of the principles behind the design and operation of Aeolian harps.
Basic Principles

As air flows past a cylinder, vortices are shed at a frequency that rises with the speed of the air and falls as the cylinder diameter increases. This has been discussed in the previous blog entry on Aeolian tones. We now think of the cylinder as a string, like that of a harp, guitar, violin, etc. When a string of one of these instruments is plucked it vibrates at its natural frequency. The natural frequency is set by the tension, length and mass of the string: it rises with tension and falls as the length or mass increases.
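To make these two relationships concrete, here is a minimal Python sketch of the two frequencies involved. It is an illustration rather than part of the Pure Data model itself, and the fixed Strouhal number of 0.2 and the example string values are assumptions chosen for demonstration only.

```python
import math

def shedding_frequency(airspeed, diameter, strouhal=0.2):
    """Vortex shedding frequency for air flowing past a cylinder: f = St * U / d.

    Rises with airspeed, falls with diameter. A Strouhal number of ~0.2 is a
    common approximation; the full model derives it from the Reynolds number.
    """
    return strouhal * airspeed / diameter

def string_natural_frequency(length, tension, mass_per_length, harmonic=1):
    """Natural frequency of an ideal stretched string: f_n = (n / 2L) * sqrt(T / mu)."""
    return harmonic / (2.0 * length) * math.sqrt(tension / mass_per_length)

# Illustrative values: a 0.8 mm diameter, 1 m string of 0.5 g/m at 60 N tension.
print(shedding_frequency(airspeed=4.0, diameter=0.0008))    # about 1000 Hz
print(string_natural_frequency(1.0, 60.0, 0.0005))          # about 173 Hz
```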

Instead of a pluck or a bow exciting a string, in an Aeolian harp it is the vortex shedding that stimulates the strings. When the frequency of the vortex shedding is in the region of the natural vibration frequency of the string, or one of its harmonics, a phenomenon known as lock-in occurs. While in lock-in, the string vibrates at the relevant harmonic frequency. Over a range of airspeeds the string vibration is the dominant factor dictating the frequency of the vortex shedding; changing the airspeed does not change the frequency of vortex shedding, hence the process is locked in.

While in lock-in, an FM-type acoustic output is generated, giving the harp its unique sound, described by the poet Samuel Taylor Coleridge as a “soft floating witchery of sound”.
Our Model 

As with the Aeolian tone model, we calculate the frequency of vortex shedding for given string dimensions and airspeed. We also calculate the fundamental natural vibration frequency and harmonics of a string given its properties.

There is a specific range of airspeeds that leads to the string vibration and vortex shedding locking in. This range is calculated, and the specific frequencies for the FM acoustic signal are generated. A hysteresis effect on the vibration amplitude, which depends on whether the airspeed is increasing or decreasing, is also implemented.
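As a rough sketch of the lock-in logic only: the ±5 % band below is a placeholder rather than the region our model derives, and the real model also applies different thresholds for rising and falling airspeed to capture the hysteresis.

```python
def locked_in_harmonic(shedding_hz, string_fundamental_hz,
                       max_harmonic=10, band=0.05):
    """Return the harmonic number the string locks in to, or None.

    Lock-in is assumed when the vortex shedding frequency falls within
    +/- `band` (as a fraction) of one of the string's harmonics. The band
    width here is illustrative; in practice it, and the hysteresis on the
    vibration amplitude, come from published measurements.
    """
    for n in range(1, max_harmonic + 1):
        harmonic_hz = n * string_fundamental_hz
        if abs(shedding_hz - harmonic_hz) <= band * harmonic_hz:
            return n
    return None

# Example: shedding at 350 Hz locks in near the 2nd harmonic of a 173 Hz string.
print(locked_in_harmonic(350.0, 173.0))    # -> 2
```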

A user interface is provided that allows a user to select up to 13 strings, adjusting their length, diameter, tension, mass and the amount of damping (which reduces the vibration effects as the harmonic number increases). This interface is shown below and includes presets for a number of different string and wind configurations.

[Image: the Aeolian harp synthesis model user interface]
A copy of the Pure Data patch can be downloaded here. The video below was made to give an overview of the principles, the sounds generated and the variety of Aeolian harp constructions.

The Swoosh of the Sword

When we watch Game of Thrones or play the latest Assassin’s Creed the sound effect added to a sword being swung adds realism, drama and overall excitement to our viewing experience.

There are a number of methods for producing sword sound effects, from filtering white noise with a bandpass filter to solving the fundamental equations of fluid dynamics using finite volume methods. One method investigated by the Audio Engineering research team at QMUL was to use semi-empirical equations from the aeroacoustics community as an alternative to solving the full Navier-Stokes equations. Running in real-time, these provide computationally efficient methods of achieving accurate results – we can model any sword, swung at any speed, and even adjust the model to replicate the sound of a baseball bat or golf club!

The starting point for these sound effect models is that of the Aeolian tone, (see previous blog entry – https://intelligentsoundengineering.wordpress.com/2016/05/19/real-time-synthesis-of-an-aeolian-tone/). The Aeolian tone is the sound generated as air flows around an object, in the case of our model, a cylinder. In the previous blog we describe the creation of a sound synthesis model for the Aeolian tone, including a link to a demo version of the model.

For a sword we take a number of the Aeolian tone models and place them at different positions along a virtual sword. This is shown below:

[Figure: positions of the compact sound sources along the sword]

Each Aeolian tone model is called a compact source. It can be seen that more are placed at the tip of the sword than at the hilt. This is because the acoustic intensity is far higher for faster moving sources. There are 6 sources placed at the tip, spaced at a distance of 7 times the sword diameter. This spacing is a simplification, based on the distance over which the aerodynamic effects become de-correlated. One source is placed at the hilt and the final source is placed equidistant between the last tip source and the hilt.

The complete model is presented in a GUI as shown below:

[Figure: the sword synthesis model GUI]

Referring to both previous figures, it can be seen that the user is able to move the observer position within a 3D space. The thickness of the blade can be set at the tip and at the hilt, as well as the length of the blade. The thickness is then linearly interpolated over the blade length so that the diameter at each source can be calculated.
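The source layout and interpolation described above can be sketched as follows. This is a simplified illustration only: it assumes the swing pivots at the hilt and applies the spacing, interpolation and radius-scaled speed rules described in this post, while the exact implementation in the Pure Data model may differ.

```python
def sword_sources(length, tip_diameter, hilt_diameter, tip_speed,
                  n_tip_sources=6):
    """Position, diameter and speed of each compact source on the blade.

    Distances are measured from the hilt (0.0) towards the tip (length).
    Six sources cluster at the tip, spaced 7 blade diameters apart, one
    sits at the hilt, and one sits midway between the hilt and the
    innermost tip source. The blade thickness is linearly interpolated
    between hilt and tip, and each source's speed is scaled by its radius
    from the hilt (assumed pivot), so the tip source moves at tip_speed.
    """
    positions = [length - i * 7.0 * tip_diameter for i in range(n_tip_sources)]
    positions.append(0.0)                                  # hilt source
    positions.append(positions[n_tip_sources - 1] / 2.0)   # midway source

    sources = []
    for x in positions:
        frac = x / length
        sources.append({
            "position": x,
            "diameter": hilt_diameter + frac * (tip_diameter - hilt_diameter),
            "speed": tip_speed * frac,
        })
    return sources

# Example: a 1 m blade, 10 mm thick at the hilt tapering to 5 mm at the tip.
for source in sword_sources(1.0, 0.005, 0.01, tip_speed=20.0):
    print(source)
```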

The azimuth and elevation of the sword pre- and post-swing can be set. The strike position is fixed at an azimuth of 180 degrees, and this is the point where the sword reaches its maximum speed. The user sets the top speed of the tip from the GUI. The Prime button makes sure all the variables are pushed through to the correct places in the equations, and the Go button triggers the swing.

It can be seen that there are 4 presets. Model 1 is a thin fencing-type sword and Model 2 is a thicker sword. To test the versatility of the model we decided to try modelling a golf club; the preset PGA sets the model to implement this. The golf club model involves making the diameter of the source at the tip much larger, to represent the striking face of a golf club. It was found that those unfamiliar with golf did not identify the sound immediately, so a simple golf ball strike sound is synthesised as the club reaches top speed.

To test versatility further, we created a model to replicate the sound of a baseball bat; this is the preset MLB. It is exactly the same model as the sword, with the dimensions adjusted to the length of a bat plus the tip and hilt thickness. A video with all the preset sounds is given below. This includes two sounds created by a model with reduced physics, LoQ1 and LoQ2, which were created to investigate whether listeners perceive any difference.

The demo model was connected to the animation of a knight character in the Unity game engine. The speed of the sword is directly mapped from the animation to the sound effect model, and the model’s observer position is set to the camera position. A video of the result is given below:

Real-Time Synthesis of an Aeolian tone

Aeroacoustic sounds are generated when air interacts with objects, and they form a unique group of sounds. Examples include a sword swooshing through the air, jet engines and propellers, as well as the wind blowing through cracks. The Aeolian tone is one of the fundamental aeroacoustic sounds; the cavity tone and edge tone are others. When designing these sound effects we want to model these fundamental sounds first; it should then be possible to build a wide range of sound effects on top of them. We want the sounds to be true to the physics generating them and to operate in real-time. Completed effects will be suitable for use in video games, TV, film and virtual or augmented reality.

The Aeolian tone is the sound generated when air moves past a string, cylinder or similar object. It’s the whistling noise we may hear coming from a fence in the wind or the swoosh of a sword. An Aeolian harp is a wind instrument that has been harnessing the Aeolian tone for hundreds of years. In fact, the word Aeolian comes from Aeolus, the Greek god of the winds.

The physics behind this sound….

When air moves past a cylinder, spirals called vortices form behind it, moving away with the air flow. The vortices build up on both sides of the cylinder and detach in an alternating sequence. We call this vortex shedding, and the downstream trail of vortices a von Kármán vortex street. An illustration of this is given below:

[Figure: vortex shedding behind a cylinder, forming a von Kármán vortex street]

As a vortex sheds from each side, there is a change in the lift force from one side to the other. It is the frequency of this oscillating force that sets the fundamental tone frequency, and the sound radiates in a direction perpendicular to the flow. There is also a smaller drag force associated with each vortex shed. It is much smaller than the lift force, at twice the frequency, and radiates parallel to the flow. Both the lift and drag tones have harmonics present.
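A simple way to picture these two radiation directions is with the textbook directivity of an ideal compact dipole; this is standard acoustics rather than an equation taken from our model.

```python
import math

def dipole_intensity_factor(angle_from_axis_rad):
    """Relative intensity of a compact dipole source: strongest along its
    axis and essentially zero at right angles to it (proportional to cos^2)."""
    return math.cos(angle_from_axis_rad) ** 2

# The lift dipole's axis is perpendicular to the flow and the drag dipole's
# axis is parallel to it, so a listener directly downstream mainly hears the
# weaker drag tone, while a listener to the side mainly hears the lift tone.
print(dipole_intensity_factor(0.0))             # on-axis: 1.0
print(dipole_intensity_factor(math.pi / 2))     # broadside: ~0.0
```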

Can we replicate this…?

In 1878 Vincent Strouhal realised there was a relationship between the diameter of a string, the speed at which it travels through the air and the frequency of the tone produced. We find that the Strouhal number varies with the turbulence around the cylinder. Luckily, we have a parameter that represents the turbulence, called the Reynolds number. It is calculated from the viscosity, density and velocity of the air, and the diameter of the string. From this we can calculate the Strouhal number and obtain the fundamental tone frequency.
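Here is a minimal sketch of that chain of calculations. The Strouhal-Reynolds fit below is a widely quoted approximation for a circular cylinder, used here purely for illustration; it is not necessarily the exact empirical data used in our model.

```python
def reynolds_number(airspeed, diameter, kinematic_viscosity=1.5e-5):
    """Reynolds number for air flowing past a cylinder
    (kinematic viscosity of air at roughly 20 C)."""
    return airspeed * diameter / kinematic_viscosity

def strouhal_number(re):
    """Approximate Strouhal number for a circular cylinder.

    St stays close to 0.2 over a wide sub-critical Reynolds number range;
    this simple fit stands in for the empirical data used in the model.
    """
    return 0.198 * (1.0 - 19.7 / re)

def fundamental_tone(airspeed, diameter):
    """Fundamental (lift) tone frequency; the drag tone sits at twice this."""
    st = strouhal_number(reynolds_number(airspeed, diameter))
    return st * airspeed / diameter

# Example: a 5 mm cylinder in a 10 m/s wind gives a tone near 400 Hz.
print(fundamental_tone(10.0, 0.005))
```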

This relationship is the heart of our model and was its launching point. Acoustic sound sources can often be represented by compact sound sources: monopoles, dipoles and quadrupoles. For the Aeolian tone, the compact sound source is a dipole.

We have an equation for the acoustic intensity of the tone, which is proportional to the airspeed to the power of 6. It also accounts for the relationship between the sound source and the listener. The bandwidth around the fundamental tone peak is proportional to the Reynolds number; this relationship is derived from published experimental results.

The acoustic intensity of the vortex wake is also calculated. This is much lower than that of the tone dipole at low airspeeds, but it is proportional to the airspeed to the power of 8. There is little wake sound below the fundamental tone frequency, and above it the wake sound falls off in proportion to the frequency squared.

We use the graphical programming language Pure Data to realise the equations and relationships. A white noise source and bandpass filters can generate the tone sounds and harmonics. The wake noise is a brown noise source shaped by high pass filtering. You can get the Pure Data patch of the model by clicking here.
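For readers who do not use Pure Data, here is a rough offline Python equivalent of that signal chain. The filter orders, bandwidths and gain scalings are illustrative guesses rather than the patch's actual settings; only the overall structure (bandpass-filtered white noise for the tones, high-pass-filtered brown noise for the wake, gains following the airspeed-to-the-6th and airspeed-to-the-8th laws above) follows the description in this post.

```python
import numpy as np
from scipy.signal import butter, lfilter

def aeolian_tone(airspeed, diameter, duration=2.0, fs=44100, strouhal=0.2):
    """Offline sketch of the Aeolian tone signal chain described above."""
    n = int(duration * fs)
    f_tone = strouhal * airspeed / diameter            # fundamental (lift) tone

    def bandpass(x, centre, rel_bw=0.05):
        b, a = butter(2, [centre * (1 - rel_bw) / (fs / 2),
                          centre * (1 + rel_bw) / (fs / 2)], btype="band")
        return lfilter(b, a, x)

    white = np.random.randn(n)
    lift = bandpass(white, f_tone)                     # lift tone
    drag = 0.1 * bandpass(white, 2 * f_tone)           # weaker drag tone at twice the frequency

    brown = np.cumsum(np.random.randn(n))              # brown noise = integrated white noise
    b, a = butter(2, f_tone / (fs / 2), btype="high")  # little wake energy below the tone
    wake = lfilter(b, a, brown)

    # Gains follow the scaling laws above: tone ~ U^6, wake ~ U^8 (arbitrary reference speed).
    tone_gain = (airspeed / 10.0) ** 6
    wake_gain = 0.01 * (airspeed / 10.0) ** 8

    out = tone_gain * (lift + drag) + wake_gain * wake
    return out / np.max(np.abs(out))                   # normalise for listening

samples = aeolian_tone(airspeed=10.0, diameter=0.005)  # a whistle near 400 Hz
```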

Our sound effect operates in real-time and is interactive. A user or game engine can adjust:

  • Airspeed
  • Diameter and length of the cylinder
  • Distance between observer and source
  • Azimuth and elevation between observer and source
  • Panning and gain

We can now use the sound source to build up further models. For example, an airspeed model that replicates the wind can reproduce the sound of wind through a fence, and the swoosh of a sword can be made from several sources lined up in a row, with the speed of each adjusted according to the radius of its arc.

Model complete…?

Not quite. We can calculate the bandwidth of the fundamental tone but have no data for the bandwidth of the harmonics, so in the current model we set them to the same value. The equation for the acoustic intensity of the wake is an approximation: it represents the physics but does not give an exact value, so we have to use best judgement when scaling it relative to the acoustic intensity of the fundamental tone.

A string or wire also has a natural vibration frequency, and there is an interaction between this and the vortex shedding frequency that significantly modifies the sound heard.