Aeroacoustic Sound Effects – Journal Article

I am delighted to announce that my article on Creating Real-Time Aeroacoustic Sound Effects Using Physically Informed Models is in this month's Journal of the Audio Engineering Society. This is an invited article following our best paper award at the Audio Engineering Society 141st Convention in LA. It is an open access article, so free for all to download!

The article extends the original paper by examining how the Aeolian tone synthesis models can be used to create a number of sound effects. The benefit of these models is that they produce plausible sound effects which operate in real time. Users are presented with a number of highly relevant parameters to control the effects, which can be mapped directly to 3D models within game engines.

The basics of the Aeolian tone were given in a previous blog post. To summarise, a tone is generated when air passes around an object and vortices are shed behind it. Fluid dynamic equations are available which allow a prediction of the tone frequency based on the physics of the interaction between the air and object. The Aeolian tone is modelled as a compact sound source.
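To make the prediction concrete, the tone frequency follows the Strouhal relation f = St·u/d, with a Strouhal number of roughly 0.2 for a cylinder (the value quoted later in this post series). A minimal sketch, with an illustrative function name:

```python
def aeolian_tone_hz(airspeed_ms, diameter_m, strouhal=0.2):
    """Vortex-shedding (Aeolian tone) frequency for a cylinder.

    f = St * u / d, with St ~ 0.2 for a cylinder over a wide
    range of flow conditions.
    """
    return strouhal * airspeed_ms / diameter_m

# A 1 cm cylinder in a 10 m/s wind sheds vortices at about 200 Hz.
print(round(aeolian_tone_hz(10.0, 0.01)))  # → 200
```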

To model a sword or similar object a number of these compact sound sources are placed in a row. A previous blog post describes this in more detail. The majority of compact sound sources are placed at the tip as this is where the airspeed is greatest and the greatest sound is generated.
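The clustering of sources towards the tip follows directly from the kinematics: for a blade swung about a pivot, the airspeed at a point grows linearly with its distance from the pivot, u = ω·r. A quick illustrative sketch (the numbers are made up for the example):

```python
def speeds_along_blade(angular_speed_rad_s, length_m, positions):
    """Airspeed u = omega * r at fractional positions along a swung blade.

    positions are fractions of the span (0 = pivot, 1 = tip).
    """
    return [angular_speed_rad_s * p * length_m for p in positions]

# A 1 m blade swung at 10 rad/s: the tip moves ten times faster than a
# point 10% along the span, which is why the compact sound sources (and
# most of the sound) concentrate near the tip.
print(speeds_along_blade(10.0, 1.0, [0.1, 0.5, 1.0]))  # [1.0, 5.0, 10.0]
```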

The behaviour of a sword when being swung has to be modelled, which is then used to control some of the parameters in the equations. This behaviour can be controlled by a game engine, making fully integrated procedural audio models.

The sword model was extended to include objects like a baseball bat and golf club, as well as a broom handle. The compact sound source of a cavity tone was also added to replicate swords which have grooved profiles. Subjective evaluation gave excellent results, especially for thicker objects, which were perceived to be as plausible as pre-recorded samples.

The synthesis model could be extended to look at a range of sword cross sections as well as any influence of the material of the sword. It is envisaged that other sporting equipment which swing or fly through the air could be modelled using compact sound sources.

A propeller sound is common in games and film and is partially based on the sounds generated by the Aeolian tone and vortex shedding. As a blade passes through the air, vortices are shed at a specific frequency along its length. To model individual propeller blades, the profiles of a number of blades were obtained, with specific span lengths (centre to tip) and chord lengths (leading edge to trailing edge).

Another major sound source is the loading sound generated by the torque and thrust. A procedure for modelling these sounds is outlined in the article. Missing from the propeller model are distortion sounds; these are more associated with rotors which turn in the horizontal plane.

An important sound when hearing a propeller-powered aircraft is the engine sound. The engine model used here was based on one from Andy Farnell's book Designing Sound. Once complete, a user is able to select an aircraft from a pre-programmed bank and set the flight path. If linked to a game engine, the physical dimensions and flight paths can all be controlled procedurally.

Listening tests indicate that the synthesis model was as plausible as an alternative method but still not as plausible as pre-recorded samples. It is believed that results may have been more favourable if modelling electric-powered drones and aircraft which do not have the sound of a combustion engine.

The final model exploring the use of the Aeolian tone was that of an Aeolian harp. This is a musical instrument that is activated by wind blowing around the strings. The vortices that are shed behind a string can activate a mechanical vibration if they are around the frequency of one of the string's natural harmonics. This produces a distinctive sound.

The digital model allows a user to synthesise a harp of up to 13 strings. Tension, mass density, length and diameter can all be adjusted to replicate a wide variety of string materials and harp sizes. Users can also control a wind model, modified from one presented in Andy Farnell's book Designing Sound, with control over the amount of gusts. Listening tests indicate that the sound is not as plausible as pre-recorded ones but is as plausible as alternative synthesis methods.
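Under the hood this comes down to the ideal-string relation f_n = (n/2L)·√(T/μ) and checking whether the vortex shedding frequency lands near one of those harmonics. A simplified sketch, with example figures and the 5% tolerance chosen purely for illustration:

```python
import math

def string_harmonics(tension_n, mass_per_m, length_m, n_harmonics=10):
    """Natural frequencies of an ideal string: f_n = (n / 2L) * sqrt(T / mu)."""
    f1 = math.sqrt(tension_n / mass_per_m) / (2.0 * length_m)
    return [n * f1 for n in range(1, n_harmonics + 1)]

def excited_harmonic(shed_hz, harmonics, tolerance=0.05):
    """Return the string harmonic the shedding frequency could excite,
    if any lies within a fractional tolerance, else None."""
    for f in harmonics:
        if abs(shed_hz - f) / f <= tolerance:
            return f
    return None

# Illustrative string: 60 N tension, 1 g/m, 1 m long; wind at 3 m/s
# around a 2.5 mm string sheds vortices at 0.2 * 3 / 0.0025 = 240 Hz,
# close enough to the second harmonic to set the string ringing.
h = string_harmonics(60.0, 0.001, 1.0)
shed = 0.2 * 3.0 / 0.0025
print(round(excited_harmonic(shed, h), 1))
```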

The article describes the design processes in more detail, as well as the fluid dynamic principles each was developed from. All models developed are open source and implemented in Pure Data. Links to these are in the paper, as well as in my previous publications. Demo videos can be found on YouTube.


The edgiest tone yet…

As my PhD is coming to an end and the writing phase is getting more intense, it seemed about time I described the last of the aeroacoustic sounds I have implemented as a sound effect model. On May 24th at the 144th Audio Engineering Society Convention in Milan, I will present ‘Physically Derived Synthesis Model of an Edge Tone.’
The edge tone is the sound created when a planar jet of air strikes an edge or wedge. It is probably most often seen as a means of excitation for flue instruments, such as the recorder, piccolo, flute and pipe organ. For example, in a recorder, air is blown by the mouth through a mouthpiece into a planar jet and then onto a wedge. The forces generated couple with the tube body of the recorder and a tone based on the dimensions of the tube is generated.

 

Mouthpiece of a recorder

 

The edge tone model I have developed is viewed in isolation rather than coupled to a resonator, as in the musical instrument examples. While researching the edge tone it seemed clear to me that this tone has not had the same attention as the Aeolian tone I previously modelled (here), but a volume of research and data was available to help understand and develop this model.

How does the edge tone work?

The most important process in generating the edge tone is the setting up of a feedback loop between the nozzle exit and the wedge. This is similar to the process that generates the cavity tone, which I discussed here. The diagram below will help with the explanation.

 

Illustration of jet of air striking a wedge

 

The air comes out of the nozzle and travels towards the wedge. A jet of air naturally has some instabilities, which are magnified as the jet travels and reaches the wedge. At the wedge, vortices are shed on opposite sides and an oscillating pressure pulse is generated. The pressure pulse travels back towards the nozzle and reinforces the instabilities. At the correct frequency (wavelength), a feedback loop is created and a strong discrete tone can be heard.
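For a feel of the numbers involved, a classic semi-empirical fit often quoted for the edge tone is Brown's equation, f = 0.466·j·(U − 40)(1/h − 0.07), with U in cm/s, h (nozzle-to-edge distance) in cm and a per-mode constant j. This is an illustrative stand-in, not the equation derived in my paper:

```python
def brown_edge_tone_hz(u_ms, h_m, stage=1):
    """Edge tone frequency from Brown's classic semi-empirical fit:
    f = 0.466 * j * (U - 40) * (1/h - 0.07), U in cm/s, h in cm,
    with j a per-mode constant (values as usually quoted)."""
    j = {1: 1.0, 2: 2.3, 3: 3.8, 4: 5.4}[stage]
    u_cms = u_ms * 100.0   # convert m/s to cm/s
    h_cm = h_m * 100.0     # convert m to cm
    return 0.466 * j * (u_cms - 40.0) * (1.0 / h_cm - 0.07)

# A 10 m/s jet striking a wedge 1 cm away, first mode: roughly 416 Hz.
print(round(brown_edge_tone_hz(10.0, 0.01)))
```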


To make the edge tone more complicated, if the air speed is varied, or the distance between the nozzle exit and the wedge is varied, different modes exist. The values at which the modes change also exhibit hysteresis – the mode changes up and down do not occur at the same airspeed or distance.
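In a real-time model this hysteresis can be handled with a simple state machine that uses different up and down thresholds. The threshold values below are purely illustrative, not measured mode boundaries:

```python
def make_mode_tracker(up_threshold_ms, down_threshold_ms):
    """Track the edge tone mode with hysteresis: the jump from mode 1
    to 2 happens at a higher airspeed than the fall back from 2 to 1."""
    state = {"mode": 1}
    def update(u):
        if state["mode"] == 1 and u >= up_threshold_ms:
            state["mode"] = 2
        elif state["mode"] == 2 and u <= down_threshold_ms:
            state["mode"] = 1
        return state["mode"]
    return update

# Sweeping the airspeed up and back down: the mode switches up at
# 12 m/s but does not drop back until the flow slows below 9 m/s.
track = make_mode_tracker(up_threshold_ms=12.0, down_threshold_ms=9.0)
print([track(u) for u in [8, 11, 13, 11, 10, 8]])  # [1, 1, 2, 2, 2, 1]
```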

Creating a synthesis model

There are a number of equations defined by researchers from the fluid dynamics field, each unique but all depending on an integer mode number. Nowhere in my search did I find a method of predicting the mode number. Unlike previous modelling approaches, I decided to collate all the results I had where the mode number was given, from both wind tunnel measurements and computational simulations. These were then input to the Weka machine learning workbench and a decision tree was devised. This was then implemented to predict the mode number.
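The learned tree itself boils down to nested threshold tests on the flow parameters. A toy stand-in sketch – the split points below are invented for illustration and are not the published Weka tree:

```python
def predict_mode(u_ms, h_mm):
    """Toy stand-in for the Weka-derived decision tree: predict the
    edge tone mode number from airspeed and nozzle-to-edge distance.
    Split points are illustrative only."""
    if h_mm < 5.0:
        return 1
    if u_ms < 10.0:
        return 1 if h_mm < 12.0 else 2
    return 2 if h_mm < 12.0 else 3

# A fast jet over a long nozzle-to-edge distance lands in a high mode.
print(predict_mode(15.0, 20.0))  # → 3
```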

 

All the prediction equations had a significant error compared to the measured and simulated results, so the collated results were again used to create a new equation to predict the frequency for each mode.

 

With the mode and the subsequent frequency predicted, the actual sound synthesis was generated by noise shaping a white noise source with a bandpass filter. The Q value for the filter was unknown but, as with the cavity tone, it is known that the more turbulent the flow, the smaller and more diffuse the vortices and the wider the band of frequencies around the predicted edge tone. The Q value for the bandpass was set to be inversely proportional to this.
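The noise-shaping step can be sketched with a single biquad bandpass (RBJ Audio EQ Cookbook coefficients) driven by white noise. In the real model the centre frequency and Q come from the predictions; here they are plain arguments, and the function name is just for illustration:

```python
import math
import random

def bandpass_noise(centre_hz, q, sr=44100, n=4096, seed=0):
    """Shape white noise with one biquad bandpass (RBJ cookbook,
    0 dB peak gain variant). Higher Q (less turbulent flow) gives a
    narrower, more tonal band around the predicted frequency."""
    w0 = 2.0 * math.pi * centre_hz / sr
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha
    rng = random.Random(seed)
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)  # white noise source
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# A narrowband, tone-like band of noise centred at 500 Hz.
samples = bandpass_noise(500.0, 20.0)
```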

And what next…?

Unlike the Aeolian tone, where I was able to create a number of sound effects, the edge tone has not yet been implemented into a wider model. This is due to time rather than anything else. One area of further development which would be of great interest would be to couple the edge tone model to a resonator to emulate a musical instrument. Some previous synthesis models use a white noise source as the excitation, or a signal based on the residual between an actual sample and the model of the resonator.

 

Once a standing wave has been established in the resonator, the edge tone locks in at that frequency rather than the one predicted by the equation. So the predicted edge tone may only be present while a musical note is in its transient state, but it is known that this transient has a strong influence over the timbre, so this may give interesting results.

 

For an analysis of whistles and how their design affects their sound, check out this article. The feedback mechanism described for the edge tone is also very similar to the one that generates the hole tone. This is the discrete tone generated by a boiling kettle, usually by a circular jet striking a plate with a circular hole, with a feedback loop established between them.

 

Hole tone from a kettle

 

A very similar tone can be generated by a vertical take-off and landing vehicle when the jets from the lift fans are pointing down to the ground or deck. These are both areas for future development and where interesting sound effects could be made.

 

Vertical take-off of a Harrier jet

 

The cavity tone……

In September 2017, I attended the 20th International Conference on Digital Audio Effects in Edinburgh. At this conference, I presented my work on a real-time physically derived model of a cavity tone. The cavity tone is one of the fundamental aeroacoustic sounds, similar to the previously described Aeolian tone. The cavity tone commonly occurs in aircraft, when opening bomb bay doors or from the cavities left when the landing gear is extended. Another example of the cavity tone can be heard when swinging a sword with a grooved profile.

The physics of operation can be a little complicated. To keep it simple: air flows over the cavity and comes into contact with air at a different velocity within the cavity. The movement of air at one speed over air at another causes what's known as a shear layer between the two. The shear layer is unstable and flaps against the trailing edge of the cavity, causing a pressure pulse. The pressure pulse travels back upstream to the leading edge and reinforces the instability. This creates a feedback loop which occurs at set frequencies. Away from the cavity the pressure pulse is heard as an acoustic tone – the cavity tone!

A diagram of this is shown below:

Like the previously described Aeolian tone, there are equations to derive the frequency of the cavity tone, based on the length of the cavity and the airspeed. There are a number of modes of operation, usually ranging from 1 to 4. The acoustic intensity has also been defined, based on the airspeed, the position of the listener and the geometry of the cavity.
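A widely used semi-empirical description of these mode frequencies is Rossiter's equation, f_m = (U/L)·(m − α)/(M + 1/κ), where M is the Mach number. The α ≈ 0.25 and κ ≈ 0.57 used below are the commonly quoted constants, given here as an illustrative sketch rather than the exact equations in my model:

```python
def rossiter_mode_hz(u_ms, cavity_len_m, mode, c_ms=343.0,
                     alpha=0.25, kappa=0.57):
    """Cavity tone frequency from Rossiter's semi-empirical formula:
    f_m = (U/L) * (m - alpha) / (M + 1/kappa). The alpha and kappa
    values are the commonly quoted empirical constants."""
    mach = u_ms / c_ms
    return (u_ms / cavity_len_m) * (mode - alpha) / (mach + 1.0 / kappa)

# The first four modes for a 5 cm cavity in a 40 m/s flow.
print([round(rossiter_mode_hz(40.0, 0.05, m)) for m in (1, 2, 3, 4)])
```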

The implementation of an individual mode cavity tone is shown in the figure below. The Reynolds number is a dimensionless measure of the ratio between the inertia and viscous force in the flow and Q relates to the bandwidth of the passband of the bandpass filter.

Comparing our model’s average frequency prediction to published results we found it was 0.3% lower than theoretical frequencies, 2.0% lower than computed frequencies and 6.4% lower than measured frequencies. A copy of the pure data synthesis model can be downloaded here.

 

Applied Science Journal Article

We are delighted to announce the publication of our article titled, Sound Synthesis of Objects Swinging through Air Using Physical Models in the Applied Science Special Issue on Sound and Music Computing.

 

The journal article is a revised and extended version of our paper which won a best paper award at the 14th Sound and Music Computing Conference, held in Espoo, Finland in July 2017. The initial paper presented a physically derived synthesis model used to replicate the sound of sword swings using equations obtained from fluid dynamics, which we discussed in a previous blog entry. In the article we extend the listening tests to include sound effects of metal swords, wooden swords, golf clubs, baseball bats and broom handles, as well as adding in a cavity tone synthesis model to replicate grooves in the sword profiles. Further tests were carried out to see if participants could identify which object our model was replicating by swinging a Wii Controller.
The properties exposed by the sound effects model could be automatically adjusted by a physics engine giving a wide corpus of sounds from one simple model, all based on fundamental fluid dynamics principles. An example of the sword sound linked to the Unity game engine is shown in this video.
 

 

Abstract:
A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.
The Applied Science journal is open access and a copy of our article can be downloaded here.

Physically Derived Sound Synthesis Model of a Propeller

I recently presented my work on the real-time sound synthesis of a propeller at the 12th International Audio Mostly Conference in London. This sound effect is a continuation of my research into aeroacoustic sounds generated by physical models; an extension of my previous work on the Aeolian harp, sword sounds and Aeolian tones.

A demo video of the propeller model attached to an aircraft object in Unity is given here. I use the Unity Doppler effect, which I have since discovered is not the best and adds a high-pitched artefact, but you'll get the idea! The propeller physical model was implemented in Pure Data and transferred to Unity using the Heavy compiler.

So, when I was looking for an indication of the different sound sources in a propeller sound, I found an excellent paper by JE Marte and DW Kurtz (A review of aerodynamic noise from propellers, rotors, and lift fans. Jet Propulsion Laboratory, California Institute of Technology, 1970). This paper provides a breakdown of the different sound sources, replicated for you here.

The sounds are split into periodic and broadband groups. In the periodic sounds, there are rotational sounds associated with the forces on the blade, and interaction and distortion effects. The first rotational sounds are the loading sounds, associated with the thrust and torque of each propeller blade.

To picture these forces, imagine you are sitting on an aircraft wing, looking down the span, travelling at a fixed speed with uniform air flowing over the aerofoil. From your point of view the wing will have a lift force and a drag force associated with it. Now change the aircraft wing to a propeller blade with a similar profile to an aerofoil, spinning at a set RPM. If you are sitting at a point on the blade, the thrust and torque will be constant at that point.

Now stepping off the propeller blade and examining the disk of rotation, the thrust and torque forces will appear as pulses at the blade passing frequency. For example, a propeller with 2 blades rotating at 2400 RPM will have a blade passing frequency of 80Hz. A similar propeller with 4 blades rotating at the same RPM will have a blade passing frequency of 160Hz.
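The blade passing frequency is simply the shaft rate times the blade count, which reproduces the figures above:

```python
def blade_passing_hz(rpm, n_blades):
    """Blade passing frequency: shaft revolutions per second
    multiplied by the number of blades."""
    return rpm / 60.0 * n_blades

print(blade_passing_hz(2400, 2))  # 80.0
print(blade_passing_hz(2400, 4))  # 160.0
```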

Thickness noise is the sound generated as the blade moves the air aside when passing. This sound is found to be small unless the blades are moving close to the speed of sound, 343 m/s (known as Mach 1), and is not considered in our model.

Interaction and distortion effects are associated with helicopter rotors and lift fans. Because these have horizontally rotating blades, an effect called blade slap occurs, where the rotating blade passes through the vortices shed by the previous blade, causing a large slapping sound. Horizontal blades also have AM and FM modulated signals associated with them, as well as other effects. Since we are looking at propellers that spin mostly in the vertical plane, we have omitted these effects.

The broadband sounds of the propeller are closely related to the Aeolian tone models I have spoken about previously. The vortex sounds come from vortex shedding, identical to our sword model. The difference in this case is that a propeller has a set shape which is more like an aerofoil than a cylinder.

In the Aeolian tone paper, published at the AES Convention, LA, 2016, it was found that for a cylinder the frequency can be determined by an equation defined by Strouhal. The ratio of the diameter, frequency and airspeed is given by the Strouhal number, found for a cylinder to be approximately 0.2. In the paper by D Brown and JB Ollerhead (Propeller noise at low tip speeds. Technical report, DTIC Document, 1971), a Strouhal number of 0.85 was found for propellers. This was used in our model, along with the chord length of the propeller instead of the diameter.
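In code, the same Strouhal relation covers both cases, with only the characteristic length and Strouhal number swapped:

```python
def shedding_hz(airspeed_ms, length_m, strouhal):
    """Vortex shedding frequency f = St * u / l. St ~ 0.2 with the
    diameter for a cylinder, vs ~ 0.85 with the chord length for a
    propeller blade, per the values quoted in the text."""
    return strouhal * airspeed_ms / length_m

# A blade section with a 0.1 m chord moving at 100 m/s.
print(round(shedding_hz(100.0, 0.1, 0.85)))  # 850
```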

We also include the wake sound from the Aeolian tone model, which is similar to the turbulence sounds. These are only noticeable at high speeds.

The paper by Marte et al. outlines a procedure by Hamilton Standard, a propeller manufacturer, for predicting the far-field loading sounds. Along with the RPM, number of blades, distance and azimuth angle, we need the blade diameter and engine power. We first decided which aircraft we were going to model; this was determined by the fact that we wanted to carry out a perceptual test and had a limited number of clips of known aircraft.

We settled on a Hercules C130, Boeing B17 Flying Fortress, Tiger Moth, Yak-52, Cessna 340 and a P51 Mustang. The internet was searched for details like blade size, blade profile (to calculate chord lengths along the span of the blade), engine power, top speed and maximum RPM. This gave enough information for the models to be created in Pure Data and the sound effects to be as realistic as possible.

This enables us to calculate the loading sounds and broadband vortex sounds, adding in a Doppler effect for realism. What was missing was an engine sound – the aeroacoustic sounds do not occur in isolation in our model. To rectify this, a model from Andy Farnell's Designing Sound was modified to act as our engine sound.

A copy of the Pure Data software can be downloaded from this site: https://code.soundsoftware.ac.uk/hg/propeller-model. We performed listening tests on all the models, comparing them with an alternative synthesis model (SMS) and the real recordings we had. The tests highlighted that the real sounds are still the most plausible, but our model performed as well as the alternative synthesis method. This is a great result considering the alternative method starts with a real recording of a propeller, analyses it and re-synthesises it. Our model starts with real-world physical parameters like the blade profile, engine power, distance and azimuth angle to produce the sound effect.

An example of the propeller sound effect is mixed into this famous scene from North by Northwest. As you can hear the effect still has some way to go to be as good as the original but this physical model is the first step in incorporating fluid dynamics of a propeller into the synthesis process.

From the editor: Check out all Rod’s videos at https://www.youtube.com/channel/UCIB4yxyZcndt06quMulIpsQ

A copy of the paper published at Audio Mostly 2017 can be found here >> Propeller_AuthorsVersion

12th International Audio Mostly Conference, London 2017

by Rod Selfridge & David Moffat. Photos by Beici Liang.

Audio Mostly – Augmented and Participatory Sound and Music Experiences – was held at Queen Mary University of London from 23 to 26 August. The conference brought together a wide variety of audio and music designers, technologists, practitioners and enthusiasts from all over the world.

The opening day of the conference ran in parallel with the Web Audio Conference, also being held at Queen Mary, with sessions open to all delegates. The day opened with a joint keynote from the computer scientist and author of the highly influential sound effect book Designing Sound, Andy Farnell. Andy covered a number of topics and invited audience participation, which grew into a discussion regarding intellectual property – the pros and cons of doing away with it.

Andy Farnell

The paper session then opened with an interesting talk by Luca Turchet from Queen Mary's Centre for Digital Music. Luca presented his paper on The Hyper Mandolin, an augmented music instrument allowing real-time control of digital effects and sound generators. The session concluded with the second talk I've seen in as many months by Charles Martin. This time Charles presented Deep Models for Ensemble Touch-Screen Improvisation, where an artificial neural network model has been used to implement a live performance with the touch gestures of three virtual players.

In the afternoon, I got to present my paper, co-authored by David Moffat and Josh Reiss, on a Physically Derived Sound Synthesis Model of a Propeller. Here I continue the theme of my PhD by applying equations obtained through fluid dynamics research to generate authentic sound synthesis models.

Rod Selfridge

The final session of the day saw Geraint Wiggins, our former Head of School at EECS, Queen Mary, present Callum Goddard’s work on designing Computationally Creative Musical Performance Systems, looking at questions like what makes performance virtuosic and how this can be implemented using the Creative Systems Framework.

The oral sessions continued throughout Thursday; one presentation that I found interesting was by Anna Xambó, titled Turn-Taking and Chatting in Collaborative Music Live Coding. In this research the authors explored collaborative music live coding using the live coding environment and pedagogical tool EarSketch, focusing on the benefits to both performance and education.

Thursday's keynote was by Goldsmiths' Rebecca Fiebrink, who was mentioned in a previous blog, discussing how machine learning can be used to support human creative experiences, aiding human designers in rapid prototyping and refinement of new interactions within sound and media.

Rebecca Fiebrink

The Gala Dinner and Boat Cruise was held on Thursday evening where all the delegates were taken on a boat up and down the Thames, seeing the sites and enjoying food and drink. Prizes were awarded and appreciation expressed to the excellent volunteers, technical teams, committee members and chairpersons who brought together the event.

Tower Bridge

A session on Sports Augmentation and Health / Safety Monitoring was held on Friday morning, which included a number of excellent talks. The best presentation award of the conference went to Tim Ryan, who presented his paper on 2K-Reality: An Acoustic Sports Entertainment Augmentation for Pickup Basketball Play Spaces. Tim re-contextualises sounds appropriated from a National Basketball Association (NBA) video game to create interactive sonic experiences for players and spectators. I was lucky enough to have a play with this system during a coffee break and can easily see how it could give an amazing experience for basketball enthusiasts, young and old, as well as drawing in a crowd to share it.

Workshops ran on Friday afternoon. I went to Andy Farnell’s Zero to Hero Pure Data Workshop where participants managed to go from scratch to having a working bass drum, snare and high-hat synthesis models. Andy managed to illustrate how quickly these could be developed and included in a simple sequencer to give a basic drum machine.

Throughout the conference a number of fixed-media demos were available for delegates to view, as well as poster sessions where authors presented their work.

Alessia Milo

Live music events were held on both Wednesday and Friday. A joint session titled Web Audio Mostly Concert was held on Wednesday which was a joint event for delegates of Audio Mostly and the Web Audio Conference. This included an augmented reality musical performance, a human-playable robotic zither, the Hyper Mandolin and DJs.

The Audio Mostly Concert on the Friday included a Transmusicking performance from a laptop orchestra from around the world, where 14 different performers collaborated online. The performance was curated by Anna Xambó. Alan Chamberlain and David De Roure performed The Gift of the Algorithm, a computer music performance inspired by Ada Lovelace. The wood and the water was an immersive performance with interactivity and gestural control of both a harp and the lighting, by Balandino Di Donato and Eleanor Turner. GrainField, by Benjamin Matuszewski and Norbert Schnell, was an interactive audio performance that demanded the involvement of the entire audience for the performance to exist; this collective improvisational piece demonstrated how digital technology can really be used to augment the traditional musical experience. GrainField was awarded the prize for the best musical performance.

Adib Mehrabi

The final day of the conference was a full day’s workshop. I attended the one titled Designing Sounds in the Cloud. The morning was spent presenting two ongoing European Horizon 2020 projects, Audio Commons (www.audiocommons.org/) and Rapid-Mix. The Audio Commons initiative aims to promote the use of open audio content by providing a digital ecosystem that connects content providers and creative end users. The Rapid-Mix project focuses on multimodal and procedural interactions leveraging on rich sensing capabilities, machine learning and embodied ways to interact with sound.

Before lunch we took part in a sound walk around the Queen Mary Mile End campus, with one member of each group blindfolded, informing the others what they could hear. The afternoon session had teams of participants designing and prototyping new ways to use the APIs from each of the two Horizon 2020 projects – very much in the feel of a hackathon. We devised a system which captured expressive Italian hand gestures using the Leap Motion and classified them using machine learning techniques. Then, in Pure Data, each new classification triggered a sound effect taken from the Freesound website (part of the Audio Commons project). If time had allowed, the project would have been extended to have Pure Data link to the Audio Commons API and play sound effects straight from the web.

Overall, I found the conference informative, yet informal, enjoyable and inclusive. The social events were spectacular and ones that will be remembered by delegates for a long time.

International Congress on Sound and Vibration (ICSV) London 2017

The International Congress on Sound and Vibration (ICSV) may not be the first conference you would think of for publishing the results of research into a sound effect but that’s exactly what we have just returned from. I presented our paper on the Real-Time Physical Model of an Aeolian harp to a worldwide audience of the top researchers in sound and vibration.

 

The Congress opened with a keynote from Professor Eric Heller discussing acoustic resonance and formants, followed by a whole day of musical acoustics chaired by Professor Murray Campbell from Edinburgh University. One interesting talk was given by Stephen Dance of London South Bank University, where a hearing study of music students was carried out. Their results showed that the hearing of the music students improved over the 3 years of their course, even though none of the students would wear ear protection while playing. The only degradation of hearing was experienced by oboe players; possible reasons being the fast attack time of the instrument and the fact that the oboe players stood directly in front of the brass players when playing as an orchestra.

 

The opening day also had a talk titled Artificial neural network based model for the crispness impression of the potato chip sounds by Ercan Altinsoy from Dresden University of Technology. This research looked into the acoustical properties of food and the impression of freshness inferred from them.

 

I presented my research on the Real-time physical model of an aeolian harp, describing the sound synthesis of this unusual musical instrument. The synthesis model captures the interaction between the mechanical vibration properties of each string and the vortices being shed from the wind blowing around them.

 

The session ended with Application of sinusoidal curves to shape design of chord sound plate and experimental verification by Bor-Tsuen Wang of the Department of Mechanical Engineering, National Pingtung University of Science and Technology, Pingtung, Taiwan. This work reviews the design concept of the chord sound plate (CSP), a uniform-thickness plate with a special curved shape designed by the Bezier curve (B-curve) method. The CSP can generate a percussion sound with three tone frequencies that make up the musical note frequencies of a triad chord.

 

A presentation from Gaku Minorikawa, Hosei University, Department of Mechanical Engineering, Faculty of Science and Engineering, Tokyo, Japan, discussed his research into the reduction of noise from fans – highly relevant to audio engineers who want computers as quiet as possible for a studio: Prediction for noise reduction and characteristics of flow induced noise on axial cooling fan.

 

There was an interesting session on the noise experienced in open-plan offices and how other noise sources are introduced to apply acoustic masking to certain areas. The presentation by Charles Edgington, Practical considerations and experiences with sound masking's latest technology, illustrated practical implementations of such masking and the considerations that have to be made.

 

The testing of a number of water features within an open-plan office was presented in Audio-visual preferences of water features used in open-plan offices by Zanyar Abdalrahman from Heriot-Watt University, School of Energy, Geoscience, Infrastructure and Society, Edinburgh. Here a number of water feature constructions were examined.

 

The difficulty of understanding the speech of participants in both rooms of a video conference was researched by Charlotte Hauervig-Jørgensen from the Technical University of Denmark in Subjective rating and objective evaluation of the acoustic and indoor climate conditions in video conferencing rooms. Moving away from office acoustics to house construction, I saw a fascinating talk by Francesco D'Alessandro, University of Perugia, investigating the acoustic properties of straw bale constructions: Straw as an acoustic material.

 

One session was dedicated to Sound Field Control and 3D Audio, with a total of 18 papers presented on this topic. Filippo Fazi from the University of Southampton presented A loudspeaker array for 2 people transaural reproduction, which introduced a signal processing approach for two-listener transaural reproduction using a combination of two single-listener cross-talk cancellation (CTC) beamformers, so that the CTC is maximised at one listener position while the beamformer side-lobes radiate little energy, so as not to affect the other listening position.

 

Another session was Thermoacoustics research in a gender-balanced setting, for which alternating female and male speakers presented their work on thermoacoustics. Francesca Sogaro from Imperial College London presented her work on Sensitivity analysis of thermoacoustic instabilities. Presenting Sensescapes facilitating life quality, Frans Mossberg of The Sound Environment Center at Lund University, Sweden, is examining what can be done to raise awareness of the significance of sense- and soundscapes for health, wellbeing and communication.

 

The hearing aid is a complex yet common device used to assist those suffering from hearing loss. In his paper on Speech quality enhancement in digital hearing aids: an active noise control approach, Somanath Pradhan (Indian Institute of Technology Gandhinagar) attempts to overcome the limitations of noise reduction techniques by introducing a reduced-complexity integrated active noise cancellation approach, alongside noise reduction schemes.

 

Through a combination of acoustic computer modelling, network protocol, game design and signal processing, the paper Head-tracked auralisations for a dynamic audio experience in virtual reality sceneries proposes a method for bridging acoustic simulations and interactive technologies, i.e. fostering a dynamic acoustic experience for virtual scenes via VR-oriented auralisations. This was presented by Eric Ballesteros, London South Bank University.

 

The final day also included a number of additional presentations from our co-author, Dr Avital, including ‘Differences in the Non Linear Propagation of Crackle and Screech’ and ‘Aerodynamic and Aeroacoustic Re-Design of Low Speed Blade Profile’. The conference's final night concluded with a banquet at the Sheraton Park Lane Hotel in its Grade II listed ballroom. The night included a string quartet, awards and Japanese opera singing. Overall this was a conference with a vast number of presentations from a number of different fields.