The cavity tone…

In September 2017, I attended the 20th International Conference on Digital Audio Effects in Edinburgh, where I presented my work on a real-time physically derived model of the cavity tone. The cavity tone is one of the fundamental aeroacoustic sounds, similar to the previously described Aeolian tone. It commonly occurs in aircraft, when the bomb bay doors are opened or from the cavities exposed when the landing gear is extended. Another example of the cavity tone can be heard when swinging a sword with a grooved profile.

The physics of operation can be a little complicated. To keep it simple: air flowing over the cavity comes into contact with the air at a different velocity within the cavity. The movement of air at one speed over air at another creates what’s known as a shear layer between the two. The shear layer is unstable and flaps against the trailing edge of the cavity, causing a pressure pulse. The pressure pulse travels back upstream to the leading edge and reinforces the instability. This creates a feedback loop which occurs at set frequencies. Away from the cavity, the pressure pulse is heard as an acoustic tone – the cavity tone!

A diagram of this is shown below:

Like the previously described Aeolian tone, there are equations to predict the frequency of the cavity tone, based on the length of the cavity and the airspeed. There are a number of modes of operation, usually ranging from 1 to 4. The acoustic intensity has also been defined, based on the airspeed, the position of the listener and the geometry of the cavity.
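The best-known equation of this kind is Rossiter’s semi-empirical formula, which predicts each mode frequency from the airspeed and the cavity length. Here is a minimal Python sketch of it, using typical published values for the empirical constants; our model itself is a Pure Data patch, so treat this purely as an illustration:

```python
import math

def cavity_tone_frequencies(airspeed, cavity_length, modes=range(1, 5),
                            c=343.0, alpha=0.25, kappa=0.57):
    """Rossiter's formula: f_m = (U / L) * (m - alpha) / (M + 1 / kappa)."""
    mach = airspeed / c  # Mach number of the free stream
    return [(airspeed / cavity_length) * (m - alpha) / (mach + 1.0 / kappa)
            for m in modes]

# e.g. a 5 cm groove in a 50 m/s airflow
for m, f in zip(range(1, 5), cavity_tone_frequencies(50.0, 0.05)):
    print(f"mode {m}: {f:.0f} Hz")
```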

The implementation of an individual cavity tone mode is shown in the figure below. The Reynolds number is a dimensionless measure of the ratio between inertial and viscous forces in the flow, and Q relates to the bandwidth of the passband of the bandpass filter.
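As a rough Python analogue of that block diagram, here is a single mode implemented as white noise driving a peaking (bandpass) filter centred on the predicted frequency. The f0 and Q values below are only illustrative; in the published model the gain and bandwidth are tied to the flow conditions:

```python
import numpy as np
from scipy import signal

fs = 44100    # sample rate, Hz
f0 = 921.0    # one mode frequency, e.g. from the Rossiter sketch above
Q = 30.0      # quality factor: the passband width is f0/Q, so higher Q = purer tone

# White noise stands in for the turbulent excitation; the peaking filter
# picks out the cavity tone at f0.
noise = np.random.randn(int(fs * 2.0))
b, a = signal.iirpeak(f0, Q, fs=fs)
tone = signal.lfilter(b, a, noise)
tone /= np.max(np.abs(tone))  # normalise to +/-1 for playback
```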

Comparing our model’s average frequency prediction to published results, we found it was 0.3% lower than theoretical frequencies, 2.0% lower than computed frequencies and 6.4% lower than measured frequencies. A copy of the Pure Data synthesis model can be downloaded here.


The final whistle blows

Previously, we discussed screams, applause, bouncing and pouring water. Continuing our examination of everyday sounds, we bring you… the whistle.

This one is a little challenging though. To name just a few, there are pea whistles, tin whistles, steam whistles, dog whistles and, of course, human whistling. Covering all of this is a lot more than a single blog entry. So let’s stick to the standard pea whistle or pellet whistle (also called an ‘escargot’ or barrel whistle because of its snail-like shape), which is the basis for a lot of the whistles that you’ve heard.

Typical metal pea whistle, featuring a mouthpiece, a bevelled edge and sound hole where air can escape, a barrel-shaped air chamber, and a pellet inside.

Whistles are the oldest known type of flute. They have a stopped lower end and a flue that directs the player’s breath from the mouth hole at the upper end against the edge of a hole cut in the whistle wall, causing the enclosed air to vibrate. Most whistle instruments have no finger holes and sound only one pitch.

A whistle produces sound from a stream of gas, most commonly air, typically supplied by steam or by someone blowing. The conversion of energy to sound comes from an interaction between the air stream and a solid material.

In a pea whistle, the air stream enters through the mouthpiece. It hits the bevel (the sloped edge at the opening) and splits, outwards into the air and inwards to fill the air chamber. It continues to swirl around and fill the chamber until the air pressure inside is so great that it pops out of the sound hole (a small opening next to the bevel), making room for the process to start over again. The dominant pitch of the whistle is determined by the rate at which air packs and unpacks the air chamber. The movement of air forces the pea or pellet inside the chamber to move around and around. This sometimes interrupts the flow of air, creating a warble in the whistle sound.

The size of the whistle cavity determines the volume of air contained in the whistle, and hence the pitch of the sound produced. The air fills and empties from the chamber many times per second, and this rate gives the fundamental frequency of the sound.
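For a back-of-the-envelope number, that fill-and-empty behaviour can be approximated by treating the chamber as a Helmholtz resonator. This is my own simplification, and the dimensions below are guesses for a typical pea whistle rather than measurements:

```python
import math

def helmholtz_frequency(neck_area, chamber_volume, neck_length, c=343.0):
    """First-order Helmholtz resonance: f = (c / 2*pi) * sqrt(A / (V * L))."""
    return (c / (2.0 * math.pi)) * math.sqrt(
        neck_area / (chamber_volume * neck_length))

# Guessed dimensions: sound hole ~6 mm x 8 mm, chamber ~5 cm^3,
# effective neck length ~5 mm.
f = helmholtz_frequency(neck_area=48e-6, chamber_volume=5e-6, neck_length=5e-3)
print(f"{f:.0f} Hz")  # about 2.4 kHz, in the band seen in the spectrograms below
```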

The whistle construction and the design of the mouthpiece also have a dramatic effect on the sound. A whistle made from thick metal will produce a brighter sound, compared to the more resonant, mellow sound of thinner metal. Modern whistles are produced using different types of plastic, which broadens the range of tones and sounds now available. The design of the mouthpiece can also dramatically alter the sound. Even a few thousandths of an inch difference in the airway, the angle of the blade, or the size or width of the entry hole can make a drastic difference in volume, tone, and chiff (the breathiness or solidness of the sound). And according to the whistle Wiki page, which might be changed by the time you read this, ‘One characteristic of a whistle is that it creates a pure, or nearly pure, tone.’

Well, is all of that correct? When we looked at the sounds of pouring hot and cold water, we found that the simple explanations were not correct. In explaining the whistle, can we go further than some handwaving about the pea causing a warble? Do the different whistles differ a lot in sound?

Let’s start with some whistle sounds. Here’s a great video where you get to hear a dozen referee whistles.

Looking at the spectrogram below, you can see that all the whistles produce dominant frequencies somewhere between 2200 and 4400 Hz. Some other features are also apparent. There seems to be some second and even third harmonic content. And it doesn’t seem to be just one frequency and its overtones. Rather, there are two or three closely spaced frequencies whenever the whistle is blown.

Referee Whistles
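If you’d like to inspect recordings yourself, here is a minimal Python sketch of how such a spectrogram can be computed (‘whistle.wav’ is just a placeholder filename):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile

fs, x = wavfile.read('whistle.wav')  # placeholder recording
if x.ndim > 1:
    x = x.mean(axis=1)               # fold stereo down to mono

f, t, Sxx = signal.spectrogram(x, fs, nperseg=2048, noverlap=1536)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading='auto')
plt.ylim(0, 8000)                    # the whistle energy sits in the low kHz
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)')
plt.show()
```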

But this sound sample is all fairly short whistle blows, which could be why the pitches are not constant. And one should never rely on just one sample or one audio file (as the authors did here). So let’s look at just one long whistle sound.

Spectrogram and waveform of a single long whistle blow.

You can see that it remains fairly constant, and the harmonics are clearly present, though I can’t say if they are partly due to dynamic range compression or any other processing. However, there are semi-periodic dips or disruptions in the fundamental pitch. You can see this more clearly in the waveform, and this is almost certainly due to the pea temporarily blocking the sound hole and weakening the sound.

The same general behaviour appears with other whistles, though with some variation in the dips and their rate of occurrence, and in the frequencies and their strengths.

Once I started writing this blog, it was pointed out to me that Perry Cook had already discussed synthesizing whistle sounds in his wonderful book Real Sound Synthesis for Interactive Applications. In building up part of a model of a police/referee whistle, he wrote

 ‘Experiments and spectrograms using real police/referee whistles showed that when the pea is in the immediate region of the jet oscillator, there is a decrease in pitch (about 7%), an increase in amplitude (about 6 dB), and a small increase in the noise component (about 2 dB)… The oscillator exhibits three significant harmonics: f, 2f and 3f at 0 dB, -10 dB and -25 dB, respectively…’

With the exception of the increase in amplitude due to the pea (was that a typo?), my results are all in rough agreement with his. So depending on whether I’m a glass half empty / glass half full kind of person, I could either be disappointed that I’m just repeating what he did, or glad that my results are independently confirmed.

This information from a few whistle recordings should be good enough to characterise the behaviour and come up with a simple, controllable synthesis. Jiawei Liu took a different approach. In his Master’s thesis, he simulated whistles using computational fluid dynamics and acoustic finite element simulation. It was very interesting work, as was a related approach by Shi, but they’re both a bit like using a sledgehammer to kill a fly: massive effort and lots of computation, when a model that probably sounds just as good could have been derived using semi-empirical equations that model aeroacoustic sounds directly, as discussed in our previous blog entries on sound synthesis of an Aeolian harp, a propeller, sword sounds, swinging objects and Aeolian tones.
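To show just how lightweight such a model can be, here is a toy Python sketch built only from the numbers in Cook’s quote above: three harmonics at 0, -10 and -25 dB, a little breath noise, and the pea crudely modelled as a periodic modulator that drops the pitch by about 7%. It is my own back-of-the-envelope illustration, not Cook’s model or our published approach:

```python
import numpy as np

fs, dur = 44100, 2.0
n = int(fs * dur)
t = np.arange(n) / fs

f0 = 2800.0                      # fundamental, in the referee-whistle range
levels_db = [0.0, -10.0, -25.0]  # Cook's levels for the f, 2f and 3f harmonics

# Crude pea: a modulator that periodically drops the pitch by ~7% (the warble).
# A real pea interrupts the jet far more chaotically than this.
warble = (np.sin(2 * np.pi * 20 * t) > 0.6).astype(float)
inst_f0 = f0 * (1.0 - 0.07 * warble)

# Integrate the instantaneous frequency to get phase, then sum the harmonics
phase = 2 * np.pi * np.cumsum(inst_f0) / fs
out = sum(10 ** (db / 20) * np.sin((k + 1) * phase)
          for k, db in enumerate(levels_db))
out += 0.05 * np.random.randn(n)  # a little breath noise
out /= np.max(np.abs(out))
```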

There’s been some research into automatic identification of referee whistle sounds, for instance the initial work of Shirley and Oldfield in 2011 and then a more advanced algorithm a few years later. But these are either standard machine learning techniques, or based on the most basic aspects of the whistle sound, like its fundamental frequency. In either case, they don’t use much understanding of the nature of the sound. But I suppose that’s fine. They work, they enable intelligent production techniques for sports broadcasts, and they don’t need to delve into the physical or perceptual aspects.

I said I’d stick to pellet whistles, but I can’t resist mentioning a truly fascinating and unusual synthesis of another whistle sound. Steam locomotives were equipped with train whistles for warning and signalling. To generate the sound, the train driver pulls a cord in the driver’s cabin, thereby opening a valve so that steam shoots out of a gap and against the sharp edge of a bell. This makes the bell vibrate rapidly, which creates a whistling sound. In 1972, Herbert Chaudiere created an incredibly detailed sound system for model trains. This analogue electronic system generated all the memorable sounds of the steam locomotive (the bark of exhausting steam, the rhythmic toll of the bell, and the wail of the chime whistle) and reproduced these sounds from a loudspeaker carried in the model locomotive.

The preparation of this blog entry also illustrates some of the problems with crowdsourced metadata and user-generated tagging. When trying to find some good sound examples, I searched the world’s most popular sound effects archive, freesound, for ‘pea whistle’. It came up with only one hit: a recording of steam and liquid escaping from a pot of boiling black-eyed peas!

References:

  • Chaudiere, H. T. (1972). Model Railroad Sound System. Journal of the Audio Engineering Society, 20(8), 650-655.
  • Liu, J. (2012). Simulation of Whistle Noise Using Computational Fluid Dynamics and Acoustic Finite Element Simulation. MSc thesis, University of Kentucky.
  • Shi, Y., da Silva, A., & Scavone, G. (2014). Numerical Simulation of Whistles Using Lattice Boltzmann Methods. ISMA, Le Mans, France.
  • Cook, P. R. (2002). Real Sound Synthesis for Interactive Applications. CRC Press.
  • Oldfield, R. G., & Shirley, B. G. (2011). Automatic mixing and tracking of on-pitch football action for television broadcasts. Audio Engineering Society Convention 130.
  • Oldfield, R., Shirley, B., & Satongar, D. (2015). Application of object-based audio for automated mixing of live football broadcast. Audio Engineering Society Convention 139.

Audio Research Year in Review – Part 2, the Headlines

Last week featured the first part of our ‘Audio research year in review.’ It focused on our own achievements. This week is the second, concluding part, with a few news stories related to the topics of this blog (music production, psychoacoustics, sound synthesis and everything in between) for each month of the year.

Browsing through the list, some interesting things pop up. There were several news stories related to speech intelligibility in broadcast TV, which has been a recurring topic over the last few years. The effect of noise pollution on wildlife is also a theme in this year’s audio research headlines. And quite a few of the psychological studies are telling us what we already know. The fact that musicians (who are trained in a task that involves quick responses to stimuli) have faster reaction times than non-musicians (who may not be trained in such a task) is not a surprise. Nor is the fact that if you hear the cork popping from a wine bottle, you may think the wine tastes better, although that’s a wonderful example of the placebo effect. But studies that end up confirming assumptions are still worth doing.

January

February

March

April

May


June

July

August

September

October

November

December

Sound Talking at the Science Museum featured assorted speakers on sonic semantics


On Friday 3 November, Dr Brecht De Man (Centre for Digital Music, Queen Mary University of London) and Dr Melissa Dickson (Diseases of Modern Life, University of Oxford) organised a one-day workshop at the London Science Museum on the topic of language describing sound, and sound emulating language. We discussed it in a previous blog entry, but now we can wrap up and discuss what happened.

Titled ‘Sound Talking’, it brought together a diverse lineup of speakers around the common theme of sonic semantics. And by diverse we truly mean that: the programme featured a neuroscientist, a historian, an acoustician, and a Grammy-winning sound engineer, among others.

The event was born from a friendship between two academics who had for a while assumed their work could not be more different, with music technology and the history of Victorian literature as their respective fields. Upon learning that their topics both dealt with sound-related language, they set out to find more researchers from maximally different disciplines and make it a day of engaging talks.

After having Dr Dickson as a resident researcher earlier this year, the Science Museum generously hosted the event, providing a very appropriate and ‘neutral’ central London venue. The event was further supported by the Diseases of Modern Life project, funded by the European Research Council, and by the Centre for Digital Music at Queen Mary University of London.

The programme featured (in order of appearance):

  • Maria Chait, Professor of auditory cognitive neuroscience at UCL, on the auditory system as the brain’s early warning system
  • Jonathan Andrews, Reader in the history of psychiatry at Newcastle University, on the soundscape of the Bethlehem Hospital for Lunatics (‘Bedlam’)
  • Melissa Dickson, postdoctoral researcher in Victorian literature at University of Oxford, on the invention of the stethoscope and the development of an associated vocabulary
  • Mariana Lopez, Lecturer in sound production and post production at University of York, on making film accessible for visually impaired audiences through sound design
  • David M. Howard, Professor of Electronic Engineering at Royal Holloway University of London, on the sound of voice and the voice of sound
  • Brecht De Man, postdoctoral researcher in audio engineering at Queen Mary University of London, on defining the language of music production
  • Mandy Parnell, mastering engineer at Black Saloon Studios, on the various languages of artistic direction
  • Trevor Cox, Professor of acoustic engineering at University of Salford, on categorisation of everyday sounds

In addition to this stellar speaker lineup, Aleks Kolkowski (Recording Angels) exhibited an array of historic sound making objects, including tuning forks, listening tubes, a monochord, and a live recording of a wax cylinder. The workshop took place in a museum, after all, where Dr Kolkowski has held a research associateship, so the display was very fitting.

The full programme can be found on the event’s web page. Video proceedings of the event are forthcoming.

Applied Science Journal Article

We are delighted to announce the publication of our article, Sound Synthesis of Objects Swinging through Air Using Physical Models, in the Applied Sciences Special Issue on Sound and Music Computing.


The article is a revised and extended version of our paper which won a best paper award at the 14th Sound and Music Computing Conference, held in Espoo, Finland in July 2017. The initial paper presented a physically derived synthesis model used to replicate the sound of sword swings, using equations obtained from fluid dynamics, which we discussed in a previous blog entry. In the article we extend the listening tests to include sound effects of metal swords, wooden swords, golf clubs, baseball bats and broom handles, as well as adding a cavity tone synthesis model to replicate grooves in the sword profiles. Further tests were carried out to see if participants could identify which object our model was replicating as they swung a Wii Controller.
The properties exposed by the sound effects model could be automatically adjusted by a physics engine, giving a wide corpus of sounds from one simple model, all based on fundamental fluid dynamics principles. An example of the sword sound linked to the Unity game engine is shown in this video.

Abstract:
A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.
The Applied Sciences journal is open access and a copy of our article can be downloaded here.

Bounce, bounce, bounce . . .


Another in our continuing exploration of everyday sounds (Screams, Applause, Pouring water) is the bouncing ball. It’s a nice one for a blog entry since there are only a small number of papers focused on bouncing, which means we can give a good overview of the field. It’s also one of those sounds that we can identify very clearly; we all know it when we hear it. It has two components that can be treated separately; the sound of a single bounce and the timing between bounces.

Let’s consider the second aspect. If we drop a ball from a certain height and ignore any drag, the time it takes to hit the ground is completely determined by gravity. When it hits the ground, some energy is absorbed on impact. And so it may be traveling downwards with a velocity v1 just before impact, and after impact travels upwards with velocity v2. The ratio v2/v1 is called the coefficient of restitution (COR). A high COR means that the ball travels back up almost to its original height, and a low COR means that most energy is absorbed and it only travels up a short distance.

Knowing the COR, one can use simple equations of motion to determine the time between each bounce. And since the sum of the times between bounces is a convergent series, one can find the maximum time until it stops bouncing. Conversely, measuring the coefficient of restitution from the times between bounces is literally a tabletop physics experiment (Aguiar 2003, Farkas 2006, Schwarz 2013). And kinetic energy depends on the square of the velocity, so we know how much energy is lost with each bounce, which also gives an idea of how the sound levels of successive bounces should decrease.

[The derivation of all this has been left to the reader 😊. But again, it’s a straightforward application of the equations of motion that give the time dependence of position and velocity under constant acceleration.]
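For anyone who wants to check the arithmetic, here is a small Python sketch of that calculation, assuming a 1 m drop and a COR of 0.8 (both arbitrary choices):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2
h = 1.0   # drop height, m
e = 0.8   # coefficient of restitution, v2/v1

v0 = math.sqrt(2 * g * h)  # speed just before the first impact
t = math.sqrt(2 * h / g)   # time of the first impact

# Each flight after the nth impact lasts 2*e^n*v0/g. Kinetic energy goes as
# v^2, i.e. as e^(2n), so each bounce is 20*log10(e) dB quieter than the last.
for n in range(1, 6):
    t += 2 * (e ** n) * v0 / g
    print(f"impact {n + 1} at {t:.2f} s, about {20 * n * math.log10(e):.1f} dB down")

# The flight times form a geometric series, so the total bouncing time converges
total = math.sqrt(2 * h / g) + (2 * v0 / g) * e / (1 - e)
print(f"bouncing stops after about {total:.2f} s")
```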

It’s not that hard to extend this approach, for instance by including air drag or sloped surfaces. But if you put the ball on a vibrating platform, all sorts of wonderful nonlinear behaviour can be observed: chaos, locking and chattering (Luck 1993).

For instance, have a look at the following video, which shows some interesting behaviour where bouncing balls all seem to organise onto one side of a partition.

So much for the timing of bounces, but what about the sound of a single bounce? Well, Nagurka (2004) modelled the bounce as a mass-spring-damper system, giving the time of contact for each bounce. This provides a little more realism by capturing some aspects of the bounce sound. Stoelinga (2007) did a detailed analysis of bouncing and rolling sounds. It has a wealth of useful information, and deep insights into both the physics and perception of bouncing, but stops short of describing how to synthesize a bounce.

To really capture the sound of a bounce, something like modal synthesis should be used. That is, one should identify the modes that are excited by the impact of a given ball on a given surface, and their decay rates. Farnell measured these modes for some materials, and used those values to synthesize bounces in Designing Sound. But perhaps the most detailed analysis and generation of such sounds, at least as far as I’m aware, is in the work of Davide Rocchesso and his colleagues, leaders in the field of sound synthesis and sound design. They have produced a wealth of useful work in the area, but an excellent starting point is The Sounding Object.
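As a sketch of what that looks like in practice, here is a minimal modal synthesis of a bounce in Python: each impact excites a handful of exponentially decaying sinusoids, triggered at the impact times from the COR calculation above. The mode frequencies, decays and gains are invented placeholders; real values would be measured from recordings, as Farnell did:

```python
import numpy as np

fs = 44100
# Hypothetical modes for some ball/surface pair: (frequency Hz, decay 1/s, gain)
modes = [(420.0, 30.0, 1.0), (1130.0, 45.0, 0.5), (2390.0, 60.0, 0.25)]

def impact(amplitude, dur=0.3):
    """One impact: a sum of exponentially decaying sinusoids."""
    t = np.arange(int(fs * dur)) / fs
    return amplitude * sum(g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
                           for f, d, g in modes)

g0, h, e = 9.81, 1.0, 0.8
v0 = np.sqrt(2 * g0 * h)     # speed at the first impact
t_hit = np.sqrt(2 * h / g0)  # time of the first impact
out = np.zeros(int(fs * 5))
for n in range(15):
    i = int(t_hit * fs)
    if i >= len(out):
        break
    hit = impact(e ** n)     # impact speed, and so amplitude, falls by e each bounce
    seg = min(len(hit), len(out) - i)
    out[i:i + seg] += hit[:seg]
    t_hit += 2 * (e ** n) * v0 / g0  # flight time to the next impact
out /= np.max(np.abs(out))
```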

Are you aware of any other interesting research about the sound of bouncing? Let us know.

Next week, I’ll continue talking about bouncing sounds with discussion of ‘the audiovisual bounce-inducing effect.’

References

  • Aguiar, C. E., & Laudares, F. (2003). Listening to the coefficient of restitution and the gravitational acceleration of a bouncing ball. American Journal of Physics, 71(5), 499-501.
  • Farkas, N., & Ramsier, R. D. (2006). Measurement of coefficient of restitution made easy. Physics Education, 41(1), 73.
  • Luck, J. M., & Mehta, A. (1993). Bouncing ball with a finite restitution: chattering, locking, and chaos. Physical Review E, 48(5), 3988.
  • Nagurka, M., & Huang, S. (2004). A mass-spring-damper model of a bouncing ball. American Control Conference, 2004, Vol. 1. IEEE.
  • Schwarz, O., Vogt, P., & Kuhn, J. (2013). Acoustic measurements of bouncing balls and the determination of gravitational acceleration. The Physics Teacher, 51(5), 312-313.
  • Stoelinga, C., & Chaigne, A. (2007). Time-domain modeling and simulation of rolling objects. Acta Acustica united with Acustica, 93(2), 290-304.

Sound Talking – 3 November at the London Science Museum

On Friday 3 November 2017, Dr Brecht De Man (one of the audio engineering group researchers) and Dr Melissa Dickson are chairing an unusual and wildly interdisciplinary day of talks, tied together by the theme ‘language describing sound, and sound emulating language’.

Despite being part of the Electronic Engineering and Computer Science department, we think about and work with language quite a lot. After all, audio engineering mostly relates to transferring and manipulating (musical, informative, excessive, annoying) sound, and therefore we need to understand how it is experienced and described. This is especially evident in projects such as the SAFE plugins, where we collect terms that describe a particular musical signal manipulation, to then determine their connection with the chosen process parameters and measured signal properties. So the relationship between sound and language is actually central to Brecht’s research, as well as that of others here.

The aim of this event is to bring together a wide range of high-profile researchers who work on this intersection, from maximally different perspectives. They study the terminology used to discuss sound, the invention of words that capture sonic experience, and the use and manipulation of sound to emulate linguistic descriptions. Talks will address singing voice research, using sound in accessible film for hearing impaired viewers, new music production tools, auditory neuroscience, sounds in literature, the language of artistic direction, and the sounds of the insane asylum. ‘Sounds’ like a fascinating day at the Science Museum!

Register now (the modest fee just covers lunch, breaks, and wine reception) and get to see

  • Maria Chait (head of UCL Auditory Cognitive Neuroscience lab)
  • Jonathan Andrews (on soundscape of the insane asylum)
  • Melissa Dickson (historian of 19th century literature)
  • Mariana Lopez (making film more accessible through sound)
  • David Howard (the singing voice)
  • Brecht De Man (from our group, on understanding the vocabulary of mixing)
  • Mandy Parnell (award winning mastering engineer)
  • Trevor Cox (categorising quotidian sounds)

In addition, there will be a display of cool sound making objects, with a chance to make your own wax cylinder recording, and more!

The full programme including abstracts and biographies can be found on www.semanticaudio.co.uk/events/soundtalking/.