Applied Science Journal Article

We are delighted to announce the publication of our article, 'Sound Synthesis of Objects Swinging through Air Using Physical Models', in the Applied Sciences Special Issue on Sound and Music Computing.

 

The article is a revised and extended version of our paper that won a best paper award at the 14th Sound and Music Computing Conference, held in Espoo, Finland in July 2017. The initial paper presented a physically derived synthesis model that replicates the sound of sword swings using equations obtained from fluid dynamics, which we discussed in a previous blog entry. In the article we extend the listening tests to include sound effects of metal swords, wooden swords, golf clubs, baseball bats and broom handles, as well as adding a cavity tone synthesis model to replicate grooves in the sword profiles. Further tests were carried out to see whether participants could identify which object our model was replicating by swinging a Wii Controller.
The properties exposed by the sound effects model can be adjusted automatically by a physics engine, giving a wide corpus of sounds from one simple model, all based on fundamental fluid dynamics principles. An example of the sword sound linked to the Unity game engine is shown in this video.
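To give a flavour of how such a physics-engine link can work, here is a minimal Python sketch. It is not the Pure Data/Unity implementation from the paper; the function names and the loudness mapping are illustrative assumptions. It simply maps an object's per-frame swing speed to a vortex-shedding (Aeolian tone) frequency via the Strouhal relation used throughout this work.

```python
# Minimal sketch (not the authors' Pure Data/Unity code): driving an
# Aeolian-tone style swing model from physics-engine state each frame.
import math

STROUHAL_CYLINDER = 0.2  # approximate Strouhal number for a cylinder


def shedding_frequency(airspeed_ms: float, diameter_m: float) -> float:
    """Fundamental vortex-shedding (Aeolian tone) frequency in Hz."""
    return STROUHAL_CYLINDER * airspeed_ms / diameter_m


def update_swing_sound(angular_velocity_rad_s: float,
                       object_length_m: float,
                       diameter_m: float) -> dict:
    """Map per-frame physics state to synthesis parameters.

    A swinging object's tip speed is angular velocity times length;
    a higher tip speed gives a higher and louder tone.
    """
    tip_speed = angular_velocity_rad_s * object_length_m
    freq = shedding_frequency(tip_speed, diameter_m)
    # Placeholder loudness mapping (illustrative only), clipped to [0, 1].
    level = min(1.0, (tip_speed / 30.0) ** 2)
    return {"frequency_hz": freq, "level": level}


# Example: a 0.8 m object of 0.02 m diameter swung at 20 rad/s
print(update_swing_sound(20.0, 0.8, 0.02))
```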
 

 

Abstract:
A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.
The Applied Sciences journal is open access and a copy of our article can be downloaded here.

Bounce, bounce, bounce . . .


Another in our continuing exploration of everyday sounds (Screams, Applause, Pouring water) is the bouncing ball. It’s a nice one for a blog entry since there are only a small number of papers focused on bouncing, which means we can give a good overview of the field. It’s also one of those sounds that we can identify very clearly; we all know it when we hear it. It has two components that can be treated separately: the sound of a single bounce and the timing between bounces.

Let’s consider the second aspect. If we drop a ball from a certain height and ignore any drag, the time it takes to hit the ground is completely determined by gravity. When it hits the ground, some energy is absorbed on impact. And so it may be traveling downwards with a velocity v1 just before impact, and after impact travels upwards with velocity v2. The ratio v2/v1 is called the coefficient of restitution (COR). A high COR means that the ball travels back up almost to its original height, and a low COR means that most energy is absorbed and it only travels up a short distance.

Knowing the COR, one can use simple equations of motion to determine the time between each bounce. And since the sum of the times between bounces is a convergent series, one can find the total time until the ball stops bouncing. Conversely, measuring the coefficient of restitution from the times between bounces is literally a tabletop physics experiment (Aguiar 2003, Farkas 2006, Schwarz 2013). And since kinetic energy depends on the square of the velocity, we know how much energy is lost with each bounce, which also gives an idea of how the sound levels of successive bounces should decrease.

[The derivation of all this has been left to the reader 😊. But again, it’s a straightforward application of the equations of motion that give the time dependence of position and velocity under constant acceleration.]
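For readers who would like to see it worked through, here is a minimal Python sketch of that calculation, ignoring drag. The drop height, COR value and number of bounces below are purely illustrative, not taken from any of the cited papers.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def bounce_times(h0: float, e: float, n_bounces: int = 10):
    """Times (s) at which successive impacts occur, for drop height h0 (m)
    and coefficient of restitution e (0 < e < 1), ignoring drag."""
    v = math.sqrt(2 * G * h0)      # speed just before the first impact
    t = math.sqrt(2 * h0 / G)      # time of the first impact
    times = [t]
    for _ in range(n_bounces - 1):
        v *= e                     # speed just after the impact
        t += 2 * v / G             # time of flight up and back down
        times.append(t)
    return times


def total_bounce_time(h0: float, e: float) -> float:
    """The flight times form a geometric series, so the ball stops
    bouncing after a finite time even though it bounces infinitely often."""
    v1 = math.sqrt(2 * G * h0)
    return math.sqrt(2 * h0 / G) + (2 * v1 / G) * e / (1 - e)


print(bounce_times(1.0, 0.8, 5))
print(total_bounce_time(1.0, 0.8))  # about 4.1 s for a 1 m drop with e = 0.8
```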

It’s not that hard to extend this approach, for instance by including air drag or sloped surfaces. But if you put the ball on a vibrating platform, all sorts of wonderful nonlinear behaviour can be observed: chaos, locking and chattering (Luck 1993).

For instance, have a look at the following video, which shows some interesting behaviour where bouncing balls all seem to organise onto one side of a partition.

So much for the timing of bounces, but what about the sound of a single bounce? Well, Nagurka (2004) modelled the bounce as a mass-spring-damper system, giving the time of contact for each bounce. This provides a little more realism by capturing some aspects of the bounce sound. Stoelinga (2007) did a detailed analysis of bouncing and rolling sounds. It has a wealth of useful information, and deep insights into both the physics and perception of bouncing, but stops short of describing how to synthesize a bounce.
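To illustrate the mass-spring-damper idea, here is a small Python sketch of the standard linear contact model: contact lasts roughly half a period of the damped oscillation, and the damping ratio sets the coefficient of restitution. This is a simplification in the spirit of Nagurka's model, not his code, and it neglects gravity during contact; the mass, stiffness and damping values are made up for illustration.

```python
import math


def contact_time_and_cor(m: float, k: float, c: float):
    """Contact duration (s) and coefficient of restitution for a ball whose
    contact with the floor behaves as a linear mass-spring-damper.
    m: mass (kg), k: contact stiffness (N/m), c: damping (N*s/m), with
    the damping assumed sub-critical (damping ratio < 1)."""
    wn = math.sqrt(k / m)                  # undamped natural frequency
    zeta = c / (2 * math.sqrt(k * m))      # damping ratio
    wd = wn * math.sqrt(1 - zeta ** 2)     # damped natural frequency
    t_contact = math.pi / wd               # half a damped oscillation period
    cor = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    return t_contact, cor


# Illustrative (made-up) values for a small, stiff ball:
print(contact_time_and_cor(m=0.05, k=5.0e4, c=2.0))  # ~3 ms contact, COR ~0.94
```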

To really capture the sound of a bounce, something like modal synthesis should be used. That is, one should identify the modes that are excited by the impact of a given ball on a given surface, and their decay rates. Farnell measured these modes for some materials, and used those values to synthesize bounces in Designing Sound. But perhaps the most detailed analysis and generation of such sounds, at least as far as I’m aware, is in the work of Davide Rocchesso and his colleagues, leaders in the field of sound synthesis and sound design. They have produced a wealth of useful work in the area, but an excellent starting point is The Sounding Object.
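As a simple illustration of modal synthesis (a minimal sketch, not Farnell's or Rocchesso's implementations), a single bounce can be rendered as a sum of exponentially decaying sinusoids. The mode frequencies, decay times and amplitudes below are placeholders where measured values for a particular ball and surface would go.

```python
import numpy as np


def synthesize_bounce(freqs_hz, decays_s, amps, fs=44100, dur=0.5):
    """Render one impact as a sum of exponentially decaying sinusoids
    (basic modal synthesis). freqs_hz, decays_s (decay time constants)
    and amps describe the modes of a particular ball/surface pair."""
    t = np.arange(int(fs * dur)) / fs
    out = np.zeros_like(t)
    for f, tau, a in zip(freqs_hz, decays_s, amps):
        out += a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
    return out / max(1e-9, np.abs(out).max())  # normalise to +/-1


# Illustrative mode values only; measured modes would be substituted here.
bounce = synthesize_bounce([220, 650, 1540], [0.12, 0.06, 0.03], [1.0, 0.5, 0.25])
```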

Are you aware of any other interesting research about the sound of bouncing? Let us know.

Next week, I’ll continue talking about bouncing sounds with discussion of ‘the audiovisual bounce-inducing effect.’

References

  • Aguiar CE, Laudares F. Listening to the coefficient of restitution and the gravitational acceleration of a bouncing ball. American Journal of Physics. 2003 May;71(5):499-501.
  • Farkas N, Ramsier RD. Measurement of coefficient of restitution made easy. Physics education. 2006 Jan;41(1):73.
  • Luck JM, Mehta A. Bouncing ball with a finite restitution: chattering, locking, and chaos. Physical Review E. 1993;48(5):3988.
  • Nagurka M, Huang S. A mass-spring-damper model of a bouncing ball. American Control Conference, 2004. Vol. 1. IEEE, 2004.
  • Schwarz O, Vogt P, Kuhn J. Acoustic measurements of bouncing balls and the determination of gravitational acceleration. The Physics Teacher. 2013 May;51(5):312-3.
  • Stoelinga C, Chaigne A. Time-domain modeling and simulation of rolling objects. Acta Acustica united with Acustica. 2007 Mar 1;93(2):290-304.

Physically Derived Sound Synthesis Model of a Propeller

I recently presented my work on the real-time sound synthesis of a propeller at the 12th International Audio Mostly Conference in London. This sound effect is a continuation of my research into aeroacoustic sounds generated by physical models, extending my previous work on the Aeolian harp, sword sounds and Aeolian tones.

A demo video of the propeller model attached to an aircraft object in Unity is given here. I use the Unity Doppler effect, which I have since discovered is not the best and adds a high-pitched artefact, but you’ll get the idea! The propeller physical model was implemented in Pure Data and transferred to Unity using the Heavy compiler.

So, when I was looking for an indication of the different sound sources in a propeller sound, I found an excellent paper by J. E. Marte and D. W. Kurtz (A Review of Aerodynamic Noise from Propellers, Rotors, and Lift Fans. Jet Propulsion Laboratory, California Institute of Technology, 1970). This paper provides a breakdown of the different sound sources, replicated for you here.

The sounds are split into periodic and broadband groups. The periodic sounds comprise rotational sounds associated with the forces on the blade, plus interaction and distortion effects. The first rotational sound is the loading sound, associated with the thrust and torque of each propeller blade.

To picture these forces, imagine you are sitting on an aircraft wing, looking down the span, travelling at a fixed speed with uniform air flowing over the aerofoil. From your point of view, the wing has a lift force and a drag force associated with it. Now replace the aircraft wing with a propeller blade of similar profile, spinning at a set RPM. If you are sitting at a point on the blade, the thrust and torque at that point will be constant.

Now, stepping off the propeller blade and examining the disk of rotation, the thrust and torque forces appear as pulses at the blade passing frequency. For example, a propeller with 2 blades, rotating at 2400 RPM, will have a blade passing frequency of 80 Hz. A similar propeller with 4 blades, rotating at the same RPM, will have a blade passing frequency of 160 Hz.
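The relationship is simple enough to state directly; the following short Python sketch just reproduces the arithmetic in the example above.

```python
def blade_passing_frequency(rpm: float, num_blades: int) -> float:
    """Blade passing frequency in Hz: revolutions per second times blade count."""
    return (rpm / 60.0) * num_blades


print(blade_passing_frequency(2400, 2))  # 80 Hz
print(blade_passing_frequency(2400, 4))  # 160 Hz
```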

Thickness noise is the sound generated as the blade pushes the air aside when passing. This sound is small while the blades are moving well below the speed of sound (Mach 1, approximately 343 m/s), and so is not considered in our model.

Interaction and distortion effects are associated with helicopter rotors and lift fans. Because these have horizontally rotating blades, an effect called blade slap occurs, where the rotating blade passes through the vortices shed by the previous blade, causing a loud slapping sound. Horizontal blades also have amplitude- and frequency-modulated signals associated with them, as well as other effects. Since we are looking at propellers whose blades rotate in a mostly vertical plane, we have omitted these effects.

The broadband sounds of the propeller are closely related to the Aeolian tone models I have spoken about previously. The vortex sounds come from vortex shedding, identical to our sword model. The difference in this case is that a propeller has a set shape, more like an aerofoil than a cylinder.

In the Aeolian tone paper, published at the AES Convention in Los Angeles in 2016, it was found that for a cylinder the shedding frequency can be determined by an equation defined by Strouhal: the diameter, frequency and airspeed are related by the Strouhal number, which for a cylinder is approximately 0.2. In the paper by D. Brown and J. B. Ollerhead (Propeller Noise at Low Tip Speeds. Technical report, DTIC Document, 1971), a Strouhal number of 0.85 was found for propellers. This was used in our model, along with the chord length of the propeller blade instead of the diameter.
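As a quick sketch of how the shedding frequency is obtained from the Strouhal relation described above (the example numbers are illustrative, not taken from the paper):

```python
def vortex_shedding_frequency(airspeed_ms: float,
                              length_scale_m: float,
                              strouhal: float) -> float:
    """Fundamental shedding frequency from the Strouhal relation f = St * U / L.
    Use St ~ 0.2 with the diameter for a cylinder, or St ~ 0.85 with the chord
    length for a propeller blade section (Brown & Ollerhead, 1971)."""
    return strouhal * airspeed_ms / length_scale_m


# Illustrative numbers: a blade section with a 0.1 m chord moving at 100 m/s
print(vortex_shedding_frequency(100.0, 0.1, 0.85))  # 850 Hz
```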

We also include the wake sound from the Aeolian tone model, which is similar to the turbulence sounds. These are only noticeable at high speeds.

The paper by Marte and Kurtz outlines a procedure by Hamilton Standard, a propeller manufacturer, for predicting the far-field loading sounds. Along with the RPM, number of blades, distance and azimuth angle, we need the blade diameter and engine power. We first decided which aircraft we were going to model. This was determined by the fact that we wanted to carry out a perceptual test and had a limited number of clips of known aircraft.

We settled on a Hercules C130, Boeing B17 Flying Fortress, Tiger Moth, Yak-52, Cessna 340 and a P51 Mustang. The internet was searched for details like blade size, blade profile (to calculate chord lengths along the span of the blade), engine power, top speed and maximum RPM. This gave enough information for the models to be created in Pure Data and the sound effect to be as realistic as possible.

This enables us to calculate the loading sounds and broadband vortex sounds, adding in a Doppler effect for realism. What was missing was an engine sound – the aeroacoustic sounds do not happen in isolation in our model. To rectify this, a model from Andy Farnell’s Designing Sound was modified to act as our engine sound.

A copy of the Pure Data software can be downloaded from this site, https://code.soundsoftware.ac.uk/hg/propeller-model. We performed listening tests on all the models, comparing them with an alternative synthesis model (SMS) and the real recordings we had. The tests highlighted that the real sounds are still the most plausible, but our model performed as well as the alternative synthesis method. This is a great result considering the alternative method starts with a real recording of a propeller, analyses it and re-synthesizes it. Our model starts with real-world physical parameters like the blade profile, engine power, distance and azimuth angle to produce the sound effect.

An example of the propeller sound effect is mixed into this famous scene from North by Northwest. As you can hear, the effect still has some way to go to be as good as the original, but this physical model is a first step in incorporating the fluid dynamics of a propeller into the synthesis process.

From the editor: Check out all Rod’s videos at https://www.youtube.com/channel/UCIB4yxyZcndt06quMulIpsQ

A copy of the paper published at Audio Mostly 2017 can be found here >> Propeller_AuthorsVersion

Female pioneers in audio engineering

The Heyser lecture is a distinguished talk given at each AES Convention by eminent individuals in audio engineering and related fields. At the 140th AES Convention, Rozenn Nicol was the Heyser lecturer. This was well deserved; she has made major contributions to the field of immersive audio. But what was shocking is that she is the first woman to give the Heyser lecture. It’s an indicator that women are under-represented and under-recognised in the field. With that in mind, I’d like to highlight some women who have made major contributions to the field, especially in research and innovation.

  • Birgitta Berglund led major research into the impact of noise on communities. Her influential research resulted in guidelines from the World Health Organisation, and greatly advanced our understanding of noise and its effects on society. She was the 2009 IOA Rayleigh medal recipient.
  • Marina Bosi is a past president of the AES. She has been instrumental in the development of standards and formats for audio coding and digital content management, including the development of the AC-2, AC-3, and MPEG-2 Advanced Audio Coding technologies.
  • Anne-Marie Bruneau has been one of the most important researchers on electrodynamic loudspeaker design, exploring motion impedance and radiation patterns, as well as establishing some of the main analysis and measurement approaches used today. She co-founded the Laboratoire d’Acoustique de l’Université du Maine, now a leading acoustics research center.
  • Ilene J. Busch-Vishniac is responsible for major advances in the theory and understanding of electret microphones, as well as patenting several new designs. She received the ASA R. Bruce Lindsay Award in 1987, and the Silver Medal in Engineering Acoustics in 2001. President of the ASA 2003-4.
  • Elizabeth (Betsy) Cohen was the first female president of the Audio Engineering Society. She was presented with the AES Fellowship Award in 1995 for contributions to understanding the acoustics and psychoacoustics of sound in rooms. In 2001, she was presented with the AES Citation Award for pioneering the technology enabling collaborative multichannel performance over the broadband internet.
  • Poppy Crum is head scientist at Dolby Laboratories, with a background in computer research in music and acoustics. At Dolby, she is responsible for integrating neuroscience and knowledge of sensory perception into algorithm design, technological development, and technology strategy.
  • Delia Derbyshire (1937-2001) was an innovator in electronic music who pushed the boundaries of technology and composition. She is most well-known for her electronic arrangement of the theme for Doctor Who, an important example of Musique Concrète. Each note was individually crafted by cutting, splicing, and stretching or compressing segments of analogue tape which contained recordings of a plucked string, oscillators and white noise. Here’s a video detailing a lot of the effects she used, which have now become popular tools in digital music production.
  • Ann Dowling is the first female president of the Royal Academy of Engineering. Her research focuses on noise analysis and reduction, especially from engines, and she is a leading educator in acoustics. A quick glance at Google Scholar shows how influential her research has been.
  • Marion Downs was an audiometrist at Colorado Medical Center in Denver, who invented the tests used to measure hearing both in newborn babies and in fetuses.
  • Judy Dubno is Director of Hearing Research at the Medical University of South Carolina. Her research focuses on human auditory function, with emphasis on the processing of auditory information and the recognition of speech, and how these abilities change in adverse listening conditions, with age, and with hearing loss. Recipient of the James Jerger Career Award for Research in Audiology from the American Academy of Audiology and Carhart Memorial Lecturer for the American Auditory Society. President of the ASA in 2014-15.
  • Rebecca Fiebrink researches human-computer interaction (HCI) and the application of machine learning to real-time, interactive, and creative domains. She is the creator of the popular Wekinator, which allows anyone to use machine learning to build new musical instruments, real-time music information retrieval and audio analysis systems, computer listening systems and more.
  • Katherine Safford Harris pioneered EMG studies of speech production and auditory perception. Her research was fundamental to speech recognition, speech synthesis, reading machines for the blind, and the motor theory of speech perception. She was elected Fellow of the ASA, the AAAS, the American Speech-Language-Hearing Association, and the New York Academy of Sciences. She was President of the ASA (2000-2001), awarded the Silver Medal in 2005 and Gold Medal in 2007.
  • Rhona Hellman was a Fellow of the ASA. She was a distinguished hearing scientist and preeminent expert in auditory perceptual phenomena. Her research spanned almost 50 years, beginning in 1960. She tackled almost every aspect of loudness, and the work resulted in major advances and developments of loudness standards.
  • Mara Helmuth developed software for composition and improvisation involving granular synthesis. Throughout the 1990s, she paved the way forward by exploring and implementing systems for collaborative performance over the Internet. From 2008-10 she was President of the International Computer Music Association.
  • Carleen Hutchins (1911-2009) was a leading researcher in the study of violin acoustics, with over a hundred publications in the field. She was founder and president of the Catgut Acoustical Society, an organization devoted to the study and appreciation of stringed instruments.
  • Sophie Germain (1776-1831) was a French mathematician, scientist and philosopher. She won a major prize from the French Academy of Sciences for developing a theory to explain the vibration of plates due to sound. The history behind her contribution, and the reactions of leading French mathematicians to having a female of similar calibre in their midst, is fascinating. Joseph Fourier, whose work underpins much of audio signal processing, was a champion of her work.
  • Bronwyn Jones was a psychoacoustician at the CBS Technology Center during the 70s and 80s. In seminal work with co-author Emil Torrick, she developed one of the first loudness meters, incorporating both psychoacoustic principles and detailed listening tests. It paved the way for what became major initiatives for loudness measurement, and in some ways outperforms the modern ITU-R BS.1770 standard.
  • Bozena Kostek is editor of the Journal of the Audio Engineering Society. Her most significant contributions include the applications of neural networks, fuzzy logic and rough sets to musical acoustics, and the application of data processing and information retrieval to the psychophysiology of hearing. Her research has garnered dozens of prizes and awards.
  • Daphne Oram (1925 –2003) was a pioneer of ‘musique concrete’ and a central figure in the evolution of electronic music. She devised the Oramics technique for creating electronic sounds, co-founded the BBC Radiophonic Workshop, and was possibly the first woman to direct an electronic music studio, to set up a personal electronic music studio and to design and construct an electronic musical instrument.
  • Carla Scaletti is an innovator in computer generated music. She designed the Kyma sound generation computer language in 1986 and co-founded Symbolic Sound Corporation in 1989. Kyma is one of the first graphical programming languages for real time digital audio signal processing, a precursor to MaxMSP and PureData, and is still popular today.
  • Bridget Shield was professor of acoustics at London Southbank University. Her research is most significant in our understanding of the effects of noise on children, and has influenced many government initiatives. From 2012-14, she was the first female President of the Institute of Acoustics.
  • Laurie Spiegel created one of the first computer-based music composition programs, Music Mouse: an Intelligent Instrument, which also has some early examples of algorithmic composition and intelligent automation, both of which are hot research topics today.
  • Mary Desiree Waller (1886-1959) wrote a definitive treatise on Chladni figures, which are the shapes and patterns made by surface vibrations due to sound (see Sophie Germain, above). It gave far deeper insight into the figures than any previous work.
  • Megan (or Margaret) Watts-Hughes was the inventor of the Eidophone, an early instrument for visualising the sounds made by the voice. She rediscovered this simple method of generating Chladni figures without knowledge of Sophie Germain’s or Ernst Chladni’s work. There is a great description of her experiments and analysis in her own words.

The Eidophone, demonstrated by Grace Digney.

Do you know some others who should be mentioned? We’d love to hear your thoughts.

Thanks to Theresa Leonard for information on past AES presidents. She was the third female president.  will be the fourth.

And check out Women in Audio: contributions and challenges in music technology and production for a detailed analysis of the current state of the field.

International Congress on Sound and Vibration (ICSV) London 2017

The International Congress on Sound and Vibration (ICSV) may not be the first conference you would think of for publishing the results of research into a sound effect, but that’s exactly what we have just returned from. I presented our paper on the real-time physical model of an Aeolian harp to a worldwide audience of top researchers in sound and vibration.

 

The Congress opened with a keynote from Professor Eric Heller discussing acoustic resonance and formants, followed by a whole day of musical acoustics chaired by Professor Murray Campbell from Edinburgh University. One interesting talk was given by Stephen Dance of London South Bank University, where a hearing study of music students was carried out. Their results showed that the hearing of the music students improved over the three years of their course, even though none of the students would wear ear protection while playing. The only degradation of hearing was experienced by oboe players; possible reasons are the fast attack time of the instrument and the fact that the oboe players stood directly in front of the brass players when playing as an orchestra.

 

The opening day also had a talk titled ‘Artificial neural network based model for the crispness impression of the potato chip sounds’ by Ercan Altinsoy from Dresden University of Technology. This research looked into the acoustical properties of food and the impression of freshness that is inferred from them.

 

I presented my research on the ‘Real-time physical model of an Aeolian harp’, describing the sound synthesis of this unusual musical instrument. The synthesis model captures the interaction between the mechanical vibration properties of each string and the vortices shed by the wind blowing around them.

 

The session ended with ‘Application of sinusoidal curves to shape design of chord sound plate and experimental verification’ by Bor-Tsuen Wang, Department of Mechanical Engineering, National Pingtung University of Science and Technology, Pingtung, Taiwan. This work reviews the design concept of the chord sound plate (CSP), a uniform-thickness plate with a special curved shape designed using the Bezier curve (B-curve) method. The CSP can generate a percussion sound with three tonal frequencies corresponding to the musical note frequencies of a triad chord.

 

A presentation from Gaku Minorikawa, Hosei University, Department of Mechanical Engineering, Faculty of Science and Engineering, Tokyo, Japan, discussed his research into the reduction of noise from fans – highly relevant to audio engineers who want computers that are as quiet as possible for a studio: ‘Prediction for noise reduction and characteristics of flow induced noise on axial cooling fan’.

 

There was an interesting session on the noise experienced in open-plan offices and how other noise sources are introduced to apply acoustic masking to certain areas. The presentation by Charles Edgington, ‘Practical considerations and experiences with sound masking’s latest technology’, illustrated practical implementations of such masking and the considerations that have to be made.

 

The testing of a number of water features within an open-plan office was presented in ‘Audio-visual preferences of water features used in open-plan offices’ by Zanyar Abdalrahman from Heriot-Watt University, School of Energy, Geoscience, Infrastructure and Society, Edinburgh. Here a number of water feature constructions were examined.

 

The difficulty of understanding the speech of participants in both rooms of a video conference was researched by Charlotte Hauervig-Jørgensen from the Technical University of Denmark, in ‘Subjective rating and objective evaluation of the acoustic and indoor climate conditions in video conferencing rooms’. Moving away from office acoustics to house construction, I saw a fascinating talk by Francesco D’Alessandro, University of Perugia: ‘Straw as an acoustic material’, which investigates the acoustic properties of straw bale constructions.

 

One session was dedicated to Sound Field Control and 3D Audio, with a total of 18 papers presented on the topic. Filippo Fazi from the University of Southampton presented ‘A loudspeaker array for 2 people transaural reproduction’, which introduced a signal processing approach for performing two-person transaural reproduction using a combination of two single-listener cross-talk cancellation (CTC) beamformers, so that the CTC is maximised at one listener position while the beamformer side-lobes radiate little energy towards the other listening position.

 

Another session running was ‘Thermoacoustics research in a gender-balanced setting’, in which female and male speakers alternated in presenting their work on thermoacoustics. Francesca Sogaro from Imperial College London presented her work on ‘Sensitivity analysis of thermoacoustic instabilities’. Presenting ‘Sensescapes facilitating life quality’, Frans Mossberg of the Sound Environment Center at Lund University, Sweden examined research into what can be done to raise awareness of the significance of sense- and soundscapes for health, wellbeing and communication.

 

The hearing aid is a complex yet common device used to assist those suffering from hearing loss. In the paper ‘Speech quality enhancement in digital hearing aids: an active noise control approach’, Somanath Pradhan (Indian Institute of Technology Gandhinagar) attempted to overcome the limitations of noise reduction techniques by introducing a reduced-complexity integrated active noise cancellation approach, along with noise reduction schemes.

 

Through a combination of acoustic computer modelling, network protocols, game design and signal processing, the paper ‘Head-tracked auralisations for a dynamic audio experience in virtual reality sceneries’ proposes a method for bridging acoustic simulations and interactive technologies, i.e. fostering a dynamic acoustic experience for virtual scenes via VR-oriented auralisations. This was presented by Eric Ballesteros, London South Bank University.

 

The final day also included a number of additional presentations from our co-author, Dr Avital, including ‘Differences in the Non-Linear Propagation of Crackle and Screech’ and ‘Aerodynamic and Aeroacoustic Re-Design of Low Speed Blade Profile’. The conference’s final night concluded with a banquet at the Sheraton Park Lane Hotel in its Grade II listed ballroom. The night included a string quartet, awards and Japanese opera singing. Overall, this was a conference with a vast number of presentations from many different fields.

Acoustic Energy Harvesting

At the recent Audio Engineering Society Convention, one of the most interesting talks was in the E-Briefs sessions. These are usually short presentations, dealing with late-breaking research results, work in progress, or engineering reports. The work, by Charalampos Papadokos, was presented in an e-brief titled ‘Power Out of Thin Air: Harvesting of Acoustic Energy’.

Ambient energy sources are those sources all around us, like solar and kinetic energy. Energy harvesting is the capture and storage of ambient energy. It’s not a new concept at all, and dates back to the windmill and the waterwheel. Ambient power has been collected from electromagnetic radiation since the invention of crystal radios by Sir Jagadish Chandra Bose, a true renaissance man who made important contributions to many fields. But nowadays, people are looking for energy harvesting from many more possible sources, often for powering small devices, like wearable electronics and wireless sensor networks. The big advantages, of course, are that energy harvesters do not consume resources like oil or coal, and that energy harvesting might enable some devices to operate almost indefinitely.

But two of the main challenges are that many ambient energy sources are very low power, and that harvesting them may be difficult.

Typical power densities from energy harvesting can vary over orders of magnitude. Here are the power densities for various ambient sources, taken from the open access book chapter ‘Electrostatic Conversion for Vibration Energy Harvesting’ by S. Boisseau, G. Despesse and B. Ahmed Seddik.

[Figure: typical power densities of ambient energy sources]

You can see that vibration, which includes acoustic vibrations, has about 1/100th the energy density of solar power, or even less. The numbers are arguable, but at first glance it looks like it will be exceedingly difficult to get any significant energy from acoustic sources unless one can harvest over a very large area.

That’s where this e-brief paper comes in. Papadokos and his co-author, John Mourjopoulos, have a patented approach to harvesting the acoustic energy inside a loudspeaker enclosure. Others had considered harvesting the sound energy from loudspeakers before (see the work of Matsuda, for instance), but mainly just as a way of testing their harvesting approach, and not really exploiting the properties of loudspeakers. Papadokos and Mourjopoulos had the insight to realise that many loudspeakers are enclosed, and the enclosure has abundant acoustic energy that might be harvested without interfering with the external design or with the sound presented to the listener. In earlier work, Papadokos and Mourjopoulos found that the sound pressure level within a loudspeaker enclosure often exceeds 130 dB. Here, they simulated the effect of a piezoelectric plate in the enclosure, to convert the acoustic energy to electrical energy. Results showed that it might be possible to generate 2.6 volts under regular operating conditions, thus proving the concept of harvesting acoustic energy from loudspeaker enclosures, at least in simulation.
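For a rough sense of scale (a back-of-the-envelope sketch, not the authors' simulation), the sound pressure level inside the enclosure can be converted to the acoustic pressure that would drive such a piezoelectric plate:

```python
import math

P_REF = 20e-6  # reference pressure for SPL, in pascals


def spl_to_pressure(spl_db: float) -> float:
    """RMS acoustic pressure (Pa) corresponding to a sound pressure level (dB SPL)."""
    return P_REF * 10 ** (spl_db / 20.0)


# 130 dB SPL inside the enclosure corresponds to roughly 63 Pa RMS,
# the kind of pressure a harvesting plate inside the box would see.
print(spl_to_pressure(130.0))
```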

Sound as a Weapon

Sonic weapons frequently occur in science fiction and fantasy. I remember reading the Tintin book The Calculus Affair, where Professor Calculus invents ultrasonic devices which break glass objects around the house. But the bad guys from Borduria want to make them into large-scale, long-range devices, capable of mass destruction.

As with many fantastic fiction ideas, sonic weapons have a firm basis in fact. But one of the first planned uses for sonic devices in war was as a defense system, not a weapon.

Between about 1916 and 1936, acoustic mirrors were built and tested around the coast of England. The idea was that they could reflect, and in some cases focus, the sound of incoming enemy aircraft. Microphones could be placed at the foci of the reflectors, giving listeners a means of early detection. The mirrors were usually parabolic or spherical in shape, and for the spherical designs the microphone could be moved as a means of identifying the direction of arrival.


It was a good idea at first, but the airspeed of bombers and fighters improved so much over that period that the mirrors would only give a few minutes’ extra warning. The technology then became completely obsolete with the invention of radar, though it also meant that the effort put into planning a network of detectors along the coast was not wasted.

The British weren’t the only ones attempting to use sound for aircraft detection between the world wars. The Japanese had mobile acoustic locators known as ‘war tubas’, the Dutch had personal horns and personal parabolas, the Czechs used a four-horn acoustic locator to detect height as well as horizontal direction, and the French physicist Jean-Baptiste Perrin designed the télésitemètre, which, in a field full of unusual designs, still managed to distinguish itself by having 36 small hexagonal horns. Perrin, though, is better known for his Nobel prize-winning work on Brownian motion, which finally confirmed the atomic theory of matter. Other well-known contributors to the field include the Austrian-born ethnomusicologist Erich Moritz von Hornbostel and the renowned psychologist Max Wertheimer. Together, they developed the sound directional locator known as the Wertbostel, which is believed to have been commercialised during the 1930s.
There are wonderful photos of these devices, most of which can be found here, but I couldn’t resist including at least a couple:

A German acoustic and optical locating apparatus,

and a Japanese war tuba.

But these acoustic mirrors and related systems were all intended for defense. During World War II, German scientists worked on sonic weapons under the supervision of Albert Speer. They developed an acoustic cannon that was intended to send a deafening, focused beam of sound, magnified by parabolic reflector dishes. Research was discontinued, however, since initial efforts were not successful, and the weapon was unlikely to be effective in practical situations.

Devices capable of producing especially loud sounds, often focused in a given direction or over a particular frequency range, have found quite a few uses as weapons of some kind. A long-range acoustic device was used to deter pirates who attempted to attack a cruise ship, for instance, and sonic devices emitting high frequencies that can be heard by teenagers but are unlikely to be heard by adults have been deployed in city centres to prevent youth from congregating. Such stories make for interesting reading, but it’s hard to say how effective the devices actually are.
And there are even sonic weapons occurring in nature.

The snapping shrimp has a claw that shoots a jet of water, which in turn generates a cavitation bubble. The bubble collapses with a snap reaching around 190 decibels. It’s loud enough to kill or stun small sea creatures, which then become its prey.