Sound as a Weapon

Sonic weapons frequently occur in science fiction and fantasy. I remember reading the Tintin book The Calculus Affair, in which Professor Calculus invents ultrasonic devices that break glass objects around the house. But the bad guys from Borduria want to turn them into large-scale, long-range devices capable of mass destruction.

As with many ideas from fantastic fiction, sonic weapons have a firm basis in fact. But one of the first planned uses for sonic devices in war was as a defense system, not a weapon.

Between about 1916 and 1936, acoustic mirrors were built and tested around the coast of England. The idea was that they could reflect, and in some cases focus, the sound of incoming enemy aircraft. Microphones placed at the foci of the reflectors gave listeners a means of early detection. The mirrors were usually parabolic or spherical in shape, and for the spherical designs the microphone could be moved as a means of identifying the direction of arrival.


It was a good idea at first, but the airspeed of bombers and fighters improved so much over that period that the mirrors would only give a few minutes of extra warning. The technology then became completely obsolete with the invention of radar, though the effort put into planning a network of detectors along the coast was not wasted.

The British weren’t the only ones attempting to use sound for aircraft detection between the world wars. The Japanese had mobile acoustic locators known as ‘war tubas’, the Dutch had personal horns and personal parabolas, the Czechs used a four-horn acoustic locator to detect height as well as horizontal direction, and the French physicist Jean-Baptiste Perrin designed the télésitemètre, which, in a field full of unusual designs, still managed to distinguish itself by having 36 small hexagonal horns. Perrin, though, is better known for his Nobel prize winning work on Brownian motion, which finally confirmed the atomic theory of matter. Other well-known contributors to the field include the Austrian-born ethnomusicologist Erich Moritz von Hornbostel and the renowned psychologist Max Wertheimer. Together, they developed the sound directional locator known as the Wertbostel, which is believed to have been commercialised in the 1930s.
There are wonderful photos of these devices, most of which can be found here, but I couldn’t resist including at least a couple:

A German acoustic and optical locating apparatus,

and a Japanese war tuba.

But these acoustic mirrors and related systems were all intended for defense. During World War II, German scientists worked on sonic weapons under the supervision of Albert Speer. They developed an acoustic cannon intended to project a deafening, focused beam of sound, magnified by parabolic reflector dishes. However, research was discontinued, since initial efforts were unsuccessful and the weapon was unlikely to be effective in practical situations.

Devices capable of producing especially loud sounds, often focused in a given direction or over a particular frequency range, have found quite a few uses as weapons of some kind. A long-range acoustic device was used to deter pirates attempting to attack a cruise ship, for instance, and sonic devices emitting high frequencies that teenagers can hear but most adults cannot have been deployed in city centres to prevent youths from congregating. Such stories make for interesting reading, but it’s hard to say how effective the devices actually are.
There are even sonic weapons occurring in nature.

The snapping shrimp has a claw that shoots a jet of water, which in turn generates a cavitation bubble. The bubble bursts with a snap reaching around 190 decibels. It’s loud enough to kill or stun small sea creatures, which then become its prey.

Sound Synthesis of an Aeolian Harp

Introduction

Synthesising the Aeolian harp is part of a project on synthesising a class of sounds known as aeroacoustic sounds. The synthesis model operates in real time and is based on the physics that generates these sounds in nature.

The Aeolian harp is an instrument that is played by the wind. It is believed to date back to ancient Greece, and legend states that King David hung a harp in a tree to hear it being played by the wind. Aeolian harps became popular in Europe in the Romantic period, and they can be designed as garden ornaments, parts of sculptures or large-scale sound installations.

The sound created by the Aeolian harp has often been described as meditative and inspiring. A poem by Ralph Waldo Emerson describes it as follows:
 
Keep your lips or finger-tips
For flute or spinet’s dancing chips; 
I await a tenderer touch,
I ask more or not so much:

Give me to the atmosphere.

The harp in the picture is taken from Professor Henry Gurr’s website, which has an excellent review of the principles behind the design and operation of Aeolian harps.
Basic Principles

As air flows past a cylinder, vortices are shed at a frequency that is proportional to the speed of the air and inversely proportional to the cylinder diameter. This was discussed in the previous blog entry on Aeolian tones. We can now think of the cylinder as a string, like that of a harp, guitar or violin. When a string of one of these instruments is plucked, it vibrates at its natural frequency, which is determined by the tension, length and mass of the string.
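
To make these two relations concrete, here is a minimal sketch in Python (with illustrative parameter values, not the project’s actual code). Vortex shedding follows the Strouhal relation f = St·U/d, and an ideal string fixed at both ends has harmonics f_n = (n/2L)·√(T/μ):

```python
import math

def shedding_frequency(airspeed, diameter, strouhal=0.2):
    """Vortex shedding frequency (Hz) from the Strouhal relation f = St * U / d.
    St ~ 0.2 is a typical value for a cylinder over a wide range of conditions."""
    return strouhal * airspeed / diameter

def string_natural_frequency(tension, length, mass_per_length, harmonic=1):
    """Natural frequency (Hz) of an ideal string fixed at both ends:
    f_n = (n / 2L) * sqrt(T / mu), where mu is the mass per unit length."""
    return (harmonic / (2.0 * length)) * math.sqrt(tension / mass_per_length)

# Illustrative values: a 1 m string, 0.5 mm diameter, in a 4 m/s breeze
print(shedding_frequency(airspeed=4.0, diameter=0.0005))   # 1600.0 Hz
print(string_natural_frequency(tension=60.0, length=1.0,
                               mass_per_length=0.0015))    # 100.0 Hz
```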

Instead of a pluck or a bow exciting the string, in an Aeolian harp it is the vortex shedding that stimulates the strings. When the frequency of the vortex shedding is in the region of the natural vibration frequency of the string, or one of its harmonics, a phenomenon known as lock-in occurs. During lock-in, the string vibrates at the relevant harmonic frequency. Over a range of airspeeds, the string vibration is the dominant factor dictating the frequency of the vortex shedding; changing the airspeed does not change the frequency of vortex shedding, hence the process is ‘locked in’.
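
Lock-in can then be thought of as a simple proximity test between the shedding frequency and the string’s harmonics. A minimal sketch, where the 10% window is an assumption for illustration rather than a figure from the model:

```python
def locked_in_harmonic(f_shed, f_fundamental, max_harmonic=20, tolerance=0.1):
    """Return the lowest harmonic number whose frequency lies within
    `tolerance` (a fraction) of the shedding frequency, or None if no
    harmonic is close enough for lock-in to occur."""
    for n in range(1, max_harmonic + 1):
        f_n = n * f_fundamental
        if abs(f_shed - f_n) <= tolerance * f_n:
            return n
    return None

# Shedding at 798 Hz near a string with a 100 Hz fundamental
print(locked_in_harmonic(798.0, 100.0))  # -> 8 (the 800 Hz harmonic)
```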

While in lock-in, an FM-type acoustic output is generated, giving the harp its unique sound, described by the poet Samuel Taylor Coleridge as a “soft floating witchery of sound”.
Our Model 

As with the Aeolian tone model, we calculate the frequency of vortex shedding for given string dimensions and airspeed. We also calculate the fundamental natural vibration frequency and harmonics of a string given its properties.

There is a specific range of airspeeds that leads to the string vibration and vortex shedding locking in. This range is calculated, and the specific frequencies for the FM acoustic signal are generated. A hysteresis effect on the vibration amplitude, depending on whether the airspeed is increasing or decreasing, is also implemented.
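
As a rough sketch of the kind of FM output described above (the carrier, modulator and modulation index below are assumed for illustration; they are not the published model’s parameters):

```python
import numpy as np

def aeolian_fm(f_carrier, f_mod, mod_index, duration=2.0, sr=44100, amp=0.5):
    """Minimal FM sketch of the lock-in output: a carrier at the locked
    string harmonic, frequency-modulated at a slower rate standing in for
    the interaction with the vortex shedding."""
    t = np.arange(int(duration * sr)) / sr
    return amp * np.sin(2 * np.pi * f_carrier * t
                        + mod_index * np.sin(2 * np.pi * f_mod * t))

# A string locked in at its 800 Hz harmonic, gently modulated
signal = aeolian_fm(f_carrier=800.0, f_mod=12.0, mod_index=3.0)
```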

A user interface is provided that allows a user to select up to 13 strings, adjusting their length, diameter, tension, mass and the amount of damping (which reduces the vibration effects as the harmonic number increases). The interface, shown below, includes presets for a number of different string and wind configurations.

A copy of the Pure Data patch can be downloaded here. The video below was made to give an overview of the principles, the sounds generated and the variety of Aeolian harp constructions.

The Benefits of Browser-Based Listening Tests

Listening tests, or subjective evaluation of audio, are an essential tool in almost any form of audio and music related research, from data compression codecs and loudspeaker design to the realism of sound effects. Sadly, because of the time and effort required to carefully design a test and convince a sufficient number of participants, it is also quite an expensive process.

The advent of web technologies like the Web Audio API, enabling elaborate audio applications within a web page, offers the opportunity to develop browser-based listening tests which mitigate some of the difficulties associated with perceptual evaluation of audio. Researchers at the Centre for Digital Music and Birmingham City University’s Digital Media Technology Lab have developed the Web Audio Evaluation Tool [1] to facilitate listening test design for any experimenter regardless of their programming experience, operating system, test paradigm, interface layout, and location of their test subjects.

Web Audio Evaluation Tool: An example single-axis, multiple stimuli listening test with comment fields.

Here we cover some of the reasons why you would want to carry out a listening test in the browser, using the Web Audio Evaluation Tool as a case study.

Remote tests

The first and most obvious reason to use a browser-based listening test platform is to run tests remotely. If you want to conduct a perceptual evaluation study online, i.e. host a website where participants can take the test, then there are no two ways about it: you need a listening test that works within the browser, i.e. one that is based on HTML and JavaScript.

A downloadable application is rarely an elegant solution, and only the most determined participants will end up taking the test, if they can get it to work at all. A website, by contrast, offers a very low threshold for participation.

Pros

  • Low effort A remote test means no booking and setting up of a listening room, showing the participant into the building, …
  • Scales easily If you can conduct the test once, you can conduct it a virtually unlimited number of times, as long as you can find the participants. Amazon Mechanical Turk or similar services can be helpful with this.
  • Different locations/cultures/languages within reach For some types of research, it is necessary to include (a high number of) participants with certain geographical locations, cultural backgrounds and/or native languages. When these are scarce nearby, and you cannot find the time or funds to fly around the world, a remote listening test can be helpful.

Cons

  • Limited programming languages For the implementation of the test, you are basically constrained to using web technologies such as JavaScript. For someone used to using e.g. MATLAB or C++, this can be off-putting. This is one of the reasons we aim to offer a tool that doesn’t involve any coding for most of the use cases.
  • Loss of control A truly remote test means that you are not present to talk to the participant and answer questions, or notice they misunderstand the instructions. You also have little information on their playback system (make and model, how it is set up, …) and you often know less about their background.

Depending on the type of test and research, you may prefer to go ‘remote’ or to stay ‘local’.
However, it has been shown for certain tasks that there is no significant difference between results from local and remote tests [2,3].

Furthermore, a tool like the Web Audio Evaluation Tool has many safeguards to compensate for this loss of control. Examples of these features include:

  • Extensive metrics Timestamps corresponding with playback and movement events can be automatically visualised to show when participants auditioned which samples and for how long; when they moved which slider from where to where; and so on.
  • Post-test checks Upon submission, optional dialogs can remind the participant of certain instructions, e.g. to listen to all fragments; to move all sliders at least once; to rate at least one stimulus below 20% or at exactly 100%; …
  • Audiometric test and calibration of the playback system An optional series of sliders shown at the start of a test, to be set by the participant so that sine waves an octave apart are all equally loud.
  • Survey questions Most relevant background information on the participant’s background and playback system can be captured by well-phrased survey questions, which can be incorporated at the start or end of the test.
Web Audio Evaluation Tool – an example test interface inspired by the popular MUSHRA standard, typical for the evaluation of audio codecs.

Cross-platform, no third-party software needed



Listening test interfaces can be hard to design, with many factors to take into account. On top of that it may not always be possible to use your personal machine for (all) your listening tests, even when all your tests are ‘local’.

When your interface requires a third-party, proprietary tool like MATLAB or Max to be set up, this can pose a problem, as the tool may not be available where the test is to take place. Furthermore, upgrades to newer versions of such third-party software have been known to ‘break’ listening test software, meaning many more hours of updating and patching.

This is a much bigger problem when the test is to take place at different locations, with different computers and potentially different versions of operating systems or other software.

This has been the single most important driving factor behind the development of the Web Audio Evaluation Tool, even for projects where all tests were controlled, i.e. not hosted on a web server with ‘internet strangers’ as participants, but run in a dedicated listening room with known, skilled participants. Because these listening rooms can have very different computers, operating systems and geographical locations, using a standalone test or a third-party application such as MATLAB is often very tedious or even impossible.

In contrast, a browser-based tool typically works on any machine and operating system that supports the browsers it was designed for. In the case of the Web Audio Evaluation Tool, this means Firefox, Chrome, Edge, Safari, … essentially every browser that supports the Web Audio API.


Multiple machines with centralised results collection


Another benefit of a browser-based listening test, again regardless of whether your test takes place ‘locally’ or ‘remotely’, is the possibility of easy, centralised collection of results. Not only is this more elegant than fetching every test result with a USB drive (from however many computers you are using), but it is also much safer to save the results to your own server straight away. If you are more paranoid (which is encouraged in the case of listening tests), you can then back up this server continually for redundancy.

In the case of the Web Audio Evaluation Tool, you just put the test on a (local or remote) web server, and the results will be stored on this server by default.
Others have put the test on a regular file server (not a web server) and run the included Python server emulator script python/pythonServer.py from the test computer. The results are then stored on the file server, which can be your personal machine on the same network.
Intermediate versions of the results are stored as well, so that the results are not lost in the event of a computer crash, a human error or a forgotten dentist appointment. The test can be resumed at any point.
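
The actual script shipped with the tool is python/pythonServer.py; purely to illustrate the idea of centralised collection, a stripped-down sketch of such a server might look like the following (the port and the timestamped filename scheme are assumptions, not the tool’s real behaviour):

```python
# Minimal sketch of a results-collecting server (NOT the tool's actual
# pythonServer.py): accept POSTed result documents and write them to disk.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class SaveHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(length)
        # Timestamped filename so intermediate saves never overwrite each other
        fname = 'result_%d.xml' % int(time.time() * 1000)
        with open(fname, 'wb') as f:
            f.write(body)
        self.send_response(200)
        self.end_headers()

if __name__ == '__main__':
    HTTPServer(('0.0.0.0', 8000), SaveHandler).serve_forever()
```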

Multiple participants using the Web Audio Evaluation Tool at the same time, at Queen Mary University of London

Leveraging other web technologies



Finally, any listening test that is essentially a website can be integrated within other sites or enhanced with any kind of web technology. We have already seen clever use of YouTube videos as instructions, and HTML index pages tracking progression through a series of tests.

The Web Audio Evaluation Tool seeks to facilitate this by providing the optional returnURL attribute, which specifies the page the participant is redirected to upon completion of the test. This page can be anything from a Doodle to schedule the next test session, to an Amazon voucher, a reward cat video, or a secret Eventbrite page for a test participant party.


Are there any other benefits to using a browser-based tool for your listening tests? Please let us know!

[1] N. Jillings, B. De Man, D. Moffat and J. D. Reiss, “Web Audio Evaluation Tool: A browser-based listening test environment,” 12th Sound and Music Computing Conference, 2015.

[2] M. Cartwright, B. Pardo, G. Mysore and M. Hoffman, “Fast and Easy Crowdsourced Perceptual Audio Evaluation,” IEEE International Conference on Acoustics, Speech and Signal Processing, 2016.

[3] M. Schoeffler, F.-R. Stöter, H. Bayerlein, B. Edler and J. Herre, “An Experiment about Estimating the Number of Instruments in Polyphonic Music: A Comparison Between Internet and Laboratory Results,” 14th International Society for Music Information Retrieval Conference, 2013.

This post originally appeared in modified form on the Web Audio Evaluation Tool GitHub wiki.

Swinging microphones and slashing lightsabers


Sound designers for film and games often use creative methods to generate the appropriate sound from existing sources, rather than through signal processing techniques designed to synthesise or process audio. One well-known technique for generating the Doppler effect is to swing a microphone back and forth in front of a sound source. This was used in the original Star Wars to give the lightsaber its sense of movement. As described by Ben Burtt, the sound designer:

“… once we had established this tone of the lightsaber, of course you had to get the sense of the lightsaber moving, because characters would carry it around, they would whip it through the air, they would thrust and slash at each other in fights, and to achieve this additional sense of movement I played the sound over a speaker in a room.

Just the humming sound, the humming and the buzzing combined as an endless sound, and then I took another microphone and waved it in the air next to that speaker so that it would come close to the speaker and go away and you could whip it by. And what happens when you do that by recording with a moving microphone is you get a Doppler’s shift, you get a pitch shift in the sound and therefore you can produce a very authentic facsimile of a moving sound. And therefore give the lightsaber a sense of movement…”
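
The physics behind the trick is the moving-observer form of the Doppler effect: as the microphone swings towards the speaker it records a raised pitch, and a lowered one as it swings away. A minimal sketch of the resulting pitch trajectory, with assumed swing parameters rather than Burtt’s actual setup:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C

def swung_mic_pitch(f_source, swing_amplitude, swing_rate, duration=2.0, sr=1000):
    """Instantaneous frequency heard by a microphone swung sinusoidally in
    front of a stationary speaker, using the moving-observer Doppler formula
    f' = f * (c + v) / c (v > 0 when moving towards the source)."""
    t = np.arange(int(duration * sr)) / sr
    # Sinusoidal swing: peak speed = amplitude * angular swing rate
    v = swing_amplitude * 2 * np.pi * swing_rate * np.cos(2 * np.pi * swing_rate * t)
    return f_source * (SPEED_OF_SOUND + v) / SPEED_OF_SOUND

# A 100 Hz hum, microphone swung 0.5 m at two swings per second
f_observed = swung_mic_pitch(100.0, 0.5, 2.0)
print(f_observed.min(), f_observed.max())  # roughly 98.2 Hz to 101.8 Hz
```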

Ben Burtt, by the way, is one of the most successful sound designers of all time. One of his trademarks is incorporating the ‘Wilhelm scream’ into many of the films he works on. This scream is a stock sound clip that he found, originally recorded for the movie Distant Drums (1951). Here are some examples of where the clip has been used.

 

 

John Cage and the anechoic chamber

 

An acoustic anechoic chamber is a room designed to be free of reverberation (hence non-echoing or echo-free). The walls, ceiling and floor are usually lined with a sound-absorbent material to minimise reflections and insulate the room from exterior sources of noise. All sound energy travels away from the source, with almost none reflected back. Thus a listener within an anechoic chamber will hear only the direct sound, with no reverberation.

The anechoic chamber effectively simulates a quiet open space of infinite dimension. Thus, anechoic chambers are used to conduct acoustics experiments in ‘free field’ conditions. They are often used to measure the radiation pattern of a microphone or of a noise source, or the transfer function of a loudspeaker.

An anechoic chamber is very quiet, with noise levels typically close to the threshold of hearing in the 10–20 dBA range (the quietest anechoic chamber has a measured level of −9.4 dBA, well below the threshold of hearing). Without the usual sound cues, people find the experience of being in an anechoic chamber very disorienting and often lose their balance. They also sometimes detect sounds they would not normally perceive, such as the beating of their own heart.

One of the earliest anechoic chambers was designed and built by Leo Beranek and Harvey Sleeper in 1943. Their design is the one upon which most modern anechoic chambers are based. In a lecture titled ‘Indeterminacy’, the avant-garde composer John Cage described his experience when he visited Beranek’s chamber.

“in that silent room, I heard two sounds, one high and one low. Afterward I asked the engineer in charge why, if the room was so silent, I had heard two sounds… He said, ‘The high one was your nervous system in operation. The low one was your blood in circulation.’”

After that visit, he composed his famous work entitled 4’33”, consisting solely of silence and intended to encourage the audience to focus on the ambient sounds in the listening environment.

In his 1961 book ‘Silence,’ Cage expanded on the implications of his experience in the anechoic chamber. “Try as we might to make silence, we cannot… Until I die there will be sounds. And they will continue following my death. One need not fear about the future of music.”

Breaking the sound barrier

Consider a source moving at the speed of sound (Mach 1). The sounds it produces will travel at the same speed as the source, so that in front of the source, each new wavefront is compressed to occur at the same point. A listener placed in front of the source will not detect anything until the source arrives. All the wavefronts add together, creating a wall of pressure. This shock wave will not be perceived as a pitch but as a ‘thump’ as the pressure front passes the listener.

Pilots who have flown at Mach 1 have described a noticeable “barrier” that must be penetrated before achieving supersonic speeds. Traveling within the pressure front results in a bouncy, turbulent flight.

Now consider a sound source moving at supersonic speed, i.e., faster than the speed of sound. In this case, the source will be in advance of the wavefront. So a stationary listener will hear the sound after the source has passed by. The shock wave forms a Mach cone, which is a conical pressure front with the plane at the tip. This cone creates the sonic boom shock wave as a supersonic aircraft passes by. This shock wave travels at the speed of sound, and since it is the combination of all the wavefronts, the listener will hear a quite intense sound. However, supersonic aircraft actually produce two sonic booms in quick succession. One boom comes from the aircraft’s nose and the other one from its tail, resulting in a double thump.
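
The geometry of the cone follows directly from the ratio of the two speeds: the half-angle θ of the Mach cone satisfies sin θ = 1/M, where M is the Mach number. A minimal sketch:

```python
import math

def mach_cone_half_angle(mach):
    """Half-angle of the Mach cone in degrees, from sin(theta) = 1 / M.
    Only defined for supersonic speeds (M > 1); at exactly M = 1 the
    wavefronts pile up into the flat pressure wall described above."""
    if mach <= 1.0:
        raise ValueError("A Mach cone only forms at supersonic speed (M > 1)")
    return math.degrees(math.asin(1.0 / mach))

for m in (1.2, 2.0, 3.0):
    print(m, round(mach_cone_half_angle(m), 1))  # 56.4, 30.0 and 19.5 degrees
```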

The speed of sound varies with temperature and humidity, but not directly with pressure. In air at sea level it is about 343 m/s. In water, however, sound travels far faster (about 1,484 m/s), since the molecules in water are packed more closely than in air and sound is carried by the vibrations of the medium. So the sound barrier can be broken at different speeds depending on air conditions, but it is far more difficult to break underwater.
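
A common linear approximation for the temperature dependence in air is c ≈ 331.3 + 0.606·T, with T in °C (a rough fit near room temperature that ignores the humidity effect mentioned above):

```python
def speed_of_sound_air(temp_celsius):
    """Approximate speed of sound in dry air (m/s), using the common
    linear fit c = 331.3 + 0.606 * T, valid near room temperature."""
    return 331.3 + 0.606 * temp_celsius

print(speed_of_sound_air(20))  # ~343.4 m/s, matching the figure quoted above
```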

The beginning of stereo

Alan and Doreen Blumlein wedding photo

The sound reproduction systems for the early ‘talkie’ movies often had only a single loudspeaker. Because of this, the actors all sounded like they were in the same place, regardless of their position on screen.

In 1931, the electronics and sound engineer Alan Blumlein and his wife Doreen went to see a movie with this monaural sound reproduction. According to Doreen, as they were leaving the cinema, Alan said to her, ‘Do you realise the sound only comes from one person?’ And she replied, ‘Oh does it?’ ‘Yes,’ he said, ‘and I’ve got a way to make it follow the person.’

The genesis of these ideas is uncertain (though it may well have been while watching that movie), but he described them to Isaac Shoenberg, managing director at EMI and Alan’s mentor, in the late summer of 1931. Blumlein detailed his stereo technology in the British patent ‘Improvements in and relating to Sound-transmission, Sound-recording and Sound-reproducing Systems’, which was accepted on June 14, 1933.