John Cage and the anechoic chamber


An acoustic anechoic chamber is a room designed to be free of reverberation (hence non-echoing, or echo-free). The walls, ceiling and floor are usually lined with a sound-absorbent material to minimise reflections and insulate the room from exterior sources of noise. Nearly all sound energy travels away from the source, with almost none reflected back. Thus a listener within an anechoic chamber hears only the direct sound, with no reverberation.

An anechoic chamber effectively simulates a quiet open space of infinite dimension. Thus, such chambers are used to conduct acoustics experiments in ‘free field’ conditions. They are often used to measure the directivity of a microphone, the radiation pattern of a noise source, or the transfer function of a loudspeaker.

An anechoic chamber is very quiet, with noise levels typically in the 10–20 dBA range, close to the threshold of hearing (the quietest anechoic chamber measures −9.4 dBA, well below the threshold of hearing). Without the usual sound cues, people find the experience of being in an anechoic chamber very disorienting and often lose their balance. They also sometimes detect sounds they would not normally perceive, such as the beating of their own heart.

One of the earliest anechoic chambers was designed and built by Leo Beranek and Harvey Sleeper in 1943. Their design is the one upon which most modern anechoic chambers are based. In a lecture titled ‘Indeterminacy,’ the avant-garde composer John Cage described his experience when he visited Beranek’s chamber.

“in that silent room, I heard two sounds, one high and one low. Afterward I asked the engineer in charge why, if the room was so silent, I had heard two sounds… He said, ‘The high one was your nervous system in operation. The low one was your blood in circulation.’”

After that visit, he composed his famous work entitled 4’33”, consisting solely of silence and intended to encourage the audience to focus on the ambient sounds in the listening environment.

In his 1961 book ‘Silence,’ Cage expanded on the implications of his experience in the anechoic chamber. “Try as we might to make silence, we cannot… Until I die there will be sounds. And they will continue following my death. One need not fear about the future of music.”

Breaking the sound barrier

Consider a source moving at the speed of sound (Mach 1). The sounds it produces will travel at the same speed as the source, so that in front of the source, each new wavefront is compressed to occur at the same point. A listener placed in front of the source will not detect anything until the source arrives. All the wavefronts add together, creating a wall of pressure. This shock wave will not be perceived as a pitch but as a ‘thump’ as the pressure front passes the listener.

Pilots who have flown at Mach 1 have described a noticeable “barrier” that must be penetrated before achieving supersonic speeds. Traveling within the pressure front results in a bouncy, turbulent flight.

Now consider a sound source moving at supersonic speed, i.e., faster than the speed of sound. In this case, the source is in advance of the wavefront, so a stationary listener will hear the sound only after the source has passed by. The shock wave forms a Mach cone: a conical pressure front with the aircraft at its tip. This cone creates the sonic boom heard as a supersonic aircraft passes by. The shock wave travels at the speed of sound, and since it is the combination of all the wavefronts, the listener will hear a quite intense sound. In fact, supersonic aircraft produce two sonic booms in quick succession, one from the aircraft’s nose and one from its tail, resulting in a double thump.

The speed of sound varies with temperature and humidity, but not directly with pressure. In air at room temperature (20 °C), it is about 343 m/s. In water, the speed of sound is far higher (about 1,484 m/s), since water is much stiffer (far less compressible) than air, and the speed of sound in a medium rises with its stiffness relative to its density. So the sound barrier can be broken at different speeds depending on air conditions, but it is far harder to break underwater.
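A short sketch in Python makes these relationships concrete, using the standard approximations that the speed of sound in dry air is roughly 331.3 + 0.606·T m/s (T in °C) and that the Mach cone half-angle satisfies sin θ = 1/M:

```python
import math

def speed_of_sound_air(temp_c: float) -> float:
    """Approximate speed of sound in dry air (m/s) at temperature temp_c (deg C)."""
    return 331.3 + 0.606 * temp_c

def mach_cone_half_angle(mach: float) -> float:
    """Half-angle of the Mach cone in degrees: sin(theta) = 1/M, defined for M >= 1."""
    if mach < 1.0:
        raise ValueError("No Mach cone forms below Mach 1")
    return math.degrees(math.asin(1.0 / mach))

print(f"c at 20 C: {speed_of_sound_air(20.0):.1f} m/s")                  # ~343 m/s
print(f"Cone half-angle at Mach 2: {mach_cone_half_angle(2.0):.1f} deg") # 30 deg
```

The faster the source, the narrower the cone: at exactly Mach 1 the ‘cone’ degenerates into the flat wall of pressure described above.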

The beginning of stereo

[Image: Alan and Doreen Blumlein wedding photo]

The sound reproduction systems for the early ‘talkie’ movies often had only a single loudspeaker. Because of this, the actors all sounded like they were in the same place, regardless of their position on screen.

In 1931, the electronics and sound engineer Alan Blumlein and his wife Doreen went to see a movie with this monaural sound reproduction. According to Doreen, as they were leaving the cinema, Alan said to her, ‘Do you realise the sound only comes from one person?’ And she replied, ‘Oh does it?’ ‘Yes,’ he said, ‘And I’ve got a way to make it follow the person’.

The genesis of these ideas is uncertain (though it might have been while watching the movie), but he described them to Isaac Shoenberg, managing director at EMI and Alan’s mentor, in the late summer of 1931. Blumlein detailed his stereo technology in the British patent “Improvements in and relating to Sound-transmission, Sound-recording and Sound-reproducing systems,” which was accepted June 14, 1933.
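Among many other things, Blumlein’s patent covered reproducing a sound’s position by varying the relative amplitudes of two loudspeaker channels. A minimal sketch of the idea is below; the constant-power pan law used here is a modern convention, not a detail taken from the patent.

```python
import math

def constant_power_pan(sample: float, position: float) -> tuple[float, float]:
    """Pan a mono sample between left and right channels.
    position: 0.0 = hard left, 1.0 = hard right. Equal power at the centre."""
    angle = position * math.pi / 2
    return sample * math.cos(angle), sample * math.sin(angle)

# A sound 'following' an actor across the screen: sweep position left to right.
for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
    left, right = constant_power_pan(1.0, pos)
    print(f"pos={pos:.2f}  L={left:.3f}  R={right:.3f}")
```

The constant-power law keeps the perceived loudness roughly steady as the source moves, which is why it is preferred over simple linear crossfading.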


The serendipitous invention of the wah-wah pedal


The invention of the first wah-wah pedal, in 1966, is attributed to Brad Plunkett, who worked at Warwick Electronics Inc., owner of the Thomas Organ Company. Warwick Electronics had acquired the Vox name because of the brand’s popularity and its association with the Beatles. Its subsidiary, the Thomas Organ Company, needed a redesign of the Vox amplifier, which had a midrange boost, so that it would be less expensive to manufacture.

In a 2005 interview (M. Vdovin, “Artist Interview: Brad Plunkett,” Universal Audio WebZine, vol. 3, October 2005), Brad Plunkett said, “I came up with a circuit that would allow me to move this midrange boost … As it turned out, it sounded absolutely marvelous while you were moving it. It was okay when it was standing still, but the real effect was when you were moving it and getting a continuous change in harmonic content. We turned that on in the lab and played the guitar through it… I turned the potentiometer and he played a couple licks on the guitar, and we went crazy.

A couple of years later… somebody said to me one time, ‘You know Brad, I think that thing you invented changed music.’”
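At its heart, the wah effect is exactly what Plunkett describes: a resonant midrange boost whose centre frequency is swept as the pedal moves. Below is a minimal sketch of the idea in Python, using a state-variable bandpass filter whose centre frequency is modulated by a slow sine LFO standing in for the pedal; the frequency range, sweep rate and resonance are illustrative choices, not values from Plunkett’s circuit.

```python
import numpy as np

def wah(x, fs, f_min=400.0, f_max=2000.0, rate_hz=1.0, damping=0.3):
    """Sweep a resonant state-variable bandpass filter over the input signal.
    The pedal position is simulated by a slow sinusoidal LFO."""
    n = np.arange(len(x))
    # Centre frequency oscillates between f_min and f_max at rate_hz.
    fc = f_min + (f_max - f_min) * 0.5 * (1 + np.sin(2 * np.pi * rate_hz * n / fs))
    low = band = 0.0
    y = np.zeros(len(x))
    for i, s in enumerate(x):
        f = 2 * np.sin(np.pi * fc[i] / fs)  # filter coefficient from centre frequency
        low += f * band
        high = s - low - damping * band      # smaller damping = sharper resonance
        band += f * high
        y[i] = band                          # bandpass output
    return y

fs = 44100
noise = np.random.randn(fs)                  # one second of white noise as a test signal
out = wah(noise, fs)
```

Even on white noise, the moving resonance produces the characteristic vowel-like ‘wah’, echoing Plunkett’s observation that the magic was in the motion, not the static boost.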

Acoustic reverberators

Today, reverb is most often added to a recording using artificial reverberators, such as software plug-ins or digital reverb hardware units. But there are a lot of other approaches.

Many recording studios have used special rooms known as reverberation chambers to add reverb to a performance. Elevator shafts and stairwells (as in New York City’s Avatar Recording Studio) work well as highly reverberant rooms. The reverb can also be controlled by adding absorptive materials like curtains and rugs.

Spring reverbs are found in many guitar amplifiers and have been used in Hammond organs. The audio signal is coupled to one end of the spring by a transducer that creates waves travelling through the spring. At the far end, another transducer converts the motion of the spring back into an electrical signal, which is then added to the original sound. When a wave arrives at an end of the spring, part of its energy is reflected. However, these reflections have different delays and attenuations from what would be found in a natural acoustic environment, and the waves in the spring may interact with one another, resulting in a slightly unusual (though not unpleasant) reverb sound.

Often several springs with different lengths and tensions are enclosed together in a metal box, known as the reverb pan, and used together. This avoids uniform behaviour and creates a more realistic, pseudorandom series of echoes. In most reverb units, though, the spring lengths and tensions are fixed in the design process, and not left to the user to control.
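The character of these devices, many echoes at different delays and attenuations, can be roughly sketched in software with a few parallel feedback comb filters. The delay times and gains below are arbitrary illustrative choices, not measurements of any real spring or pan:

```python
import numpy as np

def comb_filter(x, delay_samples, feedback):
    """Feedback comb filter: each pass through the delay line is one decaying echo."""
    y = np.zeros(len(x))
    buf = np.zeros(delay_samples)
    idx = 0
    for i, s in enumerate(x):
        out = buf[idx]
        buf[idx] = s + feedback * out  # re-inject the echo, like a reflected wave
        y[i] = out
        idx = (idx + 1) % delay_samples
    return y

def crude_reverb(x, fs, wet=0.3):
    """Sum several combs with non-uniform delays to avoid one metallic echo."""
    delays_ms = [29.7, 37.1, 41.1, 43.7]  # mutually non-harmonic delay times
    wet_sum = sum(comb_filter(x, int(fs * d / 1000.0), 0.7) for d in delays_ms)
    return x + wet * wet_sum / len(delays_ms)
```

Using several mutually non-harmonic delays mirrors the multi-spring trick described above: no single periodic echo dominates, so the result sounds denser and more natural.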

The plate reverb is similar to a spring reverb, but instead of springs, the transducers are attached at several locations on a metal plate. These transducers send vibrations through the plate, and reflections are produced whenever a wave reaches the plate’s edge. The location of the transducers and the damping of the plate can be adjusted to control the reverb. However, plate reverbs are expensive and bulky, and hence not widely used.

Water tank reverberators have also been used. Here, the audio signal is modulated with an ultrasonic signal and transmitted through a tank of water. The output is then demodulated, resulting in the reverberant output sound. Other reverberators include pipes with microphones placed at various points.

These acoustic and analogue reverberators can be interesting to create and use, but they lack the simplicity and ease of use of digital reverberators. Ultimately, the choice of implementation is a matter of taste.

High resolution audio – finally, rigorously put to the test. And the verdict is…

Yes, you can hear a difference! (but it is really hard to measure)

See the June 2016 article in the Journal of the Audio Engineering Society, “A meta-analysis of high resolution audio perceptual evaluation.”

For years, I’ve been hearing people in the audio engineering community argue over whether or not it makes any difference to record, mix and play back audio at better than CD quality (44.1 kHz, 16 bit) or better than production quality (48 kHz, 16 bit). Some people swear they can hear a difference, others have stories about someone they met who could always pick out the differences, and others say they’re all just fooling themselves. A few people could mention a study or two that supported their side, but the arguments never seemed to get resolved.

Then, a bit more than a year ago I was at a dinner party where a guy sitting across from me was about to complete his PhD in meta-analysis. Meta-analysis? I’d never heard of it. But the concept, analysing and synthesising the results of many studies to get a more definitive answer and gain more insights and knowledge, really intrigued me. So it was about time that someone tried this on the question of perception of hi-res audio.

Unfortunately, no one I asked was willing to get involved. A couple of experts thought there couldn’t be enough data out there to do the meta-analysis. A couple more thought that the type of studies (not your typical clinical trial with experimental and control groups) couldn’t be analysed using the established statistical approaches in meta-analysis. So, I had to do it myself. This also meant I had to be extra careful, and seek out as much advice as possible, since no one was looking over my shoulder to tell me when I was wrong or stupid.

The process was fascinating. The more I looked, the more studies of high resolution audio perception I uncovered. And my main approach for finding them (start with a few key papers, then look at everything they cited and everything that cited them, and repeat with any further interesting papers found) was not mentioned in the guidance to meta-analysis that I read. Then getting the data was interesting. Some researchers had it all prepared in handy, well-labelled spreadsheets, another found it in an old filing cabinet, and one had never kept it at all! And for some data, I had to write little programs to reverse engineer the raw data from T values for trials with finite outcomes.

Formal meta-analysis techniques could be applied, and I gained a strong appreciation for both the maths behind them and the general guidance that helps ensure rigour and avoid bias in a meta-study. But the results, in a few places, disagreed with what is typical. The potential biases in the studies seemed to occur more often in those that did not reject the null hypothesis, i.e., those that found no evidence for discriminating between high resolution and CD quality audio. Evidence of publication bias mostly went away if one put the studies into subgroups. And the use of binomial probabilities allowed the statistical approaches of meta-analysis to be applied to studies without a control group (‘no effect’ can be determined from binomial probabilities alone).
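To make the binomial point concrete: under the null hypothesis of ‘no effect’, each trial of a forced-choice discrimination test is a coin flip, so the probability of a listener’s score arising by chance follows directly from the binomial distribution, no control group required. A quick sketch (the 60-out-of-100 score is invented purely for illustration):

```python
from math import comb

def binomial_p_value(n_correct: int, n_trials: int, p_chance: float = 0.5) -> float:
    """One-sided p-value: probability of n_correct or more successes by chance alone."""
    return sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))

# Hypothetical listening test: 60 correct discriminations out of 100 trials.
print(f"p = {binomial_p_value(60, 100):.4f}")  # ~0.028, unlikely under 'no effect'
```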

The end result was that people could, sometimes, perceive the difference between hi-res and CD quality audio. But they needed to be trained, and the test needed to be carefully designed. And it was nice to see that experiments and analysis are generally a little better today than in the past, so research is advancing. Still, most tests had some biases towards false negatives. So perhaps careful experiments, incorporating all the best approaches, may show this perception even more strongly.

Meta-analysis is truly fascinating, and audio engineering, psychoacoustics, music technology and related fields need more of it.

The Swoosh of the Sword

When we watch Game of Thrones or play the latest Assassin’s Creed, the sound effect added to a sword being swung adds realism, drama and overall excitement to our viewing experience.

There are a number of methods for producing sword sound effects, from filtering white noise with a bandpass filter to solving the fundamental equations of fluid dynamics using finite volume methods. One method investigated by the Audio Engineering research team at QMUL was to use semi-empirical equations from the aeroacoustics community as an alternative to solving the full Navier–Stokes equations. Running in real time, these provide computationally efficient methods of achieving accurate results – we can model any sword, swung at any speed, and even adjust the model to replicate the sound of a baseball bat or golf club!

The starting point for these sound effect models is the Aeolian tone (see the previous blog entry). The Aeolian tone is the sound generated as air flows around an object – in the case of our model, a cylinder. In the previous blog we described the creation of a sound synthesis model for the Aeolian tone, including a link to a demo version of the model.
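The fundamental frequency of the Aeolian tone follows the Strouhal relation f = St·v/d, where v is the airspeed, d is the cylinder diameter, and St ≈ 0.2 for a cylinder across a wide range of conditions. A quick illustration (the numbers are examples only):

```python
def aeolian_tone_hz(airspeed_ms: float, diameter_m: float, strouhal: float = 0.2) -> float:
    """Fundamental frequency of the Aeolian tone for flow past a cylinder."""
    return strouhal * airspeed_ms / diameter_m

# A thin blade section (5 mm) moving at 20 m/s whistles at around 800 Hz.
print(f"{aeolian_tone_hz(20.0, 0.005):.0f} Hz")
```

This is why a thin fencing blade swishes at a higher pitch than a thick club: for the same swing speed, a smaller diameter gives a higher tone.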

For a sword, we take a number of Aeolian tone models and place them at different positions along a virtual sword. This is shown below:


Each Aeolian tone model is called a compact source. It can be seen that more are placed at the tip of the sword than at the hilt, because the acoustic intensity is far higher for faster-moving sources. Six sources are placed at the tip, spaced at a distance of 7 × the sword diameter. This spacing is based on the distance over which the aerodynamic effects become de-correlated, although this is a simplification. One source is placed at the hilt, and the final source is equidistant between the last tip source and the hilt.

The complete model is presented in a GUI as shown below:


Referring to both of the previous figures, it can be seen that the user is able to move the observer position within a 3D space. The thickness of the blade can be set at the tip and the hilt, as well as the length of the blade. The thickness is then linearly interpolated over the blade length so that each source’s diameter can be calculated.
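Putting the last two paragraphs together, here is a rough sketch of how the source positions and diameters might be derived: six tip sources spaced at 7 × the tip diameter, one at the hilt, one midway between, and thickness linearly interpolated along the blade. The function and its dimensions are illustrative, not the team’s actual implementation:

```python
import numpy as np

def sword_sources(length_m, tip_diam_m, hilt_diam_m, n_tip_sources=6):
    """Place compact sources along the blade and interpolate each source's diameter.
    Positions are measured from the hilt (0.0) to the tip (length_m)."""
    spacing = 7 * tip_diam_m                      # de-correlation spacing near the tip
    tip_positions = [length_m - i * spacing for i in range(n_tip_sources)]
    last_tip = tip_positions[-1]
    # One source at the hilt, one midway to the tip cluster, then the tip cluster.
    positions = [0.0, last_tip / 2] + tip_positions[::-1]
    # Linearly interpolate blade thickness from hilt to tip for each source.
    diameters = [np.interp(p, [0.0, length_m], [hilt_diam_m, tip_diam_m])
                 for p in positions]
    return positions, diameters

positions, diameters = sword_sources(length_m=0.9, tip_diam_m=0.005, hilt_diam_m=0.01)
```

Each (position, diameter) pair then drives one Aeolian tone model, with the source’s speed during the swing setting its frequency via the Strouhal relation above.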

The azimuth and elevation of the sword pre- and post-swing can be set. The strike position is fixed at an azimuth of 180 degrees, and this is the point where the sword reaches its maximum speed. The user sets the top speed of the tip from the GUI. The Prime button pushes all the variables through to the correct places in the equations, and the Go button triggers the swing.

It can be seen that there are 4 presets. Model 1 is a thin fencing-type sword and Model 2 is a thicker sword. To test the versatility of the model, we decided to try modelling a golf club; the preset PGA sets the model to implement this. The golf club model makes the diameter of the source at the tip much larger, to represent the striking face of a golf club. It was found that those unfamiliar with golf did not identify the sound immediately, so a simple golf ball strike sound is synthesised as the club reaches top speed.

To test versatility further, we created a model to replicate the sound of a baseball bat (preset MLB). This is exactly the same model as the sword, with the dimensions adjusted to the length of a bat plus its tip and hilt thickness. A video with all the preset sounds is given below. This includes two sounds created by a model with reduced physics, LoQ1 & LoQ2, which were created to investigate whether listeners perceive any difference.

The demo model was connected to the animation of a knight character in the Unity game engine. The speed of the sword is directly mapped from the animation to the sound effect model and the model observer position set to the camera position. A video of the result is given below: