Bounce, bounce, bounce . . .


Another in our continuing exploration of everyday sounds (Screams, Applause, Pouring water) is the bouncing ball. It’s a nice one for a blog entry, since only a small number of papers focus on bouncing, which means we can give a good overview of the field. It’s also one of those sounds we can identify very clearly; we all know it when we hear it. And it has two components that can be treated separately: the sound of a single bounce and the timing between bounces.

Let’s consider the second aspect. If we drop a ball from a certain height and ignore any drag, the time it takes to hit the ground is completely determined by gravity. When it hits the ground, some energy is absorbed on impact: if it travels downwards with speed v1 just before impact, it travels back upwards with a smaller speed v2 just after. The ratio v2/v1 is called the coefficient of restitution (COR). A high COR means the ball rebounds almost to its original height; a low COR means most of the energy is absorbed and it only travels up a short distance.

Knowing the COR, one can use simple equations of motion to determine the time between each pair of bounces. And since those times form a convergent geometric series, the total time until the ball stops bouncing is finite and can be computed exactly. Conversely, measuring the coefficient of restitution from the times between bounces is literally a tabletop physics experiment (Aguiar 2003, Farkas 2006, Schwarz 2013). Kinetic energy depends on the square of the velocity, so we also know how much energy is lost with each bounce, which gives an idea of how the sound levels of successive bounces should decrease.

[The derivation of all this has been left to the reader 😊. But again, it’s a straightforward application of the equations of motion that give the time dependence of position and velocity under constant acceleration.]
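To make this concrete, here is a minimal Python sketch (my own illustration, not taken from any of the cited papers; the drop height, COR and cut-off are made-up values) that computes the impact times of a dropped ball from its COR:

```python
import math

def bounce_times(h0=1.0, cor=0.8, g=9.81, v_min=0.01):
    """Impact times (s) for a ball dropped from height h0 (m).

    cor is the coefficient of restitution; bouncing is cut off once
    the rebound speed falls below v_min (m/s).
    """
    t = math.sqrt(2 * h0 / g)      # time of the first impact
    v = math.sqrt(2 * g * h0)      # speed just before that impact
    times = [t]
    while v * cor > v_min:
        v *= cor                   # rebound speed after the impact
        t += 2 * v / g             # flight time up and back down
        times.append(t)
    return times

times = bounce_times()
print(f"{len(times)} impacts, the last at {times[-1]:.2f} s")

# The flight times form a geometric series, so the total bouncing time
# has a closed form: t_total = sqrt(2*h0/g) * (1 + 2*cor/(1 - cor)).
# And since kinetic energy goes as v**2, each impact arrives with cor**2
# of the previous one's energy, i.e. roughly 20*log10(cor) dB quieter.
```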

It’s not that hard to extend this approach, for instance by including air drag or sloped surfaces. But if you put the ball on a vibrating platform, all sorts of wonderful nonlinear behaviour can be observed: chaos, locking and chattering (Luck 1993).

For instance, have a look at the following video, which shows some interesting behaviour where bouncing balls all seem to organise onto one side of a partition.

So much for the timing of bounces, but what about the sound of a single bounce? Well, Nagurka (2004) modelled the bounce as a mass-spring-damper system, giving the time of contact for each bounce. This adds a little more realism by capturing some aspects of the bounce sound. Stoelinga (2007) did a detailed analysis of bouncing and rolling sounds. It has a wealth of useful information and deep insights into both the physics and perception of bouncing, but stops short of describing how to synthesize a bounce.
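For a linear mass-spring-damper, contact lasts roughly half a period of the damped oscillation, which gives a quick estimate of how long each impact excites the surface. A minimal sketch, assuming illustrative parameter values rather than anything from Nagurka’s paper:

```python
import math

def contact_time(m=0.05, k=5e4, c=5.0):
    """Contact duration (s) of an impact modelled as a mass-spring-damper.

    m: ball mass (kg), k: contact stiffness (N/m), c: damping (N*s/m).
    Contact lasts about half a damped-oscillation period, from first
    touch until the spring pushes the ball back off the surface.
    """
    wn = math.sqrt(k / m)                # natural frequency (rad/s)
    zeta = c / (2 * math.sqrt(k * m))    # damping ratio
    wd = wn * math.sqrt(1 - zeta ** 2)   # damped frequency (rad/s)
    return math.pi / wd

print(f"contact time ~ {contact_time() * 1000:.2f} ms")  # ~3 ms here
```

A stiffer contact (larger k) gives a shorter, brighter-sounding impact, which matches everyday experience.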

To really capture the sound of a bounce, something like modal synthesis should be used. That is, one should identify the modes that are excited by the impact of a given ball on a given surface, and their decay rates. Farnell measured these modes for some materials and used those values to synthesize bounces in Designing Sound. But perhaps the most detailed analysis and generation of such sounds, at least as far as I’m aware, is in the work of Davide Rocchesso and his colleagues, leaders in the field of sound synthesis and sound design. They have produced a wealth of useful work in the area, but an excellent starting point is The Sounding Object.
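As a rough illustration of the idea, here is a sketch that combines modal synthesis with the bounce timing above. The modal frequencies, decay rates and amplitudes are invented for the example; a real model would use values measured for a particular ball and surface, as Farnell did:

```python
import math
import numpy as np

SR = 44100  # sample rate (Hz)

def modal_impact(freqs, decays, amps, dur=0.4, sr=SR):
    """One impact as a sum of exponentially decaying sinusoids."""
    t = np.arange(int(dur * sr)) / sr
    return sum(a * np.exp(-d * t) * np.sin(2 * math.pi * f * t)
               for f, d, a in zip(freqs, decays, amps))

# Invented modes: frequencies (Hz), decay rates (1/s), excitations.
FREQS, DECAYS, AMPS = [180.0, 410.0, 950.0], [30.0, 45.0, 80.0], [1.0, 0.5, 0.25]

g, cor, h0 = 9.81, 0.8, 1.0
v0 = math.sqrt(2 * g * h0)                  # speed at the first impact
t_total = math.sqrt(2 * h0 / g) * (1 + 2 * cor / (1 - cor))
out = np.zeros(int((t_total + 0.5) * SR))   # room for the last ring-out

v, t_hit = v0, math.sqrt(2 * h0 / g)
while v > 0.1:
    # Excitation scales with impact speed, so each bounce is quieter.
    hit = modal_impact(FREQS, DECAYS, [a * v / v0 for a in AMPS])
    i = int(t_hit * SR)
    out[i:i + len(hit)] += hit
    v *= cor                                # slower rebound each time
    t_hit += 2 * v / g                      # and a shorter flight

# `out` now holds the bounce sound; normalise and save it with e.g.
# scipy.io.wavfile.write("bounce.wav", SR, out / np.abs(out).max())
```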

Are you aware of any other interesting research about the sound of bouncing? Let us know.

Next week, I’ll continue talking about bouncing sounds with discussion of ‘the audiovisual bounce-inducing effect.’


  • Aguiar CE, Laudares F. Listening to the coefficient of restitution and the gravitational acceleration of a bouncing ball. American Journal of Physics. 2003 May;71(5):499-501.
  • Farkas N, Ramsier RD. Measurement of coefficient of restitution made easy. Physics Education. 2006 Jan;41(1):73.
  • Luck JM, Mehta A. Bouncing ball with a finite restitution: chattering, locking, and chaos. Physical Review E. 1993;48(5):3988.
  • Nagurka M, Huang S. A mass-spring-damper model of a bouncing ball. American Control Conference, 2004. Vol. 1. IEEE; 2004.
  • Schwarz O, Vogt P, Kuhn J. Acoustic measurements of bouncing balls and the determination of gravitational acceleration. The Physics Teacher. 2013 May;51(5):312-3.
  • Stoelinga C, Chaigne A. Time-domain modeling and simulation of rolling objects. Acta Acustica united with Acustica. 2007 Mar 1;93(2):290-304.

12th International Audio Mostly Conference, London 2017

by Rod Selfridge & David Moffat. Photos by Beici Liang.

Audio Mostly – Augmented and Participatory Sound and Music Experiences – was held at Queen Mary University of London from 23 to 26 August. The conference brought together a wide variety of audio and music designers, technologists, practitioners and enthusiasts from all over the world.

The opening day of the conference ran in parallel with the Web Audio Conference, also held at Queen Mary, with sessions open to all delegates. The day opened with a joint keynote from Andy Farnell, the computer scientist and author of the highly influential sound effects book Designing Sound. Andy covered a number of topics and invited audience participation, which grew into a discussion of intellectual property and the pros and cons of doing away with it.

Andy Farnell

The paper session then opened with an interesting talk by Luca Turchet from Queen Mary’s Centre for Digital Music. Luca presented his paper on The Hyper Mandolin, an augmented musical instrument allowing real-time control of digital effects and sound generators. The session concluded with the second talk I’ve seen in as many months by Charles Martin. This time Charles presented Deep Models for Ensemble Touch-Screen Improvisation, in which an artificial neural network model was used to implement a live performance and synthesise the touch gestures of three virtual players.

In the afternoon, I got to present my paper, co-authored by David Moffat and Josh Reiss, on a Physically Derived Sound Synthesis Model of a Propeller. Here I continue the theme of my PhD by applying equations obtained through fluid dynamics research to generate authentic sound synthesis models.

Rod Selfridge

The final session of the day saw Geraint Wiggins, our former Head of School at EECS, Queen Mary, present Callum Goddard’s work on designing Computationally Creative Musical Performance Systems, looking at questions like what makes a performance virtuosic and how this can be implemented using the Creative Systems Framework.

The oral sessions continued throughout Thursday. One presentation I found particularly interesting was by Anna Xambó, titled Turn-Taking and Chatting in Collaborative Music Live Coding. In this research, the authors explored collaborative music live coding using the live coding environment and pedagogical tool EarSketch, focusing on the benefits to both performance and education.

Thursday’s keynote was by Goldsmiths’ Rebecca Fiebrink, who was mentioned in a previous blog. She discussed how machine learning can be used to support human creative experiences, aiding human designers in the rapid prototyping and refinement of new interactions with sound and media.

Rebecca Fiebrink

The Gala Dinner and Boat Cruise was held on Thursday evening, when all the delegates were taken on a boat up and down the Thames, seeing the sights and enjoying food and drink. Prizes were awarded, and appreciation was expressed to the excellent volunteers, technical teams, committee members and chairpersons who brought the event together.

Tower Bridge

A session on Sports Augmentation and Health/Safety Monitoring was held on Friday morning, which included a number of excellent talks. The standout presentation of the conference, for me, came from Tim Ryan, who presented his paper on 2K-Reality: An Acoustic Sports Entertainment Augmentation for Pickup Basketball Play Spaces. Tim re-contextualises sounds appropriated from a National Basketball Association (NBA) video game to create interactive sonic experiences for players and spectators. I was lucky enough to have a play around with this system during a coffee break, and I can easily see how it could give an amazing experience to basketball enthusiasts, young and old, as well as drawing in a crowd to share it.

Workshops ran on Friday afternoon. I went to Andy Farnell’s Zero to Hero Pure Data workshop, where participants managed to go from scratch to having working bass drum, snare and hi-hat synthesis models. Andy showed how quickly these could be developed and included in a simple sequencer to give a basic drum machine.

Throughout the conference, a number of fixed media works and demos were available for delegates to view, as well as poster sessions where authors presented their work.

Alessia Milo

Live music events were held on both Wednesday and Friday. The Web Audio Mostly Concert on Wednesday was a joint event for delegates of Audio Mostly and the Web Audio Conference. It included an augmented reality musical performance, a human-playable robotic zither, the Hyper Mandolin and DJs.

The Audio Mostly Concert on the Friday included a Transmusicking performance from a laptop orchestra spread around the world, in which 14 different performers collaborated online; the performance was curated by Anna Xambó. Alan Chamberlain and David De Roure performed The Gift of the Algorithm, a computer music performance inspired by Ada Lovelace. The wood and the water, by Balandino Di Donato and Eleanor Turner, was an immersive performance built on interactivity and gestural control of both a harp and the lighting. GrainField, by Benjamin Matuszewski and Norbert Schnell, was an interactive audio performance that demanded the involvement of the entire audience for the performance to exist; this collective improvisational piece demonstrated how digital technology can really be used to augment the traditional musical experience. GrainField was awarded the prize for best musical performance.

Adib Mehrabi

The final day of the conference was a full day’s workshop. I attended the one titled Designing Sounds in the Cloud. The morning was spent presenting two ongoing European Horizon 2020 projects, Audio Commons and Rapid-Mix. The Audio Commons initiative aims to promote the use of open audio content by providing a digital ecosystem that connects content providers and creative end users. The Rapid-Mix project focuses on multimodal and procedural interactions, leveraging rich sensing capabilities, machine learning and embodied ways to interact with sound.

Before lunch we took part in a sound walk around the Queen Mary Mile End campus, with one member of each group blindfolded, telling the others what they could hear. The afternoon session had teams of participants designing and prototyping new ways to use the APIs from each of the two Horizon 2020 projects, very much in the feel of a hackathon. We devised a system which captured expressive Italian hand gestures using the Leap Motion and classified them using machine learning techniques. Then, in Pure Data, each new classification triggered a sound effect taken from the Freesound website (part of the Audio Commons project). Had time allowed, the project would have been extended to have Pure Data link to the Audio Commons API and play sound effects straight from the web.

Overall, I found the conference informative, yet informal, enjoyable and inclusive. The social events were spectacular and ones that will be remembered by delegates for a long time.