What we did in 2018

2018 is coming to an end, and everyone is rushing to get their ‘Year in Review’ articles out. We’re no different in that regard, only we’re going to do it in two parts: first, what we have been doing this year, and then a second blog entry reviewing all the great breakthroughs and interesting research results in audio engineering, psychoacoustics, sound synthesis and related fields.

But first, let’s talk about us. 🙂

I think we’ve all done some wonderful research this year, and the Audio Engineering team here can be proud of the results and progress.

Social Media:

First off, we’ve increased our social media presence tremendously:

• This blog, intelligentsoundengineering.wordpress.com/ has almost 22,000 views, with 1,711 followers, mostly through other social media.

• Our twitter account, twitter.com/IntelSoundEng has 886 followers. Not huge, but growing and doing well for a research-focused feed.

• Our Youtube channel, www.youtube.com/user/IntelligentSoundEng has over 20,000 views and 206 subscribers. Which reminds me, I’ve got some more videos to put up.

If you haven’t already, subscribe to the feeds and tell your friends 😉 .

Awards:

Last year’s three awards were exceptional. This year I won Queen Mary University of London’s Bruce Dickinson Entrepreneur of the Year award. Here’s a little video featuring all the shortlisted nominees (I start about 50 seconds in).

I gave the keynote talk at this year’s Digital Audio Effects Conference. And, not exactly an award but still a big deal, I gave my inaugural professorship lecture, titled ‘Do you hear what I hear? The science of everyday sounds’.

People:

This was the year everyone graduated!

David Moffat, Yonghao Wang, Dave Ronan, Josh Mycroft and Rod Selfridge all successfully defended their PhDs. They did amazing work and are all continuing to impress.

Parham Bahadoran and Tom Vassallo started exciting positions at AI Music, and Brecht de Man started with Semantic Audio. Expect great things from both those companies. There are lots of others who moved around, too many to mention.

Grants and projects:

We finished the Cross-adaptive processing for musical intervention project and the Autonomous Systems for Sound Integration and GeneratioN (ASSIGN) InnovateUK project. We’ve been working closely with industry on a variety of projects, especially with RPPtv, who are funding Emmanouil Chourdakis’s PhD and collaborated with us on InnovateUK projects. We are starting a very interesting ICASE Studentship with the BBC (more on that in another entry), and may soon start a studentship with Yamaha. We also formed the spin-out company FXive, which hopefully will be able to launch its product soon.

Publications:

We had a great year for publications. I’ve listed all the ones I can think of below.

Journal articles, book chapters and conference papers

  1. W. Hu, T. Ma, Y. Wang, F. Xu and J. D. Reiss, ‘TDCS: a new scheduling framework for real-time multimedia OS,’ International Journal of Parallel, Emergent and Distributed Systems, pp. 1-16, 2018
  2. R. Selfridge, D. Moffat, E. Avital and J. D. Reiss, ‘Creating Real-Time Aeroacoustic Sound Effects Using Physically Derived Models,’ Journal of the Audio Engineering Society, 66 (7/8), pp. 594–607, July/August 2018, DOI: https://doi.org/10.17743/jaes.2018.0033
  3. J. D. Reiss, Ø. Brandtsegg, ‘Applications of cross-adaptive audio effects: automatic mixing, live performance and everything in between,’ Frontiers in Digital Humanities, 5 (17), 28 June 2018
  4. D. Moffat and J. D. Reiss, ‘Perceptual Evaluation of Synthesized Sound Effects,’ ACM Transactions on Applied Perception, 15 (2), April 2018
  5. A. Milo, N. Bryan-Kinns and J. D. Reiss, ‘Graphical Research Tools for Acoustic Design Training: Capturing Perception in Architectural Settings,’ in Handbook of Research on Perception-Driven Approaches to Urban Assessment and Design, pp. 397-434, IGI Global, 2018
  6. H. Peng and J. D. Reiss, ‘Why Can You Hear a Difference between Pouring Hot and Cold Water? An Investigation of Temperature Dependence in Psychoacoustics,’ 145th AES Convention, New York, Oct. 2018
  7. N. Jillings, B. De Man, R. Stables, J. D. Reiss, ‘Investigation into the Effects of Subjective Test Interface Choice on the Validity of Results.’ 145th AES Convention, New York, Oct. 2018
  8. P. Bahadoran, A. Benito, W. Buchanan and J. D. Reiss, ‘FXive: investigation and implementation of a sound effect synthesis service,’ International Broadcasting Convention (IBC), Amsterdam, 2018
  9. M. A. Martinez Ramirez and J. D. Reiss, ‘End-to-end equalization with convolutional neural networks,’ Digital Audio Effects (DAFx), Aveiro, Portugal, Sept. 4–8 2018.
  10. D. Moffat and J. D. Reiss, ‘Objective Evaluations of Synthesised Environmental Sounds,’ Digital Audio Effects (DAFx), Aveiro, Portugal, Sept. 4–8 2018
  11. W. J. Wilkinson, J. D. Reiss, D. Stowell, ‘A Generative Model for Natural Sounds Based on Latent Force Modelling,’ Arxiv pre-print version. International Conference on Latent Variable Analysis and Signal Separation, Guildford, UK, July 2018
  12. E. T. Chourdakis and J. D. Reiss, ‘From my pen to your ears: automatic production of radio plays from unstructured story text,’ 15th Sound and Music Computing Conference (SMC), Limassol, Cyprus, 4-7 July, 2018
  13. R. Selfridge, J. D. Reiss and E. Avital, ‘Physically Derived Synthesis Model of an Edge Tone,’ Audio Engineering Society Convention 144, May 2018
  14. A. Pras, B. De Man and J. D. Reiss, ‘A Case Study of Cultural Influences on Mixing Practices,’ Audio Engineering Society Convention 144, May 2018
  15. J. Flynn and J. D. Reiss, ‘Improving the Frequency Response Magnitude and Phase of Analogue-Matched Digital Filters,’ Audio Engineering Society Convention 144, May 2018
  16. P. Bahadoran, A. Benito, T. Vassallo and J. D. Reiss, ‘FXive: A Web Platform for Procedural Sound Synthesis,’ Audio Engineering Society Convention 144, May 2018

 

See you in 2019!

Congratulations Dr. Rod Selfridge!

This afternoon one of our PhD student researchers, Rod Selfridge, successfully defended his PhD. The form of these exams, or vivas, varies from country to country, and even institution to institution, as we discussed previously. Here, it’s pretty gruelling: behind closed doors, with two expert examiners probing every aspect of the PhD.

Rod’s PhD was on ‘Real-time sound synthesis of aeroacoustic sounds using physical models.’ Aeroacoustic sounds are those generated from turbulent fluid motion or aerodynamic forces, like wind whistling or the swoosh of a sword. But when researchers simulate such phenomena, they usually use computationally expensive approaches. If you need to analyse airplane noise, it might be okay to spend hours of computing time for a few seconds of sound, but you can’t use that approach in games or virtual reality. The alternative is procedural audio, which involves real-time, controllable sound generation. But that is usually not based on the actual physics that generated the sound; for complicated sounds, at best it is inspired by the physics.

Rod wondered if physical models could be implemented in a procedural audio context. For this, he took a fairly novel approach. Physical modelling often involves gridding up a space and computing the interaction between each grid element, as in finite difference time domain methods. But there are equations describing many aspects of aeroacoustics, so why not build them directly into the model? It’s the difference between modelling a bouncing ball by simulating the full dynamics of the space it moves through, and simply applying Newton’s laws of motion. Rod took the latter approach. Here’s a slick video summarising what the PhD is about.
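To give a flavour of the equation-driven approach, here is a minimal sketch (illustrative only, not Rod’s actual model; the function name and example values are my own). The pitch of an Aeolian tone, the whistle of wind flowing past a wire or cylinder, is governed by the well-known Strouhal relation f = St · u / d, where St is roughly 0.2 for a cylinder across a wide range of flow conditions:

```python
import math

def aeolian_tone_frequency(airspeed, diameter, strouhal=0.2):
    """Fundamental vortex-shedding frequency (Hz) for air flowing past a
    cylinder, from the Strouhal relation f = St * u / d.

    airspeed: flow speed in m/s; diameter: cylinder diameter in metres.
    """
    if airspeed <= 0 or diameter <= 0:
        raise ValueError("airspeed and diameter must be positive")
    return strouhal * airspeed / diameter

# A 2 mm wire in a 10 m/s wind whistles at roughly 1 kHz.
print(aeolian_tone_frequency(10.0, 0.002))
```

In a real-time synthesiser, airspeed and diameter become live control parameters, and a formula like this is cheap enough to re-evaluate on every audio block, which is exactly what makes the equation-based route viable for games and VR, where grid-based simulation is not.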

It worked. He was able to build real-time, interactive physical models of propeller sounds, Aeolian tones, cavity tones, edge tones, an Aeolian harp and a bullroarer. He won the Silver Design Award and Best Student Paper Award at the 141st AES Convention, and the Best Paper Award at the Sound and Music Computing conference. And he produced some great demonstration videos of his work.
Rod also contributed a lot of great entries to this blog.

So congratulations to Dr. Rod Selfridge, and best of luck with his future endeavours. 🙂

This is the first blog entry I’ve written for a graduating PhD student. I really should do it for all of them; they’ve all been doing great stuff.

And finally, here’s a list of all Rod’s papers as a member of the Intelligent Sound Engineering team.

• R. Selfridge, D. Moffat, E. Avital and J. D. Reiss, ‘Creating Real-Time Aeroacoustic Sound Effects Using Physically Derived Models,’ Journal of the Audio Engineering Society, 66 (7/8), pp. 594–607, July/August 2018, DOI: https://doi.org/10.17743/jaes.2018.0033

• R. Selfridge, D. Moffat and J. D. Reiss, ‘Sound Synthesis of Objects Swinging through Air Using Physical Models,’ Applied Sciences, 7 (11), Nov. 2017, doi:10.3390/app7111177

• R. Selfridge, J. D. Reiss and E. Avital, ‘Physically Derived Synthesis Model of an Edge Tone,’ Audio Engineering Society Convention 144, May 2018

• R. Selfridge, D. Moffat and J. D. Reiss, ‘Physically Derived Sound Synthesis Model of a Propeller,’ Audio Mostly, London, 2017

• R. Selfridge, D. Moffat and J. D. Reiss, ‘Physically Derived Synthesis Model of a Cavity Tone,’ Digital Audio Effects (DAFx) Conf., Edinburgh, September 5–9, 2017

• R. Selfridge, D. J. Moffat and J. D. Reiss, ‘Real-time physical model for synthesis of sword swing sounds,’ Best Paper Award, Sound and Music Computing (SMC), Helsinki, July 5–8, 2017

• R. Selfridge, D. J. Moffat, E. Avital and J. D. Reiss, ‘Real-time physical model of an Aeolian harp,’ 24th International Congress on Sound and Vibration (ICSV), London, July 23–27, 2017

• R. Selfridge, J. D. Reiss, E. Avital and X. Tang, ‘Physically derived synthesis model of aeolian tones,’ winner of the Best Student Paper Award, 141st Audio Engineering Society Convention, USA, 2016

• R. Selfridge and J. D. Reiss, ‘Interactive Mixing Using the Wii Controller,’ AES 130th Convention, May 2011