Audio Research Year in Review – Part 2, the Headlines

Last week featured the first part of our ‘Audio research year in review.’ It focused on our own achievements. This week is the second, concluding part, with a few news stories related to the topics of this blog (music production, psychoacoustics, sound synthesis and everything in between) for each month of the year.

Browsing through the list, some interesting things pop up. Several news stories relate to speech intelligibility in broadcast TV, which has been a recurring topic over the last few years. The effect of noise pollution on wildlife is another theme in this year’s audio research headlines. And quite a few of the psychological studies tell us what we already know. That musicians (who are trained in a task involving quick responses to stimuli) have faster reaction times than non-musicians is no surprise. Nor is the finding that hearing the cork pop from a wine bottle may make you think the wine tastes better, although that is a wonderful example of the placebo effect. But studies that end up confirming assumptions are still worth doing.

January

February

March

April

May


June

July

August

September

October

November

December

Audio Research Year in Review – Part 1, It’s all about us

Enjoy the holiday!

So as 2017 comes to an end, everyone is rushing to get their ‘Year in Review’ articles out, and we’re no different in that regard. Only we’re doing it in two parts: first, what we have been doing this year, and then a second blog entry reviewing the great breakthroughs and interesting research results in audio engineering, psychoacoustics, sound synthesis and related fields.

But first, let’s talk about us. 🙂

I think we’ve all done some wonderful research this year, and the Audio Engineering team here can be proud of the results and progress.

Social Media:

First off, we’ve increased our social media presence tremendously:

• This blog, intelligentsoundengineering.wordpress.com/, has 7,363 views and 1,128 followers, mostly through other social media.

• We started a Twitter account, twitter.com/IntelSoundEng, and now have 615 followers. Not huge, but doing well for the first few months of a research-focused feed.

• Our YouTube channel, www.youtube.com/user/IntelligentSoundEng, has 16,778 views and 178 subscribers.

Here’s a sample video from our YouTube channel:

So people are reading and watching, which gives us even more incentive to put out content that’s worth checking out.

Awards:

We won three best paper or presentation awards:

Adan Benito (left) and Thomas Vassallo (right) for best presentation at the Web Audio Conference

Rod Selfridge (right), Dave Moffat and I, for best paper at Sound and Music Computing


I (right) won the best Journal of the Audio Engineering Society paper award for 2016 (announced in 2017, of course).


People:

Brecht De Man got his PhD and Yonghao Wang submitted his. Dave Ronan, Alessia Milo, Josh Mycroft and Rod Selfridge have all entered the write-up stage of their PhDs.

Brecht started a post-doc position and became Vice-Chair of the AES Education Committee, and I (Josh Reiss) was promoted to Professor of Audio Engineering. Dave Ronan started a position at AI Music.

We also welcomed a large number of visitors throughout the year, notably Dr. Amandine Pras and Saurjya Sarkar, now with Qualcomm.

Grants and projects:

We started the Cross-adaptive processing for musical intervention project (supporting Brecht, and Saurjya’s visit) and the Autonomous Systems for Sound Integration and GeneratioN (ASSIGN) InnovateUK project (supporting RTSFX researchers). We completed Brecht’s Yamaha postdoc, with another expected, and completed the QMI Proof of Concept: Sound Effect Synthesis project. We’ve been working closely with industry on a variety of projects, especially with RPPtv, who are funding Emmanouil Chourdakis’s PhD and collaborating on InnovateUK projects. We have other exciting grants in progress.

Events:

We’ve been involved in a few workshops. Will Wilkinson and Dave Moffat were on the organising committee for Audio Mostly. Alessia Milo gave an invited talk at the 8th International Symposium on Temporal Design, and organised a soundwalk at Audible Old Kent Road. Brecht and I were on the organizing committee of the 3rd Workshop on Intelligent Music Production. Brecht organized Sound Talking at the Science Museum, and panel sessions on listening tests design at the 142nd and 143rd AES Conventions. Dave Moffat organized a couple of Procedural Audio Now meet-ups.

Publications:

We had a fantastic year for publications, five journal papers (and one more accepted) and twenty conference papers. I’ve listed them all below.

Journal articles

  1. D. Moffat and J. D. Reiss, ‘Perceptual Evaluation of Synthesized Sound Effects,’ accepted for ACM Transactions on Applied Perception
  2. R. Selfridge, D. Moffat and J. D. Reiss, ‘Sound Synthesis of Objects Swinging through Air Using Physical Models,’ Applied Sciences, v. 7 (11), Nov. 2017, Online version doi:10.3390/app7111177
  3. A. Zacharakis, M. Terrell, A. Simpson, K. Pastiadis and J. Reiss ‘Rearrangement of timbre space due to background noise: behavioural evidence and acoustic correlates,’ Acta Acustica united with Acustica, 103 (2), 288-298, 2017. Definitive publisher-authenticated version at http://www.ingentaconnect.com/content/dav/aaua
  4. P. Pestana and J. Reiss, ‘User Preference on Artificial Reverberation and Delay Time Parameters,’ J. Audio Eng. Soc., Vol. 65, No. 1/2, January/February 2017.
  5. B. De Man, K. McNally and J. Reiss, ‘Perceptual evaluation and analysis of reverberation in multitrack music production,’ J. Audio Eng. Soc., Vol. 65, No. 1/2, January/February 2017.
  6. E. Chourdakis and J. Reiss, ‘A machine learning approach to design and evaluation of intelligent artificial reverberation,’ J. Audio Eng. Soc., Vol. 65, No. 1/2, January/February 2017.

Book chapters

  • Accepted: A. Milo, N. Bryan-Kinns, and J. D. Reiss. Graphical Research Tools for Acoustic Design Training: Capturing Perception in Architectural Settings. In Perception-Driven Approaches to Urban Assessment and Design, F. Aletta and X. Jieling (Eds.). IGI Global.
  • J. D. Reiss, ‘An Intelligent Systems Approach to Mixing Multitrack Music‘, Perspectives On Music Production: Mixing Music, Routledge, 2017

Conference papers

  1. M. A. Martinez Ramirez and J. D. Reiss, ‘Stem Audio Mixing as a Content-Based Transformation of Audio Features,’ IEEE 19th International Workshop on Multimedia Signal Processing, Luton, UK, Oct. 16-18, 2017.
  2. M. A. Martinez Ramirez and J. D. Reiss, ‘Analysis and Prediction of the Audio Feature Space when Mixing Raw Recordings into Individual Stems,’ 143rd AES Convention, New York, Oct. 18-21, 2017.
  3. A. Milo, N. Bryan-Kinns and J. D. Reiss, ‘Influences of a Key Map on Soundwalk Exploration with a Textile Sonic Map,’ 143rd AES Convention, New York, Oct. 18-21, 2017.
  4. A. Milo and J. D. Reiss, ‘Aural Fabric: an interactive textile sonic map,’ Audio Mostly, London, 2017
  5. R. Selfridge, D. Moffat and J. D. Reiss, ‘Physically Derived Sound Synthesis Model of a Propeller,’ Audio Mostly, London, 2017
  6. N. Jillings, R. Stables and J. D. Reiss, ‘Zero-Delay Large Signal Convolution Using Multiple Processor Architectures,’ WASPAA, New York, 2017
  7. E. T. Chourdakis and J. D. Reiss, ‘Constructing narrative using a generative model and continuous action policies,’ CC-NLG, 2017
  8. M. A. Martinez Ramirez and J. D. Reiss, ‘Deep Learning and Intelligent Audio Mixing,’ 3rd Workshop on Intelligent Music Production, Salford, UK, 15 September 2017.
  9. B. De Man, J. D. Reiss and R. Stables, ‘Ten years of automatic mixing,’ 3rd Workshop on Intelligent Music Production, Salford, UK, 15 September 2017.
  10. W. Wilkinson, J. D. Reiss and D. Stowell, ‘Latent Force Models for Sound: Learning Modal Synthesis Parameters and Excitation Functions from Audio Recordings,’ 20th International Conference on Digital Audio Effects (DAFx-17), Edinburgh, UK, September 5–9, 2017
  11. S. Sarkar, J. Reiss and O. Brandtsegg, ‘Investigation of a Drum Controlled Cross-adaptive Audio Effect for Live Performance,’ 20th International Conference on Digital Audio Effects (DAFx-17), Edinburgh, UK, September 5–9, 2017
  12. B. De Man and J. D. Reiss, ‘The mix evaluation dataset,’ 20th International Conference on Digital Audio Effects (DAFx-17), Edinburgh, UK, September 5–9, 2017
  13. D. Moffat, D. Ronan and J. D. Reiss, ‘Unsupervised taxonomy of sound effects,’ 20th International Conference on Digital Audio Effects (DAFx-17), Edinburgh, UK, September 5–9, 2017
  14. R. Selfridge, D. Moffat and J. D. Reiss, ‘Physically Derived Synthesis Model of a Cavity Tone,’ 20th International Conference on Digital Audio Effects (DAFx-17), Edinburgh, UK, September 5–9, 2017
  15. N. Jillings, Y. Wang, R. Stables and J. D. Reiss, ‘Intelligent audio plugin framework for the Web Audio API,’ Web Audio Conference, London, 2017
  16. R. Selfridge, D. J. Moffat and J. D. Reiss, ‘Real-time physical model for synthesis of sword swing sounds,’ Best paper award, Sound and Music Computing (SMC), Helsinki, July 5-8, 2017.
  17. R. Selfridge, D. J. Moffat, E. Avital, and J. D. Reiss, ‘Real-time physical model of an Aeolian harp,’ 24th International Congress on Sound and Vibration (ICSV), London, July 23-27, 2017.
  18. A. Benito and J. D. Reiss, ‘Intelligent Multitrack Reverberation Based on Hinge-Loss Markov Random Fields,’ AES Semantic Audio, Erlangen Germany, June 2017
  19. D. Ronan, H. Gunes and J. D. Reiss, ‘Analysis of the Subgrouping Practices of Professional Mix Engineers,’ AES 142nd Convention, Berlin, May 20-23, 2017
  20. Y. Song, Y. Wang, P. Bull and J. D. Reiss, ‘Performance Evaluation of a New Flexible Time Division Multiplexing Protocol on Mixed Traffic Types,’ 31st IEEE International Conference on Advanced Information Networking and Applications (AINA), Taipei, Taiwan, March 27-29, 2017.