So this is a short year in review for the Intelligent Sound Engineering team. I won’t focus on Covid-19, since that will be the focus of every other year in review. Instead, I’ll keep it brief with some highlights.
We co-founded two new companies, Tonz and Nemisindo. Tonz relates to some of our deep learning research and Nemisindo to our procedural audio research, though they’ll surely evolve into something greater. Keep an eye out for announcements from both of them.
I (Josh Reiss) was elected president of the Audio Engineering Society. It’s a great honour. But I become President-Elect on Jan 1st 2021, and then President on Jan 1st 2022, so it’s a slow transition into the role. I also gave a keynote at the 8th China Conference on Sound and Music Technology.
Angeliki Mourgela’s hearing loss simulator won the Gold Prize (first place) in the Matlab VST plugin competition. This work was also used to present sounds as heard by a character with hearing loss in the BBC drama Casualty.
J. T. Colonel and Christian Steinmetz gave an invited talk at the AES Virtual Symposium: Applications of Machine Learning in Audio.
We are continuing our collaboration with Yamaha, and started new grants with support from Innovate UK (Hookslam), EPSRC, industry and others. There are more in various stages of submission, review or final acceptance, so hopefully I can make proper announcements about them soon.
Christian Steinmetz and Ilias Ibnyahya started their PhDs with the team. Emmanouil Chourdakis, Alessia Milo and Marco Martinez completed their PhDs. Lauren Edlin, Angeliki Mourgela, J. T. Colonel and Marco Comunità are all progressing well through various stages of their PhDs. Johan Pauwels and Hazar Tez are doing great work in postdoc positions, and Jack Walters and Luke Brosnahan are working wonders while interning with our spin-out companies. I’m sure I’ve left a few people out.
So, though the virus situation meant a lot of things were put on pause or fizzled out, we actually accomplished quite a lot in 2020.
And finally, here are our research publications from this past year:
- M. A. M. Ramírez, E. Benetos and J. D. Reiss, ‘Deep learning for black-box modeling of audio effects,’ Applied Sciences, 10 (2), 638, 2020.
- J. R. R. Lee and J. D. Reiss, ‘Real-Time Sound Synthesis of Audience Applause,’ Journal of the Audio Engineering Society, 68 (4), pp. 261-272, April 2020, DOI: https://doi.org/10.17743/jaes.2020.0006
- A. Nagele, V. Bauer, P. Healey, J. Reiss, H. Cooke, T. Cowlishaw, C. Baume and C. Pike, ‘Interactive Audio Augmented Reality in Participatory Performance,’ to appear in Frontiers in Virtual Reality, section Virtual Reality and Human Behaviour, 2021.
- M. Comunità, D. Stowell and J. D. Reiss, ‘Guitar Effects Recognition and Parameter Estimation with Convolutional Neural Networks,’ arXiv preprint, 2020.
- C. J. Steinmetz and J. D. Reiss, ‘Randomized Overdrive Neural Networks,’ NeurIPS 2020 Workshop on Machine Learning for Creativity and Design, Dec. 2020.
- C. J. Steinmetz and J. D. Reiss, ‘auraloss: Audio-focused loss functions in PyTorch,’ Digital Music Research Network (DMRN), London, Dec. 2020.
- A. Mourgela, T. Agus and J. D. Reiss, ‘Investigation of a real-time hearing loss simulation for use in audio production,’ 149th AES Convention, 2020.
- L. Edlin, Y. Liu, N. Bryan-Kinns and J. Reiss, ‘Exploring Augmented Reality as Craft Material,’ International Conference on Human-Computer Interaction, pp. 54-69, Springer, Cham, July 2020.
- G. Peeters and J. D. Reiss, ‘A deep learning approach to sound classification for film audio post-production,’ 148th AES Convention, June 2020.
- M. A. Martinez Ramirez, E. Benetos and J. D. Reiss, ‘Modeling plate and spring reverberation using a DSP-informed deep neural network,’ IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2020.