Exciting research position with the Intelligent Sound Engineering team

The Intelligent Sound Engineering team (the people behind this blog) are part of the prestigious Centre for Digital Music (C4DM) at Queen Mary University of London. And we are pleased to announce that we have another R&D position opening up, related to machine learning and audio production. This position is for a collaborative project with a fast-growing London-based start-up in the music production sector.
The position can be offered at either post-doctoral or graduate research assistant level, full- or part-time, and there is some flexibility in salary and other aspects. The closing date for applications is 1 May 2024.

Full details, including instructions on how to apply, can be found here
https://www.qmul.ac.uk/jobs/vacancies/items/9609.html

And some summary information is given below.
Thanks!


Contact details: Joshua Reiss, Professor of Audio Engineering, at joshua.reiss@qmul.ac.uk

About the Role: The School of Electronic Engineering and Computer Science at Queen Mary University of London is looking to recruit a part-time Research Assistant for the project Music Production Style Transfer (ProStyle). The role is to investigate machine learning approaches by which a production style may be learnt from examples and mapped onto new musical audio content. The work will build on prior work in this area, but the Research Assistant will be encouraged to explore new approaches.

About You: Applicants must hold a Master’s Degree (or equivalent) in Computer Science, Electrical/Electronic Engineering or a related field. They should have expertise in audio processing, audio programming, music production and machine learning, especially deep learning. It would also be desirable for applicants to have publications in leading journals in the field.

Intelligent sound engineering for all

Signal processing challenges to rework music for those with a hearing loss

Intelligent sound engineering opens up the possibility of personalizing audio, for example processing and mixing music so the audio quality is better for someone with a hearing loss. People with a hearing impairment can experience problems when listening to music with or without hearing aids. 430 million people worldwide have a disabling hearing loss, and this number is increasing as the population ages. Poor hearing makes music harder to appreciate; for example, picking out the lyrics or melody is more difficult. This reduces the enjoyment of music and can lead to disengagement from listening and music-making.

I work on the Cadenza project, which has just launched a series of open competitions to get experts in music signal processing and machine learning to develop algorithms to improve music for those with a hearing loss. Such open challenges are increasingly used to push forward audio processing. They’re free to enter, and we provide lots of data, software and support to help competitors take part.

The Cadenza Challenges are about improving the perceived audio quality of recorded music for people with a hearing loss.

What do we mean by audio quality? Imagine listening to the same music track in two different ways. First as a low-quality MP3 played on a cheap mobile phone, and then as a high-quality WAV over studio-grade monitors. The underlying music is the same in both cases, but the audio quality is very different.

Headphones

The first task you might tackle is our Task 1: listening over headphones. The figure below shows the software baseline that we are providing for you to build on. First the stereo music is demixed into VDBO (Vocals, Drums, Bass, Other) before being remixed into stereo for the listener to hear. At the remixing stage there is an opportunity for intelligent sound engineering to process the VDBO tracks and adjust the balance between them, to personalise and improve the music. We’re also hoping for improved demixing algorithms that allow for the hearing abilities of the listeners.

    Baseline schematic for headphone task
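To make the remixing stage concrete, here is a minimal Python sketch of how personalised per-stem gains could be applied to the VDBO stems and summed back to stereo. This is my own toy illustration, not the Cadenza baseline code; the stem file names, gain values and the soundfile dependency are all assumptions made for the example.

```python
import numpy as np
import soundfile as sf  # assumed dependency for WAV I/O

# Hypothetical per-stem gains (dB), e.g. chosen for a particular listener.
STEM_GAINS_DB = {"vocals": 3.0, "drums": -2.0, "bass": 0.0, "other": -1.0}

def db_to_lin(db):
    return 10.0 ** (db / 20.0)

def remix_vdbo(stem_paths, gains_db=STEM_GAINS_DB):
    """Load VDBO stems, apply per-stem gains and sum back to stereo.
    Assumes all stems share the same length and sample rate."""
    mix, sr = None, None
    for name, path in stem_paths.items():
        audio, sr = sf.read(path, always_2d=True)  # shape: (samples, channels)
        audio = audio * db_to_lin(gains_db.get(name, 0.0))
        mix = audio if mix is None else mix + audio
    peak = np.max(np.abs(mix))
    if peak > 1.0:                 # simple safeguard against clipping
        mix = mix / peak
    return mix, sr

# Example usage with hypothetical stem files produced by a demixing stage:
# mix, sr = remix_vdbo({"vocals": "vocals.wav", "drums": "drums.wav",
#                       "bass": "bass.wav", "other": "other.wav"})
# sf.write("personalised_mix.wav", mix, sr)
```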

    Car

The second task you could tackle is intelligent sound engineering in the presence of noise. Listening to music in the car against the rumble of car noise is really common. How would you tune a car stereo (the Enhancement box in the diagram below) so that the processed music sounds best, allowing for both the noise and the simple hearing aid the driver is wearing?

    Baseline schematic for car task
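As a rough illustration of what the enhancement stage might do, the sketch below boosts time-frequency regions of the music that would otherwise be masked by an estimated long-term car-noise spectrum. Again, this is a toy example under simplifying assumptions (a noise recording is available, and no hearing-aid model is included); it is not the challenge baseline.

```python
import numpy as np
from scipy.signal import stft, istft

def enhance_for_car(music, noise, sr, target_snr_db=10.0, max_boost_db=10.0):
    """Crude noise-aware enhancement: boost time-frequency bins where the
    long-term noise spectrum would mask the music, capped at max_boost_db."""
    f, t, M = stft(music, fs=sr, nperseg=1024)
    _, _, N = stft(noise, fs=sr, nperseg=1024)
    noise_psd = np.mean(np.abs(N) ** 2, axis=1, keepdims=True)  # long-term noise spectrum
    snr_db = 10 * np.log10((np.abs(M) ** 2 + 1e-12) / (noise_psd + 1e-12))
    boost_db = np.clip(target_snr_db - snr_db, 0.0, max_boost_db)
    _, enhanced = istft(M * 10 ** (boost_db / 20.0), fs=sr, nperseg=1024)
    enhanced = enhanced[: len(music)]
    return enhanced / max(1.0, np.max(np.abs(enhanced)))  # avoid clipping
```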

    Next steps

Both tasks are live now, with entries due in July 2023. Join us in trying to improve music for those with a hearing loss. Or let us know what you think below, e.g. what do you think of the project idea and the scenarios we’ve chosen?

    You’ll find lots more on the Cadenza project website, including a Learning Resources section that gives you background information on hearing, hearing loss, hearing aids and other knowledge you might need to enter the challenge. We also have a “find a team” page, if you want to get together with other experts to improve music for those with a hearing loss.

    AI powered audio mixing start-up RoEx closes investment round

We’ve been involved in the founding of several start-ups based in part on our research: LandR, Waveshaper AI, and Nemisindo. More recently, we co-founded RoEx, led by alumnus Dave Ronan. RoEx is a music tech start-up offering AI-powered audio mixing services. They provide smart audio production solutions for content creators and companies of all sizes.

As a musician, content creator or bedroom producer, you may already have faced the unpleasant realisation that mixing is hard, takes time and money, and distracts you from the creative process. Hence RoEx decided to help people like you by creating intelligent audio production tools to assist in the music creation process. Not only do they want to save you time, but they also want to ensure that their solutions help you get as close as possible to the sound of professional content.

You can try out some of these tools directly on their website.

    We’re thrilled to announce that RoEx has closed its investment seed round and they are ready to take the next step in growing their business and expanding the team.

RoEx’s goal has always been to bring cutting-edge audio technology to the music industry, and this funding will allow them to accelerate their progress and bring their innovative solutions to a wider audience.

RoEx is grateful to their investors, Haatch and Queen Mary University of London, for their confidence in the team and the vision, and they’re excited to continue growing and making a positive impact on the music world.

    As RoEx move forward, they will be focusing on expanding the team and building partnerships to bring their technology to the next level. They can’t wait to share updates on their progress and they appreciate your support as they take this journey.

    #musictech #startup #ai #innovation #musicindustry #mixingandmastering #teamgrowth


    Based on the original post by Dave Ronan.

    Aural diversity

We are part of a research network that has just been funded, focused on Aural Diversity.

Aural Diversity arises from the observation that everybody hears differently. The assumption that we all possess a standard, undifferentiated pair of ears underpins most listening scenarios. It’s the basis of many audio technologies, and has been a basis for much of our understanding of hearing and hearing perception. But the assumption is demonstrably incorrect, and taking it too far means that we miss out on many opportunities for advances in auditory science and audio engineering. We may well ask: whose ears are standard? Whose ear has primacy? The network investigates the consequences of hearing differences in areas such as: music and performance, soundscape and sound studies, hearing sciences and acoustics, hearing care and hearing technologies, audio engineering and design, creative computing and AI, and indeed any field that has hearing or listening as a major component.

    The term ‘auraldiversity’ echoes ‘neurodiversity’ as a way of distinguishing between ‘normal’ hearing, defined by BS ISO 226:2003 as that of a healthy 18-25 year-old, and atypical hearing (Drever 2018, ‘Primacy of the Ear’). This affects everybody to some degree. Each individual’s ears are uniquely shaped. We have all experienced temporary changes in hearing, such as when having a cold. And everybody goes through presbyacusis (age-related hearing loss) at varying rates after the teenage years.

    More specific aural divergences are the result of an array of hearing differences or impairments which affect roughly 1.1 billion people worldwide (Lancet, 2013). These include noise-related, genetic, ototoxic, traumatic, and disorder-based hearing loss, some of which may cause full or partial deafness. However, “loss” is not the only form of impairment: auditory perceptual disorders such as tinnitus, hyperacusis and misophonia involve an increased sensitivity to sound.

And it’s been an issue in our research too. We’ve spent years developing automatic mixing systems that produce audio content like a sound engineer would (De Man et al 2017, ‘Ten Years of Automatic Mixing’). But to do that, we usually assume that there is a ‘right way’ to mix, and of course, it really depends on the listener, the listener’s environment, and many other factors. Our recent research has focused on developing simulators that allow anyone to hear the world as it really sounds to someone with hearing loss.

AHRC is funding the network for two years, beginning July 2021. The network is led by Andrew Hugill of the University of Leicester. The core partners are the Universities of Leicester, Salford, Nottingham, Leeds, Goldsmiths, Queen Mary University of London (the team behind this blog), and the Attenborough Arts Centre. The wider network includes many more universities and a host of organisations concerned with hearing and listening.

    The network will stage five workshops, each with a different focus:

    • Hearing care and technologies. How the use of hearing technologies may affect music and everyday auditory experiences.
    • Scientific and clinical aspects. How an arts and humanities approach might complement, challenge, and enhance scientific investigation.
    • Acoustics of listening differently. How acoustic design of the built and digital environments can be improved.
    • Aural diversity in the soundscape. Includes a concert featuring new works by aurally diverse artists for an aurally diverse audience.
    • Music and performance. Use of new technologies in composition and performance.

    See http://auraldiversity.org for more details.

    Intelligent Music Production book is published

Book cover (ISBN 9781138055193)

Ryan Stables is an occasional collaborator and all-around brilliant person. He started the annual Workshop on Intelligent Music Production (WIMP) in 2015. It’s been going strong ever since, with the 5th WIMP co-located with DAFx this past September. The workshop series focuses on the application of intelligent systems (including expert systems, machine learning and AI) to music recording, mixing, mastering and related aspects of audio production or sound engineering.

Ryan had the idea for a book about the subject, and I (Josh Reiss) and Brecht De Man (another all-around brilliant person) were recruited as co-authors. What resulted was a massive amount of writing, editing, refining, re-editing and so on. We all contributed big chunks of content, but Brecht pulled it all together and turned it into something of really high quality, giving a comprehensive overview of the field suitable for a wide range of audiences.

And the book is finally published today, October 31st! It’s part of the AES Presents series by Focal Press, a division of Routledge. You can get it from the publisher, from Amazon, or from any of the other usual places.

    And here’s the official blurb

    Intelligent Music Production presents the state of the art in approaches, methodologies and systems from the emerging field of automation in music mixing and mastering. This book collects the relevant works in the domain of innovation in music production, and orders them in a way that outlines the way forward: first, covering our knowledge of the music production processes; then by reviewing the methodologies in classification, data collection and perceptual evaluation; and finally by presenting recent advances on introducing intelligence in audio effects, sound engineering processes and music production interfaces.

    Intelligent Music Production is a comprehensive guide, providing an introductory read for beginners, as well as a crucial reference point for experienced researchers, producers, engineers and developers.

     

    Cross-adaptive audio effects: automatic mixing, live performance and everything in between

    Our paper on Applications of cross-adaptive audio effects: automatic mixing, live performance and everything in between has just been published in Frontiers in Digital Humanities. It is a systematic review of cross-adaptive audio effects and their applications.

Cross-adaptive effects extend the boundaries of traditional audio effects by having many inputs and outputs, and by deriving their behaviour from analysis of the signals and their interaction. This allows the effects to adapt to different material, as if they were listening to the signals and aware of what they are doing. Here’s a block diagram showing how a cross-adaptive audio effect modifies a signal.

    cross-adaptive architecture
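To make the architecture concrete, here is a minimal sketch of perhaps the simplest cross-adaptive effect: side-chain ducking, where the analysed envelope of one signal controls the gain applied to another. This is my own toy example for illustration, not code from the paper; it assumes mono float signals in the range [-1, 1].

```python
import numpy as np

def envelope(x, sr, attack_ms=10.0, release_ms=200.0):
    """One-pole envelope follower: the analysis stage of the effect."""
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros(len(x))
    level = 0.0
    for n, sample in enumerate(np.abs(x)):
        coeff = att if sample > level else rel
        level = coeff * level + (1.0 - coeff) * sample
        env[n] = level
    return env

def cross_adaptive_duck(target, sidechain, sr, threshold=0.05, max_atten_db=-12.0):
    """Attenuate `target` whenever the envelope of `sidechain` exceeds a
    threshold (the classic ducking example of a cross-adaptive effect)."""
    env = envelope(sidechain, sr)
    amount = np.clip((env - threshold) / (1.0 - threshold), 0.0, 1.0)
    gain = 10 ** ((amount * max_atten_db) / 20.0)
    return target * gain

# Example: duck a music bed under a voice-over.
# ducked = cross_adaptive_duck(music, voice, sr=44100)
```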

    Last year, we published a paper reviewing the history of automatic mixing, almost exactly ten years to the day from when automatic mixing was first extended beyond simple gain changes for speech applications. These automatic mixing applications rely on cross-adaptive effects, but the effects can do so much more.

    Here’s an example automatic mixing system from our youtube channel, IntelligentSoundEng.

When a musician uses the signals of other performers directly to inform the timbral character of her own instrument, it enables a radical expansion of interaction during music making. Exploring this was the goal of the Cross-adaptive processing for musical intervention project, led by Oeyvind Brandtsegg, which we discussed in an earlier blog entry. Using cross-adaptive audio effects, musicians can exert control over the instruments and performances of other musicians, leading to both new competitive aspects and new synergies.

    Here’s a short video demonstrating this.

Despite various projects, research and applications involving cross-adaptive audio effects, there is still a fair amount of confusion surrounding the topic. There are multiple definitions, sometimes even by the same authors. So this paper gives a brief history of applications as well as a classification of effect types, and clarifies issues that have come up in earlier literature. It further defines the field, lays out a formal framework, explores technical aspects and applications, and considers the future from artistic, perceptual, scientific and engineering perspectives.

    Check it out!

    Exciting research at the upcoming Audio Engineering Society Convention


About five months ago, we previewed the last European Audio Engineering Society Convention, which we followed with a wrap-up discussion. The next AES Convention is just around the corner, October 18th to 21st in New York. As before, the Audio Engineering research team here aim to be quite active at the convention.

    These conventions are quite big, with thousands of attendees, but not so large that you get lost or overwhelmed. Away from the main exhibition hall is the Technical Program, which includes plenty of tutorials and presentations on cutting edge research.

    So here, we’ve gathered together some information about a lot of the events that we will be involved in, attending, or we just thought were worth mentioning. And I’ve gotta say, the Technical Program looks amazing.

    Wednesday

    One of the first events of the Convention is the Diversity Town Hall, which introduces the AES Diversity and Inclusion Committee. I’m a firm supporter of this, and wrote a recent blog entry about female pioneers in audio engineering. The AES aims to be fully inclusive, open and encouraging to all, but that’s not yet fully reflected in its activities and membership. So expect to see some exciting initiatives in this area coming soon.

In the 10:45 to 12:15 poster session, Steve Fenton will present Alternative Weighting Filters for Multi-Track Program Loudness Measurement. We’ve published a couple of papers (Loudness Measurement of Multitrack Audio Content Using Modifications of ITU-R BS.1770, and Partial loudness in multitrack mixing) showing that well-known loudness measures don’t correlate very well with perception when used on individual tracks within a multitrack mix, so it will be interesting to see what Steve and his co-author Hyunkook Lee have found. Perhaps all this research will lead to better loudness models and measures.
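For readers who want to experiment, measuring BS.1770-style integrated loudness of individual tracks only takes a few lines; the sketch below uses the third-party pyloudnorm and soundfile packages (my choice for the example, not tools used in the paper). The interesting question raised by the papers above is how far these per-track numbers diverge from how loud each track is perceived within the full mix.

```python
import soundfile as sf        # assumed dependencies for this sketch
import pyloudnorm as pyln

def per_track_loudness(track_paths):
    """Integrated loudness (ITU-R BS.1770, via pyloudnorm) of each track."""
    results = {}
    for name, path in track_paths.items():
        audio, rate = sf.read(path)
        meter = pyln.Meter(rate)  # K-weighting and gating per BS.1770
        results[name] = meter.integrated_loudness(audio)
    return results

# Example with hypothetical multitrack stems:
# print(per_track_loudness({"kick": "kick.wav", "vocal": "vocal.wav"}))
```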

    At 2 pm, Cleopatra Pike will present a discussion and analysis of Direct and Indirect Listening Test Methods. I’m often sceptical when someone draws strong conclusions from indirect methods like measuring EEGs and reaction times, so I’m curious what this study found and what recommendations they propose.

    The 2:15 to 3:45 poster session will feature the work with probably the coolest name, Influence of Audience Noises on the Classical Music Perception on the Example of Anti-cough Candies Unwrapping Noise. And yes, it looks like a rigorous study, using an anechoic chamber to record the sounds of sweets being unwrapped, and the signal analysis is coupled with a survey to identify the most distracting sounds. It reminds me of the DFA faders paper from the last convention.

    At 4:30, researchers from Fraunhofer and the Technical University of Ilmenau present Training on the Acoustical Identification of the Listening Position in a Virtual Environment. In a recent paper in the Journal of the AES, we found that training resulted in a huge difference between participant results in a discrimination task, yet listening tests often employ untrained listeners. This suggests that maybe we can hear a lot more than what studies suggest, we just don’t know how to listen and what to listen for.

    Thursday

    If you were to spend only one day this year immersing yourself in frontier audio engineering research, this is the day to do it.

    At 9 am, researchers from Harman will present part 1 of A Statistical Model that Predicts Listeners’ Preference Ratings of In-Ear Headphones. This was a massive study involving 30 headphone models and 71 listeners under carefully controlled conditions. Part 2, on Friday, focuses on development and validation of the model based on the listening tests. I’m looking forward to both, but puzzled as to why they weren’t put back-to-back in the schedule.

    At 10 am, researchers from the Tokyo University of the Arts will present Frequency Bands Distribution for Virtual Source Widening in Binaural Synthesis, a technique which seems closely related to work we presented previously on Cross-adaptive Dynamic Spectral Panning.

    From 10:45 to 12:15, our own Brecht De Man will be chairing and speaking in a Workshop on ‘New Developments in Listening Test Design.’ He’s quite a leader in this field, and has developed some great software that makes the set up, running and analysis of listening tests much simpler and still rigorous.

    In the 11-12:30 poster session, Nick Jillings will present Automatic Masking Reduction in Balance Mixes Using Evolutionary Computing, which deals with a challenging problem in music production, and builds on the large amount of research we’ve done on Automatic Mixing.

    At 11:45, researchers from McGill will present work on Simultaneous Audio Capture at Multiple Sample Rates and Formats. This helps address one of the challenges in perceptual evaluation of high resolution audio (and see the open access journal paper on this), ensuring that the same audio is used for different versions of the stimuli, with only variation in formats.

At 1:30, renowned audio researcher John Vanderkooy will present research on how a loudspeaker can be used as the sensor for a high-performance infrasound microphone. In the same session at 2:30, researchers from Plextek will show how consumer headphones can be augmented to automatically perform hearing assessments. Should we expect a new audiometry product from them soon?

    At 2 pm, our own Marco Martinez Ramirez will present Analysis and Prediction of the Audio Feature Space when Mixing Raw Recordings into Individual Stems, which applies machine learning to challenging music production problems. Immediately following this, Stephen Roessner discusses a Tempo Analysis of Billboard #1 Songs from 1955–2015, which builds partly on other work analysing hit songs to observe trends in music and production tastes.

    At 3:45, there is a short talk on Evolving the Audio Equalizer. Audio equalization is a topic on which we’ve done quite a lot of research (see our review article, and a blog entry on the history of EQ). I’m not sure where the novelty is in the author’s approach though, since dynamic EQ has been around for a while, and there are plenty of harmonic processing tools.

    At 4:15, there’s a presentation on Designing Sound and Creating Soundscapes for Still Images, an interesting and unusual bit of sound design.

    Friday

    Judging from the abstract, the short Tutorial on the Audibility of Loudspeaker Distortion at Bass Frequencies at 5:30 looks like it will be an excellent and easy to understand review, covering practice and theory, perception and metrics. In 15 minutes, I suppose it can only give a taster of what’s in the paper.

There’s a great session on perception from 1:30 to 4. At 2, perceptual evaluation expert Nick Zacharov gives a Comparison of Hedonic and Quality Rating Scales for Perceptual Evaluation. I think people often have a favourite evaluation method without knowing if it’s the best one for the test. We briefly looked at pairwise versus multi-stimulus tests in previous work, but it looks like Nick’s work is far more focused on comparing methodologies.

Immediately after that, researchers from the University of Surrey present Perceptual Evaluation of Source Separation for Remixing Music. Remixing audio via source separation is a hot topic, with lots of applications whenever the original unmixed sources are unavailable. This work will get to the heart of which approaches sound best.

    The last talk in the session, at 3:30 is on The Bandwidth of Human Perception and its Implications for Pro Audio. Judging from the abstract, this is a big picture, almost philosophical discussion about what and how we hear, but with some definitive conclusions and proposals that could be useful for psychoacoustics researchers.

    Saturday

    Grateful Dead fans will want to check out Bridging Fan Communities and Facilitating Access to Music Archives through Semantic Audio Applications in the 9 to 10:30 poster session, which is all about an application providing wonderful new experiences for interacting with the huge archives of live Grateful Dead performances.

    At 11 o’clock, Alessia Milo, a researcher in our team with a background in architecture, will discuss Soundwalk Exploration with a Textile Sonic Map. We discussed her work in a recent blog entry on Aural Fabric.

    In the 2 to 3:30 poster session, I really hope there will be a live demonstration accompanying the paper on Acoustic Levitation.

    At 3 o’clock, Gopal Mathur will present an Active Acoustic Meta Material Loudspeaker System. Metamaterials are receiving a lot of deserved attention, and such advances in materials are expected to lead to innovative and superior headphones and loudspeakers in the near future.

     

The full program can be explored on the Convention Calendar or the Convention website. Come say hi to us if you’re there! Josh Reiss (author of this blog entry), Brecht De Man, Marco Martinez and Alessia Milo from the Audio Engineering research team within the Centre for Digital Music will all be there.
     

     

    Ten Years of Automatic Mixing


    Automatic microphone mixers have been around since 1975. These are devices that lower the levels of microphones that are not in use, thus reducing background noise and preventing acoustic feedback. They’re great for things like conference settings, where there may be many microphones but only a few speakers should be heard at any time.
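For readers unfamiliar with how these devices work, here is a minimal sketch of a gain-sharing automixer in the spirit of Dugan's design, where each microphone's gain in each frame is its share of the total input energy. It is a simplified toy version for illustration, not Dugan's exact algorithm.

```python
import numpy as np

def gain_sharing_automix(mics, frame=1024, floor_db=-40.0):
    """Toy gain-sharing automatic microphone mixer.

    `mics` has shape (n_mics, n_samples). In each frame, a mic's gain is its
    short-term energy divided by the total energy across mics, so active
    talkers are passed through while idle mics are attenuated."""
    n_mics, n_samples = mics.shape
    out = np.zeros(n_samples)
    floor = 10 ** (floor_db / 10.0)  # minimum gain per mic
    for start in range(0, n_samples - frame + 1, frame):
        block = mics[:, start:start + frame]
        energy = np.mean(block ** 2, axis=1) + 1e-12
        gains = np.maximum(energy / energy.sum(), floor)
        out[start:start + frame] = gains @ block
    return out

# Example: mixed = gain_sharing_automix(np.stack([mic1, mic2, mic3]))
```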

Over the next three decades, various designs appeared, but the field didn’t really grow much beyond Dan Dugan’s original concept.

Enter Enrique Perez Gonzalez, a PhD student and experienced sound engineer. On September 11th, 2007, exactly ten years before this blog post was published, he presented a paper, “Automatic Mixing: Live Downmixing Stereo Panner.” With this work, he showed that it may be possible to automate not just fader levels in speech applications, but other tasks and other applications as well. Over the course of his PhD research, he proposed methods for autonomous operation of many aspects of the music mixing process: stereo positioning, equalisation, time alignment, polarity correction, feedback prevention, selective masking minimisation, and so on. He also laid out a framework for further automatic mixing systems.

Enrique established a new field of research, and it’s been growing ever since. People have used machine learning techniques for automatic mixing, applied auditory neuroscience to the problem, and explored where the boundaries lie between the creative and technical aspects of mixing. Commercial products have arisen based on the concept. And yet all this is still only scratching the surface.

    I had the privilege to supervise Enrique and have many anecdotes from that time. I remember Enrique and I going to a talk that Dan Dugan gave at an AES convention panel session and one of us asked Dan about automating other aspects of the mix besides mic levels. He had a puzzled look and basically said that he’d never considered it. It was also interesting to see the hostile reactions from some (but certainly not all) practitioners, which brings up lots of interesting questions about disruptive innovations and the threat of automation.


    Next week, Salford University will host the 3rd Workshop on Intelligent Music Production, which also builds on this early research. There, Brecht De Man will present the paper ‘Ten Years of Automatic Mixing’, describing the evolution of the field, the approaches taken, the gaps in our knowledge and what appears to be the most exciting new research directions. Enrique, who is now CTO of Solid State Logic, will also be a panellist at the Workshop.

    Here’s a video of one of the early Automatic Mixing demonstrators.

    And here’s a list of all the early Automatic Mixing papers.

• E. Perez Gonzalez and J. D. Reiss, “A real-time semi-autonomous audio panning system for music mixing”, EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 436895, p. 1-10, 2010.
• E. Perez Gonzalez and J. D. Reiss, “Automatic Mixing”, in DAFX: Digital Audio Effects, Second Edition (ed. U. Zölzer), John Wiley & Sons, Chichester, UK, p. 523-550, 2011. doi: 10.1002/9781119991298.ch13
• E. Perez Gonzalez and J. D. Reiss, “Automatic equalization of multi-channel audio using cross-adaptive methods”, 127th AES Convention, New York, October 2009.
• E. Perez Gonzalez and J. D. Reiss, “Automatic gain and fader control for live mixing”, IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, New York, October 18-21, 2009.
• E. Perez Gonzalez and J. D. Reiss, “Determination and correction of individual channel time offsets for signals involved in an audio mixture”, 125th AES Convention, San Francisco, USA, October 2008.
• E. Perez Gonzalez and J. D. Reiss, “An automatic maximum gain normalization technique with applications to audio mixing”, 124th AES Convention, Amsterdam, Netherlands, May 2008.
• E. Perez Gonzalez and J. D. Reiss, “Improved control for selective minimization of masking using interchannel dependency effects”, 11th International Conference on Digital Audio Effects (DAFx), September 2008.
• E. Perez Gonzalez and J. D. Reiss, “Automatic Mixing: Live Downmixing Stereo Panner”, 10th International Conference on Digital Audio Effects (DAFx-07), Bordeaux, France, September 10-15, 2007.

    SMC Conference, Espoo, Finland

I have recently returned from the 14th Sound and Music Computing Conference, hosted by Aalto University in Espoo, Finland. All four days were full of variety and quality, ensuring there was something of interest for all. There were also live performances during an afternoon session and two evenings, as well as the banquet on Hanasaari, a small island in Espoo. This provided a friendly setting for all the delegates to interact, making or renewing connections.
The paper presentations were the main content of the programme, with presenters from all over the globe. Papers that stood out for me included Johnty Wang et al.’s Explorations with Digital Control of MIDI-enabled Pipe Organs, where I heard the movement of an unborn child control the audio output of a pipe organ. I also became aware of the Championship of Standstill, where participants are challenged to stand still while a number of musical pieces are played, in The Musical Influence on People’s Micromotion when Standing Still in Groups.
Does Singing a Low-Pitch Tone Make You Look Angrier? Well, it looked like it in this interesting presentation! A social media music app was presented in Exploring Social Mobile Music with Tiny Touch-Screen Performances, where we can interact with others by layering five-second clips of sound to create a collaborative mix.
Analysis and synthesis were well represented, with presentations on Virtual Analog Simulation and Extensions of Plate Reverberation by Silvan Willemson et al. and The Effectiveness of Two Audiovisual Mappings to Control a Concatenative Synthesiser by Augoustinos Tiros et al. The paper on a Virtual Analog Model of the Lockhart Wavefolder explained a method of modelling a West Coast style analogue synthesiser.
Automatic mixing was also represented: Flavio Everard’s paper Towards an Automated Multitrack Mixing Tool using Answer Set Programming cited at least eight papers from the Intelligent Audio Engineering group at C4DM.
In total, 65 papers were presented orally or in the poster sessions, covering Music performance analysis and rendering, Music information retrieval, Spatial sound and sonification, Computer music languages and software, Analysis, synthesis and modification of sound, Social interaction, Computer-based music analysis and, lastly, Automatic systems and interactive performance. All papers are available at http://smc2017.aalto.fi/proceedings.html.
Having been treated to a wide variety of live music and technical papers, and having met colleagues from around the world, it was an added honour to be presented with one of the Best Paper Awards for our paper on a Real-Time Physical Model for Synthesis of Sword Sounds. The conference closed with a short presentation from the next host… SMC2018 in Cyprus!