Death metal, green dance music, and Olympic sound design

This is an unusual blog entry, in that the three topics in the title, death metal, green dance music, and Olympic sound design, have very little in common. But they are all activities that the team here have been involved with recently, outside of our normal research, which are worth mentioning.

Angeliki Mourgela, whose work has been described in previous blog entries on hearing loss simulation and online listening tests, is also a sound engineer and death metal musician. Her band, Unmother, has just released an album, and you can check it out on Bandcamp.

Eva Fineberg is a Masters student doing a project on improved thunder simulation, building on some work we did which showed that none of the existing thunder synthesis models were very good. Eva is one of the leaders of Berlin’s Clean Scene, a collective of industry professionals focused on making dance music greener. They have been investigating the environmental impacts of touring. They recently released a report, Last Night a DJ Took a Flight: Exploring the carbon footprint of touring DJs and looking towards alternative futures within the dance music industry, which found a rather stunning environmental impact from touring DJs. But it also went further and gave many recommendations to reduce this impact. It’s good to see initiatives like this in the music industry that bring research and action together.

Finally, I was asked to write an article for The Conversation about sound design in the Olympics. A quick search showed that there were quite a few pieces written about this, but they all focused on the artificial crowd noise. That’s of course the big story, but I managed to find a different angle. Looking back, the modern Olympics that perhaps most revolutionised sound design was… the 1964 Olympics in Tokyo. The technical aspects of the sound engineering involved were published in the July 1965 issue of the Journal of the Audio Engineering Society. So there’s a good story there on innovation in sound design, from Tokyo to Tokyo. The article, 3,600 microphones and counting: how the sound of the Olympics is created, was published just as I started writing this blog entry.

The crack of thunder

Lightning, copyright James Insogna, 2011

The gaming, film and virtual reality industries rely heavily on recorded samples for sound design. This has inherent limitations since the sound is fixed from the point of recording, leading to drawbacks such as repetition, large storage requirements, and a lack of perceptually relevant controls.

Procedural audio offers a more flexible approach by allowing the parameters of a sound to be altered and sound to be generated from first principles. A natural choice for procedural audio is environmental sounds. They occur widely in creative industries content, and are notoriously difficult to capture. On-location sounds often cannot be used due to recording issues and unwanted background sounds, yet recordings from sample libraries are rarely a good match to an environmental scene.

Thunder, in particular, is highly relevant. It provides a sense of the environment and location, but can also be used to supplement the narrative and heighten the tension or foreboding in a scene. There exist a fair number of methods to simulate thunder. But no one’s ever actually sat down and evaluated these models. That’s what we did in,

J. D. Reiss, H. E. Tez, R. Selfridge, ‘A comparative perceptual evaluation of thunder synthesis techniques’, to appear at the 150th Audio Engineering Society Convention, 2021.

We looked at all the thunder synthesis models we could find, and in the end were able to compare five models and a recording of real thunder in a listening test. And here’s the key result:

This was surprising. None of the methods sound very close to the real thing. It didn’t matter whether it was a physical model, which type of physical modelling approach was used, or whether an entirely signal-based approach was applied. And yet there are plenty of other sounds where procedural audio can sound indistinguishable from the real thing; see our previous blog posts on applause and footstep sounds.

We also played around with the code. It’s clear that the methods could be improved. For instance, they all produced mono sounds (so we used a mono recording for comparison too), the physical models could be much, much faster, and most of the models used a very simplistic approximation of lightning. So there’s a really nice PhD topic for someone to work on one day.
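For readers curious what a signal-based approach actually involves, here is a minimal Python sketch. It is not one of the models we evaluated, just the textbook idea of shaping filtered noise: a short broadband crack followed by a low-frequency rumble with a slowly decaying, slightly bumpy envelope. All the parameter values are illustrative guesses.

```python
# Minimal signal-based thunder sketch: shaped, filtered noise.
# Illustrative only -- not one of the models evaluated in the paper.
import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import wavfile

fs = 44100                                  # sample rate in Hz
dur = 6.0                                   # duration in seconds
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

# Initial crack: a burst of broadband noise with a fast decay.
crack = rng.standard_normal(len(t)) * np.exp(-t / 0.05)

# Rumble: low-pass filtered noise with a slower, uneven decay, loosely
# mimicking sound arriving from different parts of the lightning channel.
b, a = butter(4, 150 / (fs / 2), btype="low")
rumble = lfilter(b, a, rng.standard_normal(len(t)))
envelope = np.exp(-t / 1.5) * (1 + 0.5 * np.sin(2 * np.pi * 0.8 * t + 1.0))
rumble *= np.clip(envelope, 0, None)

thunder = crack + 4.0 * rumble
thunder /= np.max(np.abs(thunder))          # normalise to avoid clipping
wavfile.write("thunder_sketch.wav", fs, (thunder * 32767).astype(np.int16))
```

Even this toy example makes the shortcomings above concrete: it is mono, the ‘lightning’ is just a noise burst, and none of the parameters are tied to the physics of the strike.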

Besides showing the limitations of the current models, it also showed the need for better evaluation in sound synthesis research, and the benefits of making code and data available for others. On that note, we put the paper and all the relevant code, data, sound samples etc online at

And you can try out a couple of models at

Aural diversity

We are part of a research network that has just been funded, focused on Aural Diversity.

Aural Diversity arises from the observation that everybody hears differently. The assumption that we all possess a standard, undifferentiated pair of ears underpins most listening scenarios. It’s the basis of many audio technologies, and has been a basis for much of our understanding of hearing and hearing perception. But the assumption is demonstrably incorrect, and taking it too far means that we miss out on many opportunities for advances in auditory science and audio engineering. We may well ask: whose ears are standard? Whose ear has primacy? The network investigates the consequences of hearing differences in areas such as: music and performance, soundscape and sound studies, hearing sciences and acoustics, hearing care and hearing technologies, audio engineering and design, creative computing and AI, and indeed any field that has hearing or listening as a major component.

The term ‘auraldiversity’ echoes ‘neurodiversity’ as a way of distinguishing between ‘normal’ hearing, defined by BS ISO 226:2003 as that of a healthy 18-25 year-old, and atypical hearing (Drever 2018, ‘Primacy of the Ear’). This affects everybody to some degree. Each individual’s ears are uniquely shaped. We have all experienced temporary changes in hearing, such as when having a cold. And everybody goes through presbyacusis (age-related hearing loss) at varying rates after the teenage years.

More specific aural divergences are the result of an array of hearing differences or impairments which affect roughly 1.1 billion people worldwide (Lancet, 2013). These include noise-related, genetic, ototoxic, traumatic, and disorder-based hearing loss, some of which may cause full or partial deafness. However, “loss” is not the only form of impairment: auditory perceptual disorders such as tinnitus, hyperacusis and misophonia involve an increased sensitivity to sound.

And it’s been an issue in our research too. We’ve spent years developing automatic mixing systems that produce audio content like a sound engineer would (De Man et al 2017, ‘Ten Years of Automatic Mixing’). But to do that, we usually assume that there is a ‘right way’ to mix, when of course it really depends on the listener, the listener’s environment, and many other factors. Our recent research has focused on developing simulators that allow anyone to hear the world as it really sounds to someone with hearing loss.

AHRC is funding the network for two years, beginning July 2021. The network is led by Andrew Hugill of the University of Leicester. The core partners are the Universities of Leicester, Salford, Nottingham, Leeds, Goldsmiths, Queen Mary University of London (the team behind this blog), and the Attenborough Arts Centre. The wider network includes many more universities and a host of organisations concerned with hearing and listening.

The network will stage five workshops, each with a different focus:

  • Hearing care and technologies. How the use of hearing technologies may affect music and everyday auditory experiences.
  • Scientific and clinical aspects. How an arts and humanities approach might complement, challenge, and enhance scientific investigation.
  • Acoustics of listening differently. How acoustic design of the built and digital environments can be improved.
  • Aural diversity in the soundscape. Includes a concert featuring new works by aurally diverse artists for an aurally diverse audience.
  • Music and performance. Use of new technologies in composition and performance.

See http://auraldiversity.org for more details.

Invitation to online listening study

We would like to invite you to participate in our study titled “Investigation of frequency-specific loudness discomfort levels, in listeners with migraine-related hypersensitivity to sound”.

Please note: You do not have to be a migraine sufferer to participate in this study, although if you are, please make sure to specify that when asked during the study (for more on the eligibility criteria, check the list below).

Our study consists of a brief questionnaire, followed by a simple listening test. This study is targeted towards listeners with and without migraine headaches, and in order to participate you have to meet all of the following criteria:

1) Be 18 years old or older

2) Not have any history or diagnosis of hearing loss

3) Have access to a quiet room to take the test

4) Have access to a computer with an internet connection

5) Have access to a pair of functioning headphones

The total duration of the study is approximately 25 minutes. Your participation is voluntary but valuable: it could provide useful insight into the auditory manifestations of migraine, and help identify possible differences between participants with and without migraines, thereby facilitating further research on sound adaptations for migraine sufferers.

To access the study please follow the link below:

https://golisten.ucd.ie/task/hearing-test/5ff5b8ee0a6da21ed8df2fc7

If you have any questions or would like to share your feedback on this study, please email a.mourgela@qmul.ac.uk or joshua.reiss@qmul.ac.uk.

What we did in 2020 (despite the virus!)

So this is a short year in review for the Intelligent Sound Engineering team. I won’t focus on Covid-19, because that will be the focus of every other year in review. Instead, I’ll just keep it brief with some highlights.

We co-founded two new companies, Tonz and Nemisindo. Tonz relates to some of our deep learning research and Nemisindo to our procedural audio research, though they’ll surely evolve into something greater. Keep an eye out for announcements from both of them.

I (Josh Reiss) was elected president of the Audio Engineering Society. It’s a great honour. But I become President-Elect on Jan 1st 2021, and then President on Jan 1st 2022, so it’s a slow transition into the role. I also gave a keynote at the 8th China Conference on Sound and Music Technology.

Angeliki Mourgela’s hearing loss simulator won the Gold Prize (first place) in the Matlab VST plugin competition. This work was also used to present sounds as heard by a character with hearing loss in the BBC drama Casualty.

J. T. Colonel and Christian Steinmetz gave an invited talk at the AES Virtual Symposium: Applications of Machine Learning in Audio.

We are continuing collaboration with Yamaha, and started new grants with support from InnovateUK (Hookslam), industry, EPSRC and others. There are more in various stages of submission, review or finalising acceptance, so hopefully I can make proper announcements about them soon.

Christian Steinmetz and Ilias Ibnyahya started their PhDs with the team. Emmanouil Chourdakis, Alessia Milo and Marco Martinez completed their PhDs. Lauren Edlin, Angeliki Mourgela, J. T. Colonel and Marco Comunita are all progressing well through various stages of the PhD. Johan Pauwels and Hazar Tez are doing great work in postdoc positions, and Jack Walters and Luke Brosnahan are working wonders while interning with our spin-out companies. I’m sure I’ve left a few people out.

So, though the virus situation meant a lot of things were put on pause or fizzled out, we actually accomplished quite a lot in 2020.

And finally, here are our research publications from this past year:

Hearing loss simulator – MATLAB Plugin Competition Gold Award Winner

Congratulations to Angeliki Mourgela, winner of the AES Show 2020 Student Competition for developing a MATLAB plugin. The aim of the competition was for students to ‘Design a new kind of audio production VST plugin using MATLAB Software and your wits’.

Hearing loss is a global phenomenon, with almost 500 million people worldwide suffering from it, a number only increasing with an ageing population. Hearing loss can severely impact the daily life of an individual, causing both functional and emotional difficulties and affecting their overall quality of life. Research efforts towards a better understanding of its physical and perceptual characteristics, as well as the development of new and efficient methods for audio enhancement, are an essential endeavour for the future.

Angeliki developed a real-time hearing loss simulator, for use in audio production. It builds on a previous simulation, but is now real-time, low latency, and available as a stereo VST audio effect plug-in with more control and more accurate modelling of hearing loss. It offers the option of customizing threshold attenuations on each ear corresponding to the audiogram information. It also incorporates additional effects such as spectral smearing, rapid loudness growth and loss of temporal resolution on audio.

In effect, it allows anyone to hear the world as it really sounds to someone with hearing loss. And it means that audio producers can easily preview what their content would sound like to most hearing impaired listeners.
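To give a flavour of the simplest ingredient of such a simulator, audiogram-based attenuation, here is a rough Python sketch. To be clear, this is not Angeliki’s plugin: the band edges, filter choices and audiogram values below are all hypothetical, and a real simulator adds spectral smearing, rapid loudness growth and temporal effects on top.

```python
# Toy audiogram-based attenuation, the simplest part of a hearing loss
# simulator. Illustrative only; not the plugin described in this post.
import numpy as np
from scipy.signal import butter, sosfilt

# Hypothetical audiogram: hearing threshold shift in dB at standard
# audiometric frequencies, one list per ear.
freqs_hz = [250, 500, 1000, 2000, 4000, 8000]
threshold_shift_db = {
    "left":  [10, 15, 20, 35, 50, 65],
    "right": [10, 10, 25, 40, 55, 70],
}

def band_filters(fs, freqs):
    """One band-pass filter per audiogram frequency (roughly an octave wide)."""
    bank = []
    for f in freqs:
        lo, hi = f / np.sqrt(2), min(f * np.sqrt(2), 0.99 * fs / 2)
        bank.append(butter(2, [lo, hi], btype="band", fs=fs, output="sos"))
    return bank

def simulate(stereo, fs):
    """stereo: float array of shape (num_samples, 2). Returns attenuated copy."""
    bank = band_filters(fs, freqs_hz)
    out = np.zeros_like(stereo)
    for ch, ear in enumerate(["left", "right"]):
        for sos, shift_db in zip(bank, threshold_shift_db[ear]):
            band = sosfilt(sos, stereo[:, ch])
            out[:, ch] += band * 10 ** (-shift_db / 20)   # attenuate this band
    return out
```

Splitting the signal into bands and attenuating each one per ear is crude, but it shows why the audiogram is the natural control surface for this kind of effect.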

Here’s a video with Angeliki demonstrating the system.

Her plugin was also used in an episode of the BBC drama Casualty to let the audience hear the world as heard by a character with severe hearing loss.

You can download her code from the MathWorks file exchange and additional code on SoundSoftware.

Full technical details of the work and the research around it (in collaboration with myself and Dr. Trevor Agus of Queen’s University Belfast) were published in:

A. Mourgela, T. Agus and J. D. Reiss, “Investigation of a Real-Time Hearing Loss Simulation for Audio Production,” 149th AES Convention, 2020

Many thanks to the team from MathWorks for sponsoring and hosting the competition, and congratulations to all the other winners of the AES Student Competitions.

Research highlights for the AES Show Fall 2020


#AESShow

We try to write a preview of the technical track for almost all recent Audio Engineering Society (AES) Conventions; see our entries on the 142nd, 143rd, 144th, 145th, 147th and 148th Conventions. Like the 148th Convention, the 149th Convention, or just the AES Show, is an online event. But one challenge with these sorts of online events is that anything not on the main live stream can get overlooked. The technical papers are available on demand, so many people can access them, perhaps more than would have attended the presentations in person. But they don’t have the feel of an event.

Hopefully, I can give you some idea of the exciting nature of these technical papers. And they really do present a lot of cutting-edge and adventurous research. They unveil, for the first time, some breakthrough technologies, and both surprising and significant advances in our understanding of audio engineering and related fields.

This time, since all the research papers are available throughout the Convention and beyond, starting Oct. 28th, I haven’t organised them by date. Instead, I’ve divided them into the regular technical papers (usually longer, with more reviewing), and the Engineering Briefs, or E-briefs. The E-briefs are typically smaller, often presenting work-in-progress, late-breaking or just unusual research. Though this time, the unusual appears in the regular papers too.

But first… listening tests. Sooner or later, almost every researcher has to do them. And a good software package will help the whole process run more smoothly. There are two packages presented at the convention. Dale Johnson will present the next generation of a high quality one in the E-Brief ‘HULTI-GEN Version 2 – A Max-based universal listening test framework’. And Stefan Gorzynski will present the paper ‘A flexible software tool for perceptual evaluation of audio material and VR environments’.

E-Briefs

A must for audio educators is Brett Leonard’s ‘A Survey of Current Music Technology & Recording Arts Curriculum Order’. These sorts of programs are often ‘made up’ based on the experience and knowledge of the people involved. Brett surveyed 35 institutions and analysed the results to establish a holistic framework for the structure of these degree programmes.

The idea of time-stretching as a live phenomenon might seem counterintuitive. For instance, how can you speed up a signal if it’s only just arriving? And if you slow it down, then surely after a while it lags far enough behind that it is no longer ‘live’. A novel solution is explored in Colin Malloy’s ‘An approach for implementing time-stretching as a live realtime audio effect’.
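As an aside, the core of most time-stretching, live or not, is granular overlap-add: read short windowed grains at one hop size and write them out at another. The sketch below is the plain offline version of that idea, not Malloy’s method; it mainly shows why live operation is awkward, since the read position drifts away from ‘now’.

```python
# Plain granular / overlap-add time-stretch (offline). Not the approach in
# the e-brief; just the textbook idea that a live effect has to adapt.
import numpy as np

def granular_stretch(x, stretch, grain=2048, hop_out=512):
    """Stretch x by `stretch` (>1 = slower/longer, <1 = faster/shorter)."""
    hop_in = hop_out / stretch              # read hop differs from write hop
    window = np.hanning(grain)
    n_grains = int((len(x) - grain) / hop_in)
    out = np.zeros(n_grains * hop_out + grain)
    for k in range(n_grains):
        read = int(k * hop_in)              # where we read from the input...
        write = k * hop_out                 # ...drifts away from where we write
        out[write:write + grain] += x[read:read + grain] * window
    return out / (grain / (2 * hop_out))    # rough overlap-add gain compensation

# Live, stretch > 1 means the read position falls ever further behind the
# newest input (growing latency), and stretch < 1 would need samples that
# have not arrived yet -- the tension the e-brief tackles.
```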

The wonderfully titled ‘A Terribly Good Speaker: Understanding the Yamaha NS-10 Phenomenon’ is all about how and why a low quality loudspeaker with bad reviews came to be seen as a ‘must have’ amongst many audio professionals. It looks like this presentation will have lessons for those who study marketing, business trends and consumer psychology in almost any sector, not just audio.

Just how good are musicians at tuning their instruments? Not very good, it seems. Or at least, that was what was found out in ‘Evaluating the accuracy of musicians and sound engineers in performing a common drum tuning exercise’, presented by Rob Toulson. But before you start with your favourite drummer joke, note that the participants were all experienced musicians or sound engineers, but not exclusively drummers. So it might be that everyone is bad at drum tuning, whether they’re used to carrying drumsticks around or not.

Matt Cheshire’s ‘Snare Drum Data Set (SDDS): More snare drums than you can shake a stick at’ is worth mentioning just for the title.

Champ Darabundit will present some interesting work on ‘Generalized Digital Second Order Systems Beyond Nyquist Frequency’, showing that the basic filter designs can be tuned to do a lot more than just what is covered in the textbooks. It’s interesting and good work, but I have a minor issue with it. The paper only has one reference that isn’t a general overview or tutorial. But there’s lots of good, relevant related work out there.

I’m involved in only one paper at this convention (shame!). But it’s well worth checking out. Angeliki Mourgela is presenting ‘Investigation of a Real-Time Hearing Loss Simulation for Audio Production’. It builds on an initial hearing loss simulator she presented at the 147th Convention, but now it’s higher quality, real-time and available as a VST plugin. This means that audio producers can easily preview what their content would sound like to most listeners with hearing loss.

Masking is an important and very interesting auditory phenomenon. With the emergence of immersive sound, there’s more and more research about spatial masking. But questions come up, like whether artificially panning a source to a location will result in masking the same way as actually placing a source at that location. ‘Spatial auditory masking caused by phantom sound images’, presented by Masayuki Nishiguchi, will show how spatial auditory masking works when sources are placed at virtual locations using rendering techniques.

Technical papers

There’s a double bill presented by Hsein Pew, ‘Sonification of Spectroscopic analysis of food data using FM Synthesis’ and ‘A Sonification Algorithm for Subjective Classification of Food Samples’. They are unusual papers, but not really about classifying food samples. The focus is on the sonification method, which turns data into sounds, allowing listeners to easily discriminate between data collections.

Wow. When I first saw Moorer in the list of presenting authors, I thought ‘what a great coincidence that a presenter has the same last name as one of the great legends in audio engineering’. But no, it really is James Moorer. We talked about him before in our blog about the greatest JAES papers of all time. And the abstract for his talk, ‘Audio in the New Millenium – Redux’, is better than anything I could have written about the paper. He wrote, “In the author’s Heyser lecture in 2000, technological advances from the point of view of digital audio from 1980 to 2000 were summarized then projected 20 years into the future. This paper assesses those projections and comes to the somewhat startling conclusion that entertainment (digital video, digital audio, computer games) has become the driver of technology, displacing military and business forces.”

The paper with the most authors is presented by Lutz Ehrig. And he’ll be presenting a breakthrough, the first ‘Balanced Electrostatic All-Silicon MEMS Speakers’. If you don’t know what that is, you’re not alone. But it’s worth finding out, because this may be tomorrow’s widespread commercial technology.

If you recorded today, but only using equipment from 1955, would it really sound like a 65 year old recording? Clive Mead will present ‘Composing, Recording and Producing with Historical Equipment and Instrument Models’ which explores just that sort of question. He and his co-authors created and used models to simulate the recording technology and instruments, available at different points in recorded music history.

‘Degradation effects of water immersion on earbud audio quality’, presented by Scott Beveridge, sounds at first like it might be very minor work, dipping earbuds in water and then listening to distorted sound from them. But I know a bit about the co-authors. They’re the type to apply rigorous, hardcore science to a problem. And it has practical applications too, since it’s leading towards methods by which consumers can measure the quality of their earbuds.

Forensic audio is a fascinating field, though most people have only come across it in film and TV shows like CSI, where detectives identify incriminating evidence buried in a very noisy recording. In ‘Forensic Interpretation and Processing of User Generated Audio Recordings’, audio forensics expert Rob Maher looks at how user generated recordings, like when many smartphones record a shooting, can be combined, synchronised and used as evidence.

Mark Waldrep presents a somewhat controversial paper, ‘Native High-Resolution versus Red Book Standard Audio: A Perceptual Discrimination Survey’. He sent out high resolution and CD quality recordings to over 450 participants, asking them to judge which was high resolution. The overall results were little better than guessing. But there were a very large number of questionable decisions in his methodology and interpretation of results. I expect this paper will get the online audiophile community talking for quite some time.

Neural networks are all the rage in machine learning. And for good reason: for many tasks, they outperform all the other methods. There are three neural network papers presented, Tejas Manjunath’s ‘Automatic Classification of Live and Studio Audio Recordings using Convolutional Neural Networks’, J. T. Colonel’s (who is now part of the team behind this blog) ‘Low Latency Timbre Interpolation and Warping using Autoencoding Neural Networks’ and William Mitchell’s ‘Exploring Quality and Generalizability in Parameterized Neural Audio Effects’.

The research team here did some unpublished work that seemed to suggest that the mix has only a minimal effect on how untrained listeners respond to music, but that the effect becomes more significant for trained sound engineers and musicians. Kelsey Taylor’s research suggests there’s a lot more to uncover here. In ‘I’m All Ears: What Do Untrained Listeners Perceive in a Raw Mix versus a Refined Mix?’, she performed structured interviews and found that untrained listeners perceive a lot of mixing aspects, but use different terms to describe them.

No loudness measure is perfect. Even the well-established ones, like ITU-R BS.1770 for broadcast content, or the Glasberg and Moore auditory model of loudness perception, have known issues, as noted in http://www.aes.org/e-lib/browse.cfm?elib=16608 and http://www.aes.org/e-lib/browse.cfm?elib=17098. In ‘Using ITU-R BS.1770 to Measure the Loudness of Music versus Dialog-based Content’, Scott Norcross shows another issue with the ITU loudness measure, the difficulty in matching levels for speech and music.
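If you want to reproduce the basic measurement yourself, integrated loudness as defined in BS.1770 is easy to compute with off-the-shelf tools. The snippet below is a sketch assuming the third-party pyloudnorm and soundfile packages, with placeholder file names; matching the two LUFS values is exactly the step the paper argues can be misleading for speech versus music.

```python
# Measure BS.1770 integrated loudness of a dialogue clip and a music clip.
# Assumes the third-party packages `pyloudnorm` and `soundfile`;
# the file names are placeholders.
import soundfile as sf
import pyloudnorm as pyln

def integrated_lufs(path):
    audio, rate = sf.read(path)            # float samples, shape (n, channels)
    meter = pyln.Meter(rate)               # K-weighting and gating per BS.1770
    return meter.integrated_loudness(audio)

dialogue = integrated_lufs("dialogue_clip.wav")
music = integrated_lufs("music_clip.wav")
print(f"dialogue: {dialogue:.1f} LUFS, music: {music:.1f} LUFS")

# Setting both clips to, say, -23 LUFS matches them according to the meter,
# but not necessarily according to listeners -- the mismatch Norcross examines.
```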

Staying on the subject of loudness, Kazuma Watanabe presents ‘The Reality of The Loudness War in Japan -A Case Study on Japanese Popular Music’. This loudness war, the overuse of dynamic range compression, has resulted in lower quality recordings (and annoyingly loud TV and radio ads). It also led to measures like the ITU standard. Watanabe and co-authors measured the increased loudness over the last 30 years, and make a strong

Remember to check the AES E-Library which has all the full papers for all the presentations mentioned here, including listing all authors not just presenters. And feel free to get in touch with us. Josh Reiss (author of this blog entry), J. T. Colonel, and Angeliki Mourgela from the Audio Engineering research team within the Centre for Digital Music, will all be (virtually) there.

Congratulations, Dr. Marco Martinez Ramirez

Today one of our PhD student researchers, Marco Martinez Ramirez, successfully defended his PhD. The form of these exams, or vivas, varies from country to country, and even institution to institution, which we discussed previously. Here, it’s pretty gruelling: behind closed doors, with two expert examiners probing every aspect of the PhD. And it was made even more challenging since it was all online due to the virus situation.
Marco’s PhD was on ‘Deep learning for audio effects modeling.’

Audio effects modeling is the process of emulating an audio effect unit and seeks to recreate the sound, behaviour and main perceptual features of an analog reference device. Both digital and analog audio effect units transform characteristics of the sound source. These transformations can be linear or nonlinear, time-invariant or time-varying, and with short-term or long-term memory. Most typical audio effect transformations are based on dynamics, such as compression; tone, such as distortion; frequency, such as equalization; and time, such as artificial reverberation or modulation-based audio effects.

Simulation of audio processors is normally done by designing mathematical models of these systems. It’s very difficult because it seeks to accurately model all components within the effect unit, which usually contains mechanical elements together with nonlinear and time-varying analog electronics. Most audio effects models are either simplified or optimized for a specific circuit or effect and cannot be efficiently translated to other effects.

Marco’s thesis explored deep learning architectures for audio processing in the context of audio effects modelling. He investigated deep neural networks as black-box modelling strategies to solve this task, i.e. by using only input-output measurements. He proposed several different DSP-informed deep learning models to emulate each type of audio effect transformation.
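To make ‘black-box modelling from input-output measurements’ concrete, here is a deliberately tiny PyTorch sketch of my own, not one of Marco’s architectures. A small convolutional network is fitted to paired input/output audio, with a static tanh distortion standing in for the reference device; a real virtual-analogue model needs far more capacity, conditioning and a perceptually motivated loss.

```python
# Tiny black-box effect model: learn an input->output mapping from paired
# audio alone. Illustrative only -- not an architecture from the thesis.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "reference device": a static tanh distortion. In practice the
# pairs would be recordings made through the real analogue unit.
x = torch.randn(16, 1, 8192)               # batch of short clean excerpts
y = torch.tanh(3.0 * x)                    # measured device output

model = nn.Sequential(                     # small dilated conv net
    nn.Conv1d(1, 16, kernel_size=9, padding=4),
    nn.Tanh(),
    nn.Conv1d(16, 16, kernel_size=9, padding=8, dilation=2),
    nn.Tanh(),
    nn.Conv1d(16, 1, kernel_size=1),
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                     # real work favours spectral/perceptual losses

for step in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()

print(f"final training loss: {loss.item():.5f}")
```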

Marco then explored the performance of these models when modeling various analog audio effects, and analyzed how the given tasks are accomplished and what the models are actually learning. He investigated virtual analog models of nonlinear effects, such as a tube preamplifier; nonlinear effects with memory, such as a transistor-based limiter; and electromechanical nonlinear time-varying effects, such as a Leslie speaker cabinet and plate and spring reverberators.

Marco showed that the proposed deep learning architectures represent an improvement on the state of the art in black-box modeling of audio effects, and he set out directions for future work.

His research also led to a new start-up company, TONZ, which builds on his machine learning techniques to provide new audio processing interactions for the next generation of musicians and music makers.

Here’s a list of some of Marco’s papers that relate to his PhD research while a member of the Intelligent Sound Engineering team.

Congratulations again, Marco!

We want you to take part in a listening test

Hi everyone,

As you may know, we do lots of listening tests (audio evaluation experiments) to evaluate our research, or to gather data about perception and preference that we use in the systems we create.

We would like to invite you to participate in a study on the perception of sound effects.

Participation just requires a computer with audio output, a reliable internet connection and an internet browser (it definitely works on Chrome, and should work on most other browsers). The experiment shouldn’t take more than half an hour. You don’t have to be an expert to take part. And I promise it will be interesting!

You can access the experiment by going to http://webprojects.eecs.qmul.ac.uk/ee08m037/WebAudioEvaluationTool/test.html?url=FXiveTest.xml

And if you’re curious, the listening test was created using the Web Audio Evaluation Tool, https://github.com/BrechtDeMan/WebAudioEvaluationTool [1,2].

Please direct any questions to joshua.reiss@qmul.ac.uk, r.selfridge@qmul.ac.uk or h.e.tez@qmul.ac.uk.

Once the experiment is finished, I’ll share the results in another blog entry.

Thanks!

[1] N. Jillings, D. Moffat, B. De Man, J. D. Reiss, R. Stables, ‘Web Audio Evaluation Tool: A framework for subjective assessment of audio,’ 2nd Web Audio Conf., Atlanta, 2016

[2] N. Jillings, B. De Man, D. Moffat and J. D. Reiss, ‘Web Audio Evaluation Tool: A Browser-Based Listening Test Environment,’ Sound and Music Computing (SMC), July 26 – Aug. 1, 2015

AES President-Elect-Elect!

“Anyone who is capable of getting themselves made President should on no account be allowed to do the job.” ― Douglas Adams, The Hitchhiker’s Guide to the Galaxy

So I’m sure you’ve all been waiting for this presidential election to end. No, not that one. I’m referring to the Audio Engineering Society (AES)’s recent elections for their Board of Directors and Board of Governors.

And I’m very pleased and honoured that I (that’s Josh Reiss, the main author of this blog) have been elected as President.

It’s actually three positions: in 2021 I’ll be President-Elect, in 2022 President, and in 2023 Past-President. Another way to look at it is that the AES always has three presidents, one planning for the future, one getting things done and one imparting their experience and knowledge.

For those who don’t know, the AES is the largest professional society in audio engineering and related fields. It has over 12,000 members, and is the only professional society devoted exclusively to audio technology. It was founded in 1948 and has grown to become an international organisation that unites audio engineers, creative artists, scientists and students worldwide by promoting advances in audio and disseminating new knowledge and research.

My thanks to everyone who voted, to the AES in general, and to everyone who has said congratulations. And a big congratulations to all the other elected officers.