Weird and wonderful research to be unveiled at the 144th Audio Engineering Society Convention


Last year, we previewed the 142nd and 143rd AES Conventions, which we followed with wrap-up discussions here and here. The next AES Convention is just around the corner, May 23 to 26 in Milan. As before, the Audio Engineering research team here aim to be quite active at the convention.

These conventions have thousands of attendees, but aren’t so large that you get lost or overwhelmed. Away from the main exhibition hall is the Technical Program, which includes plenty of tutorials and presentations on cutting edge research.

So we’ve gathered together some information about a lot of the events that caught our eye as being unusual, exceptionally high quality, ones we’re involved in or attending, or just worth mentioning. And this Convention will certainly live up to the hype.

Wednesday May 23rd

From 11:15 to 12:45 that day, there’s an interesting poster by a team of researchers from the University of Limerick titled Can Visual Priming Affect the Perceived Sound Quality of a Voice Signal in Voice over Internet Protocol (VoIP) Applications? This builds on work we discussed in a previous blog entry, where they did a perceptual study of DFA Faders, looking at how people’s perception of mixing changes when the sound engineer only pretends to make an adjustment.

As expected given the location, there’s lots of great work being presented by Italian researchers. The first one that caught my eye is the 2:30-4 poster on Active noise control for snoring reduction. Whether you’re a loud snorer, sleep next to someone who is, or are just interested in unusual applications of audio signal processing, this one is worth checking out.

Do you get annoyed sometimes when driving and the road surface changes to something really noisy? Surely someone should do a study and find out which roads are noisiest, so that we can put a bit of effort into better road design and better in-vehicle equalisation and noise reduction? Well, now it’s finally happened, with this paper in the same session on Deep Neural Networks for Road Surface Roughness Classification from Acoustic Signals.

Thursday, May 24

If you were to spend only one day this year immersing yourself in frontier audio engineering research, this is the day to do it.

How do people mix music differently in different countries? And do people perceive the mixes differently based on their different cultural backgrounds? These are the sorts of questions our research team here have been asking. Find out more in this 9:30 presentation by Amandine Pras. She led this Case Study of Cultural Influences on Mixing Practices, in collaboration with Brecht De Man (now with Birmingham City University) and myself.

Rod Selfridge has been blazing new trails in sound synthesis and procedural audio. He won the Best Student Paper Award at the 141st AES Convention and the Best Paper Award at the Sound and Music Computing conference. He’ll give another great presentation at noon on Physically Derived Synthesis Model of an Edge Tone, which was also discussed in a recent blog entry.

I love the title of this next paper, Miniaturized Noise Generation System—A Simulation of a Simulation, which will be presented at 2:30pm by researchers from Intel Technology in Gdansk, Poland. This idea of a meta-simulation is not as uncommon as you might think; we do digital emulation of old analogue synthesizers, and I’ve seen papers on numerical models of Foley rain sound generators.

A highlight for our team here is our 2:45 pm presentation, FXive: A Web Platform for Procedural Sound Synthesis. We’ll be unveiling a disruptive innovation for sound design, FXive.com, aimed at replacing reliance on sound effect libraries. Please come check it out, and get in touch with the presenters or any members of the team to find out more.

Immediately following this is a presentation which asks Can Algorithms Replace a Sound Engineer? This is a question the research team here have also investigated a lot; you could even say it was the main focus of our research for several years. The team behind this presentation are asking it in relation to Auto-EQ. I’m sure it will be interesting, and I hope they reference a few of our papers on the subject.

From 9 to 10:30, I will chair a Workshop on The State of the Art in Sound Synthesis and Procedural Audio, featuring the world’s experts on the subject. Outside of speech and possibly music, sound synthesis is still in its infancy, but it’s destined to change the world of sound design in the near future. Find out why.

From 12:15 to 13:45 there’s a workshop related to machine learning in audio (a subject that is sometimes called machine listening): Deep Learning for Audio Applications. Deep learning can be quite a technical subject, and there’s a lot of hype around it, so a workshop on the subject is a good way to get a feel for it. See below for another machine listening related workshop on Friday.

The Heyser Lecture, named after Richard Heyser (we discussed some of his work in a previous entry), is a prestigious evening talk given by one of the eminent individuals in the field. This one will be presented by Malcolm Hawksford, a man who has had a major impact on research in audio engineering for decades.

Friday

The 9:30 — 11 poster session features some unusual but very interesting research. A talented team of researchers from Ancona will present A Preliminary Study of Sounds Emitted by Honey Bees in a Beehive.

Intense solar activity in March 2012 caused some amazing geomagnetic storms here on Earth. Researchers in Finland recorded the sounds that accompanied them, and some very unusual results will be presented in the same session with the poster titled Analysis of Reports and Crackling Sounds with Associated Magnetic Field Disturbances Recorded during a Geomagnetic Storm on March 7, 2012 in Southern Finland.

You’ve been living in a cave if you haven’t noticed the recent proliferation of smart devices, especially in the audio field. But what makes them tick? Is there a common framework, and how are they tested? Find out more at 10:45 when researchers from Audio Precision will present The Anatomy, Physiology, and Diagnostics of Smart Audio Devices.

From 3 to 4:30, there’s a Workshop on Artificial Intelligence in Your Audio. It follows on from a highly successful workshop we did on the subject at the last Convention.

Saturday

A couple of weeks ago, John Flynn wrote an excellent blog entry describing his paper on Improving the Frequency Response Magnitude and Phase of Analogue-Matched Digital Filters. His work is a true advance on the state of the art, providing digital filters with closer matches to their analogue counterparts than any previous approaches. The full details will be unveiled in his presentation at 10:30.

If you haven’t seen Mariana Lopez presenting research, you’re missing out. Her enthusiasm for the subject is infectious, and she has a wonderful ability to convey the technical details, their deeper meanings and their importance to any audience. See her one hour tutorial on Hearing the Past: Using Acoustic Measurement Techniques and Computer Models to Study Heritage Sites, starting at 9:15.

The full program can be explored on the Convention Calendar or the Convention website. Come say hi to us if you’re there! Josh Reiss (author of this blog entry), John Flynn, Parham Bahadoran and Adan Benito from the Audio Engineering research team within the Centre for Digital Music, along with two recent graduates Brecht De Man and Rod Selfridge, will all be there.


Analogue matched digital EQ: How far can you go linearly?

(Background post for the paper “Improving the frequency response magnitude and phase of
analogue-matched digital filters” by John Flynn & Josh Reiss for AES Milan 2018)

Professional audio mastering is a field that is still dominated by analogue hardware. Many mastering engineers still favour their go-to outboard compressors and equalisers over digital emulations. As a practising mastering engineer myself, I empathise. Quality analogue gear has a proven track record in terms of sonic quality spanning about a century. Even though digital approximations of analogue tools have gotten better, particularly over the past decade, I too have tended to reach for analogue hardware. However, through my research at Queen Mary with Professor Josh Reiss, that is changing.

When modelling an analogue EQ, a lot of focus has been on modelling distortions and other non-linearities; we chose to look at the linear component instead. Have we reached a ceiling in terms of modelling an analogue prototype filter in the digital domain? Can we do better? We found that yes, there was room for improvement, and yes, we can do better.

The milestone of research in this area is Orfanidis’ 1997 paper “Digital parametric equalizer design with prescribed Nyquist-frequency gain”, the first major improvement over the bilinear transform, which has a renowned ‘cramped’ sound in the high frequencies. Basically, the bilinear transform is what all first-generation digital equalisers are based on. Its response towards 20 kHz drops sharply, giving a ‘closed/cramped’ sound. Orfanidis, and later improvements by Massberg [9] and Gunness/Chauhan [10], give a much better approximation of an analogue prototype.
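You can see the cramping for yourself with a few lines of Python. This is just an illustrative sketch, using a first-order analogue lowpass as a stand-in prototype (the cutoff and sample rate are arbitrary values, not taken from any of the papers):

```python
import cmath
import math

fs = 44100.0              # sample rate
fc = 5000.0               # toy analogue cutoff frequency (illustrative value)
wc = 2 * math.pi * fc

def analogue_mag(f):
    # |H(jw)| for the analogue prototype H(s) = wc / (s + wc)
    w = 2 * math.pi * f
    return wc / math.sqrt(w * w + wc * wc)

def bilinear_mag(f):
    # Substitute s -> 2*fs*(1 - z^-1)/(1 + z^-1) and evaluate on the unit circle
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    s = 2 * fs * (1 - 1 / z) / (1 + 1 / z)
    return abs(wc / (s + wc))

# Near DC the two agree; approaching Nyquist the digital response is 'cramped'
for f in (1000.0, 10000.0, 20000.0):
    print(f"{f:7.0f} Hz  analogue {analogue_mag(f):.3f}  bilinear {bilinear_mag(f):.3f}")
```

At 1 kHz the two curves agree to three decimal places, but by 20 kHz the bilinear version has collapsed to a fraction of the analogue magnitude; that collapse is the ‘cramped’ top end.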


However, while [9] and [10] improve the magnitude response, they don’t capture analogue phase. Bizarrely, the bilinear transform performs reasonably well on phase. So we knew a good match was possible.

So the problem is: how do you get a more accurate magnitude match to analogue than [9] and [10], while also getting a good match to phase? Many attempts, including complicated iterative Parks–McClellan filter design approaches, fell flat. It turned out that Occam was right: in this case, the simple answer was the better answer.

By combining a matched-z transform, frequency-sampling filter design and a little bit of clever coefficient manipulation, we achieved excellent results: a match to the analogue prototype to an arbitrary degree of accuracy. At low filter lengths you get a filter that performs as well as [9] and [10] in magnitude, but also matches analogue phase. With longer filter lengths, the match to analogue is extremely precise in both magnitude and phase (lower error is more accurate).
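The paper’s exact algorithm isn’t reproduced here, but the frequency-sampling ingredient can be sketched in a few lines of NumPy: sample the analogue prototype’s complex response (magnitude and phase together) on a uniform grid, enforce conjugate symmetry, and inverse-FFT to get FIR taps. The first-order lowpass prototype and all parameter values below are illustrative assumptions, not the paper’s:

```python
import numpy as np

fs = 48000.0
N = 512                      # filter length; longer N -> closer analogue match
wc = 2 * np.pi * 1000.0      # toy analogue prototype: H(s) = wc / (s + wc)

k = np.arange(N)
f_hz = np.minimum(k, N - k) * fs / N        # frequency grid folded about Nyquist
H = wc / (1j * 2 * np.pi * f_hz + wc)       # sample magnitude AND phase
H[k > N // 2] = np.conj(H[k > N // 2])      # conjugate symmetry -> real taps
H[N // 2] = H[N // 2].real                  # Nyquist bin must be real
h = np.real(np.fft.ifft(H))                 # FIR taps via frequency sampling

# By construction the FIR response hits the sampled target exactly
err = np.max(np.abs(np.fft.fft(h) - H))
```

At the N sample points the match is exact by construction; in between the grid points the match degrades gracefully, which is why longer filter lengths give a closer analogue match.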


 

Since submitting the paper, I have released the algorithm in a plugin with my mastering company, and have been getting informal feedback from other mastering engineers about how it sounds in use.


Overall the feedback has been overwhelmingly positive, with one engineer claiming it to be “the best sounding plugin EQ on the market to date”. It’s nice to know that those long hours staring at decibel error charts have not been in vain.

Are you heading to AES Milan next month? Come up and say hello!

 

Audio Engineering Society E-library

I try to avoid too much promotion in this blog, but in this case I think it’s justified. I’m involved in advancing a resource from a non-profit professional organisation, the Audio Engineering Society. They do lots and lots of different things, promoting the science, education and practice of all things audio engineering related. Among other things, they’ve been publishing research in the area for almost 70 years, and institutions can get full access to all the content in a searchable library. In recent posts, I’ve written about some of the greatest papers ever published there, Part 1 and Part 2, and about one of my own contributions.

In an ideal world, this would all be Open Access. But publishing still costs money, so the AES support both gold Open Access (free to all, but authors pay Article Processing Charges) and the traditional model, where it’s free to publish but individuals or institutions subscribe, or articles can be purchased individually. AES members get free access. I could write many blog articles just about Open Access (should I?); it’s never as straightforward as it seems. At its best, it is freely disseminating information for the benefit of all, but at its worst it’s like Pay to Play, a highly criticised practice in the music industry, and gives publishers an incentive to lower acceptance standards. But for now I’ll just point out that the AES does its absolute best to keep the costs down, regardless of publishing model, and the costs are generally much less than those of similar publishers.

Anyway, the AES realised that one of the most cost effective ways to get our content out to large communities is through institutional licenses or subscriptions, and we’re missing an opportunity here since we haven’t really promoted this option. Everybody benefits from it: wider dissemination of knowledge and research, more awareness of the AES, better access, and so on. With this in mind, the AES issued the following press release, which I have copied verbatim. You can also find it as a tweet, blog entry or Facebook post.


AES E-Library Subscriptions Benefit Institutions and Organizations

— The Audio Engineering Society E-Library is the world’s largest collection of audio industry resources, and subscriptions provide access to extensive content for research, product development and education — 

New York, NY, March 22, 2018 — Does your research staff, faculty or students deserve access to the world’s most comprehensive collection of audio information? The continuously growing Audio Engineering Society (AES) E-Library contains over 16,000 fully searchable PDF files documenting the progression of audio research from 1953 to the present day. It includes every AES paper published from every AES convention and conference, as well as those published in the Journal of the Audio Engineering Society. From the phonograph to MP3s, from early concepts of digital audio through its fulfillment as the mainstay of audio production, distribution and reproduction, to leading-edge realization of spatial audio and audio for augmented and virtual reality, the E-Library provides a gateway to both the historical and the forward-looking foundational knowledge that sustains an entire industry.  

The AES E-Library has become the go-to online resource for anyone looking to gain instant access to the vast amount of information gathered by the Audio Engineering Society through research, presentations, interviews, conventions, section meetings and more. “Our academic and research staff, and PhD and undergraduate Tonmeister students, use the AES E-Library a lot,” says Dr. Tim Brookes, Senior Lecturer in Audio & Director of Research, Institute of Sound Recording (IoSR), University of Surrey. “It’s an invaluable resource for our teaching, for independent student study and, of course, for our research.” 

“Researchers, academics and students benefit from E-Library access daily,” says Joshua Reiss, Chair of the AES Publications Policy Committee, “while many relevant institutions – academic, governmental or corporate – do not have an institutional license of the AES E-library, which means their staff or students are missing out on all the wonderful content there. We encourage all involved in audio research and investigation to inquire if their libraries have an E-Library subscription and, if not, suggest the library subscribe.” 

E-Library subscriptions can be obtained directly from the AES or through journal bundling services. A subscription allows a library’s users to download any document in the E-Library at no additional cost. 

“As an international audio company with over 25,000 employees world-wide, the AES E-library has been an incredibly valuable resource used by Harman audio researchers, engineers, patent lawyers and others,” says Dr. Sean Olive, Acoustic Research Fellow, Harman International. “It has paid for itself many times over.” 

The fee for an institutional online E-Library subscription is $1800 per year, which is significantly less than equivalent publisher licenses. 

To search the E-library, go to http://www.aes.org/e-lib/

To arrange for an institutional license, contact Lori Jackson directly at lori.jackson@aes.org, or go to http://www.aes.org/e-lib/subscribe/.

 

About the Audio Engineering Society
The Audio Engineering Society, celebrating its 70th anniversary in 2018, now counts over 12,000 members throughout the U.S., Latin America, Europe, Japan and the Far East. The organization serves as the pivotal force in the exchange and dissemination of technical information for the industry. Currently, its members are affiliated with 90 AES professional sections and more than 120 AES student sections around the world. Section activities include guest speakers, technical tours, demonstrations and social functions. Through local AES section events, members experience valuable opportunities for professional networking and personal growth. For additional information visit http://www.aes.org.

Join the conversation and keep up with the latest AES News and Events:
Twitter: #AESorg (AES Official) 
Facebook: http://facebook.com/AES.org

Greatest JAES papers of all time, Part 2

Last week I revealed Part 1 of the greatest ever papers published in the Journal of the Audio Engineering Society (JAES). JAES is the premier peer-reviewed journal devoted exclusively to audio technology, and the flagship publication of the AES. This week, it’s time for Part 2. There’s little rhyme or reason to how I divided up and selected the papers, other than that I started by looking at the most highly cited ones according to Google Scholar. But all the papers listed here have had major impact on the science, education and practice of audio engineering and related fields.

All of the papers below are available from the Audio Engineering Society (AES) E-library, the world’s most comprehensive collection of audio information. It contains over 16,000 fully searchable PDF files documenting the progression of audio research from 1953 to the present day. It includes every AES paper published at a convention, conference or in the Journal. Members of the AES get free access to the E-library. To arrange for an institutional license, giving full access to all members of an institution, contact Lori Jackson directly, or go to http://www.aes.org/e-lib/subscribe/.

And without further ado, here are the rest of the Selected greatest JAES papers

More than any other work, this 1992 paper by Stanley Lipshitz and co-authors has resulted in the correct application of dither in music production. It’s one possible reason that digital recording quality improved after the early years of the Compact Disc (though the loudness wars reversed that trend). As renowned mastering engineer Bob Katz put it, “if you want to get your digital audio done just right, then you should learn about dither,” and there is no better resource than this paper.
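The core result is easy to demonstrate yourself. In the toy sketch below (my own illustration, not from the paper), a signal sitting at 0.3 of a quantisation step rounds to zero every time without dither, but with TPDF dither the average of the quantised output recovers the true value:

```python
import random

def quantize(x, dither=False):
    # Round to the nearest integer LSB; TPDF dither is the sum of two
    # independent uniforms, each spanning +/- 0.5 LSB
    d = (random.random() - 0.5) + (random.random() - 0.5) if dither else 0.0
    return round(x + d)

random.seed(1)
level = 0.3          # a signal level of 0.3 LSB -- below the quantiser step
n = 200_000
plain = sum(quantize(level) for _ in range(n)) / n                   # stays 0
dithered = sum(quantize(level, dither=True) for _ in range(n)) / n   # ~0.3
```

Without dither the low-level signal is simply erased; with dither it survives as a statistical average, traded for a benign noise floor. That is the linearising effect the paper formalises.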

According to Wikipedia, this 1993 paper coined the term Auralization as an analogy to visualization for rendering audible (imaginary) sound fields. This general research area of understanding and rendering the sound field of acoustic spaces has resulted in several other highly influential papers. Berkhout’s 1988 A holographic approach to acoustic control (575 citations) described the appealingly named acoustic holography method for rendering sound fields. In 1999, the groundbreaking Creating interactive virtual acoustic environments (427 citations) took this further, laying out the theory and challenges of virtual acoustics rendering, and paving the way for highly realistic audio in today’s Virtual Reality systems.

The Schroeder reverberator was first described here, way back in 1962. It has become the basis for almost all algorithmic reverberation approaches. Manfred Schroeder was another great innovator in the audio engineering field. A long transcript of a fascinating interview is available here, and a short video interview below.
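For the curious, the design is small enough to sketch from scratch: parallel feedback comb filters with mutually prime delay lengths, followed by a couple of series allpass stages. This is a generic textbook rendering with commonly quoted delay and gain values, not the exact figures from the 1962 paper:

```python
def feedback_comb(x, d, g):
    # y[n] = x[n] + g * y[n - d]
    y = list(x)
    for n in range(d, len(y)):
        y[n] += g * y[n - d]
    return y

def allpass(x, d, g):
    # y[n] = -g*x[n] + x[n-d] + g*y[n-d]  (unity-magnitude response)
    y = [0.0] * len(x)
    for n in range(len(x)):
        xd = x[n - d] if n >= d else 0.0
        yd = y[n - d] if n >= d else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def schroeder_reverb(x):
    # Four parallel combs (mutually prime delays), two series allpasses
    combs = [(1116, 0.84), (1188, 0.83), (1277, 0.82), (1356, 0.81)]
    wet = [sum(t) for t in zip(*(feedback_comb(x, d, g) for d, g in combs))]
    for d, g in ((225, 0.7), (556, 0.7)):
        wet = allpass(wet, d, g)
    return wet

impulse = [1.0] + [0.0] * 7999
tail = schroeder_reverb(impulse)   # a dense, decaying reverb tail
```

Feeding in an impulse produces the dense, exponentially decaying echo pattern that made the design so influential.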

These two famous papers are the basis for the Thiele Small parameters. Thiele rigorously analysed and simulated the performance of loudspeakers in the first paper from 1971, and Small greatly extended the work in the second paper in 1972. Both had initially published the work in small Australian journals, but it didn’t get widely recognised until the JAES publications. These equations form the basis for much of loudspeaker design.
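As a taste of what those equations give you, here is a short sketch computing a driver’s free-air resonance and Q factors from the fundamental electromechanical parameters. The formulas are the standard Thiele–Small relations; the numerical driver values are made up for illustration:

```python
import math

def thiele_small(Mms, Cms, Rms, Re, Bl):
    """Mms: moving mass (kg), Cms: suspension compliance (m/N),
    Rms: mechanical resistance (N*s/m), Re: voice-coil DC resistance (ohm),
    Bl: force factor (T*m)."""
    f_s = 1.0 / (2 * math.pi * math.sqrt(Mms * Cms))   # free-air resonance (Hz)
    Qms = 2 * math.pi * f_s * Mms / Rms                # mechanical Q
    Qes = 2 * math.pi * f_s * Mms * Re / (Bl * Bl)     # electrical Q
    Qts = Qes * Qms / (Qes + Qms)                      # total Q
    return f_s, Qms, Qes, Qts

# Hypothetical but plausible woofer parameters
f_s, Qms, Qes, Qts = thiele_small(Mms=0.020, Cms=1.0e-3, Rms=1.5, Re=6.0, Bl=7.5)
```

A designer reads Qts and f_s straight off a datasheet built from exactly these relations, then picks a box alignment to match.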

Check out:

or the dozens of youtube videos about choosing and designing loudspeakers which make use of these parameters.

This is the first English language publication to describe the Haas effect, named after the author. Also called the precedence effect, the paper investigated the phenomenon that, when the same signal is sent to two loudspeakers, a small delay between the speakers results in the sound appearing to come just from one speaker. It’s now widely used in sound reinforcement systems, and in audio production to give a sense of depth or more realistic panning (the Haas trick).
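The ‘Haas trick’ itself takes only a few lines. Here is a minimal sketch (my own illustration, with a hypothetical 15 ms delay) of delaying one channel so the image localises toward the other:

```python
def haas_pan(mono, sample_rate, delay_ms=15.0):
    """Delay the right channel by a few milliseconds; within the
    precedence-effect window (~1-30 ms) the image localises to the
    earlier, undelayed left channel."""
    d = int(sample_rate * delay_ms / 1000.0)
    left = list(mono)
    right = [0.0] * d + list(mono[:len(mono) - d])
    return left, right

signal = [0.5] * 1000
left, right = haas_pan(signal, sample_rate=44100)
```

Unlike level panning, both channels stay at full amplitude, which is why the technique adds a sense of width and depth rather than just shifting the image.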


This is the first ever research paper published in JAES. Published in August 1949, it set a high standard for rigour, while at the same time emphasising that many publications will have strong relevance not just to researchers, but to audiophiles and practitioners as well.

It described a new instrument for frequency response measurement and display. People just love impulse response and transfer function measurements, and some of the most highly cited JAES papers are on this topic: 1983’s An efficient algorithm for measuring the impulse response using pseudorandom noise (308 citations), Transfer-function measurement with maximum-length sequences (771 citations), the 2001 paper from a Brazil-based team, Transfer-function measurement with sweeps (722 citations), and finally Comparison of different impulse response measurement techniques (276 citations) in 2002. With a direct link between theory and new applications, these papers on maximum length sequence approaches and sine sweeps were major advances over the alternatives, and changed the way such measurements are made.
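The common thread in those papers is deconvolution: play a known broadband excitation through the system, record the output, and divide in the frequency domain to recover the impulse response. Here is a toy NumPy stand-in using white noise and circular convolution rather than the papers’ maximum length sequences or sine sweeps:

```python
import numpy as np

def measure_ir(excitation, response):
    # Frequency-domain deconvolution: h = IFFT( FFT(y) / FFT(x) )
    X = np.fft.rfft(excitation)
    Y = np.fft.rfft(response)
    return np.fft.irfft(Y / X, n=len(excitation))

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                      # broadband excitation
h_true = np.array([0.5, 0.0, 0.25, 0.0, 0.125])    # the 'unknown' system
H = np.fft.rfft(np.pad(h_true, (0, len(x) - len(h_true))))
y = np.fft.irfft(np.fft.rfft(x) * H, n=len(x))     # simulated measurement
h_est = measure_ir(x, y)                           # recovers h_true
```

Real measurements add noise and distortion, which is exactly where the MLS and swept-sine methods in those papers earn their citations.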

And the winner is… Ville Pulkki’s Vector Base Amplitude Panning (VBAP) paper! This is the highest cited paper in JAES. Besides deriving the stereo panning law from basic geometry, it unveiled VBAP, an intuitive and now widely used spatial audio technique. Ten years later, Pulkki unveiled another groundbreaking spatial audio format, DirAC, in Spatial sound reproduction with directional audio coding (386 citations).
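In two dimensions the VBAP idea fits in a dozen lines: treat the two loudspeaker directions as basis vectors, solve for the gains that reconstruct the desired source direction, then power-normalise. A sketch for a standard ±30° stereo layout (my own illustration of the technique, not code from the paper):

```python
import math

def vbap_stereo(pan_deg, spread_deg=30.0):
    def unit(deg):
        r = math.radians(deg)
        return (math.cos(r), math.sin(r))
    l1, l2 = unit(-spread_deg), unit(spread_deg)   # loudspeaker directions
    p = unit(pan_deg)                              # desired source direction
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det       # solve p = g1*l1 + g2*l2
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)                      # constant-power normalisation
    return g1 / norm, g2 / norm

centre = vbap_stereo(0.0)       # centre pan: equal gains
hard_left = vbap_stereo(-30.0)  # pan onto the left speaker: all gain to it
```

A centre pan returns equal gains of 1/√2, and panning exactly onto a loudspeaker sends all the gain to that speaker, just as the stereo panning law predicts.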

Greatest JAES papers of all time, Part 1

The Journal of the Audio Engineering Society (JAES) is the premier publication of the AES, and is the only peer-reviewed journal devoted exclusively to audio technology. The first issue was published in 1949, though volume 1 began in 1953. For the past 70 years, it has had major impact on the science, education and practice of audio engineering and related fields.

I was curious which were the most important JAES papers, so had a look at Google Scholar to see which had the most citations. This has lots of issues, not just because Scholar won’t find everything, but because a lot of the impact is in products and practice, which doesn’t usually lead to citing the papers. Nevertheless, I looked over the list, picked out some of the most interesting ones and following no rules except my own biases, selected the Greatest Papers of All Time Published in the Journal of the Audio Engineering Society. Not surprisingly, the list is much longer than a single blog entry, so this is just part 1.

All of the papers below are available from the Audio Engineering Society (AES) E-library, the world’s most comprehensive collection of audio information. It contains over 16,000 fully searchable PDF files documenting the progression of audio research from 1953 to the present day. It includes every AES paper published at a convention, conference or in the Journal. Members of the AES get free access to the E-library. To arrange for an institutional license, giving full access to all members of an institution, contact Lori Jackson directly, or go to http://www.aes.org/e-lib/subscribe/.

Selected greatest JAES papers

This is the main ambisonics paper by one* of its originators, Michael Gerzon, and perhaps the first place the theory was described in detail (and very clearly too). Ambisonics is incredibly flexible and elegant. It is now used in a lot of games and has become the preferred audio format for virtual reality. Two other JAES ambisonics papers are also very highly cited. In 1985, Michael Gerzon’s Ambisonics in multichannel broadcasting and video (368 citations) described the high potential of ambisonics for broadcast audio, which is now reaching its potential due to the emergence of object-based audio production. And 2005 saw Mark Poletti’s Three-dimensional surround sound systems based on spherical harmonics (348 citations), which rigorously laid out and generalised all the mathematical theory of ambisonics.

*See the comment on this entry. Jerry Bauck correctly pointed out that Duane H. Cooper was the first to describe ambisonics in some form, and Michael Gerzon credited him for it too. Cooper’s work was also published in JAES. Thanks Jerry.

James Moorer

This isn’t one of the highest cited papers, but it still had huge impact, and James Moorer is a legend in the field of audio engineering (see his prescient ‘Audio in the New Millennium’). The paper popularised the phase vocoder, now one of the most important building blocks of modern audio effects. Auto-tune, anyone?

Richard Heyser’s Time Delay Spectrometry technique allowed one to make high quality anechoic spectral measurements in the presence of a reverberant environment. It was ahead of its time: despite its efficiency and elegance, computing power was not yet up to employing the method. But by the 1980s, it was possible to perform complex on-site measurements of systems and spaces using Time Delay Spectrometry. The AES now organises the Heyser Memorial Lectures in his honor.


Together, these two papers by Henrik Møller et al completely transformed the world of binaural audio. The first paper described the first major dataset of detailed HRTFs, and how they vary from subject to subject. The second studied localization performance when subjects listened to a soundfield, the same soundfield using binaural recordings with their own HRTFs, and those soundfields using the HRTFs of others. It nailed down the state of the art and the challenges for future research.

The early MPEG audio standards. MPEG-1 unveiled the MP3, followed by the improved MPEG-2 AAC. They changed the face of not just audio encoding, but completely revolutionised music consumption and the music industry.

John Chowning was a pioneer and visionary in computer music. This seminal work described FM synthesis, where the timbre of a simple waveform is changed by frequency modulating it with another frequency also in the audio range, resulting in a surprisingly rich control of audio spectra and their evolution in time. In 1971, Chowning also published The simulation of moving sound sources (278 citations), perhaps the first system (and using digital technology) for synthesising an evolving sound scene.
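The technique itself is one line of maths: y(t) = sin(2πf_c·t + I·sin(2πf_m·t)), where the modulation index I controls how much energy spreads into sidebands at f_c ± k·f_m. A minimal sketch (the carrier/modulator ratio below is a hypothetical inharmonic choice, giving a bell-like timbre):

```python
import math

def fm_tone(fc, fm, index, n_samples, fs=44100):
    # index = 0 gives a pure sine at fc; raising it spreads energy into
    # sidebands at fc +/- k*fm (with Bessel-function weights)
    return [math.sin(2 * math.pi * fc * n / fs
                     + index * math.sin(2 * math.pi * fm * n / fs))
            for n in range(n_samples)]

# Inharmonic carrier:modulator ratio -> bell-like spectrum
bell_like = fm_tone(fc=200.0, fm=280.0, index=5.0, n_samples=44100)
```

The appeal Chowning identified is economy: two oscillators and one index parameter yield spectra that would need dozens of partials in additive synthesis.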

The famous Glasberg and Moore loudness model is perhaps the most widely used auditory model for loudness and masking estimation. Other aspects of it have appeared in other papers (including A model of loudness applicable to time-varying sounds, 487 citations, 2002).
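One small, widely used piece of the auditory modelling their loudness work builds on is the Glasberg–Moore formula for the equivalent rectangular bandwidth (ERB) of the auditory filter at a given centre frequency:

```python
def erb_bandwidth(f_hz):
    # Glasberg & Moore: ERB (Hz) = 24.7 * (4.37 * f/1000 + 1)
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# At 1 kHz the auditory filter is roughly 133 Hz wide
width_1k = erb_bandwidth(1000.0)
```

Loudness and masking estimators sum excitation within these frequency-dependent bands, which is why the model tracks perception far better than a plain spectrum does.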

More greatest papers in the next blog entry.

My favorite sessions from the 143rd AES Convention


Recently, several researchers from the audio engineering research team here attended the 143rd Audio Engineering Society Convention in New York. Before the Convention, I wrote a blog entry highlighting a lot of the more interesting or adventurous research that was being presented there. As is usually the case at these Conventions, I have so many meetings to attend that I miss out on a lot of highlights, even ones that I flag up beforehand as ‘must see’. Still, I managed to attend some real gems this time, and I’ll discuss a few of them here.

I’m glad that I attended ‘Audio Engineering with Hearing Loss—A Practical Symposium’. Hearing loss amongst musicians, audiophiles and audio engineers is an important topic that needs more attention. Overexposure, both prolonged and too loud, is a major cause of hearing damage. In addition to all the issues it causes for anybody, for those in the industry it affects their ability to work or even appreciate their passion. The session had lots of interesting advice.

The most interesting presentation in the session was from Richard Einhorn, a composer and music producer. In 2010, he lost much of his hearing due to a virus. He woke up one day to find that he had completely lost hearing in his right ear, a condition known as Idiopathic Sudden Sensorineural Hearing Loss. This then evolved into hyperacusis, with extreme distortion, excessive volume and loss of speech intelligibility. In many ways, deafness in the right ear would have been preferable. On top of that, his left ear suffered otosclerosis, where everything was at greatly reduced volume. And given that this was his only functioning ear, the risk of surgery to correct it was too great.

Richard has found some wonderful ways to still function, and even continue working in audio and music, with the limited hearing he still has. There’s a wonderful description of them in Hearing Loss Magazine, and they include the use of the ‘Companion Mic,’ which allowed him to hear from many different locations around a busy, noisy environment, like a crowded restaurant.

Thomas Lund presented ‘The Bandwidth of Human Perception and its Implications for Pro Audio.’ I really wasn’t sure about this before the Convention. I had read the abstract, and thought it might be some meandering, somewhat philosophical talk about hearing perception, with plenty of speculation but lacking in substance. I was very glad to be proven wrong! It had aspects of all of that, but in a very positive sense. It was quite rigorous, essentially a systematic review of research in the field that had been published in medical journals. It looks at the question of auditory perceptual bandwidth, where bandwidth is in a general information theoretic and cognitive sense, not specifically frequency range. The research revolves around the fact that, though we receive many megabits of sensory information every second, it seems that we only use dozens of bits per second of information in our higher level perception. This has lots of implications for listening test design, notably on how to deal with aspects like sample duration or training of participants. This was probably the most fascinating technical talk I saw at the Convention.

There were two papers that I had flagged up as having the most interesting titles, ‘Influence of Audience Noises on the Classical Music Perception on the Example of Anti-cough Candies Unwrapping Noise’, and ‘Acoustic Levitation—Standing Wave Demonstration.’ I had an interesting chat with an author of the first one, Adam Pilch. When walking around much later looking for the poster for the second one, I bumped into Adam again. It turns out he was a co-author on both of them! It looks like Adam Pilch and Bartlomiej Chojnacki (the shared authors on those papers) and their co-authors have an appreciation of the joy of doing research for fun and curiosity, and an appreciation for a good paper title.

Leslie Ann Jones was the Heyser lecturer. The Heyser lecture, named after Richard C. Heyser, is an evening talk given by an eminent individual in audio engineering or related fields. Leslie has had a fascinating career, and gave a talk that makes one realise just how much the industry is changing and growing, and how important are the individuals and opportunities that one encounters in a career.

The last session I attended was also one of the best. Chris Pike, who recently became leader of the audio research team at BBC R&D (he has big shoes to fill, but fits them well and is already racing ahead), presented ‘What’s This? Doctor Who with Spatial Audio!’. I knew this was going to be good because it involved two of my favorite things, but it was much better than that. The audience were all handed headphones so that they could listen to binaural renderings used throughout the presentation. I love props at technical talks! I also expected the talk to focus almost completely on the binaural, 3D sound rendering for a recent episode, but it was so much more than that. There was quite detailed discussion of audio innovation throughout the more than 50 years of Doctor Who, some of which we have discussed when mentioning Daphne Oram and Delia Derbyshire in our blog entry on female pioneers in audio engineering.

There’s a nice short interview with Chris and colleagues Darran Clement (sound mixer) and Catherine Robinson (audio supervisor) about the binaural sound in Doctor Who on BBC R&D’s blog, and here’s a YouTube video promoting the binaural sound in the recent episode:


Our meta-analysis wins best JAES paper 2016!

Last year, we published an Open Access article in the Journal of the Audio Engineering Society (JAES) on “A meta-analysis of high resolution audio perceptual evaluation.”


I’m very pleased and proud to announce that this paper won the award for best JAES paper for the calendar year 2016.

We discussed the research a little bit while it was ongoing, and then in more detail soon after publication. The research addressed a contentious issue in the audio industry. For decades, professionals and enthusiasts have engaged in heated debate over whether high resolution audio (beyond CD quality) really makes a difference. So I undertook a meta-analysis to assess the ability to perceive a difference between high resolution and standard CD quality audio. Meta-analysis is a popular technique in medical research, but this may be the first time that it’s been formally applied to audio engineering and psychoacoustics. Results showed that trained subjects had a highly significant ability to discriminate high resolution content, an effect that had not previously been revealed. With over 400 participants in over 12,500 trials, it represented the most thorough investigation of high resolution audio so far.
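To give a flavour of what pooling results across studies looks like, here is a minimal sketch of one common meta-analysis technique: fixed-effect (inverse-variance) pooling of per-study log-odds from discrimination trials. The study counts below are invented purely for illustration; this is not the paper’s data or its exact statistical model.

```python
import math

# Hypothetical per-study data as (correct responses, total trials).
# These numbers are made up for the example.
studies = [(55, 100), (130, 240), (210, 400), (48, 80)]

def log_odds(k, n):
    """Log-odds of a correct response, with a 0.5 continuity
    correction, and the usual large-sample variance estimate."""
    a, b = k + 0.5, n - k + 0.5
    return math.log(a / b), 1.0 / a + 1.0 / b

# Fixed-effect pooling: weight each study by the inverse of its variance.
effects = [log_odds(k, n) for k, n in studies]
weights = [1.0 / v for _, v in effects]
pooled = sum(w * y for (y, _), w in zip(effects, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
z = pooled / se  # z beyond 1.96 indicates p < 0.05, two-sided

print(f"pooled log-odds = {pooled:.3f}, z = {z:.2f}")
```

A pooled log-odds above zero means performance above chance on average across studies; a random-effects model would additionally account for variation in the true effect between studies.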

Since publication, this paper has been covered widely across social media, the popular press and trade journals. Thousands of comments were made on forums, with hundreds of thousands of reads.

Here’s one popular independent YouTube video discussing it,

and an interview with Scientific American about it,

and some discussion of it in this article for Forbes magazine (which is actually about the lack of a headphone jack in the iPhone 7).

But if you want to see just how angry this research made people, check out the discussion on hydrogenaudio. Wow, I’ve never been called an intellectually dishonest placebophile apologist before 😉 .

In fact, the discussion on social media was full of misinformation, so I’ll try to clear up a few things here:

When I first started looking into this subject, it became clear that potential issues in the underlying studies were a problem. One option would have been to just give up, but then I’d be adding no rigour to a discussion precisely because I felt it wasn’t rigorous enough. It’s the same as not publishing because you don’t get a significant result, only now on a meta scale. And though I did not have a strong opinion either way as to whether differences could be perceived, I could easily be fooling myself. I wanted to avoid any of my own biases or judgement calls. So I set some ground rules.

  • I committed to publishing all results, regardless of outcome.
  • A strong motivation for doing the meta-analysis was to avoid cherry-picking studies. So I included all studies for which there was sufficient data for them to be used in meta-analysis. Even if I thought a study was poor, its conclusions seemed flawed, or it disagreed with my own preconceptions, I included it as long as I could get the minimal data needed for meta-analysis. I then discussed potential issues.
  • Any choices regarding analysis or transformation of data were made a priori, regardless of the result of that choice, in an attempt to minimize any of my own biases influencing the outcome.
  • I did further analysis to look at alternative methods of study selection and representation.

I found the whole process of doing a meta-analysis in this field to be fascinating. In audio engineering and psychoacoustics, there is a wealth of studies investigating big questions, and I hope others will use similar approaches to gain deeper insights and perhaps even resolve some issues.