Funded PhD studentships available in Data-informed Audience-centric Media Engineering

So it's been a while since I've written a blog post. Life, work and, of course, the Covid crisis have limited my time. But hopefully I'll write more frequently in future.

The good news is that there are fully funded PhD studentships which you, or others you know, might be interested in. They all centre around the concept of Data-informed Audience-centric Media Engineering (DAME). See for details.

Three studentships are available. All are fully funded for four years of study, based at Queen Mary University of London, and starting January 2021. Two of the proposed topics, ‘Media engineering for hearing-impaired audiences’ and ‘Intelligent systems for radio drama production’, are supported by the BBC and build on prior and ongoing work by my research team.

  • Media engineering for hearing-impaired audiences: This research will explore ways in which media content can be automatically processed so that it is delivered optimally to audiences with hearing loss. It builds on prior work by our group and our collaborator, the BBC, on effective audio mixing techniques for broadcast audio enhancement [1,2,3]. It will develop a deeper understanding of how hearing loss affects the perception and enjoyment of media content, and apply this knowledge to intelligent audio production techniques and applications that improve audio quality through efficient, customisable compensation. It aims to advance beyond current research [4], which does not yet fully take into account the artistic intent of the material and requires an ‘ideal mix’ for normal-hearing listeners. A new approach is therefore needed, one that removes these constraints and focuses more on the meaning of the content. This approach will draw on natural language processing and audio informatics to prioritise sources and establish requirements for the preferred mix.
  • Intelligent systems for radio drama production: This research proposes methods for assisting a human creator in producing radio dramas. Radio drama involves literary aspects, such as plot, characters and environments, as well as production aspects, such as speech, music and sound effects. The project builds on a recent, high-impact collaboration with the BBC [3, 5] to greatly advance the understanding of radio drama production, with the goal of devising and assessing intelligent technologies that aid its creation. It will first investigate rule-based systems for generating production scripts from story outlines and for producing draft content from such scripts. It will consider existing workflows for content production and identify where they rely on heavy manual labour. Evaluation will be carried out with expert content producers, with the goal of creating new technologies that streamline workflows and facilitate the creative process.

If you or anyone you know is interested, please look at . Consider applying, and feel free to ask me any questions.

[1] A. Mourgela, T. Agus and J. D. Reiss, “Perceptually Motivated Hearing Loss Simulation for Audio Mixing Reference,” 147th AES Convention, 2019.

[2] L. Ward et al., “Casualty Accessible and Enhanced (A&E) Audio: Trialling Object-Based Accessible TV Audio,” 147th AES Convention, 2019.

[3] E. T. Chourdakis, L. Ward, M. Paradis and J. D. Reiss, “Modelling Experts’ Decisions on Assigning Narrative Importances of Objects in a Radio Drama Mix,” Digital Audio Effects Conference (DAFx), 2019.

[4] L. Ward and B. Shirley, “Personalization in Object-Based Audio for Accessibility: A Review of Advancements for Hearing Impaired Listeners,” Journal of the Audio Engineering Society, 67 (7/8), 584–597, 2019.

[5] E. T. Chourdakis and J. D. Reiss, “From My Pen to Your Ears: Automatic Production of Radio Plays from Unstructured Story Text,” 15th Sound and Music Computing Conference (SMC), Limassol, Cyprus, 4–7 July 2018.