I believe that the way we hear can tell us a lot about the way we speak! For example, one of my research projects deals with the role of so-called acoustic landmarks, which appear to be important in both speech production and speech perception. To investigate this, I search for patterns in functional models of auditory processing that could explain certain aspects of articulation.
What are the underlying acoustic features driving speech and voice recognition? To improve our understanding of speech acoustics and speaker recognition, I often use synthesized or manipulated speech in behavioural experiments.
In collaboration with researchers and physicians from UCL’s Great Ormond Street Hospital for Children, I am working on a non-invasive method to measure brain lateralisation for speech and language. The aim of this project is to develop a simple and robust technique for use in surgical planning for young patients with intractable epilepsy.
Currently, I am particularly interested in an emerging neuroimaging technique called functional near-infrared spectroscopy (fNIRS), which can be used to study the cortical processing of speech and language in clinical populations that are not amenable to functional magnetic resonance imaging (fMRI) or electroencephalography (EEG).
I am involved in a project led by researchers at the UCL Ear Institute and the UCL Hospitals NHS Foundation Trust (UCLH), which studies the cerebral processing of speech and language in children and adults with cochlear implants. The major aim of this work is to help improve their understanding and use of spoken language.
I am also very interested in the acoustics of singing voices. In collaboration with a team at the Zurich University of the Arts (ZHdK) in Switzerland, I have helped to build up a comprehensive database of artistic voices recorded under carefully controlled conditions.