Research

To communicate effectively, we often need to understand what someone’s saying when background noise is present. Even listening to a friend speak at a busy restaurant can be difficult when other people are having loud conversations at nearby tables. I’m particularly interested in how we understand speech in these challenging listening environments. To explore this, I use a combination of methods, including psychophysics, neuroimaging (EEG, MEG, and fMRI), pupillometry, and computational modelling.

How do cognitive factors improve speech perception in noisy environments?

A central theme of my research is how we focus our attention on a sound of interest, for example a friend speaking in a noisy place. How does attention influence perception and the ways in which our brains process sounds? We’ve demonstrated that, even before a talker starts speaking, people engage brain activity to prepare for the characteristics of the upcoming talker; when people have longer to prepare (up to 2 seconds), they’re better at understanding what that talker says. We’ve also shown that neural responses to tones are larger and more precisely time-locked when the tones occur at attended rather than unattended frequencies.
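To illustrate the logic of that last finding, here’s a toy simulation in Python (a sketch with invented parameters, not the analysis from these studies): when single-trial responses are more precisely time-locked to a tone, the trial-averaged evoked response comes out larger and sharper.

```python
# Toy simulation (illustrative only): tighter trial-to-trial time-locking
# of responses yields a larger, sharper averaged evoked response.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                               # sampling rate (Hz)
t = np.arange(-0.1, 0.4, 1 / fs)        # peristimulus time (s)

def simulate_evoked(latency_jitter_sd, n_trials=100):
    """Average single-trial responses whose peak latency is jittered
    from trial to trial (jitter SD in seconds)."""
    trials = []
    for _ in range(n_trials):
        latency = 0.1 + rng.normal(0, latency_jitter_sd)       # peak time
        response = np.exp(-((t - latency) ** 2) / (2 * 0.02 ** 2))
        trials.append(response + rng.normal(0, 0.5, t.size))   # sensor noise
    return np.mean(trials, axis=0)

attended = simulate_evoked(latency_jitter_sd=0.005)    # precise locking
unattended = simulate_evoked(latency_jitter_sd=0.030)  # imprecise locking
print(f"peak of average, attended:   {attended.max():.2f}")
print(f"peak of average, unattended: {unattended.max():.2f}")
```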

Another factor that improves speech intelligibility in noisy environments is familiarity with a person’s voice, which relies on long-term memory representations. We found that knowing the person who’s speaking provides a large benefit to speech intelligibility: accuracy at reporting sentences improves by 10–25% when a sentence is spoken by a close friend or partner rather than by a stranger. Intriguingly, this benefit survived acoustic manipulation of the familiar voice, by changing its pitch or timbre, even when the voice was manipulated so much that it was no longer recognisable as that person’s. As well as naturally familiar voices (e.g., friends and spouses), we’ve also studied voices that people have been trained to recognise in the lab. This work showed that people understand speech from a previously unfamiliar talker better after they’ve been trained to recognise that talker’s voice; indeed, speech intelligibility was as good for unfamiliar voices trained for just one hour as for naturally familiar voices.
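As a rough sketch of the kind of acoustic manipulation involved (the experiments used specialised voice-morphing software; here librosa’s off-the-shelf pitch shifter stands in, and voice.wav is a hypothetical recording of a familiar talker):

```python
# Minimal sketch: shift the pitch of a recorded voice far enough that the
# talker may no longer be explicitly recognisable. Illustrative only; the
# published stimuli were made with dedicated voice-morphing software.
import librosa
import soundfile as sf

y, sr = librosa.load("voice.wav", sr=None)                  # original voice
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=8)  # +8 semitones
sf.write("voice_pitch_shifted.wav", shifted, sr)
```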


How are cognitive influences on speech perception affected by hearing loss?

Another aspect of my research investigates how cognitive factors influence speech perception in people with sensorineural hearing loss. People with hearing loss often find listening in noisy environments difficult and effortful, even when they use hearing aids. We found that children with peripheral hearing loss prepare their attention for an upcoming talker less than children with normal hearing do, which helps to explain why they struggle to listen in noisy places. Nevertheless, we found that knowing the topic of conversation improves speech intelligibility and reduces listening effort for adults with hearing loss. Thus, although some cognitive abilities appear to be impaired in people with hearing loss, they retain the ability to use context and prior knowledge to help them understand speech, perhaps relying on context to compensate for a degraded acoustic signal. I’m interested in how we can improve speech perception, and make listening less tiring and effortful, for people with hearing loss.


Why do some people without detectable hearing loss report substantial difficulty listening in noisy settings?

I’m also interested in why some people find it very difficult to understand speech in noisy environments despite having no detectable hearing loss. Researchers and clinicians don’t fully understand the cause(s) of this difficulty or how it can be treated. Our research has shown that some of this difficulty may arise from problems segregating sounds, which we measured using a non-linguistic figure-ground task. This task appears to tap grouping processes in early auditory cortex that are similar to those involved in understanding speech in noisy environments. We found that even young people with no detectable hearing loss vary substantially in how well they understand speech in noise, and that their grouping ability predicted their speech intelligibility when background noise was present. In another study, we showed that the ability to hold sounds in mind over several seconds (working memory for frequency) also helped to predict speech intelligibility. These findings imply that cognitive processes in auditory cortex might help to explain why some people without detectable hearing loss report substantial difficulty listening in noisy settings.
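For readers curious what a stimulus of this kind looks like, here’s a minimal Python sketch (parameters are illustrative, not those of the published task): a sequence of brief random-tone chords in which a fixed subset of frequencies, the “figure”, repeats from chord to chord and can be grouped apart from the changing background.

```python
# Toy figure-ground stimulus: random-tone chords containing a repeating
# subset of frequencies (the "figure"). All parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
fs = 16000                                         # sampling rate (Hz)
chord_dur = 0.05                                   # 50-ms chords
n_chords, n_bg, n_fig = 40, 10, 4
pool = np.geomspace(180, 7000, 120)                # log-spaced frequency pool
figure = rng.choice(pool, n_fig, replace=False)    # same in every chord

t = np.arange(int(fs * chord_dur)) / fs
chords = []
for _ in range(n_chords):
    background = rng.choice(pool, n_bg, replace=False)   # redrawn each chord
    freqs = np.concatenate([background, figure])
    chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    chords.append(chord / len(freqs))              # crude level normalisation
stimulus = np.concatenate(chords)                  # 2 s of audio, ready to play
```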



Current projects

Current projects include:

  • Modelling the cognitive processes involved in speech perception;
  • Using neuroimaging to understand how people direct spatial attention to speech;
  • Examining how spatial attention relates to age and audiometric thresholds in older adults;
  • Exploring factors that might mediate the link between hearing loss and dementia;
  • Testing how voice training benefits speech intelligibility.

If you’re interested in working with me, whether as an undergraduate student, master’s student, PhD student, or postdoc, please feel free to get in touch. I believe diversity adds value to groups and I welcome enquiries from people from historically underrepresented communities. I strive to create a supportive and inclusive lab environment in which all members can thrive.


For a full list of publications, see my Google Scholar or NCBI page.