To communicate effectively, we often need to understand what someone is saying when background noise is present. Even listening to a friend speak at a busy restaurant can be difficult when other people are having loud conversations at nearby tables. I’m particularly interested in how we understand speech in these challenging listening environments. To explore this, I use a combination of methods, including psychophysics, neuroimaging (EEG, MEG, and fMRI), pupillometry, and computational modelling.
How do cognitive factors improve speech perception in noisy environments?
A central theme of my research is how we focus attention on a sound of interest, such as a friend speaking in a noisy place. How does attention influence perception and the ways in which our brains process sounds? We’ve demonstrated that, even before a talker starts speaking, people engage preparatory brain activity tuned to characteristics of the upcoming talker, and that giving listeners longer to prepare (up to 2 seconds) improves their ability to report what the talker says. We’ve also shown that neural responses that follow a tone’s frequency are larger and more precisely time-locked when that frequency is attended than when it is unattended.
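To give a concrete sense of the kind of measure involved, the sketch below quantifies how strongly EEG activity is time-locked to a tone’s modulation frequency. It is a minimal illustration with simulated data, not the published analysis pipeline; the `epochs` array, sampling rate, and all parameters are assumptions chosen for the example.

```python
# Minimal illustration (not the published pipeline): quantify how strongly
# EEG is time-locked to a tone's amplitude-modulation rate.
# Assumed inputs: `epochs` is a (n_trials, n_samples) array of EEG epochs
# recorded while a tone modulated at `mod_rate` Hz was playing; `fs` is the
# EEG sampling rate in Hz. All names and parameters are illustrative.
import numpy as np

def envelope_following_strength(epochs: np.ndarray, fs: float, mod_rate: float):
    n_trials, n_samples = epochs.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - mod_rate))     # FFT bin nearest the modulation rate

    spectra = np.fft.rfft(epochs, axis=1)             # per-trial complex spectra

    # Relative amplitude at the modulation rate, taken from the trial average
    # (averaging across trials preserves only activity that is phase-locked).
    evoked_amp = np.abs(np.mean(spectra[:, bin_idx])) / n_samples

    # Phase-locking value: consistency of the response phase across trials
    # (1 = perfectly time-locked, near 0 = random phase).
    phases = np.angle(spectra[:, bin_idx])
    plv = np.abs(np.mean(np.exp(1j * phases)))

    return evoked_amp, plv

# Example with simulated data: a condition containing a phase-locked 100-Hz
# response versus a condition containing only noise.
fs, mod_rate, n_trials, dur = 1000.0, 100.0, 200, 1.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
locked = 0.5 * np.sin(2 * np.pi * mod_rate * t) + rng.normal(0, 1, (n_trials, t.size))
noise_only = rng.normal(0, 1, (n_trials, t.size))
print(envelope_following_strength(locked, fs, mod_rate))
print(envelope_following_strength(noise_only, fs, mod_rate))
```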
Another factor that improves speech intelligibility in noisy environments is familiarity with a person’s voice, which relies on long-term memory representations. We found that knowing the person who’s speaking provides a large benefit to speech intelligibility: sentences were reported 10–25% more accurately when spoken by a close friend or partner than when spoken by a stranger. Intriguingly, this benefit persisted when we acoustically manipulated the familiar voice by changing its pitch or timbre, even when the manipulation was large enough that the voice was no longer recognisable as that person’s. As well as naturally familiar voices (e.g., friends and spouses), we’ve also studied voices that listeners have been trained in the lab to recognise. This work showed that people understand speech better after being trained to recognise a previously unfamiliar voice, and that intelligibility for voices trained for just one hour is as good as for naturally familiar voices.
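For readers curious what an acoustic pitch manipulation looks like in practice, here is a generic sketch using librosa. It is not the manipulation method used in the published studies; the file name and shift amount are placeholders.

```python
# Generic illustration of a pitch manipulation (not the method used in the
# published studies): shift a recorded voice up by a few semitones.
# "familiar_voice.wav" and the shift amount are placeholders.
import librosa
import soundfile as sf

y, sr = librosa.load("familiar_voice.wav", sr=None)           # load at native sample rate
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)  # raise pitch by 4 semitones
sf.write("familiar_voice_pitch_shifted.wav", y_shifted, sr)
```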
Selected relevant publications:
- Holmes, E., Kitterick, P. T., & Summerfield, A. Q. (2016). EEG activity evoked in preparation for multi-talker listening by adults and children. Hearing Research, 336, 83–100. doi:10.1016/j.heares.2016.04.007
- Holmes, E., Kitterick, P. T., & Summerfield, A. Q. (2018). Cueing listeners to attend to a target talker progressively improves word report as the duration of the cue-target interval lengthens to 2,000 ms. Attention, Perception, & Psychophysics, 80(6), 1520–1538. doi:10.3758/s13414-018-1531-x
- Holmes, E., Parr, T., Griffiths, T. D., & Friston, K. J. (2021). Active inference, selective attention, and the cocktail party problem. Neuroscience and Biobehavioral Reviews, 131, 1288–1304. doi:10.1016/j.neubiorev.2021.09.038
- Holmes, E., Purcell, D. W., Carlyon, R. P., et al. (2018). Attentional modulation of envelope-following responses at lower (93–109 Hz) but not higher (217–233 Hz) modulation rates. JARO, 19, 83–97. doi:10.1007/s10162-017-0641-9
- Holmes, E., Domingo, Y., & Johnsrude, I. S. (2018). Familiar voices are more intelligible, even if they are not recognized as familiar. Psychological Science, 29(10), 1575–1583. doi:10.1177/0956797618779083
- Domingo, Y., Holmes, E., & Johnsrude, I. S. (2020). The benefit to speech intelligibility of hearing a familiar voice. Journal of Experimental Psychology: Applied, 26(2), 236–247.
- Holmes, E., To, G., & Johnsrude, I. S. (2021). How long does it take for a voice to become familiar? Speech intelligibility and voice recognition are differentially sensitive to voice training. Psychological Science, 32(6), 903–915. doi:10.1177/0956797621991137
How are cognitive influences on speech perception affected by hearing loss?
Another aspect of my research investigates how cognitive factors influence speech perception for people with sensorineural hearing impairment. People with hearing loss often find listening in noisy environments difficult and effortful, even when they use hearing aids. We found that children with peripheral hearing loss prepare their attention for an upcoming talker less than children with normal hearing do, which may help to explain why they struggle to listen in noisy places. Nevertheless, we found that knowing the topic of conversation improves speech intelligibility and reduces listening effort for adults with hearing impairment. Thus, although some cognitive abilities appear to be impaired in people with hearing loss, they retain the ability to use context and prior knowledge to help them understand speech, perhaps relying on context to compensate for a degraded acoustic signal. I’m interested in how we can improve speech perception, and make listening less tiring and effortful, for people with hearing loss.
Selected relevant publications:
- Holmes, E., Folkeard, P., Johnsrude, I. S., & Scollie, S. (2018). Semantic context reduces sentence-by-sentence listening effort for listeners with hearing impairment. International Journal of Audiology, 57(7), 483–492. doi:10.1080/14992027.2018.1432901
- Holmes, E., Kitterick, P. T. & Summerfield, A. Q. (2017). Peripheral hearing loss reduces the ability of children to direct selective attention during multi-talker listening. Hearing Research, 350, 160–172. doi:10.1016/j.heares.2017.05.005
Why do some people without detectable hearing loss report substantial difficulty listening in noisy settings?
I’m also interested in why some people find it very difficult to understand speech in noisy environments despite having no detectable hearing loss. Researchers and clinicians don’t fully understand the cause(s) of this difficulty or how it can be treated. Our research has shown that some of it may arise from problems segregating sounds, which we measured using a non-linguistic figure-ground task. This task appears to tap into grouping processes in early auditory cortex that are similar to those involved in understanding speech in noisy environments. We found that even young people with no detectable hearing loss vary substantially in how well they understand speech in noise, and that their grouping ability predicted their speech-in-noise performance. In another study, we showed that the ability to hold sounds in mind over several seconds (working memory for frequency) also predicted the ability to understand speech. These findings imply that cognitive processes in auditory cortex might help us to understand why some people without detectable hearing loss have substantial difficulty listening in noisy settings.
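To make the figure-ground idea concrete, the sketch below generates a sequence of random tone chords in which a fixed set of frequencies repeats from chord to chord, so the ‘figure’ can only be detected by grouping across frequency and time. The parameters, frequency range, and durations are illustrative assumptions, not the exact published stimuli.

```python
# Minimal sketch of a figure-ground style stimulus (parameters illustrative,
# not the exact published stimuli): a sequence of chords of random pure tones
# (the "ground") in which a fixed subset of frequencies repeats across chords
# (the "figure"), so the figure emerges only through grouping over time.
import numpy as np

def figure_ground_stimulus(fs=44100, n_chords=40, chord_dur=0.05,
                           n_components=14, n_figure=4, figure_present=True, seed=0):
    rng = np.random.default_rng(seed)
    freq_pool = np.geomspace(200, 7000, 120)          # log-spaced candidate frequencies
    t = np.arange(int(fs * chord_dur)) / fs
    figure_freqs = rng.choice(freq_pool, n_figure, replace=False)
    chords = []
    for _ in range(n_chords):
        n_random = n_components - (n_figure if figure_present else 0)
        freqs = list(rng.choice(freq_pool, n_random, replace=False))
        if figure_present:
            freqs += list(figure_freqs)                # same frequencies repeat in every chord
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(chord / n_components)            # keep overall level comparable
    return np.concatenate(chords)

stim = figure_ground_stimulus()                        # "figure present" stimulus
control = figure_ground_stimulus(figure_present=False) # ground-only control
```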
Selected relevant publications:
- Holmes, E., & Griffiths, T. D. (2019). ‘Normal’ hearing thresholds and fundamental auditory grouping processes predict difficulties with speech-in-noise perception. Scientific Reports, 9, 16771. doi:10.1038/s41598-019-53353-5
- Holmes, E., Zeidman, P., Friston, K. J., & Griffiths, T. D. (2021). Difficulties with speech-in-noise perception related to fundamental grouping processes in auditory cortex. Cerebral Cortex, 31(3), 1582–1596. doi:10.1093/cercor/bhaa311
- Lad, M., Chu, A., Holmes, E., & Griffiths, T. D. (2020). Speech-in-noise detection is related to auditory working memory precision for frequency. Scientific Reports, 10, 13997. doi:10.1038/s41598-020-70952-9
Current projects
Current projects include:
- Modelling the cognitive processes involved in speech perception;
- Using neuroimaging to understand how people direct spatial attention to speech;
- Examining how spatial attention relates to age and audiometric thresholds in older adults;
- Exploring factors that might mediate the link between hearing loss and dementia;
- Testing how voice training benefits speech intelligibility.
If you’re interested in working with me (as an undergraduate student, master’s student, PhD student, or postdoc), please feel free to get in touch. I believe diversity adds value to groups, and I welcome enquiries from people from historically underrepresented communities. I strive to create a supportive and inclusive lab environment in which all members can thrive.
For a full list of publications, see my Google Scholar or NCBI page.