To communicate effectively with each other, we often need to be able to understand what someone’s saying in the presence of background noise. Even listening to a friend speak at a busy restaurant can be difficult when other people are having loud conversations at nearby tables. I’m particularly interested in how we understand speech in these challenging listening environments.
Selectively attending to a sound of interest
A theme of my research is to investigate how we focus our attention on a sound of interest—for example, a friend speaking in a noisy place. How does attention influence perception and the way in which our brains process sounds? To explore this question, I use a combination of methods, including psychophysics, electroencephalography (EEG), and functional magnetic resonance imaging (fMRI).
- Holmes, E., Purcell, D. W., Carlyon, R. P., et al. (2018). Attentional modulation of envelope-following responses at lower (93–109 Hz) but not higher (217–233 Hz) modulation rates. JARO, 19, 83–97. doi:10.1007/s10162-017-0641-9
- Holmes, E., Kitterick, P. T., & Summerfield, A. Q. (2016). EEG activity evoked in preparation for multi-talker listening by adults and children. Hearing Research, 336, 83–100. doi:10.1016/j.heares.2016.04.007
Improving speech intelligibility during multi-talker listening
I’m also interested in how cognitive factors (e.g. context and prior knowledge) can improve speech intelligibility in difficult listening situations. For example, some of my research on this topic has focused on voice familiarity. When multiple people speak simultaneously, we’re better at understanding speech spoken by people we’re familiar with, such as a friend or partner, than speech spoken by someone we’ve never met. But how do we become familiar with someone’s voice? And which aspects of someone’s voice help us to follow what they’re saying?
- Holmes, E., Domingo, Y., & Johnsrude, I. S. (2018). Familiar voices are more intelligible, even if they are not recognized as familiar. Psychological Science, 29(10), 1575–1583. doi:10.1177/0956797618779083
- Holmes, E., Kitterick, P. T., & Summerfield, A. Q. (2018). Cueing listeners to attend to a target talker progressively improves word report as the duration of the cue-target interval lengthens to 2000 ms. Attention, Perception, & Psychophysics, 80(6), 1520–1538. doi:10.3758/s13414-018-1531-x
Another aspect of my research investigates how cognitive factors influence speech perception for people with sensorineural hearing impairment. People with hearing impairment often report that they find listening in noisy environments particularly difficult and effortful, even when they use hearing aids. Although some cognitive abilities seem to be impaired in people with hearing loss, they nevertheless retain the ability to use context and prior knowledge to help them understand speech—perhaps relying on context to help compensate for a degraded acoustic signal. I’m interested in how these factors can help to improve speech perception, and make listening less tiring and effortful, for people with hearing loss.
- Holmes, E., Folkeard, P., Johnsrude, I. S., & Scollie, S. (2018). Semantic context reduces sentence-by-sentence listening effort for listeners with hearing impairment. International Journal of Audiology, 57(7), 483–492. doi:10.1080/14992027.2018.1432901
- Holmes, E., Kitterick, P. T., & Summerfield, A. Q. (2017). Peripheral hearing loss reduces the ability of children to direct selective attention during multi-talker listening. Hearing Research, 350, 160–172. doi:10.1016/j.heares.2017.05.005