Date of Graduation


Document Type

Dissertation (PhD)

Program Affiliation


Degree Name

Doctor of Philosophy (PhD)

Advisor/Committee Chair

Michael Beauchamp, Ph.D.

Committee Member

Ruth Heidelberger, M.D., Ph.D.

Committee Member

Harel Shouval, Ph.D.

Committee Member

Michael Beierlein, Ph.D.

Committee Member

Kartik Venkatachalam, Ph.D.


Understanding speech in face-to-face conversation relies on the integration of multiple sources of information, most importantly auditory vocal sounds and visual lip movements. Prior studies of the neural underpinnings of audiovisual integration have provided converging evidence that neurons within the left superior temporal sulcus (STS) form a critical hub for integrating auditory and visual information in speech. Because most studies of audiovisual processing focus on neural mechanisms in healthy young adults, we currently know very little about how changes to the brain affect audiovisual integration in speech. To examine this, I investigated two particular cases of changing neural structure. I first conducted a case study of patient SJ, who suffered a stroke that damaged a large portion of her left temporoparietal cortex, including the left STS. Behavioral testing five years after her stroke showed that SJ is still able to integrate auditory and visual information in speech. To understand the neural basis of her intact multisensory integration abilities, I examined SJ and 23 age-matched controls with functional magnetic resonance imaging (fMRI). Compared with the age-matched controls, SJ showed a greater volume of multisensory cortex and a greater response amplitude in her right STS to an audiovisual speech illusion. This evidence suggests that SJ’s brain reorganized after her stroke such that the right STS now supports the functions of the stroke-damaged left-sided cortex.

Because changes to the brain occur even in healthy aging, I next examined the neural response to audiovisual speech in healthy older adults. Many behavioral studies have noted that older adults show not only performance declines on various sensory and cognitive tasks, but also greater variability in performance.
I sought to determine whether there is a neural counterpart to this increased behavioral variability. I found that older adults exhibited greater intrasubject variability in their neural responses across trials than younger adults, both in individual regions of interest within the multisensory speech perception network and across all brain voxels that responded to speech stimuli. This increase in variability may underlie a reduced ability of the brain to distinguish between similar stimuli (such as the categorical boundaries of speech perception), which could link these findings to age-related declines in speech perception.


fMRI, audiovisual speech, aphasia, McGurk effect, aging, BOLD variability, intrasubject variability