Master of Research Student
Research Program: Multisensory Communication
Universal and Language-specific in Speech Perception: The Case of Prosody
When listeners can see the talker's face (visual speech), auditory speech processing is facilitated. Electroencephalography (EEG) studies have shown that seeing the talker, compared to only hearing them, leads to smaller and earlier auditory brain responses: the N1 (the first negative peak in the event-related potential, ERP) and the P2 (the second positive peak). Because visual speech provides the listener with advance cues about both what the speech sound will be ('form') and when it will occur ('timing'), it is unclear how much each type of information contributes to the changes in the N1 and P2 components. In my thesis I plan to systematically investigate the facilitation provided by form and timing cues by comparing visual speech (which provides both) with printed cues (which provide only form information). I use EEG to measure the ERPs elicited by auditory signals cued with either visual speech or printed text.
Qualifications and Honours