International researcher Ao Chen completes visit to MARCS BabyLab

International researcher Ao Chen has just completed a six-month visit to MARCS BabyLab as part of the Australian Government's Endeavour Research Fellowship program.

She spoke with BabyLab coordinator Rachel Lee and discussed her project, her research methods and some preliminary results.

Can you tell us a bit about your background?
Originally from China, I did my Master's and my PhD in the Netherlands. In November 2014, after I finished my PhD, I received an Endeavour Fellowship from the Australian Department of Education, which allowed me to visit MARCS BabyLab for six months.

What have you been doing?
During my stay I have been using EEG to test 4-, 8- and 12-month-old infants on their perception of language and music. I have had the pleasure of meeting and testing more than 70 babies with their parents.

What is your research question?
The question I'm interested in is why and how humans hold both music and speech in a single mind, when both largely rely on the same physical attributes. Speech and music may sound easily distinguishable to many people, but they actually have much in common: both contain rich pitch variations as well as rhythmic patterns, and songs are very similar to speech. My focus so far has been on pitch.

Can you tell us a bit about your research methods?
In the experiment, short pieces of musical and speech sound are presented to the babies while they wear an EEG cap. The sensors on the cap capture their brain responses to the sounds they hear. In both the music and the speech conditions, a different sound is occasionally embedded in a stream of repeating sounds. The occurrence of such an infrequent event triggers neuron firing in the brain, which generates biological potentials called the mismatch response (MR). The EEG cap captures these potentials.
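
For readers curious how a mismatch response of this kind is typically extracted from EEG recordings, the sketch below shows the general idea using the open-source MNE-Python library. The file name, event codes and filter settings are illustrative assumptions, not the BabyLab's actual pipeline.

import mne

# Illustrative oddball analysis sketch; the file name and event codes are hypothetical.
raw = mne.io.read_raw_fif("infant_oddball_raw.fif", preload=True)
raw.filter(l_freq=0.5, h_freq=30.0)  # band-pass to remove slow drift and muscle noise

# Events mark each sound onset: code 1 = frequent "standard", code 2 = rare "deviant".
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"standard": 1, "deviant": 2},
                    tmin=-0.1, tmax=0.6, baseline=(None, 0), preload=True)

# Average the brain response to each sound type ...
standard = epochs["standard"].average()
deviant = epochs["deviant"].average()

# ... and subtract them: the deviant-minus-standard difference wave
# is the mismatch response (MR) described above.
mismatch = mne.combine_evoked([deviant, standard], weights=[1, -1])
mismatch.plot(picks="eeg")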

Do you have any results you can share?
At four months, the MR to music is evenly distributed over the frontal site, whereas the MR to speech pitch is more left-lateralized. This suggests that as early as four months, the neural mechanisms behind music and speech processing already differ. By eight months, in the music condition, the infants start showing an MR comparable to adults', and time-locked tracking of individual notes in the melody can be seen. For language, however, the MR is diminished.

By 12 months, interestingly, the MR becomes visible again in the language condition. We also found that the MR to both speech and music shows different polarities at four months and 12 months. Questions such as what causes this shift, and what its physiological basis is, need further investigation in the future.

Can you conclude anything from these results?
Methods for understanding the baby brain are very limited. EEG allows us to gather a large amount of information within a relatively short time. How infants track pitch change in early life, and how well their brain responses are time-locked to the stimuli, may provide valuable information for early diagnosis as well as intervention for learning difficulties.