Professor Catherine Best

Position

Chair in Psycholinguistic Research in the Speech and Language research program at the MARCS Institute

Biography

After receiving her PhD in Developmental Psychology and Neuroscience (Michigan State University, 1978), Best was awarded a prestigious NIH postdoctoral fellowship (1978-1980) to study psycholinguistics at the world-renowned Haskins Laboratories, where she was supervised by two central figures in speech perception research: Alvin Liberman and Michael Studdert-Kennedy.

From there, she served for four years as Director of the Neuroscience & Education program at Columbia University (1980-1984), then took up a faculty position in Psychology at Wesleyan University (1984-2004). In late 2004 she joined MARCS Laboratories, University of Western Sydney (now the MARCS Institute, Western Sydney University), as Chair in Psycholinguistic Research.

Best's research and theoretical work have focused primarily on how adults' and infants' experience with their native language shapes their perception and production of the phonological elements of spoken words, including consonants, vowels, lexical tones and prosodic patterns. She has applied this theme broadly, investigating perception and production of spoken language in second language learners and bilinguals and in children with language difficulties, and extending her research to sign language, facial expressions, and culture-specific characteristics of music. Her most significant theoretical contribution is her model of the effects of language experience on perception: the Perceptual Assimilation Model (PAM; e.g., Best, 1984, 1994a, 1994b, 1995).

Best's work has offered important insights into why many non-native phonetic contrasts are difficult for adults and older infants to discriminate, while others remain much easier. Throughout her work, Best has taken an ecological, or direct realist, philosophical perspective founded on James Gibson's ecological theory of perception. During her Wesleyan years, she was awarded a highly competitive NIH Research Career Development Award, which provided her with several years of advanced linguistics training and deepened her interest in articulatory information as a viable basis for speech perception.

That interest has been fundamental to the development of PAM, and it provides the core motivation for her more recent line of research on the effects of regional accent differences on spoken word recognition by infants, toddlers and adults.

Research Interests

Listening with a Native Ear: Cross-Language Speech Perception and Word Recognition

Identifying and discriminating consonants and vowels ("speech segments") may seem to require only simple sensory abilities that should be engaged equally by the segments of any language. Yet in fact, speech perception is exquisitely "tuned" to our native language, allowing rapid, accurate detection of the phonetic details needed to identify and distinguish native words while minimizing attention to details that are irrelevant to word recognition (e.g., talker differences, accent differences). We investigate monolingual, bilingual and second-language-learning adults and infants, using a wide range of procedures in studies designed to provide novel insights into: how infants "tune in" to native speech; how perceptual tuning assists adults in recognizing native words; how bilingual adults and infants "manage" the speech contrasts of their different languages; and how, when and why native-language attunement biases perceptual re-tuning during L2 learning.

How Strict is the Mother Tongue? Effects of Regional Accent Differences on Perception of Phonetic Segments and Spoken Words

Native-language perceptual attunement is shaped by the phonetic details of speech in the listener's home community, i.e., their native regional accent, and it affects recognition of words spoken in other accents. Within our native language, we understand each other across most accents, or quickly adapt perceptually to unfamiliar talkers and accents so that we soon understand them easily. Yet at the same time, we remain exquisitely sensitive to those pronunciation differences for sociolinguistic purposes, such as identifying specific talkers and whether they are from the same country, region or town as we are. Our current investigations into this intriguing tension between spoken word recognition and recognition of talker/accent characteristics include: adults' perceptual adaptation to other regional accents; relations among toddlers' native-language speech attunement, vocabulary development, and cross-accent word recognition; and bilinguals' and L2 learners' word recognition and perceptual adaptation across unfamiliar L2 accents. English and Italian are the languages used in our ongoing studies; we hope to extend the work to other L1s and L2s.

Development of Phonology and Spoken Word Recognition in Children with Atypical Language

Perceptual attunement to native speech begins early, proceeds rapidly, and occurs automatically in typically developing (TD) infants and toddlers. Unfortunately, some children have notable difficulties acquiring various aspects of spoken and/or written language. These difficulties often persist into adulthood, but they can be partially alleviated by effective early interventions, which of course depend on early diagnosis. Diagnosis and early intervention for children with Autism Spectrum Disorder (ASD), dyslexia (severe difficulty in reading and learning to read) and Specific Language Impairment (SLI) call for a deeper understanding than we currently have of how and when these children's early language development deviates from that of TD children. Phonological, phonetic and lexical aspects of language development in TD children, compared with children who have various developmental language impairments, may be examined in PhD projects with Prof Best. However, it is essential that PhD candidates in this research area have prior training/experience with, and access to, the specific populations they wish to study.

Perception of Articulatory Information in Speech

One of the biggest unresolved issues in speech perception research is the nature of the information that perceivers detect in speech. The mainstream assumption has long been that speech perception is based on detection and processing of acoustic information: acoustic features, cues, or holistic patterns of spoken words. Several alternative theories (Motor Theory, Direct Realism, Articulatory Phonology) have instead posited that, rather than relying on acoustic information per se, perceivers detect amodal (modality-nonspecific) information about articulation in speech. We use a variety of methods to critically compare acoustic and articulatory accounts of the information that perceivers detect in speech, employing procedures that tap multi-modal and cross-modal (auditory, visual, haptic, proprioceptive) effects on infant and adult speech perception.

Articulatory Phonology: How Speakers Produce the Coordinated Articulatory Gestures that Shape the Perceived Speech Signal

A critical piece of the puzzle is understanding how articulatory gestures structure the multimodal speech signal. Recent technological advances have greatly improved researchers' ability to track the motions of the speech articulators, both those hidden within the inner reaches of the vocal tract (tongue tip, body and root; velum; glottis) and those revealed directly or indirectly by the dynamic motions of talkers' faces (lip and jaw motion, and displacement of the cheeks by the mechanical and aerodynamic effects of articulatory gestures). The well-outfitted MARCS Institute Speech Production Laboratory (MISPL) supports electromagnetic articulography (EMA: NDI Wave), ultrasound, nasal airflow, electropalatography, high-resolution active- and passive-marker facial motion tracking (Optotrak, Vicon) and computational modelling, which we use to investigate articulatory organization in a variety of languages, including native Australian languages, and to identify similarities and differences among regional accents of English.

Qualifications and Recognition

  • BS, Michigan State University
  • MA, Michigan State University
  • PhD, Michigan State University
  • Individual NIH Postdoctoral Fellowship, Haskins Laboratories (with Alvin Liberman and Michael Studdert-Kennedy)

Roles

  • Leader, MARCS Institute Speech Production Laboratory (MISPL)
  • Convener of the Linguistics Soirée monthly meetings

Contact Details

Email: C.Best@westernsydney.edu.au
Telephone: +61 2 9772 6760
Location: Building 1, Bankstown Campus