Paint me a picture: Western Sydney research reveals linguistic meaning in gestures and animations

A Western Sydney University study has revealed that far less linguistic meaning is encoded in words themselves than previously thought. When encountering new informational content, our brains have the capacity to identify linguistic meanings, even when that content takes the form of gestures and visual animations.

According to study co-author Dr Lyn Tieu, a linguistics expert from the School of Education and the MARCS Institute for Brain, Behaviour and Development at Western Sydney University, the results suggest that our minds can spontaneously organise new informational content in a linguistic fashion, even when that content is not linguistic in nature.

“Gestures and animations allow us to examine, on a very immediate basis, how people learn what meanings words can have,” explained Dr Tieu. “We can present participants with novel gestures and animations in a linguistic context, and see how they interpret them.”

Dr Tieu, along with co-authors Philippe Schlenker from New York University and the French National Centre for Scientific Research (CNRS) and Emmanuel Chemla (CNRS), studied people’s interpretations of special hybrid sentences in which words were replaced with either gestures (e.g., mimicking taking off one’s glasses) or visual animations (e.g., colour changes on the screen). They observed that people could access familiar types of linguistic meaning even from this non-linguistic content.

“The ability to navigate the complexities of linguistic meaning is integral to everyday communication, and we typically deploy this ability without a second thought. Our psycholinguistic experiments suggest that this ability to identify linguistic meaning likely stems from a more general cognitive ability,” said Dr Tieu.

The study, ‘Linguistic inferences without words’, was first published in the Proceedings of the National Academy of Sciences (PNAS).

ENDS

26 April 2019

Media Unit