Most people speak several languages and coordinate speech and gestures in language-specific ways. Yet we know very little about the processes that drive multimodal language use and learning. This project explores experimentally how adults manage the interaction between their languages, reflected for example in foreign accent, using techniques such as motion capture, electrophysiology, and virtual reality.
Most people in the world speak several languages and often learn new languages in adulthood. All speakers also coordinate speech and gestures in language-specific ways. The world is thus multilingual and communication is multimodal. Yet we know very little about the processes underlying the multimodal production, comprehension and learning of languages and even less about how learning and processing can be improved.
This program examines experimentally how adult multilingual speakers handle the interaction between their new and existing languages, reflected for example in foreign accent. We examine how the interaction between languages affects the multimodal production and comprehension of speech and gesture, how that interaction changes during learning, and what role gestures play in learning. One part targets production, establishing profiles of speech and gesture repertoires in mono- and multilingual speakers using sensor technology (articulography for speech, motion capture for gestures). Another part investigates whether recipients are sensitive to gestural accent, using electrophysiological measures of brain activity. A third part focuses on learning, using the sensor data from the first part to develop virtual speakers for use in teaching.
The project breaks new ground by challenging monolingual, unimodal and static theories of human language in favour of a multilingual, bimodal and dynamic perspective. It develops new measures and experimental tools that can also be implemented in teaching.