Traditionally, research on human language has taken speech and written language as the only domains of investigation. However, there is now a wealth of empirical work documenting the visual aspects of language, ranging from rich studies of sign languages, which are self-contained visual language systems, to the field of gesture studies, which examines speech-associated gestures, facial expressions, and other bodily movements related to communicative expression. Despite this large body of work, sign language and gesture are rarely treated together in theoretical discussions. This volume aims to remedy that by considering both types of visual language jointly, in order to transcend (artificial) theoretical divides and arrive at a comprehensive account of the human language faculty. The collection seeks to pave the way for an inherently multimodal view of language, in which visible actions of the body play a crucial role. The 19 papers in this volume address four broad topics: (1) the multimodal nature of language; (2) multimodal representation of meaning; (3) multimodal and multichannel prosody; and (4) acquisition and development of visual language in children and adults.
|Status||Published - 13 Nov 2019|
|Peer review performed||Yes|