Multimodality and Timing: A Study in Audio Description
Sighted audiences take for granted their ability to perceive and experience the visual scenes, images, gestures, facial expressions, events, and on-stage movements of film and theatre performances. A visually impaired audience, however, can miss important information that is only visible and is not articulated through language or sound.
The task of the interpreter is to evoke vivid mental images for people without sight by simultaneously describing visual scenes verbally. This means that, under time pressure, the interpreter has to select relevant non-verbal information from the visual scene, linearize it, articulate it linguistically in a way that efficiently evokes such images, and time this information so that it matches what is happening linguistically at that moment, i.e. what is said in the dialogue and what is heard in the soundscape. This timing is crucial for the integration of the different modalities, for meaning-making, and for achieving the intended communicative effect.
Effective start/end date: 2014/01/01 → 2014/12/31
Project: Research › Interdisciplinary research