Much of the information we encounter in everyday life is processed through sight. For instance, perceiving the visual events in a movie is something most of us take for granted, yet for people with severe visual impairment this is not possible. Audio description (AD) can substantially improve information accessibility for this group. The goal of AD is to evoke "mental images" of visual events by describing them verbally. An audio describer thus needs to select what to describe, how to describe it, when to describe it, and how to express this information aurally.
But how do people without sight understand and experience descriptions of visual scenes? Can they visualize information at all? There is substantial evidence that people without sight can use mental imagery similarly to sighted people, but there are also fundamental differences in the underlying processes. Blind people rely primarily on haptic and motor imagery, which is grounded in slower and cognitively more demanding processes than those sighted people engage when performing corresponding activities. This is critical to consider in AD when deciding what to describe, when to describe it, and how such information should be expressed. To date, however, this research topic has been largely ignored, in Sweden as well as internationally.
The theoretical aim of the present project is therefore to gain a better understanding of the principles that underlie successful communication between sighted and blind individuals in AD. To that end, we will conduct a series of experiments that specify how blind people understand, segment, and experience the visual, spatial, and temporal properties of event descriptions. The applied goal is to raise the quality of AD, support the training of audio describers and AD practices, and ultimately improve the understanding and accessibility of visual information for the visually impaired.