Abstract
In the present study, we use an event segmentation task as a method to assess the comprehension of film narratives (Johansson, Rastegar, Lyberg Åhlander & Holsanova, 2024). Sighted participants watched a Swedish film, while visually impaired participants experienced the same film with two AD versions: one in which key event boundaries were expressed explicitly and one in which they were conveyed more implicitly. Both sighted and visually impaired participants were asked to segment the unfolding narrative into distinct, meaningful events.
Our findings indicate that visually impaired participants perceived event boundaries similarly to sighted participants, suggesting that AD effectively conveys the event structure. However, in the AD version with implicit expressions, event boundaries were less likely to be recognized. These results shed light on the dynamics of event segmentation in film and underscore the importance of how event boundaries are presented in AD. They have significant implications for improving the cinematic experience of visually impaired viewers, pointing to the need for clear, explicit information about event boundaries in AD.
Event segmentation is also significant in the domain of automated computer-generated video description, since algorithms need to be trained to identify actions within dynamic scenes, discern connections between frames and actions, recognize event boundaries, and express them effectively. Our results are therefore equally relevant to automated computer-generated video description (Braun, Starr & Laaksonen, 2020; Starr, Braun & Delfani, 2020). A proper understanding of how visually impaired end users perceive event boundaries in audio description can contribute significantly to the development of algorithms capable of accurately tracking referents across narrative depictions of time and space. This, in turn, may offer valuable assistance in establishing nominal and pronominal continuity and coherence, thereby addressing some of the key challenges inherent in the performance of such algorithms.
Acknowledgements
The research has been conducted together with Roger Johansson, Tina Rastegar and Viveka Lyberg Åhlander. We gratefully acknowledge Lund University Humanities Laboratory, Tina Weidelt for her thorough and professional work providing and narrating the audio descriptions, and Linn Petersdotter for her persistent and careful work on data collection.
This work was supported by a grant from FORTE (2018-00200, Swedish Research Council for Health, Working Life and Welfare) and by the TSI 2019 Lund University grant “Audio Description and Accessible Information” (2021-2025).
Original language | English |
---|---|
Pages | 116-119 |
Number of pages | 4 |
Publication status | Published - 2024 Nov 15 |
Event | Languages and the media conference, Budapest, Hungary, 2024 Nov 13 → 2024 Nov 15 |
Conference
Conference | Languages and the media conference |
---|---|
Country/Territory | Hungary |
City | Budapest |
Period | 2024/11/13 → 2024/11/15 |
Subject classification (UKÄ)
- Information Systems, Social aspects (including Human Aspects of ICT)
- Film Studies
Projects
- AUDEA: Syntolkning och tillgänglig information / Audio description and accessible information
Holsanova, J. (PI), Johansson, R. (Researcher), Mårtensson, J. (Researcher), Rudling, M. (Researcher) & Niehorster, D. C. (Researcher)
2021/12/01 → 2025/06/30
Project: Research
- ADACOM: Audio description for accessible communication / Syntolkning för tillgänglig kommunikation
Holsanova, J. (Project coordinator), Johansson, R. (Researcher), Lyberg Åhlander, V. (Researcher), Lindgren, M. (Researcher) & Mårtensson, J. (Researcher)
2019/01/01 → 2024/06/30
Project: Research
- How the blind audience receive and experience audio descriptions of visual events
Johansson, R. (Researcher), Holsanova, J. (Researcher) & Lyberg Åhlander, V. (Researcher)
2019/01/01 → 2023/12/31
Project: Research
Activities
- Neural Event Segmentation In Narrative Film: Constructing And Remembering Events Across Sensory Modalities
Johansson, R. (Presenter), Isberg, K. (Contributor), Rudling, M. (Contributor), Mårtensson, J. (Contributor) & Holsanova, J. (Contributor)
2025 Sept 15 → 2025 Sept 20
Activity: Talk or presentation › Presentation
- Enhancing Audio Description Through Mental Imagery and Embodiment
Holsanova, J. (Speaker)
2025 May 15 → 2025 May 17
Activity: Talk or presentation › Presentation