Mind the Boundaries: Neurocognitive and AI-Driven Insights into Event Perception in Audio-Described Films
Abstract
A fundamental aspect of the human mind is its ability to perceive, understand, and remember life experiences, which we process as distinct and meaningful sequences of events, each with a clear beginning and end. This includes experiences from audiovisual media. However, while movies do not explicitly signal when one event ends and another begins, viewers still consistently and automatically perceive event boundaries. To assess the effectiveness of Audio Description (AD) in enhancing the film experience for visually impaired viewers and to understand the challenges and opportunities for automated, computer-generated video descriptions, it is crucial to explore how AD influences the perception and segmentation of a film’s narrative structure.
In a recent study (Johansson et al., 2024), we investigated how visually impaired participants perceived event boundaries in a film compared to their sighted peers. The sighted participants viewed the original film, while the visually impaired participants experienced two versions of AD of the same film: one with explicit indications of key event boundaries and another where these boundaries were more subtly conveyed. All participants were asked to segment the narrative into distinct and meaningful events.
Our findings reveal that visually impaired participants perceived event boundaries similarly to their sighted counterparts, indicating that AD can effectively convey the unfolding event structure. However, when event boundaries in AD were more subtly conveyed, they were less likely to be recognized. This underscores the importance of presenting clear and explicit event boundaries in AD to enhance the cinematic experience for visually impaired viewers.
Building on these findings, we are currently conducting a brain imaging study to further investigate how event structures are experienced, understood, and remembered in films with AD. This study targets the neural activation patterns in people with and without visual impairments as they experience event boundaries in film narratives, with and without AD.
While our research offers valuable insights for improving AD practices, it also holds significant implications for the development of automated, computer-generated video descriptions (Braun, Starr & Laaksonen, 2020; Starr, Braun & Delfani, 2020). A deeper understanding of how visually impaired users perceive, understand, and remember event boundaries in AD can inform the design of algorithms capable of accurately tracking narrative elements across time and space. Such advancements could address key challenges in maintaining continuity and coherence in AI-generated video descriptions. For instance, when applying current commercial AI models, such as ChatGPT 4.0, to identify event boundaries in the films from our studies, these models mainly identified visual contrasts at scene transitions or clear changes in visual flow. This approach contrasts sharply with human event segmentation, which fundamentally relies on a contextual understanding of the unfolding narrative, including spatiotemporal relationships among characters, as well as their intentions and goals toward each other – a capability that remains largely absent in the interpretation of audiovisual media by commercially available AI.
Original language | English |
---|---|
Pages | 61-63 |
Number of pages | 3 |
Status | Published - 2025 |
Event | 10th Advanced Research Seminar on Audio Description (ARSAD 2025), Universitat Autònoma de Barcelona, Barcelona, Spain. Duration: 19 Mar 2025 → 21 Mar 2025. Conference number: 10. https://webs.uab.cat/arsad/programme-2/ |
Conference

Conference | 10th Advanced Research Seminar on Audio Description |
---|---|
Abbreviated title | ARSAD 2025 |
Country/Territory | Spain |
City | Barcelona |
Period | 2025/03/19 → 2025/03/21 |
Internet address | https://webs.uab.cat/arsad/programme-2/ |
Subject classification (UKÄ)
- Film
- Psychology
Projects
- 1 Active
AUDEA: Audio description and accessible information
Holsanova, J. (PI), Johansson, R. (Researcher), Mårtensson, J. (Researcher), Rudling, M. (Researcher) & Niehorster, D. C. (Researcher)
2021/12/01 → 2025/06/30
Project: Research
Equipment

Lund University Bioimaging Centre
Westergren-Thorsson, G. (Manager)
Faculty of Medicine. Infrastructure
Activities
- 1 Presentation

Neural Event Segmentation In Narrative Film: Constructing And Remembering Events Across Sensory Modalities
Johansson, R. (presenter), Isberg, K. (contributor), Rudling, M. (contributor), Mårtensson, J. (contributor) & Holsanova, J. (contributor)
15 Sep 2025 → 20 Sep 2025
Activity: Talk or presentation › Presentation