Compensation for a large gesture-speech asynchrony in instructional videos

Andrey Anikin, Jens Nirme, Sarah Alomari, Joakim Bonnevier, Magnus Haake

Research output: Chapter in Book/Report/Conference proceeding › Paper in conference proceeding (peer-reviewed)

Abstract

We investigated the pragmatic effects of gesture-speech lag by asking participants to reconstruct formations of geometric shapes based on instructional films in four conditions: synchronized, video lag or audio lag (±1,500 ms), and audio only. All three video groups rated the task as less difficult than the audio-only group did and performed better. Scores were slightly lower when sound preceded gestures (video lag), but not when gestures preceded sound (audio lag). Participants thus compensated for delays of 1.5 seconds in either direction, apparently without making a conscious effort. This greatly exceeds the previously reported time window for automatic multimodal integration.
Original language: English
Title of host publication: Gesture and Speech in Interaction - 4th edition (GESPIN 4)
Editors: Gaëlle Ferré, Mark Tutton
Pages: 19-23
Publication status: Published - 2015
Event: Gesture and Speech in Interaction (GESPIN 4) - Nantes, France
Duration: 2015 Sept 2 - 2015 Sept 4

Conference

Conference: Gesture and Speech in Interaction (GESPIN 4)
Country/Territory: France
City: Nantes
Period: 2015/09/02 - 2015/09/04

Subject classification (UKÄ)

  • Psychology

Free keywords

  • gesture-speech synchronization
  • multimodal integration
  • temporal synchronization
  • comprehension
