
Cross-modal integration of affective facial expression and vocal prosody: an EEG study

Ethan Weed, Peer Christensen

Research output: Contribution to conference › Poster › peer-review


Abstract

We have all experienced how a telephone conversation can be more challenging than speaking face to face. Understanding the intended meaning of a speaker's words requires forming an impression of the current mental state of the speaker, including her beliefs, intentions, and emotional state (Sperber & Wilson, 1995). Facial expressions are an important source of this information. In this study, we asked at what point emotional information is integrated in the processing stream. We hypothesized that the N400 component, which is sensitive to meaning at a variety of levels (Lau, Phillips, & Poeppel, 2008; Van Berkum, Van Den Brink, Tesink, Kos, & Hagoort, 2008), would be affected by incongruous emotions in face/voice pairs.

To test this, we used EEG to record brain responses to congruous and incongruous face/voice stimuli in an oddball paradigm. Participants viewed faces showing either a happy or a sad expression. As they viewed the faces, participants heard a variety of spoken utterances delivered in either a sad or happy tone of voice.

We found that incongruent facial expressions affected auditory processing of spoken stimuli at surprisingly early stages of the processing stream. Not only did we observe an N400-like effect in the incongruent condition, suggesting an attempt to integrate the incongruent facial and vocal stimuli; we also found that incongruent auditory stimuli elicited a larger N100 wave.

Our results show that as early as 100 msec after the onset of spoken utterances, the brain has made an initial comparison of the affect expressed by the speaker's facial expression and that expressed by vocal prosody. This suggests that early multi-modal brain areas, as well as "higher-level" areas, are involved in computations which may be critical to the interpretation of speaker meaning, and that integration of face/voice affective information takes place long before an utterance is completed.
Original language: English
Publication status: Published - 2011
Externally published: Yes
Event: The 3rd Conference of the Scandinavian Association for Language and Cognition - University of Copenhagen, Copenhagen, Denmark
Duration: 2011 Jun 14 - 2011 Jun 16
Conference number: 3

Conference

Conference: The 3rd Conference of the Scandinavian Association for Language and Cognition
Country/Territory: Denmark
City: Copenhagen
Period: 2011/06/14 - 2011/06/16

Subject classification (UKÄ)

  • Psychology (excluding Applied Psychology)

