Evaluation of stance annotation of Twitter data

Research output: Contribution to journal › Article › peer-review

Abstract

Taking a stance towards a topic, event or idea is a common phenomenon on Twitter and social media in general. Twitter users express their opinions about different matters and assess other people’s opinions in various discursive ways. Identifying and analysing the linguistic means that people use to take different stances leads to a better understanding of language and user behaviour on Twitter. Stance is a multidimensional concept involving a broad range of related notions such as modality, evaluation and sentiment. In this study, we annotate Twitter data using six notional stance categories —contrariety, hypotheticality, necessity, prediction, source of knowledge and uncertainty— following a comprehensive annotation protocol that includes inter-coder reliability measurements. The relatively low agreement between annotators highlighted the challenges that the task entailed, which led us to question the inter-annotator agreement score as a reliable measure of annotation quality for notional categories. The nature of the data, the difficulty of the stance annotation task and the type of stance categories are discussed, and potential solutions are suggested.
Original language: English
Pages (from-to): 53-80
Number of pages: 38
Journal: Research in Corpus Linguistics
Volume: 11
Issue number: 1
Early online date: 2022
Publication status: Published - 2023

Subject classification (UKÄ)

  • General Language Studies and Linguistics
  • Specific Languages
