Three Levels of AI Transparency

Research output: Contribution to journal › Article › peer-review



Transparency is generally cited as a key consideration in building Trustworthy AI. However, the concept of transparency is fragmented in AI research, often limited to transparency of the algorithm alone. While considerable attempts have been made to expand the scope beyond the algorithm, there has yet to be a holistic approach that includes not only the AI system but also the user and society at large. We propose that AI transparency operates on three levels: (1) Algorithmic Transparency, (2) Interaction Transparency, and (3) Social Transparency, all of which need to be considered to build trust in AI. We expand upon these levels using current research directions, and we identify research gaps resulting from the conceptual fragmentation of AI transparency, highlighted within the context of the three levels.
Original language: English
Publication status: Accepted/In press - 2022 Oct 4

Bibliographical note

© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Subject classification (UKÄ)

  • Interaction Technologies
  • Social Sciences Interdisciplinary
  • Information Systems, Social aspects

Keywords

  • Artificial Intelligence
  • Transparency
  • Algorithm
  • Interaction
  • Society
  • Governance

