Stabilizing Translucencies: Governing AI transparency by standardization

Research output: Contribution to journal › Article › peer-review

Abstract

Standards are put forward as important means of turning the ideals of ethical and responsible artificial intelligence into practice. One principle targeted for standardization is transparency. This article attends to the tension between standardization and transparency by combining a theoretical exploration of these concepts with an empirical analysis of standardizations of artificial intelligence transparency. Conceptually, standards are underpinned by goals of stability and solidification, while transparency is considered a flexible see-through quality. In addition, artificial intelligence technologies are depicted as 'black boxed', complex, and in flux. Transparency as a solution for ethical artificial intelligence has, however, been problematized. In the empirical sample of standardizations, transparency is largely presented as a static, measurable, and straightforward information transfer, or as a window onto artificial intelligence use. The standards are furthermore described as pioneering and able to shape technological futures, while their similarities suggest that artificial intelligence translucencies are already stabilizing into similar arrangements. Relying heavily upon standardization to govern artificial intelligence transparency still risks allocating rule-making to non-democratic processes, and while intended to bring clarity, the standardizations could also create new distributions of uncertainty and accountability. This article stresses the complexity of governing sociotechnical artificial intelligence principles by standardization. Overall, there is a risk that the governance of artificial intelligence becomes overly shaped by technological solutionism, allowing the standardization of social values (or even human rights) to be carried out in the same manner as that of any other technical product or procedure.
Original language: English
Journal: Big Data and Society
Volume: 11
Issue number: 1
DOIs
Publication status: Published - 2024 Feb 25

Subject classification (UKÄ)

  • Sociology (excluding Social Work, Social Anthropology, Demography and Criminology)
  • Information Studies
  • Information Systems, Social aspects (including Human Aspects of ICT)

Free keywords

  • Artificial Intelligence
  • Algorithms
  • Transparency
  • Standards
  • Governance
  • Uncertainty
  • Standardization
