Distribution of responsibility for AI development: Expert views

Research output: Journal contribution › Article in scientific journal › Peer review

Abstract

The purpose of this paper is to increase understanding of how different types of experts with influence over the development of AI reflect, in this role, on the distribution of forward-looking responsibility for AI development with regard to safety and democracy. Forward-looking responsibility refers to the obligation to see to it that a particular state of affairs materialises. In the context of AI, actors involved in AI development in various ways have the potential to guide that development in a safe and democratic direction. This study is based on qualitative interviews with such actors in different roles at research institutions, private companies, think tanks, consultancy agencies, parliaments, and non-governmental organisations. While reflections on the distribution of responsibility differ among the respondents, one observation is that influence is seen as an important basis for distributing responsibility. Another is that several respondents think of responsibility in terms of what it would entail in concrete measures. By showing how actors involved in AI development reflect on the distribution of responsibility, this study contributes to a dialogue between the fields of AI governance and AI ethics.
Original language: English
Journal: AI & Society: Knowledge, Culture and Communication
DOI
Status: Published - 12 Jan 2025

Subject classification (UKÄ)

  • Public administration studies

Free keywords

  • artificial intelligence (AI)
  • moral responsibility
  • forward-looking responsibility
  • democracy
  • AI experts
  • qualitative interviews
