This article uses a socio-legal perspective to analyse the use of ethics guidelines as a governance tool in the development and use of artificial intelligence (AI). This has become a central policy area in several large jurisdictions, including China and Japan, as well as the EU, which is the focus here. Particular emphasis is placed on the Ethics Guidelines for Trustworthy AI published by the EU Commission’s High-Level Expert Group on Artificial Intelligence in April 2019, as well as the White Paper on Artificial Intelligence, published by the EU Commission in February 2020. The guidelines are examined in relation to partially overlapping existing legislation, as well as the ephemeral conceptual constructs surrounding AI as such. The article concludes by pointing to i) the challenges of a temporal discrepancy between technological and legal change; ii) the need to move from principle to process in the governance of AI; and iii) the multidisciplinary needs in the study of contemporary applications of data-dependent AI.
Stefan Larsson is a senior lecturer and Associate Professor in Technology and Social Change at Lund University, Sweden, Department of Technology and Society. He is a lawyer (LLM) and socio-legal researcher who holds a PhD in Sociology of Law as well as a PhD in Spatial Planning. His multidisciplinary research focuses on issues of trust and transparency in digital, data-driven markets, and on the socio-legal impact of autonomous and AI-driven technologies. Recent publications of relevance include “Transparency in Artificial Intelligence”, co-authored with Fredrik Heintz, in Internet Policy Review (2020); “The Socio-Legal Relevance of Artificial Intelligence” in Droit et Société (2019); and “Conceptions in the Code: How Metaphors Explain Legal Challenges in Digital Times” (Oxford University Press, 2017).