AI Transparency and Consumer Trust

Project: Research

Project Details

Description

This is a five-year interdisciplinary research project on how transparency in applied artificial intelligence (AI) can strengthen consumer trust and promote fair and accountable applications of AI in consumer markets. The aim is to increase knowledge about trust in AI-based consumer products, so that the technologies' immense potential can be realised while negative effects are avoided.

As consumers increasingly interact with AI in their daily lives, trust emerges as one of the key components required for the general adoption of applied AI, effectively becoming a threshold for that adoption. A key challenge for this project is to ensure that AI is applied in ways consumers trust, through processes that are sufficiently transparent and explainable to provide accountability when systems fail or behave unexpectedly. In short, AI risks being underused in consumer markets if it is not trustworthy.
This interdisciplinary project explores how AI transparency relates to consumer trust and can enable fair and accountable uses of applied AI in consumer markets. We study i) how AI is governed in consumer markets at large; ii) consumers' norms and understanding of AI, especially with regard to transparency; and iii) how AI explainability can be developed at the intersection of social, legal and technical aspects in order to strengthen consumer trust.
We approach these challenges from a social-scientific perspective grounded in Sociology of Law – a field that empirically studies and conceptualizes both social and legal norms – combined with computer-science expertise in AI, particularly explainability in autonomous and algorithmic systems. Stefan Larsson, the PI, is a lawyer, holds a PhD in Sociology of Law, is an associate professor in Technology and Social Change at LTH, Lund University, and has published extensively on the governance of new technologies and data-driven markets. The co-applicant, Fredrik Heintz, is an associate professor of Computer Science at Linköping University, Director of the WASP Graduate School, President of the Swedish AI Society and a member of the European Commission High-Level Expert Group on AI (AI HLEG). His research lies at the intersection of knowledge representation and machine learning, with applications in both algorithmic and autonomous systems.

To ensure industrial and societal relevance throughout the project, we have appointed an advisory board that combines international research excellence with industrial and governmental representatives of key relevance to the aim of the project, including H&M, Coop, Telia, the Swedish Consumer Agency, the AI Sustainability Center, as well as Google and Microsoft.

The project targets both the legal and social consequences of AI, as it is clear that a balanced approach to transparency in AI governance is needed to ensure an accountable and trusted market reception. As the retail sector, the insurance sector and our homes become increasingly personalized, predictive and autonomously enabled, the stakes are raised for individuals, companies and society at large to develop fair, accountable and trusted AI.

Popular science description

How can consumer trust in artificial intelligence be strengthened?

How much does a consumer need to understand about artificial intelligence (AI) in order to trust it in commerce, in an insurance company's application or in their home voice assistant? How transparent does the AI need to be to consumers, companies and supervisory authorities?

These are a few of the questions that will be studied in a project led by Stefan Larsson at Lund University.

Consumers increasingly interact with AI and autonomous systems in their everyday lives through recommendation systems, automated decision-making, and voice and facial recognition. These technologies offer many benefits and great possibilities for individuals, service developers, traders and society as a whole. At the same time, consumer trust and the reliability of these technologies are a threshold in the development of AI.

The research group will mainly study how AI is regulated in the consumer market, consumers' attitudes towards and understanding of AI, and how AI processes can be made more transparent, combining social-scientific, legal and technological perspectives.

The project is part of a national research programme, WASP-HS, which involves a total of SEK 660 million over ten years and was initiated by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.
Status: Finished
Effective start/end date: 2020/01/01 – 2024/12/31

UKÄ subject classification

  • Social Sciences Interdisciplinary
  • Ethics
  • Law and Society
  • Law (excluding Law and Society)
  • Other Engineering and Technologies not elsewhere specified

Free keywords

  • AI transparency
  • Trust
  • Consumer trust
  • AI accountability
  • Explainability
  • Consumer protection