AI Transparency and Consumer Trust
Project: Research › Interdisciplinary research
Research areas and keywords
- AI Transparency, Trust, Consumer trust, AI accountability, explainability, consumer protection
As consumers increasingly interact with AI on a daily basis, trust emerges as a key requirement, and effectively a threshold, for the broad adoption of applied AI. A central challenge for this project is to ensure that AI is applied in ways consumers trust, through processes that are sufficiently transparent and explainable to provide accountability when systems fail or behave unexpectedly. In short, AI risks being underused in consumer markets if it is not trustworthy.
This interdisciplinary project explores how AI transparency relates to consumer trust and can enable fair and accountable uses of applied AI in consumer markets. We study (i) how AI is governed in consumer markets at large; (ii) consumers’ norms and understanding of AI, especially with regard to transparency; and (iii) how AI explainability can be developed at the intersection of social, legal and technical aspects in order to strengthen consumer trust.
We approach these challenges from a social-scientific perspective grounded in Sociology of Law – a field that empirically studies and conceptualizes both social and legal norms – combined with computer-scientific expertise in AI, particularly relating to explainability in autonomous and algorithmic systems. Stefan Larsson, the PI, is a lawyer, holds a PhD in Sociology of Law, is an associate professor in Technology and Social Change at LTH, Lund University, and has published extensively on the governance of new technologies and data-driven markets. The co-applicant, Fredrik Heintz, is an associate professor of Computer Science at Linköping University, the Director of the Graduate School for WASP, the President of the Swedish AI Society and a member of the European Commission High-Level Expert Group on AI (AI HLEG). His research lies at the intersection of knowledge representation and machine learning, with applications in both algorithmic and autonomous systems.
In order to ensure industrial and societal relevance throughout the entire project, we have appointed an advisory board that combines international research excellence with industrial and governmental representatives of key relevance for the aim of the project, including H&M, Coop, Telia, the Swedish Consumer Agency, the AI Sustainability Center, as well as Google and Microsoft. The project targets both the legal and social consequences of AI, as it is clear that a balanced approach to transparency in AI governance is needed to ensure an accountable and trusted market reception. As the retail sector, the insurance sector and our homes become increasingly personalized, predictive and autonomously enabled, the stakes rise for individuals, companies and society at large to develop fair, accountable and trusted AI.
How much does a consumer need to understand about artificial intelligence (AI) in order to trust it in commerce, in an insurance company’s application or in a home voice assistant? How transparent does AI need to be to consumers, companies and supervisory authorities?
These are a few of the questions that will be studied in a project led by Stefan Larsson at Lund University.
Consumers are increasingly interacting with AI and autonomous systems in their everyday lives through recommendation systems, automated decision-making, and voice and facial recognition. These technologies offer many benefits and great possibilities for individuals, service developers, traders and society as a whole. At the same time, consumer trust and the reliability of these technologies are a threshold in the development of AI.
The research group will mainly study how AI is regulated in consumer markets, consumers’ attitudes toward and understanding of AI, and how AI processes can be made more transparent, from a combined social-scientific, legal and technological perspective.
The project is part of a national research programme, WASP-HS, comprising a total of SEK 660 million over ten years, initiated by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.
Effective start/end date: 2020/01/01 → 2024/12/31
- Lund University (lead)
- Linköping University (joint applicant)