This five-year interdisciplinary research project examines how transparency in applied artificial intelligence (AI) can strengthen consumer trust and promote fair and accountable applications of AI in consumer markets. The aim is to deepen knowledge about trust in AI-based consumer products, enabling the technologies' immense potential while avoiding negative effects.
As consumers increasingly interact with AI on a daily basis, trust has become a key requirement, and effectively a threshold, for the general adoption of applied AI. A central challenge for this project is to ensure that AI is applied in ways consumers trust to use and interact with, in processes that are sufficiently transparent and explainable to provide accountability when they fail or behave unexpectedly. In short, AI risks being underused in consumer markets if it is not trustworthy.
This interdisciplinary project explores how AI transparency relates to consumer trust and can enable fair and accountable uses of applied AI in consumer markets. We study i) how AI is governed in consumer markets at large; ii) consumers' norms and understanding of AI, especially with regard to transparency; and iii) how AI explainability can be developed at the intersection of social, legal and technical aspects in order to strengthen consumer trust.
We approach these challenges from a social-scientific perspective grounded in Sociology of Law, a field that empirically studies and conceptualizes both social and legal norms, combined with computer-science expertise in AI, particularly explainability in autonomous and algorithmic systems. Stefan Larsson, the PI, is a lawyer, holds a PhD in Sociology of Law, and is an associate professor in Technology and Social Change at LTH, Lund University; he has published extensively on the governance of new technologies and data-driven markets. The co-applicant, Fredrik Heintz, is an associate professor of Computer Science at Linköping University, Director of the Graduate School for WASP, President of the Swedish AI Society, and a member of the European Commission High-Level Expert Group on AI (AI HLEG). His research lies at the intersection of knowledge representation and machine learning, with applications in both algorithmic and autonomous systems.
To ensure industrial and societal relevance throughout the project, we have appointed an advisory board that combines international research excellence with industrial and governmental representatives of key relevance to the project's aim, including H&M, Coop, Telia, the Swedish Consumer Agency, the AI Sustainability Center, as well as Google and Microsoft.

The project targets both the legal and social consequences of AI, as it is clear that a balanced approach to transparency in AI governance is needed to ensure an accountable and trusted market reception. As the retail sector, the insurance sector, and our homes become increasingly personalized, predictive, and autonomously enabled, the stakes are raised for individuals, companies, and society at large to develop fair, accountable, and trusted AI.
How can consumer trust in artificial intelligence be strengthened?
How much does a consumer need to understand about artificial intelligence (AI) in order to trust it in retail, in an insurance company's use of it, or in the voice assistant at home? How transparent does it need to be for consumers, companies, and supervisory authorities?
These are some of the questions to be studied in a project led by Stefan Larsson at Lund University.
Consumers increasingly interact with AI and autonomous systems in their everyday lives, through recommender systems, automated decision-making, and voice and facial recognition. The benefits are many and the opportunities enormous, for individuals, service developers, retailers, and society at large. At the same time, consumer trust and the trustworthiness of the technologies constitute a threshold for AI development.
The research group will primarily study how AI is regulated in consumer markets, consumers' own attitudes towards and understanding of AI, and how AI processes can be made more transparent from a combined social-scientific, legal, and technical perspective.