Description
How do you ensure fairness in the application of AI and ML? How transparent and explainable do autonomous and self-learning services need to be from a legal, social or ethical perspective? How do you balance privacy or other values against the need for access to training data? Can an algorithm really discriminate illegally? And to what extent can AI and ML serve as methods within the social sciences or the humanities?
As autonomous technologies are increasingly applied in society - in services, homes and vehicles - there is a growing need to detect malpractice and unintended consequences, and to ensure fair use even in a biased and unfair world. Calls for ethical guidelines or regulation are increasing, as is the need to better understand the impact of AI in terms of power relations, trust, transparency, accountability and more, as the new technologies gain agency in our lives.
This workshop, hosted by AI Lund, addresses the need to involve a more multidisciplinary range of competences in research and development of AI and ML. It is intended as a meeting point between the social sciences and humanities, on the one hand, and the mathematics- and computer-science-based disciplines, on the other.
Before lunch there will be inspirational presentations from external guests, and after lunch there will be more thematically focused workshops, including one on AI & ML as a method for the humanities and social sciences at large.
Note: We have outgrown Lundmarksalen in the astronomy building and moved the event to LUX Nedre Aula, Helgonavägen 3, Lund.
Programme
09.30 Coffee and mingle
Location: In the foyer outside LUX Aula
10.15-10.40 Stefan Larsson: Introducing ethical, legal and social consequences of AI.
Location: LUX Aula
10.40-11.20 Jan Erik Solem, CEO and co-founder of Mapillary (former associate professor at Lund University)
Jan Erik will present lessons learned from deploying computer vision technology in global services: training algorithms in cooperation with people as they work on solving their problems using computer vision, creating large-scale image datasets while preserving privacy, working with open data and collaborative models, and combining open data with running a business.
11.20-12.00 Anna Felländer, the Government's Digital Council, senior advisor at BCG, affiliated researcher at KTH
Anna will present on sustainable AI and the need for interdisciplinary awareness and ethics. Consumers increasingly expect customised services and recommendations based on AI. Companies that use customer data to automate and predict stand to profit, but how do they avoid unintended negative ethical consequences?
12.00-13.00 Lunch
13.00-14.30 Workshops
I. Cultural perspectives on Artificial Intelligence and Machine Learning
This session is organised by the interdisciplinary Digital Cultures Research Node (Department of Arts and Cultural Sciences). It will include a number of brief presentations on cultural perspectives of AI & ML - for example cultural assumptions inscribed into AI systems, understandings of algorithmic processes such as those at work in search engines or in “smart homes”, transformed literacies, and image recognition algorithms - followed by guided discussions.
Hosts: Jutta Haider (Information Studies), Moa Petersén (Art History & Visual Studies), Robert Willim (Ethnology)
Location: LUX C214
II. AI & ML as a method for the social sciences and humanities
The session will be hosted by political scientist Anamaria Dutceac Segesten and ML expert Karl Åström, accompanied by Marcus Klang, PhD student in Natural Language Processing (NLP). The purpose is to present cases of projects that apply AI/ML methods to social science research. The presentations serve as an opening for a broader dialogue on what can and could be done when using AI and ML as part of the methodological toolkit for the social sciences and the humanities. Test your ideas here!
Location: LUX Övre Aula
Programme:
13:00 – 13:10 Sverker Sikström, Psychology, LU “SemanticExcel.com: A User-Friendly Online Tool for Statistical Analysis of Text Data in the Social Sciences”
13:10 – 13:20 Michael Bossetta, Political Science, University of Copenhagen “Using Machine Learning and Automation to Weaponize Twitter: A Simulated Cyberattack during the 2018 US Midterms”
13:20 – 13:30 Kalle Åström, Mathematics, LU “How Robots See: Image Detection of Humans, Animals, and Cars”
13:30 – 13:40 Anamaria Dutceac Segesten, European Studies, LU “Analyzing Media Texts Using Topic Models: A Five-Language Comparison”
13:40 – 13:50 Marcus Klang, Computer Science, LU “Finding Things in Strings: Multilingual Entity Linking”
13:50 – 14:30 Q&A
III. Fairness, accountability and transparency (FAT)
Drawing on recent developments in critical studies of AI and autonomous processes applied in society, this workshop will address issues of fairness, accountability and transparency (FAT). This includes, but is by no means limited to, law and socio-legal studies, where the applications and effects of AI are seen as objects of scrutiny. The Swedish government has also called for “sustainable AI” and pointed to the “need for standards, norms and ethical guidelines”. The workshop is hosted by socio-legal scholar Stefan Larsson and invited speaker Anna Felländer.
Location: LUX Nedre Aula
Speakers:
Stefan Larsson, law & society: Fairness, Accountability and (seven notions of) Transparency
Thore Husfeldt, computer science: A Glimpse of Algorithmic Fairness
Ekaterina Katja DeVries, law & technology: On fairness, generative models (and law)
Olle Häggström, mathematical statistics: Future AI advances: the multifaceted risk landscape
Ulrika Wennersten, business law: Intellectual property, AI, liability
Jakob Svensson, media & communication studies: What is data, and will it make humans obsolete? Key questions in the age of data-essentialism
15.05 Post-workshop fika and mingle
Location: Outside Lundmarksalen in Astronomihuset (100 m north of LUX)
Period | 2018 Nov 22
---|---
Event type | Workshop
Conference number | 4
Location | Lund, Sweden
Degree of Recognition | National
Related content
Projects
- Lund University AI Research (Project: Network)