Fairness in social robotics: gender as a case study for developing a multidisciplinary framework for social robotics and socio-legal studies of AI

Project: Research

Project Details

Description

Much recent effort aims to promote the development of trustworthy AI. Among the key requirements for trustworthy AI are diversity, non-discrimination and fairness: development must support inclusion and diversity throughout the entire life cycle of an AI system, from design to deployment. For example, when building an AI system with training datasets, attention to gender diversity is necessary not only to avoid gender bias, but also to enable the system to handle notions of gender and other social categories as it interacts with human subjects, and even adapts through this interaction. This project focuses on gender diversity as a case study, bringing together a team of social roboticists (Uppsala Social Robotics Lab, Uppsala University) and experts in socio-legal studies (Lund University). The aim is to define a multidisciplinary framework for studying how to design and develop fair and trustworthy AI for social robotics.

This project is funded by WASP-HS.
Status: Finished
Effective start/end date: 2021/03/01 – 2021/12/31

Funding

  • Marianne och Marcus Wallenbergs Stiftelse

UN Sustainable Development Goals

In 2015, UN member states agreed to 17 global Sustainable Development Goals (SDGs) to end poverty, protect the planet and ensure prosperity for all. This project contributes towards the following SDG(s):

  • SDG 5 - Gender Equality

Subject classification (UKÄ)

  • Law
  • Robotics and automation
  • Gender Studies
  • Other Social Sciences

Free keywords

  • social robotics
  • fairness
  • trustworthy AI
  • gender
  • socio-legal studies