Project Details

Description

Which values and norms should be applied when we assess the behaviour of autonomous AI systems from an ethical perspective? This is the question that a research group led by Christian Balkenius at Lund University will examine.

Even if technology is universal, morality is not. Ethical theories are already shaping the debate about what AI should and should not do.

In this project, the researchers will carry out a consequence analysis of various ethical theories in order to describe the mechanisms required for autonomous AI systems to act morally. The work is based on modern learning theory, which assumes that AI systems can learn about consequences from experience and from observations of human behaviour and emotions.

Central concepts are utility, equality and superiority over human beings. The ability of robots to take over simple tasks and carry them out at lower cost, with greater accuracy and without rest makes their utility obvious. The ability of robots and AI systems to take over more complex tasks and carry them out almost as a human would illustrates the concept of equality. The next leap in AI development, when systems become self-teaching yet free of human faults and shortcomings, and perhaps able to make better decisions than humans, could make them superior to us. But how will they act when faced with ethical dilemmas? Should the benefit of the many come at the expense of the few? What consideration should be given to human rights and personal integrity? Which algorithms should be in control? What does a state, or a society, hold to be good? Who decides?

This is an interdisciplinary project conducted by researchers in robotics, philosophy and cognitive science.
Status: Active
Effective start/end date: 2020/01/01 – 2024/12/31

Funding

  • Marianne och Marcus Wallenbergs Stiftelse