Abstract
How to make robots safe for humans is intensely debated, in academia as well as in industry, the media, and the political arena. Hardly any discussion of the subject fails to mention Isaac Asimov’s Three Laws of Robotics. Asimov’s laws and the Trolley Problem are usually discussed separately, but they are connected: the Trolley Problem poses a seemingly unsolvable problem for Asimov’s First Law, which states that “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” That is, the law contains an active and a passive clause and obliges the robot to obey both, while the Trolley Problem forces us to choose between these two options. The object of this paper is to investigate whether and how Asimov’s First Law of Robotics can handle a situation where we are forced to choose between the active and the passive clauses of the law. We discuss four possible solutions to the challenge, explicitly or implicitly employed by Asimov. We conclude that all four suggestions would solve the problem, but in different ways and with different implications for other dilemmas in robot ethics. We also conclude that, given the urgency of finding ways to secure a safe coexistence between humans and robots, we should not let the Trolley Problem stand in the way of using the First Law of Robotics for this purpose. If we want to use Asimov’s laws for this purpose, we also recommend discarding the active clause of the First Law.
| Original language | English |
| --- | --- |
| Journal | Journal of Science Fiction and Philosophy |
| Volume | 7 |
| Publication status | Published - 2024 Jul 1 |
Subject classification (UKÄ)
- Philosophy
- Specific Literatures