Abstract
This paper follows directly from an earlier paper where we discussed the requirements for an artifact to be a moral agent and concluded that the artifactual question is ultimately a red herring. As before, we take moral agency to be that condition in which an agent can appropriately be held responsible for her actions and their consequences. We set a number of stringent conditions on moral agency. A moral agent must be embedded in a cultural and specifically moral context and embodied in a suitable physical form. It must be, in some substantive sense, alive. It must exhibit self-conscious awareness. It must exhibit sophisticated conceptual abilities, going well beyond what the likely majority of conceptual agents possess: not least that it must possess a well-developed moral space of reasons. Finally, it must be able to communicate its moral agency through some system of signs: a “private” moral world is not enough. After reviewing these conditions and pouring cold water on recent claims for having achieved “minimal” machine consciousness, we turn our attention to a number of existing and, in some cases, commonplace artifacts that lack moral agency yet nevertheless require one to take a moral stance toward them, as if they were moral agents. Finally, we address another class of agents raising a related set of issues: autonomous military robots.
| Original language | English |
|---|---|
| Journal | International Journal of Machine Consciousness |
| Volume | 6 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2014 |
Bibliographical note
http://www.worldscientific.com/loi/ijmc

Subject classification (UKÄ)
- Languages and Literature
Free keywords
- moral agency
- moral stance
- responsibility
- concepts
- consciousness
- autopoiesis