Outline of a sensory-motor perspective on intrinsically moral agents

Research output: Journal contribution › Article in scientific journal

Standard

Outline of a sensory-motor perspective on intrinsically moral agents. / Balkenius, Christian; Cañamero, Lola; Pärnamets, Philip; Johansson, Birger; Butz, Martin; Olsson, Andreas.

In: Adaptive Behavior, Vol. 24, No. 5, 03.11.2016, pp. 306-319.




RIS

TY - JOUR

T1 - Outline of a sensory-motor perspective on intrinsically moral agents

AU - Balkenius, Christian

AU - Cañamero, Lola

AU - Pärnamets, Philip

AU - Johansson, Birger

AU - Butz, Martin

AU - Olsson, Andreas

PY - 2016/11/3

Y1 - 2016/11/3

N2 - We propose that moral behaviour of artificial agents could (and should) be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competencies. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their interactions with the environment and with other agents. Third, we claim that the dynamics of moral (or social) emotions closely follows that of other non-social emotions used in valuation and decision making. Fourth, we explain how moral emotions can be learned from the observation of others. Fifth, we argue that to assess social interaction, a robot should be able to learn about and understand responsibility and causation. Sixth, we explain how mechanisms that can learn the consequences of actions are necessary for a robot to make moral decisions. Seventh, we describe how the moral evaluation mechanisms outlined can be extended to situations where a robot should understand the goals of others. Finally, we argue that these competencies lay the foundation for robots that can feel guilt, shame and pride, that have compassion and that know how to assign responsibility and blame.

KW - Autonomous robots

KW - embodied emotions

KW - sensory-motor grounding

KW - embodied interaction

KW - empathy

KW - intrinsic morality

U2 - 10.1177/1059712316667203

DO - 10.1177/1059712316667203

M3 - Article

VL - 24

SP - 306

EP - 319

JO - Adaptive Behavior

JF - Adaptive Behavior

SN - 1741-2633

IS - 5

ER -