Trust me when I speak: Using speech to mitigate effects of robotic errors on trust

Activity: Talk or presentation › Presentation


How feelings of trust evolve in Human-Robot Interaction is a subject that has received increasing attention from researchers over the last few years. Much of this attention, however, has focused on how the performance of the robot affects the trust of its user. While errors can, and often do, have a negative impact on trust, anthropomorphic characteristics, such as a humanoid embodiment, can mitigate the negative effects of errors. Robots with a humanoid appearance can be perceived as more trustworthy when making a mistake, compared to more abstract, mechanical-looking robots. We want to test whether the perceived ability to produce linguistic speech can be combined with a humanoid appearance to increase the anthropomorphism of a robot and strengthen this mitigating effect.

For this purpose, we are planning an experiment in which a humanoid robot has to solve a sequence completion task (either verbally or by pointing at a number), with the participant assessing whether the response was correct. We will use a 2×3 design, manipulating the response mode (verbal vs. non-verbal) and the degree of error (no error vs. slight error vs. severe error). The robot will complete a series of ten sequence completion tasks, after each of which participants will rate the trustworthiness and general perception (Godspeed Questionnaire) of the robot. We will measure whether the different conditions affect the participants' behaviour (reaction time and number of identified errors) and the robot's perceived trustworthiness. We anticipate that a severe error will affect trust more than a slight error, and that verbal rather than non-verbal responses will mitigate this effect.
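The 2×3 factorial design described above can be sketched as a small condition matrix. This is only an illustrative sketch; the factor names and structure below are assumptions for demonstration, not taken from the actual study materials:

```python
from itertools import product

# Factor 1: how the robot delivers its answer (sketched from the design above)
RESPONSE_MODES = ["verbal", "non-verbal"]
# Factor 2: degree of error in the robot's answer
ERROR_DEGREES = ["no error", "slight error", "severe error"]


def build_conditions():
    """Enumerate all cells of the 2x3 design (response mode x degree of error)."""
    return [
        {"response_mode": mode, "error_degree": degree}
        for mode, degree in product(RESPONSE_MODES, ERROR_DEGREES)
    ]


conditions = build_conditions()
print(len(conditions))  # 6 cells in the 2x3 design
```

Each participant would then see ten sequence-completion trials drawn from these cells, with trust and Godspeed ratings collected after every trial.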
Period: 5 Oct 2023
Event title: SweCog 2023
Event type: Conference
Location: Gothenburg, Sweden