Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
| Field | Value |
| --- | --- |
| Original language | English |
| Title of host publication | Social Robotics |
| Subtitle of host publication | 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings |
| Publisher | Springer Verlag |
| Pages | 42-52 |
| Number of pages | 11 |
| Volume | 10652 LNCS |
| ISBN (Electronic) | 9783319700229 |
| ISBN (Print) | 9783319700212 |
| Publication status | E-pub ahead of print - 24 Oct 2017 |
| Event | 9th International Conference on Social Robotics, ICSR 2017, Tsukuba, Japan. Duration: 22 Nov 2017 → 24 Nov 2017 |
| Series name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
| Series volume | 10652 LNAI |
| ISSN (Print) | 0302-9743 |
| ISSN (Electronic) | 1611-3349 |
| Conference | 9th International Conference on Social Robotics, ICSR 2017 |
| Country/Territory | Japan |
| City | Tsukuba |
| Period | 22/11/17 → 24/11/17 |
Abstract
Trust is a key factor in human users’ acceptance of robots in a home or other human-oriented environment. Humans should be able to trust that they can safely interact with their robot. However, robots will sometimes make errors due to mechanical or functional failures, so it is important that a domestic robot behaves acceptably when exhibiting and recovering from an error situation. To define such behaviours, it is first necessary to recognise that errors can have consequences of differing severity. We hypothesise that the severity of the consequences and the timing of a robot’s different types of erroneous behaviours during an interaction may have different impacts on users’ attitudes towards a domestic robot. In this study we used an interactive storyboard presenting ten scenarios in which a robot performed different tasks under five conditions. Each condition included the ten tasks performed by the robot either correctly, or with small or big errors; the conditions with errors were complemented with four correct behaviours. At the end of each experimental condition, participants were presented with an emergency scenario to evaluate their current trust in the robot. We conclude that there is a correlation between the magnitude of an error performed by the robot and the corresponding loss of the human’s trust in the robot.
ID: 13840485