Abstract
In reviewing the literature on acceptance and trust in human-robot interaction (HRI), we found a number of open questions that need to be addressed in order to establish effective collaborations between humans and robots in real-world applications. In particular, we identified four principal open areas that should be investigated to create guidelines for the successful deployment of robots in the wild. These areas focus on: (1) the robot's abilities and limitations, in particular when it makes errors with consequences of varying severity; (2) individual differences; (3) the dynamics of human-robot trust; and (4) the interaction between humans and robots over time. In this paper, we present two closely related studies, one with a virtual robot with human-like abilities, and one with a physical Care-O-bot 4 robot. In the first study, we created an immersive narrative using an interactive storyboard to collect the responses of 154 participants. In the second study, 6 participants had repeated interactions with a physical robot over three weeks. We summarise and discuss the findings of our investigation of the effects of robots' errors on people's trust in robots, with the aim of designing mechanisms that allow robots to recover from a breach of trust. In particular, we observed that robots' errors had a greater impact on people's trust in the robot when the errors were made at the beginning of the interaction and had severe consequences. Our results also provide insights into how the effects of these errors vary according to individuals' personalities, expectations and previous experiences.
Original language | English |
---|---|
Pages (from-to) | 380-421 |
Number of pages | 42 |
Journal | Interaction Studies |
Volume | 24 |
Issue number | 3 |
DOIs | |
Publication status | Published - 15 Feb 2024 |
Keywords
- antecedents of trust
- faulty robots
- previous experiences
- robots' errors
- social robotics
- trust