Abstract
This paper describes a context-dependent social gaze-control system implemented as part of a humanoid social robot. The system enables the robot to direct its gaze at multiple humans who are interacting with each other and with the robot. The attention mechanism of the gaze-control system is based on features that have been shown to guide human attention: nonverbal and verbal cues, proxemics, the visual field of view, and the habituation effect. Our gaze-control system uses Kinect skeleton tracking together with speech recognition and SHORE-based facial expression recognition to detect these same features. As part of a pilot evaluation, we collected the gaze behavior of 11 participants in an eye-tracking study: we showed participants videos of two-person interactions and tracked their gaze. A comparison of the human gaze behavior with the behavior of our gaze-control system running on the same videos shows that the system replicated human gaze behavior 89% of the time.
| Original language | English |
|---|---|
| Article number | 6736067 |
| Pages (from-to) | 157-168 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Human-Machine Systems |
| Volume | 44 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Apr 2014 |
Keywords
- active vision
- context-dependent social gaze behavior
- human-robot interaction
- scene analysis
- social attention