Detecting humans and their activities has always been a vital task in Human-Robot Interaction (HRI) scenarios, such as those involving assistive robots. In particular, skeleton-based Human Activity Recognition (HAR) offers a robust and effective detection method grounded in human biomechanics. Recent advances in human pose estimation make it possible to extract skeleton positioning data accurately and quickly using affordable cameras. When interacting with a human, robots can therefore capture detailed information from a close distance and a flexible perspective. However, recognition accuracy is susceptible to robot movements, as the robot often fails to capture the entire scene. To address this, we propose the adoption of external cameras to improve the accuracy of activity recognition on a mobile robot. In support of this proposal, we present the dataset RH-HAR-SK, which combines multiple camera perspectives augmented with human skeletons extracted by the HRNet pose estimation model. We apply qualitative and quantitative analysis techniques to the extracted skeletons and their joints to demonstrate the additional value of external cameras to the robot's recognition pipeline. Results show that while the robot's camera can provide optimal recognition accuracy in some specific scenarios, an external camera increases overall performance.
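To illustrate how an external camera could complement the robot's own view, the following is a minimal sketch of one possible per-joint fusion rule: keep, for each skeleton joint, the estimate with the higher detection confidence across the two views. The joint names, the `(x, y, confidence)` keypoint format, and the fusion rule itself are assumptions for illustration only, not the method described in the paper.

```python
# Illustrative sketch (not the paper's method): fuse 2D skeleton keypoints
# from a robot camera and an external camera by keeping, per joint, the
# estimate with the higher detection confidence. Keypoints are assumed to
# be (x, y, confidence) tuples, as produced by typical pose estimators.

def fuse_skeletons(robot_view, external_view):
    """Merge two joint dictionaries, preferring the higher-confidence keypoint.

    Each view maps joint name -> (x, y, confidence); joints missing from
    one view are taken from the other.
    """
    fused = {}
    for joint in robot_view.keys() | external_view.keys():
        candidates = [view[joint] for view in (robot_view, external_view)
                      if joint in view]
        # Highest confidence wins; a joint seen by only one camera is kept as-is.
        fused[joint] = max(candidates, key=lambda kpt: kpt[2])
    return fused

# Hypothetical keypoints: the robot sees the face well, the external
# camera sees the occluded wrist better.
robot = {"nose": (100.0, 40.0, 0.92), "left_wrist": (120.0, 88.0, 0.35)}
external = {"nose": (390.0, 255.0, 0.60), "left_wrist": (410.0, 300.0, 0.81)}
fused = fuse_skeletons(robot, external)
```

In this toy example the fused skeleton takes the nose from the robot's close-up view and the left wrist from the external camera, matching the paper's observation that each perspective is strongest in different scenarios.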
Title of host publication: ACHI 2023: The Sixteenth International Conference on Advances in Computer-Human Interactions
Publication status: Accepted/In press - 2 Mar 2023
Event: ACHI 2023: The Sixteenth International Conference on Advances in Computer-Human Interactions - Venice, Italy
Duration: 24 Apr 2023 → 28 Apr 2023
Conference: ACHI 2023: The Sixteenth International Conference on Advances in Computer-Human Interactions
Abbreviated title: ACHI 2023
Period: 24/04/23 → 28/04/23