A Multimodal User Interface for an Assistive Robotic Shopping Cart

Dmitry Ryumin, Ildar Kagirov, Alexandr Axyonov, Nikita Pavlyuk, Anton Saveliev, Irina Kipyatkova, Milos Zelezny, Iosif Mporas, Alexey Karpov

Research output: Contribution to journal › Article › peer-review



This paper presents the research and development of a prototype of the assistive mobile information robot (AMIR). The main features of the prototype are voice- and gesture-based interfaces, with Russian speech and sign language recognition and synthesis techniques, and a high degree of robot autonomy. The AMIR prototype is intended to serve as a robotic cart for shopping in grocery stores and supermarkets. The main topics covered in this paper are the presentation of the interface (three modalities), the single-handed gesture recognition system (based on a collected database of Russian sign language elements), and the technical description of the robotic platform (architecture, navigation algorithm). The use of multimodal interfaces, namely the speech and gesture modalities, makes human-robot interaction natural and intuitive, while sign language recognition allows hearing-impaired people to use this robotic cart. The AMIR prototype has promising prospects for real-world use in supermarkets, owing both to its assistive capabilities and to its multimodal user interface.
Original language: English
Issue number: 12
Publication status: Published - 8 Dec 2020


