University of Hertfordshire

Generation of tactile maps for artificial skin

Research output: Contribution to journal › Article


Final published version, 536 KB, PDF document
Original language: English
Article number: e26561
Number of pages: 11
Journal: PLoS ONE
Publication status: Published - 10 Nov 2011


Prior research has shown that representations of retinal surfaces can be learned from the intrinsic structure of visual sensory data in neural simulations, in robots, as well as by animals. Furthermore, representations of cochlear (frequency) surfaces can be learned from auditory data in neural simulations. Advances in hardware technology have allowed the development of artificial skin for robots, realising a new sensory modality which differs in important respects from vision and audition in its sensorimotor characteristics. This provides an opportunity to further investigate ordered sensory map formation using computational tools. We show that it is possible to learn representations of non-trivial tactile surfaces, which require topologically and geometrically involved three-dimensional embeddings. Our method automatically constructs a somatotopic map corresponding to the configuration of tactile sensors on a rigid body, using only intrinsic properties of the tactile data. The additional complexities involved in processing the tactile modality require the development of a novel multi-dimensional scaling algorithm. This algorithm, ANISOMAP, extends previous methods and outperforms them, producing high-quality reconstructions of tactile surfaces in both simulation and hardware tests. In addition, the reconstruction turns out to be robust to unanticipated hardware failure.
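The abstract names ANISOMAP as an extension of earlier multidimensional scaling (MDS) methods but does not give its details. As a point of reference for the family of methods being extended, the following is a minimal sketch of classical (Torgerson) MDS: given the matrix of pairwise distances between tactile sensors, it recovers sensor coordinates, up to rotation and reflection, from the eigendecomposition of the double-centred Gram matrix. The 4x4 "skin patch" of taxels is an illustrative toy example, not data from the paper.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed n points in k dimensions from an n-by-n pairwise distance
    matrix D using classical (Torgerson) multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the top-k components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy "skin patch": 16 taxels on a 4x4 grid (hypothetical layout)
xs, ys = np.meshgrid(np.arange(4.0), np.arange(4.0))
pts = np.column_stack([xs.ravel(), ys.ravel()])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

X = classical_mds(D, k=2)
D_rec = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(D, D_rec, atol=1e-6))  # layout recovered up to rigid motion
```

For the flat patch above, exact Euclidean distances suffice; the non-trivial three-dimensional embeddings described in the abstract are precisely where this classical formulation breaks down and extensions such as ANISOMAP are needed.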


Original article can be found at: Copyright The Authors. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. This research was partly funded by the European Commission's Seventh Framework Programme FP7/2007-2013 under agreement no. 231500. The work described in this paper was conducted within the EU project RoboSKIN.

ID: 463345