University of Hertfordshire

A reward driven connectionist model of cognitive development

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Documents

  • 903610 — Accepted author manuscript, 22 KB, PDF document

  • L. Peters
  • N. Davey
  • Pamela Smith
  • D.J. Messer
Original language: English
Title of host publication: Procs of the European Conf on Cognitive Science
Editors: S Bagnara
Pages: 491-496
Publication status: Published - 1999

Abstract

Children learn many skills under self-supervision, where exemplars of target responses are not available. Connectionist models that rely on supervised learning are therefore not appropriate for modelling all forms of cognitive development. A task in this class, for which considerable data have been gathered in relation to Karmiloff-Smith's Model of Representational Redescription (RR) (Karmiloff-Smith, 1973, 1992), is one in which children learn through trial and error to balance objects. Data from these studies have been used to derive a training set, and a new approach to modelling cognitive development has been taken in which learning through a dual backpropagation network (Munro, 1987) is reward-driven. Results show that the model can successfully learn and simulate aspects of children's behaviour without explicit training information being defined. This approach, however, is incapable of modelling all levels of the RR Model.
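The paper does not include an implementation, but the reward-driven idea can be sketched as follows. In a toy balancing task, a "response" network never sees a target output; it receives only a scalar reward. A second network, in the spirit of Munro's (1987) dual-network scheme, learns to predict that reward from the input and the response, and its backpropagated gradient stands in for the missing teacher signal. Everything here (the task, network sizes, learning rates) is an illustrative assumption, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy balance task: input x is an object's centre of mass in [-1, 1];
# the correct (but never shown) response is to support it at a = x.
# Only a scalar reward is ever observed.
def true_reward(x, a):
    return -(a - x) ** 2

# Response ("actor") network: a linear map a = w*x + b, mis-initialised.
w, b = -0.5, 0.3

# Reward-predicting ("model") network: a small tanh MLP on (x, a).
H = 16
W1 = rng.normal(0.0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

def predict(x, a):
    h = np.tanh(np.array([x, a]) @ W1 + b1)
    return h, (h @ W2 + b2)[0]

def policy_error(w, b):
    xs = np.linspace(-1, 1, 21)
    return float(np.mean((w * xs + b - xs) ** 2))

initial_error = policy_error(w, b)
lr_model, lr_actor = 0.02, 0.02

for step in range(10000):
    x = rng.uniform(-1, 1)
    a = w * x + b + rng.normal(0.0, 0.3)   # trial-and-error exploration
    r = true_reward(x, a)

    # 1) Train the reward model on the observed reward (plain backprop).
    h, r_hat = predict(x, a)
    d_out = r_hat - r                       # d(MSE)/d(r_hat)
    dh = W2[:, 0] * d_out * (1 - h ** 2)
    W2 -= lr_model * np.outer(h, d_out); b2 -= lr_model * np.array([d_out])
    W1 -= lr_model * np.outer([x, a], dh);  b1 -= lr_model * dh

    # 2) Train the response network by backpropagating *predicted*
    #    reward through the model into the action a.
    h, r_hat = predict(x, a)
    dr_da = float(np.sum(W1[1, :] * (1 - h ** 2) * W2[:, 0]))
    w += lr_actor * dr_da * x               # gradient ascent on r_hat
    b += lr_actor * dr_da

final_error = policy_error(w, b)
print(initial_error, final_error)
```

After training, the response network should place the support much closer to the centre of mass than it did initially, even though no target placement was ever supplied — only reward.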

ID: 462543