A reward driven connectionist model of cognitive development

L. Peters, N. Davey, Pamela Smith, D.J. Messer

    Research output: Chapter in Book/Report/Conference proceeding, Conference contribution

    88 Citations (Scopus)
    28 Downloads (Pure)

    Abstract

    Children learn many skills under self-supervision, where exemplars of target responses are not available. Connectionist models that rely on supervised learning are therefore not appropriate for modelling all forms of cognitive development. A task in this class, for which considerable data have been gathered in relation to Karmiloff-Smith's Model of Representational Redescription (RR) (Karmiloff-Smith, 1973, 1992), is one in which children learn through trial and error to balance objects. Data from these studies have been used to derive a training set, and a new approach to modelling cognitive development has been taken in which learning through a dual backpropagation network (Munro, 1987) is reward-driven. Results have shown that the model can successfully learn and simulate aspects of children's behaviour without explicit training information being defined. This approach, however, is incapable of modelling all levels of the RR Model.
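The reward-driven scheme the abstract describes can be illustrated with a minimal sketch. In Munro's (1987) dual back-propagation arrangement, one network (a critic) learns to predict the scalar reward from the state and action, and a second network (the actor) is then trained by back-propagating the predicted reward's gradient through the critic, so no target responses are ever supplied. The toy "balance" environment, the network sizes, and the learning rates below are all illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the balancing task: state s in R^2,
# scalar placement a; reward peaks when a equals s[0] + s[1].
def reward(s, a):
    return 1.0 - (a - (s[0] + s[1])) ** 2

# Critic: a small MLP predicting reward from (state, action).
H = 16
W1 = rng.normal(0.0, 0.5, (H, 3)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, H);      b2 = 0.0

def critic(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

# Actor: linear policy a = v.s + c, trained only through the critic --
# it never sees a target action, only the reward signal.
v = np.zeros(2); c = 0.0
lr_c, lr_a = 0.05, 0.01

for step in range(8000):
    s = rng.uniform(-1.0, 1.0, 2)
    a = v @ s + c
    r = reward(s, a)

    x = np.array([s[0], s[1], a])
    r_hat, h = critic(x)

    # Gradients first (all on the current weights), then the updates.
    err = r_hat - r                       # d/dr_hat of (r_hat - r)^2 / 2
    dh = err * W2 * (1.0 - h ** 2)
    dr_da = W2 @ ((1.0 - h ** 2) * W1[:, 2])  # back-prop reward to action

    # Critic update: fit the observed reward.
    W1 -= lr_c * np.outer(dh, x); b1 -= lr_c * dh
    W2 -= lr_c * err * h;         b2 -= lr_c * err

    # Actor update: ascend the critic's predicted reward.
    v += lr_a * dr_da * s
    c += lr_a * dr_da
```

After training, the actor's placements earn noticeably more reward than its initial (all-zero) policy, which is the sense in which behaviour is shaped "without explicit training information being defined". This sketch does not attempt the RR-level analysis that the paper reports the model cannot fully capture.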
    Original language: English
    Title of host publication: Procs of the European Conf on Cognitive Science
    Editors: S Bagnara
    Pages: 491-496
    Publication status: Published - 1999
