Trust in Cognitive Models: Understandability and Computational Reliabilism

Noman Javed, Angelo Pirrone, Laura Bartlett, Peter Lane, Fernand Gobet

Research output: Chapter in Book/Report/Conference proceeding (peer-reviewed)



The realm of knowledge production, once considered a solely human endeavour, has been transformed by the rising prominence of artificial intelligence. AI not only generates new forms of knowledge but also plays a substantial role in scientific discovery. This development raises a fundamental question: can we trust knowledge generated by AI systems? Cognitive modelling, a field at the intersection of psychology and computer science that aims to explain human behaviour under various experimental conditions, underscores the importance of trust. To address this concern, we identify understandability and computational reliabilism as two essential aspects of trustworthiness in cognitive modelling. This paper examines both dimensions of trust, taking as a case study a system for semi-automatically generating cognitive models. These models are evolved interactively as computer programs using genetic programming. The choice of genetic programming, coupled with simplification algorithms, aims to produce understandable cognitive models. To address reliability, we adopt computational reliabilism and demonstrate how our test-driven software development methodology instils reliability in both the model-generation process and the models themselves.
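To make the two ideas in the abstract concrete, the sketch below shows, in miniature, what "evolving models as programs by genetic programming, then simplifying them" can look like. This is a hedged illustration only: the function names, operator set, and fitness measure are assumptions for the example, not the authors' actual system, and the assertions at the bottom illustrate the kind of unit checks a test-driven workflow would impose on each component.

```python
import random

# Hypothetical sketch: evolve a tiny arithmetic "model" of an
# input -> output mapping by genetic programming, then apply a
# simplification pass (constant folding) to the winning program.

OPS = [('add', lambda a, b: a + b), ('mul', lambda a, b: a * b)]
TERMS = ['x', 1, 2]  # a variable and two constants

def random_tree(depth=2):
    """Grow a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    name, _ = random.choice(OPS)
    return (name, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Interpret a tree for a given input x."""
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    name, left, right = tree
    return dict(OPS)[name](evaluate(left, x), evaluate(right, x))

def fitness(tree, data):
    """Sum of squared errors against observed (x, y) pairs; lower is better."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

def mutate(tree):
    """Crude mutation: sometimes replace the whole tree."""
    return random_tree() if random.random() < 0.5 else tree

def simplify(tree):
    """Constant folding: a stand-in for the paper's simplification step,
    shrinking programs so the resulting model is easier to understand."""
    if isinstance(tree, tuple):
        name, l, r = tree
        l, r = simplify(l), simplify(r)
        if isinstance(l, (int, float)) and isinstance(r, (int, float)):
            return dict(OPS)[name](l, r)
        return (name, l, r)
    return tree

def evolve(data, pop_size=50, generations=30):
    """Truncation-selection GP loop returning a simplified best program."""
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, data))
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return simplify(min(pop, key=lambda t: fitness(t, data)))

# Example run against a known target function, y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(5)]
best = evolve(data)
```

The point of the simplification pass is understandability: an evolved program that folds to `('add', ('mul', 2, 'x'), 1)` can be read and checked by a human, whereas raw evolved trees are often bloated. The unit-level assertions on `evaluate` and `simplify` are the kind of tests that, under a test-driven methodology, ground trust in the generation machinery itself.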
Original language: English
Title of host publication: Proceedings of the AISB Convention 2023
Editors: Berndt Muller
Publisher: Swansea University
Number of pages: 8
ISBN (Electronic): 978-1-908187-85-7
Publication status: Published - 14 Apr 2023
Event: AISB Convention 2023 - Swansea University
Duration: 13 Apr 2023 - 14 Apr 2023




