University of Hertfordshire

Authors

  • Christoph Metzner
  • Achim Schweikard
  • Tuomo Maeki-Marttunen
  • Bartosz Zurowski
  • Volker Steuber
Original language: English
Article number: P258
Pages (from-to): 147
Number of pages: 1
Journal: BMC Neuroscience
Volume: 18
Issue: 1
DOIs
Publication status: Published - 17 Aug 2017
Event: 26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3, Antwerp, Belgium
Duration: 15 Jul 2017 - 20 Jul 2017

Abstract

Research on psychiatric disorders has gradually shifted its focus from complex clinical phenotypes towards the identification of biomarkers and endophenotypic measures. Computational approaches have gained significantly more attention over the last years, and this has led to the emergence of 'Computational Psychiatry' as an independent discipline. Computational modelling of biomarkers promises to more readily shed light on the mechanisms underlying disorders and to facilitate the discovery of novel medications [1]. However, in order to develop a computational model, scientists need to have an in-depth understanding of the current, relevant experimental data, the current state of computational modelling and the state of the art of statistical testing. Based on this knowledge, they have to choose the appropriate criteria with which the model predictions and experimental observations will be compared [2]. In a field where the number of both experimental and computational studies grows rapidly, as is the case for psychiatry, this becomes more and more impracticable. Omar et al. therefore proposed a framework for automated validation of scientific models, SciUnit [3]. Here, we propose to adopt this framework for the computational psychiatry community and to collaboratively build common repositories of experimental observations, computational models, test suites and tools. As a case in point, we have implemented test suites for auditory steady-state response deficits in schizophrenic patients, which are based on observations from several experimental studies [4-6], and we demonstrate how existing computational models [6, 7] can be validated against these observations and compared against each other. We have included sets of observations from three experimental studies, which concur on most findings but also disagree on some. This allows us to demonstrate the usefulness of our approach in highlighting and clarifying existing, potentially conflicting, experimental data.
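The validation pattern described above — comparing a model's prediction against an experimental observation via a statistical criterion — can be sketched in plain Python. The class names, the toy model, and the observation values below are illustrative assumptions, not the actual test suites or data from the study (a real implementation would subclass the SciUnit framework's test and model classes):

```python
class ASSRGammaPowerTest:
    """Compare a model's 40 Hz steady-state response power against
    an experimental observation, using a simple z-score criterion.
    Hypothetical stand-in for a SciUnit-style test."""
    def __init__(self, observation):
        # observation: dict with mean and standard deviation from one study
        self.observation = observation

    def judge(self, model):
        prediction = model.produce_gamma_power()
        z = (prediction - self.observation["mean"]) / self.observation["std"]
        return {"prediction": prediction, "z_score": z, "pass": abs(z) < 2.0}

class ToyModel:
    """Stand-in for a biophysical circuit model; returns a fixed power."""
    def __init__(self, gamma_power):
        self.gamma_power = gamma_power

    def produce_gamma_power(self):
        return self.gamma_power

# Hypothetical observation (arbitrary units) and model instance
observation = {"mean": 1.0, "std": 0.2}
test = ASSRGammaPowerTest(observation)
result = test.judge(ToyModel(gamma_power=0.7))
print(result)  # z_score = -1.5, within two standard deviations
```

Running several such tests over the same set of models is what allows the models to be ranked and compared against each other on a shared body of observations.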
We have included computational models that not only comprise biophysically detailed as well as abstract models, but that also differ in implementation (native Python vs. GENESIS vs. NeuroML2), in order to demonstrate the flexibility of the approach. This additionally allows us to showcase the ability of the framework to compare models against each other based on a set of experimental observations. Furthermore, our approach enables us to assess the variability of the produced model output, and therefore the robustness of the findings, by generating a distribution of model instances in which certain parameters vary, such as the precise timing of noise (but not its strength or type) or the precise connectivity (but not the distribution of connections). These instances are then used to produce a distribution of model outputs, which can be compared against the variability of the experimental observations.
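The robustness check described above can be sketched as follows: run many model instances that differ only in their noise seed (i.e. the precise timing of the noise, not its strength or type), then summarise the spread of the resulting outputs. The model here is a deliberately trivial stand-in, not one of the actual network models from the study:

```python
import random
import statistics

def run_instance(seed, base_power=1.0, noise_strength=0.1):
    """One model instance: identical parameters apart from the noise seed,
    so instances differ only in the precise realisation of the noise."""
    rng = random.Random(seed)
    return base_power + rng.gauss(0.0, noise_strength)

# Distribution of model outputs across instances
outputs = [run_instance(seed) for seed in range(100)]
mean_out = statistics.mean(outputs)
sd_out = statistics.stdev(outputs)
print(f"mean={mean_out:.3f}, sd={sd_out:.3f}")
```

The resulting output distribution (mean and spread) can then be held against the between-subject variability reported in the experimental studies, which is the comparison the abstract proposes.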

Notes

© The Author(s) 2017. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
