University of Hertfordshire

Standard

A Framework for Automated Validation and Comparison of Models of Neurophysiological and Neurocognitive Biomarkers of Psychiatric Disorders. / Metzner, Christoph; Schweikard, Achim; Maeki-Marttunen, Tuomo; Zurowski, Bartosz; Steuber, Volker.

In: BMC Neuroscience, Vol. 18, No. 1, P258, 17.08.2017, p. 147.

Research output: Contribution to journal › Meeting abstract › peer-review

Author

Metzner, Christoph ; Schweikard, Achim ; Maeki-Marttunen, Tuomo ; Zurowski, Bartosz ; Steuber, Volker. / A Framework for Automated Validation and Comparison of Models of Neurophysiological and Neurocognitive Biomarkers of Psychiatric Disorders. In: BMC Neuroscience. 2017 ; Vol. 18, No. 1. p. 147.

Bibtex

@article{1a0fb3f10e38439fa0988ddb3286b3b2,
title = "A Framework for Automated Validation and Comparison of Models of Neurophysiological and Neurocognitive Biomarkers of Psychiatric Disorders",
abstract = "Research on psychiatric disorders has gradually shifted its focus from complex clinical phenotypes towards the identification of biomarkers and endophenotypic measures. Computational approaches have gained significantly more attention over the last years, and this has led to the emergence of {\textquoteleft}Computational Psychiatry{\textquoteright} as an independent discipline. Computational modelling of biomarkers promises to more readily shed light on the mechanisms underlying disorders and to facilitate the discovery of novel medications [1]. However, in order to develop a computational model, scientists need to have an in-depth understanding of the current, relevant experimental data, the current state of computational modeling and the state-of-the-art of statistical testing. Based on this knowledge, they have to choose the appropriate criteria with which the model predictions and experimental observations will be compared [2]. In a field where both the number of experimental and computational studies grows rapidly, as is the case for psychiatry, this becomes more and more impracticable. Omar et al. therefore proposed a framework for automated validation of scientific models, SciUnit [3]. Here, we propose to adopt this framework for the computational psychiatry community and to collaboratively build common repositories of experimental observations, computational models, test suites and tools. As a case in point, we have implemented test suites for auditory steady-state response deficits in schizophrenic patients, which are based on observations from several experimental studies [4–6], and we demonstrate how existing computational models [6, 7] can be validated against these observations and compared against each other. We have included sets of observations from three experimental studies, which concur on most findings but also disagree on some. This allows us to demonstrate the usefulness of our approach in highlighting and clarifying existing, potentially conflicting, experimental data. We have included computational models that not only comprise biophysically detailed as well as abstract models, but that also differ in implementation (native Python vs. Genesis vs. NeuroML2), in order to demonstrate the flexibility of the approach. Furthermore, this additionally allows us to showcase the ability of the framework to compare models against each other based on a set of experimental observations. Furthermore, our approach enables us to assess the variability of the produced model output, and therefore the robustness of the findings, by generating a distribution of model instances where certain parameters, such as the precise timing of noise (however, not strength and type of noise) or the precise connectivity (however, not the distribution of connections) vary, which then are used to produce a distribution of model outputs. This can inform on the robustness of the findings and be compared against the variability of experimental observations.",
author = "Christoph Metzner and Achim Schweikard and Tuomo Maeki-Marttunen and Bartosz Zurowski and Volker Steuber",
note = "{\textcopyright} The Author(s) 2017. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.; 26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3 ; Conference date: 15-07-2017 Through 20-07-2017",
year = "2017",
month = aug,
day = "17",
doi = "10.1186/s12868-017-0372-1",
language = "English",
volume = "18",
pages = "147",
journal = "BMC Neuroscience",
issn = "1471-2202",
publisher = "BioMed Central",
number = "1",
}

RIS

TY - JOUR

T1 - A Framework for Automated Validation and Comparison of Models of Neurophysiological and Neurocognitive Biomarkers of Psychiatric Disorders

AU - Metzner, Christoph

AU - Schweikard, Achim

AU - Maeki-Marttunen, Tuomo

AU - Zurowski, Bartosz

AU - Steuber, Volker

N1 - © The Author(s) 2017. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

PY - 2017/8/17

Y1 - 2017/8/17

N2 - Research on psychiatric disorders has gradually shifted its focus from complex clinical phenotypes towards the identification of biomarkers and endophenotypic measures. Computational approaches have gained significantly more attention over the last years, and this has led to the emergence of ‘Computational Psychiatry’ as an independent discipline. Computational modelling of biomarkers promises to more readily shed light on the mechanisms underlying disorders and to facilitate the discovery of novel medications [1]. However, in order to develop a computational model, scientists need to have an in-depth understanding of the current, relevant experimental data, the current state of computational modeling and the state-of-the-art of statistical testing. Based on this knowledge, they have to choose the appropriate criteria with which the model predictions and experimental observations will be compared [2]. In a field where both the number of experimental and computational studies grows rapidly, as is the case for psychiatry, this becomes more and more impracticable. Omar et al. therefore proposed a framework for automated validation of scientific models, SciUnit [3]. Here, we propose to adopt this framework for the computational psychiatry community and to collaboratively build common repositories of experimental observations, computational models, test suites and tools. As a case in point, we have implemented test suites for auditory steady-state response deficits in schizophrenic patients, which are based on observations from several experimental studies [4–6], and we demonstrate how existing computational models [6, 7] can be validated against these observations and compared against each other. We have included sets of observations from three experimental studies, which concur on most findings but also disagree on some. This allows us to demonstrate the usefulness of our approach in highlighting and clarifying existing, potentially conflicting, experimental data. We have included computational models that not only comprise biophysically detailed as well as abstract models, but that also differ in implementation (native Python vs. Genesis vs. NeuroML2), in order to demonstrate the flexibility of the approach. Furthermore, this additionally allows us to showcase the ability of the framework to compare models against each other based on a set of experimental observations. Furthermore, our approach enables us to assess the variability of the produced model output, and therefore the robustness of the findings, by generating a distribution of model instances where certain parameters, such as the precise timing of noise (however, not strength and type of noise) or the precise connectivity (however, not the distribution of connections) vary, which then are used to produce a distribution of model outputs. This can inform on the robustness of the findings and be compared against the variability of experimental observations.

AB - Research on psychiatric disorders has gradually shifted its focus from complex clinical phenotypes towards the identification of biomarkers and endophenotypic measures. Computational approaches have gained significantly more attention over the last years, and this has led to the emergence of ‘Computational Psychiatry’ as an independent discipline. Computational modelling of biomarkers promises to more readily shed light on the mechanisms underlying disorders and to facilitate the discovery of novel medications [1]. However, in order to develop a computational model, scientists need to have an in-depth understanding of the current, relevant experimental data, the current state of computational modeling and the state-of-the-art of statistical testing. Based on this knowledge, they have to choose the appropriate criteria with which the model predictions and experimental observations will be compared [2]. In a field where both the number of experimental and computational studies grows rapidly, as is the case for psychiatry, this becomes more and more impracticable. Omar et al. therefore proposed a framework for automated validation of scientific models, SciUnit [3]. Here, we propose to adopt this framework for the computational psychiatry community and to collaboratively build common repositories of experimental observations, computational models, test suites and tools. As a case in point, we have implemented test suites for auditory steady-state response deficits in schizophrenic patients, which are based on observations from several experimental studies [4–6], and we demonstrate how existing computational models [6, 7] can be validated against these observations and compared against each other. We have included sets of observations from three experimental studies, which concur on most findings but also disagree on some. This allows us to demonstrate the usefulness of our approach in highlighting and clarifying existing, potentially conflicting, experimental data. We have included computational models that not only comprise biophysically detailed as well as abstract models, but that also differ in implementation (native Python vs. Genesis vs. NeuroML2), in order to demonstrate the flexibility of the approach. Furthermore, this additionally allows us to showcase the ability of the framework to compare models against each other based on a set of experimental observations. Furthermore, our approach enables us to assess the variability of the produced model output, and therefore the robustness of the findings, by generating a distribution of model instances where certain parameters, such as the precise timing of noise (however, not strength and type of noise) or the precise connectivity (however, not the distribution of connections) vary, which then are used to produce a distribution of model outputs. This can inform on the robustness of the findings and be compared against the variability of experimental observations.

U2 - 10.1186/s12868-017-0372-1

DO - 10.1186/s12868-017-0372-1

M3 - Meeting abstract

VL - 18

SP - 147

JO - BMC Neuroscience

JF - BMC Neuroscience

SN - 1471-2202

IS - 1

M1 - P258

T2 - 26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3

Y2 - 15 July 2017 through 20 July 2017

ER -