University of Hertfordshire

The Importance of Self-excitation in Spiking Neural Networks Evolved to Recognize Temporal Patterns

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Original language: English
Title of host publication: Artificial Neural Networks and Machine Learning – ICANN 2019
Subtitle of host publication: Theoretical Neural Computation - 28th International Conference on Artificial Neural Networks, 2019, Proceedings
Editors: Igor V. Tetko, Pavel Karpov, Fabian Theis, Vera Kurková
Publisher: Springer Verlag
Pages: 758-771
Number of pages: 14
ISBN (Print): 9783030304867
Publication status: E-pub ahead of print - 9 Sep 2019
Event: 28th International Conference on Artificial Neural Networks, ICANN 2019 - Munich, Germany
Duration: 17 Sep 2019 – 19 Sep 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11727 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 28th International Conference on Artificial Neural Networks, ICANN 2019
Country: Germany
City: Munich
Period: 17/09/19 – 19/09/19

Abstract

Biological and artificial spiking neural networks process information by changing their states in response to the temporal patterns of input and of the activity of the network itself. Here we analyse very small networks, evolved to recognize three signals in a specific pattern (ABC) in a continuous temporal stream of signals (..CABCACB..). This task can be accomplished by networks with just four neurons (three interneurons and one output). We show that evolving the networks in the presence of noise and variation of the intervals of silence between signals biases the solutions towards networks that can maintain their states (a form of memory), while the majority of networks evolved without variable intervals between signals cannot do so. We demonstrate that in most networks, the evolutionary process leads to the presence of superfluous connections that can be pruned without affecting the ability of the networks to perform the task and, if the unpruned network can maintain memory, so does the pruned network. We then analyse how these small networks can perform their tasks, using a paradigm of finite state transducers. This analysis shows that self-excitatory loops (autapses) in these networks are crucial both for the recognition of the pattern and for memory maintenance.
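The finite-state-transducer paradigm mentioned in the abstract can be illustrated with a minimal sketch. The following Python snippet is not taken from the paper and does not model the evolved spiking networks themselves; it is an assumed, hand-written transducer that emits an output of 1 exactly when the sub-pattern ABC completes within a continuous stream of signals, showing the kind of state machine against which such networks can be compared.

```python
def make_abc_transducer():
    """Return a step function for a transducer that detects 'ABC'.

    The internal state counts how many pattern symbols have been
    matched so far (0, 1, or 2). This is an illustrative sketch,
    not the paper's evolved network.
    """
    state = 0

    def step(signal):
        nonlocal state
        if signal == "A":
            state = 1          # an A always (re)starts a potential match
        elif signal == "B" and state == 1:
            state = 2          # A followed by B
        elif signal == "C" and state == 2:
            state = 0
            return 1           # pattern ABC just completed
        else:
            state = 0          # any other transition resets the match
        return 0

    return step


step = make_abc_transducer()
stream = "CABCACB"             # the continuous stream from the abstract
outputs = [step(s) for s in stream]
# outputs -> [0, 0, 0, 1, 0, 0, 0]: the single 1 marks the end of ABC
```

In this framing, the role the paper attributes to autapses corresponds to holding such a state value between signals, i.e. the transducer's memory of how much of the pattern has been seen.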

ID: 17545388