Spiking Neural Networks as Finite State Transducers for Temporal Pattern Recognition: CNS*2022

Research output: Contribution to journal › Meeting abstract › peer-review

Abstract

Spiking neural networks (SNNs) have been extensively studied to understand and reproduce the capabilities of the human brain. A variety of neuronal models have been proposed based on the biophysical properties of biological neurons. Previous work has modelled SNNs as timed automata by representing each neuron as an automaton. A formal computational model of spiking neural networks will help in understanding the relationship between the underlying structure and the corresponding behaviour. We propose a finite state transducer (FST) as a formal model of minimal SNNs evolved to perform temporal pattern recognition. The operation and the constituents of the evolved SNNs are found to have a one-to-one correspondence with the 6-tuple of an FST that maps input strings over {A, B, C} to output strings over {0, 1}. Thus, a spiking neural network recognising a temporal pattern of three signals can be formalised as a 6-tuple machine, SNN = (Q, Σ, ∆, q0, F, σ), where Q is the finite set of network states (each state described by the spiking of interneurons in the interval following an input signal) = {start, hA, hAB, hABC}; Σ is the finite set of input channels = {A, B, C}; ∆ is the finite set of spiking behaviours of the output neurons = {spiking, quiet}; q0 is the starting state of the network (the spiking behaviour of interneurons when the network receives a signal in the wrong order); F is the finite set of final states (reached when the network receives signals in the correct order ABC) = {hABC}; and σ defines transitions between network states, σ ⊆ Q × Σ × ∆ × Q. The evolved network receives a continuous input stream and produces spike(s) in the output neuron only if the signals are received in the correct order (Figure 1a). A segment of the spiking activity in the network is shown in Figure 1b. Before the onset of the first correct signal (around 350 ms), the network is in the start state (continuous spiking of neurons N1 and N2).
When the network receives the first target signal A, N1 speeds up and shuts down N2, transforming the network to the hA state, represented by continuous spiking of N1. The persistent spiking of N1 also prevents the output neuron from spiking. Subsequently, when the network receives a signal on channel B, it shuts down N1 and transforms to the hAB state, enabling the output neuron to spike upon receiving the last target signal C. The analysis of the evolved SNNs revealed that these transitions between network states accomplish temporal pattern recognition. Moreover, we demonstrate that the behaviour of a spiking neural network can be formalised as a finite state transducer. In future work, we plan to wire minimal SNNs together to build larger systems that can recognise complex patterns.
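The 6-tuple above can be written out as a minimal executable sketch. The correct-order path (start → hA → hAB → hABC) follows the abstract directly; the behaviour on out-of-order signals is an assumption here (we reset to start and stay quiet), since the abstract specifies only that wrong-order input leaves the network outside the final state.

```python
# Sketch of the abstract's 6-tuple FST: SNN = (Q, SIGMA, DELTA, q0, F, sigma).
Q = {"start", "hA", "hAB", "hABC"}      # network states (interneuron spiking patterns)
SIGMA = {"A", "B", "C"}                 # input channels
DELTA = {"spiking", "quiet"}            # output-neuron behaviours
q0 = "start"
F = {"hABC"}                            # final state: pattern ABC recognised

# sigma: (state, input) -> (next_state, output); only the correct-order
# transitions are given in the abstract, listed explicitly here.
sigma = {
    ("start", "A"): ("hA", "quiet"),
    ("hA", "B"): ("hAB", "quiet"),
    ("hAB", "C"): ("hABC", "spiking"),
}

def run(signals):
    """Feed a stream of signals; return the output string over {0, 1}."""
    state, out = q0, []
    for s in signals:
        # Assumption: any signal not matching the expected next one
        # resets the network to the start state with a quiet output.
        state, y = sigma.get((state, s), ("start", "quiet"))
        out.append("1" if y == "spiking" else "0")
    return "".join(out)
```

For example, `run("ABC")` yields "001" (the output neuron spikes only on the final correct signal), while an out-of-order stream such as "ACB" yields "000".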
Original language: English
Journal: Journal of Computational Neuroscience
Volume: 51
DOIs
Publication status: Published - 3 Jan 2023
Event: 31st Annual Computational Neuroscience Meeting: CNS*2022 - Melbourne VIC, Australia
Duration: 16 Jul 2022 - 20 Jul 2022
Conference number: 31
https://www.cnsorg.org/cns-2022-meeting-program
