
An Introduction to Probabilistic Spiking Neural Networks: Probabilistic Models, Learning Rules, and Applications (1910.01059v2)

Published 2 Oct 2019 in cs.LG, cs.NE, eess.SP, and stat.ML

Abstract: Spiking neural networks (SNNs) are distributed trainable systems whose computing elements, or neurons, are characterized by internal analog dynamics and by digital and sparse synaptic communications. The sparsity of the synaptic spiking inputs and the corresponding event-driven nature of neural processing can be leveraged by energy-efficient hardware implementations, which can offer significant energy reductions as compared to conventional artificial neural networks (ANNs). The design of training algorithms lags behind the hardware implementations. Most existing training algorithms for SNNs have been designed either for biological plausibility or through conversion from pretrained ANNs via rate encoding. This article provides an introduction to SNNs by focusing on a probabilistic signal processing methodology that enables the direct derivation of learning rules by leveraging the unique time-encoding capabilities of SNNs. We adopt discrete-time probabilistic models for networked spiking neurons and derive supervised and unsupervised learning rules from first principles via variational inference. Examples and open research problems are also provided.

Citations (70)

Summary

  • The paper introduces a probabilistic framework that models spiking neuron behavior via a generalized linear model (GLM) with a sigmoid-based spike emission probability.
  • The paper details both supervised and unsupervised learning methods, employing Maximum Likelihood estimation, stochastic gradient descent (SGD), and ELBO maximization for training SNNs.
  • The paper demonstrates energy efficiency and effective performance in tasks such as digit classification and sequence prediction, supporting the case for neuromorphic computing.

An Introduction to Probabilistic Spiking Neural Networks

The paper "An Introduction to Probabilistic Spiking Neural Networks: Probabilistic Models, Learning Rules, and Applications" presents a comprehensive overview of Spiking Neural Networks (SNNs) within a probabilistic framework. This approach diverges from traditional deterministic models, offering advantages in terms of energy efficiency and learning flexibility.

Core Concepts

The discussion centers on probabilistic models of SNNs in which neurons emit binary spikes with a probability determined by their membrane potential. The membrane potential is a linear combination, parameterized by synaptic weights and biases, of filtered traces of the feedforward (presynaptic) spike inputs and of the neuron's own past (feedback) spikes. The Generalized Linear Model (GLM) is emphasized, in which the neuron's spike emission probability is obtained by applying a sigmoid function to its membrane potential.
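To make the model concrete, the sketch below simulates one discrete time step of such a neuron. This is a minimal illustration with hypothetical variable names, not the paper's exact notation: the synaptic and feedback filters are assumed to have been applied already, so the traces enter as precomputed values.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def glm_neuron_step(x_trace, y_trace, w, v, gamma, rng):
    """One discrete time step of a probabilistic GLM spiking neuron.

    x_trace: filtered traces of presynaptic (feedforward) spike inputs
    y_trace: filtered trace of the neuron's own past (feedback) spikes
    w, v, gamma: feedforward weights, feedback weight, and bias
    """
    # Membrane potential: affine function of the filtered traces.
    u = np.dot(w, x_trace) + v * y_trace + gamma
    # Spike emission is Bernoulli with sigmoid firing probability.
    p = sigmoid(u)
    s = rng.random() < p
    return int(s), p

rng = np.random.default_rng(0)
x_trace = rng.random(10)                      # example presynaptic traces
w = rng.normal(scale=0.5, size=10)            # example weights
spike, prob = glm_neuron_step(x_trace, y_trace=0.0, w=w, v=-1.0,
                              gamma=0.0, rng=rng)
```

A negative feedback weight `v`, as used here, plays the role of refractoriness: a recent spike lowers the membrane potential and suppresses immediate re-firing.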

Learning Algorithms

For training SNNs, the paper outlines both supervised and unsupervised methods based on the Maximum Likelihood (ML) criterion. In fully observable networks, stochastic gradient descent (SGD) is applied in both batch and online forms. The resulting weight updates are local, derived from the gradient of each neuron's spiking log-likelihood, and require no backpropagation of error signals, which keeps training compatible with energy-efficient, event-driven implementations.
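As a rough illustration of why the update is local (again with hypothetical names, not the paper's notation): for a Bernoulli spike with sigmoid link, the gradient of the log-likelihood log p(s | u) with respect to the membrane potential reduces to s - sigmoid(u), so each synapse only needs its own presynaptic trace and the neuron's error term.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def online_ml_update(w, gamma, x_trace, s_target, lr=0.01):
    """One online SGD step on the Bernoulli log-likelihood of a GLM neuron.

    Since d/du [log p(s | u)] = s - sigmoid(u), the weight gradient is a
    postsynaptic error multiplied by the presynaptic filtered trace:
    the update is entirely synapse-local.
    """
    u = np.dot(w, x_trace) + gamma
    err = s_target - sigmoid(u)      # local error, no backpropagated signal
    return w + lr * err * x_trace, gamma + lr * err
```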

Conversely, when hidden neurons form part of the network architecture, variational inference becomes necessary to approximate the posterior distribution over these latent spike signals. This is carried out by maximizing the Evidence Lower Bound (ELBO), typically combined with Monte Carlo sampling to estimate otherwise intractable expectations over the hidden spikes. The resulting learning rules follow a three-factor, Hebbian-like structure that combines local pre- and postsynaptic activity terms with a global learning signal.
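A schematic version of such a rule for a single hidden neuron is sketched below (hypothetical function and variable names; in practice the learning signal would be a Monte Carlo estimate of the relevant ELBO term, often baseline-corrected to reduce variance).

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def three_factor_update(w_h, x_trace, s_h, learning_signal, lr=0.01):
    """Schematic three-factor update for a hidden neuron's weights.

    Factor 1: presynaptic filtered trace (x_trace)
    Factor 2: postsynaptic error (sampled spike minus firing probability)
    Factor 3: global learning signal (e.g., a Monte Carlo estimate of the
              ELBO contribution, broadcast to all hidden synapses)
    """
    u = np.dot(w_h, x_trace)
    post_err = s_h - sigmoid(u)      # local, Hebbian-like term
    return w_h + lr * learning_signal * post_err * x_trace
```

The first two factors are purely local, while the third is the only quantity that must be communicated network-wide, which is what distinguishes this structure from full backpropagation.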

Implementation and Examples

Practical scenarios are explored through both batch and online learning setups. For batch learning, an SNN trained on the USPS digit classification task illustrates the trade-off between accuracy and energy consumption, effectively matching the performance of an ANN when given a sufficiently long presentation time (i.e., enough spike samples per input). In an online setting, an SNN is tasked with predicting sequences represented through rate and time coding, further highlighting the unique temporal encoding capabilities of SNNs that afford energy-efficient computation.
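For intuition, a toy version of the two encodings (a hypothetical scheme, not the paper's exact experimental setup) might look as follows: rate coding maps an analog value to a per-step spiking probability, while time coding maps it to the latency of a single spike.

```python
import numpy as np

def rate_encode(value, T, rng):
    """Rate coding: spike in each of T steps with probability value in [0, 1]."""
    return (rng.random(T) < value).astype(int)

def time_encode(value, T):
    """Time coding: a single spike whose latency decreases with the value."""
    spikes = np.zeros(T, dtype=int)
    t = int(round((1.0 - value) * (T - 1)))   # larger values spike earlier
    spikes[t] = 1
    return spikes

rng = np.random.default_rng(0)
print(rate_encode(0.8, T=10, rng=rng))   # ~8 spikes on average
print(time_encode(0.8, T=10))            # one early spike
```

Note the energy implication: time coding emits a single spike per value regardless of its magnitude, whereas rate coding's spike count, and hence its energy cost, grows with the encoded value.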

Implications and Future Directions

The probabilistic perspective provides significant analytic and practical benefits for SNNs, including the derivation of biologically plausible and theoretically sound learning algorithms. The avenue of energy-efficient and sparse event-driven computing holds promise for neuromorphic hardware implementations. However, the field remains ripe with opportunities. The realization of efficient encoding/decoding interfaces remains a key challenge. Further research could enhance meta-learning approaches, develop distributed architectures, or refine regularization techniques to curtail spiking activity without sacrificing learning capacity.

In conclusion, "An Introduction to Probabilistic Spiking Neural Networks" underscores both the current advancements and the unresolved puzzles in this nascent domain. It opens pathways for realizing SNNs as a practical, low-power alternative to standard ANNs in a wide array of signal processing and learning applications.
