An Introduction to Spiking Neural Networks: Probabilistic Models, Learning Rules, and Applications (1812.03929v5)

Published 10 Dec 2018 in eess.SP, cs.IT, cs.LG, cs.NE, math.IT, and stat.ML

Abstract: Spiking Neural Networks (SNNs) are distributed trainable systems whose computing elements, or neurons, are characterized by internal analog dynamics and by digital and sparse synaptic communications. The sparsity of the synaptic spiking inputs and the corresponding event-driven nature of neural processing can be leveraged by hardware implementations that have demonstrated significant energy reductions as compared to conventional Artificial Neural Networks (ANNs). Most existing training algorithms for SNNs have been designed either for biological plausibility or through conversion from pre-trained ANNs via rate encoding. This paper aims at providing an introduction to SNNs by focusing on a probabilistic signal processing methodology that enables the direct derivation of learning rules leveraging the unique time encoding capabilities of SNNs. To this end, the paper adopts discrete-time probabilistic models for networked spiking neurons, and it derives supervised and unsupervised learning rules from first principles by using variational inference. Examples and open research problems are also provided.
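To make the probabilistic formulation concrete, below is a minimal sketch of a single discrete-time Bernoulli (GLM-style) spiking neuron with a sigmoid firing probability and the local maximum-likelihood weight update implied by its log-likelihood. The dimensions, kernels, and variable names are illustrative assumptions for this sketch, not the paper's exact model or notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes and hyperparameters (assumed, not taken from the paper).
N_in, T, tau = 4, 100, 5.0      # presynaptic neurons, time steps, trace decay
w = rng.normal(0.0, 0.5, N_in)  # synaptic weights
bias = -1.0                     # bias / negative threshold term
lr = 0.05                       # learning rate

# Random presynaptic spike trains and a random target spike train
# (in practice these would come from data and a desired output).
s_in = (rng.random((T, N_in)) < 0.2).astype(float)
s_target = (rng.random(T) < 0.1).astype(float)

trace = np.zeros(N_in)  # exponentially filtered presynaptic activity
for t in range(T):
    # Filter incoming spikes with a simple exponential synaptic kernel.
    trace = (1.0 - 1.0 / tau) * trace + s_in[t]

    # Membrane potential and Bernoulli firing probability.
    u = w @ trace + bias
    p_fire = sigmoid(u)

    # Supervised maximum-likelihood update: gradient of the Bernoulli
    # log-likelihood of the target spike, a local "post error x pre trace" rule.
    err = s_target[t] - p_fire
    w += lr * err * trace
    bias += lr * err
```

The update has the flavor of the rules discussed in the paper: it is local to each synapse, driven by the difference between the observed (target) spike and the predicted firing probability, and it exploits the exact spike timing through the filtered presynaptic trace rather than a rate code.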

Citations (2)
