Spiking Neural Predictive Coding for Continual Learning from Data Streams (1908.08655v3)

Published 23 Aug 2019 in cs.NE, cs.LG, and q-bio.NC

Abstract: For energy-efficient computation in specialized neuromorphic hardware, we present spiking neural coding, an instantiation of a family of artificial neural models grounded in the theory of predictive coding. This model, the first of its kind, works by operating in a never-ending process of "guess-and-check", where neurons predict the activity values of one another and then adjust their own activities to make better future predictions. The interactive, iterative nature of our system fits well into the continuous-time formulation of sensory stream prediction and, as we show, the model's structure yields a local synaptic update rule, which can be used as a complement to, or an alternative for, online spike-timing-dependent plasticity. In this article, we experiment with an instantiation of our model consisting of leaky integrate-and-fire units. However, the framework within which our system is situated can naturally incorporate more complex neurons such as the Hodgkin-Huxley model. Our experimental results in pattern recognition demonstrate the potential of the model when binary spike trains are the primary paradigm for inter-neuron communication. Notably, spiking neural coding is competitive in terms of classification performance and experiences less forgetting when learning from a task sequence, offering a more computationally economical, biologically plausible alternative to popular artificial neural networks.
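To make the abstract's "guess-and-check" idea concrete, here is a minimal sketch of the two ingredients it names: discrete-time leaky integrate-and-fire units and a local, prediction-error-driven synaptic update. This is an illustrative toy, not the paper's exact model; the update rule, rate estimator, and all parameter values (leak, threshold, learning rate) are assumptions chosen for clarity.

```python
import numpy as np

def lif_step(v, input_current, v_thresh=1.0, leak=0.9):
    """One discrete-time leaky integrate-and-fire update: membrane
    voltage leaks, integrates input, and emits a binary spike on
    crossing threshold (with reset). Parameters are illustrative."""
    v = leak * v + input_current
    spikes = (v >= v_thresh).astype(float)
    v = np.where(spikes > 0, 0.0, v)  # reset units that spiked
    return v, spikes

def local_update(W, pre_spikes, error, lr=0.05):
    """Local synaptic update driven by a prediction error: each weight
    changes using only its pre-synaptic spike and the post-synaptic
    error signal -- no global backpropagated gradients."""
    return W + lr * np.outer(error, pre_spikes)

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
W = rng.normal(0.0, 0.1, size=(n_out, n_in))
v = np.zeros(n_out)
target = rng.random(n_out)  # activity pattern the layer should predict

rates = np.zeros(n_out)
for t in range(1, 201):
    pre = (rng.random(n_in) < 0.3).astype(float)  # Bernoulli input spikes
    v, post = lif_step(v, W @ pre)
    rates += (post - rates) / t          # running firing-rate estimate
    error = target - rates               # "guess-and-check" mismatch
    W = local_update(W, pre, error)
```

The key property the sketch preserves is locality: the weight change for synapse (i, j) depends only on the pre-synaptic spike j and the post-synaptic error i, which is what makes rules of this family candidates for neuromorphic hardware.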

Citations (24)


Authors (1)
