
Neuromodulated Neural Architectures with Local Error Signals for Memory-Constrained Online Continual Learning (2007.08159v2)

Published 16 Jul 2020 in cs.LG and stat.ML

Abstract: The ability to learn continuously from an incoming data stream without catastrophic forgetting is critical for designing intelligent systems. Many existing approaches to continual learning rely on stochastic gradient descent and its variants. However, these algorithms have to implement various strategies, such as memory buffers or replay, to overcome well-known shortcomings of stochastic gradient descent methods in terms of stability, greed, and short-term memory. To that end, we develop a biologically inspired, lightweight neural network architecture that incorporates local learning and neuromodulation to enable input processing over data streams and online learning. Next, we address the challenge of hyperparameter selection for tasks that are not known in advance by implementing transfer metalearning: using Bayesian optimization to explore a design space spanning multiple local learning rules and their hyperparameters, we identify high-performing configurations in classical single-task online learning and transfer them to continual learning tasks with task-similarity considerations. We demonstrate the efficacy of our approach in both single-task and continual learning settings. In the single-task learning setting, we demonstrate superior performance over other local learning approaches on the MNIST, Fashion-MNIST, and CIFAR-10 datasets. Using high-performing configurations metalearned in the single-task learning setting, we achieve superior continual learning performance on Split-MNIST and Split-CIFAR-10 compared with other memory-constrained learning approaches, and match that of state-of-the-art memory-intensive replay-based approaches.
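
The abstract does not spell out the exact learning rule, so the following is only a rough illustration of what a neuromodulated, local (non-backpropagation) update can look like: a minimal NumPy sketch of a three-factor rule in which a local error signal and a scalar modulatory gain gate a Hebbian-style weight change on a single layer, applied one sample at a time with no replay buffer. The layer sizes, the form of the modulatory signal, and all names are assumptions made for illustration, not the paper's architecture or rule.

# Illustrative sketch only: a generic three-factor ("neuromodulated Hebbian")
# local update on one linear layer, processed as an online stream.
# The modulator form and dimensions below are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 784, 10                  # e.g. a flattened MNIST image -> 10 classes
W = 0.01 * rng.standard_normal((n_out, n_in))
lr = 0.05                              # local learning rate (a tunable hyperparameter)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def local_update(W, x, y_onehot, lr):
    """One online step: a local error at the layer output gates a Hebbian-style update."""
    pre = x                            # presynaptic activity
    post = softmax(W @ x)              # postsynaptic activity
    error = y_onehot - post            # local error signal computed at this layer
    modulator = np.abs(error).mean()   # scalar neuromodulatory gain (assumed form)
    # Three-factor rule: presynaptic activity x local error x modulatory signal
    W += lr * modulator * np.outer(error, pre)
    return W

# Stream synthetic samples one at a time (online learning, no memory buffer).
for _ in range(100):
    x = rng.random(n_in)
    label = rng.integers(n_out)
    y = np.zeros(n_out)
    y[label] = 1.0
    W = local_update(W, x, y, lr)

In this kind of setup, the learning rate, the choice of local rule, and the shape of the modulatory signal are exactly the sort of design-space dimensions the abstract describes tuning with Bayesian optimization and then transferring from single-task to continual learning tasks.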
