Neuromodulated Neural Architectures with Local Error Signals for Memory-Constrained Online Continual Learning (2007.08159v2)

Published 16 Jul 2020 in cs.LG and stat.ML

Abstract: The ability to learn continuously from an incoming data stream without catastrophic forgetting is critical for designing intelligent systems. Many existing approaches to continual learning rely on stochastic gradient descent and its variants. However, these algorithms have to implement various strategies, such as memory buffers or replay, to overcome well-known shortcomings of stochastic gradient descent methods in terms of stability, greediness, and short-term memory. To that end, we develop a biologically inspired, lightweight neural network architecture that incorporates local learning and neuromodulation to enable input processing over data streams and online learning. Next, we address the challenge of hyperparameter selection for tasks that are not known in advance by implementing transfer metalearning: using Bayesian optimization to explore a design space spanning multiple local learning rules and their hyperparameters, we identify high-performing configurations in classical single-task online learning and transfer them to continual learning tasks with task-similarity considerations. We demonstrate the efficacy of our approach in both single-task and continual learning settings. In the single-task setting, we demonstrate superior performance over other local learning approaches on the MNIST, Fashion MNIST, and CIFAR-10 datasets. Using high-performing configurations metalearned in the single-task setting, we achieve superior continual learning performance on Split-MNIST and Split-CIFAR-10 compared with other memory-constrained learning approaches, and match that of state-of-the-art memory-intensive replay-based approaches.
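The abstract describes a network trained with local learning rules whose plasticity is gated by neuromodulation, rather than with end-to-end backpropagation. Below is a minimal sketch of what such a neuromodulated, three-factor local update could look like for a single layer; the specific rule, the `neuromodulator` gating function, and all hyperparameter values are illustrative assumptions, not the paper's actual architecture or tuned configuration.

```python
# Minimal sketch (not the authors' implementation): one layer trained online
# with a local, delta-rule-style error signal gated by a scalar neuromodulator.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 784, 10          # e.g. flattened MNIST digits -> 10 classes
W = rng.normal(0.0, 0.01, size=(n_out, n_in))
lr = 0.05                      # local learning rate (one tunable hyperparameter)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def neuromodulator(error):
    # Assumed scalar gating signal: larger when the prediction is poor, so
    # plastic changes are amplified on surprising inputs and damped otherwise.
    return np.tanh(np.abs(error).mean())

def local_update(W, x, y_onehot, lr):
    y_hat = softmax(W @ x)             # forward pass of the single layer
    error = y_onehot - y_hat           # local error signal (no backprop chain)
    m = neuromodulator(error)          # neuromodulatory gain
    W += lr * m * np.outer(error, x)   # three-factor local weight update
    return W, y_hat

# Online learning over a stream: each example is seen once, no replay buffer.
for _ in range(1000):
    x = rng.random(n_in)                      # placeholder input
    y = np.eye(n_out)[rng.integers(n_out)]    # placeholder one-hot label
    W, _ = local_update(W, x, y, lr)
```

Per the abstract, the choice among candidate local learning rules and their hyperparameters (such as the learning rate above) is made with Bayesian optimization in a single-task setting and then transferred to continual learning tasks with task-similarity considerations.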
