
Online Training of Hopfield Networks using Predictive Coding (2406.14723v1)

Published 20 Jun 2024 in cs.NE

Abstract: Neuroscience and AI have progressed in tandem, each contributing to our understanding of the brain, and inspiring recent developments in biologically plausible neural networks (NNs) and learning rules. Predictive coding (PC), and its learning rule, have been shown to approximate error backpropagation in a biologically relevant manner, with local weight updates that depend only on the activity of the pre- and post-synaptic neurons. Unlike traditional feedforward NNs where the flow of information goes in one direction, PC models mimic the brain more accurately by passing information bidirectionally: prediction in one direction, and correction/error in the other. PC models learn by clamping some neurons to target values and running the network to equilibrium. At equilibrium, the network calculates its own error gradients right at the location where they are used for weight updates. Traditional backprop requires the computation graph to be feedforward. However, the PC version of backprop does not have this requirement. Amazingly, no one has demonstrated the application of PC learning directly to recurrent neural networks (RNNs). Hopfield networks (HNs) are RNNs that implement a content-addressable memory, learning patterns (or "memories") that can be retrieved from partial or corrupted patterns. In this paper, we show that an HN can be trained using the PC learning rules without modification. To our knowledge, this is the first time PC learning has been applied directly to train an RNN, without the need to unroll it in time. Our results indicate that the PC-trained HNs behave like classical HNs.
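To make the training scheme concrete, below is a minimal NumPy sketch of local predictive-coding updates applied to a Hopfield-style network. It assumes the standard PC formulation (value nodes x_i, prediction errors eps_i = x_i - f(sum_j W_ij x_j), and the Hebbian-like update dW_ij ∝ eps_i * f(x_j)); the network size, learning rate, epoch count, and the symmetrization step are illustrative assumptions, not the paper's reported settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 64, 5
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n))  # random +/-1 memories

W = np.zeros((n, n))  # recurrent weights, no self-connections

def pc_train_step(W, target, lr=0.01):
    """One predictive-coding step with every neuron clamped to the memory.

    With all value nodes clamped, the network is already at equilibrium,
    so the prediction errors can be read off directly and the weight
    update is purely local: dW_ij = lr * eps_i * f(x_j).
    """
    x = target
    eps = x - np.tanh(W @ x)                 # prediction error at each neuron
    W = W + lr * np.outer(eps, np.tanh(x))   # local, Hebbian-like update
    np.fill_diagonal(W, 0.0)                 # classical Hopfield conventions;
    return 0.5 * (W + W.T)                   # symmetrizing is our assumption

for _ in range(200):                         # epochs (illustrative)
    for p in patterns:
        W = pc_train_step(W, p)

# Retrieval uses the classical Hopfield dynamics: start from a corrupted
# pattern and iterate the sign update toward a fixed point.
probe = patterns[0].copy()
probe[:10] *= -1.0                           # flip 10 of 64 bits
for _ in range(20):
    probe = np.sign(W @ probe)
print("overlap with stored memory:", probe @ patterns[0] / n)  # ~1.0 on success
```

Note how the update at each synapse uses only the postsynaptic error and presynaptic activity; no backward pass through a computation graph and no unrolling of the recurrent dynamics in time is required, which is the property the abstract highlights.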
