Emergent Mind

Deep Learning without Weight Symmetry

(2405.20594)
Published May 31, 2024 in cs.LG , cs.AI , and q-bio.NC

Abstract

Backpropagation (BP), a foundational algorithm for training artificial neural networks, predominates in contemporary deep learning. Although highly successful, it is often considered biologically implausible. A significant limitation arises from the need for precise symmetry between connections in the backward and forward pathways to backpropagate gradient signals accurately, which is not observed in biological brains. Researchers have proposed several algorithms to alleviate this symmetry constraint, such as feedback alignment and direct feedback alignment. However, their divergence from backpropagation dynamics presents challenges, particularly in deeper networks and convolutional layers. Here we introduce the Product Feedback Alignment (PFA) algorithm. Our findings demonstrate that PFA closely approximates BP and achieves comparable performance in deep convolutional networks while avoiding explicit weight symmetry. Our results offer a novel solution to the longstanding weight symmetry problem, leading to more biologically plausible learning in deep convolutional networks compared to earlier methods.

Figure: Comparison of learning algorithms for multilayer networks transmitting errors backward: BP, FA, DFA, SF, KP, WM, PAL, PFA.

Overview

  • The paper introduces the Product Feedback Alignment (PFA) algorithm to address the weight symmetry problem in backpropagation (BP), proposing a more biologically plausible alternative.

  • PFA aligns feedforward weights with the product of two feedback weights, allowing performance comparable to BP even in deeper networks and complex tasks, validated through experiments on MNIST, CIFAR-10, and ImageNet datasets.

  • The algorithm shows robustness in sparse connectivity scenarios, demonstrating superior resilience compared to other methods like Feedback Alignment (FA) and Direct Feedback Alignment (DFA), with potential for neuromorphic system applications.

An Overview of the Product Feedback Alignment Algorithm

The paper "Deep Learning without Weight Symmetry" by Ji-An Li and Marcus K. Benna introduces the Product Feedback Alignment (PFA) algorithm, addressing the notorious weight symmetry problem in backpropagation (BP). BP remains a cornerstone algorithm for training artificial neural networks but is often criticized for its lack of biological plausibility. Its requirement for symmetric weights in forward and backward connections is not observed in biological neural networks. The new algorithm proposed in this work seeks to overcome this limitation while maintaining performance comparable to BP.

Introduction and Motivation

Artificial neural networks, much like their biological counterparts, need to efficiently update synaptic weights to improve performance on tasks. The BP algorithm, though successful, is not biologically plausible due to the need for precisely symmetric weights in the forward and backward passes. Biological neural networks do not exhibit this symmetry, prompting the need for alternative learning rules. Previous attempts to address this issue, such as Feedback Alignment (FA) and Direct Feedback Alignment (DFA), have shown limited performance, especially in deep networks and convolutional layers.
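As context for the comparison, the core of feedback alignment fits in a few lines of NumPy. The sketch below is only an illustration of the FA update rule on a toy single-sample task; the layer sizes, learning rate, and initialization scales are assumptions for the example, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network trained with Feedback Alignment (FA): the
# backward pass replaces W2.T with a fixed random matrix B. All sizes
# and hyperparameters are illustrative choices, not from the paper.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
B = rng.normal(0.0, 0.1, (n_hid, n_out))  # fixed random feedback weights

x = rng.normal(size=(n_in,))
target = rng.normal(size=(n_out,))

def loss():
    h = np.maximum(W1 @ x, 0.0)
    e = W2 @ h - target
    return 0.5 * float(e @ e)

initial_loss = loss()
lr = 0.05
for _ in range(200):
    a1 = W1 @ x
    h = np.maximum(a1, 0.0)        # ReLU hidden activity
    e = W2 @ h - target            # output error under squared loss
    delta_h = (B @ e) * (a1 > 0)   # FA: B @ e where BP would use W2.T @ e
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
final_loss = loss()
```

Despite the random feedback, the loss still decreases, because the forward weights gradually align with the feedback pathway; this alignment effect is what breaks down in deeper networks and convolutional layers.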

Product Feedback Alignment (PFA) Algorithm

The PFA algorithm proposed in this study avoids explicit weight symmetry by introducing an additional population of neurons to align forward weights with the product of feedback weights. Mathematically, the feedforward weights W come to align with the transposed product of two fixed feedback weight matrices R and B, i.e., W ∝ (RB)^T. This innovation allows PFA to approximate the performance of BP closely, even in deeper networks and more complex tasks.
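The backward pass implied by this description can be sketched as follows. This is a hedged illustration of the error routing only, with assumed sizes: B projects the output error into the additional (expanded) neuron population of size k, and R projects it back to the hidden layer, so the effective feedback matrix is the product R @ B:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes (assumptions, not from the paper); k > n_out sets
# the expansion ratio of the additional neuron population.
n_hid, n_out, k = 16, 4, 64

# Two fixed random feedback weight matrices.
B = rng.normal(0.0, 1.0 / np.sqrt(n_out), (k, n_out))
R = rng.normal(0.0, 1.0 / np.sqrt(k), (n_hid, k))

e = rng.normal(size=(n_out,))   # output-layer error signal

# Two-stage backward pass through the expanded population...
u = B @ e                       # error in the expanded population
delta_h = R @ u                 # error delivered to the hidden layer

# ...which is equivalent to a single pass through the product R @ B,
# the quantity the forward weights are said to align with: W ∝ (R @ B).T
product_check = np.allclose(delta_h, (R @ B) @ e)
```

Because neither R nor B individually mirrors W, no single backward connection needs to be symmetric with a forward one; symmetry is only approximated at the level of the product.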

Empirical Results

The effectiveness of PFA is validated through various experiments, including training on the MNIST, CIFAR-10, and ImageNet datasets using different neural network architectures:

  • MNIST Dataset: A two-hidden-layer feedforward network was trained. PFA achieved test accuracy comparable to BP, significantly outperforming FA and DFA. Metrics such as backward-forward weight alignment and weight norm ratio further confirmed that PFA approximates BP closely.
  • CIFAR-10 Dataset: ResNet-20 was employed for this experiment. PFA maintained performance consistency with BP and SF, particularly excelling over FA and DFA in task accuracy and stability of error propagation.
  • ImageNet Dataset: Trained with ResNet-18, PFA performed near the level of BP, surpassing SF, which struggled with this more complex dataset and architecture.

Sparse Connectivity

The authors explored PFA's robustness in scenarios with sparse connections, a feature typical in biological brains. While task performance degraded with increasing sparsity in FA, DFA, and SF, PFA demonstrated superior resilience, suggesting potential advantages over existing approaches under biologically realistic constraints.
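One structural property consistent with this robustness can be illustrated directly; note this is our sketch of a plausible contributing factor, with an assumed random-mask sparsification scheme, not the paper's analysis. Even when the two feedback matrices are individually very sparse, their product tends to be much denser, because each entry of the product sums over k paths through the expanded population:

```python
import numpy as np

rng = np.random.default_rng(2)

def sparsify(M, density):
    # Keep roughly a `density` fraction of entries, zeroing the rest
    # (an assumed sparsification scheme for illustration).
    return M * (rng.random(M.shape) < density)

n_hid, n_out, k = 16, 4, 64
density = 0.1  # 90% of feedback connections removed

B = sparsify(rng.normal(size=(k, n_out)), density)
R = sparsify(rng.normal(size=(n_hid, k)), density)

# Fraction of nonzero entries in the effective feedback matrix R @ B:
# each entry is nonzero unless all k paths through the expanded
# population vanish, so the product stays comparatively dense.
density_RB = float(np.mean((R @ B) != 0))
```

With these illustrative numbers, each product entry survives with probability about 1 - (1 - density**2)**k ≈ 0.47, far above the 0.1 density of R and B themselves, so the error signal still reaches most hidden units.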

Implications and Future Directions

The introduction of PFA provides meaningful insights into developing more biologically plausible learning algorithms for deep neural networks. By efficiently avoiding explicit weight symmetry, PFA promises practical applications in scenarios where biological realism is paramount, such as neuromorphic systems.

The theoretical implications suggest that alignment mechanisms leveraging additional neuronal populations can facilitate more complex learning tasks without the constraints of traditional BP. Future research may explore plasticity rules for feedback weight adjustment to reduce the expansion ratio in PFA, potentially enhancing its biological plausibility.

In summary, the PFA algorithm marks a significant step towards reconciling the performance of deep learning models with the constraints observed in biological neural networks. While challenges such as convolutional weight sharing and computational overhead remain, PFA's promise in handling sparse connectivity and its close approximation to BP in performance positions it as a substantive advancement in the quest for biologically plausible algorithms.
