Towards Biologically Plausible Computing: A Comprehensive Comparison (2406.16062v1)

Published 23 Jun 2024 in cs.NE

Abstract: Backpropagation is a cornerstone algorithm in training neural networks for supervised learning, which uses a gradient descent method to update network weights by minimizing the discrepancy between actual and desired outputs. Despite its pivotal role in propelling deep learning advancements, the biological plausibility of backpropagation is questioned due to its requirements for weight symmetry, global error computation, and dual-phase training. To address this long-standing challenge, many studies have endeavored to devise biologically plausible training algorithms. However, a fully biologically plausible algorithm for training multilayer neural networks remains elusive, and interpretations of biological plausibility vary among researchers. In this study, we establish criteria for biological plausibility that a desirable learning algorithm should meet. Using these criteria, we evaluate a range of existing algorithms considered to be biologically plausible, including Hebbian learning, spike-timing-dependent plasticity, feedback alignment, target propagation, predictive coding, the forward-forward algorithm, perturbation learning, local losses, and energy-based learning. Additionally, we empirically evaluate these algorithms across diverse network architectures and datasets. We compare the feature representations learned by these algorithms with brain activity recorded by non-invasive devices under identical stimuli, aiming to identify which algorithm can most accurately replicate brain activity patterns. We hope that this study will inspire the development of new biologically plausible algorithms for training multilayer networks, thereby fostering progress in both neuroscience and machine learning.
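The weight-symmetry requirement mentioned in the abstract, and how feedback alignment relaxes it, can be illustrated with a minimal NumPy sketch. This is not the paper's code; the network sizes, learning rate, and variable names are illustrative assumptions. The key point is that backpropagation propagates the output error through the transpose of the forward weights (`W2.T`, the "weight transport" problem), whereas feedback alignment substitutes a fixed random feedback matrix `B`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: x -> h = tanh(W1 x) -> y = W2 h
W1 = rng.normal(0.0, 0.5, (8, 4))
W2 = rng.normal(0.0, 0.5, (3, 8))
B = rng.normal(0.0, 0.5, (3, 8))   # fixed random feedback matrix (feedback alignment)

x = rng.normal(size=4)             # input
t = rng.normal(size=3)             # target

h = np.tanh(W1 @ x)                # hidden activations
y = W2 @ h                         # network output
e = y - t                          # output-layer error

# Backpropagation: the hidden error signal reuses the forward weights W2
# (weight symmetry), which is considered biologically implausible.
delta_bp = (W2.T @ e) * (1.0 - h ** 2)

# Feedback alignment: the same update, but error is carried backward by the
# fixed random matrix B instead of W2 -- no weight transport is needed.
delta_fa = (B.T @ e) * (1.0 - h ** 2)

lr = 0.1
W1 -= lr * np.outer(delta_fa, x)   # hidden-layer update from the random feedback path
W2 -= lr * np.outer(e, h)          # output-layer update is local in both schemes
```

In practice, the forward weights gradually "align" with the fixed feedback matrix during training, which is why the random feedback signal still produces useful gradients.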

Authors (19)
  1. Changze Lv (22 papers)
  2. Yufei Gu (4 papers)
  3. Zhengkang Guo (2 papers)
  4. Zhibo Xu (6 papers)
  5. Yixin Wu (18 papers)
  6. Feiran Zhang (4 papers)
  7. Tianyuan Shi (10 papers)
  8. Zhenghua Wang (7 papers)
  9. Ruicheng Yin (5 papers)
  10. Yu Shang (13 papers)
  11. Siqi Zhong (2 papers)
  12. Xiaohua Wang (26 papers)
  13. Muling Wu (13 papers)
  14. Wenhao Liu (83 papers)
  15. Tianlong Li (13 papers)
  16. Jianhao Zhu (4 papers)
  17. Cenyuan Zhang (10 papers)
  18. Zixuan Ling (8 papers)
  19. Xiaoqing Zheng (44 papers)
Citations (1)
