RNNs of RNNs: Recursive Construction of Stable Assemblies of Recurrent Neural Networks (2106.08928v6)

Published 16 Jun 2021 in cs.LG, math.DS, and q-bio.NC

Abstract: Recurrent neural networks (RNNs) are widely used throughout neuroscience as models of local neural activity. Many properties of single RNNs are well characterized theoretically, but experimental neuroscience has moved in the direction of studying multiple interacting areas, and RNN theory needs to be likewise extended. We take a constructive approach towards this problem, leveraging tools from nonlinear control theory and machine learning to characterize when combinations of stable RNNs will themselves be stable. Importantly, we derive conditions which allow for massive feedback connections between interacting RNNs. We parameterize these conditions for easy optimization using gradient-based techniques, and show that stability-constrained "networks of networks" can perform well on challenging sequential-processing benchmark tasks. Altogether, our results provide a principled approach towards understanding distributed, modular function in the brain.
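
The stability results described here build on contraction analysis: each module RNN is constrained to be contracting, and conditions on the interconnection weights guarantee that the assembled "network of networks" contracts as well. As a rough illustration only, the sketch below implements one classical sufficient test of this flavor (a small-gain-style check: a test matrix built from per-module contraction rates and coupling-strength bounds must have a negative-definite symmetric part). The function and variable names are illustrative assumptions, not the authors' code, and the paper's actual conditions are deliberately less conservative, permitting the strong feedback connections this simple check would reject.

```python
# Hedged sketch (not the authors' implementation): a classical sufficient
# condition for an interconnection of contracting modules to remain
# contracting. Assumes module i contracts at rate lambda_i in the identity
# metric and that coupling_norms[i][j] upper-bounds the induced 2-norm of
# the weight block feeding module j's activity into module i.
import numpy as np

def assembly_is_contracting(contraction_rates, coupling_norms):
    """Conservative stability check for a 'network of networks'."""
    T = np.array(coupling_norms, dtype=float)
    # Diagonal: negated contraction rates; off-diagonal: coupling bounds.
    np.fill_diagonal(T, -np.asarray(contraction_rates, dtype=float))
    sym = 0.5 * (T + T.T)
    # Negative-definite symmetric part => the assembled system contracts.
    return bool(np.linalg.eigvalsh(sym).max() < 0.0)

# Three modules, each contracting at unit rate, with weak cross-couplings.
rates = [1.0, 1.0, 1.0]
K = [[0.0, 0.3, 0.1],
     [0.2, 0.0, 0.3],
     [0.1, 0.2, 0.0]]
print(assembly_is_contracting(rates, K))  # True: couplings are small enough
```

Per the abstract, the paper goes further by reparameterizing its stability conditions so that gradient-based optimization can search directly over provably stable assemblies, rather than checking stability after the fact as this sketch does.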

Authors (3)
  1. Leo Kozachkov (10 papers)
  2. Michaela Ennis (3 papers)
  3. Jean-Jacques Slotine (80 papers)
Citations (16)
