Modular Boundaries in Recurrent Neural Networks (2310.20601v2)

Published 31 Oct 2023 in q-bio.NC, cs.AI, and cs.LG

Abstract: Recent theoretical and experimental work in neuroscience has focused on the representational and dynamical character of neural manifolds -- subspaces in neural activity space wherein many neurons coactivate. Importantly, neural populations studied under this "neural manifold hypothesis" are continuous and not cleanly divided into separate neural populations. This perspective clashes with the "modular hypothesis" of brain organization, wherein neural elements maintain an "all-or-nothing" affiliation with modules. In line with this modular hypothesis, recent research on recurrent neural networks suggests that multi-task networks become modular across training, such that different modules specialize for task-general dynamical motifs. If the modular hypothesis is true, then it would be important to use a dimensionality reduction technique that captures modular structure. Here, we investigate the features of such a method. We leverage RNNs as a model system to study the character of modular neural populations, using a community detection method from network science known as modularity maximization to partition neurons into distinct modules. These partitions allow us to ask the following question: do these modular boundaries matter to the system? ...
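The abstract describes partitioning an RNN's units into discrete modules via modularity maximization, a community detection technique from network science. As a minimal sketch of that idea -- assuming the graph is built from the absolute, symmetrized recurrent weights and optimized with networkx's greedy modularity routine, which may differ from the paper's actual graph construction and optimizer -- one could do the following:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical stand-in for a trained RNN's recurrent weight matrix (n_units x n_units);
# in practice this would come from the trained multi-task network.
rng = np.random.default_rng(0)
n_units = 64
W_rec = rng.normal(scale=1.0 / np.sqrt(n_units), size=(n_units, n_units))

# Build an undirected, weighted graph from absolute connection strengths,
# symmetrizing the recurrent weights (one of several possible graph constructions).
A = np.abs(W_rec) + np.abs(W_rec).T
np.fill_diagonal(A, 0.0)
G = nx.from_numpy_array(A)

# Modularity maximization: greedily merge communities to maximize modularity Q.
communities = greedy_modularity_communities(G, weight="weight")

# Each unit receives an "all-or-nothing" module assignment, as in the modular hypothesis.
module_of = {unit: m for m, members in enumerate(communities) for unit in members}
print(f"Found {len(communities)} modules; unit 0 is in module {module_of[0]}")
```

Other reasonable choices under the same framework include building the graph from pairwise activity correlations rather than weights, or optimizing modularity with a Louvain-style algorithm.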

Citations (2)
