Interpretable statistical representations of neural population dynamics and geometry (2304.03376v4)
Abstract: The dynamics of neuron populations commonly evolve on low-dimensional manifolds. Thus, we need methods that learn the dynamical processes over neural manifolds to infer interpretable and consistent latent representations. We introduce a representation learning method, MARBLE, that decomposes on-manifold dynamics into local flow fields and maps them into a common latent space using unsupervised geometric deep learning. In simulated non-linear dynamical systems, recurrent neural networks, and experimental single-neuron recordings from primates and rodents, we discover emergent low-dimensional latent representations that parametrise high-dimensional neural dynamics during gain modulation, decision-making, and changes in internal state. These representations are consistent across neural networks and animals, enabling the robust comparison of cognitive computations. Extensive benchmarking demonstrates state-of-the-art within- and across-animal decoding accuracy of MARBLE compared with current representation learning approaches, with minimal user input. Our results suggest that manifold structure provides a powerful inductive bias for developing decoding algorithms and assimilating data across experiments.
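To make the abstract's pipeline concrete, the sketch below decomposes sampled trajectories into local flow-field descriptors and projects descriptors from two different systems into one shared latent space. This is a minimal conceptual analogue only, not the MARBLE implementation: the helper `local_flow_features`, the k-nearest-neighbour descriptor, and the PCA embedding are illustrative stand-ins for MARBLE's unsupervised geometric deep learning.

```python
# Conceptual sketch (NOT the MARBLE API): represent each point on a
# trajectory by the flow vectors in its local neighbourhood, then embed
# descriptors from multiple systems into a common latent space.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import PCA

def local_flow_features(points, vectors, k=10):
    """Concatenate the flow vectors of each sample's k nearest
    neighbours, giving a crude descriptor of the local flow field."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nbrs.kneighbors(points)
    # Shape: (n_samples, k * dim); each row describes one local patch.
    return vectors[idx].reshape(len(points), -1)

rng = np.random.default_rng(0)

# Two noisy planar trajectories, stand-ins for neural population states
# recorded under two conditions (e.g. two gain levels).
t = np.linspace(0, 4 * np.pi, 500)
traj_a = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((500, 2))
traj_b = np.c_[np.cos(t), 0.5 * np.sin(t)] + 0.05 * rng.standard_normal((500, 2))

descriptors = []
for traj in (traj_a, traj_b):
    pts, vecs = traj[:-1], np.diff(traj, axis=0)  # states and local flow
    descriptors.append(local_flow_features(pts, vecs))

# Embed both systems' flow-field descriptors jointly, so dynamics from
# different systems become directly comparable in one latent space.
z = PCA(n_components=3).fit_transform(np.vstack(descriptors))
print(z.shape)  # (998, 3): a common 3-d embedding of both flow fields
```

In this toy version, points whose local flow fields are similar land near each other in the shared embedding regardless of which trajectory they came from, which is the property that lets representations be compared across networks and animals.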