Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics (2206.02972v2)
Abstract: Learning interpretable representations of neural dynamics at a population level is a crucial first step toward understanding how observed neural activity relates to perception and behavior. Models of neural dynamics often focus either on low-dimensional projections of neural activity or on learning dynamical systems that explicitly relate to the neural state over time. We discuss how these two approaches are interrelated by considering dynamical systems as representative of flows on a low-dimensional manifold. Building on this concept, we propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time-series data as a sparse combination of simpler, more interpretable components. Our model is trained through a dictionary learning procedure in which we leverage recent results on tracking sparse vectors over time. For a given number of parameters, the decomposed dynamics are more expressive than previous switched approaches and enable modeling of overlapping and non-stationary dynamics. In both continuous-time and discrete-time instructional examples, focusing on intuitive low-dimensional non-stationary linear and nonlinear systems, we demonstrate that our model closely approximates the original system, learns efficient representations, and captures smooth transitions between dynamical modes. Furthermore, we highlight our model's ability to efficiently capture and demix population dynamics generated by multiple independent subnetworks, a task that is computationally impractical for switched models. Finally, we apply our model to whole-brain neural recordings of C. elegans, illustrating a diversity of dynamics that is obscured when activity is classified into discrete states.
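To make the decomposed model concrete, below is a minimal sketch of discrete-time dLDS fitting: the state evolves as x_{t+1} ≈ (Σ_m c_m[t] F_m) x_t, where the F_m form a small dictionary of linear operators and the coefficients c[t] are sparse. This is not the authors' implementation; as an assumption for illustration, it substitutes a plain per-timestep LASSO for the paper's re-weighted-l1 dynamic filtering of the coefficients and uses a simple gradient step for the dictionary update. The function and variable names (`fit_dlds`, `F`, `C`) are ours.

```python
# Minimal sketch of discrete-time dLDS fitting (illustrative, not the paper's code).
# Assumptions: plain LASSO per time step instead of re-weighted-l1 dynamic filtering,
# and a gradient-descent dictionary update instead of the paper's learning procedure.
import numpy as np
from sklearn.linear_model import Lasso

def fit_dlds(X, num_ops=3, sparsity=0.01, lr=1e-3, n_iters=50, seed=0):
    """Fit x_{t+1} ~ (sum_m c_m[t] F_m) x_t by alternating sparse coding of the
    coefficients C and gradient updates of the operator dictionary F.

    X: (T, d) array, one latent state per row.
    Returns: F (num_ops, d, d) operator dictionary, C (T-1, num_ops) coefficients.
    """
    rng = np.random.default_rng(seed)
    T, d = X.shape
    F = rng.normal(scale=0.1, size=(num_ops, d, d))  # operator dictionary
    C = np.zeros((T - 1, num_ops))                   # sparse mixing coefficients

    for _ in range(n_iters):
        # --- Sparse coefficient inference: one LASSO regression per time step ---
        for t in range(T - 1):
            # Design matrix: column m is F_m @ x_t; target is x_{t+1}.
            D = np.stack([F[m] @ X[t] for m in range(num_ops)], axis=1)  # (d, num_ops)
            C[t] = Lasso(alpha=sparsity, fit_intercept=False,
                         max_iter=5000).fit(D, X[t + 1]).coef_
        # --- Dictionary update: gradient step on squared reconstruction error ---
        for m in range(num_ops):
            grad = np.zeros((d, d))
            for t in range(T - 1):
                pred = sum(C[t, k] * (F[k] @ X[t]) for k in range(num_ops))
                grad += -2.0 * C[t, m] * np.outer(X[t + 1] - pred, X[t])
            F[m] -= lr * grad / (T - 1)
    return F, C

# Usage (toy): X is any (T, d) latent trajectory, e.g. from PCA of neural recordings.
# F, C = fit_dlds(X, num_ops=3)
```

Because the coefficients are real-valued and sparse rather than one-hot, several operators can be active at once with smoothly varying weights, which is what lets this decomposition express overlapping, non-stationary dynamics that a switched model with the same number of operators cannot.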