
Variational excess risk bound for general state space models (2312.09607v1)

Published 15 Dec 2023 in stat.ME and stat.ML

Abstract: In this paper, we consider variational autoencoders (VAE) for general state space models. We use a backward factorization of the variational distributions to analyze the excess risk associated with VAE; such backward factorizations were recently proposed to perform online variational learning and to obtain upper bounds on the variational estimation error. When independent trajectories of sequences are observed, and under strong mixing assumptions on the state space model and on the variational distribution, we provide an oracle inequality that is explicit in the number of samples and in the length of the observation sequences. We then derive consequences of this theoretical result. In particular, when the data distribution is given by a state space model, we provide upper bounds for the Kullback-Leibler divergence between the data distribution and its estimator, and between the variational posterior and the estimated state space posterior distributions. Under classical assumptions, we prove that our results can be applied to Gaussian backward kernels built with dense and recurrent neural networks.
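To make the setup concrete, here is a minimal sketch (not the paper's implementation) of the backward factorization the abstract refers to: the variational posterior over latent states factorizes backward in time, q(x_{1:T} | y_{1:T}) = q(x_T | y_{1:T}) ∏_{t<T} q(x_t | x_{t+1}, y_t), with each backward kernel a Gaussian whose mean comes from a small dense network. All network shapes, parameter names, and the toy terminal distribution below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D_X, D_Y, D_H = 2, 2, 8  # latent dim, observation dim, hidden width (illustrative)

# Toy dense-network parameters for the Gaussian backward kernel mean.
W1 = rng.normal(scale=0.1, size=(D_H, D_X + D_Y))
b1 = np.zeros(D_H)
W2 = rng.normal(scale=0.1, size=(D_X, D_H))
b2 = np.zeros(D_X)
log_sigma = np.zeros(D_X)  # log standard deviation of the backward kernel

def backward_kernel_mean(x_next, y_t):
    """Mean of the Gaussian backward kernel q(x_t | x_{t+1}, y_t)."""
    h = np.tanh(W1 @ np.concatenate([x_next, y_t]) + b1)
    return W2 @ h + b2

def sample_backward(y, rng):
    """Draw one latent trajectory x_{1:T} by sampling backward in time."""
    T = y.shape[0]
    x = np.zeros((T, D_X))
    x[-1] = rng.normal(size=D_X)        # toy stand-in for q(x_T | y_{1:T})
    for t in range(T - 2, -1, -1):      # t = T-2, ..., 0
        mu = backward_kernel_mean(x[t + 1], y[t])
        x[t] = mu + np.exp(log_sigma) * rng.normal(size=D_X)
    return x

y_obs = rng.normal(size=(10, D_Y))      # a toy observation sequence
traj = sample_backward(y_obs, rng)
print(traj.shape)                        # one sampled latent trajectory
```

The backward ordering is what allows the kernels to condition each x_t on the already-sampled x_{t+1}, which is the structural property the paper exploits in its excess risk analysis.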
