
Analytical Approximation of the ELBO Gradient in the Context of the Clutter Problem (2404.10550v3)

Published 16 Apr 2024 in cs.LG and stat.ML

Abstract: We propose an analytical solution for approximating the gradient of the Evidence Lower Bound (ELBO) in variational inference problems where the statistical model is a Bayesian network consisting of observations drawn from a mixture of a Gaussian distribution and unrelated clutter, known as the clutter problem. The method employs the reparameterization trick to move the gradient operator inside the expectation and relies on the assumption that, because the likelihood factorizes over the observed data, the variational distribution is generally more compactly supported than the Gaussian distribution in the likelihood factors. This allows an efficient local approximation of the individual likelihood factors, which in turn yields an analytical solution for the integral defining the gradient expectation. We integrate the proposed gradient approximation as the expectation step in an EM (Expectation Maximization) algorithm for maximizing the ELBO and test it against classical deterministic approaches to Bayesian inference, such as the Laplace approximation, Expectation Propagation, and Mean-Field Variational Inference. The proposed method demonstrates good accuracy and convergence rate together with linear computational complexity.
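The paper's contribution is an analytical approximation of this gradient; the stochastic baseline it replaces, a Monte Carlo reparameterization estimator of the ELBO gradient for the clutter problem, can be sketched as follows. This is a minimal illustration, not the paper's method: the mixing weight `w`, clutter variance `a`, and prior variance are assumed values, and a central finite difference stands in for the score term that the paper approximates analytically (and that autodiff would compute in practice).

```python
import numpy as np

rng = np.random.default_rng(0)

# Clutter problem: each observation comes from the signal Gaussian
# N(theta, 1) with probability w, else from background clutter N(0, a).
w, a, prior_var = 0.5, 10.0, 100.0   # assumed model constants
theta_true, n = 2.0, 50
signal = rng.random(n) < w
x = np.where(signal,
             rng.normal(theta_true, 1.0, n),
             rng.normal(0.0, np.sqrt(a), n))

def log_joint(theta, x):
    """log p(x, theta): Gaussian prior plus factorized mixture likelihood."""
    lp = -0.5 * theta**2 / prior_var - 0.5 * np.log(2 * np.pi * prior_var)
    lik = (w * np.exp(-0.5 * (x - theta)**2) / np.sqrt(2 * np.pi)
           + (1 - w) * np.exp(-0.5 * x**2 / a) / np.sqrt(2 * np.pi * a))
    return lp + np.sum(np.log(lik))

def elbo_grad(m, log_s, x, num_samples=64):
    """Reparameterized MC estimate of the ELBO gradient for q = N(m, s^2).

    Writing theta = m + s * eps with eps ~ N(0, 1) moves the gradient
    inside the expectation: grad_m = E[d/dtheta log p(x, theta)] and
    grad_log_s = E[d/dtheta log p(x, theta) * s * eps] + 1 (entropy term).
    """
    s = np.exp(log_s)
    gm = gs = 0.0
    h = 1e-5
    for _ in range(num_samples):
        eps = rng.normal()
        theta = m + s * eps
        # d/dtheta log p(x, theta) by central finite differences
        d = (log_joint(theta + h, x) - log_joint(theta - h, x)) / (2 * h)
        gm += d              # dtheta/dm = 1
        gs += d * s * eps    # dtheta/dlog_s = s * eps
    return gm / num_samples, gs / num_samples + 1.0

# Plain gradient ascent on the ELBO over the variational parameters
m, log_s = 0.0, 0.0
for _ in range(500):
    gm, gs = elbo_grad(m, log_s, x)
    m += 1e-3 * gm
    log_s += 1e-3 * gs
```

After a few hundred updates `m` drifts toward the posterior mean (near the signal component's location) and `s` contracts well below the prior scale, but each gradient estimate is noisy; the paper's analytical approximation targets exactly this expectation without sampling, by exploiting the compact support of q relative to the likelihood factors.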



Authors (1)

