
A Particle-Based Algorithm for Distributional Optimization on Constrained Domains via Variational Transport and Mirror Descent (2208.00587v3)

Published 1 Aug 2022 in math.OC and cs.AI

Abstract: We consider the problem of minimizing an objective functional that admits a variational form and is defined over probability distributions on a constrained domain; the constraint poses challenges to both theoretical analysis and algorithmic design. Inspired by the mirror descent algorithm for constrained optimization, we propose an iterative particle-based algorithm, named Mirrored Variational Transport (mirrorVT), which extends the Variational Transport framework [7] to constrained domains. At each iteration, mirrorVT maps the particles to an unconstrained dual domain induced by a mirror map and then approximately performs Wasserstein gradient descent on the manifold of distributions defined over the dual space by pushing the particles; at the end of the iteration, the particles are mapped back to the original constrained domain. Through simulated experiments, we demonstrate the effectiveness of mirrorVT for minimizing functionals over probability distributions on simplex- and Euclidean-ball-constrained domains. We also analyze its theoretical properties and characterize its convergence to the global minimum of the objective functional.
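For readers skimming the abstract, the setup can be formalized roughly as follows; this is a paraphrase of the variational-transport setting of [7] in our own notation, not an excerpt from the paper:

$$\min_{\mu \in \mathcal{P}(\mathcal{X})} F(\mu), \qquad F(\mu) = \sup_{f}\Big\{ \int_{\mathcal{X}} f\, d\mu - F^{*}(f) \Big\},$$

where $\mathcal{X}$ is the constrained domain (e.g., a simplex or Euclidean ball) and $F^{*}$ is the conjugate functional appearing in the variational form. A mirror map $\phi$ sends each particle $x$ to a dual point $y = \nabla\phi(x)$ in an unconstrained space, a Wasserstein gradient step is taken on the pushforward distribution $(\nabla\phi)_{\#}\mu$, and the inverse map $\nabla\phi^{*} = (\nabla\phi)^{-1}$ carries the updated particles back to $\mathcal{X}$.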
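Below is a minimal, illustrative sketch of one mirrorVT-style iteration on the probability simplex. It assumes the entropic mirror map (gradient $\log x$, inverse softmax) and a simple potential-type objective $F(\mu) = \mathbb{E}_{x\sim\mu}[V(x)]$; the names `grad_V`, `target`, and `mirror_vt_step` are our own stand-ins under those assumptions, not the paper's implementation.

```python
# Illustrative sketch of one mirrorVT-style iteration on the simplex,
# assuming the entropic mirror map and a potential objective
# F(mu) = E_{x~mu}[V(x)] with V(x) = 0.5 * ||x - target||^2.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.6, 0.3, 0.1])            # fixed point on the simplex

def grad_V(x):
    # Euclidean gradient of V(x) = 0.5 * ||x - target||^2
    return x - target

def softmax(y):
    z = np.exp(y - y.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def mirror_vt_step(particles, step_size=0.5):
    """One iteration: map to the dual, take a gradient step, map back."""
    y = np.log(particles)                      # mirror map: x -> grad(phi)(x) = log x
    x = softmax(y)                             # current primal positions
    for i in range(len(y)):
        p = x[i]
        jac = np.diag(p) - np.outer(p, p)      # Jacobian of softmax at y[i]
        y[i] -= step_size * jac.T @ grad_V(p)  # descend the pushed-forward potential
    return softmax(y)                          # inverse mirror map back to the simplex

# Particles drawn from a Dirichlet distribution lie strictly inside the simplex.
particles = rng.dirichlet(np.ones(3), size=256)
for _ in range(200):
    particles = mirror_vt_step(particles)
print(particles.mean(axis=0))                  # concentrates near `target`
```

Note the role the mirror map plays here: the gradient step happens in the unconstrained dual space, so the particles returned by the inverse map are always feasible, with no projection step needed.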

References (17)
  1. Kwangjun Ahn and Sinho Chewi. Efficient constrained sampling via the mirror-Langevin algorithm. Advances in Neural Information Processing Systems, 34:28405–28418, 2021.
  2. Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003.
  3. Xiang Cheng and Peter Bartlett. Convergence of Langevin MCMC in KL-divergence. In Algorithmic Learning Theory, pages 186–211. PMLR, 2018.
  4. John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 272–279, 2008.
  5. Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773, 2012.
  6. Ya-Ping Hsieh, Ali Kavis, Paul Rolland, and Volkan Cevher. Mirrored Langevin dynamics. Advances in Neural Information Processing Systems, 31, 2018.
  7. Lewis Liu, Yufeng Zhang, Zhuoran Yang, Reza Babanezhad, and Zhaoran Wang. Infinite-dimensional optimization for zero-sum games via variational transport. In International Conference on Machine Learning, pages 7033–7044. PMLR, 2021.
  8. Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. Advances in Neural Information Processing Systems, 29, 2016.
  9. Yi-An Ma, Tianqi Chen, and Emily Fox. A complete recipe for stochastic gradient MCMC. Advances in Neural Information Processing Systems, 28, 2015.
  10. Lorenzo Rosasco, Mikhail Belkin, and Ernesto De Vito. A note on learning with integral operators. In COLT, 2009.
  11. Filippo Santambrogio. Euclidean, metric, and Wasserstein gradient flows: an overview. Bulletin of Mathematical Sciences, 7(1):87–154, 2017.
  12. Jiaxin Shi, Chang Liu, and Lester Mackey. Sampling with mirrored Stein operators. arXiv preprint arXiv:2106.12506, 2021.
  13. Cédric Villani. Optimal Transport: Old and New, volume 338. Springer, 2009.
  14. Max Welling and Yee Whye Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681–688, 2011.
  15. Andre Wibisono. Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. In Conference on Learning Theory, pages 2093–3027. PMLR, 2018.
  16. Pan Xu, Jinghui Chen, Difan Zou, and Quanquan Gu. Global convergence of Langevin dynamics based algorithms for nonconvex optimization. Advances in Neural Information Processing Systems, 31, 2018.
  17. Hongyi Zhang and Suvrit Sra. First-order methods for geodesically convex optimization. In Conference on Learning Theory, pages 1617–1638. PMLR, 2016.