High Probability Convergence for Accelerated Stochastic Mirror Descent (2210.00679v1)
Published 3 Oct 2022 in math.OC, cs.DS, and cs.LG
Abstract: In this work, we describe a generic approach for establishing high-probability convergence in stochastic convex optimization. In previous works, convergence is shown either only in expectation or with bounds that depend on the diameter of the domain. Instead, we prove high-probability convergence with bounds that depend on the initial distance to the optimal solution rather than on the domain diameter. The algorithms use step sizes analogous to those in the standard settings and are universal for Lipschitz functions, smooth functions, and their linear combinations.
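The abstract does not reproduce the algorithm itself. As a rough illustration of the (unaccelerated) stochastic mirror descent template the paper builds on, here is a minimal sketch assuming the Euclidean mirror map psi(x) = 0.5 * ||x||^2 (under which the Bregman proximal step reduces to a plain stochastic gradient update), a standard O(1/sqrt(t)) step-size schedule, and a toy noisy quadratic objective; the function name, step sizes, and test problem are all illustrative assumptions, not the paper's accelerated method or its analysis.

```python
import numpy as np

def stochastic_mirror_descent(grad_oracle, x0, steps, eta=0.1):
    """Sketch of stochastic mirror descent with the Euclidean mirror map,
    where the Bregman proximal step reduces to an SGD step. Illustrative
    only; not the accelerated algorithm analyzed in the paper."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad_oracle(x)           # stochastic (sub)gradient at x
        step = eta / np.sqrt(t)      # assumed O(1/sqrt(t)) schedule
        x = x - step * g             # Euclidean mirror/prox step
        avg += (x - avg) / t         # running average of iterates
    return avg

# Toy usage: minimize f(x) = 0.5 * ||x - b||^2 given gradients
# corrupted by additive Gaussian noise.
rng = np.random.default_rng(0)
b = np.array([1.0, -2.0, 3.0])
oracle = lambda x: (x - b) + 0.1 * rng.standard_normal(x.shape)
x_hat = stochastic_mirror_descent(oracle, x0=np.zeros(3), steps=5000)
print(x_hat)  # approaches b
```

With a non-Euclidean mirror map (e.g., negative entropy over the simplex), the update would instead solve a Bregman proximal step; the Euclidean choice above is only the simplest instance of the template.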