Convergence Rate Bounds for the Mirror Descent Method: IQCs and the Bregman Divergence (2204.00502v2)

Published 1 Apr 2022 in math.OC, cs.SY, and eess.SY

Abstract: This paper is concerned with convergence analysis for the mirror descent (MD) method, a well-known algorithm in convex optimization. An analysis framework via integral quadratic constraints (IQCs) is constructed to analyze the convergence rate of the MD method with strongly convex objective functions, in both continuous and discrete time. We formulate the problem of finding convergence rates of the MD algorithm as feasibility problems of linear matrix inequalities (LMIs) in both schemes. In particular, in continuous time, we show that the Bregman divergence function, which is commonly used as a Lyapunov function for this algorithm, is a special case of the class of Lyapunov functions associated with the Popov criterion, when the latter is applied to an appropriate reformulation of the problem. Thus, applying the Popov criterion, and combining it with other IQCs, can lead to convergence rate bounds with reduced conservatism. We also illustrate via examples that the derived convergence rate bounds can be tight.
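As background for the setup the abstract describes, here is a minimal NumPy sketch of the MD update and of the Bregman divergence D_psi(x, y) = psi(x) - psi(y) - <grad psi(y), x - y> that serves as its Lyapunov function. The negative-entropy mirror map, the toy objective, and all function names below are illustrative assumptions, not choices made in the paper; the paper's actual contribution, the IQC/LMI convergence-rate certificates, is not reproduced here.

```python
import numpy as np

# Mirror map psi and its gradient. Negative entropy is one standard choice
# (the paper treats general strongly convex mirror maps, not this one in
# particular); on the probability simplex it turns MD into the
# exponentiated-gradient update.
def psi(x):
    return np.sum(x * np.log(x))

def grad_psi(x):
    return np.log(x) + 1.0

def bregman(x, y):
    """Bregman divergence D_psi(x, y) = psi(x) - psi(y) - <grad_psi(y), x - y>."""
    return psi(x) - psi(y) - grad_psi(y) @ (x - y)

def mirror_descent(grad_f, x0, step, n_iters):
    """Entropic MD: x_{k+1} is proportional to x_k * exp(-step * grad_f(x_k))."""
    x = x0.copy()
    for _ in range(n_iters):
        x = x * np.exp(-step * grad_f(x))
        x /= x.sum()  # Bregman projection back onto the simplex
    return x

# Toy strongly convex objective f(x) = 0.5 * ||x - c||^2, whose minimizer
# over the simplex is c itself (c is already a probability vector).
c = np.array([0.2, 0.5, 0.3])
grad_f = lambda x: x - c

x = mirror_descent(grad_f, np.full(3, 1.0 / 3.0), step=0.5, n_iters=200)
# D_psi(x*, x_k) is the Lyapunov function whose decay rate the paper's
# IQC/LMI conditions certify; it should be near zero at convergence.
print(x, bregman(c, x))
```

In the paper's continuous-time analysis, this Bregman divergence is shown to arise as a special case of the Popov-criterion Lyapunov functions, which is what allows the LMI machinery to recover and then tighten the classical rate bounds.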

Citations (2)
