
Convergence Rate Bounds for the Mirror Descent Method: IQCs and the Bregman Divergence

(arXiv:2204.00502)
Published Apr 1, 2022 in math.OC, cs.SY, and eess.SY

Abstract

This paper is concerned with convergence analysis for the mirror descent (MD) method, a well-known algorithm in convex optimization. An analysis framework based on integral quadratic constraints (IQCs) is constructed to analyze the convergence rate of the MD method with strongly convex objective functions, in both continuous time and discrete time. In both settings, we formulate the problem of finding convergence rates for the MD algorithm as a feasibility problem involving linear matrix inequalities (LMIs). In particular, in continuous time we show that the Bregman divergence function, which is commonly used as a Lyapunov function for this algorithm, is a special case of the class of Lyapunov functions associated with the Popov criterion when the latter is applied to an appropriate reformulation of the problem. Thus, applying the Popov criterion, and combining it with other IQCs, can lead to convergence rate bounds with reduced conservatism. We also illustrate via examples that the derived convergence rate bounds can be tight.
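
As context for the algorithm being analyzed: the MD iterate is generated by a mirror map ψ via the update ∇ψ(x_{k+1}) = ∇ψ(x_k) − η∇f(x_k), and the Bregman divergence D_ψ(x, y) = ψ(x) − ψ(y) − ⟨∇ψ(y), x − y⟩ is the candidate Lyapunov function the abstract refers to. Below is a minimal Python sketch of this update for the standard negative-entropy mirror map ψ(x) = Σᵢ xᵢ log xᵢ on the probability simplex; the quadratic objective, step size, and iteration count are hypothetical choices for illustration only and are not taken from the paper.

```python
import numpy as np

def mirror_descent_entropy(grad_f, x0, step, num_iters):
    """Entropic mirror descent on the probability simplex.

    Implements the dual-space update grad(psi)(x_{k+1}) = grad(psi)(x_k)
    - step * grad(f)(x_k). For the negative-entropy mirror map this
    inverts to a multiplicative (exponentiated-gradient) step followed
    by normalization back onto the simplex.
    """
    x = x0.copy()
    for _ in range(num_iters):
        x = x * np.exp(-step * grad_f(x))  # mirror (multiplicative) step
        x = x / x.sum()                    # Bregman projection onto simplex
    return x

# Hypothetical strongly convex objective f(x) = 0.5 * ||x - c||^2,
# whose minimizer over the simplex is c itself (c lies on the simplex).
c = np.array([0.2, 0.5, 0.3])
grad_f = lambda x: x - c

x0 = np.ones(3) / 3.0  # uniform starting point
x_final = mirror_descent_entropy(grad_f, x0, step=0.5, num_iters=200)
print(np.round(x_final, 4))  # approaches c = [0.2, 0.5, 0.3]
```

The multiplicative form of the step follows from ∇ψ(x) = 1 + log x for the negative entropy: applying (∇ψ)⁻¹ after the dual-space gradient step yields an exponentiated-gradient update, and the normalization is the Bregman projection onto the simplex.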
