On the convergence of decentralized gradient descent with diminishing stepsize, revisited

(2203.09079)
Published Mar 17, 2022 in math.OC, cs.SY, and eess.SY

Abstract

Distributed optimization has received considerable interest in recent years due to its wide range of applications. In this work, we revisit the convergence properties of the decentralized gradient descent method [A. Nedić and A. Ozdaglar (2009)] on the whole space, given by $$ x_i(t+1) = \sum_{j=1}^{m} w_{ij}\, x_j(t) - \alpha(t)\, \nabla f_i(x_i(t)), $$ where the stepsize is given as $\alpha(t) = \frac{a}{(t+w)^p}$ with $0 < p \leq 1$. Under a strong convexity assumption on the total cost function $f$, with the local cost functions $f_i$ not necessarily convex, we show that the sequence converges to the optimizer at rate $O(t^{-p})$ when the values of $a>0$ and $w>0$ are suitably chosen.
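
For intuition, the update above combines one consensus (weighted averaging) step over the mixing weights $w_{ij}$ with one local gradient step using the decaying stepsize $\alpha(t)$. Below is a minimal Python sketch of that iteration; the function name, interface, and the toy quadratic example are illustrative assumptions for exposition, not code from the paper.

```python
import numpy as np

def decentralized_gradient_descent(W, grads, x0, a=1.0, w=1.0, p=1.0, num_iters=1000):
    """Decentralized gradient descent with diminishing stepsize alpha(t) = a / (t + w)**p.

    W        : (m, m) doubly stochastic mixing matrix with entries w_{ij}
    grads    : list of m callables, grads[i](x) -> gradient of f_i at x
    x0       : (m, d) array of initial local iterates x_i(0)
    a, w, p  : stepsize parameters with a > 0, w > 0, 0 < p <= 1
    """
    x = np.asarray(x0, dtype=float).copy()
    m = len(grads)
    for t in range(num_iters):
        alpha = a / (t + w) ** p                    # diminishing stepsize
        mixed = W @ x                               # consensus (weighted averaging) step
        g = np.stack([grads[i](x[i]) for i in range(m)])
        x = mixed - alpha * g                       # local gradient step
    return x                                        # rows x_i(T) approach the optimizer of f


# Toy usage (assumed example): f_i(x) = 0.5 * ||x - c_i||^2, so the total cost f = sum_i f_i
# is strongly convex and its optimizer is the average of the c_i.
if __name__ == "__main__":
    m, d = 3, 2
    c = np.array([[0.0, 1.0], [2.0, -1.0], [4.0, 3.0]])
    grads = [lambda x, ci=ci: x - ci for ci in c]
    W = np.full((m, m), 1.0 / m)                    # complete-graph averaging weights
    x_final = decentralized_gradient_descent(W, grads, np.zeros((m, d)),
                                              a=1.0, w=1.0, p=1.0, num_iters=2000)
    print(x_final)                                  # each row is close to the mean of c, [2.0, 1.0]
```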
