An Optimal Stochastic Algorithm for Decentralized Nonconvex Finite-sum Optimization

(2210.13931)
Published Oct 25, 2022 in math.OC and cs.LG

Abstract

This paper studies the decentralized nonconvex optimization problem $\min_{x\in\mathbb{R}^d} f(x)\triangleq \frac{1}{m}\sum_{i=1}^m f_i(x)$, where $f_i(x)\triangleq \frac{1}{n}\sum_{j=1}^n f_{i,j}(x)$ is the local function on the $i$-th agent of the network. We propose a novel stochastic algorithm called DEcentralized probAbilistic Recursive gradiEnt deScenT (DEAREST), which integrates the techniques of variance reduction, gradient tracking and multi-consensus. We construct a Lyapunov function that simultaneously characterizes the function value, the gradient estimation error and the consensus error for the convergence analysis. Based on this measure, we provide a concise proof to show that DEAREST requires at most ${\mathcal O}(mn+\sqrt{mn}\,L\varepsilon^{-2})$ incremental first-order oracle (IFO) calls and ${\mathcal O}\left({L\varepsilon^{-2}}/{\sqrt{1-\lambda_2(W)}}\,\right)$ communication rounds to find an $\varepsilon$-stationary point in expectation, where $L$ is the smoothness parameter and $\lambda_2(W)$ is the second-largest eigenvalue of the gossip matrix $W$. Both the IFO complexity and the communication complexity match the corresponding lower bounds. To the best of our knowledge, DEAREST is the first optimal algorithm for decentralized nonconvex finite-sum optimization.
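For intuition, here is a minimal Python sketch of the three ingredients the abstract names: a probabilistic (SARAH/PAGE-style) recursive gradient estimator for variance reduction, gradient tracking, and repeated gossip rounds for multi-consensus. This is an illustrative assumption rather than the paper's exact algorithm: the quadratic component functions, ring-topology gossip matrix, step size, minibatch size, and full-gradient refresh probability are all placeholder choices, and the accelerated consensus used in the paper's analysis is replaced here by plain repeated averaging.

```python
# Illustrative sketch of a DEAREST-style decentralized method:
# variance reduction + gradient tracking + multi-consensus.
# All problem data and hyperparameters below are assumptions for
# demonstration, not the paper's algorithm or tuning.
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 8, 50, 10                 # agents, local components, dimension

# Synthetic local components f_{i,j}(x) = 0.5 * ||A[i,j] @ x - b[i,j]||^2.
A = rng.normal(size=(m, n, d)) / np.sqrt(d)
b = rng.normal(size=(m, n))

def comp_grad(i, idx, x):
    """Average gradient of the components {f_{i,j} : j in idx} at x."""
    Ai = A[i, idx]                              # (|idx|, d)
    return Ai.T @ (Ai @ x - b[i, idx]) / len(idx)

# Ring-topology gossip matrix W (symmetric, doubly stochastic).
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = W[i, (i + 1) % m] = 0.25

def multi_consensus(Z, K=3):
    """K rounds of gossip averaging; the paper uses accelerated
    (Chebyshev-style) mixing, plain averaging suffices for a sketch."""
    for _ in range(K):
        Z = W @ Z
    return Z

eta, p, batch = 0.1, 1 / np.sqrt(n), int(np.sqrt(n))   # assumed tuning
X = np.tile(rng.normal(size=d), (m, 1))                # common start point
V = np.stack([comp_grad(i, np.arange(n), X[i]) for i in range(m)])
S = V.copy()                                           # tracked gradients

for t in range(200):
    X_new = multi_consensus(X - eta * S)
    V_new = np.empty_like(V)
    for i in range(m):
        if rng.random() < p:        # occasionally refresh with a full local pass
            V_new[i] = comp_grad(i, np.arange(n), X_new[i])
        else:                       # recursive (SARAH-style) minibatch correction
            idx = rng.choice(n, size=batch, replace=False)
            V_new[i] = V[i] + comp_grad(i, idx, X_new[i]) - comp_grad(i, idx, X[i])
    # Gradient tracking: propagate the change in the local estimators.
    S = multi_consensus(S + V_new - V)
    X, V = X_new, V_new

x_bar = X.mean(axis=0)
g = np.mean([comp_grad(i, np.arange(n), x_bar) for i in range(m)], axis=0)
print("global gradient norm at average iterate:", np.linalg.norm(g))
```

The coin flip with probability $p$ is what makes the estimator "probabilistic": full local gradients are recomputed only occasionally, so the expected per-iteration IFO cost stays at the minibatch level while the recursive correction keeps the estimation variance controlled.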
