Stochastic Proximal Gradient Consensus Over Random Networks (1511.08905v2)

Published 28 Nov 2015 in math.OC, cs.IT, and math.IT

Abstract: We consider solving a convex, possibly stochastic optimization problem over a randomly time-varying multi-agent network. Each agent has access to some local objective function, and it only has unbiased estimates of the gradients of the smooth component. We develop a dynamic stochastic proximal-gradient consensus (DySPGC) algorithm with the following key features: i) it works for both static and certain randomly time-varying networks, ii) it allows the agents to utilize either exact or stochastic gradient information, iii) it is convergent with a provable rate. In particular, we show that the proposed algorithm converges to a global optimal solution at a rate of $\mathcal{O}(1/r)$ [resp. $\mathcal{O}(1/\sqrt{r})$] when the exact (resp. stochastic) gradient is available, where $r$ is the iteration counter. Interestingly, the developed algorithm bridges a number of seemingly unrelated distributed optimization algorithms, such as the EXTRA (Shi et al. 2014), the PG-EXTRA (Shi et al. 2015), the IC/IDC-ADMM (Chang et al. 2014), the DLM (Ling et al. 2015), and the classical distributed subgradient method. Identifying such relationships allows for significant generalization of these methods. We also discuss one such generalization, which accelerates the DySPGC (hence accelerating EXTRA, PG-EXTRA, and IC-ADMM).
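To make the flavor of the method concrete, below is a minimal sketch of a generic decentralized (stochastic) proximal-gradient scheme with a consensus step: each agent mixes its iterate with its neighbors through a weight matrix, takes a gradient step on its smooth local loss (optionally a noisy, unbiased estimate), and applies a proximal operator for the shared nonsmooth term. This is an illustrative assumption of the general template only, not the paper's DySPGC iteration, which additionally involves dual/consensus variables and random graph realizations and carries the stated $\mathcal{O}(1/r)$ and $\mathcal{O}(1/\sqrt{r})$ rates. All names (`consensus_prox_grad`, `prox_l1`) and the fixed mixing matrix are hypothetical.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def consensus_prox_grad(grad_fns, W, x0, step, lam, iters=100,
                        grad_noise=0.0, rng=None):
    """Generic (stochastic) proximal-gradient consensus sketch.

    grad_fns   : list of callables; grad_fns[i](x) returns the gradient of
                 agent i's smooth local objective at x.
    W          : doubly stochastic mixing matrix encoding the network.
    x0         : common initial point (1-D array).
    step       : gradient step size.
    lam        : weight of a shared l1 regularizer handled by the prox.
    grad_noise : std of additive Gaussian noise, modeling unbiased
                 stochastic gradient estimates.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(grad_fns)
    X = np.tile(x0, (n, 1))                     # one row of iterates per agent
    for _ in range(iters):
        X = W @ X                               # consensus (mixing) step
        for i in range(n):
            g = grad_fns[i](X[i])
            if grad_noise > 0:
                g = g + grad_noise * rng.standard_normal(g.shape)
            X[i] = prox_l1(X[i] - step * g, step * lam)  # proximal gradient step
    return X.mean(axis=0)

# Toy usage: 4 agents, each with quadratic local loss 0.5*||x - a_i||^2.
targets = [np.array([1.0, -2.0]), np.array([0.5, 0.0]),
           np.array([-1.0, 1.0]), np.array([2.0, 0.5])]
grads = [lambda x, a=a: x - a for a in targets]
W = np.full((4, 4), 0.25)                       # fully connected, uniform weights
x_hat = consensus_prox_grad(grads, W, np.zeros(2), step=0.1, lam=0.01, iters=500)
```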

Citations (75)
