Stochastic Proximal Gradient Consensus Over Random Networks (1511.08905v2)

Published 28 Nov 2015 in math.OC, cs.IT, and math.IT

Abstract: We consider solving a convex, possibly stochastic optimization problem over a randomly time-varying multi-agent network. Each agent has access to a local objective function and only has unbiased estimates of the gradients of its smooth component. We develop a dynamic stochastic proximal-gradient consensus (DySPGC) algorithm with the following key features: i) it works for both static and certain randomly time-varying networks, ii) it allows the agents to utilize either exact or stochastic gradient information, iii) it converges with a provable rate. In particular, we show that the proposed algorithm converges to a global optimal solution with a rate of $\mathcal{O}(1/r)$ [resp. $\mathcal{O}(1/\sqrt{r})$] when the exact (resp. stochastic) gradient is available, where $r$ is the iteration counter. Interestingly, the developed algorithm bridges a number of seemingly unrelated distributed optimization algorithms, such as EXTRA (Shi et al. 2014), PG-EXTRA (Shi et al. 2015), IC/IDC-ADMM (Chang et al. 2014), DLM (Ling et al. 2015), and the classical distributed subgradient method. Identifying these relationships allows for significant generalization of these methods. We also discuss one such generalization, which accelerates DySPGC (hence accelerating EXTRA, PG-EXTRA, and IC-ADMM).

Citations (75)
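
To make the problem setup concrete, here is a minimal sketch of a generic decentralized stochastic proximal-gradient method with consensus averaging on a fixed ring graph. It is not the paper's DySPGC update; the quadratic local objectives, the l1 regularizer, the Metropolis-style mixing matrix, and the constant step size are all illustrative assumptions chosen to show how agents combine neighbor averaging, an unbiased stochastic gradient of the smooth term, and a proximal step for the nonsmooth term.

```python
import numpy as np

# Hedged sketch: a generic decentralized stochastic proximal-gradient step with
# consensus averaging, NOT the exact DySPGC update from the paper.
# The problem, mixing matrix, and step size below are illustrative assumptions.

rng = np.random.default_rng(0)
n_agents, dim = 5, 10

# Each agent i privately holds a smooth term f_i(x) = 0.5 * ||A_i x - b_i||^2;
# a shared nonsmooth l1 term is handled through its proximal operator.
A = [rng.standard_normal((20, dim)) for _ in range(n_agents)]
x_true = rng.standard_normal(dim) * (rng.random(dim) < 0.3)
b = [Ai @ x_true + 0.01 * rng.standard_normal(20) for Ai in A]
lam = 0.05  # l1 regularization weight (assumed)

# Doubly stochastic mixing matrix for a static ring graph (assumed).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_grad(i, x):
    """Unbiased estimate of grad f_i(x): gradient on one sampled row of (A_i, b_i)."""
    k = rng.integers(A[i].shape[0])
    a, bk = A[i][k], b[i][k]
    return A[i].shape[0] * (a @ x - bk) * a

X = np.zeros((n_agents, dim))   # one local copy of the decision variable per agent
alpha = 0.01                    # constant step size (assumed for simplicity)

for r in range(2000):
    X_mix = W @ X               # consensus step: average with neighbors
    for i in range(n_agents):
        g = stochastic_grad(i, X[i])
        # proximal-gradient step on the mixed iterate
        X[i] = soft_threshold(X_mix[i] - alpha * g, alpha * lam)

print("consensus error:", np.linalg.norm(X - X.mean(axis=0)))
print("distance to x_true:", np.linalg.norm(X.mean(axis=0) - x_true))
```

With exact gradients one would expect the faster $\mathcal{O}(1/r)$-type behavior discussed in the abstract, whereas the stochastic gradients used above correspond to the slower $\mathcal{O}(1/\sqrt{r})$ regime.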
