
Prox-DBRO-VR: A Unified Analysis on Byzantine-Resilient Decentralized Stochastic Composite Optimization with Variance Reduction and Non-Asymptotic Convergence Rates (2305.08051v12)

Published 14 May 2023 in math.OC, cs.SY, and eess.SY

Abstract: Decentralized stochastic gradient algorithms efficiently solve large-scale finite-sum optimization problems when all agents in the network are reliable. However, most of these algorithms are not resilient to adverse conditions, such as malfunctioning agents, software bugs, and cyber attacks. This paper aims to handle a class of general composite optimization problems over multi-agent systems (MASs) in the presence of an unknown number of Byzantine agents. Building on a resilient aggregation mechanism and the proximal-gradient mapping method, a Byzantine-resilient decentralized stochastic proximal-gradient algorithmic framework is proposed, dubbed Prox-DBRO-VR, which achieves an optimization and control goal using only local computations and communications. To asymptotically reduce the noise variance arising from local gradient estimation and accelerate the convergence, we incorporate two localized variance-reduced (VR) techniques (SAGA and LSVRG) into Prox-DBRO-VR to design Prox-DBRO-SAGA and Prox-DBRO-LSVRG. By analyzing the contraction relationships among the gradient-learning error, resilient consensus condition, and convergence error in a unified theoretical framework, it is proved that both Prox-DBRO-SAGA and Prox-DBRO-LSVRG, with a well-designed constant (resp., decaying) step-size, converge linearly (resp., sub-linearly) inside an error ball around the optimal solution to the original problem under standard assumptions. A trade-off between convergence accuracy and Byzantine resilience in both linear and sub-linear cases is also characterized. In numerical experiments, the effectiveness and practicality of the proposed algorithms are demonstrated by solving a decentralized sparse machine-learning problem under various Byzantine attacks.
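The abstract names the ingredients of a local update (a resilient aggregation of neighbor states, a SAGA-style variance-reduced gradient estimate, and a proximal-gradient step for the composite regularizer) without spelling out the rule itself. The sketch below is only an illustration under stated assumptions, not the paper's Prox-DBRO-SAGA algorithm: it assumes an L1 regularizer (matching the sparse machine-learning experiment), substitutes a coordinate-wise trimmed mean for the unspecified resilient aggregation mechanism, and the names saga_prox_step, trimmed_mean, and soft_threshold are hypothetical.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1 (soft-thresholding); assumes an L1 regularizer.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def trimmed_mean(vectors, b):
    # Coordinate-wise trimmed mean: drop the b smallest and b largest values per
    # coordinate, then average. A common resilient rule, used here as a stand-in
    # for the aggregation mechanism the abstract leaves unspecified.
    V = np.sort(np.stack(vectors, axis=0), axis=0)
    return V[b:len(vectors) - b].mean(axis=0)

def saga_prox_step(x, neighbor_xs, grad_fn, grad_table, lr, lam, b, rng):
    # One illustrative local update for a reliable agent:
    #   1) screen neighbor states with a resilient aggregation,
    #   2) form a SAGA variance-reduced gradient estimate from local data,
    #   3) take a proximal (soft-thresholding) step for the L1 term.
    n = grad_table.shape[0]                  # number of local samples
    i = rng.integers(n)                      # sample one local data index
    g_new = grad_fn(x, i)                    # fresh stochastic gradient at x
    v = g_new - grad_table[i] + grad_table.mean(axis=0)    # SAGA estimator
    grad_table[i] = g_new                    # refresh the stored gradient
    agg = trimmed_mean([x] + list(neighbor_xs), b)          # resilient consensus
    return soft_threshold(agg - lr * v, lr * lam)

# Toy usage on an assumed least-squares + L1 local objective.
rng = np.random.default_rng(0)
A, y = rng.normal(size=(20, 5)), rng.normal(size=20)
grad_fn = lambda x, i: A[i] * (A[i] @ x - y[i])
x = np.zeros(5)
grad_table = np.stack([grad_fn(x, i) for i in range(20)])
neighbors = [x + 0.1 * rng.normal(size=5) for _ in range(4)]  # placeholder neighbor states
x = saga_prox_step(x, neighbors, grad_fn, grad_table, lr=0.05, lam=0.01, b=1, rng=rng)
```

In the paper, an LSVRG variant plays the same role as the gradient table above, and the step-size choice (constant versus decaying) determines whether convergence to the error ball is linear or sub-linear; those details are not reproduced here.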

Citations (2)

