
Distributed Optimization via Gradient Descent with Event-Triggered Zooming over Quantized Communication (2309.04588v1)

Published 8 Sep 2023 in eess.SY, cs.SY, and math.OC

Abstract: In this paper, we study unconstrained distributed optimization of strongly convex problems in which the exchange of information in the network is captured by a directed graph topology over digital channels with limited capacity (hence information must be quantized). Distributed methods in which nodes use quantized communication yield a solution in the proximity of the optimal solution, reaching an error floor that depends on the quantization level used: the finer the quantization, the lower the error floor. However, it is not possible to determine in advance the quantization level that ensures specific performance guarantees (such as achieving an error floor below a predefined threshold). Choosing a very fine quantization level that would guarantee the desired performance requires information packets of very large size, which is undesirable (it could increase the probability of packet losses, increase delays, etc.) and often infeasible due to the limited capacity of the available channels. In order to obtain a communication-efficient distributed solution that is sufficiently close to the optimal solution, we propose a quantized distributed optimization algorithm that converges in a finite number of steps and is able to adjust the quantization level accordingly. The proposed solution uses a finite-time distributed optimization protocol to find a solution to the problem for a given quantization level in a finite number of steps, and keeps refining the quantization level until the difference between two successive solutions obtained with different quantization levels falls below a pre-specified threshold.

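To make the two-level structure described in the abstract concrete, here is a minimal Python sketch of the idea: an inner quantized distributed gradient descent run at a fixed quantization step, wrapped in an outer "zooming" loop that halves the step until two successive solutions agree within a threshold. This is not the authors' exact protocol; the quadratic local costs, the directed ring with doubly stochastic weights, the uniform quantizer, the step size, and the halving rule are all illustrative assumptions.

```python
import numpy as np

# Illustrative setup (assumed, not from the paper): node i holds a strongly
# convex local cost f_i(x) = 0.5 * a_i * (x - b_i)^2 and communicates only
# quantized values over a directed ring.
rng = np.random.default_rng(0)
n = 5                                   # number of nodes
a = rng.uniform(1.0, 3.0, n)            # local curvatures (strong convexity)
b = rng.uniform(-5.0, 5.0, n)           # local minimizers
x_star = np.sum(a * b) / np.sum(a)      # minimizer of the aggregate cost

# Directed ring: each node averages its own value and its predecessor's.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.5

def quantize(x, delta):
    """Uniform quantizer with step delta (models the limited-capacity channel)."""
    return delta * np.round(x / delta)

def run_inner(x0, delta, alpha=0.1, iters=400):
    """Quantized distributed gradient descent at a fixed quantization level."""
    x = x0.copy()
    for _ in range(iters):
        q = quantize(x, delta)          # only quantized values cross the network
        consensus = W @ q               # mixing with in-neighbor values
        grad = a * (x - b)              # exact local gradients
        x = consensus - alpha * grad
    return x

# Outer "zooming" loop: refine the quantization step until two successive
# solutions (obtained with different quantization levels) are close enough.
delta, threshold = 1.0, 1e-3
prev = run_inner(np.zeros(n), delta)
while True:
    delta /= 2.0
    curr = run_inner(prev, delta)
    if np.max(np.abs(curr - prev)) < threshold:
        break
    prev = curr

print(f"aggregate optimum x* = {x_star:.4f}")
print(f"node estimates       = {np.round(curr, 4)}")
print(f"final quantization step = {delta}")
```

In this sketch the coarse levels give cheap, low-resolution packets and a large error floor, and each halving of the step lowers that floor; the stopping rule mirrors the event-triggered refinement in the abstract, terminating once further zooming no longer changes the solution appreciably.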