Accelerated Dual Descent for Network Optimization (1104.1157v1)

Published 6 Apr 2011 in math.OC and cs.SY

Abstract: Dual descent methods are commonly used to solve network optimization problems because their implementation can be distributed through the network. However, their convergence rates are typically very slow. This paper introduces a family of dual descent algorithms that use approximate Newton directions to accelerate the convergence rate of conventional dual descent. These approximate directions can be computed using local information exchanges, thereby retaining the benefits of distributed implementations. The approximate Newton directions are obtained through matrix splitting techniques and sparse Taylor approximations of the inverse Hessian. We show that, similarly to conventional Newton methods, the proposed algorithm exhibits superlinear convergence within a neighborhood of the optimal value. Numerical analysis corroborates that convergence times are one to two orders of magnitude faster than existing distributed optimization methods. A connection with recent developments that use consensus iterations to compute approximate Newton directions is also presented.
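The core acceleration idea described in the abstract is easy to prototype: split the dual Hessian as H = D - B with D its diagonal, and truncate the Taylor (Neumann) series of the inverse. The sketch below is a minimal NumPy illustration, not the paper's distributed implementation; the function name and the choice D = diag(H) are illustrative assumptions, and convergence of the series requires the spectral radius of D^{-1/2} B D^{-1/2} to be below one.

```python
import numpy as np

def approx_newton_direction(H, g, N=1):
    """Approximate Newton direction via matrix splitting and a truncated
    Taylor expansion of the inverse dual Hessian (a sketch of the idea;
    not the authors' implementation).

    Split H = D - B with D = diag(H). Then
        H^{-1} = D^{-1/2} (I - M)^{-1} D^{-1/2},  M = D^{-1/2} B D^{-1/2},
    and (I - M)^{-1} = sum_k M^k whenever the spectral radius of M is
    below 1. Keeping the first N+1 terms yields a direction that, for a
    sparse network Hessian, a node can compute from information at most
    N hops away.
    """
    d = np.diag(H)                        # assumes a positive diagonal
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    B = np.diag(d) - H                    # off-diagonal part, so H = D - B
    M = D_inv_sqrt @ B @ D_inv_sqrt
    # Accumulate the truncated series S = I + M + ... + M^N.
    S = np.eye(H.shape[0])
    Mk = np.eye(H.shape[0])
    for _ in range(N):
        Mk = Mk @ M
        S += Mk
    H_inv_approx = D_inv_sqrt @ S @ D_inv_sqrt
    return -H_inv_approx @ g              # approximate Newton step
```

A dual iteration would then update the multipliers as lambda <- lambda + alpha * d for a suitable step size alpha. Setting N = 0 recovers a diagonally scaled gradient step, while larger N trades extra rounds of local communication for a direction closer to the exact Newton step.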
