
Distributed Adaptive Greedy Quasi-Newton Methods with Explicit Non-asymptotic Convergence Bounds (2311.18210v1)

Published 30 Nov 2023 in math.OC and cs.DC

Abstract: Although quasi-Newton methods have been extensively studied in the literature, they either achieve only local convergence or rely on a series of line searches for global convergence, which is not acceptable in the distributed setting. In this work, we first propose a line-search-free greedy quasi-Newton (GQN) method with adaptive steps and establish explicit non-asymptotic bounds for both the global convergence rate and the local superlinear rate. Our novel idea lies in the design of multiple greedy quasi-Newton updates, which involve computing Hessian-vector products, to control the Hessian approximation error, together with a simple mechanism that adjusts stepsizes to ensure per-iterate improvement of the objective function. We then extend this method to the master-worker framework and propose a distributed adaptive GQN method whose communication cost is comparable to that of first-order methods, yet which retains the superb convergence properties of its centralized counterpart. Finally, we demonstrate the advantages of our methods via numerical experiments.
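
To make the mechanism concrete, below is a minimal Python sketch of a line-search-free greedy quasi-Newton loop on a strongly convex quadratic. The greedy BFGS-style update (in the spirit of Rodomanov and Nesterov's greedy quasi-Newton methods), the accept/shrink stepsize rule, and every name and constant here are illustrative assumptions rather than the paper's exact algorithm; they only mirror the two ingredients the abstract describes: greedy updates driven by Hessian-vector products, and a simple stepsize mechanism that enforces per-iterate objective improvement.

```python
# Illustrative sketch only: a greedy BFGS-style update plus an accept/shrink
# stepsize rule, NOT the authors' exact algorithm. All constants are assumed.
import numpy as np

def gqn_quadratic(A, b, x0, iters=200, eta=1.0):
    """Line-search-free greedy quasi-Newton sketch on f(x) = 0.5 x^T A x - b^T x."""
    f = lambda z: 0.5 * z @ A @ z - b @ z
    grad = lambda z: A @ z - b
    n = len(x0)
    x = x0.astype(float).copy()
    # Initial approximation with G >= A in the PSD order (tr(A) >= lambda_max(A)).
    G = np.trace(A) * np.eye(n)
    diag_A = np.diag(A)
    for _ in range(iters):
        # Greedy choice: coordinate e_i maximizing e_i^T G e_i / e_i^T A e_i,
        # followed by a BFGS update using the Hessian-vector product A @ e_i.
        i = int(np.argmax(np.diag(G) / diag_A))
        Au = A[:, i]                      # Hessian-vector product A @ e_i
        Gu = G[:, i]
        G = G - np.outer(Gu, Gu) / Gu[i] + np.outer(Au, Au) / Au[i]
        # Quasi-Newton step with one trial per iterate; no line search.
        step = np.linalg.solve(G, grad(x))
        trial = x - eta * step
        if f(trial) < f(x):
            x = trial
            eta = min(1.0, 2.0 * eta)     # cautiously enlarge after success
        else:
            eta *= 0.5                    # shrink after a failed trial
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = M @ M.T + np.eye(30)                  # symmetric positive definite Hessian
b = rng.standard_normal(30)
x = gqn_quadratic(A, b, np.zeros(30))
print(np.linalg.norm(A @ x - b))          # residual shrinks toward zero
```

On a quadratic the Hessian-vector product A @ e_i is simply a column of A, so each greedy update is cheap; per the abstract, the distributed variant coordinates this kind of work in a master-worker framework while keeping communication cost comparable to first-order methods.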

Citations (4)

