A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization (1803.01370v2)

Published 4 Mar 2018 in math.OC, cs.LG, and stat.ML

Abstract: We propose a communication- and computation-efficient distributed optimization algorithm using second-order information for solving ERM problems with a nonsmooth regularization term. Current second-order and quasi-Newton methods for this problem either do not work well in the distributed setting or work only for specific regularizers. Our algorithm uses successive quadratic approximations, and we describe how to maintain an approximation of the Hessian and solve subproblems efficiently in a distributed manner. The proposed method enjoys global linear convergence for a broad range of non-strongly convex problems that includes the most commonly used ERMs, resulting in lower communication complexity. It also converges on non-convex problems, so it has the potential to be used in applications such as deep learning. Initial computational results on convex problems demonstrate that our method significantly improves on communication cost and running time over the current state-of-the-art methods.
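
To make the "successive quadratic approximation" idea concrete, the sketch below implements one simple variant for l1-regularized logistic regression: a dense BFGS matrix stands in for the paper's distributed quasi-Newton Hessian approximation, and a proximal-gradient loop stands in for its efficient subproblem solver. This is a minimal single-process illustration under our own assumptions; the function names, the shard simulation, and the choice of inner solver are ours, not the authors' distributed implementation.

```python
# Minimal sketch: proximal quasi-Newton (successive quadratic approximation)
# for min_w (1/n) sum log(1 + exp(-y_i x_i.w)) + lam * ||w||_1.
# Data shards simulate machines; the gradient sums stand in for an allreduce.
import numpy as np

def prox_l1(x, t):
    # Soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def loss_grad(w, shards):
    # Each (X, y) pair plays the role of one machine's local data.
    n = sum(len(y) for _, y in shards)
    loss, grad = 0.0, np.zeros_like(w)
    for X, y in shards:
        z = y * (X @ w)
        loss += np.sum(np.log1p(np.exp(-z)))
        grad -= X.T @ (y / (1.0 + np.exp(z)))
    return loss / n, grad / n

def solve_subproblem(w, g, B, lam, iters=50):
    # Approximately minimize g.d + 0.5 d'Bd + lam*||w + d||_1 over d
    # by proximal gradient with step 1/||B||_2.
    t = 1.0 / np.linalg.norm(B, 2)
    d = np.zeros_like(w)
    for _ in range(iters):
        d = prox_l1(w + d - t * (g + B @ d), t * lam) - w
    return d

def prox_bfgs(shards, dim, lam=0.01, outer=30):
    w, B = np.zeros(dim), np.eye(dim)
    f, g = loss_grad(w, shards)
    for _ in range(outer):
        d = solve_subproblem(w, g, B, lam)
        # Backtracking on the regularized objective for robustness.
        step = 1.0
        while True:
            w_new = w + step * d
            f_new, g_new = loss_grad(w_new, shards)
            if f_new + lam * np.abs(w_new).sum() <= f + lam * np.abs(w).sum() or step < 1e-8:
                break
            step *= 0.5
        s, yv = w_new - w, g_new - g
        if s @ yv > 1e-12:  # curvature condition keeps B positive definite
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(yv, yv) / (s @ yv)
        w, f, g = w_new, f_new, g_new
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 20))
    y = np.sign(X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(200))
    shards = [(X[:100], y[:100]), (X[100:], y[100:])]  # two simulated machines
    w = prox_bfgs(shards, dim=20)
    print("nonzero weights:", np.count_nonzero(np.abs(w) > 1e-6))
```

In the paper's setting, the gradient aggregation becomes a single allreduce per iteration and the quasi-Newton curvature pairs are maintained so that the subproblem can be solved without shipping the full Hessian, which is where the communication savings come from; the dense BFGS matrix above is only workable at this toy scale.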

Authors (3)
  1. Ching-pei Lee
  2. Cong Han Lim
  3. Stephen J. Wright
Citations (28)
