Adaptive Gradient Methods Converge Faster with Over-Parameterization (but you should do a line-search) (2006.06835v3)

Published 11 Jun 2020 in cs.LG, math.OC, and stat.ML

Abstract: Adaptive gradient methods are typically used for training over-parameterized models. To better understand their behaviour, we study a simplistic setting -- smooth, convex losses with models over-parameterized enough to interpolate the data. In this setting, we prove that AMSGrad with constant step-size and momentum converges to the minimizer at a faster $O(1/T)$ rate. When interpolation is only approximately satisfied, constant step-size AMSGrad converges to a neighbourhood of the solution at the same rate, while AdaGrad is robust to the violation of interpolation. However, even for simple convex problems satisfying interpolation, the empirical performance of both methods heavily depends on the step-size and requires tuning, questioning their adaptivity. We alleviate this problem by automatically determining the step-size using stochastic line-search or Polyak step-sizes. With these techniques, we prove that both AdaGrad and AMSGrad retain their convergence guarantees, without needing to know problem-dependent constants. Empirically, we demonstrate that these techniques improve the convergence and generalization of adaptive gradient methods across tasks, from binary classification with kernel mappings to multi-class classification with deep networks.
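
To make the step-size schemes mentioned in the abstract concrete, here is a minimal NumPy sketch (not the authors' implementation) of AMSGrad whose step-size is chosen by a backtracking, Armijo-style stochastic line-search, run on a synthetic noiseless least-squares problem where interpolation holds. The hyperparameter values, the reset-to-maximum step-size each iteration, and the problem setup are illustrative assumptions, not details taken from the paper.

```python
# Sketch only: AMSGrad with a backtracking stochastic Armijo line-search,
# on a synthetic interpolating least-squares problem. All constants below
# (eta_max, c, backtrack factor, batch size) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star                      # noiseless targets, so interpolation holds

def loss_grad(w, idx):
    """Minibatch least-squares loss and gradient on rows `idx`."""
    r = X[idx] @ w - y[idx]
    return 0.5 * np.mean(r ** 2), X[idx].T @ r / len(idx)

def amsgrad_linesearch(T=500, batch=10, beta1=0.9, beta2=0.999,
                       eps=1e-8, eta_max=10.0, c=0.1, backtrack=0.5):
    w = np.zeros(d)
    m = np.zeros(d); v = np.zeros(d); v_hat = np.zeros(d)
    for _ in range(T):
        idx = rng.choice(n, size=batch, replace=False)
        f, g = loss_grad(w, idx)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        v_hat = np.maximum(v_hat, v)          # AMSGrad: non-decreasing second moment
        direction = m / (np.sqrt(v_hat) + eps)
        # Backtracking line-search on the same minibatch: shrink eta until the
        # Armijo-style condition f(w - eta*d) <= f(w) - c*eta*<g, d> holds.
        eta = eta_max
        while True:
            f_new, _ = loss_grad(w - eta * direction, idx)
            if f_new <= f - c * eta * (g @ direction) or eta < 1e-10:
                break
            eta *= backtrack
        w -= eta * direction
    return w

w_hat = amsgrad_linesearch()
print("distance to interpolating solution:", np.linalg.norm(w_hat - w_star))
```

A stochastic Polyak step-size variant would replace the backtracking loop with a closed-form step of the form eta = f(w) / (||g||^2 + eps) on the minibatch (using that the interpolating optimum has zero loss on every example); the same caveat applies that this is a hedged sketch rather than the paper's exact algorithm.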

Authors (6)
  1. Sharan Vaswani (35 papers)
  2. Issam Laradji (37 papers)
  3. Frederik Kunstner (10 papers)
  4. Si Yi Meng (5 papers)
  5. Mark Schmidt (74 papers)
  6. Simon Lacoste-Julien (95 papers)
Citations (26)
