
Adafactor: Adaptive Learning Rates with Sublinear Memory Cost (1804.04235v1)

Published 11 Apr 2018 in cs.LG, cs.AI, and stat.ML

Abstract: In several recently proposed stochastic optimization methods (e.g. RMSProp, Adam, Adadelta), parameter updates are scaled by the inverse square roots of exponential moving averages of squared past gradients. Maintaining these per-parameter second-moment estimators requires memory equal to the number of parameters. For the case of neural network weight matrices, we propose maintaining only the per-row and per-column sums of these moving averages, and estimating the per-parameter second moments based on these sums. We demonstrate empirically that this method produces similar results to the baseline. Secondly, we show that adaptive methods can produce larger-than-desired updates when the decay rate of the second moment accumulator is too slow. We propose update clipping and a gradually increasing decay rate scheme as remedies. Combining these methods and dropping momentum, we achieve comparable results to the published Adam regime in training the Transformer model on the WMT 2014 English-German machine translation task, while using very little auxiliary storage in the optimizer. Finally, we propose scaling the parameter updates based on the scale of the parameters themselves.

Authors (2)
  1. Noam Shazeer (37 papers)
  2. Mitchell Stern (18 papers)
Citations (946)

Summary

  • The paper introduces a novel optimization algorithm, Adafactor, that reduces memory usage by estimating second moments using a low-rank factored approach.
  • It employs update clipping to stabilize training and prevent large update steps, ensuring consistent performance during optimization.
  • Experimental results on translation tasks demonstrate that Adafactor achieves comparable BLEU scores to Adam while significantly reducing memory overhead.

Adafactor: Adaptive Learning Rates with Sublinear Memory Cost

The paper by Noam Shazeer and Mitchell Stern introduces a new optimization algorithm called Adafactor. The algorithm is designed to mitigate the significant memory overhead of established adaptive learning rate methods such as Adam, RMSProp, and Adadelta. Adafactor achieves performance comparable to Adam while using optimizer memory proportional to the number of rows plus columns of each neural network weight matrix, a substantial improvement over the usual linear dependency on the total number of parameters.

The authors build upon existing gradient-based optimization techniques, focusing particularly on the memory required to maintain second-moment estimates, which track the exponential moving average of squared gradients. In large neural networks, these estimates are typically stored per parameter, contributing a substantial memory footprint. Adafactor reduces this burden by maintaining only the per-row and per-column sums of these moving averages, from which a low-rank approximation of the full matrix can be reconstructed.

Methodology

Factored Second Moment Estimation

Adafactor's primary innovation lies in its factored representation of the squared-gradient accumulator. For a parameter matrix $W \in \mathbb{R}^{n \times m}$, rather than maintaining an exponential moving average $V$ for each element, Adafactor stores $R_t$ and $C_t$, the per-row and per-column sums respectively, computed as:

$$R_t = \beta_2 R_{t-1} + (1 - \beta_2)\,(G_t^2 \cdot 1_m)$$

$$C_t = \beta_2 C_{t-1} + (1 - \beta_2)\,(1_n^\top \cdot G_t^2)$$

The full accumulator can then be approximated as:

$$\hat{V}_t = \frac{R_t C_t}{1_n^\top R_t}$$

This approach reduces the accumulator's memory footprint from $O(nm)$ to $O(n + m)$, a significant improvement for large-scale models; for a $1024 \times 4096$ weight matrix, for example, the second-moment state shrinks from about 4.2 million values to 5,120.
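
As a rough illustration, the NumPy sketch below performs one factored second-moment update and reconstructs the per-parameter estimate. The function name, the `eps` smoothing constant, and the specific value of `beta2` are illustrative assumptions rather than the paper's exact implementation details.

```python
import numpy as np

def factored_second_moment_step(R, C, G, beta2=0.999, eps=1e-30):
    """One factored second-moment update (illustrative sketch).

    R: per-row accumulator, shape (n,)
    C: per-column accumulator, shape (m,)
    G: current gradient, shape (n, m)
    Returns the updated (R, C) and the reconstructed estimate V_hat.
    """
    G2 = G * G + eps
    # Exponential moving averages of the row sums and column sums of G^2.
    R = beta2 * R + (1.0 - beta2) * G2.sum(axis=1)   # shape (n,)
    C = beta2 * C + (1.0 - beta2) * G2.sum(axis=0)   # shape (m,)
    # Rank-1 reconstruction: outer(R, C) normalized by the total sum of R.
    V_hat = np.outer(R, C) / R.sum()
    return R, C, V_hat

# Only R (n values) and C (m values) persist between steps; V_hat is
# recomputed on the fly, so optimizer state stays O(n + m).
n, m = 1024, 4096
R, C = np.zeros(n), np.zeros(m)
G = np.random.randn(n, m).astype(np.float32)
R, C, V_hat = factored_second_moment_step(R, C, G)
```

The reconstructed $\hat{V}_t$ then plays the same role as the full second-moment matrix, scaling the gradient as $G_t / \sqrt{\hat{V}_t}$ before the step is applied.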

Update Clipping

The paper also addresses instability in training by introducing update clipping. This mechanism scales down an update whenever its root-mean-square (RMS) exceeds a threshold $d$, preventing larger-than-desired updates that can destabilize training. The clipped update is given by:

$$\hat{U}_t = \frac{U_t}{\max\!\left(1, \mathrm{RMS}(U_t) / d\right)}$$

This method shows efficacy in stabilizing training without significantly impacting empirical performance.
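
A minimal sketch of this clipping rule is shown below; the default threshold of `d = 1` and the small `eps` guard are assumptions for illustration rather than prescribed values.

```python
import numpy as np

def clip_update(U, d=1.0, eps=1e-30):
    """Scale an update tensor down if its RMS exceeds the threshold d.

    U: unclipped update (e.g. the gradient divided by sqrt(V_hat)).
    Returns U unchanged when RMS(U) <= d, otherwise U rescaled to RMS d.
    """
    rms = np.sqrt(np.mean(U * U)) + eps
    return U / max(1.0, rms / d)

# A large update (RMS ~ 10) is scaled down to RMS ~ 1, while a small
# update (RMS ~ 0.1) passes through untouched.
U_large = 10.0 * np.random.randn(4, 4)
U_small = 0.1 * np.random.randn(4, 4)
print(np.sqrt(np.mean(clip_update(U_large) ** 2)))  # approx. 1.0
print(np.sqrt(np.mean(clip_update(U_small) ** 2)))  # approx. 0.1
```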

Experiments and Results

The empirical evaluations were conducted using the Transformer model on the WMT 2014 English-German translation task. Experiments demonstrate that the Adafactor algorithm with factored second-moment estimation achieves performance virtually indistinguishable from Adam while dramatically reducing auxiliary memory usage.

Key Results:

  • Adafactor with factored second moments and update clipping achieves BLEU scores comparable to Adam.
  • Stability improvements using update clipping were empirically validated (BLEU of 25.6 with stability measures vs. BLEU of 0.1 without, in certain configurations).

Implications and Future Work

The reduction in memory usage afforded by Adafactor has important practical implications for scaling neural networks. It enables larger models to be trained on the same hardware, pushing the boundaries of feasible model sizes. The introduction of update clipping also suggests new avenues for increasing stability in stochastic optimization.

Future work could explore further refinement of the low-rank approximation methods employed in Adafactor, potential extensions to other model structures, and integration with mixed-precision training for additional performance gains. Additionally, theoretical analysis based on convergence proofs for non-convex settings could further solidify Adafactor's standing in the optimization algorithms landscape.

In conclusion, Adafactor presents a significant advancement in the field of optimization for large-scale neural networks. By combining efficient memory usage with robust adaptive learning rates, it provides a solid foundation for continued progress in deep learning research and applications.