
Optimization Theory for ReLU Neural Networks Trained with Normalization Layers (2006.06878v1)

Published 11 Jun 2020 in cs.LG, math.OC, and stat.ML

Abstract: The success of deep neural networks is in part due to the use of normalization layers. Normalization layers like Batch Normalization, Layer Normalization and Weight Normalization are ubiquitous in practice, as they improve generalization performance and speed up training significantly. Nonetheless, the vast majority of current deep learning theory and non-convex optimization literature focuses on the un-normalized setting, where the functions under consideration do not exhibit the properties of commonly normalized neural networks. In this paper, we bridge this gap by giving the first global convergence result for two-layer neural networks with ReLU activations trained with a normalization layer, namely Weight Normalization. Our analysis shows how the introduction of normalization layers changes the optimization landscape and can enable faster convergence as compared with un-normalized neural networks.
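
As a rough illustration of the setting described in the abstract, the sketch below builds a two-layer ReLU network whose hidden-layer weights use the weight-normalization reparameterization w_k = g_k · v_k / ||v_k|| (a per-unit magnitude and direction). This is a minimal PyTorch sketch: the class name, initialization scheme, and the choice to train all parameters with plain SGD on a squared loss are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoLayerReLUWeightNorm(nn.Module):
    """Illustrative two-layer ReLU network with weight-normalized hidden units:
    each hidden weight is reparameterized as w_k = g_k * v_k / ||v_k||."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.v = nn.Parameter(torch.randn(hidden_dim, in_dim))  # directions v_k
        self.g = nn.Parameter(torch.ones(hidden_dim))            # magnitudes g_k
        self.c = nn.Parameter(torch.randn(hidden_dim))           # output-layer weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight matrix: each row of v normalized to unit norm, scaled by g.
        w = self.g.unsqueeze(1) * F.normalize(self.v, dim=1)
        h = F.relu(x @ w.t())   # hidden-layer ReLU activations
        return h @ self.c       # scalar prediction per example


# Tiny usage example on random data (squared loss, full-batch gradient descent).
if __name__ == "__main__":
    torch.manual_seed(0)
    model = TwoLayerReLUWeightNorm(in_dim=5, hidden_dim=64)
    x, y = torch.randn(32, 5), torch.randn(32)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    for _ in range(100):
        opt.zero_grad()
        loss = F.mse_loss(model(x), y)
        loss.backward()
        opt.step()
```

The magnitude/direction split above is the standard weight-normalization reparameterization; it decouples the scale of each hidden unit from its direction, which is the structural change in the optimization landscape that the paper's convergence analysis targets.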

Authors (3)
  1. Yonatan Dukler (10 papers)
  2. Quanquan Gu (198 papers)
  3. Guido Montúfar (40 papers)
Citations (29)
