
Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK

(2007.04596)
Published Jul 9, 2020 in cs.LG, math.OC, and stat.ML

Abstract

We consider the dynamics of gradient descent for learning a two-layer neural network. We assume the input $x\in\mathbb{R}^d$ is drawn from a Gaussian distribution and the label of $x$ satisfies $f^{\star}(x) = a^{\top}|W^{\star}x|$, where $a\in\mathbb{R}^d$ is a nonnegative vector and $W^{\star} \in\mathbb{R}^{d\times d}$ is an orthonormal matrix. We show that an over-parametrized two-layer neural network with ReLU activation, trained by gradient descent from random initialization, can provably learn the ground truth network with population loss at most $o(1/d)$ in polynomial time with polynomial samples. On the other hand, we prove that any kernel method, including Neural Tangent Kernel, with a polynomial number of samples in $d$, has population loss at least $\Omega(1/d)$.
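As an illustration only, here is a minimal PyTorch sketch of the setting described in the abstract: a teacher $f^{\star}(x) = a^{\top}|W^{\star}x|$ with orthonormal $W^{\star}$ and nonnegative $a$, Gaussian inputs, and an over-parametrized two-layer ReLU student trained by plain gradient descent. The width, learning rate, initialization scale, and step count below are arbitrary illustrative choices and are not the schedule analyzed in the paper.

```python
import torch

torch.manual_seed(0)

d = 10        # input dimension (illustrative)
m = 200       # student width, over-parametrized relative to d
n = 5000      # number of training samples
lr = 0.05     # step size (illustrative, not the paper's schedule)
steps = 2000

# Teacher f*(x) = a^T |W* x| with orthonormal W* and nonnegative a
W_star, _ = torch.linalg.qr(torch.randn(d, d))  # orthonormal d x d matrix
a = torch.rand(d)                               # nonnegative second-layer weights

def teacher(x):
    # |.| is applied entrywise; note |z| = ReLU(z) + ReLU(-z)
    return (x @ W_star.T).abs() @ a

# Gaussian inputs with noiseless teacher labels
X = torch.randn(n, d)
y = teacher(X)

# Over-parametrized two-layer ReLU student, both layers trained
W = (torch.randn(m, d) / d ** 0.5).requires_grad_()
b = (torch.randn(m) / m ** 0.5).requires_grad_()

def student(x):
    return torch.relu(x @ W.T) @ b

opt = torch.optim.SGD([W, b], lr=lr)
for t in range(steps):
    opt.zero_grad()
    loss = ((student(X) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if t % 500 == 0:
        print(f"step {t:4d}  train loss {loss.item():.6f}")

# Estimate the population loss on fresh Gaussian samples
with torch.no_grad():
    X_test = torch.randn(n, d)
    pop_loss = ((student(X_test) - teacher(X_test)) ** 2).mean()
print("estimated population loss:", pop_loss.item())
```

This sketch only simulates the learning problem; the paper's contribution is the analysis showing that gradient descent on such an over-parametrized student reaches population loss $o(1/d)$, whereas any kernel method with polynomially many samples is stuck at $\Omega(1/d)$.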
