
Abstract

Proving algorithm-dependent generalization error bounds for gradient-type optimization methods has attracted significant attention recently in learning theory. However, most existing trajectory-based analyses require either restrictive assumptions on the learning rate (e.g., a fast-decreasing learning rate) or continuously injected noise (such as the Gaussian noise in Langevin dynamics). In this paper, we introduce a new discrete data-dependent prior to the PAC-Bayesian framework, and prove a high-probability generalization bound of order $O\big(\frac{1}{n}\cdot \sum_{t=1}^{T}(\gamma_t/\varepsilon_t)^2\left\|\mathbf{g}_t\right\|^2\big)$ for Floored GD (i.e., a version of gradient descent with precision level $\varepsilon_t$), where $n$ is the number of training samples, $\gamma_t$ is the learning rate at step $t$, and $\mathbf{g}_t$ is roughly the difference between the gradient computed using all samples and that computed using only the prior samples. $\left\|\mathbf{g}_t\right\|$ is upper bounded by, and typically much smaller than, the gradient norm $\left\|\nabla f(W_t)\right\|$. We remark that our bound holds for nonconvex and nonsmooth scenarios. Moreover, our theoretical results provide numerically favorable upper bounds on testing errors (e.g., $0.037$ on MNIST). Using a similar technique, we also obtain new generalization bounds for certain variants of SGD. Furthermore, we study generalization bounds for Gradient Langevin Dynamics (GLD). Using the same framework with a carefully constructed continuous prior, we show a new high-probability generalization bound of order $O\big(\frac{1}{n} + \frac{L^2}{n^2}\sum_{t=1}^{T}(\gamma_t/\sigma_t)^2\big)$ for GLD. The new $1/n^2$ rate is due to the concentration of the difference between the gradient on the training samples and that on the prior samples.
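The abstract describes Floored GD only briefly, as gradient descent with a per-step precision level $\varepsilon_t$. The following is a minimal sketch of one plausible reading of that update rule: a standard gradient step followed by flooring each coordinate onto a grid of spacing $\varepsilon_t$. The function names and the flooring convention here are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def floored_gd(grad_fn, w0, lrs, eps):
    """Sketch of Floored GD (hypothetical reading of the abstract):
    take a gradient step with learning rate lrs[t], then floor each
    coordinate onto the grid {k * eps[t] : k integer}."""
    w = np.asarray(w0, dtype=float)
    for gamma_t, eps_t in zip(lrs, eps):
        w = w - gamma_t * grad_fn(w)          # plain gradient step
        w = np.floor(w / eps_t) * eps_t       # discretize to precision eps_t
    return w

# Usage on a toy quadratic f(w) = 0.5 * ||w||^2, whose gradient is w.
w_final = floored_gd(lambda w: w, [1.0], lrs=[0.5] * 5, eps=[0.01] * 5)
```

The discretization is what makes the iterates live on a countable set, which is what the paper's discrete data-dependent prior exploits in the PAC-Bayesian argument.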
