Generalization Bounds for Neural Networks via Approximate Description Length (1910.05697v1)
Abstract: We investigate the sample complexity of networks with bounds on the magnitude of their weights. In particular, we consider the class \[ H=\left\{W_t\circ\rho\circ \ldots\circ\rho\circ W_{1} : W_1,\ldots,W_{t-1}\in M_{d,d},\ W_t\in M_{1,d}\right\} \] where the spectral norm of each $W_i$ is bounded by $O(1)$, the Frobenius norm is bounded by $R$, and $\rho$ is the sigmoid function $\frac{e^x}{1+e^x}$ or the smoothened ReLU function $\ln(1+e^x)$. We show that for any depth $t$, if the inputs are in $[-1,1]^d$, the sample complexity of $H$ is $\tilde O\left(\frac{dR^2}{\epsilon^2}\right)$. This bound is optimal up to log-factors, and substantially improves over the previous state of the art of $\tilde O\left(\frac{d^2R^2}{\epsilon^2}\right)$. We furthermore show that this bound remains valid if, instead of considering the magnitude of the $W_i$'s, we consider the magnitude of $W_i - W_i^0$, where the $W_i^0$ are some reference matrices with spectral norm of $O(1)$. By taking the $W_i^0$ to be the matrices at the onset of the training process, we get sample complexity bounds that are sub-linear in the number of parameters in many typical regimes of parameters. To establish our results we develop a new technique to analyze the sample complexity of families $H$ of predictors. We start by defining a new notion of a randomized approximate description of functions $f:X\to\mathbb{R}^d$. We then show that if there is a way to approximately describe functions in a class $H$ using $d$ bits, then $d/\epsilon^2$ examples suffice to guarantee uniform convergence, namely, that the empirical loss of all the functions in the class is $\epsilon$-close to the true loss. Finally, we develop a set of tools for calculating the approximate description length of classes of functions that can be presented as a composition of linear function classes and non-linear functions.
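As a concrete illustration of the hypothesis class $H$ and of the quantities the bound depends on, the following sketch (the function and variable names, dimensions, and perturbation scale are illustrative assumptions, not taken from the paper) builds a depth-$t$ network with the smoothened ReLU $\rho(x)=\ln(1+e^x)$, evaluates it on an input in $[-1,1]^d$, and reports the spectral norms of the $W_i$ and the Frobenius norms of $W_i - W_i^0$ relative to reference matrices $W_i^0$ such as the weights at initialization.

```python
import numpy as np

def smoothed_relu(x):
    # rho(x) = ln(1 + e^x), the smoothened ReLU from the abstract
    return np.log1p(np.exp(x))

def forward(x, weights):
    # H: x -> W_t . rho( ... rho(W_1 x) ... ), with W_1,...,W_{t-1} in M_{d,d} and W_t in M_{1,d}
    h = x
    for W in weights[:-1]:
        h = smoothed_relu(W @ h)
    return weights[-1] @ h

def norm_report(weights, reference):
    # The improved bound scales with R = Frobenius norm of W_i - W_i^0,
    # while the spectral norms of W_i and W_i^0 are assumed to be O(1).
    for i, (W, W0) in enumerate(zip(weights, reference)):
        spec = np.linalg.norm(W, 2)              # spectral norm of W_i
        frob = np.linalg.norm(W - W0, 'fro')     # Frobenius norm of W_i - W_i^0
        print(f"layer {i}: ||W||_2 = {spec:.2f}, ||W - W0||_F = {frob:.2f}")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, t = 64, 3                                 # illustrative width and depth
    # reference matrices W_i^0, e.g. the weights at the onset of training
    reference = [rng.normal(0, 1 / np.sqrt(d), (d, d)) for _ in range(t - 1)]
    reference.append(rng.normal(0, 1 / np.sqrt(d), (1, d)))
    # "trained" weights: a small perturbation of the reference matrices
    weights = [W0 + 0.05 * rng.normal(0, 1 / np.sqrt(d), W0.shape) for W0 in reference]
    x = rng.uniform(-1, 1, d)                    # inputs lie in [-1, 1]^d
    print("output:", forward(x, weights))
    norm_report(weights, reference)
```

When the trained weights stay close to the reference matrices, the reported Frobenius norms, and hence the $R$ entering the $\tilde O(dR^2/\epsilon^2)$ bound, can be much smaller than the norms of the weights themselves, which is what yields sample complexity sub-linear in the number of parameters in typical regimes.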