Risk Bounds for High-dimensional Ridge Function Combinations Including Neural Networks

(1607.01434)
Published Jul 5, 2016 in math.ST, stat.ML, and stat.TH

Abstract

Let $f^{\star}$ be a function on $\mathbb{R}^d$ with an assumption of a spectral norm $v_{f^{\star}}$. For various noise settings, we show that $\mathbb{E}\|\hat{f} - f^{\star}\|^2 \leq \left(v^4_{f^{\star}}\frac{\log d}{n}\right)^{1/3}$, where $n$ is the sample size and $\hat{f}$ is either a penalized least squares estimator or a greedily obtained version of such using linear combinations of sinusoidal, sigmoidal, ramp, ramp-squared, or other smooth ridge functions. The candidate fits may be chosen from a continuum of functions, thus avoiding the rigidity of discretizations of the parameter space. On the other hand, if the candidate fits are chosen from a discretization, we show that $\mathbb{E}\|\hat{f} - f^{\star}\|^2 \leq \left(v^3_{f^{\star}}\frac{\log d}{n}\right)^{2/5}$. This work bridges non-linear and non-parametric function estimation and includes single-hidden-layer nets. Unlike past theory for such settings, our bound shows that the risk is small even when the input dimension $d$ of an infinite-dimensional parameterized dictionary is much larger than the available sample size. When the dimension is larger than the cube root of the sample size, this quantity is seen to improve on the more familiar risk bound of $v_{f^{\star}}\left(\frac{d\log (n/d)}{n}\right)^{1/2}$, also investigated here.
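For orientation, the estimators in the abstract fit finite linear combinations of ridge functions, the same form as a single-hidden-layer network. The sketch below shows that model class together with a Barron-style spectral norm; the exact power $s$ and normalization used in the paper depend on the choice of activation, so read this as an assumed illustration rather than the paper's precise definitions:

$$\hat{f}(x) \;=\; \sum_{k=1}^{m} c_k\, \phi(a_k \cdot x + b_k), \qquad a_k \in \mathbb{R}^d,\ b_k, c_k \in \mathbb{R},$$

where $\phi$ is a sinusoidal, sigmoidal, ramp, or ramp-squared activation, and a spectral norm of the form

$$v_{f^{\star}} \;=\; \int_{\mathbb{R}^d} \|\omega\|_1^{\,s}\, \bigl|\mathcal{F}f^{\star}(\omega)\bigr|\, d\omega$$

quantifies, via the Fourier transform $\mathcal{F}f^{\star}$, how well $f^{\star}$ can be approximated by such combinations, and hence enters the risk bounds above.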
