Finite-sample and asymptotic analysis of generalization ability with an application to penalized regression

(arXiv:1609.03344)
Published Sep 12, 2016 in stat.ML, cs.LG, math.ST, q-fin.EC, stat.CO, and stat.TH

Abstract

In this paper, we study the performance of extremum estimators from the perspective of generalization ability (GA): the ability of a model to predict outcomes in new samples from the same population. By adapting classical concentration inequalities, we derive upper bounds on the empirical out-of-sample prediction errors as a function of the in-sample errors, the in-sample data size, the heaviness of the tails of the error distribution, and the model complexity. We show that the error bounds may be used for tuning key estimation hyper-parameters, such as the number of folds $K$ in cross-validation. We also show how $K$ affects the bias-variance trade-off of cross-validation. We demonstrate that the $\mathcal{L}_2$-norm difference between penalized and the corresponding unpenalized regression estimates is directly explained by the GA of the estimates and the GA of the empirical moment conditions. Lastly, we prove that all penalized regression estimates are $\mathcal{L}_2$-consistent in both the $n \geqslant p$ and the $n < p$ cases. Simulations are used to demonstrate key results.

Keywords: generalization ability, upper bound of generalization error, penalized regression, cross-validation, bias-variance trade-off, $\mathcal{L}_2$ difference between penalized and unpenalized regression, lasso, high-dimensional data.
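The role of $K$ in cross-validation can be illustrated with a small simulation. The sketch below is not taken from the paper: the data-generating process, the penalty level, and the grid of $K$ values are assumptions chosen for illustration, using scikit-learn's `KFold`, `Lasso`, and `cross_val_score`.

```python
# Illustrative sketch (not the paper's procedure): how the choice of K in
# K-fold cross-validation affects the mean and spread of the estimated
# prediction error. Synthetic sparse linear model; all settings are assumptions.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.0                       # sparse true coefficients (assumed)
y = X @ beta + rng.standard_normal(n)

for K in (2, 5, 10, 20):
    cv = KFold(n_splits=K, shuffle=True, random_state=0)
    errors = -cross_val_score(Lasso(alpha=0.1), X, y,
                              cv=cv, scoring="neg_mean_squared_error")
    # Larger K: each training split is bigger (less bias in the error
    # estimate), but the K fold errors overlap more (higher variance).
    print(f"K={K:2d}  mean CV error={errors.mean():.3f}  sd={errors.std():.3f}")
```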

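The $\mathcal{L}_2$-norm gap between penalized and unpenalized estimates can likewise be probed numerically. The sketch below is illustrative only: the design, sparsity pattern, and penalty grid are assumptions, and the lasso stands in for the general class of penalized estimators treated in the paper.

```python
# Illustrative sketch: L2 distance between lasso coefficients and the
# corresponding unpenalized OLS coefficients as the penalty grows.
# n >= p here so the OLS fit is well defined; all settings are assumptions.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
n, p = 200, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 2.0
y = X @ beta + rng.standard_normal(n)

beta_ols = LinearRegression(fit_intercept=False).fit(X, y).coef_

for alpha in (0.01, 0.1, 0.5, 1.0):
    beta_lasso = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_
    gap = np.linalg.norm(beta_lasso - beta_ols)   # L2 distance between estimates
    print(f"alpha={alpha:4.2f}  ||beta_lasso - beta_ols||_2 = {gap:.3f}")
```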