
On best subset regression

(arXiv:1112.0918)

Published Dec 5, 2011 in stat.ME, stat.CO, and stat.ML

Abstract

In this paper we study variable selection via \ell_0-norm constrained regression, which is equivalent to finding the best subset of a fixed size. Our study focuses on two aspects: consistency and computation. We prove that, under regularity conditions, the sparse estimator from this method asymptotically retains all of the important variables even when the dimensionality grows exponentially with the sample size. Best subset regression can therefore shrink the full model down to a submodel smaller than the sample size, which can then be analyzed in a follow-up study with well-developed regression techniques for that regime. To address the computational difficulty of best subset regression, we propose an iterative algorithm called orthogonalizing subset selection (OSS). OSS is an EM algorithm and therefore enjoys the monotonicity property: initialized at any sparse estimator, it improves the model fit while preserving the estimator's sparsity. Another appealing feature of OSS is that, like effective algorithms for continuous optimization problems, it converges to the global solution of the \ell_0-norm constrained regression problem whenever the initial point lies in a neighborhood of that solution. We also investigate an accelerated version of OSS and its combination with forward stepwise selection. Simulations and a real example are presented to evaluate the performance of the proposed methods.
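
The abstract does not spell out the OSS iterations, but the \ell_0-constrained problem it targets is easy to state in code. Below is a minimal illustrative sketch (ours, not the paper's algorithm) of exact best subset regression by exhaustive search over all size-k supports; the function name best_subset and the demo data are hypothetical. The combinatorial cost, C(p, k) least-squares fits, is precisely what motivates iterative schemes like OSS.

```python
import itertools
import numpy as np

def best_subset(X, y, k):
    """Exact l0-constrained least squares: among all size-k subsets of
    columns of X, return the one minimizing the residual sum of squares.
    Feasible only for small p, since it scans C(p, k) candidate supports."""
    n, p = X.shape
    best_rss, best_support, best_coef = np.inf, None, None
    for support in itertools.combinations(range(p), k):
        Xs = X[:, support]
        # Least-squares fit restricted to the current support.
        coef = np.linalg.lstsq(Xs, y, rcond=None)[0]
        rss = np.sum((y - Xs @ coef) ** 2)
        if rss < best_rss:
            best_rss, best_support, best_coef = rss, support, coef
    # Embed the restricted coefficients into a p-dimensional sparse vector.
    beta = np.zeros(p)
    beta[list(best_support)] = best_coef
    return beta, best_support, best_rss

# Tiny demo (illustrative data): recover a 2-sparse signal in noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
beta_true = np.zeros(8)
beta_true[[1, 5]] = [3.0, -2.0]
y = X @ beta_true + 0.1 * rng.standard_normal(50)
beta_hat, support, rss = best_subset(X, y, k=2)
print(support, np.round(beta_hat[list(support)], 2))
```

The resulting \beta_hat is exactly k-sparse, matching the fixed-size subset formulation in the abstract; an estimator like this (or any other sparse estimator) could serve as the kind of initial point from which the paper's OSS procedure then improves the fit while keeping the sparsity.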
