
Minimax-optimal rates for sparse additive models over kernel classes via convex programming

(arXiv:1008.3654)
Published Aug 21, 2010 in math.ST, cs.IT, math.IT, and stat.TH

Abstract

Sparse additive models are families of $d$-variate functions that have the additive decomposition $f^* = \sum_{j \in S} f^*_j$, where $S$ is an unknown subset of cardinality $s \ll d$. In this paper, we consider the case where each univariate component function $f^*_j$ lies in a reproducing kernel Hilbert space (RKHS), and analyze a method for estimating the unknown function $f^*$ based on kernels combined with $\ell_1$-type convex regularization. Working within a high-dimensional framework that allows both the dimension $d$ and sparsity $s$ to increase with $n$, we derive convergence rates (upper bounds) in the $L^2(\mathbb{P})$ and $L^2(\mathbb{P}_n)$ norms over the class of sparse additive models with each univariate function $f^*_j$ in the unit ball of a univariate RKHS with bounded kernel function. We complement our upper bounds by deriving minimax lower bounds on the $L^2(\mathbb{P})$ error, thereby showing the optimality of our method. Thus, we obtain optimal minimax rates for many interesting classes of sparse additive models, including polynomials, splines, and Sobolev classes. We also show that if, in contrast to our univariate conditions, the multivariate function class is assumed to be globally bounded, then much faster estimation rates are possible for any sparsity $s = \Omega(\sqrt{n})$, showing that global boundedness is a significant restriction in the high-dimensional setting.
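To make the estimator concrete, the $\ell_1$-type convex program described in the abstract can be written as a doubly penalized least-squares problem over the additive class (a sketch consistent with the abstract; the precise choice of the regularization weights $\lambda_n, \rho_n$ is part of the paper's analysis):

$$\hat{f} \in \arg\min_{f = \sum_{j=1}^d f_j,\; f_j \in \mathcal{H}_j} \; \frac{1}{2n} \sum_{i=1}^n \Big( y_i - \sum_{j=1}^d f_j(x_{ij}) \Big)^2 + \lambda_n \sum_{j=1}^d \|f_j\|_n + \rho_n \sum_{j=1}^d \|f_j\|_{\mathcal{H}_j},$$

where $\|f_j\|_n^2 = \frac{1}{n} \sum_{i=1}^n f_j^2(x_{ij})$ is the empirical norm and $\|\cdot\|_{\mathcal{H}_j}$ the RKHS norm; both penalties are sums over the $d$ coordinates, which is what makes them $\ell_1$-type and induces sparsity across components.

By the representer theorem, each component can be expanded as $f_j(\cdot) = \sum_{i=1}^n \alpha_{ji} \mathbb{K}_j(\cdot, x_{ij})$, which reduces the program to a finite-dimensional second-order cone problem. The sketch below solves that reduction with cvxpy; the function name, the Gaussian kernel, and the penalty weights are our illustrative choices, not specified by the paper:

```python
import numpy as np
import cvxpy as cp

def fit_sparse_additive(X, y, kernel, lam, rho):
    """Doubly penalized least squares for sparse additive models
    (illustrative sketch via the representer theorem)."""
    n, d = X.shape
    K, K_half = [], []
    for j in range(d):
        Kj = kernel(X[:, j:j + 1])          # n x n Gram matrix of coordinate j
        Kj = (Kj + Kj.T) / 2                # symmetrize for numerical safety
        w, V = np.linalg.eigh(Kj)
        w = np.clip(w, 0.0, None)           # clip tiny negative eigenvalues
        K.append(Kj)
        K_half.append(V @ np.diag(np.sqrt(w)) @ V.T)  # matrix square root K_j^{1/2}
    alpha = [cp.Variable(n) for _ in range(d)]
    residual = y - sum(K[j] @ alpha[j] for j in range(d))
    # ||f_j||_n = ||K_j alpha_j||_2 / sqrt(n);  ||f_j||_H = ||K_j^{1/2} alpha_j||_2
    emp_pen = sum(cp.norm(K[j] @ alpha[j], 2) for j in range(d)) / np.sqrt(n)
    rkhs_pen = sum(cp.norm(K_half[j] @ alpha[j], 2) for j in range(d))
    objective = cp.sum_squares(residual) / (2 * n) + lam * emp_pen + rho * rkhs_pen
    cp.Problem(cp.Minimize(objective)).solve()
    return [a.value for a in alpha]

# Example: s = 2 active components out of d = 20 (synthetic data).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 20))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(100)
gaussian = lambda Z: np.exp(-0.5 * (Z - Z.T) ** 2)   # univariate Gaussian kernel
alpha_hat = fit_sparse_additive(X, y, gaussian, lam=0.05, rho=0.05)
```

Because both penalties are Euclidean norms of linear maps of the $\alpha_j$, the reduced problem is a second-order cone program, so any generic SOCP solver recovers the global optimum of the convex objective.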
