SLOPE is Adaptive to Unknown Sparsity and Asymptotically Minimax (1503.08393v3)

Published 29 Mar 2015 in math.ST, cs.IT, math.IT, and stat.TH

Abstract: We consider high-dimensional sparse regression problems in which we observe $y = X\beta + z$, where $X$ is an $n \times p$ design matrix and $z$ is an $n$-dimensional vector of independent Gaussian errors, each with variance $\sigma^2$. Our focus is on the recently introduced SLOPE estimator (Bogdan et al., 2014), which regularizes the least-squares estimates with the rank-dependent penalty $\sum_{1 \le i \le p} \lambda_i |\hat{\beta}|_{(i)}$, where $|\hat{\beta}|_{(i)}$ is the $i$th largest magnitude of the fitted coefficients. Under Gaussian designs, where the entries of $X$ are i.i.d. $\mathcal{N}(0, 1/n)$, we show that SLOPE, with weights $\lambda_i$ just about equal to $\sigma \cdot \Phi^{-1}(1 - iq/(2p))$ ($\Phi^{-1}(\alpha)$ is the $\alpha$th quantile of a standard normal and $q$ is a fixed number in $(0,1)$), achieves a squared error of estimation obeying
\[
\sup_{\|\beta\|_0 \le k} \, \mathbb{P}\left( \|\hat{\beta}_{\text{SLOPE}} - \beta\|^2 > (1+\epsilon)\, 2\sigma^2 k \log(p/k) \right) \longrightarrow 0
\]
as the dimension $p$ increases to $\infty$, where $\epsilon > 0$ is an arbitrarily small constant. This holds under a weak assumption on the $\ell_0$-sparsity level, namely $k/p \rightarrow 0$ and $(k \log p)/n \rightarrow 0$, and is sharp in the sense that this is the best possible error any estimator can achieve. A remarkable feature is that SLOPE does not require any knowledge of the degree of sparsity, and yet automatically adapts to yield optimal total squared errors over a wide range of $\ell_0$-sparsity classes. We are not aware of any other estimator with this property.
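
The weight sequence and the sorted-$\ell_1$ penalty above translate directly into code. Below is a minimal Python sketch, not the authors' implementation: it computes the Benjamini-Hochberg-style weights $\lambda_i = \sigma \cdot \Phi^{-1}(1 - iq/(2p))$, evaluates the prox of the sorted-$\ell_1$ norm with a stack-based pool-adjacent-violators scheme (in the spirit of the FastProxSL1 algorithm of Bogdan et al., 2014), and fits SLOPE by plain proximal gradient descent. The function names (`slope_weights`, `prox_sorted_l1`, `slope`) and the unaccelerated ISTA solver are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def slope_weights(p, q=0.1, sigma=1.0):
    """BH-style SLOPE weights: lambda_i = sigma * Phi^{-1}(1 - i*q/(2p))."""
    i = np.arange(1, p + 1)
    return sigma * norm.ppf(1.0 - i * q / (2.0 * p))

def prox_sorted_l1(y, lam):
    """Prox of the sorted-l1 norm x -> sum_i lam_i * |x|_(i), lam nonincreasing.

    Sort |y| decreasingly, shift by lam, take the nonincreasing isotonic
    regression (pool adjacent violators), clip at zero, then undo the sort.
    """
    sign = np.sign(y)
    order = np.argsort(-np.abs(y))      # indices that sort |y| decreasingly
    z = np.abs(y)[order] - lam          # shifted, sorted magnitudes
    # Pool adjacent violators: merge blocks whose averages break monotonicity.
    blocks = []                          # each entry: (start, end, block_sum)
    for i, zi in enumerate(z):
        start, end, total = i, i, zi
        while blocks and (blocks[-1][2] / (blocks[-1][1] - blocks[-1][0] + 1)
                          <= total / (end - start + 1)):
            s, _, t = blocks.pop()
            start, total = s, t + total
        blocks.append((start, end, total))
    x = np.empty_like(z)
    for s, e, t in blocks:
        x[s:e + 1] = max(t / (e - s + 1), 0.0)  # block average, clipped at 0
    out = np.empty_like(x)
    out[order] = x                       # undo the sort
    return sign * out

def slope(X, y, lam, n_iter=1000):
    """Plain proximal gradient (ISTA) for 0.5*||y - Xb||^2 + sum_i lam_i |b|_(i)."""
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = prox_sorted_l1(b - X.T @ (X @ b - y) / L, lam / L)
    return b
```

With `lam = slope_weights(p, q, sigma)` this reproduces the weight choice analyzed in the abstract; for serious use, an accelerated solver or a packaged implementation (e.g., the SLOPE R package) would be preferable to this bare ISTA loop.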

Citations (143)
