
De-Biasing The Lasso With Degrees-of-Freedom Adjustment (1902.08885v3)

Published 24 Feb 2019 in math.ST, stat.ML, and stat.TH

Abstract: This paper studies schemes to de-bias the Lasso in a linear model $y=X\beta+\epsilon$ where the goal is to construct confidence intervals for $a_0^T\beta$ in a direction $a_0$, where $X$ has iid $N(0,\Sigma)$ rows. We show that previously analyzed propositions to de-bias the Lasso require a modification in order to enjoy efficiency in a full range of sparsity. This modification takes the form of a degrees-of-freedom adjustment that accounts for the dimension of the model selected by the Lasso. Let $s_0$ be the true sparsity. If $\Sigma$ is known and the ideal score vector proportional to $X\Sigma^{-1}a_0$ is used, the unadjusted de-biasing schemes proposed previously enjoy efficiency if $s_0\lll n^{2/3}$. However, if $s_0\ggg n^{2/3}$, the unadjusted schemes cannot be efficient in certain $a_0$: then it is necessary to modify existing procedures by a degrees-of-freedom adjustment. This modification grants asymptotic efficiency for any $a_0$ when $s_0/p\to 0$ and $s_0\log(p/s_0)/n \to 0$. If $\Sigma$ is unknown, efficiency is granted for general $a_0$ when $$\frac{s_0\log p}{n}+\min\Big\{\frac{s_\Omega\log p}{n},\frac{\|\Sigma^{-1}a_0\|_1\sqrt{\log p}}{\|\Sigma^{-1/2}a_0\|_2\sqrt n}\Big\}+\frac{\min(s_\Omega,s_0)\log p}{\sqrt n}\to0$$ where $s_\Omega=\|\Sigma^{-1}a_0\|_0$, provided that the de-biased estimate is modified with the degrees-of-freedom adjustment. The dependence in $s_0$, $s_\Omega$ and $\|\Sigma^{-1}a_0\|_1$ is optimal. Our estimated score vector provides a novel methodology to handle dense $a_0$. Our analysis shows that the degrees-of-freedom adjustment is not needed when the initial bias in direction $a_0$ is small, which is granted under stringent conditions on $\Sigma^{-1}$. The main proof argument is an interpolation path similar to that typically used to derive Slepian's lemma. It yields a new $\ell_\infty$ error bound for the Lasso which is of independent interest.
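The adjustment the abstract describes is easy to see numerically. Below is a minimal sketch for the known-$\Sigma$ case with the ideal score $z_0 \propto X\Sigma^{-1}a_0$: the bias-correction term, usually normalized by $n$, is instead normalized by $n-\widehat{\mathrm{df}}$, where $\widehat{\mathrm{df}}$ is the size of the Lasso's selected support (the Lasso's degrees of freedom). All simulation parameters, the tuning $\lambda$, the noise level being treated as known, and the exact normalization reflect one reading of the abstract, not the paper's precise prescriptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# ---- Hypothetical simulation setup (illustrative, not from the paper) ----
rng = np.random.default_rng(0)
n, p, s0 = 500, 200, 10
Sigma = np.eye(p)                          # known design covariance (identity for simplicity)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
beta = np.zeros(p)
beta[:s0] = 1.0                            # true s0-sparse coefficient vector
sigma_noise = 1.0                          # treated as known in this sketch
y = X @ beta + sigma_noise * rng.standard_normal(n)

a0 = np.zeros(p)
a0[0] = 1.0                                # direction of interest: first coordinate

# Lasso at a universal-type tuning (a common choice, not the paper's exact one)
lam = 2.0 * sigma_noise * np.sqrt(np.log(p) / n)
beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
residual = y - X @ beta_hat
df_hat = np.count_nonzero(beta_hat)        # Lasso degrees of freedom = support size

# Ideal score vector for known Sigma, proportional to X Sigma^{-1} a0
Sigma_inv_a0 = np.linalg.solve(Sigma, a0)
z0 = X @ Sigma_inv_a0

# Unadjusted de-biased estimate: correction term normalized by n
theta_unadj = a0 @ beta_hat + z0 @ residual / n

# Degrees-of-freedom adjusted estimate: normalize by n - df_hat instead
theta_adj = a0 @ beta_hat + z0 @ residual / (n - df_hat)

# Nominal 95% interval using the efficient asymptotic variance
# sigma^2 * a0' Sigma^{-1} a0 / n (assumes known noise level)
se = sigma_noise * np.sqrt(a0 @ Sigma_inv_a0 / n)
ci = (theta_adj - 1.96 * se, theta_adj + 1.96 * se)
print(theta_unadj, theta_adj, ci)
```

In regimes the abstract flags ($s_0 \ggg n^{2/3}$), the unadjusted `theta_unadj` can carry first-order bias in certain directions $a_0$, while the adjusted `theta_adj` remains a candidate for efficiency; at small sparsity the two normalizations nearly coincide since $\widehat{\mathrm{df}} \ll n$.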

Citations (27)
