
Generalized Linear Models with Structured Sparsity Estimators (2104.14371v1)

Published 29 Apr 2021 in stat.ML, cs.LG, and econ.EM

Abstract: In this paper, we introduce structured sparsity estimators for Generalized Linear Models. Structured sparsity estimators with least squares loss were recently introduced by Stucky and van de Geer (2018) for fixed designs and normal errors. We extend their results to debiased structured sparsity estimators with a Generalized Linear Model based loss. Structured sparsity estimation refers to penalized loss functions in which the penalty norm can encode a sparsity structure; examples include the lasso, the weighted group lasso, and norms generated from convex cones. The main difficulty is that it is not clear how to prove two oracle inequalities. The first is for the initial penalized Generalized Linear Model estimator. Since it is not clear how a particular feasible weighted nodewise regression fits into an oracle inequality for the penalized Generalized Linear Model, we need a second oracle inequality to obtain oracle bounds for the approximate inverse of the sample estimate of the second-order partial derivative matrix of the Generalized Linear Model loss. Our contributions are fivefold:

1. We generalize existing oracle inequality results for penalized Generalized Linear Models by proving the underlying conditions rather than assuming them. A key step is the proof of a sample one-point margin condition and its use in an oracle inequality.
2. Our results cover even non-sub-Gaussian errors and regressors.
3. We provide a feasible weighted nodewise regression proof that generalizes the results in the literature from the simple l_1 norm to norms generated from convex cones.
4. We show that the norms used in feasible nodewise regression proofs should be weaker than or equal to the norms used in the penalized Generalized Linear Model loss.
5. We debias the first-step estimator by obtaining an approximate inverse of the singular sample second-order partial derivative matrix of the Generalized Linear Model loss.
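To make the construction concrete, here is a minimal sketch of the two-step procedure the abstract describes; the notation below is illustrative rather than taken from the paper. The initial estimator minimizes a penalized Generalized Linear Model loss,

\[ \hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \; \frac{1}{n} \sum_{i=1}^{n} \rho(y_i, x_i^\top \beta) + \lambda\, \Omega(\beta), \]

where \rho is the GLM-based loss (for instance the negative log-likelihood) and \Omega is a structured sparsity norm: the lasso norm \Omega(\beta) = \|\beta\|_1, a weighted group lasso norm \Omega(\beta) = \sum_{g} w_g \|\beta_g\|_2, or more generally a norm generated from a convex cone. The debiased estimator then applies a one-step correction,

\[ \hat{b} = \hat{\beta} - \hat{\Theta}\, \nabla_\beta \hat{L}_n(\hat{\beta}), \qquad \hat{L}_n(\beta) = \frac{1}{n} \sum_{i=1}^{n} \rho(y_i, x_i^\top \beta), \]

where \hat{\Theta} is an approximate inverse, constructed by feasible weighted nodewise regression, of the (possibly singular) sample second-order partial derivative matrix \nabla^2_\beta \hat{L}_n(\hat{\beta}).

As a further illustration, the sketch below fits a first-step estimator for the logistic (Bernoulli GLM) case with a weighted group lasso penalty via proximal gradient descent. This is a generic textbook implementation for intuition, not the paper's algorithm, and all function names are hypothetical.

```python
import numpy as np

def prox_group_lasso(beta, groups, step, lam, weights):
    """Proximal operator of the weighted group lasso norm:
    block soft-thresholding applied to each coefficient group.
    (Hypothetical helper, not from the paper.)"""
    out = beta.copy()
    for g, w in zip(groups, weights):
        norm_g = np.linalg.norm(beta[g])
        shrink = max(0.0, 1.0 - step * lam * w / norm_g) if norm_g > 0 else 0.0
        out[g] = shrink * beta[g]
    return out

def fit_group_lasso_logistic(X, y, groups, lam=0.1, n_iter=500):
    """Proximal gradient descent on the average logistic negative
    log-likelihood plus a weighted group lasso penalty."""
    n, p = X.shape
    # Step size 1/L, with L = ||X||_2^2 / (4n) an upper bound on the
    # Lipschitz constant of the logistic-loss gradient.
    step = 4.0 * n / (np.linalg.norm(X, 2) ** 2)
    weights = [np.sqrt(len(g)) for g in groups]   # common group-size weighting
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))      # logistic mean function
        grad = X.T @ (mu - y) / n                 # gradient of the average loss
        beta = prox_group_lasso(beta - step * grad, groups, step, lam, weights)
    return beta

# Tiny synthetic example: only the first group is truly active.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
true_beta = np.array([1.5, -2.0, 0.0, 0.0, 0.0, 0.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(fit_group_lasso_logistic(X, y, groups, lam=0.05))
```

The block soft-thresholding step in prox_group_lasso is exactly the proximal operator of the weighted group lasso norm, which is why entire coefficient groups are zeroed out as the penalty level lam grows; swapping in a different structured norm only changes this proximal step.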

Citations (2)

Authors (1)