High Dimensional Classification via Regularized and Unregularized Empirical Risk Minimization: Precise Error and Optimal Loss (1905.13742v2)

Published 31 May 2019 in stat.ML and cs.LG

Abstract: This article provides, through theoretical analysis, an in-depth understanding of the classification performance of the empirical risk minimization framework, in both ridge-regularized and unregularized cases, when high dimensional data are considered. Focusing on the fundamental problem of separating a two-class Gaussian mixture, the proposed analysis allows for a precise prediction of the classification error for data vectors $\mathbf{x} \in \mathbb{R}^p$ of sufficiently large dimension $p$. This precise error depends on the loss function, the number of training samples, and the statistics of the mixture data model. It is shown to hold beyond the Gaussian distribution under an additional non-sparsity condition on the data statistics. Building upon this quantitative error analysis, we identify the simple square loss as the optimal choice for high dimensional classification, in both ridge-regularized and unregularized cases, regardless of the number of training samples.
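
The optimality of the square loss lends itself to a quick numerical check. The sketch below is a hypothetical experiment, not the paper's setup: it samples a symmetric two-class Gaussian mixture with means $\pm\boldsymbol{\mu}$ and identity covariance, then compares ridge-regularized ERM under the square loss (which admits a closed form) and the logistic loss (solved by gradient descent). All sizes, the mean vector, and the ridge penalty $\lambda$ are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (hypothetical, not taken from the paper).
p, n_train, n_test = 400, 600, 20_000
mu = np.full(p, 1.5 / np.sqrt(p))  # class means +/- mu, with ||mu|| = 1.5
lam = 1.0                          # ridge penalty, an arbitrary illustrative value


def sample(n):
    """Symmetric two-class Gaussian mixture: y in {-1, +1}, x ~ N(y * mu, I_p)."""
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * mu + rng.standard_normal((n, p))
    return x, y


X, y = sample(n_train)
X_test, y_test = sample(n_test)

# Square loss: ridge-regularized ERM has the closed form
#   w = (X^T X / n + lam * I)^{-1} X^T y / n.
w_square = np.linalg.solve(X.T @ X / n_train + lam * np.eye(p), X.T @ y / n_train)


def logistic_ridge(X, y, lam, steps=1000, lr=0.3):
    """Ridge-regularized logistic ERM solved by plain gradient descent."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(steps):
        margins = y * (X @ w)
        # sigma(-m) = (1 - tanh(m / 2)) / 2, written with tanh for stability
        weights = 0.5 * (1.0 - np.tanh(margins / 2.0))
        grad = -(X.T @ (y * weights)) / n + 2.0 * lam * w
        w -= lr * grad
    return w


w_logistic = logistic_ridge(X, y, lam)

for name, w in [("square", w_square), ("logistic", w_logistic)]:
    err = np.mean(np.sign(X_test @ w) != y_test)
    print(f"{name:8s} loss: test error = {err:.4f}")
```

Note that the paper's optimality statement concerns the high dimensional asymptotic regime, so any single finite-sample run like this one will fluctuate; one would expect the square-loss test error to be comparable to, and asymptotically no worse than, the logistic-loss error.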
