
Abstract

This paper provides a framework for showing the concentration of solutions $Y^\star$ to convex minimization problems in which the objective function $\phi(X)(Y)$ depends on a random vector $X$ satisfying concentration-of-measure hypotheses. More precisely, the convex problem translates into a contractive fixed-point equation that ensures the transmission of the concentration from $X$ to $Y^\star$. This result is of central interest for characterizing the many machine learning algorithms that are defined through implicit equations (e.g., logistic regression, lasso, boosting, etc.). Based on our framework, we provide precise estimates of the first moments of the solution $Y^\star$ when $X = (x_1, \ldots, x_n)$ is a data matrix with independent columns and $\phi(X)(Y)$ can be written as a sum $\frac{1}{n}\sum_{i=1}^n h_i(x_i^T Y)$. This allows us to describe the behavior and performance (e.g., generalization error) of a wide variety of machine learning classifiers.
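To make the fixed-point reformulation concrete, here is a minimal sketch (not necessarily the paper's exact construction; the step size $\gamma$ and the strong-convexity and smoothness constants $\mu, L$ are assumptions introduced for illustration). If the objective $Y \mapsto \frac{1}{n}\sum_{i=1}^n h_i(x_i^T Y)$ is $\mu$-strongly convex with an $L$-Lipschitz gradient, the first-order optimality condition satisfied by $Y^\star$ rearranges into a fixed-point equation:

$$\frac{1}{n}\sum_{i=1}^n h_i'(x_i^T Y^\star)\, x_i = 0 \quad \Longleftrightarrow \quad Y^\star = F(Y^\star), \qquad F(Y) := Y - \frac{\gamma}{n}\sum_{i=1}^n h_i'(x_i^T Y)\, x_i.$$

For $\gamma \in (0, 2/L)$, the map $F$ is a contraction with factor $\max(|1 - \gamma\mu|,\, |1 - \gamma L|) < 1$, so $Y^\star$ is its unique fixed point; it is through such a contraction that concentration properties of $X$ can be transmitted to $Y^\star$.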
