
Near-Optimal Sample Complexity Bounds for Maximum Likelihood Estimation of Multivariate Log-concave Densities

(1802.10575)
Published Feb 28, 2018 in math.ST, cs.IT, cs.LG, math.IT, and stat.TH

Abstract

We study the problem of learning multivariate log-concave densities with respect to a global loss function. We obtain the first upper bound on the sample complexity of the maximum likelihood estimator (MLE) for a log-concave density on $\mathbb{R}^d$, for all $d \geq 4$. Prior to this work, no finite sample upper bound was known for this estimator in more than $3$ dimensions. In more detail, we prove that for any $d \geq 1$ and $\epsilon > 0$, given $\tilde{O}_d((1/\epsilon)^{(d+3)/2})$ samples drawn from an unknown log-concave density $f_0$ on $\mathbb{R}^d$, the MLE outputs a hypothesis $h$ that with high probability is $\epsilon$-close to $f_0$, in squared Hellinger loss. A sample complexity lower bound of $\Omega_d((1/\epsilon)^{(d+1)/2})$ was previously known for any learning algorithm that achieves this guarantee. We thus establish that the sample complexity of the log-concave MLE is near-optimal, up to an $\tilde{O}(1/\epsilon)$ factor.
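For concreteness, the squared Hellinger loss appearing in the guarantee can be written out explicitly; the display below uses the standard definition (up to the choice of normalization constant), not notation taken verbatim from the paper:

$$ d_H^2(f, g) \;=\; \frac{1}{2} \int_{\mathbb{R}^d} \left( \sqrt{f(x)} - \sqrt{g(x)} \right)^2 \, dx. $$

In these terms, the upper bound says that given $\tilde{O}_d((1/\epsilon)^{(d+3)/2})$ i.i.d. samples from the unknown log-concave density $f_0$, the MLE output $h$ satisfies $d_H^2(h, f_0) \leq \epsilon$ with high probability.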
