Learning Multivariate Log-concave Distributions

(1605.08188)
Published May 26, 2016 in cs.LG, cs.IT, math.IT, math.ST, and stat.TH

Abstract

We study the problem of estimating multivariate log-concave probability density functions. We prove the first sample complexity upper bound for learning log-concave densities on $\mathbb{R}^d$, for all $d \geq 1$. Prior to our work, no upper bound on the sample complexity of this learning problem was known for the case of $d>3$. In more detail, we give an estimator that, for any $d \ge 1$ and $\epsilon>0$, draws $\tilde{O}_d\left( (1/\epsilon)^{(d+5)/2} \right)$ samples from an unknown target log-concave density on $\mathbb{R}^d$, and outputs a hypothesis that (with high probability) is $\epsilon$-close to the target in total variation distance. Our upper bound on the sample complexity comes close to the known lower bound of $\Omega_d\left( (1/\epsilon)^{(d+1)/2} \right)$ for this problem.
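To get a feel for how these rates scale, the following minimal Python sketch evaluates the polynomial parts of the two bounds for a few values of $d$ and $\epsilon$. It is illustrative only: it drops the dimension-dependent constants and polylogarithmic factors hidden in the $\tilde{O}_d$ and $\Omega_d$ notation, and the function names are my own, not from the paper.

```python
# Illustrative sketch: polynomial growth of the sample complexity bounds
# from the abstract. Constants and polylog factors in O~_d / Omega_d are
# dropped, so these are rates, not actual sample counts.

def upper_bound_rate(d: int, eps: float) -> float:
    """Upper bound rate from the paper: (1/eps)^((d+5)/2)."""
    return (1.0 / eps) ** ((d + 5) / 2)

def lower_bound_rate(d: int, eps: float) -> float:
    """Known lower bound rate: (1/eps)^((d+1)/2)."""
    return (1.0 / eps) ** ((d + 1) / 2)

if __name__ == "__main__":
    eps = 0.1
    for d in (1, 2, 3, 5):
        up = upper_bound_rate(d, eps)
        lo = lower_bound_rate(d, eps)
        # The exponents differ by 2, so the gap is (1/eps)^2 for every d.
        print(f"d={d}: lower ~ {lo:.3g}, upper ~ {up:.3g}, gap = {up / lo:.3g}")
```

As the printed gap shows, the upper and lower bounds differ by exactly a factor of $(1/\epsilon)^2$ in every dimension, which is the sense in which the paper's upper bound "comes close" to the known lower bound.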
