
Near-Optimal Density Estimation in Near-Linear Time Using Variable-Width Histograms

(1411.0169)
Published Nov 1, 2014 in cs.LG, cs.DS, math.ST, and stat.TH

Abstract

Let $p$ be an unknown and arbitrary probability distribution over $[0,1)$. We consider the problem of {\em density estimation}, in which a learning algorithm is given i.i.d. draws from $p$ and must (with high probability) output a hypothesis distribution that is close to $p$. The main contribution of this paper is a highly efficient density estimation algorithm for learning using a variable-width histogram, i.e., a hypothesis distribution with a piecewise constant probability density function. In more detail, for any $k$ and $\epsilon$, we give an algorithm that makes $\tilde{O}(k/\epsilon^2)$ draws from $p$, runs in $\tilde{O}(k/\epsilon^2)$ time, and outputs a hypothesis distribution $h$ that is piecewise constant with $O(k \log^2(1/\epsilon))$ pieces. With high probability the hypothesis $h$ satisfies $d_{\mathrm{TV}}(p,h) \leq C \cdot \mathrm{opt}_k(p) + \epsilon$, where $d_{\mathrm{TV}}$ denotes the total variation distance (statistical distance), $C$ is a universal constant, and $\mathrm{opt}_k(p)$ is the smallest total variation distance between $p$ and any $k$-piecewise constant distribution. The sample size and running time of our algorithm are optimal up to logarithmic factors. The "approximation factor" $C$ in our result is inherent in the problem, as we prove that no algorithm with sample size bounded in terms of $k$ and $\epsilon$ can achieve $C<2$ regardless of what kind of hypothesis distribution it uses.
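The paper's algorithm is necessarily more sophisticated than any fixed binning rule, since it must compete with the best $k$-piecewise constant approximation to an arbitrary $p$. Purely as an illustration of the objects in the guarantee above, the sketch below builds a naive variable-width (equal-mass) histogram over $[0,1)$ and evaluates the total variation distance $d_{\mathrm{TV}}$ between two piecewise constant densities. The function names and the equal-mass binning rule are assumptions made for this example; they are not the paper's method.

```python
import numpy as np

def equal_mass_histogram(samples, num_bins):
    """Fit a variable-width histogram over [0, 1) by placing bin edges at
    empirical quantiles, so each bin holds roughly equal sample mass.
    (Illustrative only -- NOT the paper's algorithm.)
    Returns (edges, densities): densities[i] is the constant pdf value
    on [edges[i], edges[i+1]).
    """
    s = np.sort(np.asarray(samples))
    n = len(s)
    # Interior edges at quantiles; clamp the outer edges to cover [0, 1).
    edges = np.quantile(s, np.linspace(0.0, 1.0, num_bins + 1))
    edges[0], edges[-1] = 0.0, 1.0
    edges = np.unique(edges)           # guard against duplicate quantiles
    counts, _ = np.histogram(s, bins=edges)
    widths = np.diff(edges)
    densities = counts / (n * widths)  # piecewise constant pdf, integrates to 1
    return edges, densities

def tv_distance_piecewise(edges_a, dens_a, edges_b, dens_b):
    """Total variation distance between two piecewise constant pdfs on [0, 1):
    d_TV(p, q) = (1/2) * integral over [0,1) of |p(x) - q(x)| dx.
    Computed exactly by refining to the common grid of bin edges.
    """
    grid = np.unique(np.concatenate([edges_a, edges_b]))
    mids = 0.5 * (grid[:-1] + grid[1:])    # each midpoint lies strictly inside
    widths = np.diff(grid)                 # one bin of both partitions
    pa = dens_a[np.searchsorted(edges_a, mids) - 1]
    pb = dens_b[np.searchsorted(edges_b, mids) - 1]
    return 0.5 * np.sum(np.abs(pa - pb) * widths)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    draws = rng.beta(2.0, 5.0, size=20_000)   # a smooth density on [0, 1)
    e1, d1 = equal_mass_histogram(draws[:10_000], num_bins=32)
    e2, d2 = equal_mass_histogram(draws[10_000:], num_bins=32)
    print("d_TV between the two fits:", tv_distance_piecewise(e1, d1, e2, d2))
```

An equal-mass rule puts narrow bins where samples are dense and wide bins where they are sparse, which is the basic appeal of variable-width over fixed-width histograms; the paper's algorithm chooses bin boundaries far more carefully in order to obtain the $C \cdot \mathrm{opt}_k(p) + \epsilon$ guarantee with near-optimal sample size and running time.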
