
Minimax rates of entropy estimation on large alphabets via best polynomial approximation

(1407.0381)
Published Jul 1, 2014 in cs.IT, math.IT, math.ST, and stat.TH

Abstract

Consider the problem of estimating the Shannon entropy of a distribution over $k$ elements from $n$ independent samples. We show that the minimax mean-square error is within universal multiplicative constant factors of $$\Big(\frac{k}{n \log k}\Big)^2 + \frac{\log^2 k}{n}$$ if $n$ exceeds a constant factor of $\frac{k}{\log k}$; otherwise there exists no consistent estimator. This refines the recent result of Valiant-Valiant \cite{VV11} that the minimal sample size for consistent entropy estimation scales according to $\Theta(\frac{k}{\log k})$. The apparatus of best polynomial approximation plays a key role in both the construction of optimal estimators and, via a duality argument, the minimax lower bound.
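The rate above has two components: a bias-driven term of order $(k/(n\log k))^2$, dominated by symbols observed few times, and a variance term of order $(\log^2 k)/n$. As a rough illustration only (not the paper's estimator), the sketch below computes the standard plug-in entropy estimate and evaluates the stated minimax rate; the function names and the use of NumPy are assumptions for the sketch.

```python
import numpy as np

def plugin_entropy(counts):
    """Plug-in (MLE) entropy estimate, in nats, from a vector of symbol counts."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p = counts[counts > 0] / n
    return -np.sum(p * np.log(p))

def minimax_mse_rate(n, k):
    """Order of the minimax mean-square error from the abstract:
    (k / (n log k))^2 + (log k)^2 / n, up to universal constants,
    valid when n exceeds a constant multiple of k / log k."""
    return (k / (n * np.log(k))) ** 2 + np.log(k) ** 2 / n

# Example: k = 10^4 symbols, n about 5 * k / log(k) samples from a uniform source.
rng = np.random.default_rng(0)
k = 10_000
n = int(5 * k / np.log(k))
counts = np.bincount(rng.integers(k, size=n), minlength=k)
print("plug-in estimate:", plugin_entropy(counts))   # biased low when n is comparable to k / log k
print("true entropy    :", np.log(k))
print("minimax MSE rate:", minimax_mse_rate(n, k))
```

In this regime the plug-in estimator is biased downward; as the abstract indicates, the optimal estimators are built instead from best polynomial approximations of the function $-x\log x$, which is what reduces the bias to the $(k/(n\log k))^2$ order in the rate above.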
