
Ensemble Estimation of Information Divergence

(arXiv:1601.06884)
Published Jan 26, 2016 in cs.IT and math.IT

Abstract

Recent work has focused on the problem of nonparametric estimation of information divergence functionals. Many existing approaches are restrictive in their assumptions on the density support set, or require difficult calculations at the support boundary, which must be known a priori. The MSE convergence rate of a leave-one-out kernel density plug-in divergence functional estimator is derived for general bounded density support sets, where knowledge of the support boundary is not required. The theory of optimally weighted ensemble estimation is generalized to derive a divergence estimator that achieves the parametric rate when the densities are sufficiently smooth. The asymptotic distribution of this estimator and some guidelines for tuning-parameter selection are provided. Based on the theory, an empirical estimator of Rényi-α divergence is proposed that outperforms the standard kernel density plug-in estimator, especially in high dimensions. The estimator is shown to be robust to the choice of tuning parameters. As an illustration, we use the estimator to estimate bounds on the Bayes error rate of a classification problem.
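To make the plug-in construction mentioned in the abstract concrete, the sketch below shows one way a leave-one-out kernel density plug-in estimate of the Rényi-α divergence between two samples could be computed. This is a minimal illustration under stated assumptions, not the authors' exact estimator: the Gaussian kernel, the bandwidth rule, and all function names are choices made here for the example, and the optimally weighted ensemble step is omitted.

```python
import numpy as np

def gaussian_kde(train, query, h):
    """Gaussian kernel density estimate built from `train`,
    evaluated at the rows of `query` (bandwidth h, d-dimensional data)."""
    d = train.shape[1]
    diff = query[:, None, :] - train[None, :, :]          # (m, n, d)
    sq = np.sum(diff ** 2, axis=-1)                       # squared distances
    kern = np.exp(-sq / (2 * h ** 2)) / ((2 * np.pi * h ** 2) ** (d / 2))
    return kern.mean(axis=1)

def renyi_divergence_plugin(x, y, alpha=0.8, h=None):
    """Plug-in estimate of D_alpha(f || g) from samples x ~ f and y ~ g,
    using a leave-one-out KDE for f at the points of x.
    Uses D_alpha = log( E_f[(f/g)^(alpha-1)] ) / (alpha - 1)."""
    n, d = x.shape
    if h is None:
        h = n ** (-1.0 / (d + 2))   # illustrative bandwidth; not the paper's rule
    # Leave-one-out KDE of f at each x_i: drop the self-kernel term.
    diff = x[:, None, :] - x[None, :, :]
    sq = np.sum(diff ** 2, axis=-1)
    kern = np.exp(-sq / (2 * h ** 2)) / ((2 * np.pi * h ** 2) ** (d / 2))
    np.fill_diagonal(kern, 0.0)
    f_hat = kern.sum(axis=1) / (n - 1)
    # Ordinary KDE of g, evaluated at the x_i.
    g_hat = gaussian_kde(y, x, h)
    ratio = f_hat / g_hat
    return np.log(np.mean(ratio ** (alpha - 1))) / (alpha - 1)

# Example: divergence between two Gaussian samples with shifted means.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(1000, 2))
y = rng.normal(0.5, 1.0, size=(1000, 2))
print(renyi_divergence_plugin(x, y, alpha=0.8))
```

In the ensemble approach the abstract describes, several such base estimates (roughly, computed with different bandwidth scalings) would be combined with optimally chosen weights so that lower-order bias terms cancel, which is what allows the parametric MSE rate to be reached for sufficiently smooth densities.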

