Analysis of KNN Density Estimation

(2010.00438)
Published Sep 30, 2020 in stat.ML and cs.LG

Abstract

We analyze the $\ell_1$ and $\ell_\infty$ convergence rates of the k nearest neighbor (kNN) density estimation method. Our analysis covers two cases, depending on whether the support set is bounded. In the first case, the probability density function has bounded support and is bounded away from zero. We show that kNN density estimation is minimax optimal under both the $\ell_1$ and $\ell_\infty$ criteria if the support set is known. If the support set is unknown, the $\ell_1$ convergence rate is unaffected, while the $\ell_\infty$ error does not converge. In the second case, the probability density function can approach zero and is smooth everywhere; moreover, the Hessian is assumed to decay with the density value. Here our result shows that the $\ell_\infty$ error of kNN density estimation is nearly minimax optimal. The $\ell_1$ error does not attain the minimax lower bound, but is still better than that of kernel density estimation.
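For context, the estimator under analysis takes the standard kNN form $\hat f(x) = k / (n\, V_d\, R_k(x)^d)$, where $R_k(x)$ is the distance from $x$ to its $k$-th nearest neighbor among the $n$ samples and $V_d$ is the volume of the unit ball in $\mathbb{R}^d$. A minimal one-dimensional sketch (function and parameter names here are illustrative, not from the paper):

```python
import math

def knn_density(x, samples, k):
    """Sketch of the kNN density estimate at point x from 1-D samples.

    Implements f_hat(x) = k / (n * V_d * R_k(x)^d) for d = 1, where
    R_k(x) is the distance from x to its k-th nearest sample.
    """
    d = 1
    # Distance from x to its k-th nearest neighbor among the samples.
    r_k = sorted(abs(s - x) for s in samples)[k - 1]
    # Volume of the unit ball in R^d (equals 2 for d = 1).
    v_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    return k / (len(samples) * v_d * r_k ** d)
```

For samples roughly uniform on [0, 1], the estimate at an interior point should be close to 1, consistent with the true uniform density.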
