Tight Bounds on the Hardness of Learning Simple Nonparametric Mixtures

(2203.15150)
Published Mar 28, 2022 in cs.LG, math.ST, stat.ML, and stat.TH

Abstract

We study the problem of learning nonparametric distributions in a finite mixture, and establish tight bounds on the sample complexity for learning the component distributions in such models. Namely, we are given i.i.d. samples from a pdf $f$ where $$ f = w_1 f_1 + w_2 f_2, \quad w_1 + w_2 = 1, \quad w_1, w_2 > 0, $$ and we are interested in learning each component $f_i$. Without any assumptions on the $f_i$, this problem is ill-posed. To identify the components $f_i$, we assume that each $f_i$ can be written as a convolution of a Gaussian and a compactly supported density $\nu_i$ with $\text{supp}(\nu_1) \cap \text{supp}(\nu_2) = \emptyset$. Our main result shows that $(\frac{1}{\varepsilon})^{\Omega(\log\log\frac{1}{\varepsilon})}$ samples are required for estimating each $f_i$. The proof relies on a quantitative Tauberian theorem that yields a fast rate of approximation with Gaussians, which may be of independent interest. To show this bound is tight, we also propose an algorithm that uses $(\frac{1}{\varepsilon})^{O(\log\log\frac{1}{\varepsilon})}$ samples to estimate each $f_i$. Unlike existing approaches to learning latent variable models based on moment-matching and tensor methods, our proof instead involves a delicate analysis of an ill-conditioned linear system via orthogonal functions. Combining these bounds, we conclude that the optimal sample complexity of this problem lies strictly between polynomial and exponential, which is uncommon in learning theory.
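As a concrete illustration of the model (not taken from the paper), here is a minimal sketch of how data under the paper's assumptions can be generated: each component $f_i$ is a Gaussian convolved with a compactly supported density $\nu_i$, and the supports of $\nu_1, \nu_2$ are disjoint. The specific choices below (uniform $\nu_i$, the weights, the intervals, and $\sigma$) are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def sample_mixture(n, w1=0.4, sigma=1.0, rng=None):
    """Draw n i.i.d. samples from f = w1*f1 + w2*f2, where each
    f_i is the convolution of N(0, sigma^2) with a compactly
    supported density nu_i, and supp(nu_1), supp(nu_2) are disjoint.

    Illustrative choices (assumptions, not from the paper):
    nu_1 = Uniform[0, 1] and nu_2 = Uniform[3, 4].
    """
    rng = np.random.default_rng(rng)
    # Latent component label: P(component 1) = w1, P(component 2) = 1 - w1.
    z = rng.random(n) < w1
    # Draw the compactly supported part nu_i according to the label.
    latent = np.where(z,
                      rng.uniform(0.0, 1.0, n),
                      rng.uniform(3.0, 4.0, n))
    # Convolving with a Gaussian = adding independent N(0, sigma^2) noise.
    return latent + rng.normal(0.0, sigma, n), z

x, z = sample_mixture(10_000, rng=0)
```

Note that the observed sample `x` alone carries no labels; the learner's task, whose sample complexity the paper pins down, is to recover each smoothed component $f_i$ from `x` without access to `z`.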
