Emergent Mind

Learning Mixtures of Spherical Gaussians via Fourier Analysis

(2004.05813)
Published Apr 13, 2020 in cs.DS and cs.LG

Abstract

Suppose that we are given independent, identically distributed samples $x_l$ from a mixture $\mu$ of no more than $k$ $d$-dimensional spherical gaussian distributions $\mu_i$ with variance $1$, such that the minimum $\ell_2$ distance between two distinct centers $y_l$ and $y_j$ is greater than $\sqrt{d}\,\Delta$ for some $c \leq \Delta$, where $c \in (0,1)$ is a small positive universal constant. We develop a randomized algorithm that learns the centers $y_l$ of the gaussians to within an $\ell_2$ distance of $\delta < \frac{\Delta\sqrt{d}}{2}$ and the weights $w_l$ to within $c\,w_{\min}$, with probability greater than $1 - \exp(-k/c)$. The number of samples and the computational time are bounded above by $\mathrm{poly}(k, d, \frac{1}{\delta})$. Such a bound on the sample and computational complexity was previously unknown when $\omega(1) \leq d \leq O(\log k)$. When $d = O(1)$, this follows from work of Regev and Vijayaraghavan. These authors also show that the sample complexity of learning a random mixture of gaussians in a ball of radius $\Theta(\sqrt{d})$ in $d$ dimensions, when $d$ is $\Theta(\log k)$, is at least $\mathrm{poly}(k, \frac{1}{\delta})$, showing that our result is tight in this case.
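
The Fourier-analytic starting point is that the characteristic function of a unit-variance spherical gaussian mixture factors as $\hat{\mu}(t) = e^{-\|t\|^2/2} \sum_l w_l e^{i\langle t, y_l\rangle}$, so dividing out the gaussian factor leaves a trigonometric sum whose truncated Fourier inversion peaks near the centers $y_l$. The sketch below illustrates only this deconvolution idea on a synthetic one-dimensional instance; it is not the authors' algorithm, and the toy instance (`centers`, `weights`), the frequency cutoff `T`, and the Fejér taper are all choices made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance (not from the paper): a 1-D mixture of
# unit-variance gaussians with well-separated centers.
centers = np.array([-4.0, 0.0, 5.0])
weights = np.array([0.3, 0.5, 0.2])
n = 20_000

labels = rng.choice(len(centers), size=n, p=weights)
samples = centers[labels] + rng.standard_normal(n)

# Empirical characteristic function on a frequency grid:
#   phi(t) ~= sum_l w_l * exp(i t y_l) * exp(-t^2 / 2).
T = 2.0                      # cutoff; keeps the e^{t^2/2} noise blow-up mild
ts = np.linspace(-T, T, 201)
phi = np.exp(1j * np.outer(ts, samples)).mean(axis=1)

# Deconvolve the unit-variance gaussian factor and apply a triangular
# (Fejer) taper so the inversion kernel is nonnegative.
g = phi * np.exp(ts**2 / 2) * (1.0 - np.abs(ts) / T)

# Truncated Fourier inversion: local maxima of `score` sit near the centers.
grid = np.linspace(-8.0, 8.0, 801)
dt = ts[1] - ts[0]
score = (np.exp(-1j * np.outer(grid, ts)) @ g).real * dt / (2 * np.pi)

# Report local maxima above a crude threshold as center estimates.
mask = ((score[1:-1] > score[:-2]) & (score[1:-1] > score[2:])
        & (score[1:-1] > 0.25 * score.max()))
print("estimated centers:", np.round(grid[1:-1][mask], 2))
```

With these parameters the reported peaks land close to $-4$, $0$, and $5$. The cutoff `T` trades resolution against the $e^{t^2/2}$ amplification of sampling noise in the deconvolution step, which loosely mirrors why a separation assumption of order $\sqrt{d}\,\Delta$ helps in the paper's setting.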
