The information bottleneck and geometric clustering

(1712.09657)
Published Dec 27, 2017 in stat.ML, cs.AI, cs.IT, cs.LG, and math.IT

Abstract

The information bottleneck (IB) approach to clustering takes a joint distribution $P\!\left(X,Y\right)$ and maps the data $X$ to cluster labels $T$ which retain maximal information about $Y$ (Tishby et al., 1999). This objective results in an algorithm that clusters data points based upon the similarity of their conditional distributions $P\!\left(Y\mid X\right)$. This is in contrast to classic "geometric clustering" algorithms such as $k$-means and Gaussian mixture models (GMMs), which take a set of observed data points $\left\{ \mathbf{x}_{i}\right\} _{i=1:N}$ and cluster them based upon their geometric (typically Euclidean) distance from one another. Here, we show how to use the deterministic information bottleneck (DIB) (Strouse and Schwab, 2017), a variant of IB, to perform geometric clustering, by choosing cluster labels that preserve information about data point location on a smoothed dataset. We also introduce a novel method to choose the number of clusters, based on identifying solutions where the tradeoff between the number of clusters used and the spatial information preserved is strongest. We apply this approach to a variety of simple clustering problems, showing that DIB with our model selection procedure recovers the generative cluster labels. We also show that, in particular limits of our model parameters, clustering with DIB and IB is equivalent to $k$-means and EM fitting of a GMM with hard and soft assignments, respectively. Thus, clustering with (D)IB generalizes and provides an information-theoretic perspective on these classic algorithms.
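To make the abstract's recipe concrete, here is a minimal sketch of DIB-style geometric clustering, assuming a location variable $Y$ discretized on a grid, a Gaussian smoothing width `s`, and the hard-assignment self-consistent update from the DIB paper, $f(x) = \arg\max_t \left[ \log q(t) - \beta\, D_{\mathrm{KL}}\!\left(p(y\mid x)\,\|\,q(y\mid t)\right) \right]$. The function name, grid discretization, and parameter defaults are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def dib_geometric_clustering(X, n_clusters, beta, s, grid_pts=40, n_iter=200, seed=0):
    """Hypothetical sketch: geometric clustering via hard DIB updates.

    X is an (N, d) array of points. The "location" variable Y is discretized
    on a grid; each point is smoothed into p(y|x_i) with a Gaussian of width
    s, and the DIB self-consistent equations are iterated with a hard
    encoder f(x).
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    eps = 1e-12

    # Grid covering the data: Y ranges over these discretized locations.
    lo, hi = X.min(axis=0) - 3 * s, X.max(axis=0) + 3 * s
    axes = [np.linspace(lo[j], hi[j], grid_pts) for j in range(d)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, d)
    G = grid.shape[0]

    # Smoothed dataset: p(y | x_i) is a normalized Gaussian kernel at x_i.
    sq = ((X[:, None, :] - grid[None, :, :]) ** 2).sum(axis=-1)
    p_y_x = np.exp(-sq / (2 * s ** 2))
    p_y_x /= p_y_x.sum(axis=1, keepdims=True)
    p_x = np.full(N, 1.0 / N)  # uniform prior over observed points

    f = rng.integers(n_clusters, size=N)  # random initial hard assignment
    for _ in range(n_iter):
        # Distributions induced by the current assignment: q(t) and q(y|t).
        q_t = np.bincount(f, weights=p_x, minlength=n_clusters)
        q_y_t = np.empty((n_clusters, G))
        for t in range(n_clusters):
            mask = f == t
            if mask.any():
                q_y_t[t] = (p_x[mask, None] * p_y_x[mask]).sum(axis=0) / q_t[t]
            else:
                q_y_t[t] = 1.0 / G  # re-seed emptied clusters uniformly

        # Hard DIB update:
        # f(x) = argmax_t [ log q(t) - beta * KL(p(y|x) || q(y|t)) ]
        kl = (p_y_x[:, None, :]
              * (np.log(p_y_x[:, None, :] + eps) - np.log(q_y_t[None] + eps))
              ).sum(axis=-1)
        f_new = np.argmax(np.log(q_t + eps)[None, :] - beta * kl, axis=1)
        if np.array_equal(f_new, f):
            break
        f = f_new
    return f

# Example usage: three well-separated Gaussian blobs in 2D.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(c, 0.3, size=(50, 2))
                    for c in [(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)]])
labels = dib_geometric_clustering(X, n_clusters=3, beta=50.0, s=0.3)
```

For simplicity this sketch fixes $\beta$ and the number of clusters; the paper's model-selection procedure would instead sweep these and select the solution where the tradeoff between the number of clusters used and the spatial information preserved is strongest.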
