Gaussian AutoEncoder

(1811.04751)
Published Nov 12, 2018 in cs.LG and stat.ML

Abstract

Generative AutoEncoders require a chosen probability distribution in latent space, usually a multivariate Gaussian. The original Variational AutoEncoder (VAE) uses randomness in the encoder, causing problematic distortion and overlaps in latent space for distinct inputs. This randomness turned out to be unnecessary: we can instead use a deterministic encoder with an additional regularizer that keeps the sample distribution in latent space close to the required one. The original such approach (WAE) uses the Wasserstein metric, which requires comparison with a random sample and an arbitrarily chosen kernel. The later CWAE derived a non-random analytic formula by averaging the $L_2$ distance of the Gaussian-smoothed sample over all 1D projections. However, these arbitrarily chosen regularizers do not actually lead to a Gaussian distribution. This article proposes regularizers that directly optimize agreement between the empirical distribution function and the desired CDF of chosen statistics, for example radii and pairwise distances for a Gaussian distribution, or coordinate-wise values, to directly attract this distribution in the latent space of an AutoEncoder. This general approach can also attract other distributions: for example, a uniform latent distribution on the $[0,1]^D$ hypercube or torus would allow data compression without entropy coding, and increased density near codewords would optimize for a required quantization.
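
To make the proposed regularization concrete, below is a minimal sketch of the idea in PyTorch. It penalizes disagreement between the empirical distribution of chosen latent statistics (here squared radii, which follow a chi-squared distribution with $D$ degrees of freedom under $N(0,I)$, and individual coordinates, which are standard normal) and their known CDFs, via a Cramér-von Mises style mean squared difference. The function names and the exact penalty form are illustrative assumptions, not the paper's formulas; pairwise distances, also mentioned in the abstract, are omitted for brevity.

```python
import math
import torch

def cdf_agreement_loss(values, cdf):
    """Cramer-von Mises style penalty (an assumed form, not the paper's
    exact formula): mean squared difference between the target CDF at the
    sorted sample values and the empirical CDF positions (i - 1/2)/n."""
    n = values.shape[0]
    sorted_vals, _ = torch.sort(values)  # sorting is differentiable in the values
    positions = (torch.arange(n, dtype=values.dtype, device=values.device) + 0.5) / n
    return torch.mean((cdf(sorted_vals) - positions) ** 2)

def gaussian_latent_regularizer(z):
    """Attract a latent batch z of shape (n, D) toward N(0, I) by matching
    the distributions of squared radii and of individual coordinates."""
    n, D = z.shape
    # Under N(0, I), the squared radius follows chi^2 with D degrees of
    # freedom; its CDF is the regularized lower incomplete gamma P(D/2, r2/2).
    chi2_cdf = lambda r2: torch.special.gammainc(torch.full_like(r2, D / 2.0), r2 / 2.0)
    # Each coordinate should be standard normal: Phi(x) = (1 + erf(x/sqrt(2)))/2.
    normal_cdf = lambda x: 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))
    radius_loss = cdf_agreement_loss((z ** 2).sum(dim=1), chi2_cdf)
    coord_loss = cdf_agreement_loss(z.reshape(-1), normal_cdf)
    return radius_loss + coord_loss
```

In training, such a term would simply be added to the reconstruction loss of a deterministic AutoEncoder, e.g. `loss = recon_loss + lam * gaussian_latent_regularizer(encoder(x))`, replacing the VAE's stochastic encoder and KL term.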
