Doubly Stochastic Adversarial Autoencoder (1807.07603v1)
Abstract: Any autoencoder network can be turned into a generative model by imposing an arbitrary prior distribution on its hidden code vector. The Variational Autoencoder (VAE) [2] uses a KL-divergence penalty to impose the prior, whereas the Adversarial Autoencoder (AAE) [1] uses a {\it generative adversarial network} (GAN) [3]. GANs trade the complexities of {\it sampling} algorithms for the complexities of {\it searching} for a Nash equilibrium in a minimax game. Such minimax architectures are trained with the help of data examples and of gradients flowing through a generator and an adversary. A straightforward modification of the AAE is to replace the adversary with a maximum mean discrepancy (MMD) test [4-5]. This replacement yields a new type of probabilistic autoencoder, which is also discussed in our paper. We propose a novel probabilistic autoencoder in which the adversary of the AAE is replaced with a space of {\it stochastic} functions. This replacement introduces a new source of randomness, which can be viewed as a continuous control for encouraging {\it exploration}. It prevents the adversary from fitting too closely to the generator and therefore leads to a more diverse set of generated samples.
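As a rough illustration of the MMD variant and of the doubly stochastic idea, here is a minimal PyTorch sketch. The function names, the Gaussian kernel choice, and all hyperparameters (`sigma`, `n_features`, `lam`) are our own illustrative assumptions, not code from the paper; redrawing random Fourier features at every call is one plausible way to realize the paper's "space of stochastic functions".

```python
import math
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of MMD^2 between sample sets x and y (Gaussian kernel)."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def random_fourier_mmd2(x, y, n_features=128, sigma=1.0):
    """MMD^2 via random Fourier features approximating the Gaussian kernel.

    The projections w and phases b are redrawn on every call, so the test
    function changes at each training step -- the extra source of randomness
    described in the abstract (an assumed reading, not the paper's exact code).
    """
    d = x.shape[1]
    w = torch.randn(d, n_features) / sigma      # fresh random directions
    b = 2 * math.pi * torch.rand(n_features)    # fresh random phases
    phi = lambda a: math.sqrt(2.0 / n_features) * torch.cos(a @ w + b)
    return (phi(x).mean(0) - phi(y).mean(0)).pow(2).sum()

# Hypothetical training step: push encoder codes toward a Gaussian prior.
# z = encoder(x)                  # shape (batch, code_dim)
# prior = torch.randn_like(z)     # samples from the imposed prior
# loss = recon_loss + lam * random_fourier_mmd2(z, prior)
```

In this sketch the MMD penalty stands in for the AAE discriminator loss; because the feature map is resampled at every step, no fixed critic can fit too closely to the generator, which is the exploration-encouraging effect the abstract describes.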