Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourage convex latent distributions (1807.06650v3)

Published 17 Jul 2018 in cs.LG and stat.ML

Abstract: We present a neural network architecture based upon the Autoencoder (AE) and Generative Adversarial Network (GAN) that promotes a convex latent distribution by training adversarially on latent space interpolations. By using an AE as both the generator and discriminator of a GAN, we pass a pixel-wise error function across the discriminator, yielding an AE which produces non-blurry samples that match both high- and low-level features of the original images. Interpolations between images in this space remain within the latent-space distribution of real images as trained by the discriminator, and therefore preserve realistic resemblances to the network inputs. Code available at https://github.com/timsainb/GAIA
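The two core ideas of the abstract can be sketched in a few lines: interpolate between latent codes, then score both real images and decoded interpolants with a pixel-wise reconstruction error through the discriminator-AE. The sketch below is a minimal NumPy illustration of that loss bookkeeping, not the authors' implementation; the function names, toy identity-style "reconstructions," and the BEGAN-style sign convention on the adversarial losses are all assumptions made for illustration (see the linked GAIA repository for the actual model).

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate_latents(z1, z2, alpha):
    # Linear interpolation between two latent codes; training the GAN
    # adversarially on such midpoints is what encourages the latent
    # distribution to become convex.
    return alpha * z1 + (1.0 - alpha) * z2

def pixelwise_error(x, x_hat):
    # Pixel-wise error "passed across the discriminator": the
    # discriminator is itself an AE, so its output is compared to its
    # input per pixel rather than collapsed into a real/fake logit.
    return np.mean(np.abs(x - x_hat))

# Toy stand-ins: in the paper both generator and discriminator are
# full convolutional autoencoders; flat vectors suffice here to show
# how the losses are assembled.
z1 = rng.normal(size=16)
z2 = rng.normal(size=16)
z_mid = interpolate_latents(z1, z2, 0.5)

# Hypothetical BEGAN-style objectives (an assumption, not quoted from
# the paper): the discriminator-AE reconstructs real inputs well and
# interpolated samples poorly; the generator pushes interpolants to
# reconstruct well so they stay on the distribution of real images.
err_real = pixelwise_error(z1, z1)       # real input, perfect toy reconstruction
err_interp = pixelwise_error(z_mid, z1)  # interpolant scored against a real input
d_loss = err_real - err_interp           # discriminator minimizes this
g_loss = err_interp                      # generator minimizes this
```

The convexity claim follows from this setup: if every midpoint between two encoded images must also reconstruct like a real image, straight lines through latent space cannot leave the data distribution.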

Authors (5)
  1. Tim Sainburg (5 papers)
  2. Marvin Thielk (2 papers)
  3. Brad Theilman (2 papers)
  4. Benjamin Migliori (3 papers)
  5. Timothy Gentner (1 paper)
Citations (52)
