Taking Control of Intra-class Variation in Conditional GANs Under Weak Supervision

(arXiv:1811.11296)
Published Nov 27, 2018 in cs.CV, cs.AI, and cs.LG

Abstract

Generative Adversarial Networks (GANs) are able to learn mappings between simple, relatively low-dimensional, random distributions and points on the manifold of realistic images in image-space. The semantics of this mapping, however, are typically entangled such that meaningful image properties cannot be controlled independently of one another. Conditional GANs (cGANs) provide a potential solution to this problem, allowing specific semantics to be enforced during training. This solution, however, depends on the availability of precise labels, which are sometimes difficult or near impossible to obtain, e.g. labels representing lighting conditions or describing the background. In this paper we introduce a new formulation of the cGAN that is able to learn disentangled, multivariate models of semantically meaningful variation and which has the advantage of requiring only the weak supervision of binary attribute labels. For example, given only labels of ambient / non-ambient lighting, our method is able to learn multivariate lighting models disentangled from other factors such as the identity and pose. We coin the method intra-class variation isolation (IVI) and the resulting network the IVI-GAN. We evaluate IVI-GAN on the CelebA dataset and on synthetic 3D morphable model data, learning to disentangle attributes such as lighting, pose, expression, and even the background.
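
Below is a minimal sketch, in PyTorch, of the binary-gated conditioning idea the abstract describes: each attribute receives a block of continuous latent parameters that is zeroed out whenever that attribute's binary label is negative, so those dimensions can only encode variation within the positive class (e.g. different lighting directions when the lighting is not ambient). The class and variable names (IVIGenerator, attr_params) and the layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of binary-gated attribute conditioning; not the authors' code.
import torch
import torch.nn as nn


class IVIGenerator(nn.Module):
    """Generator whose input concatenates a noise vector with per-attribute
    continuous parameter vectors masked by binary attribute labels.

    When an attribute label is 0, its parameter block is zeroed, so the
    generator can only use those dimensions to model intra-class variation
    of the positive class.
    """

    def __init__(self, z_dim=128, n_attrs=4, params_per_attr=8, img_channels=3):
        super().__init__()
        in_dim = z_dim + n_attrs * params_per_attr
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256 * 4 * 4),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, img_channels, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, attr_labels, attr_params):
        # attr_labels: (B, n_attrs) binary {0, 1}
        # attr_params: (B, n_attrs, params_per_attr) continuous variation codes
        gated = attr_params * attr_labels.unsqueeze(-1)  # zero out inactive attributes
        x = torch.cat([z, gated.flatten(start_dim=1)], dim=1)
        return self.net(x)


if __name__ == "__main__":
    gen = IVIGenerator()
    z = torch.randn(2, 128)
    labels = torch.tensor([[1, 0, 1, 0], [0, 1, 1, 1]], dtype=torch.float32)
    params = torch.randn(2, 4, 8)
    imgs = gen(z, labels, params)
    print(imgs.shape)  # torch.Size([2, 3, 32, 32])
```

In this sketch the discriminator would, as in a standard cGAN setup, see only the binary labels, while the gated continuous parameters remain free for the generator to repurpose as a multivariate model of within-class variation.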
