
On the Latent Space of Wasserstein Auto-Encoders (1802.03761v1)

Published 11 Feb 2018 in stat.ML and cs.LG

Abstract: We study the role of latent space dimensionality in Wasserstein auto-encoders (WAEs). Through experimentation on synthetic and real datasets, we argue that random encoders should be preferred over deterministic encoders. We highlight the potential of WAEs for representation learning with promising results on a benchmark disentanglement task.

Citations (51)
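To make the abstract's distinction between random and deterministic encoders concrete, below is a minimal sketch (not the authors' implementation) of a Wasserstein auto-encoder trained with the MMD-based objective of Tolstikhin et al., where the encoder outputs a Gaussian and a latent code is sampled via the reparameterisation trick. The architecture sizes, latent dimensionality, IMQ kernel scale, and penalty weight `lam` are illustrative assumptions.

```python
# Minimal WAE-MMD sketch with a random Gaussian encoder (illustrative only).
import torch
import torch.nn as nn

latent_dim = 8    # assumed latent dimensionality
data_dim = 784    # e.g. flattened 28x28 images
lam = 10.0        # assumed weight on the latent-space penalty


class RandomEncoder(nn.Module):
    """Maps x to a Gaussian q(z|x) and returns a reparameterised sample."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.log_var = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.net(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # A deterministic encoder would simply return mu here.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)


encoder = RandomEncoder()
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, data_dim), nn.Sigmoid())


def imq_kernel(a, b, scale=2.0 * latent_dim):
    """Inverse multi-quadratic kernel, a common choice for the WAE-MMD penalty."""
    d2 = torch.cdist(a, b) ** 2
    return scale / (scale + d2)


def mmd(z_q, z_p):
    """Biased MMD estimate between encoded codes z_q and prior samples z_p."""
    return (imq_kernel(z_q, z_q).mean() + imq_kernel(z_p, z_p).mean()
            - 2.0 * imq_kernel(z_q, z_p).mean())


opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                       lr=1e-3)


def train_step(x):
    z = encoder(x)                 # random encoder: sample a latent code
    x_rec = decoder(z)
    rec_loss = ((x - x_rec) ** 2).sum(dim=1).mean()
    z_prior = torch.randn_like(z)  # samples from the N(0, I) prior
    loss = rec_loss + lam * mmd(z, z_prior)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Example: one training step on a random batch of 64 "images".
print(train_step(torch.rand(64, data_dim)))
```

The only change needed to obtain a deterministic encoder is to return `mu` directly instead of sampling, which is the comparison the paper studies with respect to latent space dimensionality.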
