Unsupervised Many-to-Many Image-to-Image Translation Across Multiple Domains

(1911.12552)
Published Nov 28, 2019 in cs.CV, cs.LG, and eess.IV

Abstract

Unsupervised multi-domain image-to-image translation aims to synthesize images across multiple domains without labeled data, a task that is more general and more challenging than one-to-one image mapping. However, existing methods mainly focus on reducing the large cost of modeling and do not pay enough attention to the quality of the generated images. In some target domains, their translation results may fall short of expectations or even suffer from mode collapse. To improve image quality, we propose an effective many-to-many mapping framework for unsupervised multi-domain image-to-image translation. Our method has two key aspects. The first is a many-to-many architecture with a single domain-shared encoder and several domain-specialized decoders, which translates images across multiple domains effectively and simultaneously. The second is a pair of constraints, extended from one-to-one mappings, that further improve generation quality. All evaluations demonstrate that our framework is superior to existing methods and provides an effective solution for multi-domain image-to-image translation.
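The core architectural idea in the abstract — one domain-shared encoder feeding several domain-specialized decoders, so any source image can be routed to any target domain through a common latent space — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; all class and function names are invented for this example, and the encoder/decoder bodies are trivial stand-ins for the real convolutional networks.

```python
# Toy sketch of a many-to-many translation architecture: a single shared
# encoder plus one decoder per target domain. With N domains this needs
# 1 encoder + N decoders, instead of N*(N-1) separate one-to-one generators.
# All names here are illustrative assumptions, not from the paper's code.

class SharedEncoder:
    """Single encoder shared by all source domains (stand-in for a CNN)."""
    def encode(self, image):
        # Produce a "latent code" in the domain-shared space.
        return [pixel * 0.5 for pixel in image]

class DomainDecoder:
    """One decoder specialized to a single target domain (stand-in)."""
    def __init__(self, domain_shift):
        self.domain_shift = domain_shift
    def decode(self, latent):
        # Map a shared latent code back to an image in this domain.
        return [z + self.domain_shift for z in latent]

class ManyToManyTranslator:
    """Routes any source image to any target domain via the shared latent space."""
    def __init__(self, domains):
        self.encoder = SharedEncoder()
        self.decoders = {name: DomainDecoder(shift)
                         for name, shift in domains.items()}
    def translate(self, image, target_domain):
        latent = self.encoder.encode(image)
        return self.decoders[target_domain].decode(latent)

# Usage: one model instance serves every (source, target) domain pair.
translator = ManyToManyTranslator({"photo": 0.0, "sketch": 1.0, "painting": 2.0})
result = translator.translate([2.0, 4.0], "sketch")
print(result)  # -> [2.0, 3.0]
```

Because the encoder is shared, adding a new domain only requires training one additional decoder, which is the modeling-cost advantage the abstract alludes to.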
