Abstract

Domain shift is a challenging problem for semantic segmentation. A model can easily be trained on synthetic data, where images and labels are artificially generated, but it will perform poorly when deployed in real environments. In this paper, we address the problem of domain adaptation for semantic segmentation of street scenes. Many state-of-the-art approaches focus on translating the source image while imposing that the result should be semantically consistent with the input. However, we advocate that the image semantics can also be exploited to guide the translation algorithm. To this end, we rethink the generative model to enforce this assumption and strengthen the connection between pixel-level and feature-level domain alignment. We conduct extensive experiments by training common semantic segmentation models with our method, and show that the results we obtain on the synthetic-to-real benchmarks surpass the state of the art.
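The abstract only sketches the general recipe: translate synthetic source images toward the real-image style while using the source semantic labels both to guide the translation and to keep a segmentation network consistent on the translated output. The PyTorch snippet below is a minimal sketch of that general family of approaches, not the paper's actual architecture; the class and function names (SemanticsGuidedGenerator, training_step), the 19-class label set, the layer sizes, and the loss weight lambda_seg are all assumptions introduced purely for illustration.

```python
# Illustrative sketch only: a generator conditioned on semantic labels,
# coupled to a segmentation-consistency loss. Shapes, names, and weights
# are assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 19  # e.g. a Cityscapes-style label set (assumption)


class SemanticsGuidedGenerator(nn.Module):
    """Toy encoder-decoder that receives the image concatenated with a
    one-hot semantic map, so the translation is guided by the semantics."""

    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_classes, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, label_map):
        # One-hot encode the labels and stack them with the image channels.
        one_hot = F.one_hot(label_map, self.num_classes)
        one_hot = one_hot.permute(0, 3, 1, 2).float()
        return self.net(torch.cat([image, one_hot], dim=1))


def training_step(generator, segmenter, src_img, src_label, lambda_seg=1.0):
    """Hypothetical step: translate a synthetic batch guided by its labels,
    then require the segmenter to still predict those labels on the translated
    image, coupling pixel-level and feature-level alignment."""
    translated = generator(src_img, src_label)
    seg_logits = segmenter(translated)
    seg_loss = F.cross_entropy(seg_logits, src_label)
    # A full method would add adversarial losses on the translated images
    # and on segmentation features; they are omitted in this sketch.
    return lambda_seg * seg_loss


if __name__ == "__main__":
    gen = SemanticsGuidedGenerator()
    seg = nn.Conv2d(3, NUM_CLASSES, 1)  # stand-in segmenter for the demo
    img = torch.rand(2, 3, 64, 64)
    lbl = torch.randint(0, NUM_CLASSES, (2, 64, 64))
    loss = training_step(gen, seg, img, lbl)
    loss.backward()
    print(f"semantic consistency loss: {loss.item():.4f}")
```

The design choice illustrated here is only the one stated in the abstract: the semantic map is an input that steers the generator, rather than merely a constraint checked on its output.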
