Task Guided Compositional Representation Learning for ZDA

arXiv:2109.05934
Published Sep 13, 2021 in cs.CV

Abstract

Zero-shot domain adaptation (ZDA) methods aim to transfer knowledge about a task learned in a source domain to a target domain when data from the target domain are not available. In this work, we address learning feature representations that are invariant across and shared among different domains, taking task characteristics into account for ZDA. To this end, we propose a method for task-guided ZDA (TG-ZDA) that employs multi-branch deep neural networks to learn feature representations by exploiting their domain invariance and shareability properties. The proposed TG-ZDA models can be trained end-to-end without requiring synthetic tasks or data generated from estimated representations of target domains. TG-ZDA has been evaluated on benchmark ZDA tasks on image classification datasets. Experimental results show that our proposed TG-ZDA outperforms state-of-the-art ZDA methods across different domains and tasks.
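
The abstract does not spell out the architecture, so the following is only a minimal PyTorch sketch of the general idea it describes: a multi-branch network with a shared (domain-invariant) branch and a domain-specific branch whose features are fused for the task classifier. The class name `MultiBranchZDANet`, the branch layout, and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a multi-branch feature extractor for ZDA-style training.
# Branch layout, backbone choice, and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class MultiBranchZDANet(nn.Module):
    """Hypothetical two-branch network: one branch for domain-invariant
    features shared across domains, one for domain-specific features,
    concatenated and fed to a task classifier."""

    def __init__(self, feat_dim: int = 128, num_classes: int = 10):
        super().__init__()
        # Shared (domain-invariant) branch.
        self.invariant_branch = self._make_branch(feat_dim)
        # Domain-specific branch (in practice there could be one per
        # observed domain; a single branch is shown for brevity).
        self.specific_branch = self._make_branch(feat_dim)
        # Task classifier operating on the concatenated branch features.
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    @staticmethod
    def _make_branch(feat_dim: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        inv = self.invariant_branch(x)    # features intended to transfer across domains
        spec = self.specific_branch(x)    # features tied to the source domain
        return self.classifier(torch.cat([inv, spec], dim=1))


if __name__ == "__main__":
    model = MultiBranchZDANet()
    logits = model(torch.randn(4, 3, 32, 32))  # batch of 4 RGB 32x32 images
    print(logits.shape)  # torch.Size([4, 10])
```

In an actual end-to-end setup, losses encouraging domain invariance of the shared branch and task accuracy of the classifier would be combined during training; the specific objectives are described in the paper itself.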
