Co-Teaching for Unsupervised Domain Adaptation and Expansion

(2204.01210)
Published Apr 4, 2022 in cs.CV

Abstract

Unsupervised Domain Adaptation (UDA) essentially trades a model's performance on a source domain for improved performance on a target domain. To address this issue, Unsupervised Domain Expansion (UDE) has recently been proposed. UDE adapts the model to the target domain as UDA does, while simultaneously maintaining its source-domain performance. In both the UDA and UDE settings, a model tailored to a given domain, be it the source or the target domain, is assumed to handle samples from that domain well. We question this assumption by reporting the existence of cross-domain visual ambiguity: given the lack of a crystal-clear boundary between the two domains, samples from one domain can be visually close to the other domain. Such samples are typically a minority in their host domain, so they tend to be overlooked by the domain-specific model, yet they can be better handled by a model from the other domain. We exploit this finding and accordingly propose Co-Teaching (CT), instantiated as knowledge distillation based CT (kdCT) plus mixup based CT (miCT). Specifically, kdCT transfers knowledge from a leading-teacher network and an assistant-teacher network to a student network, so that cross-domain ambiguity is better handled by the student. Meanwhile, miCT further enhances the generalization ability of the student. Extensive experiments on two image classification datasets and two driving-scene segmentation datasets verify the viability of CT for both UDA and UDE.
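The abstract does not spell out the loss formulation, but its two named ingredients, two-teacher knowledge distillation and mixup, have standard forms. Below is a minimal PyTorch sketch under that assumption; all names (`kd_ct_loss`, `mixup`, `T`, `w_lead`, `alpha`) and the fixed teacher weighting are illustrative assumptions, not the paper's notation or method.

```python
# Hedged sketch: two-teacher knowledge distillation (kdCT-like) plus
# mixup (miCT-like). Hyperparameters and the weighting scheme are
# assumptions for illustration, not taken from the paper.
import torch
import torch.nn.functional as F

def kd_ct_loss(student_logits, lead_logits, assist_logits, T=2.0, w_lead=0.7):
    """Distill from a leading teacher and an assistant teacher.

    w_lead weights the leading teacher, (1 - w_lead) the assistant;
    this fixed weighting is illustrative only.
    """
    s_log_prob = F.log_softmax(student_logits / T, dim=1)
    lead_prob = F.softmax(lead_logits.detach() / T, dim=1)
    assist_prob = F.softmax(assist_logits.detach() / T, dim=1)
    kd_lead = F.kl_div(s_log_prob, lead_prob, reduction="batchmean") * T * T
    kd_assist = F.kl_div(s_log_prob, assist_prob, reduction="batchmean") * T * T
    return w_lead * kd_lead + (1.0 - w_lead) * kd_assist

def mixup(x, y_soft, alpha=0.2):
    """Standard mixup on inputs and soft targets (Zhang et al., 2018)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_soft + (1.0 - lam) * y_soft[perm]
    return x_mix, y_mix
```

In a UDE setting, the leading teacher would presumably be the model from the domain a sample visually resembles more; since the abstract does not specify how the two teachers are assigned or weighted, the fixed `w_lead` above is purely a placeholder.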
