Surprisingly simple semi-supervised domain adaptation with pretraining and consistency

Files
0764.pdf (483.78 KB)
Published version
Date
2021
Authors
Saligrama, Venkatesh
Saenko, Kate
Mishra, Samarth
Version
Published version
Citation
V. Saligrama, K. Saenko, and S. Mishra, "Surprisingly simple semi-supervised domain adaptation with pretraining and consistency," in British Machine Vision Conference (BMVC), 2021.
Abstract
Most modern unsupervised domain adaptation (UDA) approaches are rooted in domain alignment: aligning source and target features so that a classifier trained with source labels transfers to the target domain. In semi-supervised domain adaptation (SSDA), where the learner has access to a few target domain labels, prior approaches have followed UDA theory and used domain alignment for learning. We show that the SSDA setting is different and that a good target classifier can be learned without alignment. We use self-supervised pretraining (via rotation prediction) and consistency regularization to produce well-separated target clusters, which aid in learning a low-error target classifier. With our Pretraining and Consistency (PAC) approach, we achieve state-of-the-art target accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets. Despite using simple techniques, PAC performs remarkably well on large and challenging SSDA benchmarks like DomainNet and VisDA-17, often outperforming the recent state of the art by sizeable margins. Code for our experiments can be found at https://github.com/venkatesh-saligrama/PAC.
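
To make the two ingredients named in the abstract concrete, the following PyTorch-style sketch illustrates a rotation-prediction pretraining loss and a pseudo-label-based consistency-regularization loss on unlabeled target images. This is a minimal illustration under assumptions, not the authors' implementation: `backbone`, `rot_head`, `model`, the weak/strong augmentation split, and the 0.95 confidence threshold are hypothetical placeholders; consult the linked repository for the actual code.

import torch
import torch.nn.functional as F

def rotation_pretraining_loss(backbone, rot_head, images):
    # Self-supervised rotation prediction: rotate each image by
    # 0/90/180/270 degrees and train a 4-way head to recover the
    # rotation. `backbone` (feature extractor) and `rot_head`
    # (linear classifier) are assumed modules, not the paper's code.
    rotated, labels = [], []
    for k in range(4):  # k quarter-turns of 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    x = torch.cat(rotated)             # shape (4B, C, H, W)
    y = torch.cat(labels).to(x.device)
    return F.cross_entropy(rot_head(backbone(x)), y)

def consistency_loss(model, weak_views, strong_views, threshold=0.95):
    # Consistency regularization on unlabeled target images:
    # pseudo-label confident predictions on a weakly augmented view,
    # then train the strongly augmented view to match them. The
    # confidence threshold is an illustrative assumption.
    with torch.no_grad():
        probs = F.softmax(model(weak_views), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()  # keep only confident samples
    per_sample = F.cross_entropy(model(strong_views), pseudo, reduction="none")
    return (per_sample * mask).mean()

In a training loop, one would first pretrain `backbone` with the rotation loss, then fine-tune with the supervised loss on labeled source and target examples plus the consistency loss on unlabeled target batches.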
License
Copyright 2021. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.