One paper has been accepted at PRICAI 2022!
Abstract:
Domain adaptation alleviates the performance drop when models are deployed in a target domain. Models assuming a closed-set world fail in realistic open-set scenarios, where the target domain contains novel classes not present in the source domain. Moreover, there are often multiple source domains that share the same label set but have different data distributions. These real-world situations make multi-source open-set domain adaptation (MSOSDA) a practical problem, yet one that has not been fully explored. The difficulty of MSOSDA lies in learning a discriminative feature space common to all domains while maximizing the separation between source classes and target-private ones. In this work, we propose a self-supervised vision transformer (ViT) based nearest-neighbor classifier for MSOSDA. Our key insight is to exploit the strong nearest-neighbor classification property of self-supervised ViTs together with supervised contrastive learning. Straightforward strategies and an adaptive, data-driven threshold are adopted to explicitly align the domains and to recognize open-set classes in the target domain. Extensive experiments on three popular benchmarks demonstrate the effectiveness of our approach.
Check out the paper for details if you are interested.
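To give a flavor of the core idea, here is a minimal sketch of nearest-neighbor open-set classification with a data-driven rejection threshold. Everything here is an assumption for illustration: the function names, the use of class-mean prototypes over (pretend) ViT features, and the particular threshold rule (mean minus one standard deviation of the per-sample maximum similarities) are not the paper's actual implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Unit-normalize features so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def class_prototypes(feats, labels):
    # One prototype per source class (the class-mean feature).
    # A stand-in for features extracted by a self-supervised ViT.
    classes = np.unique(labels)
    protos = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    return l2_normalize(protos), classes

def adaptive_threshold(max_sims):
    # An assumed data-driven rule: reject samples whose best similarity
    # falls below (mean - std) of the batch's max similarities.
    return max_sims.mean() - max_sims.std()

def open_set_nn_predict(target_feats, protos, classes):
    # Cosine similarity of each target sample to every class prototype.
    sims = l2_normalize(target_feats) @ protos.T
    max_sims = sims.max(axis=1)
    tau = adaptive_threshold(max_sims)
    preds = classes[sims.argmax(axis=1)]
    # Label -1 marks an open-set ("unknown") prediction.
    return np.where(max_sims >= tau, preds, -1), tau

# Toy example: two well-separated source classes in 2-D.
src_feats = np.array([[1.0, 0.1], [1.0, -0.1], [0.1, 1.0], [-0.1, 1.0]])
src_labels = np.array([0, 0, 1, 1])
protos, classes = class_prototypes(src_feats, src_labels)

# Two in-distribution target samples and one outlier.
tgt_feats = np.array([[2.0, 0.1], [0.1, 2.0], [-1.0, -1.0]])
preds, tau = open_set_nn_predict(tgt_feats, protos, classes)
print(preds)  # the outlier should be rejected as -1
```

The same prototype-plus-threshold pattern extends directly to multiple source domains by pooling (or aligning) features from all sources before building the prototypes.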