Semi-supervised Domain Adaptation via Mutual Alignment through Joint Error
Abstract
Most existing methods for unsupervised domain adaptation focus on learning domain-invariant representations. However, recent work has shown that generalization to the target domain can fail under a large domain shift because of the trade-off between marginal distribution alignment and joint error. A few labeled target data points can improve adaptation quality, but the distribution shift between labeled and unlabeled target data is often overlooked. We therefore propose a novel learning theory for semi-supervised domain adaptation that addresses the joint error and reduces the mutual distribution shift between pairs drawn from the labeled and unlabeled domains. Furthermore, we introduce a discrepancy measure between hypotheses to resolve the inconsistency between the loss functions used in the algorithm and those in the theory. Extensive experiments demonstrate that our method consistently outperforms baseline approaches, particularly when the domain shift is large and labeled target data are scarce.