Logit-Adjusted Test-Time Adaptation under Partial Class Imbalance
Abstract
Test-Time Adaptation (TTA) enables deep neural networks to handle distribution shifts without requiring labels at inference. However, existing methods commonly assume complete class overlap between source and target domains, which rarely holds in practice. We study the challenging setting of \textbf{Partial Class Imbalance}, where the target domain contains only a subset of the source classes. We show that entropy minimization--based TTA methods degrade over long test sequences because batch normalization updates bias feature representations toward the visible classes, resulting in skewed predictions. To address this, we propose \textbf{Logit-Adjusted Entropy Minimization}, a simple yet effective strategy that integrates target class priors into the adaptation objective. Our method is model-agnostic and can be seamlessly applied to a wide range of TTA algorithms. Extensive experiments on CIFAR-100-C and ImageNet-C under diverse corruptions and severity levels, as well as on the large-scale DomainNet-126 dataset, demonstrate that our method consistently improves adaptation stability and accuracy for both CNNs and Vision Transformers. Compared to strong baselines, our approach reduces overfitting to visible classes and mitigates performance degradation in long-sequence adaptation. Code is available at \url{https://anonymous.4open.science/r/latte_2025}.
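For concreteness, one way target class priors can enter an entropy-based adaptation objective is through the standard logit-adjustment form; the sketch below is illustrative, and the symbols $f_\theta^{(c)}$, $\pi_c$, and $\tau$ are assumed notation rather than the paper's own:
\[
\tilde{p}_c(x) \;=\; \frac{\pi_c^{\tau}\,\exp\!\big(f_\theta^{(c)}(x)\big)}{\sum_{c'} \pi_{c'}^{\tau}\,\exp\!\big(f_\theta^{(c')}(x)\big)},
\qquad
\mathcal{L}_{\mathrm{adapt}}(x) \;=\; -\sum_{c} \tilde{p}_c(x)\,\log \tilde{p}_c(x),
\]
where $f_\theta^{(c)}(x)$ is the logit for class $c$, $\pi_c$ is an estimate of the target prior for class $c$, and $\tau \ge 0$ controls the strength of the adjustment. Minimizing the entropy of this prior-adjusted posterior, rather than of the plain softmax used by entropy-minimization baselines, down-weights classes that are absent or rare in the target stream.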