Enhancing Reverse Distillation with Core Exemplar Learning for Unified Multi-Class Anomaly Detection
Abstract
In electronics manufacturing, anomaly detection methods face significant challenges from class-distribution imbalance and training instability when handling multiple classes simultaneously under varying imaging conditions. To address these challenges, we propose Reverse Distillation with Core Exemplar Learning (RDCEL), a unified anomaly detection framework incorporating domain adaptation and novel metric learning strategies. RDCEL integrates unsupervised domain adaptation to align the covariances of the source and target domains and uses soft-label-based coreset learning to handle diverse class distributions. It also leverages a coreset repulsion loss that minimizes redundancy among coreset representations, fostering a more stable and dispersed embedding space across multiple classes. By aligning spatial statistics across different classes, RDCEL effectively addresses inter-class discrepancies, enabling consistent anomaly scoring under a unified test setting. Extensive experiments show that RDCEL significantly outperforms state-of-the-art methods on the MVTec AD and VisA datasets, achieving superior accuracy, stable AUROC performance, and faster convergence.
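As an informal illustration of the repulsion idea described above (not the paper's exact formulation), the following PyTorch sketch penalizes pairs of coreset exemplars that lie closer together than a margin, which encourages a dispersed, low-redundancy set of representations; the function name `coreset_repulsion_loss` and the margin value are illustrative assumptions.

```python
import torch

def coreset_repulsion_loss(exemplars: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Hinge-style repulsion between coreset exemplars (illustrative sketch).

    exemplars: (K, D) tensor of K coreset feature vectors.
    Pairs whose Euclidean distance falls below `margin` are penalized,
    pushing exemplars apart and reducing redundancy in the coreset.
    """
    # Pairwise Euclidean distances between all exemplars, shape (K, K)
    dists = torch.cdist(exemplars, exemplars, p=2)
    K = exemplars.size(0)
    # Exclude self-distances on the diagonal
    off_diag = ~torch.eye(K, dtype=torch.bool, device=exemplars.device)
    # Hinge penalty: only pairs closer than the margin contribute
    penalties = torch.clamp(margin - dists[off_diag], min=0.0)
    return penalties.mean()


if __name__ == "__main__":
    feats = torch.randn(32, 256)          # 32 exemplars, 256-dim features
    loss = coreset_repulsion_loss(feats)  # scalar term added to the training objective
    print(loss.item())
```

In practice, such a term would be weighted and summed with the main distillation objective; the weighting scheme here is left unspecified, as the abstract does not detail it.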