NAPP: Noise-Adaptive Prototype Perturbation for Few-Shot Learning
Abstract
Few-shot learning aims to generalize deep models to novel categories with only a handful of labeled examples, but existing methods remain vulnerable to task-irrelevant noise, unstable prototype estimation, and limited adaptability under domain shift. To address these issues, we propose the Noise-Adaptive Prototype Perturbation Network (NAPP), a framework that enhances robustness and generalization for few-shot image classification. NAPP introduces three key innovations: (1) a Noise Cancellation Mechanism embedded in Vision Transformer self-attention layers that dynamically suppresses spurious, task-irrelevant features; (2) a MixPerturbation Module that perturbs class prototypes through augmented feature combinations, yielding more stable and transferable prototype representations; and (3) an Adaptive Noise-Conditioned Meta-Learning scheme that fine-tunes only the noise-related parameters (less than 0.02% of the model) at meta-test time, enabling rapid, efficient adaptation to unseen classes without eroding pretrained knowledge. Extensive experiments demonstrate that NAPP achieves competitive or superior performance relative to state-of-the-art few-shot classification methods on both in-domain and challenging cross-domain benchmarks. These results establish NAPP as a parameter-efficient and domain-robust framework, underscoring its practical effectiveness in real-world few-shot learning scenarios.
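To make the prototype-perturbation idea concrete, the sketch below contrasts standard prototypical-network prototypes (class-wise means of support features) with an illustrative MixPerturbation variant that averages prototypes built from random convex mixtures of same-class support features. The mixture scheme here (Dirichlet-weighted combinations) is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def prototypes(support_feats, labels, n_classes):
    # Standard prototypical-network prototype: mean of each class's support features.
    return np.stack([support_feats[labels == c].mean(axis=0) for c in range(n_classes)])

def mix_perturb(support_feats, labels, n_classes, n_mixes=4, alpha=0.5):
    # Illustrative MixPerturbation: build each class prototype as the average of
    # several random convex combinations of that class's support features.
    # (Hypothetical form; NAPP's actual perturbation scheme may differ.)
    protos = []
    for c in range(n_classes):
        feats = support_feats[labels == c]
        mixed = []
        for _ in range(n_mixes):
            w = rng.dirichlet(np.full(len(feats), alpha))  # random convex weights
            mixed.append(w @ feats)                        # one perturbed prototype
        protos.append(np.mean(mixed, axis=0))
    return np.stack(protos)
```

Because each mixture is convex, perturbed prototypes stay inside the convex hull of the class's support features, so the perturbation diversifies the prototype estimate without leaving the class manifold.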