PoseAdapt: Sustainable Human Pose Estimation via Continual Learning Benchmarks and Toolkit
Abstract
Human pose estimation models are typically retrained from scratch to handle new keypoint definitions, sensing modalities, or deployment domains—a process that is inefficient, compute-intensive, and misaligned with real-world constraints. We present \textbf{ContinualPose}, the first open-source framework and benchmark suite designed for \emph{sustainable pose model adaptation} via continual learning (CL). At its core is \textbf{PoseAdapt}, a suite of domain- and class-incremental benchmarks that simulate realistic adaptation scenarios involving density, lighting, and modality shifts. The framework supports two primary workflows: (i) \textbf{Strategy Benchmarking}, enabling researchers to implement CL methods as plugins and evaluate them under standardized protocols, and (ii) \textbf{Model Adaptation}, allowing practitioners to adapt strong pretrained models to new tasks with minimal supervision. All benchmarks enforce a fixed lightweight backbone, no access to old data, and constrained per-step budgets, isolating the effect of the adaptation strategy. Through extensive experiments, we evaluate popular regularization-based methods under both single-step and sequential adaptation settings, highlighting the challenges of sustaining performance under tight constraints. By bridging modern CL research with the demands of pose estimation, ContinualPose lays the groundwork for adaptable models that evolve over time without repeated full retraining.
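To make the adaptation setting concrete, the following is a minimal, self-contained sketch of the kind of regularization-based method the benchmarks evaluate, here an EWC-style quadratic penalty that anchors new weights to the previous task's weights. All names and values are illustrative assumptions, not part of the ContinualPose API.

```python
def ewc_penalty(params, old_params, fisher, lam=1.0):
    """EWC-style quadratic penalty: pulls each new parameter toward its
    value from the previous task, weighted by a per-parameter importance
    estimate (Fisher information). Illustrative sketch, not the
    ContinualPose implementation."""
    return 0.5 * lam * sum(
        f * (p - p_old) ** 2
        for p, p_old, f in zip(params, old_params, fisher)
    )

# Hypothetical single adaptation step on a new pose domain:
# total loss = new-task loss + ewc_penalty(...).
params = [0.9, 1.4]       # weights after some new-task updates
old_params = [1.0, 1.5]   # weights frozen from the previous task
fisher = [2.0, 0.5]       # importance (higher = keep this weight fixed)

penalty = ewc_penalty(params, old_params, fisher, lam=1.0)
print(penalty)  # prints 0.0125
```

Because the penalty needs only the previous weights and importance estimates, not the old training data, it respects the no-old-data constraint the benchmarks enforce.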