DF-Mamba: Deformable State Space Modeling for 3D Hand Pose Estimation in Interactions
Abstract
Reconstructing everyday hand interactions is often hampered by severe occlusions, such as when two hands overlap, which highlights the need for robust feature learning in 3D hand pose estimation (HPE). To handle such occluded hand images, it is vital to effectively learn the relationship between local image features (e.g., for occluded joints) and global context (e.g., cues from other joints, the other hand, or the surrounding scene). However, most current HPE methods still rely on ResNet for feature extraction, and such a CNN's inductive bias may not be optimal for 3D HPE because of its limited capacity to model global context. To address this limitation, we propose an effective and efficient framework for visual feature extraction in 3D HPE based on recent state space modeling (i.e., Mamba), dubbed Deformable Mamba (DF-Mamba). DF-Mamba is designed to capture global context cues beyond standard convolution through Mamba's selective state modeling and the proposed deformable state scanning. Specifically, for local features after convolution, our deformable scanning aggregates these features across the image while selectively preserving useful cues that represent the global context. This approach substantially improves accuracy on structured prediction tasks such as 3D HPE while achieving faster inference than ResNet50. Our experiments involve extensive evaluations on five datasets covering diverse scenarios, including single-hand and two-hand estimation, hand-only and hand-object interactions, as well as RGB and depth modalities. We demonstrate that DF-Mamba outperforms the latest image backbones, including VMamba and Spatial Mamba, on all datasets and achieves state-of-the-art performance.
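To make the high-level description above concrete, the following is a minimal PyTorch sketch of the general idea: convolutional features are re-sampled at learned offsets (a stand-in for deformable state scanning) and the resulting sequence is aggregated with a simple gated linear recurrence standing in for Mamba's selective state space block. All module names, shapes, and the simplified recurrence are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeformableScan(nn.Module):
    """Sample local CNN features at learned offsets and flatten them into a sequence (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        # Predict a 2D offset per spatial location (hypothetical design choice).
        self.offset_head = nn.Conv2d(channels, 2, kernel_size=3, padding=1)

    def forward(self, feat):                                     # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        offsets = torch.tanh(self.offset_head(feat))             # (B, 2, H, W), in [-1, 1]
        # Base sampling grid in normalized coordinates expected by grid_sample.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=feat.device),
            torch.linspace(-1, 1, w, device=feat.device),
            indexing="ij",
        )
        base = torch.stack((xs, ys), dim=-1).unsqueeze(0)        # (1, H, W, 2)
        grid = base + offsets.permute(0, 2, 3, 1) * 0.1          # small learned shifts
        sampled = F.grid_sample(feat, grid, align_corners=True)  # (B, C, H, W)
        return sampled.flatten(2).transpose(1, 2)                # (B, H*W, C) sequence


class GatedScanBlock(nn.Module):
    """Simplified input-dependent (selective) aggregation over the scanned sequence."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Linear(channels, channels)
        self.proj = nn.Linear(channels, channels)

    def forward(self, seq):                                      # seq: (B, L, C)
        g = torch.sigmoid(self.gate(seq))                        # input-dependent gates
        x = self.proj(seq)
        state = torch.zeros_like(x[:, 0])
        outs = []
        for t in range(seq.size(1)):                             # sequential scan over positions
            state = g[:, t] * state + (1 - g[:, t]) * x[:, t]
            outs.append(state)
        return torch.stack(outs, dim=1)                          # (B, L, C) context-mixed features


if __name__ == "__main__":
    feat = torch.randn(2, 64, 16, 16)                            # toy CNN feature map
    seq = DeformableScan(64)(feat)
    ctx = GatedScanBlock(64)(seq)
    print(ctx.shape)                                             # torch.Size([2, 256, 64])
```

The intent of the sketch is only to show how deformable sampling turns a 2D feature map into a sequence whose scan order and content adapt to the input, so that the subsequent state-space-style scan can propagate global context; the actual DF-Mamba block is defined in the paper.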