SDT-6D: Fully Sparse Depth-Transformer for Staged End-to-End 6D Pose Estimation in Industrial Multi-View Bin Picking
Abstract
Accurately recovering 6D object poses in densely packed industrial bin-picking environments remains a significant challenge, owing to occlusions, specular reflections, and textureless parts. We introduce a holistic, depth-only 6D pose estimation approach that fuses multi-view depth maps into either a fine-grained 3D point cloud (in its vanilla version) or a sparse Truncated Signed Distance Field (TSDF). At the core of our framework lies a staged heatmap mechanism that yields scene-adaptive attention priors across different resolutions, steering computation toward foreground regions and thereby keeping memory requirements feasible at high resolutions. In addition, we propose a density-aware sparse transformer block that dynamically attends to (self-)occlusions and the non-uniform distribution of 3D representations. While sparse 3D processing has proven effective for long-range perception, its potential in close-range robotic applications remains underexplored. The proposed framework is fully sparse, enabling high-resolution volumetric representations that capture the fine geometric details crucial for accurate pose estimation in cluttered scenes. Our method processes the entire scene holistically, predicting 6D poses via a novel per-voxel voting strategy that allows simultaneous pose predictions for an arbitrary number of target objects. We validate our method on the recently published IPD and MV-YCB multi-view datasets, demonstrating competitive performance in heavily cluttered industrial and household bin-picking scenarios.