Joint Modeling of Corruption-Driven and Information-Limited Uncertainty for Robust 3D Gaussian Splatting
Abstract
Real-time 3D Gaussian Splatting (3DGS) has emerged as an efficient, high-fidelity alternative to neural radiance fields for novel view synthesis, enabling fast training and real-time rendering via GPU rasterization. However, when input image collections contain transient disturbances (e.g., dynamic objects, exposure variations, motion blur) or provide only sparse view coverage at scene boundaries, 3DGS quality degrades significantly, producing reconstruction artifacts such as ghosting, floaters, and blurred surfaces. In this work, we present a unified framework that jointly addresses two types of artifacts: (1) corruption-driven artifacts, caused by transient or occluded content; and (2) information-limited artifacts, caused by insufficient multi-view observations. Our method leverages the training gradient signal, together with the shape and spatial distribution of the Gaussians, to adaptively suppress unreliable splats through a soft-masking strategy, without relying on any pretrained segmentation or feature networks. Extensive experiments on two real-world datasets with dynamic scenes and sparse camera trajectories demonstrate that our approach outperforms state-of-the-art robust 3DGS and uncertainty-based pruning techniques in both artifact suppression and reconstruction fidelity, while preserving real-time performance.
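To make the soft-masking idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the function and parameter names (soft_mask, tau_g, tau_a, tau_d, k) are hypothetical, and the three cues, accumulated gradient magnitude, anisotropy of the Gaussian scales, and k-nearest-neighbor isolation, are assumed stand-ins for the gradient, shape, and spatial-distribution signals named above.

```python
import torch

def soft_mask(grad_accum, scales, xyz, k=16, tau_g=2.0, tau_a=2.0, tau_d=2.0):
    """Per-Gaussian reliability weight in [0, 1]; low values flag suspect splats.

    grad_accum: (N,) accumulated positional-gradient magnitude per Gaussian
    scales:     (N, 3) linear-space per-axis scales of each Gaussian
    xyz:        (N, 3) Gaussian centers
    """
    eps = 1e-8

    # Gradient cue: transient/corrupted content tends to keep large,
    # unstable gradients late into training.
    g = (grad_accum - grad_accum.mean()) / (grad_accum.std() + eps)

    # Shape cue: needle-like (highly anisotropic) Gaussians often mark floaters.
    aniso = (scales.max(dim=1).values / (scales.min(dim=1).values + eps)).log()
    a = (aniso - aniso.mean()) / (aniso.std() + eps)

    # Spatial cue: isolated Gaussians (large mean k-NN distance) lack
    # multi-view support, typical of under-observed scene boundaries.
    dists = torch.cdist(xyz, xyz)                         # (N, N) pairwise distances
    knn = dists.topk(k + 1, largest=False).values[:, 1:]  # drop self (distance 0)
    iso = knn.mean(dim=1)
    d = (iso - iso.mean()) / (iso.std() + eps)

    # Each standardized cue contributes only beyond its threshold; the
    # exponential turns the combined penalty into a soft multiplicative mask.
    score = torch.relu(g - tau_g) + torch.relu(a - tau_a) + torch.relu(d - tau_d)
    return torch.exp(-score)  # 1.0 for reliable splats, -> 0 for suspect ones


# Example: fade unreliable splats by modulating opacity instead of hard pruning.
N = 1000
w = soft_mask(torch.rand(N), torch.rand(N, 3) * 0.1 + 1e-3, torch.randn(N, 3))
opacity = torch.rand(N) * w
```

Under these assumptions, the returned weight modulates each Gaussian's opacity during rasterization so that unreliable splats fade out smoothly, which is what distinguishes soft masking from hard uncertainty-based pruning.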