SceneShine: Illumination-aware Human Scene Gaussian Re-Splatting from Mobile Device Video
Abstract
Realistic integration of humans into novel 3D scenes requires accurate relighting and shadow generation, yet current vanilla 3D Gaussian Splatting (3DGS) methods struggle with these challenges. We present SceneShine, an illumination-aware 3DGS framework designed for seamless human-scene integration, featuring physically based avatar relighting and shadow generation for realistic scene composition. Relighting human surfaces in in-the-wild videos is challenging due to the inherent ambiguity of physics-based rendering, which struggles to accurately model scene lighting and BRDF properties simultaneously. To tackle this, we leverage a pseudo-global light map prior to guide BRDF parameter decomposition during training, effectively reducing relighting artifacts. We further incorporate point-based ray tracing to handle human-scene occlusions and dynamically recalculate scene colors, ensuring realistic shadow generation. In addition, we propose a synthetic dataset for evaluation. Extensive experiments demonstrate that our method outperforms existing approaches in both reconstruction quality and identity preservation while achieving convincing illumination-aware composition.