D2Mamba: Dual Domain Guided Informed Search in State Space Model for Underwater Image Enhancement
Abstract
Underwater images suffer from color distortion, haziness, and low contrast due to light absorption and scattering. Despite advances in deep learning-based enhancement, challenges persist in efficiency, global context modeling, spatial-spectral consistency, and perceptually accurate detail recovery. To address these challenges, we design a novel underwater image enhancement framework, D2Mamba, which couples dual-domain (spatial and frequency) information with state space models (SSMs), enabling efficient global context modeling while preserving local details. Unlike conventional SSMs that rely on raster, bidirectional, cross, or diagonal scans, D2Mamba traverses features with an A* search guided by a physics-based Geodesic Information-Field Heuristic (GIFH), adapting the scan order to the degradation characteristics of the input. GIFH combines feature gradients, high-frequency heterogeneity, and low-frequency semantic distance to compute adaptive traversal costs, enabling the capture of both spatial and spectral dependencies. Further, a Spectral Wasserstein Attenuation Loss (SWAL) is introduced to enforce distributional alignment in the spectral domain, enabling perceptually and physically consistent color restoration in enhanced underwater images. Extensive experiments on benchmark datasets demonstrate that D2Mamba achieves state-of-the-art performance with only 788K parameters and 7.06 GFLOPs.
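To give intuition for the GIFH traversal cost described above, the following is a minimal sketch, not the paper's actual formulation: it combines the three cues the abstract names (feature gradients, high-frequency heterogeneity, and low-frequency semantic distance) into a per-pixel cost map that an A*-style scan could consume. The function name, the weights `alpha`, `beta`, `gamma`, and the specific estimators for each cue are all hypothetical assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def gifh_cost_map(feat: torch.Tensor, alpha=1.0, beta=1.0, gamma=1.0):
    """Hypothetical sketch of a GIFH-style traversal cost (not the paper's
    exact definition). feat: feature map of shape (B, C, H, W).
    Returns a per-pixel cost map of shape (B, H, W)."""
    # 1) Feature-gradient magnitude via finite differences (padded to H x W).
    gx = F.pad(feat[..., :, 1:] - feat[..., :, :-1], (0, 1))
    gy = F.pad(feat[..., 1:, :] - feat[..., :-1, :], (0, 0, 0, 1))
    grad_mag = (gx.pow(2) + gy.pow(2)).sqrt().mean(dim=1)

    # 2) High-frequency heterogeneity: energy of the high-pass residual
    #    left after local smoothing.
    low = F.avg_pool2d(feat, kernel_size=3, stride=1, padding=1)
    high = feat - low
    hf_het = high.pow(2).mean(dim=1)

    # 3) Low-frequency semantic distance: deviation of the smoothed feature
    #    from its image-level mean.
    global_mean = low.mean(dim=(2, 3), keepdim=True)
    lf_dist = (low - global_mean).abs().mean(dim=1)

    # Adaptive cost: low-cost paths follow smooth, semantically coherent
    # regions; an informed search (e.g., A*) would expand cheap pixels first.
    return alpha * grad_mag + beta * hf_het + gamma * lf_dist
```

The resulting cost map would serve as the edge-weight field over which an informed search orders the SSM scan; the actual weighting scheme and search mechanics are defined in the paper body.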
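Similarly, the abstract characterizes SWAL only as "distributional alignment in the spectral domain." One plausible reading, sketched below purely for intuition (the paper's actual loss, including how attenuation enters, may differ), compares the empirical distributions of Fourier amplitudes via the sorted-sample (inverse-CDF) form of the 1-D Wasserstein-1 distance. All names here are assumptions.

```python
import torch

def spectral_wasserstein_loss(pred: torch.Tensor, target: torch.Tensor):
    """Hypothetical sketch of a spectral Wasserstein-style loss (not the
    paper's exact SWAL). pred, target: images of shape (B, C, H, W)."""
    # Amplitude spectra; log1p compresses the heavy-tailed, DC-dominated range.
    mag_p = torch.log1p(torch.fft.rfft2(pred).abs()).flatten(start_dim=1)
    mag_t = torch.log1p(torch.fft.rfft2(target).abs()).flatten(start_dim=1)

    # For 1-D empirical distributions, W1 equals the mean absolute difference
    # of the sorted samples (inverse-CDF formulation).
    return (mag_p.sort(dim=1).values - mag_t.sort(dim=1).values).abs().mean()
```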