DTMIR-Pro: Domain Translation with Prompt-based Latent-Space Generalization for Multi-Weather Image Restoration
Abstract
Multi-weather image restoration seeks to recover scene visibility under rainy, snowy, and hazy conditions, thereby benefiting downstream high-level vision tasks. Existing methods typically train on combined datasets containing single-type weather degradations, which limits their generalization to real-world scenarios involving mixed degradations. Domain translation has emerged as a viable solution by generating diverse weather-degraded variants of the same scene. However, current approaches require a separate model for each degradation type, resulting in increased system complexity. To address this, we propose DTMIR-Pro, a prompt-based domain translation framework with latent-space generalization for multi-weather image restoration. A single trainable network performs multi-domain translation using domain-adaptive prompts and dynamic kernel selection via a proposed Dynamic Multi-Head Attention block, enabling it to learn diverse degradation patterns. The restoration network takes the translated outputs as input and employs a Multi-Weather Fusion Block with global-local feature streams to capture complex degradations. Furthermore, we introduce a Similarity-Based Encoder Routing mechanism to transfer domain-specific features from the translation encoder to the restoration stage. Extensive experiments on both synthetic and real-world weather-degraded datasets demonstrate the effectiveness and generalizability of the proposed method. Testing code is provided as part of the supplementary material and will be publicly released upon acceptance of the paper.