RobuMTL: Enhancing Multi-Task Learning Robustness Against Weather Conditions
Abstract
The Multi-Task Learning (MTL) paradigm has recently emerged as a promising approach to tackling complex problems across domains such as computer vision, reinforcement learning, and natural language processing, and has been successfully deployed in numerous applications, including on edge devices. Consequently, ensuring the robustness of MTL against diverse perturbations, including noise and environmental conditions such as weather, has become a necessity. In this paper, we introduce RobuMTL, a novel architecture that adaptively handles degraded visual input by dynamically selecting task-specific hierarchical Low-Rank Adaptation (LoRA) modules and LoRA squads in a mixture-of-experts manner, conditioned on the perturbation affecting the input. This perturbation-aware specialization enhances robustness across diverse conditions. To validate our approach, we evaluate it on the PASCAL and NYUD-v2 datasets and compare it against single-task models, traditional MTL approaches, and state-of-the-art methods. RobuMTL achieves superior performance, with a 2.8% improvement under single perturbations and up to a 44.4% relative improvement under mixed weather conditions on PASCAL, as well as a 9.7% improvement on NYUD-v2, while maintaining a 3.6× parameter reduction and a 3.52× lower computational cost with only a +3.6 ms per-image latency overhead over the baseline, effectively enhancing robustness and task efficiency under adverse conditions. The code will be publicly available on GitHub.
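To make the core idea concrete, below is a minimal PyTorch sketch of perturbation-conditioned mixture-of-experts routing over a squad of LoRA adapters. It is an illustration under stated assumptions, not the paper's implementation: the class names (LoRAExpert, PerturbationRoutedLoRA), the argument perturb_emb, the rank and expert counts, and the dense soft gating are all hypothetical choices; the actual architecture may use hierarchical adapters and sparse top-k routing.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAExpert(nn.Module):
    """One low-rank adapter: delta(x) = (alpha / r) * B(A(x))."""

    def __init__(self, dim: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, dim) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(dim, rank))          # up-projection, zero-init
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * F.linear(F.linear(x, self.A), self.B)


class PerturbationRoutedLoRA(nn.Module):
    """Frozen base layer plus a squad of LoRA experts mixed by a router.

    The router is conditioned on a perturbation embedding of the input
    (e.g., from a small corruption classifier), not on the features
    themselves, so experts can specialize to rain, fog, noise, etc.
    """

    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.weight.requires_grad_(False)  # pretrained backbone stays frozen
        self.base.bias.requires_grad_(False)
        self.experts = nn.ModuleList(LoRAExpert(dim) for _ in range(num_experts))
        self.router = nn.Linear(dim, num_experts)  # gating network over experts

    def forward(self, x: torch.Tensor, perturb_emb: torch.Tensor) -> torch.Tensor:
        gate = F.softmax(self.router(perturb_emb), dim=-1)          # (B, E)
        deltas = torch.stack([e(x) for e in self.experts], dim=1)   # (B, E, D)
        return self.base(x) + (gate.unsqueeze(-1) * deltas).sum(1)  # soft mixture


# Toy usage: a batch of 2 feature vectors with a perturbation embedding.
layer = PerturbationRoutedLoRA(dim=64)
x = torch.randn(2, 64)
perturb_emb = torch.randn(2, 64)
print(layer(x, perturb_emb).shape)  # torch.Size([2, 64])
```

Routing on a perturbation embedding rather than on the features themselves is what lets each expert specialize to a corruption type; replacing the dense softmax gate with a sparse top-k gate would further cut inference compute.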