RoadBench: A Vision-Language Foundation Model and Benchmark for Road Damage Understanding
Abstract
Accurate road damage detection is crucial for timely infrastructure maintenance and public safety, but existing vision-only datasets and models lack the rich contextual understanding that textual information can provide. To address this limitation, we introduce \textbf{RoadBench}, the first multimodal benchmark for comprehensive road damage understanding. The dataset pairs high-resolution images of road damage with detailed textual descriptions, providing richer context for model training. We also present \textbf{RoadCLIP}, a novel vision-language model that builds on CLIP with domain-specific enhancements: a disease-aware positional encoding that captures the spatial patterns of road defects, and a mechanism for injecting road-condition priors to refine the model's understanding of road damage. We further employ a GPT-driven data generation pipeline to expand the image–text pairs in RoadBench, greatly increasing data diversity without exhaustive manual annotation. Experiments demonstrate that RoadCLIP achieves state-of-the-art performance on road damage recognition tasks, outperforming existing vision-only models by 19.2%. These results highlight the advantages of integrating visual and textual information for road condition analysis, setting a new benchmark for the field and paving the way for more effective infrastructure monitoring through multimodal learning.