Large Language and Vision Models for Autonomous Driving
Abstract
The 5th LLVM-AD workshop invites submissions that advance large language models (LLMs) and vision language models (VLMs) in the domain of autonomous driving. We are particularly interested in bridging the gap between the rich image and language data that arise in autonomous driving contexts. Our primary areas of interest are: a) Traffic Scene Understanding enhanced by VLMs and b) Human-Autonomy Teaming driven by LLMs. Topics include, but are not limited to:
• Large Language Models and Vision Language Models for Autonomous Driving
• Multimodal Motion Planning and Prediction
• New Datasets for Autonomous Driving
• Semantics and Scene Understanding in Autonomous Driving
• Language-Driven Sensor and Traffic Simulation
• Domain Adaptation and Transfer Learning in Autonomous Driving
• Multi-Modal Fusion for Autonomous Driving
• Survey and Prospective Papers for Autonomous Driving
• Other Applications of Language or Vision Models for Driving