Temporal Object Captioning for Street Scene Videos from LiDAR Tracks
Abstract
Video captioning models have seen notable advances in recent years, especially in their ability to capture temporal information. While much research has focused on architectural improvements, such as temporal attention mechanisms, there remains a notable gap in understanding how models capture and utilize temporal semantics for effective temporal feature extraction, especially in the context of Advanced Driver Assistance Systems. We propose an automated LiDAR-based captioning procedure that focuses on the temporal dynamics of traffic participants. Our approach uses a rule-based system to extract essential details, such as lane position and relative motion, from object tracks, followed by template-based caption generation. Our findings show that training SwinBERT, a video captioning model, on front-camera images alone, supervised with our template-based captions designed to encapsulate fine-grained temporal behavior, consistently improves temporal understanding across three datasets. In conclusion, our results demonstrate that LiDAR-based caption supervision significantly enhances temporal understanding, reducing the visual/static biases prevalent in current state-of-the-art model architectures.
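To make the described pipeline concrete, the following is a minimal Python sketch of how rule-based temporal caption generation from a LiDAR object track could look. All names (TrackState, relative_motion, the distance threshold) and the template wording are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class TrackState:
    t: float    # timestamp in seconds
    x: float    # longitudinal distance to ego vehicle in meters (positive = ahead)
    lane: str   # lane position label, e.g. "ego lane", "left lane", "right lane"

def relative_motion(track: list[TrackState], eps: float = 0.5) -> str:
    """Classify longitudinal motion from the net change in distance over the track.

    The threshold eps (meters) is an assumed value separating genuine motion
    from measurement noise.
    """
    dx = track[-1].x - track[0].x
    if dx < -eps:
        return "approaching the ego vehicle"
    if dx > eps:
        return "pulling away from the ego vehicle"
    return "keeping a constant distance to the ego vehicle"

def caption(obj_class: str, track: list[TrackState]) -> str:
    """Fill a fixed sentence template with lane position and relative motion."""
    return f"A {obj_class} in the {track[-1].lane} is {relative_motion(track)}."

# Example: a car ahead in the left lane whose gap shrinks over one second
states = [TrackState(0.0, 30.0, "left lane"), TrackState(1.0, 24.0, "left lane")]
print(caption("car", states))
# -> "A car in the left lane is approaching the ego vehicle."
```

Captions produced this way encode temporal behavior explicitly, so they can supervise a video captioning model such as SwinBERT toward temporal rather than purely static visual cues.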