T2VWorldBench: A Benchmark for Evaluating World Knowledge in Text-to-Video Generation
Abstract
Text-to-video (T2V) models have demonstrated impressive capabilities in generating visually plausible scenes, yet their ability to leverage world knowledge to ensure semantic consistency and factual accuracy remains largely unexplored. To address this gap, we propose T2VWorldBench, the first systematic benchmark for evaluating the world-knowledge generation capabilities of text-to-video models, covering 6 major categories, 60 subcategories, and 1,200 prompts spanning a wide range of domains, including physics, nature, activity, culture, causality, and objects. To account for both human preference and the need for scalable assessment, our benchmark combines human evaluation with automated evaluation using vision-language models (VLMs). We evaluated 10 of the most advanced text-to-video models currently available, ranging from open-source to commercial systems, and found that most models fail to understand world knowledge and to generate factually correct videos. These findings highlight a critical gap in the ability of current text-to-video models to leverage world knowledge, and point to valuable research opportunities and entry points for building models with robust commonsense reasoning and factual generation capabilities.