FALCONEye: Finding Answers and Localizing Content in ONE-hour-long videos with multi-modal LLMs
Abstract
Finding information in hour-long videos is a challenging task even for top-performing Vision-Language Models (VLMs), as encoding the visual content quickly exceeds available context windows. To tackle this challenge, we present FALCONEye, a novel video agent built on a training-free, model-agnostic meta-architecture that combines a VLM with a Large Language Model (LLM). FALCONEye answers open-ended questions using an exploration-based search algorithm guided by calibrated confidence scores over the VLM's answers. We also introduce FALCON-Bench, a benchmark that extends the question-answering problem to Video Answer Search, requiring models to return both the answer and its supporting temporal window for open-ended questions in hour-long videos. With just a 7B VLM and a lightweight LLM, FALCONEye outperforms all open-source 7B VLMs and comparable agents on FALCON-Bench. It further demonstrates its generalization capability on the MLVU benchmark, which features shorter videos and different tasks, surpassing GPT-4o on single-detail tasks while cutting inference cost by roughly an order of magnitude.
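To make the confidence-guided exploration concrete, the following is a minimal sketch of such a search loop. It is a hypothetical illustration, not the authors' implementation: the callables `vlm_answer` and `propose_clips`, the `Candidate` structure, and the threshold `conf_threshold` are assumptions, standing in for a VLM wrapper that returns an answer with a calibrated confidence and for an initial coarse segmentation of the video.

```python
# Hypothetical sketch of a confidence-guided exploration loop in the spirit of
# FALCONEye. All names here (vlm_answer, propose_clips, conf_threshold) are
# illustrative assumptions, not the paper's API.
from dataclasses import dataclass

@dataclass
class Candidate:
    start: float        # clip start time (seconds)
    end: float          # clip end time (seconds)
    answer: str         # VLM's tentative answer for this clip
    confidence: float   # calibrated confidence in [0, 1]

def search(video, question, vlm_answer, propose_clips,
           conf_threshold=0.8, max_rounds=10):
    """Explore clips iteratively until a sufficiently confident answer is found.

    vlm_answer(video, start, end, question) -> (answer, confidence)
    propose_clips(video) -> list of (start, end) clip boundaries
    """
    clips = propose_clips(video)  # coarse initial segmentation
    best = None
    for _ in range(max_rounds):
        # Query the VLM on each candidate clip and score its answer.
        scored = [Candidate(s, e, *vlm_answer(video, s, e, question))
                  for (s, e) in clips]
        top = max(scored, key=lambda c: c.confidence)
        if best is None or top.confidence > best.confidence:
            best = top
        if best.confidence >= conf_threshold:
            break  # confident enough: stop exploring
        # Otherwise refine the search: zoom into the most promising clip.
        mid = (top.start + top.end) / 2
        clips = [(top.start, mid), (mid, top.end)]
    # Return the answer together with its supporting temporal window,
    # matching the Video Answer Search output format.
    return best.answer, (best.start, best.end)
```

The key design point this sketch captures is that the agent never encodes the full hour of video at once; it spends VLM calls only on clips the search deems promising, which is what keeps inference cost low.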