SeqFeedNet: Sequential Feature Feedback Network for Background Subtraction
Yu-Shun Huang · Jing-Ming Guo · Yi-Xiang Yang
Abstract
Background subtraction (BGS) is a fundamental task in computer vision with applications in video surveillance, object tracking, and recognition. Despite recent advances, many deep learning-based BGS algorithms rely on large models to extract high-level representations, demanding significant computational resources and processing video streams inefficiently. To address these limitations, we introduce the Sequential Feature Feedback Network (SeqFeedNet), a novel supervised algorithm for BGS in unseen videos that operates without additional pre-processing models. SeqFeedNet incorporates sequential features at diverse time scales and feeds them back at each iteration. Moreover, we propose the Sequential Fit Training (SeqFiT) technique, which improves model convergence during training. Evaluated on the CDnet 2014 dataset, SeqFeedNet not only achieves a $\sim5\times$ increase in inference speed but also outperforms the leading supervised algorithms in F-Measure, making it highly suitable for real-world applications. Our experiments demonstrate that SeqFeedNet surpasses the state-of-the-art network that does not use a pre-trained segmentation model by 3.83\% in F-Measure on CDnet 2014, establishing a new benchmark for efficient and effective BGS in unseen videos.