Frequency Is What You Need: Word-Frequency Text Masking Benefits Vision-Language Model Pre-training
Abstract
Vision-Language Models (VLMs) can be trained more efficiently if their training sets can be reduced in size. Recent work has shown the benefits of masking text during VLM training using a variety of strategies (truncation, random masking, block masking, and syntax masking) and has reported syntax masking as the top performer. In this paper, we analyze the impact of different text masking strategies on the word frequency distribution of the training data and show that this impact is connected to model success. Motivated by this finding, we propose a new frequency-based text masking approach, Contrastive Language-Image Pre-training with Word Frequency Masking (CLIPF). Extensive experiments demonstrate the advantages of CLIPF over syntax masking and other existing approaches, particularly as the number of input tokens decreases. We show that not only CLIPF but also other existing masking strategies outperform syntax masking when enough training epochs are used, a finding of practical importance when selecting a text masking method for VLM training. Our code is available online.
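To make the idea of frequency-based text masking concrete, the sketch below shows one plausible way to mask caption words in inverse proportion to their corpus frequency. It is an illustration only, not the CLIPF implementation: the function name `frequency_mask`, the frequency table `word_freq`, the token budget `num_keep`, and the exponent `alpha` are all assumptions introduced here; the weighting actually used by CLIPF is defined in the paper.

```python
# A minimal sketch of frequency-based text masking (illustrative only).
# Assumptions: `word_freq` is a corpus-level word-count table, and rarer
# words are retained with higher probability. The exact weighting used by
# CLIPF is specified in the paper, not reproduced here.
from collections import Counter
import numpy as np

def frequency_mask(caption: str, word_freq: dict, num_keep: int, alpha: float = 1.0):
    """Keep `num_keep` words of `caption`, preferring low-frequency words."""
    words = caption.split()
    if len(words) <= num_keep:
        return words
    # Weight each word by an inverse power of its corpus frequency.
    weights = np.array([1.0 / (word_freq.get(w, 1) ** alpha) for w in words])
    probs = weights / weights.sum()
    # Sample word positions without replacement, then restore the original order.
    kept = set(np.random.choice(len(words), size=num_keep, replace=False, p=probs).tolist())
    return [w for i, w in enumerate(words) if i in kept]

# Toy usage: frequent words ("a", "photo", "of") are masked out more often.
captions = ["a photo of a dog on a sunny beach", "a photo of a red vintage car"]
word_freq = Counter(w for c in captions for w in c.split())
print(frequency_mask(captions[0], word_freq, num_keep=4))
```

Under this assumed scheme, high-frequency function words are the most likely to be dropped, so the tokens that survive a reduced input budget tend to be the content words that carry the image-relevant information.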