Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness
Abstract
The security and robustness of deep neural networks (DNNs) have become increasingly critical as these systems are deployed in sensitive applications. While introducing adversarial examples during training has proven effective for improving robustness, this approach imposes substantial computational burdens that many users cannot afford, and no certified models have been deployed commercially. More concerning, state-of-the-art methods that further enhance robustness by incorporating additional examples from external datasets or generative models increase training costs by orders of magnitude. In this paper, we propose a cost-efficient approach that achieves comparable or superior robustness by leveraging Lipschitz continuity. Our technique remaps the input domain onto a constrained range, effectively reducing the Lipschitz constant and enhancing model resilience against adversarial perturbations. Unlike conventional adversarial training, our method requires only a single pass over the dataset and no gradient estimation, making it remarkably efficient. Our approach also integrates seamlessly with existing adversarially trained models to further boost their robustness. Experiments demonstrate its generalizability across various model architectures and datasets. When combined with models trained without additional generative data, our method achieves robustness comparable to or exceeding that of models trained with extensive supplementary data. These results point to a promising direction for significantly reducing computational costs while maintaining or improving the defensive capabilities of robust neural networks.
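As a brief illustration of the reasoning behind input remapping (the mapping g and constants below are generic placeholders, not the specific data-driven construction proposed in this paper): if the network f is L_f-Lipschitz and the remapping g is L_g-Lipschitz, the composition satisfies

\[
\|f(g(x+\delta)) - f(g(x))\| \;\le\; L_f \,\|g(x+\delta) - g(x)\| \;\le\; L_f L_g \,\|\delta\|,
\]

so any remapping with L_g < 1 tightens the worst-case bound on how much an adversarial perturbation \delta can change the output.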