WACV-2026 Workshop On Generative, Adversarial, Manipulation and Presentation Attacks In Biometrics
Abstract
Modern generative architectures such as Generative Adversarial Networks (GANs) and diffusion models can now produce ultra-realistic content with perceptually convincing geometry, texture, and motion, challenging the human ability to distinguish synthetic from authentic content. While such realism is highly beneficial in sectors like entertainment, media, and content creation, it also poses serious threats to secure access control systems, particularly those based on biometrics. Image and video manipulation attacks have evolved significantly, leveraging both traditional image processing techniques and advanced adversarial machine learning approaches (e.g., GANs and diffusion models). One particularly insidious attack is morphing, in which a single manipulated image can compromise multiple identities, making biometric authentication highly vulnerable. Similarly, DeepFakes threaten the integrity of digital information channels, potentially enabling misinformation, identity fraud, and social engineering attacks at scale.

Alongside visual manipulation, Large Language Models (LLMs) introduce a new dimension of synthetic content creation. LLMs can generate highly coherent text, persuasive narratives, and even phishing content that mimics human writing, which can be exploited for social engineering, spreading disinformation, or automating attacks on information systems. The convergence of visual and textual generative AI thus amplifies the risk landscape, making detection and verification more challenging.

These developments have a dual impact: while they advance content generation, creative applications, education, and simulation-based training, they also threaten trust in digital information, compromise biometric security, and increase vulnerability to identity and information attacks.
Expected outcomes include the development of robust multimodal detection methods for visual and textual synthetic content, the creation of benchmark datasets and evaluation protocols for assessing manipulation detection systems under realistic scenarios, and the enhancement of ethical, legal, and societal frameworks for the responsible deployment of generative AI.

We propose to conduct the eighth edition of the Workshop on Generative, Adversarial, Manipulation and Presentation Attacks in Biometrics at WACV-2026. The workshop will report advances in the creation, evaluation, impact, and mitigation of adversarial attacks (both soft and hard attacks) on biometric systems. It also targets submissions addressing the analysis and mitigation of function creep attacks. This half-day workshop follows seven previous editions of the special session, held in conjunction with BTAS-2018, WACV-2020, WACV-2021, WACV-2022, WACV-2023, WACV-2024, and WACV-2025.