SAFE 2026 – Synthetic & Adversarial ForEnsics
Abstract
The rise of generative AI and foundation models presents new challenges for ensuring robustness against synthetic and adversarial media. Research in adversarial machine learning has shown that detection systems can be bypassed with subtle perturbations, allowing malicious content to evade scrutiny and undermine societal trust and national security. This workshop offers a venue for advancing work at the intersection of synthetic media forensics and adversarial robustness, with a focus on provenance analysis, fingerprinting, authenticity verification, and resilience across diverse generative architectures. Expected outcomes include a taxonomy of joint synthetic–adversarial threats, benchmark resources for evaluation, and stronger collaboration among the technical, forensic, and policy communities.