Digital Forensic AI You Can Explain: A Case Study on Video Source Camera Identification
Abstract
In recent years, artificial intelligence (AI) has significantly impacted digital forensics, yet its broader deployment remains limited by the difficulty of explaining AI decisions. Explainable AI (XAI) offers a potential means of increasing transparency and trust, but its application in digital forensics remains underexplored. In this work, we present a practical and structured explainable digital forensics AI (xDFAI) approach tailored to the forensic task of video source camera identification (VSCI). Our method enables forensic examiners to interpret the behavior of AI models, assess whether decisions are driven by intended logic or arise from random or content-dependent artifacts, and establish the integrity and reliability of explanations. We implement and evaluate this approach on two state-of-the-art VSCI models, providing step-by-step analyses of explanation quality, spatial consistency of high-impact features, and content dependence. Our results reveal that while the models achieve strong classification accuracy, their explanations lack spatial stability and are influenced by video content, raising concerns about forensic reliability. To support reproducibility and future research, we provide an open-source implementation. This work underscores the potential of XAI to improve transparency in digital forensics and highlights the challenges of interpreting and presenting its results. Our study takes an important step toward the operational deployment of xDFAI in multimedia forensics.