Published June 2024 | Version v1
Conference paper | Open Access

Towards Quantitative Evaluation of Explainable AI Methods for Deepfake Detection

Description

In this paper we propose a new framework for evaluating the performance of explanation methods on the decisions of a deepfake detector. The framework assesses the ability of an explanation method to identify the regions of a fake image with the greatest influence on the detector's decision, by examining the extent to which modifying these regions through a set of adversarial attacks can flip the detector's prediction or reduce the confidence of its initial prediction; we anticipate a larger drop in deepfake detection accuracy and prediction confidence for methods that locate these regions more accurately. Based on this framework, we conduct a comparative study using a state-of-the-art deepfake detection model trained on the FaceForensics++ dataset and five explanation methods from the literature. The findings of our quantitative and qualitative evaluations document the superior performance of the LIME explanation method over the other compared methods, and indicate it as the most appropriate for explaining the decisions of the utilized deepfake detector.
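To illustrate the core idea of the evaluation (this is a minimal sketch, not the paper's exact attack set or protocol), the snippet below assumes a PyTorch deepfake detector whose class index 1 corresponds to "fake", a per-pixel saliency map produced by an explanation method such as LIME, inputs normalized to [0, 1], and a single FGSM-style perturbation step restricted to the most salient pixels; the resulting drop in the detector's "fake" probability serves as the evaluation signal. All names and interfaces here are hypothetical.

```python
import torch
import torch.nn.functional as F

def masked_attack_score(detector, image, saliency_map, top_fraction=0.1, epsilon=0.03):
    """Perturb only the most salient pixels (per the explanation method) and
    measure how much the detector's 'fake' probability drops.

    Assumed interfaces: `detector` maps a (1, C, H, W) tensor in [0, 1] to
    logits over {real: 0, fake: 1}; `saliency_map` is an (H, W) importance
    map produced by an explanation method (e.g., LIME).
    """
    # Binary mask keeping only the top-k% most salient pixels.
    k = max(1, int(top_fraction * saliency_map.numel()))
    threshold = saliency_map.flatten().topk(k).values.min()
    mask = (saliency_map >= threshold).float()      # (H, W)
    mask = mask.unsqueeze(0).unsqueeze(0)           # (1, 1, H, W), broadcast over channels

    image = image.clone().requires_grad_(True)
    logits = detector(image)
    fake_prob_before = F.softmax(logits, dim=1)[0, 1]

    # One FGSM step that increases the loss w.r.t. the 'fake' label,
    # applied only inside the salient region.
    loss = F.cross_entropy(logits, torch.tensor([1]))
    loss.backward()
    perturbed = (image + epsilon * mask * image.grad.sign()).clamp(0, 1).detach()

    with torch.no_grad():
        fake_prob_after = F.softmax(detector(perturbed), dim=1)[0, 1]

    # A larger drop suggests the explanation truly located influential regions.
    return (fake_prob_before - fake_prob_after).item()
```

Averaging this drop over a set of fake images (and, optionally, over several perturbation budgets) would yield a per-method score that can be compared across explanation methods, in the spirit of the paper's quantitative study.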

Files

mad24_tsigos_arxiv.pdf (8.7 MB)
md5:2ae1ff5602a3faaa859c49bedacf5d67

Additional details

Funding

European Commission
AI4TRUST - AI-based-technologies for trustworthy solutions against disinformation (grant 101070190)
European Commission
AI4Media - A European Excellence Centre for Media, Society and Democracy (grant 951911)