Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models

Aghasanli, Agil and Kangin, Dmitry and Angelov, Plamen (2023) Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023. Computer Vision Foundation, FRA, pp. 467-474.

Full text not available from this repository.

Abstract

Deepfake detection is the task of recognizing and distinguishing real content from content generated by deep learning algorithms, often referred to as deepfakes. To counter the rising threat of deepfakes and maintain the integrity of digital media, research is under way to create more reliable and precise detection techniques. In recent years, deep learning models such as Stable Diffusion have become able to generate more detailed and less blurry images. In this paper, we develop a deepfake detection technique to distinguish original images from fake images generated by various Diffusion Models. The developed methodology takes advantage of features from fine-tuned Vision Transformers (ViTs), combined with existing classifiers such as Support Vector Machines (SVMs). We demonstrate the proposed methodology's ability to provide interpretability through prototypes by analysing the support vectors of the SVMs. Additionally, because the topic is new, there is a lack of open datasets for deepfake detection; to evaluate the methodology, we have therefore created custom datasets by applying various generative techniques of Diffusion Models to open datasets (ImageNet, FFHQ, Oxford-IIIT Pet). The code is available at https://github.com/lira-centre/
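The abstract describes a pipeline of ViT feature extraction followed by an SVM whose support vectors act as prototypes. The following is a minimal sketch of that idea, not the authors' implementation: it assumes a stock torchvision ViT-B/16 checkpoint in place of their fine-tuned ViTs, and random placeholder features in place of the real/fake image datasets; the helper name extract_features is illustrative.

import numpy as np
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights
from sklearn.svm import SVC

# Pretrained ViT-B/16 used as a feature extractor (the paper fine-tunes
# its ViTs; a stock ImageNet checkpoint is used here for illustration).
weights = ViT_B_16_Weights.IMAGENET1K_V1
model = vit_b_16(weights=weights)
model.heads = torch.nn.Identity()  # drop the classification head -> 768-d features
model.eval()
preprocess = weights.transforms()

@torch.no_grad()
def extract_features(images):
    # Map a list of PIL images to ViT embeddings of shape (n, 768).
    batch = torch.stack([preprocess(img) for img in images])
    return model(batch).numpy()

# Placeholder features and labels (0 = real, 1 = diffusion-generated);
# in practice these would come from extract_features on the datasets.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 768)).astype(np.float32)
y = np.array([0] * 20 + [1] * 20)

svm = SVC(kernel="linear").fit(X, y)

# Interpretability through prototypes: the support vectors are the
# training examples that define the decision boundary, so mapping
# svm.support_ back to the original images yields prototype real and
# fake examples that can be shown alongside a prediction.
print("prototype (support-vector) training indices:", svm.support_)

In this reading, "interpretability through prototypes" amounts to inspecting which training images become support vectors: those images are the concrete examples the classifier's boundary rests on.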

Item Type:
Contribution in Book/Report/Proceedings
Uncontrolled Keywords:
Research Output Funding: yes (externally funded)
ID Code:
207252
Deposited On:
01 Nov 2023 09:00
Refereed?:
Yes
Published?:
Published
Last Modified:
03 Feb 2024 02:05