SIDA: Social Media Image Deepfake Detection, Localization and Explanation with Large Multimodal Model

1 University of Liverpool, UK
2 Nanyang Technological University, SG
3 WMG, University of Warwick
4 The Chinese University of Hong Kong, Shenzhen, Guangdong, China
SIDA

Framework comparison. Existing deepfake methods (a-b) are limited to detection, localization, or both. In contrast, SIDA (c) offers a more comprehensive solution, handling detection, localization, and explanation in a single framework.

Abstract

The rapid advancement of generative models in creating highly realistic images poses substantial risks for misinformation dissemination. For instance, a synthetic image, when shared on social media, can mislead extensive audiences and erode trust in digital content, resulting in severe repercussions. Despite some progress, academia has not yet created a large and diversified deepfake detection dataset for social media, nor has it devised an effective solution to address this issue. In this paper, we introduce the Social media Image Detection dataSet (SID-Set), which offers three key advantages: (1) extensive volume, featuring 300K AI-generated/tampered and authentic images with comprehensive annotations; (2) broad diversity, encompassing fully synthetic and tampered images across various classes; and (3) elevated realism, with images that are predominantly indistinguishable from genuine ones through mere visual inspection. Furthermore, leveraging the exceptional capabilities of large multimodal models, we propose a new image deepfake detection, localization, and explanation framework, named SIDA (Social media Image Detection, localization, and explanation Assistant). SIDA not only discerns the authenticity of images but also delineates tampered regions through mask prediction and provides textual explanations of the model's judgment criteria. Extensive experiments on SID-Set and other benchmarks demonstrate that SIDA achieves superior performance compared with state-of-the-art deepfake detection models.

Comparison with Existing Image Deepfake Datasets


Comparison with existing image deepfake datasets. SID-Set addresses the challenges of limited diversity and outdated generative techniques by providing a more comprehensive set of high-quality and diverse images.

Comparison with Existing Related Works


Comparison with existing related works. An (*) indicates methods that have created their own dataset. SIDA stands out by combining diverse datasets and providing a unified solution for deepfake detection, localization, and interpretation.

Tampered Image Generation Pipeline


Tampered image generation pipeline: it consists of four stages: extracting objects from captions using GPT-4o, obtaining object masks with Language-SAM, setting up replacement dictionaries for generating tampered images, and generating new images using Latent Diffusion. This figure illustrates examples of object replacement (e.g., "cat" to "dog") and attribute modification.
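As an illustration, the replacement-dictionary stage described above can be sketched as follows. This is a minimal, self-contained sketch: the GPT-4o object extraction, Language-SAM mask generation, and Latent Diffusion inpainting stages are not shown, and all function names, dictionary contents, and the output-specification format are hypothetical, not the paper's actual implementation.

```python
# Hypothetical sketch of the replacement-dictionary stage of the tampering
# pipeline. The dictionary maps an object extracted from a caption to a
# plausible substitute, as in the paper's "cat" -> "dog" example.
REPLACEMENTS = {
    "cat": "dog",
    "car": "truck",
}

def build_edit_instruction(caption: str, target_object: str) -> dict:
    """Given a caption and an object extracted from it (stage 1), return an
    edit specification that later stages (masking, inpainting) could consume."""
    if target_object not in caption:
        raise ValueError(f"object '{target_object}' not found in caption")
    new_object = REPLACEMENTS.get(target_object, target_object)
    return {
        "original_caption": caption,
        "mask_prompt": target_object,  # would be fed to Language-SAM (stage 2)
        "inpaint_prompt": caption.replace(target_object, new_object),
        "edit_type": ("object_replacement" if new_object != target_object
                      else "attribute_modification"),
    }

spec = build_edit_instruction("a cat sitting on a sofa", "cat")
print(spec["inpaint_prompt"])  # a dog sitting on a sofa
```

The edited prompt would then drive the inpainting model inside the mask region, yielding a tampered image whose annotation (mask plus edit type) comes for free from the specification.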

Generated Image Examples

Method: SIDA


The pipeline of SIDA: given an image Xi and the corresponding text input Xt, the last hidden layer's embedding for the <DET> token provides the detection result. If the detection result indicates a tampered image, SIDA extracts the <SEG> token embedding to generate masks for the tampered regions. This figure shows an example where the man's face has been manipulated.
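The special-token readout described above can be sketched in miniature. This is an illustrative toy, not SIDA's actual implementation: the token ids, the dot-product "detection head," and the dimensions are all invented for the example, and the real model would pass the <SEG> embedding to a mask decoder rather than stop here.

```python
# Toy sketch of reading SIDA-style special tokens out of a decoder's last
# hidden layer: the vector at the <DET> position feeds a detection head,
# and if the image is judged tampered, the <SEG> vector would be handed
# to a mask decoder (not shown). All ids and weights are hypothetical.
DET_TOKEN_ID = 32001  # assumed vocabulary ids for the added special tokens
SEG_TOKEN_ID = 32002

def extract_token_embedding(token_ids, hidden_states, token_id):
    """Return the hidden vector aligned with the first occurrence of token_id."""
    idx = token_ids.index(token_id)
    return hidden_states[idx]

def detect(token_ids, hidden_states, det_head):
    """Score the <DET> embedding with a toy linear head; True means tampered."""
    det_emb = extract_token_embedding(token_ids, hidden_states, DET_TOKEN_ID)
    score = sum(w * h for w, h in zip(det_head, det_emb))
    return score > 0.0

# Toy example: a 3-token sequence with 4-dimensional hidden states.
token_ids = [101, DET_TOKEN_ID, SEG_TOKEN_ID]
hidden = [[0.0] * 4, [0.5, -0.2, 0.1, 0.3], [0.9, 0.1, 0.0, 0.2]]
det_head = [1.0, 1.0, 1.0, 1.0]

tampered = detect(token_ids, hidden, det_head)
seg_emb = extract_token_embedding(token_ids, hidden, SEG_TOKEN_ID) if tampered else None
print(tampered)  # True (score = 0.5 - 0.2 + 0.1 + 0.3 = 0.7 > 0)
```

The design point is that detection and localization share one forward pass: both decisions are read from learned special tokens in the same output sequence.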

Video Presentation

Experiments

Visual Examples of SIDA

Example Output from SIDA

BibTeX

@misc{huang2024sidasocialmediaimage,
  title={SIDA: Social Media Image Deepfake Detection, Localization and Explanation with Large Multimodal Model},
  author={Zhenglin Huang and Jinwei Hu and Xiangtai Li and Yiwei He and Xingyu Zhao and Bei Peng and Baoyuan Wu and Xiaowei Huang and Guangliang Cheng},
  year={2024},
  eprint={2412.04292},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2412.04292},
}