Abstract:
To counter the impact that hard-to-distinguish genuine and forged face-swap images have on people's lives and social stability, efficient deepfake detection algorithms for such images are urgently needed. However, most existing deepfake detection algorithms require extracting a large number of video frames from dataset videos for training. To address this issue, this paper proposes a new deepfake detection algorithm based on a multi-attentional model. First, channel information is added to and fused with the multiple attention maps generated by several spatial attention heads, allowing the model to exploit information from diverse features and yielding richer feature representations. Second, in the texture-enhancement module, a DenseBlock is used to enlarge the receptive field and strengthen shallow-feature extraction, enabling the model to capture finer detail. The proposed method is evaluated on single-frame real faces extracted from the public FaceForensics++ dataset, a few-shot dataset built from faces forged with DeepFake, and a dataset built from faces forged with our own image-editing software. Experimental results show that on the two datasets, AUC and ACC improve over the original model by 1.8% and 2.1%, and by 1.7% and 3.9%, respectively. © 2023 ACM.
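The abstract describes fusing channel information into spatial attention maps produced by multiple attention heads. The following is a minimal illustrative sketch of that general idea (not the paper's actual code): per-channel gating weights are combined with per-channel spatial softmax maps to produce one fused attention map. All function names, shapes, and the squeeze-and-gate scheme are assumptions for demonstration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a flat list of spatial positions."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse_channel_attention(features):
    """features: list of C channels, each a flat H*W list of activations.
    Returns one fused spatial attention map of length H*W."""
    C = len(features)
    n = len(features[0])
    # Channel weight: squeeze each channel to its mean, gate with a sigmoid.
    channel_w = [sigmoid(sum(ch) / len(ch)) for ch in features]
    # Per-channel spatial attention: softmax over spatial positions.
    spatial = [softmax(ch) for ch in features]
    # Fuse: channel-weighted sum of the spatial maps, renormalised
    # so the fused map is still a distribution over positions.
    fused = [sum(channel_w[c] * spatial[c][i] for c in range(C))
             for i in range(n)]
    total = sum(fused)
    return [f / total for f in fused]

# Two toy 2x2 "channels" flattened to length-4 lists.
feats = [[1.0, 2.0, 3.0, 4.0],
         [4.0, 3.0, 2.0, 1.0]]
attn = fuse_channel_attention(feats)
print(len(attn))  # 4 spatial positions
```

In a real detector the channel weights and attention heads would be learned layers; this sketch only shows how channel-level and spatial-level information can be combined into a single map.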
Year: 2023
Language: English
ESI Highly Cited Papers on the List: 0