Abstract:
It is widely acknowledged that Deep Neural Networks (DNNs) are vulnerable to backdoor attacks, wherein attackers inject poisoned samples into the training data so that these tainted inputs are misclassified during model deployment. Given that DNNs typically rely on external training data from untrusted third parties, a robust defense against backdoor attacks during the training phase is imperative. In this paper, we propose a novel Dual-Model Backdoor Defense (DMBD) framework, which enables the learning of resilient representations and the processing of poisoned samples for accurate classification. Our defense framework comprises two steps: first, disrupting the established relationship between the original triggers and the target labels by adding additional triggers to all images and applying multiple data augmentations; second, employing a contrastive learning model to associate unlabeled latent variables in a Gaussian Mixture Model (GMM) with suspicious label annotations. By modeling the underlying data distribution, mislabeled samples are identified as out-of-distribution instances with a two-component GMM and are then re-labeled under a cross-supervised loss for further processing. Extensive experiments demonstrate that our proposed defense (DMBD) consistently delivers stable performance across various attack scenarios. © 2025 IEEE.
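The abstract's two-component GMM step resembles the common practice of fitting a mixture model over per-sample training losses and treating the high-loss component as suspicious. The sketch below illustrates that idea only; it is not the authors' code, and the choice of per-sample loss as the 1-D feature, the function name, and the threshold are all assumptions for illustration.

```python
# Minimal sketch (assumed, not the paper's implementation): fit a
# two-component GMM on per-sample losses and flag the high-loss
# component as suspicious / out-of-distribution samples.
import numpy as np
from sklearn.mixture import GaussianMixture

def split_clean_suspicious(per_sample_losses, threshold=0.5):
    """Return a boolean mask, True for samples judged clean
    (i.e., assigned to the lower-mean loss component)."""
    losses = np.asarray(per_sample_losses, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          max_iter=100, random_state=0)
    gmm.fit(losses)
    # Posterior probability of the lower-mean ("clean") component.
    clean_component = int(np.argmin(gmm.means_.ravel()))
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    # False entries would be handed to the re-labeling stage.
    return p_clean > threshold

# Synthetic usage example: a mostly clean loss distribution with a
# small high-loss tail standing in for poisoned/mislabeled samples.
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.2, 0.05, 900),
                         rng.normal(2.0, 0.30, 100)])
clean_mask = split_clean_suspicious(losses)
print(f"flagged {np.sum(~clean_mask)} of {losses.size} samples as suspicious")
```

In the framework described above, samples flagged this way would then be re-labeled via the cross-supervised loss rather than discarded; the exact feature space and re-labeling procedure are specified in the paper itself.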
Year: 2025
Page: 25-34
Language: English