Abstract:
Although federated learning can protect individual raw data, prior studies have shown that the parameters or gradients shared under federated learning may still leak user privacy. Differential privacy is a promising solution to this problem due to its small computational overhead. At present, differential-privacy-based federated learning generally focuses on the trade-off between privacy and model convergence. Even though differential privacy obscures sensitive information by adding a controlled amount of noise to the confidential data, it also opens a new door for model poisoning attacks: attackers can exploit the noise to escape anomaly detection. In this paper, we propose a novel model poisoning attack called the Model Shuffle Attack (MSA), which shuffles and scales the model parameters in a unique way. Treated as a black box, the malicious model behaves like a benign model on the test set. Unlike other model poisoning attacks, the model produced by MSA retains high accuracy on the test set while slowing the convergence of the global model and even causing it to diverge. Extensive experiments show that, under both FedAvg and robust aggregation rules, MSA significantly degrades the performance of the global model while remaining stealthy. © 2023 Elsevier Inc.
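The general principle behind such shuffle-and-scale attacks can be illustrated with the permutation and scaling symmetries of ReLU networks: hidden units can be permuted and positively rescaled without changing the network's black-box behavior, even though the raw parameter vectors (which the server averages) change drastically. The sketch below is a minimal illustration of that symmetry, not the authors' exact MSA construction; the two-layer MLP, its dimensions, and the random transform are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's exact MSA algorithm): permute and positively
# rescale the hidden units of a two-layer ReLU MLP. The transformed model is
# functionally identical as a black box, yet its parameters differ element-wise,
# so averaging them with benign updates can disrupt the aggregate.

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 8, 16, 4

# Benign parameters (random stand-ins for a trained local model)
W1, b1 = rng.normal(size=(d_hid, d_in)), rng.normal(size=d_hid)
W2, b2 = rng.normal(size=(d_out, d_hid)), rng.normal(size=d_out)

def forward(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)      # ReLU hidden layer
    return W2 @ h + b2

perm = rng.permutation(d_hid)             # shuffle hidden units
s = rng.uniform(0.1, 10.0, size=d_hid)    # positive scales (ReLU is positively homogeneous)

W1_m = W1[perm] * s[:, None]              # permute + scale rows of the first layer
b1_m = b1[perm] * s
W2_m = W2[:, perm] / s[None, :]           # undo the scaling in the next layer
b2_m = b2.copy()

x = rng.normal(size=d_in)
print(np.allclose(forward(x, W1, b1, W2, b2),
                  forward(x, W1_m, b1_m, W2_m, b2_m)))  # True: same black-box outputs
print(np.linalg.norm(W1 - W1_m))          # large: the shared parameters look very different
```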
Source:
Information Sciences
ISSN: 0020-0255
Year: 2023
Volume: 630
Page: 158-172
ESI HC Threshold: 32
CAS Journal Grade: 1
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 13
ESI Highly Cited Papers on the List: 0