Abstract:
Federated learning enables collaborative model training while preserving user privacy. However, as privacy-preserving federated learning is deployed more widely, poisoning attacks threaten model utility. Existing defense schemes suffer from low accuracy, weak robustness, and reliance on strong assumptions, which limit the practicality of federated learning. To address these problems, we propose RFed, a robustness-enhanced privacy-preserving federated learning scheme with scaled dot-product attention under a dual-server model. Specifically, we design a highly robust defense mechanism that replaces the traditional single-server model with a dual-server model, significantly improving model accuracy and eliminating the reliance on strong assumptions. Formal security analysis proves that our scheme converges and provides privacy protection, and extensive experiments demonstrate that it reduces computational overhead while guaranteeing privacy preservation and model accuracy, and keeps the failure rate of poisoning attacks above 96%.
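The abstract's core building block, scaled dot-product attention, can be illustrated in isolation. The sketch below is not the paper's actual RFed protocol (which involves a dual-server model and cryptographic privacy protection); it only shows, under assumed toy data, how attention weights computed as softmax(QKᵀ/√d)·V can down-weight a client update that deviates from the others, which is the intuition behind an attention-based poisoning defense:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)    # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V, weights

# Toy illustration (hypothetical data, not from the paper): three client
# updates, one of which is adversarially flipped and scaled.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(2, 4)) + 1.0   # two similar benign updates
poisoned = -5.0 * np.ones((1, 4))                  # one poisoned update
V = np.vstack([benign, poisoned])                  # all client updates as rows
Q = benign.mean(axis=0, keepdims=True)             # reference query direction
aggregated, weights = scaled_dot_product_attention(Q, V, V)
```

Because the poisoned row is nearly anti-aligned with the query, its softmax weight collapses toward zero, so the attention-weighted aggregate stays close to the benign updates, whereas a plain average would be dragged negative.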
Source:
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
ISSN: 1556-6013
Year: 2024
Volume: 19
Page: 5814-5827
Impact Factor: 6.300 (JCR@2023)