Abstract:
Membership Inference Attack (MIA) is a key measure of privacy leakage in Machine Learning (ML) models: it aims to distinguish members of a private training set from non-members by training an attack model. Beyond traditional MIAs, the recently proposed Generative Adversarial Network (GAN)-based MIA lets the adversary learn the distribution of the victim's private dataset, thereby significantly improving attack accuracy. Against both the traditional and this new type of attack, previous defense schemes handle the privacy-utility trade-off poorly. To this end, we propose a defense based on a multi-model ensemble framework. Specifically, we train multiple submodels to hide membership signals and resist MIAs, reducing privacy leakage while preserving the effectiveness of the target model. Our security analysis shows that the scheme provides privacy protection while preserving model utility, and experimental results on widely used datasets show that it effectively resists MIAs with negligible utility loss.
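As a rough illustration of the ensemble idea described in the abstract (not the paper's exact algorithm), the sketch below trains toy submodels on disjoint shards of a private dataset and averages their outputs at inference. Soft-voting over submodels dilutes any single model's memorization of individual training points, which is the intuition behind hiding membership signals. All names and the toy "model" are hypothetical.

```python
# Hedged sketch of a multi-model ensemble defense against membership
# inference: each submodel sees only a disjoint shard of the private
# data, and predictions are averaged (soft voting) at inference time.
import random

def train_submodel(shard):
    # Toy "model": per-class frequency estimates learned from its shard.
    counts = {}
    for _, label in shard:
        counts[label] = counts.get(label, 0) + 1
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def ensemble_predict(submodels, classes):
    # Average the submodels' class scores; no single submodel's
    # memorization dominates the released output.
    return {c: sum(m.get(c, 0.0) for m in submodels) / len(submodels)
            for c in classes}

random.seed(0)
data = [(i, random.choice(["cat", "dog"])) for i in range(300)]
shards = [data[i::3] for i in range(3)]        # disjoint shards
submodels = [train_submodel(s) for s in shards]
scores = ensemble_predict(submodels, ["cat", "dog"])
assert abs(sum(scores.values()) - 1.0) < 1e-9  # valid probability vector
```

A real instantiation would use neural submodels and a principled aggregation rule; the point here is only the structural pattern of sharded training plus averaged inference.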
Source: IEEE TRANSACTIONS ON SERVICES COMPUTING
ISSN: 1939-1374
Year: 2023
Issue: 6
Volume: 16
Page: 4087-4101
Impact Factor: 5.5 (JCR@2023)
JCR Journal Grade:1
CAS Journal Grade:2
ESI Highly Cited Papers on the List: 0