Abstract:
Emerging Federated Graph Learning (FGL) offers promising collaborative training on distributed graph data. However, malicious actors may contaminate data streams by falsifying node relationships on clients or by mounting adversarial attacks on edge servers, causing degraded inference and privacy leakage. Although some studies address privacy-preserving FGL, they consider neither robustness nor membership privacy under data pollution and adversarial attacks. Moreover, classic FGL commonly adopts FedAvg but neglects the impact of uneven information flow from distinct subtopologies. To address these challenges, we propose GuardFGL, a novel similarity-driven FGL framework that extracts minimal sufficient information from polluted data to maintain strong adversarial robustness and protect membership privacy. First, we incorporate structure-aware and feature-selection learning to explore target-relevant edges and features, avoiding privacy leakage from raw data. Next, we design an original Federated Graph Information Bottleneck (FGIB) principle to supervise the extraction of well-compressed information, mitigating the interference of polluted data streams. Finally, we develop a similarity-driven federated aggregation scheme with auxiliary local information to alleviate the impact of uneven information flow. Extensive experiments on a real-world testbed and benchmark graph datasets demonstrate that GuardFGL achieves more robust prediction and better membership-privacy protection than state-of-the-art methods under adversarial attacks. © 2025 ACM.
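Note: For context, the information bottleneck principle that FGIB builds on seeks a representation Z of the input X that minimizes I(X; Z) − β I(Z; Y), i.e., Z stays predictive of the target Y while compressing away the rest of X; the abstract does not give FGIB's exact federated formulation. It likewise does not spell out the aggregation rule, so the following is only a minimal sketch of similarity-driven aggregation in general, not GuardFGL's actual design: each client's update is weighted by its cosine similarity to the mean update before averaging. The function name, the mean-update reference, and the softmax temperature are all illustrative assumptions.

import numpy as np

def similarity_weighted_aggregate(client_updates, temperature=1.0):
    # client_updates: list of clients, each a list of NumPy parameter arrays.
    # Flatten each client's parameter arrays into a single vector.
    flat = [np.concatenate([p.ravel() for p in u]) for u in client_updates]
    mean = np.mean(flat, axis=0)
    # Cosine similarity of each client's update to the mean update.
    sims = np.array([
        v @ mean / (np.linalg.norm(v) * np.linalg.norm(mean) + 1e-12)
        for v in flat
    ])
    # Softmax over similarities gives the aggregation weights, so outlying
    # (e.g., polluted) updates receive less weight than in plain FedAvg.
    weights = np.exp(sims / temperature)
    weights /= weights.sum()
    # Similarity-weighted average of the original parameter arrays.
    return [
        sum(w * u[i] for w, u in zip(weights, client_updates))
        for i in range(len(client_updates[0]))
    ]

Example usage with five hypothetical clients, each holding two parameter arrays:

updates = [[np.random.randn(4, 3), np.random.randn(3)] for _ in range(5)]
new_params = similarity_weighted_aggregate(updates)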
ISSN: 2154-817X
Year: 2025
Volume: 2
Page: 4062-4073
Language: English