Abstract:
With growing popularity and broad real-world applicability, graph self-supervised learning (GSSL) can significantly reduce labeling costs by extracting supervisory signals implicit in the input. As a promising example, graph masked autoencoders (GMAEs) encode rich node knowledge by recovering masked input components, e.g., features or edges. Despite their competitiveness, existing GMAEs reconstruct only neighboring information, entirely ignoring distant multi-hop semantics and thus failing to capture global knowledge. Furthermore, many GMAEs cannot scale to large graphs, since unavoidable full-batch training leads to memory bottlenecks. To address these challenges and facilitate “high-level” discriminative semantics, we propose a simple yet effective framework (HopMAE) that encourages hop-perspective semantic interactions by adopting multi-hop, input-rich reconstruction while supporting mini-batch training. Despite the rationale of these designs, we still observe limitations (e.g., sub-optimal generalizability and training instability), potentially due to the implicit gap between the triviality of the reconstruction task and the richness of its inputs. Therefore, to alleviate task triviality and fully unleash the potential of our framework, we further propose a combined fine-grained loss function that generalizes existing ones and significantly increases the difficulty of the reconstruction tasks, thus naturally alleviating over-fitting. Extensive experiments on eight benchmarks demonstrate that our method comprehensively outperforms many state-of-the-art counterparts. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
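The masked-autoencoding objective summarized in the abstract can be illustrated with a minimal sketch: zero out the features of a random subset of nodes and train a model to reconstruct them, with the loss computed only on the masked nodes. The toy data, the single linear map standing in for the encoder/decoder, and all variable names below are illustrative assumptions, not the paper's HopMAE implementation (which uses a GNN encoder, multi-hop reconstruction targets, and a combined fine-grained loss).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes with 4-dim features (hypothetical data, not from the paper).
X = rng.normal(size=(6, 4))

# Mask half of the nodes by zeroing their input features.
masked = np.zeros(6, dtype=bool)
masked[rng.choice(6, size=3, replace=False)] = True
X_in = np.where(masked[:, None], 0.0, X)

# Stand-in encoder/decoder: one random linear map. A real GMAE would use a
# GNN encoder and decode features from (multi-hop) neighborhood embeddings.
W = rng.normal(size=(4, 4)) * 0.1
X_rec = X_in @ W

# Reconstruction loss is computed only on the masked nodes, mirroring the
# masked-autoencoding objective described in the abstract.
loss = float(np.mean((X_rec[masked] - X[masked]) ** 2))
```

In a full pipeline, `W` would be replaced by trainable encoder/decoder networks and `loss` minimized by gradient descent; the abstract's point is that richer (multi-hop) targets make this reconstruction task harder and less prone to over-fitting.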
ISSN: 0302-9743
Year: 2024
Volume: 14876 LNAI
Page: 343-355
Language: English
Impact Factor: 0.402 (JCR@2005)
ESI Highly Cited Papers on the List: 0