Indexed by:
Abstract:
Burgeoning graph contrastive learning (GCL) stands out in the graph domain for its low annotation cost and strong model performance. It is typically composed of three standard components: 1) graph data augmentation (GraphDA), 2) multi-branch graph neural network (GNN) encoders and projection heads, and 3) a contrastive loss. Unfortunately, diverse GraphDA schemes may corrupt graph semantics to different extents and meanwhile greatly increase the time cost of hyperparameter search. Besides, the multi-branch contrastive framework demands considerable training consumption on encoding and projecting. In this paper, we propose a simplified GCL model that addresses these problems simultaneously using only the minimal components of a general graph contrastive framework, i.e., a GNN encoder and a projection head. The proposed model treats the node representations generated by the GNN encoder and the projection head as positive pairs while considering all other representations as negatives, which not only frees the model from its dependency on GraphDA but also streamlines the traditional multi-branch contrastive framework into a more efficient single-branch one. Through an in-depth theoretical analysis of the objective function, we explain why the proposed model works. Empirical experiments on multiple public datasets demonstrate that the proposed model achieves performance comparable to current advanced self-supervised GNNs.
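The mechanism in the abstract can be sketched concretely. Below is a minimal sketch in PyTorch, assuming a plain GCN encoder, an MLP projection head, and an InfoNCE-style loss; it is an illustration of the single-branch idea (encoder output h_i and projection-head output z_i of the same node form the positive pair, all other nodes' representations are negatives), not the authors' released code. Layer sizes and the temperature tau are illustrative assumptions.

# A minimal sketch of the single-branch GCL described in the abstract
# (assumed structure, not the authors' implementation): h_i from the
# encoder and z_i from the projection head of the same node are the
# positive pair; all other representations serve as negatives.
import torch
import torch.nn.functional as F

class GCNLayer(torch.nn.Module):
    # One graph convolution: normalized adjacency times a linear map of X.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        return adj_norm @ self.lin(x)

class SingleBranchGCL(torch.nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encoder = GCNLayer(in_dim, hid_dim)   # GNN encoder
        self.proj = torch.nn.Sequential(           # projection head
            torch.nn.Linear(hid_dim, hid_dim),
            torch.nn.ReLU(),
            torch.nn.Linear(hid_dim, hid_dim),     # same dim as h so that
        )                                          # h and z are comparable

    def forward(self, x, adj_norm):
        h = F.relu(self.encoder(x, adj_norm))  # encoder representations
        z = self.proj(h)                       # projected representations
        return h, z

def contrastive_loss(h, z, tau=0.5):
    # InfoNCE over the (h_i, z_i) diagonal; off-diagonal entries of the
    # cross-similarity matrix act as negatives. tau is an assumed value.
    h = F.normalize(h, dim=1)
    z = F.normalize(z, dim=1)
    sim = h @ z.t() / tau                 # (N, N) similarity matrix
    labels = torch.arange(h.size(0))      # positives on the diagonal
    return F.cross_entropy(sim, labels)

# Toy usage on a random 4-node graph (illustrative only).
N, D = 4, 8
x = torch.randn(N, D)
adj_norm = torch.eye(N)  # self-loops only, for brevity
model = SingleBranchGCL(in_dim=D, hid_dim=16)
h, z = model(x, adj_norm)
contrastive_loss(h, z).backward()

Because the positive pair comes from two stages of the same forward pass, each training step performs only one encoding and one projection, which is the efficiency gain the abstract claims over two-branch frameworks that also require GraphDA.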
Keyword:
Reprint Author's Address:
Email:
Source:
IEEE Transactions on Knowledge and Data Engineering
ISSN: 1041-4347
Year: 2025
Issue: 10
Volume: 37
Page: 6159-6172
Impact Factor: 8.900 (JCR@2023)
Cited Count:
SCOPUS Cited Count:
ESI Highly Cited Papers on the List: 0
WanFang Cited Count:
Chinese Cited Count:
Affiliated Colleges: