Abstract:
As a generalization of Transformers to the graph domain, Global Graph Transformers excel at learning distant knowledge by performing information interactions directly on complete graphs, in contrast to Local Graph Transformers, which interact on the original structures. However, we find that most prior works focus only on graph-level tasks (e.g., graph classification), and few Graph Transformer models can effectively solve node-level tasks, especially semi-supervised node classification, which has important practical significance given the scarcity and cost of node labels. To fill this gap, this paper first summarizes the theoretical advantages of Graph Transformers and, based on exploratory experiments, discusses the main causes of their poor practical performance on semi-supervised node classification. Secondly, guided by this analysis, we design a three-stage homogeneity augmentation framework and propose a Semi-Global Graph Transformer. Considering both global and local perspectives, the proposed model combines several techniques, including self-distillation, pseudo-label filtering, pre-training and fine-tuning, and metric learning. Furthermore, it enhances both the graph structure and the optimization process, improving its effectiveness, scalability, and generalizability. Finally, extensive experiments on seven public homophilous and heterophilous graph benchmarks show that the proposed method achieves results competitive with or better than many baseline models, including state-of-the-art methods.
Source: PARALLEL PROCESSING LETTERS
ISSN: 0129-6264
Year: 2023
Impact Factor (JCR@2023): 0.5
5-Year Impact Factor (JCR@2023): 0.500
JCR Journal Grade: 4
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0