Abstract:
With the growing diversity of data sources, multi-view learning methods have attracted considerable attention. Among these, multi-view Graph Neural Networks (GNNs), which model multi-view data as multi-view graphs, have shown encouraging performance on various multi-view learning tasks. Message passing is the critical mechanism that empowers GNNs with the capacity to process complex graph data. However, most multi-view GNNs are built directly on the well-established overall GNN framework, overlooking the intrinsic challenges that message passing faces in multi-view scenarios. To clarify this, we first revisit the message passing mechanism from a Laplacian smoothing perspective, revealing the key to designing message passing for multi-view data. Following this analysis, we propose an enhanced GNN framework termed Confluent Graph Neural Networks (CGNN), with Cross-view Confluent Message Passing (CCMP) tailored for multi-view learning. Inspired by the optimization of an improved multi-view Laplacian smoothing problem, CCMP contains three sub-modules that enable interaction between graph structures and consistent representations, making it aware of consistency and complementarity information across views. Extensive experiments on four types of data, including multi-modality data, demonstrate that the proposed model exhibits superior effectiveness and robustness. The code is available at https://github.com/shumanzhuang/CGNN. © 2024 ACM.
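To illustrate the Laplacian smoothing perspective on message passing mentioned in the abstract, the following is a minimal sketch, not the authors' CCMP implementation: it treats propagation as gradient descent on a generic multi-view smoothing objective min_H ||H - X||_F^2 + lam * sum_v w_v * tr(H^T L_v H), where L_v is the normalized Laplacian of view v and H is a shared representation. The function names, view weights, and step sizes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return np.eye(A.shape[0]) - A_norm

def smoothing_step(H, X, laplacians, weights, lam=1.0, step=0.2):
    """One gradient step: grad = 2(H - X) + 2*lam*sum_v w_v * L_v @ H."""
    grad = 2.0 * (H - X)
    for L, w in zip(laplacians, weights):
        grad += 2.0 * lam * w * (L @ H)
    return H - step * grad

# Toy example: two graph views over the same 4 nodes with shared features X.
A1 = np.array([[0, 1, 1, 0],
               [1, 0, 1, 0],
               [1, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
A2 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)

laplacians = [normalized_laplacian(A1), normalized_laplacian(A2)]
weights = [0.5, 0.5]  # uniform view weights (assumption for the toy example)

H = X.copy()
for _ in range(20):
    H = smoothing_step(H, X, laplacians, weights)
print("consensus smoothed representations:\n", H)
```

Each iteration pulls node representations toward their neighbors in every view while anchoring them to the input features, which is the smoothing trade-off the paper analyzes before introducing its cross-view message passing.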
Year: 2024
Page: 10065-10074
Language: English