Abstract:
Deep multi-view representation learning focuses on training a unified low-dimensional representation for data with multiple sources or modalities. With graph neural networks attracting rapidly growing attention, more and more researchers have introduced various graph models into multi-view learning. Although considerable progress has been made, most existing methods propagate information only within a single view and fuse multi-view information only from the perspective of either attributes or relationships. To address these problems, we propose an efficient model termed Dual Fusion-Propagation Graph Neural Network (DFP-GNN) and apply it to deep multi-view clustering tasks. The proposed method consists of three submodules and has the following merits: a) the view-specific and cross-view propagation modules capture the consistent and complementary information among multiple views; b) the fusion module fuses multi-view information using the attributes of nodes and the relationships among them simultaneously. Experiments on popular databases show that DFP-GNN achieves superior results compared with several state-of-the-art algorithms.
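The abstract outlines three submodules: view-specific propagation, cross-view propagation, and a fusion module that combines node attributes with inter-node relationships. The sketch below is a minimal, hypothetical PyTorch illustration of that dual fusion-propagation idea; the layer sizes, the mean/weighted-sum fusion rules, and names such as DualFusionPropagation and normalize_adj are assumptions for illustration only and do not reproduce the paper's actual DFP-GNN implementation.

```python
# Hypothetical sketch of a dual fusion-propagation architecture for
# multi-view graph data; dimensions and fusion rules are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize a dense adjacency matrix with self-loops."""
    adj = adj + torch.eye(adj.size(0), device=adj.device)
    deg_inv_sqrt = adj.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_norm @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_norm):
        return F.relu(adj_norm @ self.lin(x))


class DualFusionPropagation(nn.Module):
    """Sketch: view-specific propagation -> attribute/relationship fusion
    -> cross-view propagation on the fused graph."""
    def __init__(self, in_dims, hid_dim=64, out_dim=32):
        super().__init__()
        # One view-specific GCN encoder per view.
        self.view_encoders = nn.ModuleList(
            [GCNLayer(d, hid_dim) for d in in_dims]
        )
        # Learnable weights for fusing view-specific relationships.
        self.view_weights = nn.Parameter(torch.ones(len(in_dims)))
        # Cross-view propagation over the fused graph.
        self.cross_view = GCNLayer(hid_dim, out_dim)

    def forward(self, feats, adjs):
        adjs_norm = [normalize_adj(a) for a in adjs]
        # View-specific propagation of attributes within each view.
        hiddens = [enc(x, a) for enc, x, a in
                   zip(self.view_encoders, feats, adjs_norm)]
        # Fuse attributes (mean of view embeddings) and relationships
        # (weighted sum of normalized adjacencies) simultaneously.
        w = torch.softmax(self.view_weights, dim=0)
        fused_h = torch.stack(hiddens).mean(dim=0)
        fused_adj = sum(wi * a for wi, a in zip(w, adjs_norm))
        # Cross-view propagation on the fused structure.
        return self.cross_view(fused_h, fused_adj)


# Toy usage: 100 nodes observed in two views with different feature sizes.
n = 100
feats = [torch.randn(n, 50), torch.randn(n, 30)]
adjs = [(torch.rand(n, n) > 0.95).float() for _ in range(2)]
adjs = [((a + a.t()) > 0).float() for a in adjs]  # make symmetric
model = DualFusionPropagation(in_dims=[50, 30])
embedding = model(feats, adjs)   # unified low-dimensional representation
print(embedding.shape)           # torch.Size([100, 32])
```

The resulting embedding could then be clustered (e.g., with k-means) for the multi-view clustering task described in the abstract.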
Source: IEEE TRANSACTIONS ON MULTIMEDIA
ISSN: 1520-9210
Year: 2023
Volume: 25
Pages: 9203-9215
Impact Factor: 8.400 (JCR@2023)
JCR Journal Grade: 1
CAS Journal Grade: 1
Cited Count:
WoS CC Cited Count: 13
SCOPUS Cited Count: 15
ESI Highly Cited Papers on the List: 0