Publication Search

Query:

Scholar name: Wang Shiping

Refining:

Source

Submit Unfold

Co-

Submit Unfold

Language

Submit

Clean All

Sort by:
Default
  • Default
  • Title
  • Year
  • WOS Cited Count
  • Impact factor
  • Ascending
  • Descending
< Page ,Total 16 >
Multi-View Incremental Learning with Structured Hebbian Plasticity for Enhanced Fusion Efficiency EI
Conference Paper | 2025, 39 (2), 1265-1273 | 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025

Abstract :

The rapid evolution of multimedia technology has revolutionized human perception, paving the way for multi-view learning. However, traditional multi-view learning approaches are tailored for scenarios with fixed data views and fall short of emulating the brain's intricate cognitive process of handling signals sequentially. The cerebral architecture seamlessly integrates sequential data through intricate feed-forward and feedback mechanisms. In stark contrast, traditional methods struggle to generalize when confronted with data spanning diverse domains, highlighting the need for strategies that can mimic the brain's adaptability and dynamic integration capabilities. In this paper, we propose a bio-neurologically inspired multi-view incremental framework named MVIL, aimed at emulating the brain's fine-grained fusion of sequentially arriving views. At the core of MVIL lie two fundamental modules: structured Hebbian plasticity and synaptic partition learning. Structured Hebbian plasticity reshapes the weight structure to express the high correlation between view representations, facilitating their fine-grained fusion. Synaptic partition learning alleviates drastic weight changes and retains old knowledge by inhibiting a portion of the synapses. Together, these modules reinforce crucial associations between newly acquired information and existing knowledge, thereby enhancing the network's capacity for generalization. Experimental results on six benchmark datasets show MVIL's effectiveness over state-of-the-art methods.
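
The abstract names two mechanisms, structured Hebbian plasticity and synaptic partition learning; the toy sketch below (an assumption for illustration, not the authors' MVIL code) shows the generic shape of such an update: a Hebbian outer-product rule applied through a mask, where the same mask also freezes a subset of synapses so old knowledge is retained when a new view arrives. All array shapes, the freeze ratio, and the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def partition_mask(W, freeze_ratio=0.5):
    """Mark the largest-magnitude synapses as frozen (assumed to carry old
    knowledge); returns 1 for plastic entries and 0 for frozen ones."""
    k = int(W.size * freeze_ratio)
    thresh = np.sort(np.abs(W), axis=None)[-k] if k > 0 else np.inf
    return (np.abs(W) < thresh).astype(W.dtype)

def hebbian_update(W, pre, post, mask, lr=0.01):
    """Hebbian rule dW ~ post^T @ pre, restricted by the plasticity mask."""
    dW = lr * (post.T @ pre) / len(pre)      # batch-averaged outer product
    return W + dW * mask                     # frozen synapses stay untouched

d_pre, d_post, n = 16, 8, 32
W = rng.normal(scale=0.1, size=(d_post, d_pre))
mask = partition_mask(W)                     # stands in for the synaptic partition

# one incremental step on a newly arriving view
pre = rng.normal(size=(n, d_pre))            # representation of the new view
post = np.tanh(pre @ W.T)                    # current post-synaptic response
W = hebbian_update(W, pre, post, mask)
print(W.shape)
```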

Keyword :

Contrastive Learning; Reinforcement learning

Cite:

GB/T 7714 Chen, Yuhong , Song, Ailin , Yin, Huifeng et al. Multi-View Incremental Learning with Structured Hebbian Plasticity for Enhanced Fusion Efficiency [C] . 2025 : 1265-1273 .
MLA Chen, Yuhong et al. "Multi-View Incremental Learning with Structured Hebbian Plasticity for Enhanced Fusion Efficiency" . (2025) : 1265-1273 .
APA Chen, Yuhong , Song, Ailin , Yin, Huifeng , Zhong, Shuai , Chen, Fuhai , Xu, Qi et al. Multi-View Incremental Learning with Structured Hebbian Plasticity for Enhanced Fusion Efficiency . (2025) : 1265-1273 .

OpenViewer: Openness-Aware Multi-View Learning EI
Conference Paper | 2025, 39 (15), 16389-16397 | 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025

Abstract :

Multi-view learning methods leverage multiple data sources to enhance perception by mining correlations across views, typically relying on predefined categories. However, deploying these models in real-world scenarios presents two primary openness challenges. 1) Lack of interpretability: the integration mechanisms of multi-view data in existing black-box models remain poorly explained. 2) Insufficient generalization: most models are not adapted to multi-view scenarios involving unknown categories. To address these challenges, we propose OpenViewer, an openness-aware multi-view learning framework with theoretical support. This framework begins with a Pseudo-Unknown Sample Generation Mechanism to efficiently simulate open multi-view environments and adapt the model in advance to potential unknown samples. Subsequently, we introduce an Expression-Enhanced Deep Unfolding Network that promotes interpretability by systematically constructing functional prior-mapping modules, providing a more transparent integration mechanism for multi-view data. Additionally, we establish a Perception-Augmented Open-Set Training Regime that enhances generalization by boosting confidences for known categories and suppressing inappropriate confidences for unknown ones. Experimental results demonstrate that OpenViewer effectively addresses openness challenges while ensuring recognition performance for both known and unknown samples.
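
As a concrete reading of the Pseudo-Unknown Sample Generation Mechanism described above, the sketch below generates surrogate "unknown" samples by mixing features from two different known classes. This is only one plausible realization under stated assumptions, not OpenViewer's actual procedure, and every name and shape here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_unknown(X, y, n_pairs=64, alpha=2.0):
    """Mix cross-class pairs of known samples; the mixtures serve as
    pseudo-unknowns that an open-set classifier should reject."""
    idx_a = rng.integers(0, len(X), size=n_pairs)
    idx_b = rng.integers(0, len(X), size=n_pairs)
    cross = y[idx_a] != y[idx_b]                       # keep only cross-class pairs
    lam = rng.beta(alpha, alpha, size=cross.sum())[:, None]
    return lam * X[idx_a[cross]] + (1 - lam) * X[idx_b[cross]]

X = rng.normal(size=(500, 20))                          # one view's features
y = rng.integers(0, 5, size=500)                        # known-category labels
X_unknown = pseudo_unknown(X, y)
print(X_unknown.shape)                                  # pseudo-unknown surrogates
```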

Keyword :

Deep learning; Multi-task learning

Cite:

GB/T 7714 Du, Shide , Fang, Zihan , Tan, Yanchao et al. OpenViewer: Openness-Aware Multi-View Learning [C] . 2025 : 16389-16397 .
MLA Du, Shide et al. "OpenViewer: Openness-Aware Multi-View Learning" . (2025) : 16389-16397 .
APA Du, Shide , Fang, Zihan , Tan, Yanchao , Wang, Changwei , Wang, Shiping , Guo, Wenzhong . OpenViewer: Openness-Aware Multi-View Learning . (2025) : 16389-16397 .

Version :

OpenViewer: Openness-Aware Multi-View Learning Scopus
Other | 2025, 39 (15), 16389-16397 | Proceedings of the AAAI Conference on Artificial Intelligence
Low-rank tucker decomposition for multi-view outlier detection based on meta-learning EI
Journal Article | 2025, 123 | Information Fusion

Abstract :

The analysis and mining of multi-view data have gained widespread attention, making multi-view anomaly detection a prominent research area. Despite notable advancements, existing multi-view anomaly detection methods still face certain limitations. (1) They fail to fully leverage the low-rank structure of multi-view data, which results in a lack of interpretability when uncovering the latent relationships between views. (2) They recover the consensus structure merely through a simple aggregation process, without in-depth exploration of, and interaction between, the latent structures of each view. To address these challenges, we propose Low-Rank Tucker Decomposition based on Meta-Learning (LRTDM) for multi-view outlier detection. First, the low-rank Tucker decomposition is employed to reveal the low-rank structure of the multi-view self-expressive tensor. The factor matrices and core tensor effectively preserve and encode the latent structure of each view. This structured representation efficiently captures the shared features between views and allows a more refined analysis of each individual view. Furthermore, meta-learning is utilized to cast the learning and fusion of view-specific latent features as a nested optimization problem, which is solved alternately with a two-layer optimization scheme. Finally, anomalies are detected through the consensus matrix recovered from the latent representations and the error matrix obtained during self-expressive tensor learning. Extensive experiments on five publicly available datasets demonstrate the effectiveness of our approach, with detection accuracy improving by 2% to 10% over state-of-the-art methods.
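
The low-rank Tucker step itself is standard; the sketch below uses TensorLy's off-the-shelf `tucker` decomposition (not the paper's meta-learning pipeline) on a toy stack of per-view self-expressive matrices and scores outliers by reconstruction error. The tensor sizes, ranks, and the scoring rule are illustrative assumptions.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker
from tensorly.tenalg import multi_mode_dot

rng = np.random.default_rng(0)
n_views, n_samples = 3, 100

# toy self-expressive tensor: one (n_samples x n_samples) coefficient matrix per view
Z = tl.tensor(rng.normal(size=(n_views, n_samples, n_samples)))

core, factors = tucker(Z, rank=[n_views, 10, 10])   # low multilinear rank
Z_hat = multi_mode_dot(core, factors)               # reconstruct from core + factors

# per-sample residual across views as a crude outlier score (illustrative only)
err = tl.to_numpy(Z - Z_hat)
scores = np.linalg.norm(err, axis=(0, 2))
print(scores.argsort()[-5:])                        # indices of the 5 most anomalous samples
```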

Cite:

GB/T 7714 Lin, Wei , Xie, Kun , Li, Jiayin et al. Low-rank tucker decomposition for multi-view outlier detection based on meta-learning [J]. | Information Fusion , 2025 , 123 .
MLA Lin, Wei et al. "Low-rank tucker decomposition for multi-view outlier detection based on meta-learning" . | Information Fusion 123 (2025) .
APA Lin, Wei , Xie, Kun , Li, Jiayin , Wang, Shiping , Xu, Li . Low-rank tucker decomposition for multi-view outlier detection based on meta-learning . | Information Fusion , 2025 , 123 .

Version :

Low-rank tucker decomposition for multi-view outlier detection based on meta-learning Scopus
Journal Article | 2025, 123 | Information Fusion
Low-rank tucker decomposition for multi-view outlier detection based on meta-learning SCIE
Journal Article | 2025, 123 | INFORMATION FUSION
Information-controlled graph convolutional network for multi-view semi-supervised classification SCIE
Journal Article | 2025, 184 | NEURAL NETWORKS

Abstract :

Graph convolutional networks have achieved remarkable success in the field of multi-view learning. Unfortunately, most graph convolutional network-based multi-view learning methods fail to capture long-range dependencies due to the over-smoothing problem. Many studies have attempted to mitigate this issue by decoupling graph convolution operations. However, these decoupled architectures lack a feature transformation module, which limits the expressive power of the model. To this end, we propose an information-controlled graph convolutional network for multi-view semi-supervised classification. In the proposed method, we maintain the paradigm of node embeddings during propagation by imposing orthogonality constraints on the feature transformation module. By further introducing a damping factor based on residual connections, we theoretically show that the proposed method alleviates the over-smoothing problem while retaining the feature transformation module. Furthermore, we prove that the proposed model stabilizes both forward inference and backward propagation in graph convolutional networks. Extensive experimental results on benchmark datasets demonstrate the effectiveness of the proposed method.
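
To make the two ingredients named above concrete, the sketch below shows a single propagation layer with a damping factor that mixes in the initial embedding and an orthogonality penalty on the feature transformation. It is a minimal PyTorch illustration under assumed shapes, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class DampedOrthoLayer(nn.Module):
    def __init__(self, dim, alpha=0.1):
        super().__init__()
        self.W = nn.Parameter(torch.eye(dim) + 0.01 * torch.randn(dim, dim))
        self.alpha = alpha                      # damping factor

    def forward(self, A_hat, H, H0):
        # damped propagation: (1 - alpha) * A_hat @ H @ W + alpha * H0
        return (1 - self.alpha) * A_hat @ H @ self.W + self.alpha * H0

    def ortho_penalty(self):
        # ||W^T W - I||_F^2 keeps the transformation close to orthogonal
        I = torch.eye(self.W.shape[0], device=self.W.device)
        return ((self.W.T @ self.W - I) ** 2).sum()

n, d = 50, 16
A_hat = torch.eye(n)                            # stand-in for a normalized adjacency
H0 = torch.randn(n, d)
layer = DampedOrthoLayer(d)
H = layer(A_hat, H0, H0)
loss = H.pow(2).mean() + 1e-3 * layer.ortho_penalty()
loss.backward()
print(H.shape, layer.W.grad.shape)
```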

Keyword :

Graph convolutional network; Layer normalization; Multi-view learning; Semi-supervised classification

Cite:

GB/T 7714 Shi, Yongquan , Pi, Yueyang , Liu, Zhanghui et al. Information-controlled graph convolutional network for multi-view semi-supervised classification [J]. | NEURAL NETWORKS , 2025 , 184 .
MLA Shi, Yongquan et al. "Information-controlled graph convolutional network for multi-view semi-supervised classification" . | NEURAL NETWORKS 184 (2025) .
APA Shi, Yongquan , Pi, Yueyang , Liu, Zhanghui , Zhao, Hong , Wang, Shiping . Information-controlled graph convolutional network for multi-view semi-supervised classification . | NEURAL NETWORKS , 2025 , 184 .

Version :

Information-controlled graph convolutional network for multi-view semi-supervised classification Scopus
Journal Article | 2025, 184 | Neural Networks
Information-controlled graph convolutional network for multi-view semi-supervised classification EI
Journal Article | 2025, 184 | Neural Networks
Multi-view Representation Learning with Decoupled private and shared Propagation SCIE
Journal Article | 2025, 310 | KNOWLEDGE-BASED SYSTEMS

Abstract :

Multi-view learning has demonstrated strong potential in processing data from different sources or viewpoints. Despite the significant progress made by Multi-view Graph Neural Networks (MvGNNs) in exploiting graph structures, features, and representations, existing research generally lacks architectures specifically designed for the intrinsic properties of multi-view data. As a result, models still fall short in fully utilizing the consistent and complementary information in multi-view data. Most current research simply extends the single-view GNN framework to multi-view data, lacking in-depth strategies to handle and leverage the unique properties of these data. To address this issue, we propose a simple yet effective MvGNN framework called Multi-view Representation Learning with Decoupled private and shared Propagation (MvRL-DP). This framework enables multi-view data to be processed effectively as a whole by alternating private and shared operations to integrate cross-view information. In addition, to address possible inconsistencies between views, we present a discriminative loss that promotes class separability and prevents the model from being misled by noise hidden in multi-view data. Experiments demonstrate that the proposed framework is superior to current state-of-the-art methods in the multi-view semi-supervised classification task.
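
A minimal sketch of the alternating scheme described above, under assumed shapes and layer choices (it is not the MvRL-DP implementation): each view first propagates over its own graph through a private mapping, then a single shared mapping couples the views.

```python
import torch
import torch.nn as nn

class PrivateSharedBlock(nn.Module):
    def __init__(self, n_views, dim):
        super().__init__()
        self.private = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_views))
        self.shared = nn.Linear(dim, dim)       # one mapping applied to every view

    def forward(self, adjs, feats):
        # private step: each view propagates over its own normalized graph
        feats = [torch.relu(A @ self.private[v](H))
                 for v, (A, H) in enumerate(zip(adjs, feats))]
        # shared step: a common transformation integrates cross-view information
        return [torch.relu(self.shared(H)) for H in feats]

n, d, n_views = 40, 8, 2
adjs = [torch.eye(n) for _ in range(n_views)]   # stand-ins for normalized adjacencies
feats = [torch.randn(n, d) for _ in range(n_views)]
out = PrivateSharedBlock(n_views, d)(adjs, feats)
print([h.shape for h in out])
```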

Keyword :

Multi-view learning; Propagation decoupling; Representation learning; Semi-supervised classification; Tensor operation

Cite:

GB/T 7714 Wang, Xuzheng , Lan, Shiyang , Wu, Zhihao et al. Multi-view Representation Learning with Decoupled private and shared Propagation [J]. | KNOWLEDGE-BASED SYSTEMS , 2025 , 310 .
MLA Wang, Xuzheng et al. "Multi-view Representation Learning with Decoupled private and shared Propagation" . | KNOWLEDGE-BASED SYSTEMS 310 (2025) .
APA Wang, Xuzheng , Lan, Shiyang , Wu, Zhihao , Guo, Wenzhong , Wang, Shiping . Multi-view Representation Learning with Decoupled private and shared Propagation . | KNOWLEDGE-BASED SYSTEMS , 2025 , 310 .

Version :

Multi-view Representation Learning with Decoupled private and shared Propagation EI
Journal Article | 2025, 310 | Knowledge-Based Systems
Multi-view Representation Learning with Decoupled private and shared Propagation Scopus
Journal Article | 2025, 310 | Knowledge-Based Systems
Optimization-oriented multi-view representation learning in implicit bi-topological spaces SCIE
Journal Article | 2025, 704 | INFORMATION SCIENCES

Abstract :

Many representation learning methods have emerged to better exploit the properties of multi-view data. However, these existing methods still leave room for improvement: 1) most of them overlook the ex-ante interpretability of the model, which makes the model more complex and harder to understand; 2) they underutilize the potential of bi-topological spaces, which bring additional structural information to the representation learning process. This shortcoming is detrimental when dealing with data that exhibits topological properties or complex geometric relationships between views. To address these challenges, we propose an optimization-oriented multi-view representation learning framework in implicit bi-topological spaces. On one hand, we construct an intrinsically interpretable end-to-end white-box model that directly carries out the representation learning procedure while improving the transparency of the model. On the other hand, integrating bi-topological space information into the network via manifold learning facilitates comprehensive utilization of the information in the data, ultimately enhancing representation learning and yielding superior performance on downstream tasks. Extensive experimental results demonstrate that the proposed method exhibits promising performance and is feasible for downstream tasks.
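
White-box, optimization-oriented networks of this kind are typically built by unrolling iterations of an explicit objective into layers. The sketch below unrolls one proximal-gradient step of a generic sparse-coding objective as a layer; it is a hedged illustration of the deep-unfolding idea only and omits the bi-topological regularization entirely. All names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class UnfoldedStep(nn.Module):
    """One unrolled proximal-gradient step of min_H ||X - H W||^2 + lam * ||H||_1."""
    def __init__(self, d_in, d_hid, step=0.1, lam=0.01):
        super().__init__()
        self.W = nn.Parameter(0.1 * torch.randn(d_hid, d_in))
        self.step, self.lam = step, lam

    def forward(self, H, X):
        grad = (H @ self.W - X) @ self.W.T          # gradient of the data-fit term
        Z = H - self.step * grad
        # soft-thresholding = proximal operator of the L1 penalty
        return torch.sign(Z) * torch.clamp(Z.abs() - self.step * self.lam, min=0.0)

n, d_in, d_hid = 64, 20, 10
X = torch.randn(n, d_in)
H = torch.zeros(n, d_hid)
layer = UnfoldedStep(d_in, d_hid)
for _ in range(3):                  # a 3-layer unrolled network = 3 iterations
    H = layer(H, X)
print(H.shape)
```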

Keyword :

Bi-topological spaces; Multi-view learning; Optimization-oriented network; Representation learning; White-box model

Cite:

GB/T 7714 Lan, Shiyang , Du, Shide , Fang, Zihan et al. Optimization-oriented multi-view representation learning in implicit bi-topological spaces [J]. | INFORMATION SCIENCES , 2025 , 704 .
MLA Lan, Shiyang et al. "Optimization-oriented multi-view representation learning in implicit bi-topological spaces" . | INFORMATION SCIENCES 704 (2025) .
APA Lan, Shiyang , Du, Shide , Fang, Zihan , Cai, Zhiling , Huang, Wei , Wang, Shiping . Optimization-oriented multi-view representation learning in implicit bi-topological spaces . | INFORMATION SCIENCES , 2025 , 704 .

Version :

Optimization-oriented multi-view representation learning in implicit bi-topological spaces Scopus
Journal Article | 2025, 704 | Information Sciences
Optimization-oriented multi-view representation learning in implicit bi-topological spaces EI
Journal Article | 2025, 704 | Information Sciences
Multi-scale graph diffusion convolutional network for multi-view learning SCIE
Journal Article | 2025, 58 (6) | ARTIFICIAL INTELLIGENCE REVIEW

Abstract :

Multi-view learning has attracted considerable attention owing to its capability to learn more comprehensive representations. Although graph convolutional networks have achieved encouraging results in multi-view research, their restriction to nearest neighbors reduces their ability to capture high-order information. Many existing methods acquire high-order correlations by stacking multiple layers onto the model, yet this can lead to over-smoothing. In this paper, we propose a framework termed multi-scale graph diffusion convolutional network, which aims to gather comprehensive high-order information without stacking multiple convolutional layers. Specifically, to better expand the receptive field of each node and reduce parameter complexity, the proposed framework utilizes a contractive mapping to transform features from multiple views under decoupled propagation rules. Our framework introduces a multi-scale graph-based diffusion mechanism to adaptively extract the abundant high-order knowledge embedded within multi-scale graphs. Experiments show that the proposed method outperforms other state-of-the-art methods on multi-view semi-supervised classification.
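
The sketch below illustrates the diffusion idea in its simplest form: features are propagated with transition matrices at several scales (T, T^2, T^4) and the results are fused with fixed weights, reaching high-order neighbors without stacking convolution layers. The contractive mapping and learnable fusion of the paper are not reproduced; this is a NumPy illustration under assumed scales.

```python
import numpy as np

rng = np.random.default_rng(0)

def row_normalize(A):
    return A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)

def multi_scale_diffusion(A, X, scales=(1, 2, 4), weights=None):
    T = row_normalize(A + np.eye(len(A)))            # transition matrix with self-loops
    if weights is None:
        weights = np.ones(len(scales)) / len(scales) # uniform fusion (learnable in practice)
    outs = [np.linalg.matrix_power(T, k) @ X for k in scales]
    return sum(w * O for w, O in zip(weights, outs))

n, d = 30, 5
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.maximum(A, A.T)                               # symmetric toy graph
X = rng.normal(size=(n, d))
print(multi_scale_diffusion(A, X).shape)
```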

Keyword :

Graph convolutional network; Graph diffusion; Multi-scale fusion; Multi-view learning; Semi-supervised classification

Cite:

GB/T 7714 Wang, Shiping , Li, Jiacheng , Chen, Yuhong et al. Multi-scale graph diffusion convolutional network for multi-view learning [J]. | ARTIFICIAL INTELLIGENCE REVIEW , 2025 , 58 (6) .
MLA Wang, Shiping et al. "Multi-scale graph diffusion convolutional network for multi-view learning" . | ARTIFICIAL INTELLIGENCE REVIEW 58 . 6 (2025) .
APA Wang, Shiping , Li, Jiacheng , Chen, Yuhong , Wu, Zhihao , Huang, Aiping , Zhang, Le . Multi-scale graph diffusion convolutional network for multi-view learning . | ARTIFICIAL INTELLIGENCE REVIEW , 2025 , 58 (6) .

Version :

Multi-scale graph diffusion convolutional network for multi-view learning EI
Journal Article | 2025, 58 (6) | Artificial Intelligence Review
Multi-scale graph diffusion convolutional network for multi-view learning Scopus
Journal Article | 2025, 58 (6) | Artificial Intelligence Review
Heterogeneous Graph Embedding with Dual Edge Differentiation SCIE
Journal Article | 2025, 183 | NEURAL NETWORKS

Abstract :

Recently, heterogeneous graphs have attracted widespread attention as a powerful and practical superclass of traditional homogeneous graphs, reflecting the multi-type node entities and edge relations of the real world. Most existing methods adopt meta-path construction as the mainstream approach to learn long-range heterogeneous semantic messages between nodes. However, such a schema constructs node-wise correlations by connecting nodes via pre-computed fixed paths, which neglects the diversity of meta-paths in both path type and path range. In this paper, we propose a meta-path-based semantic embedding schema called Heterogeneous Graph Embedding with Dual Edge Differentiation (HGE-DED), which constructs flexible meta-path combinations to learn rich and discriminative semantics for target nodes. Concretely, HGE-DED devises a Multi-Type and multi-Range Meta-Path Construction (MTR-MP Construction), covering the exploration of meta-path combinations across path types and path ranges and expressing the diversity of edges at finer-grained scales. Moreover, HGE-DED designs a joint semantics and meta-path guidance, constructing a hierarchical short- and long-range relation adjustment that constrains path learning and minimizes the impact of edge heterophily on heterogeneous graphs. Experimental results on four benchmark datasets demonstrate the effectiveness of HGE-DED compared with state-of-the-art methods.
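
For readers unfamiliar with meta-path construction, the sketch below builds author-author graphs from typed adjacency matrices for two meta-paths of different type and range (A-P-A and A-P-C-P-A). It shows only the standard composition step and does not reproduce HGE-DED's dual edge differentiation; entity counts and edge densities are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_authors, n_papers, n_confs = 20, 30, 5

A_ap = (rng.random((n_authors, n_papers)) < 0.1).astype(float)   # author-paper edges
A_pc = (rng.random((n_papers, n_confs)) < 0.2).astype(float)     # paper-conference edges

apa = A_ap @ A_ap.T                         # short-range meta-path: Author-Paper-Author
apcpa = A_ap @ A_pc @ A_pc.T @ A_ap.T       # longer-range: Author-Paper-Conf-Paper-Author

def to_graph(M):
    """Binarize a meta-path count matrix and drop self-loops."""
    G = (M > 0).astype(float)
    np.fill_diagonal(G, 0.0)
    return G

graphs = {"APA": to_graph(apa), "APCPA": to_graph(apcpa)}
print({name: int(G.sum()) for name, G in graphs.items()})        # edge counts per meta-path
```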

Keyword :

Graph neural network; Heterogeneous information network; Meta-path combination; Semantic embedding; Semi-supervised classification

Cite:

GB/T 7714 Chen, Yuhong , Chen, Fuhai , Wu, Zhihao et al. Heterogeneous Graph Embedding with Dual Edge Differentiation [J]. | NEURAL NETWORKS , 2025 , 183 .
MLA Chen, Yuhong et al. "Heterogeneous Graph Embedding with Dual Edge Differentiation" . | NEURAL NETWORKS 183 (2025) .
APA Chen, Yuhong , Chen, Fuhai , Wu, Zhihao , Chen, Zhaoliang , Cai, Zhiling , Tan, Yanchao et al. Heterogeneous Graph Embedding with Dual Edge Differentiation . | NEURAL NETWORKS , 2025 , 183 .

Version :

Heterogeneous Graph Embedding with Dual Edge Differentiation Scopus
Journal Article | 2025, 183 | Neural Networks
Heterogeneous Graph Embedding with Dual Edge Differentiation EI
Journal Article | 2025, 183 | Neural Networks
JDC-GCN: joint diversity and consistency graph convolutional network EI
Journal Article | 2025, 37 (16), 10407-10423 | Neural Computing and Applications

Abstract :

In real-world scenarios, multi-view data comprises heterogeneous features, with each feature set corresponding to a specific view. The objective of multi-view semi-supervised classification is to enhance classification performance by leveraging the complementary and consistent information present across diverse views. Nevertheless, many existing frameworks focus primarily on assigning suitable weights to different views while neglecting the importance of consistent information. In this paper, a multi-view semi-supervised classification framework called joint diversity and consistency graph convolutional network (JDC-GCN) is proposed. First, the graph convolutional network architecture is introduced into multi-view semi-supervised classification, enabling label information to propagate over the topological structure of multi-view data. Second, the proposed JDC-GCN captures the complementary and consistent information from multiple views through two indispensable sub-modules, Diversity-GCN and Consistency-GCN, respectively. Finally, an attention mechanism is leveraged to dynamically adjust the weights of the various views, measuring the significance of the heterogeneous features and the consistent graph without introducing additional parameters. Comprehensive experiments on eight multi-view datasets validate the effectiveness of the JDC-GCN algorithm. The results show that the proposed method exhibits superior classification performance compared with other state-of-the-art methods.
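
The parameter-free attention mentioned above can be read as weighting each view by its agreement with a consensus embedding. The sketch below implements that reading (an assumption, not the JDC-GCN code), using cosine similarity to the mean embedding followed by a softmax over views.

```python
import torch

def fuse_views(view_embs):
    """view_embs: list of (n, d) per-view node embeddings -> (n, d) fused embedding."""
    H = torch.stack(view_embs)                              # (V, n, d)
    consensus = H.mean(dim=0, keepdim=True)                 # (1, n, d) consistent embedding
    sim = torch.cosine_similarity(H, consensus, dim=-1)     # (V, n) agreement per view/node
    att = torch.softmax(sim, dim=0).unsqueeze(-1)           # parameter-free attention weights
    return (att * H).sum(dim=0)

views = [torch.randn(50, 16) for _ in range(3)]
print(fuse_views(views).shape)
```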

Keyword :

Adversarial machine learning; Contrastive Learning; Convolutional neural networks; Graph algorithms; Network theory (graphs); Self-supervised learning; Semi-supervised learning

Cite:

GB/T 7714 Lin, Renjie , Yao, Jie , Wang, Shiping et al. JDC-GCN: joint diversity and consistency graph convolutional network [J]. | Neural Computing and Applications , 2025 , 37 (16) : 10407-10423 .
MLA Lin, Renjie et al. "JDC-GCN: joint diversity and consistency graph convolutional network" . | Neural Computing and Applications 37 . 16 (2025) : 10407-10423 .
APA Lin, Renjie , Yao, Jie , Wang, Shiping , Guo, Wenzhong . JDC-GCN: joint diversity and consistency graph convolutional network . | Neural Computing and Applications , 2025 , 37 (16) , 10407-10423 .

Version :

JDC-GCN: joint diversity and consistency graph convolutional network Scopus
Journal Article | 2025, 37 (16), 10407-10423 | Neural Computing and Applications
Deep random walk inspired multi-view graph convolutional networks for semi-supervised classification SCIE
Journal Article | 2025, 55 (6) | APPLIED INTELLIGENCE

Abstract :

Recent studies highlight the growing appeal of multi-view learning due to its enhanced generalization. Semi-supervised classification, which uses a few labeled samples to classify the unlabeled majority, is gaining popularity for its time and cost efficiency, particularly with high-dimensional, large-scale multi-view data. Existing graph-based methods for multi-view semi-supervised classification still leave room for improving classification accuracy. Since deep random walk has demonstrated promising performance across diverse fields and shows potential for semi-supervised classification, this paper proposes a deep random walk inspired multi-view graph convolutional network for semi-supervised classification that builds signal propagation between connected vertices of the graph based on transfer probabilities. The learned representation matrices from different views are fused by an aggregator that learns appropriate weights, which are then normalized for label prediction. The proposed method partially reduces overfitting, and comprehensive experiments show it delivers impressive performance compared with other state-of-the-art algorithms, with classification accuracy improving by more than 5% on certain test datasets.
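
The propagation described above can be sketched with a plain random walk with restart: each view's features are diffused by its transition matrix P = D^-1 A for a few steps and the views are then aggregated. The restart rate, step count, and uniform aggregation weights below are illustrative assumptions; the paper's learned aggregator and prediction head are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def walk_propagate(A, X, steps=3, restart=0.2):
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)   # transition probabilities
    H = X.copy()
    for _ in range(steps):
        H = (1 - restart) * P @ H + restart * X               # random walk with restart
    return H

n, d, n_views = 40, 8, 2
adjs = [((rng.random((n, n)) < 0.1) | np.eye(n, dtype=bool)).astype(float)
        for _ in range(n_views)]
feats = [rng.normal(size=(n, d)) for _ in range(n_views)]

H_views = [walk_propagate(A, X) for A, X in zip(adjs, feats)]
weights = np.ones(n_views) / n_views                          # uniform stand-in for the aggregator
Z = sum(w * H for w, H in zip(weights, H_views))
print(Z.shape)
```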

Keyword :

Deep random walk; Graph convolutional networks; Multi-view learning; Semi-supervised classification

Cite:

GB/T 7714 Chen, Zexi , Chen, Weibin , Yao, Jie et al. Deep random walk inspired multi-view graph convolutional networks for semi-supervised classification [J]. | APPLIED INTELLIGENCE , 2025 , 55 (6) .
MLA Chen, Zexi et al. "Deep random walk inspired multi-view graph convolutional networks for semi-supervised classification" . | APPLIED INTELLIGENCE 55 . 6 (2025) .
APA Chen, Zexi , Chen, Weibin , Yao, Jie , Li, Jinbo , Wang, Shiping . Deep random walk inspired multi-view graph convolutional networks for semi-supervised classification . | APPLIED INTELLIGENCE , 2025 , 55 (6) .

Version :

Deep random walk inspired multi-view graph convolutional networks for semi-supervised classification EI
Journal Article | 2025, 55 (6) | Applied Intelligence
Deep random walk inspired multi-view graph convolutional networks for semi-supervised classification Scopus
Journal Article | 2025, 55 (6) | Applied Intelligence