Research Output Search

Query:

Scholar Name: 陈晓云 (Chen, Xiaoyun)

Cluster-guided graph attention auto-encoder SCIE
Journal Article | 2025, 81 (3) | JOURNAL OF SUPERCOMPUTING

Abstract :

Attribute graph clustering is an important tool for analyzing and understanding complex networks. In recent years, the graph attention auto-encoder has been applied to attribute graph clustering as an unsupervised feature representation learning method. However, the graph attention auto-encoder only learns feature representations of the nodes and must rely on traditional clustering algorithms such as k-means or spectral clustering to obtain the final node clusters. Because the auto-encoder used for feature learning and the clustering model are mutually independent, the clustering loss cannot be fed back to the auto-encoder during optimization, and the extracted features are not necessarily suitable for the downstream clustering task. To overcome this problem, we propose a cluster-guided graph attention auto-encoder (CGATAE), which introduces a cluster-guided, pairwise feature relationship preservation-based non-negative matrix factorization model (FR-NMF) into the graph attention auto-encoder. CGATAE obtains the final clustering results while learning cluster-oriented node feature representations. Experiments on five public attribute graph datasets verify the effectiveness of CGATAE, and its clustering quality is significantly better than that of the original graph attention auto-encoder.
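
To make the coupling concrete, the idea can be read schematically as a single objective in which the auto-encoder's graph reconstruction loss and a cluster-guided NMF term share the learned node embeddings. This is an assumed sketch for orientation, not the paper's exact loss; the symbols (Z, W, H, lambda) are illustrative.

```latex
% Schematic joint objective (assumed form, not the paper's exact loss):
% Z_theta  = node embeddings from the graph attention auto-encoder,
% A        = adjacency matrix, \hat{A}(Z_theta) = reconstructed graph,
% W, H >= 0 = factors of the cluster-guided FR-NMF term, lambda = trade-off.
\min_{\theta,\; W \ge 0,\; H \ge 0}\;
  \| A - \hat{A}(Z_\theta) \|_F^2
  \;+\; \lambda\, \| Z_\theta - W H \|_F^2
```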

Keyword :

Feature relationship preservation; Feature representations; Graph auto-encoder; Graph clustering; Matrix factorization

Cite:


GB/T 7714 Zheng, Zhiwen , Chen, Xiaoyun , Huang, Musheng . Cluster-guided graph attention auto-encoder [J]. | JOURNAL OF SUPERCOMPUTING , 2025 , 81 (3) .
MLA Zheng, Zhiwen et al. "Cluster-guided graph attention auto-encoder" . | JOURNAL OF SUPERCOMPUTING 81 . 3 (2025) .
APA Zheng, Zhiwen , Chen, Xiaoyun , Huang, Musheng . Cluster-guided graph attention auto-encoder . | JOURNAL OF SUPERCOMPUTING , 2025 , 81 (3) .

Version :

Cluster-guided graph attention auto-encoder Scopus
Journal Article | 2025, 81 (3) | Journal of Supercomputing
Cluster-guided graph attention auto-encoder EI
Journal Article | 2025, 81 (3) | Journal of Supercomputing
Elastic Deep Sparse Self-Representation Subspace Clustering Network SCIE
Journal Article | 2024, 56 (2) | NEURAL PROCESSING LETTERS

Abstract :

Subspace clustering models based on self-representation learning often use the $\ell_1$, $\ell_2$, or nuclear norm to constrain the self-representation matrix of the dataset. In theory, the $\ell_1$ norm can enforce the independence of subspaces, but it may lead to under-connection because of the sparsity of the self-representation matrix. The $\ell_2$ and nuclear norm regularizations can improve the connectivity of the self-representation matrix, but they may lead to over-connection. Because a single regularization term may cause subspaces to be over- or insufficiently divided, this paper proposes an elastic deep sparse self-representation subspace clustering network (EDS-SC), which imposes sparse constraints on deep features and introduces an elastic-net regularization that mixes the $\ell_1$ and $\ell_2$ norms to constrain the self-representation matrix. The network can extract deep sparse features and provides a balance between subspace independence and connectivity. Experiments on human face, object, and medical imaging datasets prove the effectiveness of the EDS-SC network.
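
For reference, the generic elastic-net self-representation objective that mixes the $\ell_1$ and $\ell_2$ penalties has the following form. This is a standard sketch only; EDS-SC applies such a constraint to deep features, and its full network loss includes further terms not shown here.

```latex
% Generic elastic-net self-representation objective (illustrative):
% Z = (deep) feature matrix, C = self-representation matrix.
\min_{C}\; \tfrac{1}{2}\| Z - ZC \|_F^2
  \;+\; \lambda_1 \| C \|_1
  \;+\; \tfrac{\lambda_2}{2} \| C \|_F^2
  \qquad \text{s.t.}\quad \operatorname{diag}(C) = 0
```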

Keyword :

Deep auto-encoder; Deep sparse feature; Elastic net regularization; Representation learning; Subspace clustering

Cite:


GB/T 7714 Wang, Qiaoping , Chen, Xiaoyun , Li, Yan et al. Elastic Deep Sparse Self-Representation Subspace Clustering Network [J]. | NEURAL PROCESSING LETTERS , 2024 , 56 (2) .
MLA Wang, Qiaoping et al. "Elastic Deep Sparse Self-Representation Subspace Clustering Network" . | NEURAL PROCESSING LETTERS 56 . 2 (2024) .
APA Wang, Qiaoping , Chen, Xiaoyun , Li, Yan , Lin, Yanming . Elastic Deep Sparse Self-Representation Subspace Clustering Network . | NEURAL PROCESSING LETTERS , 2024 , 56 (2) .

Version :

Elastic Deep Sparse Self-Representation Subspace Clustering Network EI
Journal Article | 2024, 56 (2) | Neural Processing Letters
Elastic Deep Sparse Self-Representation Subspace Clustering Network Scopus
Journal Article | 2024, 56 (2) | Neural Processing Letters
Attributed graph subspace clustering with residual compensation guided by adaptive dual manifold regularization SCIE
Journal Article | 2024, 255 | EXPERT SYSTEMS WITH APPLICATIONS

Abstract :

As an important technology for recommendation systems, attributed graph clustering has received extensive attention recently. Attributed graph clustering methods based on graph neural networks are currently the mainstream. However, their assumption that the attributes of adjacent nodes are similar is often not satisfied in the real world, which harms clustering performance. To this end, this paper proposes an attributed graph subspace clustering algorithm with residual compensation guided by adaptive dual manifold regularization (ADMRGC). On the basis of the low-rank representation (LRR) subspace clustering model, ADMRGC introduces attributed manifold regularization, topological manifold regularization, and residual compensation, which enable ADMRGC to utilize attribute similarity and topological similarity at the same time, thereby overcoming the limitation that traditional subspace clustering only considers attribute information. In addition, ADMRGC balances the contributions of node attributes and topology by adaptively weighting the dual manifold regularization, and uses a residual representation matrix to describe the difference between node attribute similarity and the topological neighbor relationship. Therefore, ADMRGC avoids the assumption of graph neural network models that the attributes of adjacent nodes are similar. Experimental results on 7 public graph datasets show that ADMRGC achieves the best clustering performance on both high-homophily and low-homophily datasets. In particular, on datasets with low homophily, ADMRGC, as a shallow model, also achieves better clustering performance than state-of-the-art deep neural network models.
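
Read schematically, the ingredients named above fit a low-rank self-representation objective with two adaptively weighted manifold terms and a residual variable. The form below is an assumed sketch for orientation, not the paper's exact formulation.

```latex
% Schematic ADMRGC-style objective (assumed form):
% X = attribute matrix, Z = self-representation, R = residual compensation,
% L_A / L_T = Laplacians of the attribute and topology similarity graphs,
% alpha = adaptively learned balance weight, lambda_1, lambda_2 = trade-offs.
\min_{Z,\,R,\,\alpha}\; \| Z \|_*
  \;+\; \lambda_1 \big( \alpha\, \mathrm{tr}(Z L_A Z^{\top})
                        + (1-\alpha)\, \mathrm{tr}(Z L_T Z^{\top}) \big)
  \;+\; \lambda_2 \| R \|_F^2
  \qquad \text{s.t.}\quad X = XZ + R
```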

Keyword :

Adaptive weight; Attributed graph clustering; Manifold regularization; Residual; Self-representation model

Cite:


GB/T 7714 Li, Yan , Chen, Xiaoyun . Attributed graph subspace clustering with residual compensation guided by adaptive dual manifold regularization [J]. | EXPERT SYSTEMS WITH APPLICATIONS , 2024 , 255 .
MLA Li, Yan et al. "Attributed graph subspace clustering with residual compensation guided by adaptive dual manifold regularization" . | EXPERT SYSTEMS WITH APPLICATIONS 255 (2024) .
APA Li, Yan , Chen, Xiaoyun . Attributed graph subspace clustering with residual compensation guided by adaptive dual manifold regularization . | EXPERT SYSTEMS WITH APPLICATIONS , 2024 , 255 .

Version :

Attributed graph subspace clustering with residual compensation guided by adaptive dual manifold regularization EI
Journal Article | 2024, 255 | Expert Systems with Applications
Attributed graph subspace clustering with residual compensation guided by adaptive dual manifold regularization Scopus
Journal Article | 2024, 255 | Expert Systems with Applications
Deep manifold matrix factorization autoencoder using global connectivity for link prediction SCIE
Journal Article | 2023, 53 (21), 25816-25835 | APPLIED INTELLIGENCE
WoS CC Cited Count: 1

Abstract :

Link prediction aims to predict missing links or eliminate spurious links by exploiting known complex network information. As an unsupervised linear feature representation method, the matrix factorization (MF)-based autoencoder (AE) can project a high-dimensional data matrix into a low-dimensional latent space. However, most traditional link prediction methods based on MF or AE adopt shallow models and a single adjacency matrix, so they cannot adequately learn and represent network features and are susceptible to noise. In addition, because some methods require a symmetric input matrix, they can only be used on undirected networks. Therefore, we propose a deep manifold matrix factorization autoencoder model using a global connectivity matrix, called DM-MFAE-G. The model uses the PageRank algorithm to obtain the global connectivity matrix between nodes of the complex network. DM-MFAE-G performs deep matrix factorization on the local adjacency matrix and the global connectivity matrix, respectively, to obtain global and local multi-layer feature representations, which contain rich structural information. In this paper, the model is solved by an alternating iterative optimization method, and the convergence of the algorithm is proved. Comprehensive experiments on different real networks demonstrate that the global connectivity matrix and manifold constraints introduced by DM-MFAE-G significantly improve link prediction performance on both directed and undirected networks.
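
The abstract states that the global connectivity matrix is obtained with PageRank. One plausible realization is a personalized-PageRank score matrix, where row i holds the stationary visiting probabilities of a random walk that restarts at node i. The sketch below (networkx/numpy, illustrative parameters) shows this idea; it is not necessarily the exact construction used in the paper.

```python
# Sketch: build a global connectivity matrix S from a graph using
# personalized PageRank (one plausible realization of the PageRank-based
# matrix described in the abstract; parameters are illustrative).
import networkx as nx
import numpy as np

def global_connectivity_matrix(G: nx.Graph, alpha: float = 0.85) -> np.ndarray:
    nodes = list(G.nodes())
    n = len(nodes)
    S = np.zeros((n, n))
    for i, u in enumerate(nodes):
        # Restart distribution concentrated on node u.
        pr = nx.pagerank(G, alpha=alpha, personalization={u: 1.0})
        S[i] = [pr[v] for v in nodes]
    return S

if __name__ == "__main__":
    G = nx.karate_club_graph()
    S = global_connectivity_matrix(G)
    print(S.shape, S.sum(axis=1)[:3])  # each row sums to ~1
```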

Keyword :

Autoencoder; Deep matrix factorization; Graph representation learning; Link prediction; Manifold regularization

Cite:


GB/T 7714 Lin, Xinyi , Chen, Xiaoyun , Zheng, Zhiwen . Deep manifold matrix factorization autoencoder using global connectivity for link prediction [J]. | APPLIED INTELLIGENCE , 2023 , 53 (21) : 25816-25835 .
MLA Lin, Xinyi et al. "Deep manifold matrix factorization autoencoder using global connectivity for link prediction" . | APPLIED INTELLIGENCE 53 . 21 (2023) : 25816-25835 .
APA Lin, Xinyi , Chen, Xiaoyun , Zheng, Zhiwen . Deep manifold matrix factorization autoencoder using global connectivity for link prediction . | APPLIED INTELLIGENCE , 2023 , 53 (21) , 25816-25835 .

Version :

Deep manifold matrix factorization autoencoder using global connectivity for link prediction Scopus
Journal Article | 2023, 53 (21), 25816-25835 | Applied Intelligence
Deep manifold matrix factorization autoencoder using global connectivity for link prediction EI
Journal Article | 2023, 53 (21), 25816-25835 | Applied Intelligence
Auto-Weighted Graph Regularization and Residual Compensation for Multi-view Subspace Clustering SCIE
Journal Article | 2022, 54 (5), 3851-3871 | NEURAL PROCESSING LETTERS
WoS CC Cited Count: 2

Abstract :

Multi-view clustering has attracted intensive attention and has proved to be more effective than single-view clustering. The mining and effective utilization of complementary information is the core of multi-view clustering. In this paper, we propose a novel Auto-Weighted Graph Regularization and Residual Compensation for Multi-view Subspace Clustering (ARMSC) method, which adopts residual representation information to compensate the globally low-rank consensus representation. An auto-weighted multi-view graph regularization term is constructed to preserve the local manifold structure of the multiple features; it automatically adjusts the weights, and the learned weights are then adopted to measure the importance of each view's residual representation in the final representation. Specifically, we formulate the graph-regularized consistent representation using the nuclear norm and a set of group-effect residual compensations using the Frobenius norm. Finally, we introduce a convex relaxation and an alternating direction method to optimize the problem. Comprehensive and ablation experiments on seven real-world datasets illustrate the effectiveness and superiority of the proposed ARMSC over several state-of-the-art multi-view clustering approaches.
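
Schematically, the described combination of a nuclear-norm consensus representation, auto-weighted graph regularization, and Frobenius-norm residual compensation can be written as follows. The symbols and constraints are assumptions for illustration rather than the paper's exact model.

```latex
% Schematic ARMSC-style objective (assumed form):
% X^{(v)} = data of view v, Z = consensus low-rank representation,
% E^{(v)} = view-specific residual, L^{(v)} = view-v graph Laplacian,
% w_v = auto-learned view weight, lambda_1, lambda_2 = trade-offs.
\min_{Z,\,\{E^{(v)}\},\,\{w_v\}}\; \| Z \|_*
  \;+\; \lambda_1 \sum_{v} w_v\, \mathrm{tr}\!\big( Z L^{(v)} Z^{\top} \big)
  \;+\; \lambda_2 \sum_{v} \| E^{(v)} \|_F^2
  \qquad \text{s.t.}\quad X^{(v)} = X^{(v)} Z + E^{(v)},\;\; \sum_v w_v = 1,\;\; w_v \ge 0
```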

Keyword :

Auto-weighted; Graph regularization; Multi-view clustering; Residual representation; Subspace clustering

Cite:


GB/T 7714 Wang, Qiaoping , Chen, Xiaoyun , Chen, Wenjian . Auto-Weighted Graph Regularization and Residual Compensation for Multi-view Subspace Clustering [J]. | NEURAL PROCESSING LETTERS , 2022 , 54 (5) : 3851-3871 .
MLA Wang, Qiaoping et al. "Auto-Weighted Graph Regularization and Residual Compensation for Multi-view Subspace Clustering" . | NEURAL PROCESSING LETTERS 54 . 5 (2022) : 3851-3871 .
APA Wang, Qiaoping , Chen, Xiaoyun , Chen, Wenjian . Auto-Weighted Graph Regularization and Residual Compensation for Multi-view Subspace Clustering . | NEURAL PROCESSING LETTERS , 2022 , 54 (5) , 3851-3871 .

Version :

Auto-Weighted Graph Regularization and Residual Compensation for Multi-view Subspace Clustering Scopus
Journal Article | 2022, 54 (5), 3851-3871 | Neural Processing Letters
Auto-Weighted Graph Regularization and Residual Compensation for Multi-view Subspace Clustering EI
Journal Article | 2022, 54 (5), 3851-3871 | Neural Processing Letters
Homogeneous ensemble extreme learning machine autoencoder with mutual representation learning and manifold regularization for medical datasets SCIE
Journal Article | 2022, 53 (12), 15476-15495 | APPLIED INTELLIGENCE
WoS CC Cited Count: 2

Abstract :

As single learners, the extreme learning machine autoencoder (ELM-AE) and the generalized extreme learning machine autoencoder (GELM-AE) have limited ability to learn the features of high-dimensional complex data, because such data contains more complex and richer discriminative information. GELM-AE also only attends to the internal relationships within each data subset during dimensionality reduction, ignoring the relationships between different subsets. This paper proposes the homogeneous ensemble extreme learning machine autoencoder (HeELM-AE) to extract diverse features from high-dimensional complex data. The method combines the ideas of ensemble feature learning and mutual representation matrix learning. Multiple data subsets are constructed from the original high-dimensional complex dataset with feature learning methods, and the generalized extreme learning machine autoencoder (GELM-AE) is used as the base dimension reducer to learn rich discriminative information from highly redundant features. Mutual representation learning characterizes the correlations between different data subsets, while the local manifold structure inherent in each data subset is maintained through manifold regularization. Extensive comparative experiments on medical datasets show that, compared with other ensemble feature learning models, HeELM-AE is an efficient and accurate model. Finally, visual analysis is used to explain the working mechanism of each stage of HeELM-AE and to explore the interpretability of the feature learning model.
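
As background for the base reducer mentioned above, a minimal numpy sketch of an ELM-AE-style dimensionality reduction is given below: a random hidden layer is followed by ridge-regularized output weights, and the data is projected with the transpose of those weights. This is an illustrative single reducer only; the ensemble construction, mutual representation learning, and manifold regularization of HeELM-AE are not shown, and hyperparameters are arbitrary.

```python
# Minimal ELM-AE-style dimensionality reducer (illustrative sketch of the
# kind of base reducer described above; the additional terms of
# GELM-AE/HeELM-AE are omitted, and hyperparameters are arbitrary).
import numpy as np

def elm_ae_reduce(X: np.ndarray, hidden_dim: int, reg: float = 1e-2,
                  seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((d, hidden_dim))      # random input weights
    b = rng.standard_normal(hidden_dim)           # random biases
    H = np.tanh(X @ W + b)                        # random hidden features
    # Ridge-regularized output weights: H @ beta ~= X
    beta = np.linalg.solve(H.T @ H + reg * np.eye(hidden_dim), H.T @ X)
    # ELM-AE convention: project the data with the learned output weights
    return X @ beta.T                             # shape (n, hidden_dim)

if __name__ == "__main__":
    X = np.random.default_rng(1).standard_normal((100, 50))
    Z = elm_ae_reduce(X, hidden_dim=10)
    print(Z.shape)  # (100, 10)
```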

Keyword :

Diverse features; Ensemble learning; Extreme learning machine autoencoder; Feature learning; Interpretability

Cite:


GB/T 7714 Chen, Wenjian , Chen, Xiaoyun , Lin, Yanming . Homogeneous ensemble extreme learning machine autoencoder with mutual representation learning and manifold regularization for medical datasets [J]. | APPLIED INTELLIGENCE , 2022 , 53 (12) : 15476-15495 .
MLA Chen, Wenjian et al. "Homogeneous ensemble extreme learning machine autoencoder with mutual representation learning and manifold regularization for medical datasets" . | APPLIED INTELLIGENCE 53 . 12 (2022) : 15476-15495 .
APA Chen, Wenjian , Chen, Xiaoyun , Lin, Yanming . Homogeneous ensemble extreme learning machine autoencoder with mutual representation learning and manifold regularization for medical datasets . | APPLIED INTELLIGENCE , 2022 , 53 (12) , 15476-15495 .

Version :

Homogeneous ensemble extreme learning machine autoencoder with mutual representation learning and manifold regularization for medical datasets EI
Journal Article | 2023, 53 (12), 15476-15495 | Applied Intelligence
Homogeneous ensemble extreme learning machine autoencoder with mutual representation learning and manifold regularization for medical datasets Scopus
Journal Article | 2023, 53 (12), 15476-15495 | Applied Intelligence
Multi-layer Extreme Learning Machine Autoencoder With Subspace Structure Preserving EI CSCD PKU
Journal Article | 2022, 48 (4), 1091-1104 | Acta Automatica Sinica

Abstract :

To deal with the clustering of high-dimensional complex data, it is usually required to reduce the dimensionality before clustering, but common dimensionality reduction methods do not consider the clustering characteristics of the data or the correlation between samples, so it is difficult to ensure that the dimensionality reduction method matches the clustering algorithm, which leads to the loss of clustering information. The nonlinear unsupervised dimensionality reduction method extreme learning machine autoencoder (ELM-AE) has been widely used in dimensionality reduction and denoising in recent years because of its fast learning speed and good generalization performance. In order to maintain the original subspace structure when high-dimensional data is projected into a low-dimensional space, the dimensionality reduction method ML-SELM-AE is proposed. This method captures the deep features of the sample set with a multi-layer extreme learning machine autoencoder while maintaining the multi-subspace structure of the clustered samples with a self-representation model. Experimental results show that the method can effectively improve clustering accuracy and achieve higher learning efficiency on UCI data, EEG data, and gene expression data. Copyright ©2022 Acta Automatica Sinica. All rights reserved.

Keyword :

Clustering algorithms; Data reduction; Gene expression; Knowledge acquisition; Machine learning; Reduction

Cite:


GB/T 7714 Chen, Xiao-Yun , Chen, Yuan . Multi-layer Extreme Learning Machine Autoencoder With Subspace Structure Preserving [J]. | Acta Automatica Sinica , 2022 , 48 (4) : 1091-1104 .
MLA Chen, Xiao-Yun et al. "Multi-layer Extreme Learning Machine Autoencoder With Subspace Structure Preserving" . | Acta Automatica Sinica 48 . 4 (2022) : 1091-1104 .
APA Chen, Xiao-Yun , Chen, Yuan . Multi-layer Extreme Learning Machine Autoencoder With Subspace Structure Preserving . | Acta Automatica Sinica , 2022 , 48 (4) , 1091-1104 .


Adaptive Weighted Multi-view Subspace Clustering Guided by Manifold Regularization EI CSCD PKU
Journal Article | 2022, 35 (11), 965-976 | Pattern Recognition and Artificial Intelligence

Abstract :

Most existing multi-view subspace clustering methods learn the consistent shared information of multi-view data and regard the contribution of each view as equally important when integrating the difference information of multiple views. However, treating every view as equally important ignores the possible noise or redundancy between different views, resulting in poor final clustering performance. Therefore, an algorithm of adaptive weighted multi-view subspace clustering guided by manifold regularization (MR-AWMSC) is proposed in this paper. The consistent global low-rank representation of each view is learned via the nuclear norm, and the difference information of different views is described by the group effect. Following the idea of manifold regularization, the weight of each view is learned adaptively, automatically assigning a contribution degree to each view's difference information. The difference information is then integrated with the adaptive weights and fused with the consistent information to obtain the final consensus representation, which is used for clustering the multi-view data. Experimental results on six public datasets demonstrate that MR-AWMSC effectively improves multi-view clustering performance. © 2022 Journal of Pattern Recognition and Artificial Intelligence. All rights reserved.
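
For orientation, manifold-regularized multi-view methods often derive adaptive view weights from the per-view smoothness cost itself: when the smoothness terms enter the objective as square roots, $\sum_v \sqrt{\mathrm{tr}(Z L^{(v)} Z^{\top})}$, the optimal weights take the closed form below, so noisier or more redundant views automatically receive smaller weights. Whether MR-AWMSC uses this exact rule is not stated here; the formula is shown only as a common instance of the idea.

```latex
% Common auto-weighting rule in manifold-regularized models (illustrative):
% Z = consensus representation, L^{(v)} = graph Laplacian of view v.
w_v \;=\; \frac{1}{2\sqrt{\mathrm{tr}\!\big( Z L^{(v)} Z^{\top} \big)}}
```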

Keyword :

Artificial intelligence; Clustering algorithms; Pattern recognition

Cite:


GB/T 7714 Lin, Yanming , Chen, Xiaoyun . Adaptive Weighted Multi-view Subspace Clustering Guided by Manifold Regularization [J]. | Pattern Recognition and Artificial Intelligence , 2022 , 35 (11) : 965-976 .
MLA Lin, Yanming et al. "Adaptive Weighted Multi-view Subspace Clustering Guided by Manifold Regularization" . | Pattern Recognition and Artificial Intelligence 35 . 11 (2022) : 965-976 .
APA Lin, Yanming , Chen, Xiaoyun . Adaptive Weighted Multi-view Subspace Clustering Guided by Manifold Regularization . | Pattern Recognition and Artificial Intelligence , 2022 , 35 (11) , 965-976 .


Federated Twin Support Vector Machine EI
Conference Paper | 2022, 13534 LNCS, 187-204 | 5th Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2022

Abstract :

TSVM is designed to solve binary classification problems with less computational overhead by finding two hyperplanes, and it has been widely used to solve real-world problems. In real scenarios, however, the data used for learning is often scattered across different institutions or users, and at the same time people pay increasing attention to personal privacy leakage. Because of these privacy protection issues, simply collecting all the data for model training is no longer acceptable. Federated learning has recently been proposed to solve this problem: it completes model training by sharing model parameter updates while the data itself remains local. However, there is still no algorithm for twin support vector machines under the framework of federated learning. Combining the characteristics of the twin support vector machine and federated learning, this paper proposes a federated twin support vector machine algorithm (FTSVM) and extends the twin support vector machine based on stochastic gradient descent into the federated setting. We propose a dedicated initialization algorithm and integration algorithm to ensure the accuracy of the algorithm and the effectiveness of privacy protection. Accuracy experiments are carried out on five datasets, and the accuracy is essentially the same as that of the TSVM based on gradient descent. Ablation experiments show that as the number of participants increases, the accuracy of FTSVM is significantly higher than the average accuracy of stand-alone TSVMs. Time experiments show that the time overhead of FTSVM increases linearly with the number of participants. These experiments demonstrate the accuracy, effectiveness, and applicability of the proposed FTSVM in real-world scenarios. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
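
To illustrate the general pattern (local stochastic-gradient updates of a TSVM-style hyperplane on private data, followed by server-side averaging), a toy numpy sketch is given below. It is a schematic FedAvg-style example under simplifying assumptions: only one of the two TSVM hyperplanes is trained, the loss is a simplified TSVM-style objective, and the paper's dedicated initialization and integration algorithms are not reproduced; all names and parameters are illustrative.

```python
# Toy sketch: federated training of one TSVM hyperplane (w, b) via local SGD
# plus server-side averaging. Illustrative only; not the FTSVM algorithm itself.
import numpy as np

def local_sgd(w, b, X_pos, X_neg, lr=0.01, c=1.0, epochs=5, seed=0):
    """Local SGD on a simplified objective for plane 1:
    0.5*mean((x.w + b)^2 over class +1) + c*mean(max(0, 1 + x.w + b) over class -1)."""
    rng = np.random.default_rng(seed)
    w, b = w.copy(), float(b)
    for _ in range(epochs):
        for x in X_pos[rng.permutation(len(X_pos))]:
            r = x @ w + b              # the plane should pass close to class +1
            w -= lr * r * x
            b -= lr * r
        for x in X_neg[rng.permutation(len(X_neg))]:
            if 1.0 + (x @ w + b) > 0:  # hinge: keep class -1 at distance >= 1
                w -= lr * c * x
                b -= lr * c
    return w, b

def federated_round(w, b, clients):
    """One communication round: broadcast global (w, b), run local SGD, average."""
    updates = [local_sgd(w, b, Xp, Xn, seed=i) for i, (Xp, Xn) in enumerate(clients)]
    w_new = np.mean([u[0] for u in updates], axis=0)
    b_new = float(np.mean([u[1] for u in updates]))
    return w_new, b_new

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Three clients, each holding a private split of a toy two-class problem.
    clients = [(rng.normal(+2.0, 1.0, (30, 2)), rng.normal(-2.0, 1.0, (30, 2)))
               for _ in range(3)]
    w, b = np.zeros(2), 0.0
    for _ in range(10):
        w, b = federated_round(w, b, clients)
    print("hyperplane for class +1:", w, b)
```

The second TSVM hyperplane would be trained symmetrically with the class roles swapped.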

Keyword :

Gradient methods; Learning algorithms; Stochastic systems; Support vector machines; Vectors

Cite:


GB/T 7714 Yang, Zhou , Chen, Xiaoyun . Federated Twin Support Vector Machine [C] . 2022 : 187-204 .
MLA Yang, Zhou et al. "Federated Twin Support Vector Machine" . (2022) : 187-204 .
APA Yang, Zhou , Chen, Xiaoyun . Federated Twin Support Vector Machine . (2022) : 187-204 .


流形正则引导的自适应加权多视角子空间聚类 (Adaptive Weighted Multi-view Subspace Clustering Guided by Manifold Regularization) CSCD PKU
Journal Article | 2022, 35 (11), 965-976 | 模式识别与人工智能 (Pattern Recognition and Artificial Intelligence)

Abstract :

Most existing multi-view subspace clustering methods learn the consistent shared information of multi-view data and treat the contribution of each view as equally important when integrating the difference information of multiple views. However, this ignores the possible noise or redundancy between different views, leading to poor final clustering performance. To this end, this paper proposes an adaptive weighted multi-view subspace clustering algorithm guided by manifold regularization (MR-AWMSC). The algorithm uses the nuclear norm to learn the consistent global low-rank representation of each view and uses the group effect to characterize the difference information of different views. Following the idea of manifold regularization, the weight of each view is learned adaptively, automatically assigning a contribution degree to each view's difference information. The difference information is then integrated with the adaptive weights and fused with the consistent information to obtain the final consensus representation, which is used for clustering. Experiments on six public datasets show that the proposed algorithm effectively improves multi-view clustering performance.

Keyword :

Multi-view clustering; Subspace clustering; Manifold regularization; Adaptive weighting

Cite:


GB/T 7714 林燕铭 , 陈晓云 . 流形正则引导的自适应加权多视角子空间聚类 [J]. | 模式识别与人工智能 , 2022 , 35 (11) : 965-976 .
MLA 林燕铭 et al. "流形正则引导的自适应加权多视角子空间聚类" . | 模式识别与人工智能 35 . 11 (2022) : 965-976 .
APA 林燕铭 , 陈晓云 . 流形正则引导的自适应加权多视角子空间聚类 . | 模式识别与人工智能 , 2022 , 35 (11) , 965-976 .

Version :

流形正则引导的自适应加权多视角子空间聚类 CSCD PKU
Journal Article | 2022, 35 (11), 965-976 | 模式识别与人工智能