Query:
Scholar Name: Zheng Qinghai
Abstract :
As a widely used method in signal processing, Principal Component Analysis (PCA) performs both the compression and the recovery of high-dimensional data by leveraging linear transformations. Regarding the robustness of PCA, how to discriminate between correct samples and outliers is a crucial and challenging issue. In this paper, we present a general model that conducts PCA via a non-decreasing concave regularized minimization, termed PCA-NCRM for short. Different from most existing PCA methods, which learn the linear transformations by minimizing the recovery errors between the recovered data and the original data in the least-squares sense, our model adopts a monotonically non-decreasing concave function to enhance the model's ability to distinguish correct samples from outliers. To be specific, PCA-NCRM increases the attention paid to samples with smaller recovery errors while diminishing the attention paid to samples with larger recovery errors. The proposed minimization problem can be efficiently addressed by an iterative re-weighting optimization. Experimental results on several datasets show the effectiveness of our model.
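A minimal NumPy sketch of the iterative re-weighting idea described above, assuming a concrete concave penalty g(e) = log(1 + e/sigma); the function name, the choice of g, and the fixed iteration count are illustrative assumptions, not the published PCA-NCRM formulation.

import numpy as np

def reweighted_pca(X, k, sigma=1.0, n_iter=20):
    """X: (n_samples, n_features) data matrix; k: target dimension."""
    X = X - X.mean(axis=0)                       # center the data
    w = np.ones(X.shape[0])                      # per-sample weights
    for _ in range(n_iter):
        # weighted covariance: samples with large recovery errors contribute less
        C = (X * w[:, None]).T @ X / w.sum()
        _, vecs = np.linalg.eigh(C)
        W = vecs[:, -k:]                         # top-k principal directions
        err = np.sum((X - X @ W @ W.T) ** 2, axis=1)
        # re-weight with g'(e) = 1 / (sigma + e), the derivative of log(1 + e/sigma)
        w = 1.0 / (sigma + err)
    return W, w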
Keyword :
Adaptation models; Dimensionality reduction; High dimensional data; Iterative algorithms; Iterative re-weighting optimization; Lagrangian functions; Minimization; Optimization; Principal component analysis; principal component analysis (PCA); Robustness; Signal processing algorithms; unsupervised dimensionality reduction
Cite:
GB/T 7714 | Zheng, Qinghai , Zhuang, Yixin . Non-Decreasing Concave Regularized Minimization for Principal Component Analysis [J]. | IEEE SIGNAL PROCESSING LETTERS , 2025 , 32 : 486-490 . |
MLA | Zheng, Qinghai et al. "Non-Decreasing Concave Regularized Minimization for Principal Component Analysis" . | IEEE SIGNAL PROCESSING LETTERS 32 (2025) : 486-490 . |
APA | Zheng, Qinghai , Zhuang, Yixin . Non-Decreasing Concave Regularized Minimization for Principal Component Analysis . | IEEE SIGNAL PROCESSING LETTERS , 2025 , 32 , 486-490 . |
Abstract :
Multi-view clustering learns consistent information from multi-view data, aiming to obtain more discriminative clustering structures. However, data in real-world scenarios often exhibit temporal or spatial asynchrony, leading to views with unaligned instances. Existing methods primarily address this issue by learning transformation matrices to align unaligned instances, but learning differentiable transformation matrices is cumbersome. To address the challenge of partially unaligned instances, we propose Partially Multi-view Clustering via Re-alignment (PMVCR). Our approach integrates representation learning and data alignment through a two-stage training procedure and an intermediate re-alignment process. Specifically, our training process consists of three stages: (i) In the coarse-grained alignment stage, we construct negative instance pairs for unaligned instances and utilize contrastive learning to preliminarily learn the view representations of the instances. (ii) In the re-alignment stage, we match unaligned instances based on the similarity of their view representations, aligning them with the primary view. (iii) In the fine-grained alignment stage, we further enhance the discriminative power of the view representations and the model's ability to differentiate between clusters. Compared to existing models, our method effectively leverages information between unaligned samples and enhances model generalization by constructing negative instance pairs. Clustering experiments on several popular multi-view datasets demonstrate the effectiveness and superiority of our method. Our code is publicly available at https://github.com/WenB777/PMVCR.git.
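The re-alignment stage above matches unaligned instances to the primary view by representation similarity. The sketch below, assuming cosine similarity and a Hungarian (optimal assignment) matching via SciPy, is only an illustration of that step, not the exact PMVCR procedure.

import numpy as np
from scipy.optimize import linear_sum_assignment

def realign(z_primary, z_secondary):
    """z_primary, z_secondary: (n, d) view representations of the same batch."""
    a = z_primary / np.linalg.norm(z_primary, axis=1, keepdims=True)
    b = z_secondary / np.linalg.norm(z_secondary, axis=1, keepdims=True)
    sim = a @ b.T                            # pairwise cosine similarity
    _, col = linear_sum_assignment(-sim)     # maximize total similarity
    return z_secondary[col]                  # secondary view re-ordered to match the primary view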
Keyword :
Contrastive learning; Multi-view clustering; Partial view-aligned multi-view learning
Cite:
GB/T 7714 | Yan, Wenbiao , Zhu, Jihua , Chen, Jinqian et al. Partially multi-view clustering via re-alignment [J]. | NEURAL NETWORKS , 2025 , 182 . |
MLA | Yan, Wenbiao et al. "Partially multi-view clustering via re-alignment" . | NEURAL NETWORKS 182 (2025) . |
APA | Yan, Wenbiao , Zhu, Jihua , Chen, Jinqian , Cheng, Haozhe , Bai, Shunshun , Duan, Liang et al. Partially multi-view clustering via re-alignment . | NEURAL NETWORKS , 2025 , 182 . |
Abstract :
In real-world scenarios, missing views are common due to the complexity of data collection, so classifying incomplete multi-view data is inevitable. Although substantial progress has been achieved, two challenging problems remain in incomplete multi-view classification: (1) Simply ignoring the missing views is often ineffective, especially under high missing rates, and can lead to incomplete analysis and unreliable results. (2) Most existing multi-view classification models primarily focus on maximizing consistency between different views; however, neglecting view-specific information may lead to decreased performance. To solve the above problems, we propose a novel framework called Trusted Cross-View Completion (TCVC) for incomplete multi-view classification. Specifically, TCVC consists of three modules: the Cross-view Feature Learning Module (CVFL), the Imputation Module (IM), and the Trusted Fusion Module (TFM). First, CVFL mines view-specific information to obtain cross-view reconstruction features. Then, IM restores each missing view by fusing the cross-view reconstruction features with weights guided by uncertainty-aware information, which is the quality assessment of the cross-view reconstruction features performed in TFM. Moreover, the recovered views are supervised in a cross-view neighborhood-aware manner. Finally, TFM effectively fuses the completed data to generate trusted classification predictions. Extensive experiments show that our method is effective and robust.
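A hedged sketch of the imputation step described above: reconstructions of a missing view produced from each available view are fused with weights derived from uncertainty scores (lower uncertainty, higher weight). The softmax-style weighting is an assumption for illustration, not the exact IM/TFM rule.

import numpy as np

def impute_missing_view(recon_feats, uncertainties):
    """recon_feats: list of (n, d) reconstructions of the missing view, one per available view;
    uncertainties: list of (n,) per-sample uncertainty scores for those reconstructions."""
    U = np.stack(uncertainties)                  # (v, n)
    W = np.exp(-U) / np.exp(-U).sum(axis=0)      # confidence weights over views, per sample
    R = np.stack(recon_feats)                    # (v, n, d)
    return (W[:, :, None] * R).sum(axis=0)       # uncertainty-weighted fusion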
Keyword :
Cross-view feature learning; Incomplete multi-view classification; Uncertainty-aware
Cite:
GB/T 7714 | Zhou, Liping , Chen, Shiyun , Song, Peihuan et al. Trusted Cross-view Completion for incomplete multi-view classification [J]. | NEUROCOMPUTING , 2025 , 629 . |
MLA | Zhou, Liping et al. "Trusted Cross-view Completion for incomplete multi-view classification" . | NEUROCOMPUTING 629 (2025) . |
APA | Zhou, Liping , Chen, Shiyun , Song, Peihuan , Zheng, Qinghai , Yu, Yuanlong . Trusted Cross-view Completion for incomplete multi-view classification . | NEUROCOMPUTING , 2025 , 629 . |
Abstract :
Multi-view clustering has attracted significant attention in recent years because it can leverage the consistent and complementary information of multiple views to improve clustering performance. However, effectively fusing the information of multiple views and balancing their consistent and complementary information are common challenges in multi-view clustering. Most existing multi-view fusion works focus on weighted-sum fusion and concatenation fusion, which cannot fully fuse the underlying information and do not consider balancing the consistent and complementary information of multiple views. To this end, we propose Cross-view Fusion for Multi-view Clustering (CFMVC). Specifically, CFMVC combines a deep neural network and a graph convolutional network for cross-view information fusion, fully fusing the feature information and structural information of multiple views. To balance the consistent and complementary information of multiple views, CFMVC enhances the correlation among the same samples to maximize the consistent information while simultaneously reinforcing the independence among different samples to maximize the complementary information. Experimental results on several multi-view datasets demonstrate the effectiveness of CFMVC on the multi-view clustering task.
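The balance described above (correlation among the same samples, independence among different samples) can be illustrated with a simple cross-view objective; the measure below is a generic stand-in written for clarity and is not claimed to be the loss used in CFMVC.

import numpy as np

def cross_view_objective(z1, z2):
    """z1, z2: (n, d) representations of the same n samples from two views."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    S = z1 @ z2.T / z1.shape[1]                         # (n, n) cross-view sample similarity
    consistency = np.mean(np.diag(S))                   # same-sample agreement (to be maximized)
    leakage = np.mean((S - np.diag(np.diag(S))) ** 2)   # cross-sample similarity (to be minimized)
    return leakage - consistency                        # smaller is better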
Keyword :
Cross-view; deep neural network; graph convolutional network; multi-view clustering; multi-view fusion
Cite:
GB/T 7714 | Huang, Zhijie , Huang, Binqiang , Zheng, Qinghai et al. Cross-View Fusion for Multi-View Clustering [J]. | IEEE SIGNAL PROCESSING LETTERS , 2025 , 32 : 621-625 . |
MLA | Huang, Zhijie et al. "Cross-View Fusion for Multi-View Clustering" . | IEEE SIGNAL PROCESSING LETTERS 32 (2025) : 621-625 . |
APA | Huang, Zhijie , Huang, Binqiang , Zheng, Qinghai , Yu, Yuanlong . Cross-View Fusion for Multi-View Clustering . | IEEE SIGNAL PROCESSING LETTERS , 2025 , 32 , 621-625 . |
Abstract :
With the extensive use of multi-view data in practice, multi-view spectral clustering has received a lot of attention. In this work, we focus on two challenges: how to deal with the partially contradictory graph information among different views, and how to conduct clustering without parameter selection. To this end, we establish a novel graph learning framework, which avoids the linear combination of the partially contradictory graph information among different views and learns a unified graph for clustering without parameter selection. Specifically, we introduce a flexible graph degeneration with a structured graph constraint to address the aforementioned challenging issues. Besides, our method can handle large-scale data by using a bipartite graph. Experimental results show the effectiveness and competitiveness of our method compared to several state-of-the-art methods.
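For the large-scale case mentioned above, a bipartite graph between samples and a small set of anchors is a common construction; the k-means anchors and Gaussian affinities in this sketch are generic assumptions, not the paper's specific graph degeneration scheme.

import numpy as np
from sklearn.cluster import KMeans

def bipartite_graph(X, n_anchors=100, k=5, gamma=1.0):
    """X: (n, d) data; returns an (n, n_anchors) row-stochastic affinity matrix."""
    anchors = KMeans(n_clusters=n_anchors, n_init=10).fit(X).cluster_centers_
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)   # squared distances to anchors
    B = np.exp(-gamma * d2)
    far = np.argsort(d2, axis=1)[:, k:]          # all but the k nearest anchors per sample
    np.put_along_axis(B, far, 0.0, axis=1)       # sparsify
    return B / B.sum(axis=1, keepdims=True)      # row-normalize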
Keyword :
Bipartite graph; Circuits and systems; graph degeneration; Laplace equations; Multi-view data; Optimization; structured graph constraint; Task analysis; Time complexity; Vectors
Cite:
GB/T 7714 | Zheng, Q. Flexible and Parameter-free Graph Learning for Multi-view Spectral Clustering [J]. | IEEE Transactions on Circuits and Systems for Video Technology , 2024 , 34 (9) : 1-1 . |
MLA | Zheng, Q. "Flexible and Parameter-free Graph Learning for Multi-view Spectral Clustering" . | IEEE Transactions on Circuits and Systems for Video Technology 34 . 9 (2024) : 1-1 . |
APA | Zheng, Q. Flexible and Parameter-free Graph Learning for Multi-view Spectral Clustering . | IEEE Transactions on Circuits and Systems for Video Technology , 2024 , 34 (9) , 1-1 . |
Abstract :
In multi-view representation learning (MVRL), category uncertainty is a significant challenge. Existing methods excel at deriving shared representations across multiple views but often neglect the uncertainty associated with the cluster assignments from each view, thereby increasing the ambiguity of category determination. Additionally, kernel-based or neural-network-based approaches, while revealing nonlinear relationships, pay little attention to category uncertainty. To address these limitations, this paper proposes a method that leverages the uncertainty of label distributions to enhance MVRL. Specifically, our approach combines label-distribution-based uncertainty reduction with view representation learning to improve clustering accuracy and robustness. It first computes the within-view representations of the samples and their semantic labels. Then, we introduce a novel constraint based on either variance or information entropy to mitigate class uncertainty, thereby improving the discriminative power of the learned representations. Extensive experiments conducted on diverse multi-view datasets demonstrate that our method consistently outperforms existing approaches, producing more accurate and reliable class assignments. The experimental results highlight the effectiveness of our method in enhancing MVRL by reducing category uncertainty and improving overall classification performance. The method is not only highly interpretable but also enhances the model's ability to learn multi-view consistent information.
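A minimal sketch of the uncertainty constraint mentioned above: per-sample label (cluster-assignment) distributions are scored by entropy or variance so that training can push them toward confident, low-uncertainty assignments. The exact constraint used in the paper may differ.

import numpy as np

def assignment_uncertainty(P, mode="entropy"):
    """P: (n, c) per-sample label distributions, rows summing to 1."""
    if mode == "entropy":
        return -np.sum(P * np.log(P + 1e-12), axis=1).mean()    # high entropy = uncertain
    return -P.var(axis=1).mean()                                # low variance = uncertain (peaked rows have high variance)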
Keyword :
Multi-view clustering; Multi-view label distribution; Multi-view representation learning
Cite:
GB/T 7714 | Yan, Wenbiao , Wu, Minghong , Zhou, Yiyang et al. Label distribution-driven multi-view representation learning [J]. | INFORMATION FUSION , 2024 , 115 . |
MLA | Yan, Wenbiao et al. "Label distribution-driven multi-view representation learning" . | INFORMATION FUSION 115 (2024) . |
APA | Yan, Wenbiao , Wu, Minghong , Zhou, Yiyang , Zheng, Qinghai , Chen, Jinqian , Cheng, Haozhe et al. Label distribution-driven multi-view representation learning . | INFORMATION FUSION , 2024 , 115 . |
Abstract :
Multi-view Representation Learning (MRL) has recently attracted widespread attention because it can integrate information from diverse data sources to achieve better performance. However, existing MRL methods still have two issues: (1) They typically perform various consistency objectives within the feature space, which might discard complementary information contained in each view. (2) Some methods only focus on handling inter-view relationships while ignoring inter-sample relationships that are also valuable for downstream tasks. To address these issues, we propose a novel Multi-view representation learning method with Dual-label Collaborative Guidance (MDCG). Specifically, we fully excavate and utilize valuable semantic and graph information hidden in multi-view data to collaboratively guide the learning process of MRL. By learning consistent semantic labels from distinct views, our method enhances intrinsic connections across views while preserving view-specific information, which contributes to learning the consistent and complementary unified representation. Moreover, we integrate similarity matrices of multiple views to construct graph labels that indicate inter-sample relationships. With the idea of self-supervised contrastive learning, graph structure information implied in graph labels is effectively captured by the unified representation, thus enhancing its discriminability. Extensive experiments on diverse real-world datasets demonstrate the effectiveness and superiority of MDCG compared with nine state-of-the-art methods. Our code will be available at https://github.com/Bin1Chen/MDCG.
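The graph labels described above can be illustrated as follows: per-view cosine similarity matrices are averaged, and each sample's k most similar samples are marked as positive pairs for a contrastive objective. The value of k and the simple averaging are assumptions made for this sketch, not MDCG's exact construction.

import numpy as np

def graph_labels(views, k=10):
    """views: list of (n, d_v) feature matrices; returns an (n, n) 0/1 positive-pair mask."""
    n = views[0].shape[0]
    S = np.zeros((n, n))
    for X in views:
        Z = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
        S += Z @ Z.T                             # accumulate per-view cosine similarity
    S /= len(views)
    np.fill_diagonal(S, -np.inf)                 # exclude self-pairs
    nn = np.argsort(-S, axis=1)[:, :k]           # k most similar samples per row
    G = np.zeros((n, n))
    np.put_along_axis(G, nn, 1.0, axis=1)
    return np.maximum(G, G.T)                    # symmetric positive-pair mask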
Keyword :
Contrastive learning; Graph information; Multi-view representation learning; Semantic information
Cite:
GB/T 7714 | Chen, Bin , Ren, Xiaojin , Bai, Shunshun et al. Multi-view representation learning with dual-label collaborative guidance [J]. | KNOWLEDGE-BASED SYSTEMS , 2024 , 305 . |
MLA | Chen, Bin et al. "Multi-view representation learning with dual-label collaborative guidance" . | KNOWLEDGE-BASED SYSTEMS 305 (2024) . |
APA | Chen, Bin , Ren, Xiaojin , Bai, Shunshun , Chen, Ziyuan , Zheng, Qinghai , Zhu, Jihua . Multi-view representation learning with dual-label collaborative guidance . | KNOWLEDGE-BASED SYSTEMS , 2024 , 305 . |
Abstract :
Recently, multi-view clustering methods have garnered considerable attention and have been applied in various domains. However, in practical scenarios, some samples may lack specific views, giving rise to the challenge of incomplete multi-view clustering. While some methods focus on completing the missing data, incorrect completion can negatively affect representation learning; moreover, separating completion from representation learning prevents the attainment of an optimal representation. Other methods eschew completion but concentrate solely on either feature information or graph information, thus failing to achieve comprehensive representations. To address these challenges, we propose a graph-guided, imputation-free method for incomplete multi-view clustering. Unlike completion-based methods, our approach aims to maximize the utilization of the existing information by simultaneously considering feature and graph information, realized through a feature learning component and a graph learning component. The former introduces a degradation network that reconstructs, from a unified representation, view-specific representations close to the available samples, seamlessly integrating feature information into the unified representation. Leveraging a semi-supervised idea, the latter utilizes reliable graph information from the available samples to guide the learning of the unified representation. These two components collaborate to acquire a comprehensive unified representation for multi-view clustering. Extensive experiments conducted on real datasets demonstrate the effectiveness and competitiveness of the proposed method when compared with other state-of-the-art methods. Our code will be released on https://github.com/yff-java/GIMVC/.
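An imputation-free reconstruction loss in the spirit of the degradation idea above: each view is reconstructed from the unified representation through a per-view map, and the loss is evaluated only on samples actually available in that view, so no missing data is ever filled in. The linear maps here are an assumed simplification of the degradation network.

import numpy as np

def degradation_loss(H, views, masks, maps):
    """H: (n, d) unified representation; views[v]: (n, d_v) data; masks[v]: (n,) bool availability;
    maps[v]: (d, d_v) per-view degradation matrix."""
    loss = 0.0
    for X, m, W in zip(views, masks, maps):
        R = H @ W                                # reconstruct the view from the unified representation
        loss += np.mean((R[m] - X[m]) ** 2)      # only available samples contribute
    return loss / len(views)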
Keyword :
Graph information; Incomplete multi-view clustering; Representation learning
Cite:
GB/T 7714 | Bai, Shunshun , Zheng, Qinghai , Ren, Xiaojin et al. Graph-guided imputation-free incomplete multi-view clustering [J]. | EXPERT SYSTEMS WITH APPLICATIONS , 2024 , 258 . |
MLA | Bai, Shunshun et al. "Graph-guided imputation-free incomplete multi-view clustering" . | EXPERT SYSTEMS WITH APPLICATIONS 258 (2024) . |
APA | Bai, Shunshun , Zheng, Qinghai , Ren, Xiaojin , Zhu, Jihua . Graph-guided imputation-free incomplete multi-view clustering . | EXPERT SYSTEMS WITH APPLICATIONS , 2024 , 258 . |
Abstract :
Multi-view clustering has attracted widespread attention because it can improve clustering performance by integrating information from various views of samples. However, many existing methods either neglect graph information entirely or only partially incorporate it, leading to information loss and non-comprehensive representations. Besides, they usually make use of graph information by determining a fixed number of neighbors based on prior knowledge, which limits the exploration of the graph information contained in the data. To address these issues, we propose a novel method, termed Graph-Driven deep Multi-View Clustering with self-paced learning (GDMVC), which integrates both feature information and graph information to better explore the information within the data. Additionally, based on the idea of self-paced learning, this method gradually increases the number of neighbors and updates the similarity matrix, progressively providing more graph information to guide representation learning. In this way, we avoid the issues associated with a fixed number of neighbors and ensure a thorough exploration of the graph information contained in the original data. Furthermore, this method not only ensures consistency among views but also leverages graph information to further enhance the unified representation, aiming to obtain more separable cluster structures. Extensive experiments on real datasets demonstrate its effectiveness for multi-view clustering. Our code will be released on https://github.com/yff-java/GDMVC/.
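The self-paced schedule described above can be sketched as a neighbor count that grows with the training epoch when the similarity graph is rebuilt; the linear schedule and Gaussian affinity below are illustrative assumptions, not GDMVC's exact update.

import numpy as np

def self_paced_graph(Z, epoch, n_epochs, k_min=3, k_max=15, gamma=1.0):
    """Z: (n, d) current representations; returns a row-stochastic kNN similarity matrix."""
    k = int(k_min + (k_max - k_min) * epoch / max(n_epochs - 1, 1))   # neighbors grow over epochs
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # no self-neighbors
    S = np.exp(-gamma * d2)
    drop = np.argsort(-S, axis=1)[:, k:]         # keep only the k most similar neighbors
    np.put_along_axis(S, drop, 0.0, axis=1)
    return S / S.sum(axis=1, keepdims=True)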
Keyword :
Graph information; Multi-View Clustering; Representation learning; Self-paced learning
Cite:
GB/T 7714 | Bai, Shunshun , Ren, Xiaojin , Zheng, Qinghai et al. Graph-Driven deep Multi-View Clustering with self-paced learning [J]. | KNOWLEDGE-BASED SYSTEMS , 2024 , 296 . |
MLA | Bai, Shunshun et al. "Graph-Driven deep Multi-View Clustering with self-paced learning" . | KNOWLEDGE-BASED SYSTEMS 296 (2024) . |
APA | Bai, Shunshun , Ren, Xiaojin , Zheng, Qinghai , Zhu, Jihua . Graph-Driven deep Multi-View Clustering with self-paced learning . | KNOWLEDGE-BASED SYSTEMS , 2024 , 296 . |