Publication Search

Query:

Scholar name: 余春艳

Results (13 pages in total)
Vision Transformer with Progressive Tokenization for CT Metal Artifact Reduction EI
Conference Paper | 2023 , 2023-June | 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023

Abstract :

High-quality Computed Tomography (CT) plays a vital role in clinical diagnosis, but metallic implants introduce severe metal artifacts into CT images and obstruct doctors' decision-making. Much prior research on Metal Artifact Reduction (MAR) is based on Convolutional Neural Networks (CNNs). Recently, the Transformer has demonstrated phenomenal potential in computer vision, and transformer-based methods have been harnessed for CT image denoising. Nevertheless, these methods have been little explored for MAR. To fill the gap, we put forth, to the best of our knowledge, the first transformer-based architecture for MAR. Our method relies on a standard Vision Transformer (ViT). Furthermore, we adopt progressive tokenization to avoid ViT's simple tokenization, which is unable to model local anatomical information. Additionally, to facilitate interaction among tokens, we take advantage of the cyclic shift from the Swin Transformer. Finally, extensive experimental results reveal that the transformer-based technique is superior, to some degree, to those based on CNNs. © 2023 IEEE.
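
As a rough illustration of the two components named above, the sketch below implements a T2T-style progressive tokenization and a Swin-style cyclic shift in PyTorch. The two-stage design, module sizes, and shift amount are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch, not the paper's code: progressive tokenization keeps local
# anatomy by tokenizing in stages; cyclic shift lets windowed attention mix
# tokens across window borders. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ProgressiveTokenizer(nn.Module):
    """Two soft-split stages instead of vanilla ViT's single patchify step."""
    def __init__(self, in_ch=1, dim=64):
        super().__init__()
        self.split1 = nn.Unfold(kernel_size=4, stride=2, padding=1)
        self.proj1 = nn.Linear(in_ch * 16, dim)
        self.split2 = nn.Unfold(kernel_size=4, stride=2, padding=1)
        self.proj2 = nn.Linear(dim * 16, dim)

    def forward(self, x):                        # x: (B, 1, H, W) CT slice
        B, _, H, W = x.shape
        t = self.proj1(self.split1(x).transpose(1, 2))    # (B, HW/4, dim)
        t = t.transpose(1, 2).reshape(B, -1, H // 2, W // 2)
        t = self.proj2(self.split2(t).transpose(1, 2))    # (B, HW/16, dim)
        return t

def cyclic_shift(tokens, h, w, shift=2):
    """Swin-style roll of the token grid before windowed self-attention."""
    B, N, D = tokens.shape
    grid = tokens.reshape(B, h, w, D)
    grid = torch.roll(grid, shifts=(-shift, -shift), dims=(1, 2))
    return grid.reshape(B, N, D)

tokens = ProgressiveTokenizer()(torch.randn(2, 1, 64, 64))  # (2, 256, 64)
tokens = cyclic_shift(tokens, 16, 16)   # then feed into transformer blocks
```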

Cite:


GB/T 7714 Zheng, Songwei , Zhang, Dong , Yu, Chunyan et al. Vision Transformer with Progressive Tokenization for CT Metal Artifact Reduction [C] . 2023 .
MLA Zheng, Songwei et al. "Vision Transformer with Progressive Tokenization for CT Metal Artifact Reduction" . (2023) .
APA Zheng, Songwei , Zhang, Dong , Yu, Chunyan , Zhu, Danhong , Zhu, Longlong , Liu, Hao et al. Vision Transformer with Progressive Tokenization for CT Metal Artifact Reduction . (2023) .

SRFS-NET: Few Shot Learning Combined with the Salient Region EI
Conference Paper | 2021 , 309-315 | 4th International Conference on Artificial Intelligence and Pattern Recognition, AIPR 2021

Abstract :

Few-shot learning aims to recognize novel categories with only a few labeled samples per class, and can be used to address the problem of insufficient training samples. Recently, many meta-learning-based methods have been proposed for few-shot learning and have achieved excellent results. However, unlike the human visual attention mechanism, these methods are weak at filtering critical regions automatically, mainly because meta-learning usually treats images as black boxes. Therefore, inspired by the human visual attention mechanism, we introduce salient regions into few-shot learning and propose SRFS-Net. In addition, to accommodate the introduced salient regions, we also modify the embedding function to improve the feature extraction capability of the network. Finally, experimental results on the miniImagenet dataset show that our model performs better in the 5-way 1-shot setting than recent few-shot learning models. © 2021 ACM.
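
A minimal sketch of the salient-region idea, under stated assumptions: backbone features are gated by a learned saliency mask before prototype-style few-shot scoring. The saliency head and the scoring rule are stand-ins; the abstract does not specify SRFS-Net's exact matching scheme.

```python
# Hedged sketch: features gated by a salient-region mask, scored against
# class prototypes. Not SRFS-Net itself; sizes are illustrative.
import torch
import torch.nn as nn

class SalientEmbedding(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU())
        self.saliency = nn.Conv2d(dim, 1, 1)     # 1-channel saliency logits

    def forward(self, x):                         # x: (B, 3, H, W)
        f = self.backbone(x)
        mask = torch.sigmoid(self.saliency(f))    # salient-region weights
        return (f * mask).mean(dim=(2, 3))        # (B, dim) embedding

def episode_logits(support, query, n_way, k_shot):
    """Score queries by distance to class prototypes (class-major support)."""
    protos = support.reshape(n_way, k_shot, -1).mean(dim=1)
    return -torch.cdist(query, protos)            # (n_queries, n_way)

net = SalientEmbedding()
sup = net(torch.randn(5 * 1, 3, 32, 32))          # 5-way 1-shot support set
qry = net(torch.randn(10, 3, 32, 32))
print(episode_logits(sup, qry, n_way=5, k_shot=1).shape)  # (10, 5)
```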

Keyword :

Behavioral research; Embeddings

Cite:


GB/T 7714 Li, Ying , Huang, RenJie , Chen, YuJie et al. SRFS-NET: Few Shot Learning Combined with the Salient Region [C] . 2021 : 309-315 .
MLA Li, Ying et al. "SRFS-NET: Few Shot Learning Combined with the Salient Region" . (2021) : 309-315 .
APA Li, Ying , Huang, RenJie , Chen, YuJie , Kang, Da , Yu, ChunYan , Wang, Xiu . SRFS-NET: Few Shot Learning Combined with the Salient Region . (2021) : 309-315 .

Melody Generation with Emotion Constraint EI
Conference Paper | 2021 , 1598-1603 | 5th International Conference on Electronic Information Technology and Computer Engineering, EITCE 2021

Abstract :

At present, most melody generation models introduce chord, rhythm, and other constraints into the generation process to ensure the quality of the generated melody, yet they ignore the importance of emotion. Music is an emotional art, and melody, as the primary part of a piece of music, usually has a clear emotional expression. It is therefore necessary to introduce emotion information and constraints to generate melodies with clear emotional expression, which means the model should be able to learn the relevant characteristics of emotions from the given information and constraints. To this end, we propose ECMG, a melody generation model with emotion constraints. The model takes a Generative Adversarial Network (GAN) as its main body and adds an emotion encoder and an emotion classifier to introduce emotion information and emotion constraints. We conducted quality and emotion evaluations of the melodies generated by ECMG. In the quality evaluation, the quality score of melodies generated by ECMG is within 0.2 of that of real melodies in the training set, and is also close to that of melodies generated by PopMNet. In the emotion evaluation, the accuracy of emotion classification in both the four-category and two-category settings is much higher than random chance. These evaluation results show that ECMG can generate melodies with specific emotions while ensuring high generation quality. © 2021 ACM.
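
To make the conditioning scheme concrete, here is a hedged sketch of a generator that consumes an emotion embedding alongside noise, with an auxiliary emotion classifier whose output can serve as the emotion constraint during GAN training. The GRU backbone, dimensions, and four-emotion vocabulary are illustrative assumptions, not ECMG's architecture.

```python
# Hedged sketch of emotion-conditioned generation with a classifier
# constraint. All sizes and the 4-emotion set are assumptions.
import torch
import torch.nn as nn

N_EMOTIONS, N_NOTES, DIM = 4, 128, 256

class EmotionEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(N_EMOTIONS, DIM)
    def forward(self, e):                # e: (B,) emotion ids
        return self.emb(e)               # (B, DIM)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(DIM * 2, DIM, batch_first=True)
        self.out = nn.Linear(DIM, N_NOTES)
    def forward(self, z, emo, steps=32):
        # repeat noise+emotion at every step; a real model would feed
        # generated notes back in autoregressively
        inp = torch.cat([z, emo], dim=-1).unsqueeze(1).expand(-1, steps, -1)
        h, _ = self.rnn(inp)
        return self.out(h)               # (B, steps, N_NOTES) note logits

class EmotionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_NOTES, DIM, batch_first=True)
        self.head = nn.Linear(DIM, N_EMOTIONS)
    def forward(self, melody_logits):
        _, h = self.rnn(melody_logits.softmax(-1))
        return self.head(h[-1])          # emotion logits: constraint signal

z = torch.randn(4, DIM)
emo = torch.randint(0, N_EMOTIONS, (4,))
melody = Generator()(z, EmotionEncoder()(emo))       # (4, 32, N_NOTES)
emotion_logits = EmotionClassifier()(melody)         # cross-entropy vs. emo
```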

Keyword :

Classification (of information); Generative adversarial networks; Music; Quality control

Cite:


GB/T 7714 Huang, Renjie , Li, Yin , Kang, Da et al. Melody Generation with Emotion Constraint [C] . 2021 : 1598-1603 .
MLA Huang, Renjie et al. "Melody Generation with Emotion Constraint" . (2021) : 1598-1603 .
APA Huang, Renjie , Li, Yin , Kang, Da , Chen, Yujie , Yu, Chunyan , Wang, Xiu . Melody Generation with Emotion Constraint . (2021) : 1598-1603 .

元结构下的文献网络关系预测 CSCD PKU
Journal Article | 2020 , 33 (03) , 277-286 | 模式识别与人工智能

Abstract :

To address the problem of predicting relationships between nodes in a literature network, node similarity is taken as the probability of a relationship between nodes: a network representation learning approach embeds the nodes of the literature network into a low-dimensional space, where node similarity is then computed. A meta-structure-based network representation learning model is also proposed. According to the correlations between nodes under different meta-structures, the corresponding feature representations are fused and the network is mapped into a low-dimensional feature space; distance measurement in this space then realizes relationship prediction in the literature network. Experiments show that the proposed model achieves good relationship prediction results on literature networks.
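
As a concrete reading of the prediction step, the sketch below scores a node pair by the similarity of its low-dimensional embeddings, read as a link probability. The random vectors merely stand in for the learned, meta-structure-fused representations described in the abstract.

```python
# Illustrative sketch: pairwise embedding similarity as link probability.
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 64))          # 100 nodes, 64-d embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

def relation_prob(u, v):
    """Cosine similarity rescaled to [0, 1] as a link probability."""
    return float(emb[u] @ emb[v] + 1) / 2

print(relation_prob(3, 17))
```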

Keyword :

Meta-structure; Relationship prediction; Literature network; Network representation learning

Cite:


GB/T 7714 王秀 , 陈璐 , 余春艳 . 元结构下的文献网络关系预测 [J]. | 模式识别与人工智能 , 2020 , 33 (03) : 277-286 .
MLA 王秀 et al. "元结构下的文献网络关系预测" . | 模式识别与人工智能 33 . 03 (2020) : 277-286 .
APA 王秀 , 陈璐 , 余春艳 . 元结构下的文献网络关系预测 . | 模式识别与人工智能 , 2020 , 33 (03) , 277-286 .

Version :

元结构下的文献网络关系预测 CQVIP CSCD PKU
Journal Article | 2020 , 33 (3) , 277-286 | 模式识别与人工智能
元结构下的文献网络关系预测 CSCD PKU
Journal Article | 2020 , 33 (3) , 277-286 | 模式识别与人工智能

Abstract :

To address the problem of predicting relationships between nodes in a literature network, node similarity is taken as the probability of a relationship between nodes: a network representation learning approach embeds the nodes of the literature network into a low-dimensional space, where node similarity is then computed. A meta-structure-based network representation learning model is also proposed. According to the correlations between nodes under different meta-structures, the corresponding feature representations are fused and the network is mapped into a low-dimensional feature space; distance measurement in this space then realizes relationship prediction in the literature network. Experiments show that the proposed model achieves good relationship prediction results on literature networks.

Keyword :

Meta-structure; Relationship prediction; Literature network; Network representation learning

Cite:


GB/T 7714 王秀 , 陈璐 , 余春艳 . 元结构下的文献网络关系预测 [J]. | 模式识别与人工智能 , 2020 , 33 (3) : 277-286 .
MLA 王秀 et al. "元结构下的文献网络关系预测" . | 模式识别与人工智能 33 . 3 (2020) : 277-286 .
APA 王秀 , 陈璐 , 余春艳 . 元结构下的文献网络关系预测 . | 模式识别与人工智能 , 2020 , 33 (3) , 277-286 .

Version :

Relationship Prediction for Literature Network under Meta-Structure [元结构下的文献网络关系预测] Scopus CSCD PKU
Journal Article | 2020 , 33 (3) , 277-286 | Pattern Recognition and Artificial Intelligence
鉴别性特征学习模型实现跨摄像头下行人即时对齐 CSCD PKU
Journal Article | 2019 , 31 (04) , 602-611 | 计算机辅助设计与图形学学报

Abstract :

To solve the problems of incorrect target matching and missed sub-sequence matching caused by deferred association algorithms, a method that uses a discriminative feature learning model to achieve online cross-camera pedestrian alignment is proposed. First, a Siamese network integrates a pedestrian classification model and a pedestrian identification model, so that discriminative pedestrian appearance features can be learned from a single frame of the target pedestrian and pedestrian similarity values can be computed. Second, an online cross-camera pedestrian alignment model is proposed, which builds and solves a minimum-cost flow graph in real time according to the association fitness of pedestrian appearance, temporal, and spatial cues. Experimental results on the person re-identification datasets Market-1501 and CUHK03 show that fusing the classification and identification models significantly improves the effectiveness of feature extraction with good generalization, comprehensively outperforming the Gate-SCNN and S-LSTM methods; furthermore, in cross-camera settings with non-overlapping regions…
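
A hedged sketch of the similarity side of the method: a shared-weight Siamese embedding for appearance, combined with temporal and spatial terms into an association cost. The weights and network are illustrative assumptions, and the real-time minimum-cost flow graph that consumes these costs is omitted here.

```python
# Hedged sketch: Siamese appearance embedding + combined association cost.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEmbed(nn.Module):
    """Shared-weight branch applied to each pedestrian crop."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, crop):                      # (B, 3, H, W)
        return F.normalize(self.net(crop), dim=-1)

def association_cost(fa, fb, dt, dist, w=(0.6, 0.2, 0.2)):
    """Edge cost for the flow graph: appearance + temporal + spatial terms
    (dt and dist assumed pre-normalized to [0, 1])."""
    appearance = 1 - (fa * fb).sum(-1)            # cosine distance
    return w[0] * appearance + w[1] * dt + w[2] * dist

embed = SiameseEmbed()
fa = embed(torch.randn(1, 3, 128, 64))            # crop from camera A
fb = embed(torch.randn(1, 3, 128, 64))            # crop from camera B
print(association_cost(fa, fb, dt=0.1, dist=0.3))
```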

Keyword :

Convolutional Siamese network; Online pedestrian alignment; Discriminative feature learning model

Cite:


GB/T 7714 余春艳 , 钟诗俊 . 鉴别性特征学习模型实现跨摄像头下行人即时对齐 [J]. | 计算机辅助设计与图形学学报 , 2019 , 31 (04) : 602-611 .
MLA 余春艳 et al. "鉴别性特征学习模型实现跨摄像头下行人即时对齐" . | 计算机辅助设计与图形学学报 31 . 04 (2019) : 602-611 .
APA 余春艳 , 钟诗俊 . 鉴别性特征学习模型实现跨摄像头下行人即时对齐 . | 计算机辅助设计与图形学学报 , 2019 , 31 (04) , 602-611 .

Version :

鉴别性特征学习模型实现跨摄像头下行人即时对齐 CQVIP CSCD PKU
Journal Article | 2019 , 31 (4) , 602-611 | 计算机辅助设计与图形学学报
多模态图像的自适应特征融合方法 incoPat
Patent | 2019/6/21 | CN201910539848.4

Abstract :

The invention provides an adaptive feature fusion method for multimodal images, which mainly addresses the redundancy in fusing the high-level features extracted by deep networks. The specific steps are as follows. First, encoders are constructed to obtain the features of multiple modalities. Second, a canonical-correlation-based feature screening strategy filters the features of each modality to obtain new modality features. Third, a decoder is constructed that takes the new features as input and reconstructs an image for each modality. Then, a classifier is built, and a label-consistency loss is used to update the adaptive feature fusion model. Finally, the new features of the modalities are concatenated to obtain the fused feature. The invention can adaptively learn the high-level features of different modalities with better discriminability.
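
A minimal sketch of the pipeline as described: per-modality encoders, correlation-based screening of high-level features, and concatenation of the screened features. The screening below simply keeps the most cross-correlated channels, a crude stand-in for the canonical-correlation strategy; it is not the patented procedure, and the decoder and label-consistency classifier are omitted.

```python
# Hedged sketch: two encoders, correlation-based channel screening, concat.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_ch, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())    # (B, dim)
    def forward(self, x):
        return self.net(x)

def screen_by_correlation(fa, fb, keep=32):
    """Keep the channels whose cross-modal correlation is highest, a crude
    proxy for canonical-correlation screening."""
    fa_c, fb_c = fa - fa.mean(0), fb - fb.mean(0)
    corr = (fa_c * fb_c).mean(0) / (fa_c.std(0) * fb_c.std(0) + 1e-8)
    idx = corr.abs().topk(keep).indices
    return fa[:, idx], fb[:, idx]

enc_a, enc_b = Encoder(1), Encoder(3)          # e.g. one gray + one RGB modality
xa, xb = torch.randn(8, 1, 64, 64), torch.randn(8, 3, 64, 64)
fa, fb = screen_by_correlation(enc_a(xa), enc_b(xb))
fused = torch.cat([fa, fb], dim=1)             # (8, 64) fused feature
```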

Cite:


GB/T 7714 余春艳 , 杨素琼 . 多模态图像的自适应特征融合方法 : CN201910539848.4[P]. | 2019/6/21 .
MLA 余春艳 et al. "多模态图像的自适应特征融合方法" : CN201910539848.4. | 2019/6/21 .
APA 余春艳 , 杨素琼 . 多模态图像的自适应特征融合方法 : CN201910539848.4. | 2019/6/21 .

一种腹部CT图像的分割方法 incoPat
Patent | 2019/6/21 | CN201910540017.9

Abstract :

The invention relates to a segmentation method for abdominal CT images, comprising: Step S1, constructing an organ image segmentation model and pre-training it with source-domain data; Step S2, feeding source-domain and target-domain data into the segmentation model to obtain predictions; Step S3, computing the segmentation loss from the source-domain predictions and training the segmentation model; Step S4, feeding the segmentation model's predictions into a discriminator to obtain a classification loss, training the discriminator, and back-propagating through a gradient reversal layer into the segmentation model; Step S5, maximizing the segmentation loss and minimizing the classification loss to form an adversarial loss used to train the segmentation model and the discriminator, yielding the final organ image segmentation model. The invention combines domain adaptation with an organ segmentation model to segment different organ regions in medical images, addressing the domain shift caused by scarce, unlabeled medical data from different sources.
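
The gradient reversal layer in Step S4 is the standard trick from adversarial domain adaptation; a minimal sketch, assuming a toy segmenter and discriminator rather than the patented models, is given below.

```python
# Minimal sketch of gradient reversal for domain-adaptive segmentation:
# the discriminator learns source vs. target, while the reversed gradient
# pushes the segmenter toward domain-invariant predictions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None   # flip gradient for the segmenter

segmenter = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 2, 1))               # 2-class masks
discriminator = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                              nn.Linear(16, 1))              # source vs target

x_tgt = torch.randn(4, 1, 64, 64)              # unlabeled target-domain CT
pred = segmenter(x_tgt).softmax(1)
domain_logit = discriminator(GradReverse.apply(pred, 1.0))
# BCE on domain_logit trains the discriminator; the reversed gradient
# simultaneously trains the segmenter to fool it.
```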

Cite:


GB/T 7714 余春艳 , 杨素琼 . 一种腹部CT图像的分割方法 : CN201910540017.9[P]. | 2019/6/21 .
MLA 余春艳 et al. "一种腹部CT图像的分割方法" : CN201910540017.9. | 2019/6/21 .
APA 余春艳 , 杨素琼 . 一种腹部CT图像的分割方法 : CN201910540017.9. | 2019/6/21 .

Clustering stability-based Evolutionary K-Means SCIE
Journal Article | 2019 , 23 (1) , 305-321 | SOFT COMPUTING
WoS CC Cited Count: 25

Abstract :

Evolutionary K-Means (EKM), which combines K-Means with a genetic algorithm, solves K-Means' initialization problem by selecting parameters automatically through the evolution of partitions. Current EKM algorithms usually choose the silhouette index as the cluster validity index, and they are effective in clustering well-separated clusters; however, their performance on noisy data is often disappointing. On the other hand, clustering stability-based approaches are more robust to noise, yet they need to start intelligently to find challenging clusters. It is therefore natural to join EKM with clustering stability-based analysis. In this paper, we present a novel EKM algorithm that uses clustering stability to evaluate partitions. We first introduce two weighted aggregated consensus matrices, the positive aggregated consensus matrix (PA) and the negative aggregated consensus matrix (NA), to store the clustering tendency of each pair of instances: PA stores the tendency of sharing the same label and NA that of having different labels. Based upon these matrices, clusters and partitions can be evaluated from the viewpoint of clustering stability. We then propose CSEKM, a clustering stability-based EKM algorithm that evolves partitions and the aggregated matrices simultaneously. To evaluate the algorithm's performance, we compare it with an EKM algorithm, two consensus clustering algorithms, a clustering stability-based algorithm, and a multi-index-based clustering approach. Experimental results on a series of artificial datasets, two simulated datasets, and eight UCI datasets suggest CSEKM is more robust to noise.
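
A sketch of the consensus bookkeeping, under stated assumptions: repeated K-Means runs accumulate, for each pair of instances, how often they share a label (PA) and how often they differ (NA); a partition is then scored by its agreement with those tendencies. The plain agreement ratio below is a simplification of the paper's weighted evaluation, not CSEKM itself.

```python
# Illustrative PA/NA consensus matrices and a simple stability score.
import numpy as np
from sklearn.cluster import KMeans

X = np.random.default_rng(1).normal(size=(60, 2))
n = len(X)
PA, NA = np.zeros((n, n)), np.zeros((n, n))

for seed in range(10):                          # repeated clusterings
    labels = KMeans(n_clusters=3, n_init=1, random_state=seed).fit_predict(X)
    same = labels[:, None] == labels[None, :]
    PA += same                                  # pair shared a label
    NA += ~same                                 # pair got different labels

def stability(labels):
    """Agreement between a candidate partition and the accumulated tendencies."""
    same = labels[:, None] == labels[None, :]
    return (np.where(same, PA, NA).sum() - np.trace(PA)) / (PA + NA).sum()

print(stability(KMeans(n_clusters=3, n_init=10).fit_predict(X)))
```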

Keyword :

Clustering; Clustering stability; Consensus clustering; Genetic algorithm; K-Means algorithm

Cite:


GB/T 7714 He, Zhenfeng , Yu, Chunyan . Clustering stability-based Evolutionary K-Means [J]. | SOFT COMPUTING , 2019 , 23 (1) : 305-321 .
MLA He, Zhenfeng et al. "Clustering stability-based Evolutionary K-Means" . | SOFT COMPUTING 23 . 1 (2019) : 305-321 .
APA He, Zhenfeng , Yu, Chunyan . Clustering stability-based Evolutionary K-Means . | SOFT COMPUTING , 2019 , 23 (1) , 305-321 .

Version :

Clustering stability-based Evolutionary K-Means EI
Journal Article | 2019 , 23 (1) , 305-321 | Soft Computing
Clustering stability-based Evolutionary K-Means Scopus
Journal Article | 2019 , 23 (1) , 305-321 | Soft Computing
Non-parallel Many-to-many Singing Voice Conversion by Adversarial Learning CPCI-S
Conference Paper | 2019 , 125-132 | Annual Summit and Conference of the Asia-Pacific-Signal-and-Information-Processing-Association (APSIPA ASC)

Abstract :

With the rapid development of deep learning, speech conversion has made great progress, but research on deep-learning models for singing voice conversion remains rare: current approaches are mainly statistical and can only achieve one-to-one conversion with parallel training datasets, so their application is limited. This paper proposes a generative adversarial learning model, MSVC-GAN, for many-to-many singing voice conversion using non-parallel datasets. First, the generator of our model is conditioned by concatenating the singer label, which denotes the domain constraint. Furthermore, the model integrates a self-attention mechanism to capture long-term dependencies in the spectral features. Finally, switchable normalization is employed to stabilize network training. Both objective and subjective evaluation results show that our model achieves the highest similarity and naturalness not only on a parallel speech dataset but also on a non-parallel singing dataset.
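
A hedged sketch of the conditioning and attention described: spectral frames concatenated with a target-singer embedding, passed through self-attention. Frame size, embedding size, and singer count are assumptions; the GAN discriminator and switchable normalization are omitted.

```python
# Hedged sketch: singer-label conditioning + self-attention over frames.
import torch
import torch.nn as nn

N_SINGERS, MEL, DIM = 8, 80, 128

class ConditionedConverter(nn.Module):
    def __init__(self):
        super().__init__()
        self.singer = nn.Embedding(N_SINGERS, DIM)
        self.inp = nn.Linear(MEL + DIM, DIM)
        self.attn = nn.MultiheadAttention(DIM, num_heads=4, batch_first=True)
        self.out = nn.Linear(DIM, MEL)

    def forward(self, spec, target_id):          # spec: (B, T, MEL)
        c = self.singer(target_id)[:, None].expand(-1, spec.size(1), -1)
        h = self.inp(torch.cat([spec, c], dim=-1))  # concat singer label
        h, _ = self.attn(h, h, h)                 # long-range spectral deps
        return self.out(h)                        # converted spectrogram

g = ConditionedConverter()
y = g(torch.randn(2, 100, MEL), torch.tensor([3, 5]))  # to singers 3 and 5
```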

Cite:


GB/T 7714 Hu, Jinsen , Yu, Chunyan , Guan, Faqian . Non-parallel Many-to-many Singing Voice Conversion by Adversarial Learning [C] . 2019 : 125-132 .
MLA Hu, Jinsen et al. "Non-parallel Many-to-many Singing Voice Conversion by Adversarial Learning" . (2019) : 125-132 .
APA Hu, Jinsen , Yu, Chunyan , Guan, Faqian . Non-parallel Many-to-many Singing Voice Conversion by Adversarial Learning . (2019) : 125-132 .

