成果搜索 (Publication Search)

Query:

Scholar name: 陈羽中 (Chen Yuzhong)

面向分布式数据安全共享的高速公路路网拥堵监测 (Expressway Network Congestion Monitoring for Secure Distributed Data Sharing)
Journal article | 2025, 41(1), 11-20 | 福建师范大学学报(自然科学版)

Abstract :

Applying artificial intelligence to monitor road conditions across expressway networks has become a research hotspot; however, data silos and privacy protection remain key challenges for intelligent decision-making on expressway networks. To enable secure distributed data sharing and intelligent decision-making, this paper takes congestion as a case study and proposes a federated-learning-based strategy for monitoring congestion states on expressway networks. Using real-time camera data, a congestion-state monitoring model based on road-interval optimization is built within a homomorphic-encryption federated learning architecture that supports computation over ciphertexts. Results show that congestion states across the expressway network can be monitored effectively while guaranteeing secure sharing of distributed data.
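The federated pipeline the abstract describes can be illustrated with one toy round of encrypted aggregation. The sketch below is a minimal stand-in, not the paper's scheme: it replaces real homomorphic encryption with additive one-time-pad masking (which is likewise additively homomorphic), uses fixed-point encoding for float weights, and assumes a key holder who knows every client's pad seed. All names (`Client`, `aggregate`, `decrypt_mean`) are hypothetical.

```python
import random

Q = 2**61 - 1      # working modulus
SCALE = 10**6      # fixed-point scale for float model weights

def encode(x):
    return round(x * SCALE) % Q

def decode(c):
    c %= Q
    return (c - Q if c > Q // 2 else c) / SCALE

class Client:
    """Holds a pad seed shared only with the key holder (single round)."""
    def __init__(self, seed):
        self.rng = random.Random(seed)

    def encrypt(self, weights):
        # masked ciphertexts can be summed by an untrusted server
        return [(encode(w) + self.rng.randrange(Q)) % Q for w in weights]

def aggregate(ciphertexts):
    """Server side: sum ciphertexts without seeing any plaintext update."""
    return [sum(col) % Q for col in zip(*ciphertexts)]

def decrypt_mean(summed, seeds, n_clients):
    """Key holder removes the combined pads and averages the updates."""
    rngs = [random.Random(s) for s in seeds]
    out = []
    for c in summed:
        pad = sum(r.randrange(Q) for r in rngs) % Q
        out.append(decode(c - pad) / n_clients)
    return out

clients = [Client(s) for s in (11, 22, 33)]
updates = [[0.5, -1.2], [0.1, 0.3], [0.4, 0.9]]
cts = [c.encrypt(u) for c, u in zip(clients, updates)]
mean = decrypt_mean(aggregate(cts), (11, 22, 33), 3)  # ≈ [1/3, 0.0]
```

The server only ever handles masked values, which is the property the abstract relies on; a deployed system would use an actual additively homomorphic cryptosystem rather than pads.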

Keyword :

homomorphic encryption; secure data sharing; intelligent decision-making; federated learning; road congestion state; expressway network

Cite:

GB/T 7714 李林锋, 陈羽中, 姚毅楠, et al. 面向分布式数据安全共享的高速公路路网拥堵监测[J]. 福建师范大学学报(自然科学版), 2025, 41(1): 11-20.
MLA 李林锋, et al. "面向分布式数据安全共享的高速公路路网拥堵监测." 福建师范大学学报(自然科学版) 41.1 (2025): 11-20.
APA 李林锋, 陈羽中, 姚毅楠, 邵伟杰. 面向分布式数据安全共享的高速公路路网拥堵监测. 福建师范大学学报(自然科学版), 2025, 41(1), 11-20.


Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis SCIE
Journal article | 2025, 81(1) | JOURNAL OF SUPERCOMPUTING

Abstract :

Aspect-level multimodal sentiment analysis aims to ascertain the sentiment polarity of a given aspect from a text review and its accompanying image. Despite substantial progress made by existing research, aspect-level multimodal sentiment analysis still faces several challenges: (1) Inconsistency in feature granularity between the text and image modalities poses difficulties in capturing corresponding visual representations of aspect words. This inconsistency may introduce irrelevant or redundant information, thereby causing noise and interference in sentiment analysis. (2) Traditional aspect-level sentiment analysis predominantly relies on the fusion of semantic and syntactic information to determine the sentiment polarity of a given aspect. However, introducing image modality necessitates addressing the semantic gap in jointly understanding sentiment features in different modalities. To address these challenges, a multi-granularity visual-textual feature fusion model (MG-VTFM) is proposed to enable deep sentiment interactions among semantic, syntactic, and image information. First, the model introduces a multi-granularity hierarchical graph attention network that controls the granularity of semantic units interacting with images through constituent tree. This network extracts image sentiment information relevant to the specific granularity, reduces noise from images and ensures sentiment relevance in single-granularity cross-modal interactions. Building upon this, a multilayered graph attention module is employed to accomplish multi-granularity sentiment fusion, ranging from fine to coarse. Furthermore, a progressive multimodal attention fusion mechanism is introduced to maximize the extraction of abstract sentiment information from images. Lastly, a mapping mechanism is proposed to align cross-modal information based on aspect words, unifying semantic spaces across different modalities. Our model demonstrates excellent overall performance on two datasets.
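As a rough illustration of the single-granularity cross-modal interaction described above, the sketch below pools token vectors into coarser semantic units (the granularity control, e.g. constituent-tree spans) and lets each unit attend over image region features. This is generic scaled-dot-product attention, not the MG-VTFM architecture; all names and shapes are illustrative.

```python
import numpy as np

def pool_units(tokens, spans):
    """Pool token vectors into coarser semantic units: one mean-pooled
    row per (start, end) span, e.g. spans taken from a constituent tree."""
    return np.stack([tokens[s:e].mean(axis=0) for s, e in spans])

def cross_modal_attention(units, regions):
    """Each text unit attends over image region features; returns the
    attended visual vector per unit and the attention weights."""
    scores = units @ regions.T / np.sqrt(units.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)      # softmax over image regions
    return w @ regions, w

rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 16))          # 6 token embeddings
regions = rng.normal(size=(4, 16))         # 4 image-region features
units = pool_units(tokens, [(0, 2), (2, 6)])   # phrase-level granularity
attended, w = cross_modal_attention(units, regions)
```

Changing the span list changes the granularity at which text interacts with the image, which is the knob the hierarchical network in the abstract controls.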

Keyword :

Aspect-level sentiment analysis; Constituent tree; Multi-granularity; Multimodal data; Visual-textual feature fusion

Cite:

GB/T 7714 Chen, Yuzhong, Shi, Liyuan, Lin, Jiali, et al. Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis[J]. JOURNAL OF SUPERCOMPUTING, 2025, 81(1).
MLA Chen, Yuzhong, et al. "Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis." JOURNAL OF SUPERCOMPUTING 81.1 (2025).
APA Chen, Yuzhong, Shi, Liyuan, Lin, Jiali, Chen, Jingtian, Zhong, Jiayuan, Dong, Chen. Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis. JOURNAL OF SUPERCOMPUTING, 2025, 81(1).

Version :

Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis Scopus
Journal article | 2025, 81(1) | Journal of Supercomputing
Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis EI
Journal article | 2025, 81(1) | Journal of Supercomputing
Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement Scopus
Journal article | 2024, 26, 1-13 | IEEE Transactions on Multimedia

Abstract :

Low-light image enhancement is a challenging task due to the limited visibility in dark environments. While recent advances have shown progress in integrating CNNs and Transformers, inadequate local-global perceptual interactions still impede their application in complex degradation scenarios. To tackle this issue, we propose BiFormer, a lightweight framework that facilitates local-global collaborative perception via bilateral interaction. Specifically, our framework introduces a core CNN-Transformer collaborative perception block (CPB) that combines local-aware convolutional attention (LCA) and global-aware recursive transformer (GRT) to simultaneously preserve local details and ensure global consistency. To promote perceptual interaction, we adopt a bilateral interaction strategy for both local and global perception, which involves local-to-global second-order interaction (SoI) in the dual domain, as well as a mixed-channel fusion (MCF) module for global-to-local interaction. The MCF is also a highly efficient feature fusion module tailored for degraded features. Extensive experiments conducted on low-level and high-level tasks demonstrate that BiFormer achieves state-of-the-art performance. Furthermore, it exhibits a significant reduction in model parameters and computational cost compared to existing Transformer-based low-light image enhancement methods.
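The abstract names a second-order (multiplicative) interaction between the local and global branches and a channel-mixing fusion, without spelling out the operators. A plausible minimal reading is sketched below with NumPy stand-ins; `second_order_interaction`, `mixed_channel_fusion`, and the weight `w` are illustrative, not the paper's actual modules.

```python
import numpy as np

def second_order_interaction(local_feat, global_feat):
    """Local features gate the global branch element-wise (a second-order,
    i.e. multiplicative, interaction), with a residual connection."""
    gate = 1.0 / (1.0 + np.exp(-local_feat))   # sigmoid of local branch
    return global_feat + gate * global_feat

def mixed_channel_fusion(a, b, w):
    """Concatenate two branches along channels and mix with a (2C, C)
    weight matrix, the dense analogue of a 1x1 convolution."""
    return np.concatenate([a, b], axis=-1) @ w

local = np.zeros((2, 3))                   # toy local-branch features
glob = np.ones((2, 3))                     # toy global-branch features
gated = second_order_interaction(local, glob)   # sigmoid(0)=0.5 → 1.5x
fused = mixed_channel_fusion(local, glob, np.ones((6, 3)))
```

The multiplicative gate is what makes the interaction "second order": one branch scales the other instead of merely being added to it.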

Keyword :

bilateral interaction; Collaboration; Convolutional neural networks; hybrid CNN-Transformer; Image enhancement; Lighting; Low-light image enhancement; mixed-channel fusion; Task analysis; Transformers; Visualization

Cite:

GB/T 7714 Xu, R., Li, Y., Niu, Y., et al. Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement[J]. IEEE Transactions on Multimedia, 2024, 26: 1-13.
MLA Xu, R., et al. "Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement." IEEE Transactions on Multimedia 26 (2024): 1-13.
APA Xu, R., Li, Y., Niu, Y., Xu, H., Chen, Y., Zhao, T. Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement. IEEE Transactions on Multimedia, 2024, 26, 1-13.


领域数据增强与多粒度语义理解的多轮对话模型 (A Multi-Turn Dialogue Model with Domain Data Augmentation and Multi-Granularity Semantic Understanding)
Journal article | 2024, 45(7), 1585-1591 | 小型微型计算机系统

Abstract :

Retrieval-based multi-turn dialogue is an important branch of multi-turn dialogue; selecting the response that best fits the current context from a large pool of candidates is its key problem. In recent years, deep neural network models have made considerable progress on multi-turn response selection. However, existing models still understand context semantics inaccurately and fail to learn the temporal semantic relations contained within a context and within individual utterances. To address these problems, this paper proposes MSE-BERT, a learning method based on a pretrained language model optimized with multiple auxiliary tasks. First, the pretrained model is optimized with a span-mask generation task so that it adapts better to the dataset of the target domain. A token-shuffle insertion auxiliary task is then proposed: an utterance is randomly selected from the context, its tokens are randomly shuffled, and the model predicts the utterance's original position in the context, thereby learning at multiple granularities the temporal semantic relations contained in the context. Finally, exploiting BERT's position embeddings and deep attention mechanism, a bidirectional feature fusion mechanism is proposed that fuses all local information to further strengthen the model's response selection ability. Experimental results on the Ubuntu and E-commerce datasets show that MSE-BERT outperforms the comparison models overall.
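The token-shuffle auxiliary task can be made concrete with a small data-construction sketch: pick one utterance, shuffle its tokens, and have the model predict which position in the context it came from. The helper below is a hypothetical illustration of that corruption step only, not the MSE-BERT training code.

```python
import random

def make_shuffle_example(context, rng=None):
    """context: list of utterance strings. Returns (corrupted_context,
    label), where label is the index of the shuffled utterance that the
    model is trained to predict."""
    rng = rng or random.Random()
    idx = rng.randrange(len(context))      # pick one utterance
    tokens = context[idx].split()
    rng.shuffle(tokens)                    # destroy its internal order
    corrupted = list(context)
    corrupted[idx] = " ".join(tokens)
    return corrupted, idx

ctx = ["how do I mount a drive",
       "which release are you on",
       "run sudo fdisk -l first"]
corrupted, label = make_shuffle_example(ctx, random.Random(0))
```

Because the shuffled utterance keeps its bag of tokens but loses word order, recovering its position forces the model to rely on the temporal semantics of the surrounding context.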

Keyword :

bidirectional feature fusion; response selection; multi-turn dialogue; semantic relations; auxiliary tasks

Cite:

GB/T 7714 刘律民, 陈羽中, 陈敬添. 领域数据增强与多粒度语义理解的多轮对话模型[J]. 小型微型计算机系统, 2024, 45(7): 1585-1591.
MLA 刘律民, et al. "领域数据增强与多粒度语义理解的多轮对话模型." 小型微型计算机系统 45.7 (2024): 1585-1591.
APA 刘律民, 陈羽中, 陈敬添. 领域数据增强与多粒度语义理解的多轮对话模型. 小型微型计算机系统, 2024, 45(7), 1585-1591.

Version :

领域数据增强与多粒度语义理解的多轮对话模型
Journal article | 2024, 45(07), 1585-1591 | 小型微型计算机系统
基于边缘辅助和多尺度Transformer的无参考屏幕内容图像质量评估 (No-Reference Screen Content Image Quality Assessment Based on Edge Assistance and a Multi-Scale Transformer) CSCD PKU
Journal article | 2024 | 电子学报

Abstract :

Unlike natural images captured from real scenes, screen content images are synthetic images, typically composed of computer-generated text, graphics, animations, and other multimedia elements. Existing assessment methods usually fail to adequately account for the influence of edge structure information and global context information on perceived screen content image quality. To address this, this paper proposes a no-reference screen content image quality assessment model based on edge assistance and a multi-scale Transformer. First, a Laplacian-of-Gaussian operator is used to construct an edge structure map composed of the high-frequency information of the distorted screen content image. A convolutional neural network then performs multi-scale feature extraction and fusion on the distorted image and its edge structure map, so that the image's edge structure information provides an additional information gain for model training. In addition, a Transformer-based multi-scale feature encoding module is built on top of the local features obtained by the CNN to better model the global context of image and edge features at different scales. Experimental results show that the proposed method outperforms existing no-reference and full-reference screen content image quality assessment methods and achieves higher consistency between objective scores and subjective visual perception.
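The edge-structure-map construction can be sketched directly: convolve the distorted image with a discrete Laplacian-of-Gaussian kernel and keep the response magnitude. The kernel below is the standard discrete LoG (shifted to zero sum), and the slow pixel loop stands in for an optimized convolution; function names are illustrative.

```python
import numpy as np

def log_kernel(size=7, sigma=1.0):
    """Discrete Laplacian-of-Gaussian kernel, shifted to zero sum so
    flat regions respond with exactly 0."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()

def edge_structure_map(img, size=7, sigma=1.0):
    """|LoG * img|: large responses mark high-frequency content such as
    text strokes and graphic borders in a screen content image."""
    k = log_kernel(size, sigma)            # symmetric, so no flip needed
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + size, j:j + size] * k)
    return np.abs(out)

flat = np.full((12, 12), 0.7)              # flat region → zero response
step = np.zeros((12, 12)); step[:, 6:] = 1.0   # vertical edge
edges = edge_structure_map(step)
```

Feeding this map alongside the distorted image is what gives the CNN branch its extra edge-structure signal.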

Keyword :

Transformer; convolutional neural network; multi-scale features; no-reference screen content image quality assessment; Laplacian-of-Gaussian operator

Cite:

GB/T 7714 陈羽中, 陈友昆, 林闽沪, et al. 基于边缘辅助和多尺度Transformer的无参考屏幕内容图像质量评估[J]. 电子学报, 2024.
MLA 陈羽中, et al. "基于边缘辅助和多尺度Transformer的无参考屏幕内容图像质量评估." 电子学报 (2024).
APA 陈羽中, 陈友昆, 林闽沪, 牛玉贞. 基于边缘辅助和多尺度Transformer的无参考屏幕内容图像质量评估. 电子学报, 2024.

Version :

基于边缘辅助和多尺度Transformer的无参考屏幕内容图像质量评估
Journal article | 2024, 52(7), 2242-2256 | 电子学报
Multi-view multi-behavior interest learning network and contrastive learning for multi-behavior recommendation SCIE
Journal article | 2024, 305 | KNOWLEDGE-BASED SYSTEMS

Abstract :

The recommendation system aims to recommend items to users by capturing their personalized interests. Traditional recommendation systems typically focus on modeling target behaviors between users and items. However, in practical application scenarios, various types of behaviors (e.g., click, favorite, purchase, etc.) occur between users and items. Despite recent efforts in modeling various behavior types, multi-behavior recommendation still faces two significant challenges. The first challenge is how to comprehensively capture the complex relationships between various types of behaviors, including their interest differences and interest commonalities. The second challenge is how to solve the sparsity of target behaviors while ensuring the authenticity of information from various types of behaviors. To address these issues, a multi-behavior recommendation framework based on Multi-View Multi-Behavior Interest Learning Network and Contrastive Learning (MMNCL) is proposed. This framework includes a multi-view multi-behavior interest learning module that consists of two submodules: the behavior difference aware submodule, which captures intra-behavior interests for each behavior type and the correlations between various types of behaviors, and the behavior commonality aware submodule, which captures the information of interest commonalities between various types of behaviors. Additionally, a multi-view contrastive learning module is proposed to conduct node self- discrimination, ensuring the authenticity of information integration among various types of behaviors, and facilitating an effective fusion of interest differences and interest commonalities. Experimental results on three real-world benchmark datasets demonstrate the effectiveness of MMNCL and its advantages over other state-of-the-art recommendation models. Our code is available at https://github.com/sujieyang/MMNCL.
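The node self-discrimination objective in the contrastive module is, in generic form, an InfoNCE loss between two views of the same node set: each node's embedding in one view should match the same node in the other view against all other nodes. The sketch below is that generic loss (names and the temperature value are illustrative), not MMNCL's exact implementation.

```python
import numpy as np

def info_nce(view_a, view_b, tau=0.2):
    """Node self-discrimination: node i in view A must match node i in
    view B (positive) against all other nodes in view B (negatives).
    Returns the mean InfoNCE loss over nodes."""
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / tau                     # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_prob)))  # -log softmax of positives

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 16))                   # embeddings from view A
aligned = info_nce(x, x + 0.01 * rng.normal(size=(8, 16)))
mismatched = info_nce(x, np.roll(x, 1, axis=0))
```

Two slightly perturbed copies of the same embeddings score a much lower loss than a mismatched pairing, which is exactly the pressure that keeps information integration across behavior types "authentic" in the abstract's terms.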

Keyword :

Contrastive learning; Interest learning network; Meta learning; Multi-behavior recommendation

Cite:

GB/T 7714 Su, Jieyang, Chen, Yuzhong, Lin, Xiuqiang, et al. Multi-view multi-behavior interest learning network and contrastive learning for multi-behavior recommendation[J]. KNOWLEDGE-BASED SYSTEMS, 2024, 305.
MLA Su, Jieyang, et al. "Multi-view multi-behavior interest learning network and contrastive learning for multi-behavior recommendation." KNOWLEDGE-BASED SYSTEMS 305 (2024).
APA Su, Jieyang, Chen, Yuzhong, Lin, Xiuqiang, Zhong, Jiayuan, Dong, Chen. Multi-view multi-behavior interest learning network and contrastive learning for multi-behavior recommendation. KNOWLEDGE-BASED SYSTEMS, 2024, 305.

Version :

Multi-view multi-behavior interest learning network and contrastive learning for multi-behavior recommendation Scopus
Journal article | 2024, 305 | Knowledge-Based Systems
Multi-view multi-behavior interest learning network and contrastive learning for multi-behavior recommendation EI
Journal article | 2024, 305 | Knowledge-Based Systems
A numerical magnitude aware multi-channel hierarchical encoding network for math word problem solving Scopus
Journal article | 2024, 37(3), 1651-1672 | Neural Computing and Applications

Abstract :

Math word problem (MWP) solving represents a critical research area within reading comprehension, where accurate comprehension of math problem text is crucial for generating math expressions. However, current approaches still grapple with unresolved challenges in grasping the sensitivity of math problem text, delineating distinct roles across various clause types, and enhancing numerical representation. To address these challenges, this paper proposes a Numerical Magnitude Aware Multi-Channel Hierarchical Encoding Network (NMA-MHEA) for math expression generation. Firstly, NMA-MHEA implements a multi-channel hierarchical context encoding module to learn context representations at three different channels: intra-clause channel, inter-clause channel, and context-question interaction channel. NMA-MHEA constructs hierarchical constituent-dependency graphs for different levels of sentences and employs a Hierarchical Graph Attention Neural Network (HGAT) to learn syntactic and semantic information within these graphs at the intra-clause and inter-clause channels. NMA-MHEA then refines context clauses using question information at the context-question interaction channel. Secondly, NMA-MHEA designs a number encoding module to enhance the relative magnitude information among numerical values and the type information of numerical values. Experimental results on two public benchmark datasets demonstrate that NMA-MHEA outperforms other state-of-the-art models.
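A minimal form of the relative-magnitude information that the number encoding module exploits is a pairwise comparison matrix over the numbers appearing in a problem; the helper below (a hypothetical name, not the paper's module) builds it.

```python
def magnitude_matrix(numbers):
    """Pairwise relative-magnitude relations: entry (i, j) is 1 if
    numbers[i] > numbers[j], -1 if smaller, 0 if equal. Each row can be
    embedded and added to the corresponding number token's representation
    as an explicit comparison feature."""
    return [[(a > b) - (a < b) for b in numbers] for a in numbers]

# numbers extracted from a problem, e.g. "3 apples cost 0.5 yuan each ... 3 ..."
rels = magnitude_matrix([3, 0.5, 3])
# → [[0, 1, 0], [-1, 0, -1], [0, 1, 0]]
```

Making these comparisons explicit spares the encoder from having to infer magnitude order from surface token forms like "0.5" versus "3".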

Keyword :

Graph2Tree model; Hierarchical constituent-dependency graph; Math word problem; Number encoding

Cite:

GB/T 7714 Xu, J., Chen, Y., Xiao, L., et al. A numerical magnitude aware multi-channel hierarchical encoding network for math word problem solving[J]. Neural Computing and Applications, 2024, 37(3): 1651-1672.
MLA Xu, J., et al. "A numerical magnitude aware multi-channel hierarchical encoding network for math word problem solving." Neural Computing and Applications 37.3 (2024): 1651-1672.
APA Xu, J., Chen, Y., Xiao, L., Liao, H., Zhong, J., Dong, C. A numerical magnitude aware multi-channel hierarchical encoding network for math word problem solving. Neural Computing and Applications, 2024, 37(3), 1651-1672.

Version :

A numerical magnitude aware multi-channel hierarchical encoding network for math word problem solving EI
Journal article | 2025, 37(3), 1651-1672 | Neural Computing and Applications
Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement SCIE
Journal article | 2024, 26, 10792-10804 | IEEE TRANSACTIONS ON MULTIMEDIA

Abstract :

Low-light image enhancement is a challenging task due to the limited visibility in dark environments. While recent advances have shown progress in integrating CNNs and Transformers, inadequate local-global perceptual interactions still impede their application in complex degradation scenarios. To tackle this issue, we propose BiFormer, a lightweight framework that facilitates local-global collaborative perception via bilateral interaction. Specifically, our framework introduces a core CNN-Transformer collaborative perception block (CPB) that combines local-aware convolutional attention (LCA) and global-aware recursive Transformer (GRT) to simultaneously preserve local details and ensure global consistency. To promote perceptual interaction, we adopt a bilateral interaction strategy for both local and global perception, which involves local-to-global second-order interaction (SoI) in the dual domain, as well as a mixed-channel fusion (MCF) module for global-to-local interaction. The MCF is also a highly efficient feature fusion module tailored for degraded features. Extensive experiments conducted on low-level and high-level tasks demonstrate that BiFormer achieves state-of-the-art performance. Furthermore, it exhibits a significant reduction in model parameters and computational cost compared to existing Transformer-based low-light image enhancement methods.

Keyword :

bilateral interaction; hybrid CNN-Transformer; Low-light image enhancement; mixed-channel fusion

Cite:

GB/T 7714 Xu, Rui, Li, Yuezhou, Niu, Yuzhen, et al. Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 10792-10804.
MLA Xu, Rui, et al. "Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement." IEEE TRANSACTIONS ON MULTIMEDIA 26 (2024): 10792-10804.
APA Xu, Rui, Li, Yuezhou, Niu, Yuzhen, Xu, Huangbiao, Chen, Yuzhong, Zhao, Tiesong. Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26, 10792-10804.

Version :

Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement Scopus
Journal article | 2024, 26, 1-13 | IEEE Transactions on Multimedia
Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement EI
Journal article | 2024, 26, 10792-10804 | IEEE Transactions on Multimedia
Parallax-aware dual-view feature enhancement and adaptive detail compensation for dual-pixel defocus deblurring SCIE
Journal article | 2024, 139 | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE

Abstract :

Defocus deblurring using dual-pixel sensors has gathered significant attention in recent years. However, current methodologies have not adequately addressed the challenge of defocus disparity between dual views, resulting in suboptimal performance in recovering details from severely defocused pixels. To counteract this limitation, we introduce in this paper a parallax-aware dual-view feature enhancement and adaptive detail compensation network (PA-Net), specifically tailored for the dual-pixel defocus deblurring task. Our proposed PA-Net leverages an encoder-decoder architecture augmented with skip connections, designed to initially extract distinct features from the left and right views. A pivotal aspect of our model lies at the network's bottleneck, where we introduce a parallax-aware dual-view feature enhancement based on Transformer blocks, which aims to align and enhance extracted dual-pixel features, aggregating them into a unified feature. Furthermore, taking into account the disparity and the rich details embedded in encoder features, we design an adaptive detail compensation module to adaptively incorporate dual-view encoder features into image reconstruction, aiding in restoring image details. Experimental results demonstrate that our proposed PA-Net exhibits superior performance and visual effects on the real-world dataset.

Keyword :

Defocus deblurring; Defocus disparity; Detail restoration; Dual-pixel; Image restoration

Cite:

GB/T 7714 Niu, Yuzhen, He, Yuqi, Xu, Rui, et al. Parallax-aware dual-view feature enhancement and adaptive detail compensation for dual-pixel defocus deblurring[J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 139.
MLA Niu, Yuzhen, et al. "Parallax-aware dual-view feature enhancement and adaptive detail compensation for dual-pixel defocus deblurring." ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 139 (2024).
APA Niu, Yuzhen, He, Yuqi, Xu, Rui, Li, Yuezhou, Chen, Yuzhong. Parallax-aware dual-view feature enhancement and adaptive detail compensation for dual-pixel defocus deblurring. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 139.

Version :

Parallax-aware dual-view feature enhancement and adaptive detail compensation for dual-pixel defocus deblurring EI
Journal article | 2025, 139 | Engineering Applications of Artificial Intelligence
Parallax-aware dual-view feature enhancement and adaptive detail compensation for dual-pixel defocus deblurring Scopus
Journal article | 2025, 139 | Engineering Applications of Artificial Intelligence
一种用于方面级情感分析的知识增强双图卷积网络 (A Knowledge-Enhanced Bi-Graph Convolutional Network for Aspect-Level Sentiment Analysis) CSCD PKU
Journal article | 2024, 45(1), 37-44 | 小型微型计算机系统

Abstract :

In recent years, deep neural networks, and graph neural networks in particular, have made considerable progress on aspect-level sentiment analysis, but they still underuse external knowledge, the edge-relation information of syntactic dependency trees, and the structural information of knowledge graphs. To address these problems, this paper proposes BGCN-KE, a knowledge-enhanced bi-graph convolutional network. First, a subgraph construction algorithm that fuses syntactic dependency relations with external knowledge is proposed, yielding knowledge subgraphs whose nodes are more tightly related semantically. Second, a bi-graph convolutional network is proposed: one graph convolutional network guides the nodes of the review text to learn external knowledge from adjacent nodes in the syntactic-dependency knowledge subgraph, while the other fuses aspect-specific semantic information in the syntactic dependency graph of the review text, strengthening both the aspect-specific knowledge representation and the semantic representation of the review. Third, BGCN-KE introduces an edge-relation attention mechanism to better capture the semantic relations between a given aspect and its context words. Finally, a multi-level feature fusion mechanism is proposed to fully fuse aspect-related external knowledge, semantic information, and edge-relation features. Experiments on several public datasets show that BGCN-KE outperforms the latest comparison models.
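The basic propagation step that both graph convolutional networks build on can be sketched as a symmetrically normalized GCN layer. This is the generic form only; the knowledge-subgraph construction and edge-relation attention that BGCN-KE adds on top are not shown, and the function name is illustrative.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN step: add self-loops, normalize the adjacency as
    D^{-1/2} (A + I) D^{-1/2}, aggregate neighbor features, project
    with a weight matrix, and apply ReLU."""
    a = adj + np.eye(adj.shape[0])          # self-loops keep own features
    d = a.sum(axis=1)                       # node degrees
    a_norm = a / np.sqrt(np.outer(d, d))    # symmetric normalization
    return np.maximum(a_norm @ feats @ weight, 0.0)

# two nodes connected by one edge, one-hot features, identity projection
adj = np.array([[0.0, 1.0], [1.0, 0.0]])
out = gcn_layer(adj, np.eye(2), np.eye(2))  # each node averages itself + neighbor
```

Stacking such layers over the knowledge subgraph versus the plain dependency graph, with different adjacencies, is the "bi-graph" structure the abstract describes.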

Keyword :

graph convolutional network; multi-level feature fusion; aspect-level sentiment analysis; knowledge graph; edge-relation attention

Cite:

GB/T 7714 万宇杰, 陈羽中. 一种用于方面级情感分析的知识增强双图卷积网络[J]. 小型微型计算机系统, 2024, 45(1): 37-44.
MLA 万宇杰, et al. "一种用于方面级情感分析的知识增强双图卷积网络." 小型微型计算机系统 45.1 (2024): 37-44.
APA 万宇杰, 陈羽中. 一种用于方面级情感分析的知识增强双图卷积网络. 小型微型计算机系统, 2024, 45(1), 37-44.

Version :

一种用于方面级情感分析的知识增强双图卷积网络 CSCD PKU
Journal article | 2024, 45(01), 37-44 | 小型微型计算机系统
Address:FZU Library(No.2 Xuyuan Road, Fuzhou, Fujian, PRC Post Code:350116) Contact Us:0591-22865326
Copyright:FZU Library Technical Support:Beijing Aegean Software Co., Ltd. 闽ICP备05005463号-1