Research Output Search

Query: Scholar Name: 陈羽中 (Chen Yuzhong)


Skeleton-Boundary-Guided Network for Camouflaged Object Detection SCIE
Journal Article | 2025, 21 (3) | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS

Abstract :

Camouflaged object detection (COD) aims to resolve the tough issue of accurately segmenting objects hidden in the surroundings. However, the existing methods suffer from two major problems: the incomplete interior and the inaccurate boundary of the object. To address these difficulties, we propose a three-stage skeleton-boundary-guided network (SBGNet) for the COD task. Specifically, we design a novel skeleton-boundary label that is complementary to the typical pixel-wise mask annotation, emphasizing the interior skeleton and the boundary of the camouflaged object. Furthermore, the proposed feature guidance module (FGM) leverages the skeleton-boundary feature to guide the model to focus on both the interior and the boundary of the camouflaged object. In addition, we design a bidirectional feature flow path with the information interaction module (IIM) to propagate and integrate the semantic and texture information. Finally, we propose the dual feature distillation module (DFDM) to progressively refine the segmentation results in a fine-grained manner. Comprehensive experiments demonstrate that our SBGNet outperforms 20 state-of-the-art methods on three benchmarks in both qualitative and quantitative comparisons. CCS Concepts: • Computing methodologies → Scene understanding.
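For readers who want a concrete picture of what a skeleton-boundary style auxiliary label could look like, the sketch below derives one from an ordinary binary mask with off-the-shelf morphology. The paper's exact construction is not given here, so the boundary width and the use of `skeletonize`/`binary_erosion` are assumptions, not SBGNet's implementation.

```python
# Hypothetical sketch: derive a skeleton-boundary auxiliary label from a
# binary ground-truth mask (the paper's exact label construction may differ).
import numpy as np
from scipy.ndimage import binary_erosion
from skimage.morphology import skeletonize

def skeleton_boundary_label(mask: np.ndarray, boundary_width: int = 2) -> np.ndarray:
    """mask: H x W boolean array, True where the camouflaged object is."""
    mask = mask.astype(bool)
    # Boundary band: pixels stripped away by a few erosion steps.
    eroded = binary_erosion(mask, iterations=boundary_width)
    boundary = mask & ~eroded
    # Interior skeleton: one-pixel-wide medial axis of the object region.
    skeleton = skeletonize(mask)
    # Combined label emphasising interior skeleton and object boundary.
    return (skeleton | boundary).astype(np.float32)
```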

Keyword :

Bidirectional feature flow path; Camouflaged object detection; Feature distillation; Skeleton-boundary guidance

Cite:


GB/T 7714 Niu, Yuzhen , Xu, Yeyuan , Li, Yuezhou et al. Skeleton-Boundary-Guided Network for Camouflaged Object Detection [J]. | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS , 2025 , 21 (3) .
MLA Niu, Yuzhen et al. "Skeleton-Boundary-Guided Network for Camouflaged Object Detection" . | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS 21 . 3 (2025) .
APA Niu, Yuzhen , Xu, Yeyuan , Li, Yuezhou , Zhang, Jiabang , Chen, Yuzhong . Skeleton-Boundary-Guided Network for Camouflaged Object Detection . | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS , 2025 , 21 (3) .

Version :

Skeleton-Boundary-Guided Network for Camouflaged Object Detection EI
Journal Article | 2025, 21 (3) | ACM Transactions on Multimedia Computing, Communications and Applications
Hierarchical fine-grained state-aware graph attention network for dialogue state tracking SCIE
Journal Article | 2025, 81 (5) | JOURNAL OF SUPERCOMPUTING

Abstract :

The objective of dialogue state tracking (DST) is to dynamically track information within dialogue states by populating predefined state slots, which enhances the comprehension capabilities of task-oriented dialogue systems in processing user requests. Recently, graph neural networks have become increasingly popular for modeling the relationships among slots as well as between the dialogue and slots. However, these models overlook the relationships between words and phrases in the current turn and the dialogue history. Specific syntactic dependencies (e.g., the object of a preposition) and constituents (e.g., noun phrases) have a higher probability of being the slot values that need to be retrieved at the current moment. Neglecting this syntactic dependency and constituent information may cause the loss of potential candidate slot values, thereby limiting the overall performance of DST models. To address this issue, we propose a Hierarchical Fine-grained State-Aware Graph Attention Network for Dialogue State Tracking (HFSG-DST). HFSG-DST exploits syntactic dependency and constituent tree information, such as phrase segmentation and hierarchical structure in dialogue utterances, to construct a relational graph between entities. It then employs a hierarchical graph attention network to facilitate the extraction of fine-grained candidate dialogue state information. Additionally, HFSG-DST designs a Schema-enhanced Dialogue History Selector to select the turn of dialogue history most relevant to the current turn and incorporates schema description information for dialogue state tracking. Consequently, HFSG-DST is capable of constructing the dependency and constituent trees on noise-free utterances. Experimental results on two public benchmark datasets demonstrate that HFSG-DST outperforms other state-of-the-art models.
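As an illustration of the syntactic cues the abstract singles out, prepositional objects and noun phrases as likely slot values, the following sketch uses spaCy to extract those cues plus a dependency edge list that a graph attention network could consume. It is not the authors' pipeline; the spaCy model name and the output format are assumptions.

```python
# Illustrative sketch (not the authors' code): pull out noun phrases, objects of
# prepositions, and dependency edges from an utterance as candidate DST inputs.
# Assumes spaCy and the en_core_web_sm model are installed.
import spacy

nlp = spacy.load("en_core_web_sm")

def candidate_slot_values(utterance: str):
    doc = nlp(utterance)
    # Constituent-style cues: noun phrases.
    noun_phrases = [chunk.text for chunk in doc.noun_chunks]
    # Dependency cues: objects of prepositions, e.g. "in the north".
    pobj_tokens = [tok.text for tok in doc if tok.dep_ == "pobj"]
    # Dependency edges (child, relation, head) for building a relational graph.
    edges = [(tok.text, tok.dep_, tok.head.text) for tok in doc]
    return noun_phrases, pobj_tokens, edges

print(candidate_slot_values("I need a cheap hotel in the north of town"))
```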

Keyword :

Dialogue state tracking; Hierarchical graph attention network; Schema enhancement; Syntactic information

Cite:


GB/T 7714 Liao, Hongmiao , Chen, Yuzhong , Chen, Deming et al. Hierarchical fine-grained state-aware graph attention network for dialogue state tracking [J]. | JOURNAL OF SUPERCOMPUTING , 2025 , 81 (5) .
MLA Liao, Hongmiao et al. "Hierarchical fine-grained state-aware graph attention network for dialogue state tracking" . | JOURNAL OF SUPERCOMPUTING 81 . 5 (2025) .
APA Liao, Hongmiao , Chen, Yuzhong , Chen, Deming , Xu, Junjie , Zhong, Jiayuan , Dong, Chen . Hierarchical fine-grained state-aware graph attention network for dialogue state tracking . | JOURNAL OF SUPERCOMPUTING , 2025 , 81 (5) .

Version :

Hierarchical fine-grained state-aware graph attention network for dialogue state tracking EI
Journal Article | 2025, 81 (5) | Journal of Supercomputing
Hierarchical fine-grained state-aware graph attention network for dialogue state tracking Scopus
Journal Article | 2025, 81 (5) | Journal of Supercomputing
Collaboratively enhanced and integrated detail-context information for low-light image enhancement SCIE
Journal Article | 2025, 162 | PATTERN RECOGNITION

Abstract :

Low-light image enhancement (LLIE) is a challenging task, due to the multiple degradation problems involved, such as low brightness, color distortion, heavy noise, and detail degradation. Existing deep learning-based LLIE methods mainly use encoder-decoder networks or full-resolution networks, which excel at extracting context or detail information, respectively. Since detail and context information are both required for LLIE, existing methods cannot solve all the degradation problems. To solve the above problem, we propose an LLIE method based on collaboratively enhanced and integrated detail-context information (CoEIDC). Specifically, we propose a full-resolution network with two collaborative subnetworks, namely the detail extraction and enhancement subnetwork (DE2-Net) and context extraction and enhancement subnetwork (CE2-Net). CE2-Net extracts context information from the features of DE2-Net at different stages through large receptive field convolutions. Moreover, a collaborative attention module (CAM) and a detail-context integration module are proposed to enhance and integrate detail and context information. CAM is reused to enhance the detail features from multi-receptive fields and the context features from multiple stages. Extensive experimental results demonstrate that our method outperforms the state-of-the-art LLIE methods, and is applicable to other image enhancement tasks, such as underwater image enhancement.
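A minimal sketch of the detail/context split described above, assuming a small PyTorch block: a full-resolution detail path, a context path built from the detail features with dilated (large-receptive-field) convolutions, and a simple fusion step standing in for the paper's detail-context integration module. Channel counts and layer choices are illustrative, not the published CoEIDC architecture.

```python
# Assumed-structure sketch: detail branch at full resolution, context branch via
# dilated convolutions over the detail features, then a residual fusion.
import torch
import torch.nn as nn

class DetailContextBlock(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        # Detail path: small receptive field, full resolution.
        self.detail = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
        # Context path: dilated convs enlarge the receptive field without downsampling.
        self.context = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=8, dilation=8), nn.ReLU(inplace=True),
        )
        # Integration: concatenate both feature sets and predict an RGB residual.
        self.fuse = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, x):
        d = self.detail(x)           # detail features
        c = self.context(d)          # context features derived from the detail features
        return x + self.fuse(torch.cat([d, c], dim=1))

print(DetailContextBlock()(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```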

Keyword :

Collaborative enhancement and integration; Color/brightness correction; Detail reconstruction; Low-light image enhancement

Cite:


GB/T 7714 Niu, Yuzhen , Lin, Xiaofeng , Xu, Huangbiao et al. Collaboratively enhanced and integrated detail-context information for low-light image enhancement [J]. | PATTERN RECOGNITION , 2025 , 162 .
MLA Niu, Yuzhen et al. "Collaboratively enhanced and integrated detail-context information for low-light image enhancement" . | PATTERN RECOGNITION 162 (2025) .
APA Niu, Yuzhen , Lin, Xiaofeng , Xu, Huangbiao , Xu, Rui , Chen, Yuzhong . Collaboratively enhanced and integrated detail-context information for low-light image enhancement . | PATTERN RECOGNITION , 2025 , 162 .

Version :

Collaboratively enhanced and integrated detail-context information for low-light image enhancement EI
Journal Article | 2025, 162 | Pattern Recognition
Collaboratively enhanced and integrated detail-context information for low-light image enhancement Scopus
Journal Article | 2025, 162 | Pattern Recognition
Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis SCIE
Journal Article | 2025, 81 (1) | JOURNAL OF SUPERCOMPUTING

Abstract :

Aspect-level multimodal sentiment analysis aims to ascertain the sentiment polarity of a given aspect from a text review and its accompanying image. Despite substantial progress made by existing research, aspect-level multimodal sentiment analysis still faces several challenges: (1) Inconsistency in feature granularity between the text and image modalities poses difficulties in capturing corresponding visual representations of aspect words. This inconsistency may introduce irrelevant or redundant information, thereby causing noise and interference in sentiment analysis. (2) Traditional aspect-level sentiment analysis predominantly relies on the fusion of semantic and syntactic information to determine the sentiment polarity of a given aspect. However, introducing the image modality necessitates addressing the semantic gap in jointly understanding sentiment features in different modalities. To address these challenges, a multi-granularity visual-textual feature fusion model (MG-VTFM) is proposed to enable deep sentiment interactions among semantic, syntactic, and image information. First, the model introduces a multi-granularity hierarchical graph attention network that controls the granularity of semantic units interacting with images through the constituent tree. This network extracts image sentiment information relevant to the specific granularity, reduces noise from images, and ensures sentiment relevance in single-granularity cross-modal interactions. Building upon this, a multilayered graph attention module is employed to accomplish multi-granularity sentiment fusion, ranging from fine to coarse. Furthermore, a progressive multimodal attention fusion mechanism is introduced to maximize the extraction of abstract sentiment information from images. Lastly, a mapping mechanism is proposed to align cross-modal information based on aspect words, unifying semantic spaces across different modalities. Our model demonstrates excellent overall performance on two datasets.
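To make the cross-modal step more concrete, here is a hedged sketch of aspect-guided attention over image patch features, one ingredient of the fusion the abstract describes. The dimensions, the pooling of aspect words into a single vector, and the residual/normalization choices are assumptions rather than the paper's exact design.

```python
# Illustrative sketch: a pooled aspect representation attends over image patch
# features; shapes and module choices are assumed, not MG-VTFM's actual layers.
import torch
import torch.nn as nn

class AspectImageAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, aspect_vec, image_patches):
        # aspect_vec: (B, dim) pooled representation of the aspect words
        # image_patches: (B, N, dim) patch/region features from a visual backbone
        q = aspect_vec.unsqueeze(1)                      # (B, 1, dim) query
        ctx, weights = self.attn(q, image_patches, image_patches)
        return self.norm(ctx.squeeze(1) + aspect_vec), weights  # fused aspect feature

fused, w = AspectImageAttention()(torch.rand(2, 256), torch.rand(2, 49, 256))
print(fused.shape, w.shape)  # torch.Size([2, 256]) torch.Size([2, 1, 49])
```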

Keyword :

Aspect-level sentiment analysis; Constituent tree; Multi-granularity; Multimodal data; Visual-textual feature fusion

Cite:


GB/T 7714 Chen, Yuzhong , Shi, Liyuan , Lin, Jiali et al. Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis [J]. | JOURNAL OF SUPERCOMPUTING , 2025 , 81 (1) .
MLA Chen, Yuzhong et al. "Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis" . | JOURNAL OF SUPERCOMPUTING 81 . 1 (2025) .
APA Chen, Yuzhong , Shi, Liyuan , Lin, Jiali , Chen, Jingtian , Zhong, Jiayuan , Dong, Chen . Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis . | JOURNAL OF SUPERCOMPUTING , 2025 , 81 (1) .

Version :

Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis Scopus
Journal Article | 2025, 81 (1) | Journal of Supercomputing
Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis EI
Journal Article | 2025, 81 (1) | Journal of Supercomputing
面向分布式数据安全共享的高速公路路网拥堵监测 (Expressway network congestion monitoring for secure distributed data sharing)
Journal Article | 2025, 41 (1), 11-20 | 福建师范大学学报(自然科学版)

Abstract :

Monitoring road conditions across expressway networks with artificial intelligence has become a research hotspot; however, data silos and privacy protection remain challenges for intelligent decision-making on expressway networks. To achieve secure sharing of distributed data and intelligent decision-making, this paper takes the congestion problem as an example and proposes a federated-learning-based strategy for monitoring road congestion in expressway networks. Using real-time camera data, a congestion monitoring model based on road-segment optimization is built under a federated learning decision-making architecture with homomorphic encryption that supports computation over ciphertexts. The results show that, while ensuring secure sharing of distributed data, the proposed strategy can effectively monitor road congestion across the expressway network.
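The toy sketch below shows the general pattern of homomorphically encrypted federated aggregation that the abstract relies on: clients encrypt their local model updates, the server sums ciphertexts without ever seeing plaintexts, and only the key holder decrypts the averaged result. It uses the python-paillier (phe) library and invented numbers purely for illustration; the paper's actual encryption scheme, congestion model, and road-segment optimization are not reproduced here.

```python
# Illustrative only: additively homomorphic aggregation of client updates.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Each client encrypts its local congestion-model update (a toy 3-weight vector).
client_updates = [
    [0.12, -0.30, 0.05],
    [0.10, -0.25, 0.07],
    [0.15, -0.28, 0.04],
]
encrypted_updates = [[public_key.encrypt(w) for w in upd] for upd in client_updates]

# Server-side aggregation operates purely on ciphertexts.
encrypted_sum = encrypted_updates[0]
for upd in encrypted_updates[1:]:
    encrypted_sum = [a + b for a, b in zip(encrypted_sum, upd)]

# Only the key holder can recover the averaged global update.
global_update = [private_key.decrypt(c) / len(client_updates) for c in encrypted_sum]
print(global_update)
```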

Keyword :

Homomorphic encryption; Secure data sharing; Intelligent decision-making; Federated learning; Road congestion state; Expressway network

Cite:


GB/T 7714 李林锋 , 陈羽中 , 姚毅楠 et al. 面向分布式数据安全共享的高速公路路网拥堵监测 [J]. | 福建师范大学学报(自然科学版) , 2025 , 41 (1) : 11-20 .
MLA 李林锋 et al. "面向分布式数据安全共享的高速公路路网拥堵监测" . | 福建师范大学学报(自然科学版) 41 . 1 (2025) : 11-20 .
APA 李林锋 , 陈羽中 , 姚毅楠 , 邵伟杰 . 面向分布式数据安全共享的高速公路路网拥堵监测 . | 福建师范大学学报(自然科学版) , 2025 , 41 (1) , 11-20 .

Version :

Multi-view multi-behavior interest learning network and contrastive learning for multi-behavior recommendation SCIE
Journal Article | 2024, 305 | KNOWLEDGE-BASED SYSTEMS

Abstract :

Recommendation systems aim to recommend items to users by capturing their personalized interests. Traditional recommendation systems typically focus on modeling target behaviors between users and items. However, in practical application scenarios, various types of behaviors (e.g., click, favorite, purchase, etc.) occur between users and items. Despite recent efforts in modeling various behavior types, multi-behavior recommendation still faces two significant challenges. The first challenge is how to comprehensively capture the complex relationships between various types of behaviors, including their interest differences and interest commonalities. The second challenge is how to solve the sparsity of target behaviors while ensuring the authenticity of information from various types of behaviors. To address these issues, a multi-behavior recommendation framework based on Multi-View Multi-Behavior Interest Learning Network and Contrastive Learning (MMNCL) is proposed. This framework includes a multi-view multi-behavior interest learning module that consists of two submodules: the behavior difference aware submodule, which captures intra-behavior interests for each behavior type and the correlations between various types of behaviors, and the behavior commonality aware submodule, which captures the information of interest commonalities between various types of behaviors. Additionally, a multi-view contrastive learning module is proposed to conduct node self-discrimination, ensuring the authenticity of information integration among various types of behaviors, and facilitating an effective fusion of interest differences and interest commonalities. Experimental results on three real-world benchmark datasets demonstrate the effectiveness of MMNCL and its advantages over other state-of-the-art recommendation models. Our code is available at https://github.com/sujieyang/MMNCL.
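As a concrete reference point for the contrastive-learning module, the sketch below shows a generic InfoNCE-style node self-discrimination loss between two behavior views. The temperature, normalization, and view construction are assumptions; the authors' released repository (linked in the abstract) is the authoritative implementation.

```python
# Generic contrastive loss between two views of the same users, not MMNCL's exact loss.
import torch
import torch.nn.functional as F

def info_nce(view_a: torch.Tensor, view_b: torch.Tensor, temperature: float = 0.2):
    """view_a, view_b: (N, d) embeddings of the same N users under two views."""
    a = F.normalize(view_a, dim=1)
    b = F.normalize(view_b, dim=1)
    logits = a @ b.t() / temperature           # (N, N) scaled cosine similarities
    targets = torch.arange(a.size(0))          # positive pair: same user in both views
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(128, 64), torch.randn(128, 64))
print(loss.item())
```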

Keyword :

Contrastive learning; Interest learning network; Meta learning; Multi-behavior recommendation

Cite:


GB/T 7714 Su, Jieyang , Chen, Yuzhong , Lin, Xiuqiang et al. Multi-view multi-behavior interest learning network and contrastive learning for multi-behavior recommendation [J]. | KNOWLEDGE-BASED SYSTEMS , 2024 , 305 .
MLA Su, Jieyang et al. "Multi-view multi-behavior interest learning network and contrastive learning for multi-behavior recommendation" . | KNOWLEDGE-BASED SYSTEMS 305 (2024) .
APA Su, Jieyang , Chen, Yuzhong , Lin, Xiuqiang , Zhong, Jiayuan , Dong, Chen . Multi-view multi-behavior interest learning network and contrastive learning for multi-behavior recommendation . | KNOWLEDGE-BASED SYSTEMS , 2024 , 305 .

Version :

Multi-view multi-behavior interest learning network and contrastive learning for multi-behavior recommendation Scopus
Journal Article | 2024, 305 | Knowledge-Based Systems
Multi-view multi-behavior interest learning network and contrastive learning for multi-behavior recommendation EI
Journal Article | 2024, 305 | Knowledge-Based Systems
MiNet: Weakly-Supervised Camouflaged Object Detection through Mutual Interaction between Region and Edge Cues EI
Conference Paper | 2024, 6316-6325 | 32nd ACM International Conference on Multimedia, MM 2024

Abstract :

Existing weakly-supervised camouflaged object detection (WSCOD) methods have much difficulty in detecting accurate object boundaries due to insufficient and imprecise boundary supervision in scribble annotations. Drawing inspiration from human perception that discerns camouflaged objects by incorporating both object region and boundary information, we propose a novel Mutual Interaction Network (MiNet) for scribble-based WSCOD to alleviate the detection difficulty caused by insufficient scribbles. The proposed MiNet facilitates mutual reinforcement between region and edge cues, thereby integrating more robust priors to enhance detection accuracy. In this paper, we first construct an edge cue refinement net, featuring a core region-aware guidance module (RGM) aimed at leveraging the extracted region feature as a prior to generate the discriminative edge map. By considering both object semantic and positional relationships between edge feature and region feature, RGM highlights the areas associated with the object in the edge feature. Subsequently, to tackle the inherent similarity between camouflaged objects and the surroundings, we devise a region-boundary refinement net. This net incorporates a core edge-aware guidance module (EGM), which uses the enhanced edge map from the edge cue refinement net as guidance to refine the object boundaries in an iterative and multi-level manner. Experiments on CAMO, CHAMELEON, COD10K, and NC4K datasets demonstrate that the proposed MiNet outperforms the state-of-the-art methods. © 2024 ACM.
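The sketch below illustrates the general idea of region-aware guidance: a region feature is turned into a spatial prior that re-weights an edge feature. The specific layers are assumptions and not the paper's RGM.

```python
# Hedged PyTorch sketch of region-guided edge refinement (assumed layer choices).
import torch
import torch.nn as nn

class RegionGuidedEdge(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        self.prior = nn.Sequential(nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())
        self.refine = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, edge_feat, region_feat):
        # Region feature -> spatial attention highlighting object-related areas.
        attn = self.prior(region_feat)              # (B, 1, H, W), values in (0, 1)
        # Edge feature re-weighted by the region prior, plus a residual path.
        return self.refine(edge_feat * attn) + edge_feat

out = RegionGuidedEdge()(torch.rand(1, 64, 44, 44), torch.rand(1, 64, 44, 44))
print(out.shape)  # torch.Size([1, 64, 44, 44])
```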

Keyword :

Edge detection; Feature extraction; Object detection; Object recognition; Semantic segmentation

Cite:


GB/T 7714 Niu, Yuzhen , Yang, Lifen , Xu, Rui et al. MiNet: Weakly-Supervised Camouflaged Object Detection through Mutual Interaction between Region and Edge Cues [C] . 2024 : 6316-6325 .
MLA Niu, Yuzhen et al. "MiNet: Weakly-Supervised Camouflaged Object Detection through Mutual Interaction between Region and Edge Cues" . (2024) : 6316-6325 .
APA Niu, Yuzhen , Yang, Lifen , Xu, Rui , Li, Yuezhou , Chen, Yuzhong . MiNet: Weakly-Supervised Camouflaged Object Detection through Mutual Interaction between Region and Edge Cues . (2024) : 6316-6325 .

Version :

MiNet: Weakly-Supervised Camouflaged Object Detection through Mutual Interaction between Region and Edge Cues Scopus
Other | 2024, 6316-6325 | MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
Parallax-aware dual-view feature enhancement and adaptive detail compensation for dual-pixel defocus deblurring SCIE
Journal Article | 2024, 139 | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE

Abstract :

Defocus deblurring using dual-pixel sensors has attracted significant attention in recent years. However, current methodologies have not adequately addressed the challenge of defocus disparity between dual views, resulting in suboptimal performance in recovering details from severely defocused pixels. To counteract this limitation, we introduce in this paper a parallax-aware dual-view feature enhancement and adaptive detail compensation network (PA-Net), specifically tailored for the dual-pixel defocus deblurring task. Our proposed PA-Net leverages an encoder-decoder architecture augmented with skip connections, designed to initially extract distinct features from the left and right views. A pivotal aspect of our model lies at the network's bottleneck, where we introduce a parallax-aware dual-view feature enhancement based on Transformer blocks, which aims to align and enhance the extracted dual-pixel features, aggregating them into a unified feature. Furthermore, taking into account the disparity and the rich details embedded in encoder features, we design an adaptive detail compensation module to adaptively incorporate dual-view encoder features into image reconstruction, aiding in restoring image details. Experimental results demonstrate that our proposed PA-Net exhibits superior performance and visual effects on the real-world dataset.
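As a rough illustration of dual-view alignment at the bottleneck, the following sketch cross-attends left-view and right-view token features in both directions and merges the results. Token shapes and the merge layer are assumptions, not the released PA-Net code.

```python
# Assumed-shape sketch: bidirectional cross-attention between dual-pixel views.
import torch
import torch.nn as nn

class DualViewCrossAttention(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.l2r = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.r2l = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, left, right):
        # left, right: (B, N, dim) token features from the two dual-pixel views.
        l_aligned, _ = self.l2r(left, right, right)   # left queries attend to right
        r_aligned, _ = self.r2l(right, left, left)    # right queries attend to left
        return self.merge(torch.cat([l_aligned, r_aligned], dim=-1))

fused = DualViewCrossAttention()(torch.rand(1, 256, 128), torch.rand(1, 256, 128))
print(fused.shape)  # torch.Size([1, 256, 128])
```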

Keyword :

Defocus deblurring; Defocus disparity; Detail restoration; Dual-pixel; Image restoration

Cite:


GB/T 7714 Niu, Yuzhen , He, Yuqi , Xu, Rui et al. Parallax-aware dual-view feature enhancement and adaptive detail compensation for dual-pixel defocus deblurring [J]. | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE , 2024 , 139 .
MLA Niu, Yuzhen et al. "Parallax-aware dual-view feature enhancement and adaptive detail compensation for dual-pixel defocus deblurring" . | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 139 (2024) .
APA Niu, Yuzhen , He, Yuqi , Xu, Rui , Li, Yuezhou , Chen, Yuzhong . Parallax-aware dual-view feature enhancement and adaptive detail compensation for dual-pixel defocus deblurring . | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE , 2024 , 139 .

Version :

Parallax-aware dual-view feature enhancement and adaptive detail compensation for dual-pixel defocus deblurring EI
Journal Article | 2025, 139 | Engineering Applications of Artificial Intelligence
Parallax-aware dual-view feature enhancement and adaptive detail compensation for dual-pixel defocus deblurring Scopus
Journal Article | 2025, 139 | Engineering Applications of Artificial Intelligence
Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement SCIE
Journal Article | 2024, 26, 10792-10804 | IEEE TRANSACTIONS ON MULTIMEDIA

Abstract :

Low-light image enhancement is a challenging task due to the limited visibility in dark environments. While recent advances have shown progress in integrating CNNs and Transformers, inadequate local-global perceptual interaction still impedes their application in complex degradation scenarios. To tackle this issue, we propose BiFormer, a lightweight framework that facilitates local-global collaborative perception via bilateral interaction. Specifically, our framework introduces a core CNN-Transformer collaborative perception block (CPB) that combines local-aware convolutional attention (LCA) and global-aware recursive Transformer (GRT) to simultaneously preserve local details and ensure global consistency. To promote perceptual interaction, we adopt a bilateral interaction strategy for both local and global perception, which involves local-to-global second-order interaction (SoI) in the dual-domain, as well as a mixed-channel fusion (MCF) module for global-to-local interaction. The MCF is also a highly efficient feature fusion module tailored for degraded features. Extensive experiments conducted on low-level and high-level tasks demonstrate that BiFormer achieves state-of-the-art performance. Furthermore, it exhibits a significant reduction in model parameters and computational cost compared to existing Transformer-based low-light image enhancement methods.
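For orientation, here is a generic hybrid CNN-Transformer block in the spirit of the design described above: a convolutional branch for local detail, a self-attention branch for global context, and a 1x1 convolution standing in for channel-wise fusion. It is a sketch under assumed channel counts, not BiFormer's LCA, GRT, or MCF modules.

```python
# Generic local/global hybrid block; all layer choices are assumptions.
import torch
import torch.nn as nn

class LocalGlobalBlock(nn.Module):
    def __init__(self, ch: int = 48, heads: int = 4):
        super().__init__()
        self.local = nn.Sequential(                       # local branch: depthwise conv
            nn.Conv2d(ch, ch, 3, padding=1, groups=ch), nn.GELU(), nn.Conv2d(ch, ch, 1)
        )
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)              # channel-wise fusion

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)             # (B, H*W, C) for attention
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return x + self.fuse(torch.cat([local, glob], dim=1))

print(LocalGlobalBlock()(torch.rand(1, 48, 32, 32)).shape)  # torch.Size([1, 48, 32, 32])
```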

Keyword :

Bilateral interaction; Hybrid CNN-Transformer; Low-light image enhancement; Mixed-channel fusion

Cite:


GB/T 7714 Xu, Rui , Li, Yuezhou , Niu, Yuzhen et al. Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement [J]. | IEEE TRANSACTIONS ON MULTIMEDIA , 2024 , 26 : 10792-10804 .
MLA Xu, Rui et al. "Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement" . | IEEE TRANSACTIONS ON MULTIMEDIA 26 (2024) : 10792-10804 .
APA Xu, Rui , Li, Yuezhou , Niu, Yuzhen , Xu, Huangbiao , Chen, Yuzhong , Zhao, Tiesong . Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement . | IEEE TRANSACTIONS ON MULTIMEDIA , 2024 , 26 , 10792-10804 .

Version :

Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement Scopus
Journal Article | 2024, 26, 1-13 | IEEE Transactions on Multimedia
Bilateral Interaction for Local-Global Collaborative Perception in Low-Light Image Enhancement EI
Journal Article | 2024, 26, 10792-10804 | IEEE Transactions on Multimedia
A knowledge-enhanced interest segment division attention network for click-through rate prediction EI
Journal Article | 2024, 36 (34), 21817-21837 | Neural Computing and Applications

Abstract :

Click-through rate (CTR) prediction aims to estimate the probability of a user clicking on a particular item, making it one of the core tasks in various recommendation platforms. In such systems, user behavior data are crucial for capturing user interests, which has garnered significant attention from both academia and industry, leading to the development of various user behavior modeling methods. However, existing models still face unresolved issues, as they fail to capture the complex diversity of user interests at the semantic level, refine user interests effectively, and uncover users' potential interests. To address these challenges, we propose a novel model called the knowledge-enhanced interest segment division attention network (KISDAN), which can effectively and comprehensively model user interests. Specifically, to leverage the semantic information within user behavior sequences, we employ the structure of a knowledge graph to divide the user behavior sequence into multiple interest segments. To provide a comprehensive representation of user interests, we further categorize user interests into strong and weak interests. By leveraging both the knowledge graph and the item co-occurrence graph, we explore users' potential interests from two perspectives. This methodology allows KISDAN to better understand the diversity of user interests. Finally, we extensively evaluate KISDAN on three benchmark datasets, and the experimental results consistently demonstrate that the KISDAN model outperforms state-of-the-art models across various evaluation metrics, which validates the effectiveness and superiority of KISDAN. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
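A toy illustration of the interest-segment idea: grouping consecutive behaviors whose items map to the same knowledge-graph category. The item-to-category mapping and the grouping rule below are invented for exposition; KISDAN's actual segment division exploits the knowledge-graph structure in a more sophisticated way.

```python
# Invented data, for exposition only: split a behavior sequence into segments
# of consecutive items sharing a knowledge-graph category.
from itertools import groupby

# Hypothetical item -> knowledge-graph category mapping.
item_category = {
    "iphone15": "phone", "pixel8": "phone",
    "ssd1tb": "storage", "sdcard": "storage",
    "running_shoes": "sport",
}

def interest_segments(behavior_seq):
    """Group consecutive behaviors whose items map to the same KG category."""
    keyed = [(item_category.get(item, "unknown"), item) for item in behavior_seq]
    return [
        (category, [item for _, item in group])
        for category, group in groupby(keyed, key=lambda pair: pair[0])
    ]

print(interest_segments(["iphone15", "pixel8", "ssd1tb", "sdcard", "running_shoes"]))
# [('phone', ['iphone15', 'pixel8']), ('storage', ['ssd1tb', 'sdcard']), ('sport', ['running_shoes'])]
```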

Keyword :

Contrastive learning; Knowledge graph; Prediction models; Semantic segmentation

Cite:


GB/T 7714 Liu, Zhanghui , Chen, Shijie , Chen, Yuzhong et al. A knowledge-enhanced interest segment division attention network for click-through rate prediction [J]. | Neural Computing and Applications , 2024 , 36 (34) : 21817-21837 .
MLA Liu, Zhanghui et al. "A knowledge-enhanced interest segment division attention network for click-through rate prediction" . | Neural Computing and Applications 36 . 34 (2024) : 21817-21837 .
APA Liu, Zhanghui , Chen, Shijie , Chen, Yuzhong , Su, Jieyang , Zhong, Jiayuan , Dong, Chen . A knowledge-enhanced interest segment division attention network for click-through rate prediction . | Neural Computing and Applications , 2024 , 36 (34) , 21817-21837 .

Version :

A knowledge-enhanced interest segment division attention network for click-through rate prediction Scopus
Journal Article | 2024, 36 (34), 21817-21837 | Neural Computing and Applications