
Author:

Chen, Y. [1] | Shi, L. [2] | Lin, J. [3] | Chen, J. [4] | Zhong, J. [5] | Dong, C. [6]

Indexed by:

Scopus

Abstract:

Aspect-level multimodal sentiment analysis aims to ascertain the sentiment polarity of a given aspect from a text review and its accompanying image. Despite substantial progress made by existing research, aspect-level multimodal sentiment analysis still faces several challenges: (1) Inconsistency in feature granularity between the text and image modalities poses difficulties in capturing corresponding visual representations of aspect words. This inconsistency may introduce irrelevant or redundant information, thereby causing noise and interference in sentiment analysis. (2) Traditional aspect-level sentiment analysis predominantly relies on the fusion of semantic and syntactic information to determine the sentiment polarity of a given aspect. However, introducing the image modality necessitates addressing the semantic gap in jointly understanding sentiment features across different modalities. To address these challenges, a multi-granularity visual-textual feature fusion model (MG-VTFM) is proposed to enable deep sentiment interactions among semantic, syntactic, and image information. First, the model introduces a multi-granularity hierarchical graph attention network that controls the granularity of the semantic units interacting with images through a constituent tree. This network extracts image sentiment information relevant to the specific granularity, reduces noise from images, and ensures sentiment relevance in single-granularity cross-modal interactions. Building upon this, a multilayered graph attention module is employed to accomplish multi-granularity sentiment fusion, ranging from fine to coarse. Furthermore, a progressive multimodal attention fusion mechanism is introduced to maximize the extraction of abstract sentiment information from images. Lastly, a mapping mechanism is proposed to align cross-modal information based on aspect words, unifying the semantic spaces of the different modalities. Our model demonstrates excellent overall performance on two datasets. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
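The abstract describes, at a high level, graph attention over constituent-tree nodes combined with attention from text units to image regions. Below is a minimal, illustrative PyTorch sketch of that general idea; it is not the paper's implementation. All names (TreeGraphAttention, CrossModalFusion) and the toy tensor shapes are hypothetical, and the multi-granularity hierarchy, progressive fusion mechanism, and aspect-word mapping mechanism named in the abstract are omitted.

```python
# Minimal sketch (assumption: a PyTorch-style illustration, NOT the paper's
# code). It shows two building blocks suggested by the abstract: graph
# attention among constituent-tree nodes, and cross-modal attention from
# text nodes to image-region features.
import torch
import torch.nn as nn


class TreeGraphAttention(nn.Module):
    """Scaled dot-product attention among text nodes, masked by a
    constituent-tree adjacency matrix (a hypothetical simplification)."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # nodes: (B, N, d) node embeddings at one granularity level
        # adj:   (B, N, N) adjacency, 1 where an edge exists; it must include
        #        self-loops so no softmax row is entirely masked out
        q, k, v = self.q(nodes), self.k(nodes), self.v(nodes)
        scores = q @ k.transpose(-1, -2) / nodes.size(-1) ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))
        return torch.softmax(scores, dim=-1) @ v


class CrossModalFusion(nn.Module):
    """Text nodes attend to image-region features, so each semantic unit
    pulls in only the visual information relevant to it."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, nodes: torch.Tensor, regions: torch.Tensor) -> torch.Tensor:
        # nodes: (B, N, d) queries; regions: (B, R, d) image-region features
        ctx, _ = self.attn(nodes, regions, regions)
        return self.norm(nodes + ctx)  # residual connection


if __name__ == "__main__":
    B, N, R, d = 2, 12, 49, 256                # toy sizes, not from the paper
    nodes = torch.randn(B, N, d)               # constituent-tree node embeddings
    adj = torch.eye(N).expand(B, N, N)         # toy adjacency: self-loops only
    regions = torch.randn(B, R, d)             # e.g. a 7x7 grid of image features
    out = CrossModalFusion(d)(TreeGraphAttention(d)(nodes, adj), regions)
    print(out.shape)                           # torch.Size([2, 12, 256])
```

In the described model, layers like these would presumably be stacked per granularity level of the constituent tree and fused from fine to coarse; that hierarchy is beyond the scope of this sketch.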

Keyword:

Aspect-level sentiment analysis; Constituent tree; Multi-granularity; Multimodal data; Visual-textual feature fusion

Community:

  • [ 1 ] [Chen Y.]College of Computer and Data Science, Fuzhou University, Fujian, Fuzhou, 350108, China
  • [ 2 ] [Chen Y.]Engineering Research Center of Big Data Intelligence, Ministry of Education, Fuzhou, China
  • [ 3 ] [Chen Y.]Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fujian, Fuzhou, 350108, China
  • [ 4 ] [Shi L.]College of Computer and Data Science, Fuzhou University, Fujian, Fuzhou, 350108, China
  • [ 5 ] [Shi L.]Engineering Research Center of Big Data Intelligence, Ministry of Education, Fuzhou, China
  • [ 6 ] [Shi L.]Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fujian, Fuzhou, 350108, China
  • [ 7 ] [Lin J.]College of Computer and Data Science, Fuzhou University, Fujian, Fuzhou, 350108, China
  • [ 8 ] [Lin J.]Engineering Research Center of Big Data Intelligence, Ministry of Education, Fuzhou, China
  • [ 9 ] [Lin J.]Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fujian, Fuzhou, 350108, China
  • [ 10 ] [Chen J.]Fujian Media Group, Fujian, Fuzhou, 350002, China
  • [ 11 ] [Zhong J.]College of Computer and Data Science, Fuzhou University, Fujian, Fuzhou, 350108, China
  • [ 12 ] [Zhong J.]Engineering Research Center of Big Data Intelligence, Ministry of Education, Fuzhou, China
  • [ 13 ] [Zhong J.]Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fujian, Fuzhou, 350108, China
  • [ 14 ] [Dong C.]College of Computer and Data Science, Fuzhou University, Fujian, Fuzhou, 350108, China
  • [ 15 ] [Dong C.]Engineering Research Center of Big Data Intelligence, Ministry of Education, Fuzhou, China
  • [ 16 ] [Dong C.]Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fujian, Fuzhou, 350108, China

Reprint Author's Address:

Email:


Related Keywords:

Source:

Journal of Supercomputing

ISSN: 0920-8542

Year: 2025

Issue: 1

Volume: 81

Impact Factor: 2.500 (JCR@2023)

CAS Journal Grade: 3

Cited Count:

WoS CC Cited Count:

SCOPUS Cited Count:

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:

30-Day Page Views: 3

Affiliated Colleges:
