
author:

Yao, Zhaojian [1] | Gao, Wei [2] | Li, Ge [3] | Zhao, Tiesong [4] (Scholars: 赵铁松)

Indexed by:

Scopus SCIE

Abstract:

The key challenge of cross-modal salient object detection lies in the representational discrepancy between different modal inputs. Existing methods typically employ only one encoding mode, either constrained encoding to extract modality-shared characteristics, or unconstrained encoding to capture modality-specific traits. However, the use of a single paradigm limits the capability of capturing salient cues, thus leading to poor generalization of existing methods. We propose a novel learning paradigm named "Collaborating Constrained and Unconstrained Encodings" (CCUE) that integrates constrained and unconstrained feature extraction to discover richer salient cues. Accordingly, we establish a CCUE network (CCUENet) consisting of a constrained branch and an unconstrained branch. The representations at each level from these two branches are integrated in an Information Selection and Fusion (ISF) module. The novelty of this module lies in its selective fusion of the important information from each feature primarily based on the response degree, which enables the network to aggregate effective cues for saliency detection. In the network training stage, we propose a Multi-scale Boundary Information (MBI) loss, which can constrain the detection results to retain clear region boundaries and boost the model's robustness to variations in object scale. Under the supervision of MBI loss, CCUENet is able to output high-quality saliency maps. The experimental results show that CCUENet exhibits superior performance on RGB-T and RGB-D datasets.
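
As a rough illustration of the paradigm described above, the sketch below pairs a constrained branch (weights shared across modalities) with an unconstrained branch (modality-specific weights) and fuses each level's outputs with a simple response-weighted selection. All module names (ConvBlock, ISFBlock, TwoBranchEncoder), layer sizes, and the response-weighting rule are illustrative assumptions; this is not the authors' CCUENet implementation, and it omits the decoder and the MBI loss.

```python
# Hypothetical sketch of the CCUE idea from the abstract: a constrained
# branch (shared weights across modalities) plus an unconstrained branch
# (per-modality weights), with level-wise selective fusion by response
# strength. Names and sizes are assumptions, not the authors' code.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Small conv-BN-ReLU stage used by both branches (illustrative)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class ISFBlock(nn.Module):
    """Selective fusion: weight each branch's feature by its mean response."""
    def __init__(self, channels):
        super().__init__()
        self.merge = nn.Conv2d(channels, channels, 1)

    def forward(self, f_con, f_unc):
        # Response degree approximated by the global average activation.
        w_con = torch.sigmoid(f_con.mean(dim=(2, 3), keepdim=True))
        w_unc = torch.sigmoid(f_unc.mean(dim=(2, 3), keepdim=True))
        return self.merge(w_con * f_con + w_unc * f_unc)


class TwoBranchEncoder(nn.Module):
    """Constrained branch shares weights across modalities; unconstrained does not."""
    def __init__(self, channels=(32, 64)):
        super().__init__()
        chs = (3,) + tuple(channels)
        self.shared = nn.ModuleList([ConvBlock(chs[i], chs[i + 1]) for i in range(len(channels))])
        self.rgb = nn.ModuleList([ConvBlock(chs[i], chs[i + 1]) for i in range(len(channels))])
        self.aux = nn.ModuleList([ConvBlock(chs[i], chs[i + 1]) for i in range(len(channels))])
        self.isf = nn.ModuleList([ISFBlock(c) for c in channels])

    def forward(self, rgb, aux):
        fused = []
        s_r, s_a, u_r, u_a = rgb, aux, rgb, aux
        for shared, enc_r, enc_a, isf in zip(self.shared, self.rgb, self.aux, self.isf):
            s_r, s_a = shared(s_r), shared(s_a)      # constrained: same weights for both modalities
            u_r, u_a = enc_r(u_r), enc_a(u_a)        # unconstrained: per-modality weights
            fused.append(isf(s_r + s_a, u_r + u_a))  # level-wise selective fusion
        return fused


if __name__ == "__main__":
    rgb = torch.randn(1, 3, 64, 64)
    depth_or_thermal = torch.randn(1, 3, 64, 64)  # second modality (RGB-D or RGB-T)
    feats = TwoBranchEncoder()(rgb, depth_or_thermal)
    print([f.shape for f in feats])
```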

Keyword:

Adaptation models; constrained features; Encoding; Feature extraction; Fuses; Imaging; information selection and fusion; Lighting; multi-scale boundary information loss; Object detection; Object recognition; Saliency detection; Salient object detection; Training; unconstrained features

Community:

  • [ 1 ] [Yao, Zhaojian]Peking Univ, Sch Elect & Comp Engn, Guangdong Prov Key Lab Ultra High Definit Immers M, Shenzhen 518055, Peoples R China
  • [ 2 ] [Gao, Wei]Peking Univ, Sch Elect & Comp Engn, Guangdong Prov Key Lab Ultra High Definit Immers M, Shenzhen 518055, Peoples R China
  • [ 3 ] [Li, Ge]Peking Univ, Sch Elect & Comp Engn, Guangdong Prov Key Lab Ultra High Definit Immers M, Shenzhen 518055, Peoples R China
  • [ 4 ] [Gao, Wei]Peng Cheng Lab, Shenzhen 518066, Peoples R China
  • [ 5 ] [Zhao, Tiesong]Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350108, Peoples R China

Reprint Author's Address:

  • [Gao, Wei]Peking Univ, Sch Elect & Comp Engn, Guangdong Prov Key Lab Ultra High Definit Immers M, Shenzhen 518055, Peoples R China; [Gao, Wei]Peng Cheng Lab, Shenzhen 518066, Peoples R China

Source:

IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE

ISSN: 2471-285X

Year: 2025

5.300

JCR@2023

Cited Count:

WoS CC Cited Count:

SCOPUS Cited Count:

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:

