Research Output Search
DFINet: Dynamic feedback iterative network for infrared small target detection SCIE
Journal Article | 2026, 169 | PATTERN RECOGNITION

Abstract :

Recently, deep learning-based methods have made impressive progress in infrared small target detection (IRSTD). However, the weak and variable nature of small targets constrains the feature extraction and scene adaptation of existing methods, leading to low data utilization and poor robustness. To address this issue, we innovatively introduce the feedback mechanism into IRSTD and propose the dynamic feedback iterative network (DFINet). The main motivation is to guide model training and prediction using the history prediction mask (HPMK) from previous rounds. On the one hand, in the training phase, DFINet can further mine the key features of real targets by training over multiple iterations with limited data; on the other hand, in the prediction phase, DFINet can correct erroneous results through feedback iteration to improve model robustness. Specifically, we first propose the dynamic feedback feature fusion module (DFFFM), which dynamically interacts the HPMK with feature maps through a hard attention mechanism to guide feature mining and error correction. Then, for better feature extraction, the cascaded hybrid pyramid pooling module (CHPP) is devised to capture both global and local information. Finally, we propose the dynamic semantic fusion module (DSFM), which innovatively utilizes feedback information to guide the fusion of high-level and low-level features for better feature representation in different scenarios. Extensive experimental results on the publicly available NUDT-SIRST, IRSTD-1k, and SIRST Aug datasets show that DFINet outperforms several state-of-the-art methods and achieves superior detection performance. Our code will be publicly available at https://github.com/uisdu/DFINet.
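
The feedback loop described above can be pictured with a minimal sketch (not the authors' released code; the toy encoder, channel sizes, and the binarised hard-attention gate are illustrative assumptions): the previous round's prediction mask is fed back as an extra input and re-weights the features before the next prediction.

```python
# Minimal sketch of feedback iteration: the previous prediction mask is fed
# back, binarised (hard attention), and used to modulate the current features.
import torch
import torch.nn as nn

class ToyFeedbackDetector(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # image (1 ch) + previous mask (1 ch) -> features
        self.encoder = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(channels, 1, 1)

    def forward(self, image, prev_mask):
        # hard attention: binarise the fed-back mask so only confident regions
        # modulate the current features
        hard_mask = (prev_mask > 0.5).float()
        x = torch.cat([image, hard_mask], dim=1)
        feat = self.encoder(x)
        feat = feat * (1.0 + hard_mask)          # emphasise previously detected regions
        return torch.sigmoid(self.head(feat))

def iterative_predict(model, image, num_iters=3):
    """Run the feedback loop: each round reuses the previous round's mask."""
    mask = torch.zeros_like(image)               # round 0 starts from an empty mask
    for _ in range(num_iters):
        mask = model(image, mask)
    return mask

if __name__ == "__main__":
    model = ToyFeedbackDetector()
    img = torch.rand(1, 1, 64, 64)               # dummy infrared image
    print(iterative_predict(model, img).shape)   # torch.Size([1, 1, 64, 64])
```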

Keyword :

Error correction; Feature mining; Feedback iteration; Infrared small target detection

Cite:

GB/T 7714 Wu, Jing , Luo, Changhai , Qiu, Zhaobing et al. DFINet: Dynamic feedback iterative network for infrared small target detection [J]. | PATTERN RECOGNITION , 2026 , 169 .
MLA Wu, Jing et al. "DFINet: Dynamic feedback iterative network for infrared small target detection" . | PATTERN RECOGNITION 169 (2026) .
APA Wu, Jing , Luo, Changhai , Qiu, Zhaobing , Chen, Liqiong , Ni, Rixiang , Li, Yunxiang et al. DFINet: Dynamic feedback iterative network for infrared small target detection . | PATTERN RECOGNITION , 2026 , 169 .

Version :

DFINet: Dynamic feedback iterative network for infrared small target detection Scopus
Journal Article | 2026, 169 | Pattern Recognition
DFINet: Dynamic feedback iterative network for infrared small target detection EI
Journal Article | 2026, 169 | Pattern Recognition
基于多尺度空间自适应注意力网络的轻量级图像超分辨率方法 (Lightweight Image Super-Resolution Method Based on a Multi-Scale Spatial Adaptive Attention Network)
Journal Article | 2025, 38 (1), 36-50 | 模式识别与人工智能

Abstract :

To address the excessive model complexity and large parameter counts of existing image super-resolution reconstruction methods, this paper proposes a lightweight image super-resolution reconstruction method based on a Multi-scale Spatial Adaptive Attention Network (MSAAN). First, a Global Feature Modulation Module (GFM) is designed to learn global texture features. At the same time, a lightweight Multi-scale Feature Aggregation Module (MFA) is designed to adaptively aggregate local-to-global high-frequency spatial features. Then, GFM and MFA are fused to form the Multi-scale Spatial Adaptive Attention Module (MSAA). Finally, a Feature Interactive Gated Feed-Forward Module (FIGFF) is used to enhance local information extraction while reducing channel redundancy. Extensive experiments show that MSAAN captures more comprehensive and finer features, significantly improving reconstruction quality while remaining lightweight.
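
The gated feed-forward idea mentioned in the abstract (FIGFF) can be sketched roughly as below; this is an assumed, generic gated feed-forward block in PyTorch, not the published implementation, with channel counts chosen arbitrarily.

```python
# Generic gated feed-forward sketch: one branch becomes a gate that modulates
# the other, suppressing redundant channels at low computational cost.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFeedForward(nn.Module):
    def __init__(self, channels=32, expansion=2):
        super().__init__()
        hidden = channels * expansion
        self.proj_in = nn.Conv2d(channels, hidden * 2, 1)
        self.dwconv = nn.Conv2d(hidden * 2, hidden * 2, 3, padding=1, groups=hidden * 2)
        self.proj_out = nn.Conv2d(hidden, channels, 1)

    def forward(self, x):
        value, gate = self.dwconv(self.proj_in(x)).chunk(2, dim=1)
        return self.proj_out(value * F.gelu(gate)) + x   # gated interaction + residual

if __name__ == "__main__":
    block = GatedFeedForward()
    print(block(torch.rand(1, 32, 48, 48)).shape)        # torch.Size([1, 32, 48, 48])
```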

Keyword :

Transformer; convolutional neural network; multi-scale spatial adaptive attention; lightweight image super-resolution reconstruction

Cite:

GB/T 7714 黄峰 , 刘鸿伟 , 沈英 et al. 基于多尺度空间自适应注意力网络的轻量级图像超分辨率方法 [J]. | 模式识别与人工智能 , 2025 , 38 (1) : 36-50 .
MLA 黄峰 et al. "基于多尺度空间自适应注意力网络的轻量级图像超分辨率方法" . | 模式识别与人工智能 38 . 1 (2025) : 36-50 .
APA 黄峰 , 刘鸿伟 , 沈英 , 裘兆炳 , 陈丽琼 . 基于多尺度空间自适应注意力网络的轻量级图像超分辨率方法 . | 模式识别与人工智能 , 2025 , 38 (1) , 36-50 .

Version :

基于多尺度空间自适应注意力网络的轻量级图像超分辨率方法 (Lightweight Image Super-Resolution Method Based on a Multi-Scale Spatial Adaptive Attention Network)
Journal Article | 2025, 38 (01), 36-50 | 模式识别与人工智能
Point-to-Point Regression: Accurate Infrared Small Target Detection With Single-Point Annotation SCIE
Journal Article | 2025, 63 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

Infrared small target detection (IRSTD) plays a vital role in various fields, especially in military early warning and maritime rescue. Its main goal is to accurately locate targets at long distances. Current deep learning (DL)-based methods mainly rely on mask-to-mask or box-to-box regression training approaches, making considerable progress in detection accuracy. However, these methods rely on large amounts of training data with expensive manual annotation. Although some researchers attempt to reduce the cost using single-point weak supervision (SPWS), the limited labeling accuracy significantly degrades the detection performance. To address these issues, we propose a novel point-to-point regression high-resolution dynamic network (P2P-HDNet), which can accurately locate the target center using only single-point annotation. Specifically, we first devise the high-resolution cross-feature extraction module (HCEM) to provide richer target detail information for the deep feature maps. Notably, HCEM maintains high resolution throughout the feature extraction process to minimize information loss. Then, the dynamic coordinate fusion module (DCFM) is devised to fully fuse the multidimensional features and enhance the positional sensitivity. Finally, we devise an adaptive target localization detection head (ATLDH) to further suppress clutter and improve the localization accuracy by regressing a Gaussian heatmap and applying an adaptive non-maximum suppression strategy. Extensive experimental results show that P2P-HDNet can achieve better detection accuracy than the state-of-the-art (SOTA) methods with only single-point annotation. In addition, our code and datasets will be available at: https://github.com/Anton-Nrx/P2P-HDNet.
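
The Gaussian-heatmap regression target implied by the single-point annotation can be sketched as follows; the sigma value and grid size are illustrative assumptions, not the paper's settings.

```python
# A single annotated (x, y) centre is rendered as a Gaussian peak that a
# detection head can regress; the peak sits exactly at the annotated point.
import numpy as np

def point_to_heatmap(center_xy, height, width, sigma=2.0):
    """Render one (x, y) point annotation as a Gaussian heatmap."""
    xs = np.arange(width)[None, :]
    ys = np.arange(height)[:, None]
    cx, cy = center_xy
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

heat = point_to_heatmap((30, 12), height=64, width=64)
print(heat.shape, heat.max(), np.unravel_index(heat.argmax(), heat.shape))
# (64, 64) 1.0 (12, 30)  -> the maximum lies at the annotated centre
```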

Keyword :

Dynamic feature attention mechanism; high-resolution feature extraction; infrared small target detection (IRSTD); point-to-point regression (P2PR); single-point supervision

Cite:

GB/T 7714 Ni, Rixiang , Wu, Jing , Qiu, Zhaobing et al. Point-to-Point Regression: Accurate Infrared Small Target Detection With Single-Point Annotation [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 .
MLA Ni, Rixiang et al. "Point-to-Point Regression: Accurate Infrared Small Target Detection With Single-Point Annotation" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025) .
APA Ni, Rixiang , Wu, Jing , Qiu, Zhaobing , Chen, Liqiong , Luo, Changhai , Huang, Feng et al. Point-to-Point Regression: Accurate Infrared Small Target Detection With Single-Point Annotation . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 .

Version :

Point-to-Point Regression: Accurate Infrared Small Target Detection With Single-Point Annotation Scopus
Journal Article | 2025, 63 | IEEE Transactions on Geoscience and Remote Sensing
Point-to-Point Regression: Accurate Infrared Small Target Detection With Single-Point Annotation Scopus
Journal Article | 2025, 63 | IEEE Transactions on Geoscience and Remote Sensing
Point-to-Point Regression: Accurate Infrared Small Target Detection With Single-Point Annotation EI
Journal Article | 2025, 63 | IEEE Transactions on Geoscience and Remote Sensing
Uncertainty and diversity-based active learning for UAV tracking SCIE
Journal Article | 2025, 639 | NEUROCOMPUTING

Abstract :

Unmanned aerial vehicles (UAVs) are increasingly utilized in target tracking scenarios due to their compact size and agile movement. With the rapid advancement of artificial intelligence, an increasing number of deep learning algorithms, particularly Transformer-based trackers, are being employed in UAV tracking. However, these algorithms typically have substantial data requirements. This paper investigates the integration of active learning and UAV tracking to mitigate the dataset demands of these models. We propose an active learning framework tailored for UAV target tracking scenarios. Our method, Uncertainty and Diversity-based Active Learning for UAV Tracking (UDALT), aims to develop a high-performance tracking model by selecting the most informative samples based on video-level uncertainty and diversity. Specifically, for the uncertainty of the tracked object, we introduce a new entropy-based evaluation formula to assess unlabeled samples and identify more challenging ones. For diversity, we first represent object types by leveraging intermediate features from the model, then apply the K-means clustering algorithm to determine the cluster centers of known object types. By calculating the distances of unlabeled samples from these centers, we ensure a balanced distribution of object types when selecting new samples. This combination of uncertainty and diversity effectively reduces labeling costs while maintaining high tracking accuracy. To further enhance tracking performance in challenging UAV scenarios, we replace the traditional Intersection over Union (IoU) computation with Focaler-IoU. This adjustment allows the trained model to better align with difficult samples, thereby improving model robustness. Finally, we evaluate our algorithm on the UAV123, UAVDT, and DTB70 datasets. The results demonstrate that our UDALT method outperforms several existing active learning methods, validating the effectiveness of our proposed tracking method.
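
A rough sketch of the selection criterion described above, combining prediction entropy (uncertainty) with distance to K-means centres of already-labelled features (diversity); the equal weighting and max-normalisation below are assumptions, not the paper's exact formula.

```python
# Active-learning selection: rank unlabeled videos by entropy of model outputs
# plus distance to the nearest cluster centre of already-labelled features.
import numpy as np
from sklearn.cluster import KMeans

def select_samples(probs, feats, labeled_feats, k=4, budget=5):
    """probs: (N, C) softmax outputs; feats: (N, D) features of unlabeled videos."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)        # uncertainty
    centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(labeled_feats).cluster_centers_
    # diversity: distance of each unlabeled sample to its nearest known centre
    dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2).min(axis=1)
    score = entropy / entropy.max() + dists / dists.max()
    return np.argsort(-score)[:budget]                              # highest combined score first

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=100)
print(select_samples(probs, rng.normal(size=(100, 16)), rng.normal(size=(40, 16))))
```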

Keyword :

Active learning; Deep learning; Diversity; UAV tracking; Uncertainty

Cite:

GB/T 7714 Liang, Yingqin , Huang, Feng , Qiu, Zhaobing et al. Uncertainty and diversity-based active learning for UAV tracking [J]. | NEUROCOMPUTING , 2025 , 639 .
MLA Liang, Yingqin et al. "Uncertainty and diversity-based active learning for UAV tracking" . | NEUROCOMPUTING 639 (2025) .
APA Liang, Yingqin , Huang, Feng , Qiu, Zhaobing , Shu, Xiu , Liu, Qiao , Yuan, Di . Uncertainty and diversity-based active learning for UAV tracking . | NEUROCOMPUTING , 2025 , 639 .

Version :

Uncertainty and diversity-based active learning for UAV tracking Scopus
Journal Article | 2025, 639 | Neurocomputing
Uncertainty and diversity-based active learning for UAV tracking EI
Journal Article | 2025, 639 | Neurocomputing
PFAN: progressive feature aggregation network for lightweight image super-resolution SCIE
Journal Article | 2025, 41 (11), 8431-8450 | VISUAL COMPUTER
WoS CC Cited Count: 1

Abstract :

Image super-resolution (SR) has recently gained traction in various fields, including remote sensing, biomedicine, and video surveillance. Nonetheless, the majority of advancements in SR have been achieved by scaling the architecture of convolutional neural networks, which inevitably increases computational complexity. In addition, most existing SR models struggle to effectively capture high-frequency information, resulting in overly smooth reconstructed images. To address this issue, we propose a lightweight Progressive Feature Aggregation Network (PFAN), which leverages a Progressive Feature Aggregation Block to enhance different features through a progressive strategy. Specifically, we propose a Key Information Perception Module for capturing high-frequency details across the spatial-channel dimension to recover edge features. We also design a Local Feature Enhancement Module, which effectively combines multi-scale convolutions for local feature extraction with a Transformer for long-range dependency modeling. Through the progressive fusion of rich edge details and texture features, PFAN achieves better reconstruction performance. Extensive experiments on five benchmark datasets demonstrate that PFAN outperforms state-of-the-art methods and strikes a better balance across SR performance, parameters, and computational complexity. Code is available at https://github.com/handsomeyxk/PFAN.
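
One plausible reading of the high-frequency "key information perception" idea, extracting detail and using it as spatial attention, is sketched below; the block structure is an assumption made for illustration, not the released PFAN code.

```python
# High-frequency residual (input minus low-pass estimate) used as a per-pixel
# gate so edge and texture detail is emphasised before feature aggregation.
import torch
import torch.nn as nn

class HighFreqAttention(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.blur = nn.AvgPool2d(3, stride=1, padding=1)    # low-pass estimate
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        high_freq = x - self.blur(x)                        # edges / texture residual
        attn = torch.sigmoid(high_freq)                     # per-pixel gate
        return self.fuse(x * attn) + x

if __name__ == "__main__":
    print(HighFreqAttention()(torch.rand(1, 32, 40, 40)).shape)  # torch.Size([1, 32, 40, 40])
```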

Keyword :

CNN; Key information perception; Local feature enhancement; Progressive feature aggregation network; Super-resolution; Transformer

Cite:

GB/T 7714 Chen, Liqiong , Yang, Xiangkun , Wang, Shu et al. PFAN: progressive feature aggregation network for lightweight image super-resolution [J]. | VISUAL COMPUTER , 2025 , 41 (11) : 8431-8450 .
MLA Chen, Liqiong et al. "PFAN: progressive feature aggregation network for lightweight image super-resolution" . | VISUAL COMPUTER 41 . 11 (2025) : 8431-8450 .
APA Chen, Liqiong , Yang, Xiangkun , Wang, Shu , Shen, Ying , Wu, Jing , Huang, Feng et al. PFAN: progressive feature aggregation network for lightweight image super-resolution . | VISUAL COMPUTER , 2025 , 41 (11) , 8431-8450 .

Version :

PFAN: progressive feature aggregation network for lightweight image super-resolution EI
Journal Article | 2025, 41 (11), 8431-8450 | Visual Computer
PFAN: progressive feature aggregation network for lightweight image super-resolution Scopus
Journal Article | 2025, 41 (11), 8431-8450 | Visual Computer
Hierarchical Attention Siamese Network for Thermal Infrared Target Tracking SCIE
Journal Article | 2024, 73 | IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
WoS CC Cited Count: 1

Abstract :

Thermal infrared (TIR) target tracking is an important topic in the computer vision area. TIR images are not affected by ambient light and have strong environmental adaptability, making them widely used in battlefield perception, video surveillance, and assisted driving. However, TIR target tracking faces problems such as relatively insufficient information and a lack of target texture information, which significantly affect the tracking accuracy of TIR tracking methods. To solve these problems, we propose a TIR target tracking method based on a Siamese network with a hierarchical attention mechanism (termed SiamHAN). Specifically, the CIoU loss is introduced to make full use of the regression box information and compute the loss function more accurately. The global context network (GCNet) attention mechanism is introduced to reconstruct the feature extraction structure so as to better capture the fine-grained information of TIR images. Meanwhile, the ECANet attention mechanism is used to fuse the hierarchical features of the Siamese backbone network, so that the feature information of the multilayer backbone can be fully utilized to represent the target. On LSOTB-TIR, the hierarchical attention Siamese network achieved a 2.9% increase in success rate and a 4.3% increase in precision relative to the baseline tracker. Experiments show that the proposed SiamHAN method achieves competitive tracking results on the TIR testing datasets.
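
The CIoU loss named in the abstract is a standard formulation; a minimal PyTorch version for boxes in (x1, y1, x2, y2) form is sketched below (how it is wired into SiamHAN's regression branch is not shown here).

```python
# Complete-IoU (CIoU) loss: IoU penalised by normalised centre distance and an
# aspect-ratio consistency term.
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    # intersection / union
    iw = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(min=0)
    ih = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(min=0)
    inter = iw * ih
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # squared centre distance over squared diagonal of the smallest enclosing box
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    rho2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2
            + (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (torch.atan((target[:, 2] - target[:, 0]) / (target[:, 3] - target[:, 1] + eps))
                              - torch.atan((pred[:, 2] - pred[:, 0]) / (pred[:, 3] - pred[:, 1] + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return (1 - iou + rho2 / (cw ** 2 + ch ** 2 + eps) + alpha * v).mean()

print(ciou_loss(torch.tensor([[0., 0., 10., 10.]]), torch.tensor([[1., 1., 11., 11.]])))
```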

Keyword :

Accuracy; Attention mechanism; Convolution; Feature extraction; feature fusion; Interference; Siamese network; Support vector machines; Target tracking; thermal infrared (TIR) target tracking; Training

Cite:

GB/T 7714 Yuan, Di , Liao, Donghai , Huang, Feng et al. Hierarchical Attention Siamese Network for Thermal Infrared Target Tracking [J]. | IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT , 2024 , 73 .
MLA Yuan, Di et al. "Hierarchical Attention Siamese Network for Thermal Infrared Target Tracking" . | IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT 73 (2024) .
APA Yuan, Di , Liao, Donghai , Huang, Feng , Qiu, Zhaobing , Shu, Xiu , Tian, Chunwei et al. Hierarchical Attention Siamese Network for Thermal Infrared Target Tracking . | IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT , 2024 , 73 .

Version :

Hierarchical Attention Siamese Network for Thermal Infrared Target Tracking Scopus
Journal Article | 2024, 73 | IEEE Transactions on Instrumentation and Measurement
Hierarchical Attention Siamese Network for Thermal Infrared Target Tracking EI
Journal Article | 2024, 73 | IEEE Transactions on Instrumentation and Measurement
Robust Unsupervised Multifeature Representation for Infrared Small Target Detection SCIE
Journal Article | 2024, 17, 10306-10323 | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING

Abstract :

Infrared small target detection is critical to infrared search and tracking systems. However, accurate and robust detection remains challenging due to the scarcity of target information and the complexity of clutter interference. Existing methods have some limitations in feature representation, leading to poor detection performance in complex scenes. Especially when there are sharp edges near the target or in cluster multitarget detection, the "target suppression" phenomenon tends to occur. To address this issue, we propose a robust unsupervised multifeature representation (RUMFR) method for infrared small target detection. On the one hand, robust unsupervised spatial clustering (RUSC) is designed to improve the accuracy of feature extraction; on the other hand, pixel-level multiple feature representation is proposed to fully utilize the target detail information. Specifically, we first propose the center-weighted interclass difference measure (CWIDM) with a trilayer design for fast candidate target extraction. Note that CWIDM also guides the parameter settings of RUSC. Then, the RUSC-based model is constructed to accurately extract target features in complex scenes. By designing the parameter adaptive strategy and iterative clustering strategy, RUSC can robustly segment cluster multitargets from complex backgrounds. Finally, RUMFR that fuses pixel-level contrast, distribution, and directional gradient features is proposed for better target representation and clutter suppression. Extensive experimental results show that our method has stronger feature representation capability and achieves better detection performance than several state-of-the-art methods.
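
A pixel-level local-contrast feature of the kind fused here can be sketched as below; the inner/outer window sizes and the uniform-filter implementation are illustrative assumptions, not the paper's CWIDM definition.

```python
# Local contrast: score each pixel by how much its small centre window exceeds
# the mean of the surrounding background ring.
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, inner=3, outer=9):
    img = img.astype(np.float64)
    center_mean = uniform_filter(img, size=inner)
    outer_mean = uniform_filter(img, size=outer)
    n_in, n_out = inner ** 2, outer ** 2
    # ring mean = (outer-window sum - inner-window sum) / ring pixel count
    ring_mean = (outer_mean * n_out - center_mean * n_in) / (n_out - n_in)
    return np.clip(center_mean - ring_mean, 0, None)     # keep positive contrast only

frame = np.random.rand(64, 64) * 0.2
frame[30:33, 40:43] += 0.8                                # synthetic small target
cmap = local_contrast(frame)
print(np.unravel_index(cmap.argmax(), cmap.shape))        # peak near (31, 41)
```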

Keyword :

Clutter; Feature extraction; Fuses; Image edge detection; Infrared small target detection; Noise; Object detection; pixel-level multifeature representation; robust unsupervised spatial clustering (RUSC); Sparse matrices; "target suppression" phenomenon

Cite:

GB/T 7714 Chen, Liqiong , Wu, Tong , Zheng, Shuyuan et al. Robust Unsupervised Multifeature Representation for Infrared Small Target Detection [J]. | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING , 2024 , 17 : 10306-10323 .
MLA Chen, Liqiong et al. "Robust Unsupervised Multifeature Representation for Infrared Small Target Detection" . | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 17 (2024) : 10306-10323 .
APA Chen, Liqiong , Wu, Tong , Zheng, Shuyuan , Qiu, Zhaobing , Huang, Feng . Robust Unsupervised Multifeature Representation for Infrared Small Target Detection . | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING , 2024 , 17 , 10306-10323 .

Version :

Robust Unsupervised Multifeature Representation for Infrared Small Target Detection EI
Journal Article | 2024, 17, 10306-10323 | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Robust Unsupervised Multi-Feature Representation for Infrared Small Target Detection Scopus
Journal Article | 2024, 17, 1-18 | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Infrared low-altitude and slow-speed small target detection via fusion of target sparsity and motion saliency SCIE
Journal Article | 2024, 142 | INFRARED PHYSICS & TECHNOLOGY

Abstract :

Infrared (IR) small target detection plays a significant role in IR early warning and UAV surveillance. However, in the low-altitude slow-speed small (LSS) target detection scene, existing algorithms cannot effectively suppress high-contrast corners and sparse edges in the low-altitude background, resulting in many false alarms. To solve this problem, we propose an IR LSS target detection method based on the fusion of target sparsity and motion saliency (TSMS). In the low-rank sparse model, we introduce a robust dual-window gradient operator to construct a fine local prior, which avoids the influence of highlighted edges and corners. The Geman norm is used to approximate the background rank to accurately estimate the background and effectively extract sparse targets. Then, a motion saliency model based on inter-frame local matching is constructed to accurately extract the inter-frame features of small targets. Finally, the real LSS target is obtained by fusing target sparsity and motion saliency. Experiments indicate that, compared with existing advanced methods, the proposed method is more robust and can effectively detect LSS targets against complex low-altitude backgrounds.
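
The sparsity-plus-motion fusion idea can be illustrated with a simple sketch; the grey-opening sparsity cue and frame-difference motion cue below are stand-ins for the paper's low-rank sparse model and local-matching saliency, chosen only to show how the two cues are combined.

```python
# Fuse a per-frame sparsity cue with an inter-frame motion cue: candidates must
# be both small/bright (sparse residual) and changing between frames.
import numpy as np
from scipy.ndimage import grey_opening, uniform_filter

def sparsity_map(frame, struct=7):
    # top-hat style residual: bright, small structures left after grey opening
    return np.clip(frame - grey_opening(frame, size=struct), 0, None)

def motion_map(frame, prev_frame, smooth=3):
    return uniform_filter(np.abs(frame - prev_frame), size=smooth)

def fuse(frame, prev_frame):
    s, m = sparsity_map(frame), motion_map(frame, prev_frame)
    return (s / (s.max() + 1e-12)) * (m / (m.max() + 1e-12))   # both cues required

prev = np.random.rand(64, 64) * 0.1
curr = prev.copy()
curr[20:22, 20:22] += 0.9                                      # moving small target
print(np.unravel_index(fuse(curr, prev).argmax(), (64, 64)))   # near (20, 20)
```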

Keyword :

Infrared (IR) image; Low-rank sparse; Motion saliency; Prior weight; Small target detection

Cite:

GB/T 7714 Wu, Lang , Ma, Yong , Huang, Jun et al. Infrared low-altitude and slow-speed small target detection via fusion of target sparsity and motion saliency [J]. | INFRARED PHYSICS & TECHNOLOGY , 2024 , 142 .
MLA Wu, Lang et al. "Infrared low-altitude and slow-speed small target detection via fusion of target sparsity and motion saliency" . | INFRARED PHYSICS & TECHNOLOGY 142 (2024) .
APA Wu, Lang , Ma, Yong , Huang, Jun , Qiu, Zhaobing , Fan, Fan . Infrared low-altitude and slow-speed small target detection via fusion of target sparsity and motion saliency . | INFRARED PHYSICS & TECHNOLOGY , 2024 , 142 .

Version :

Infrared low-altitude and slow-speed small target detection via fusion of target sparsity and motion saliency Scopus
Journal Article | 2024, 142 | Infrared Physics and Technology
Infrared low-altitude and slow-speed small target detection via fusion of target sparsity and motion saliency EI
Journal Article | 2024, 142 | Infrared Physics and Technology
Search region updating with hierarchical feature fusion for accurate thermal infrared tracking SCIE
Journal Article | 2024, 361 (18) | JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS

Abstract :

Due to their resilience against lighting variations, thermal infrared (TIR) images demonstrate robust adaptability in diverse environments, enabling effective object tracking even in intricate scenarios. Nevertheless, TIR target tracking encounters challenges such as fast target motion and interference from visually similar objects, substantially compromising the tracking precision of TIR trackers. To surmount these challenges, we propose a method grounded in search region updating and hierarchical feature fusion, tailored for the precise TIR target-tracking task. Specifically, to address the issue of fast motion causing the target to depart from the search region, we propose to update the current search region by leveraging historical frame information. Additionally, we employ a hierarchical feature fusion strategy to contend with interference from visually similar objects in the tracking scenario. This strategy enhances the ability to model and represent the target more accurately, thereby elevating the tracker's capacity to discriminate between the target and similar objects. Furthermore, to tackle the challenge of inaccurate estimation of target bounding boxes, we introduce an enhanced Intersection over Union (IoU) loss function, an improvement that facilitates more precise prediction of target bounding boxes and superior target localization. Extensive experiments substantiate that our tracker exhibits a commendable level of competitiveness when compared to other trackers.
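
The search-region update from historical frames can be pictured as simple motion extrapolation; the linear-velocity prediction below is an assumption used for illustration, not the paper's exact update rule.

```python
# Extrapolate the next search-region centre from the centres of recent frames
# so a fast-moving target stays inside the search window.
import numpy as np

def predict_search_center(history, window=3):
    """history: list of (cx, cy) target centres from past frames."""
    pts = np.asarray(history[-window:], dtype=np.float64)
    if len(pts) < 2:
        return tuple(pts[-1])
    velocity = np.mean(np.diff(pts, axis=0), axis=0)     # average per-frame motion
    return tuple(pts[-1] + velocity)                      # extrapolate one frame ahead

centres = [(100, 80), (108, 83), (117, 86)]
print(predict_search_center(centres))                     # approx (125.5, 89.0)
```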

Keyword :

Hierarchical feature fusion; IoU loss; Search region updating; TIR target tracking

Cite:

GB/T 7714 Shu, Xiu , Huang, Feng , Qiu, Zhaobing et al. Search region updating with hierarchical feature fusion for accurate thermal infrared tracking [J]. | JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS , 2024 , 361 (18) .
MLA Shu, Xiu et al. "Search region updating with hierarchical feature fusion for accurate thermal infrared tracking" . | JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS 361 . 18 (2024) .
APA Shu, Xiu , Huang, Feng , Qiu, Zhaobing , Tian, Chunwei , Liu, Qiao , Yuan, Di . Search region updating with hierarchical feature fusion for accurate thermal infrared tracking . | JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS , 2024 , 361 (18) .

Version :

Search region updating with hierarchical feature fusion for accurate thermal infrared tracking Scopus
Journal Article | 2024, 361 (18) | Journal of the Franklin Institute
Search region updating with hierarchical feature fusion for accurate thermal infrared tracking EI
Journal Article | 2024, 361 (18) | Journal of the Franklin Institute
Learning Unsupervised Cross-Domain Model for TIR Target Tracking SCIE
Journal Article | 2024, 12 (18) | MATHEMATICS

Abstract :

The limited availability of thermal infrared (TIR) training samples leads to suboptimal target representation by convolutional feature extraction networks, which adversely impacts the accuracy of TIR target tracking methods. To address this issue, we propose an unsupervised cross-domain model (UCDT) for TIR tracking. Our approach leverages labeled training samples from the RGB domain (source domain) to train a general feature extraction network. We then employ a cross-domain model to adapt this network for effective target feature extraction in the TIR domain (target domain). This cross-domain strategy addresses the challenge of limited TIR training samples effectively. Additionally, we utilize an unsupervised learning technique to generate pseudo-labels for unlabeled training samples in the source domain, which helps overcome the limitations imposed by the scarcity of annotated training data. Extensive experiments demonstrate that our UCDT tracking method outperforms existing tracking approaches on the PTB-TIR and LSOTB-TIR benchmarks.
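
The pseudo-labelling step can be illustrated generically; the confidence-threshold filter below is a common recipe and only an assumed stand-in for the paper's procedure (whose pseudo-labels are tracking annotations rather than class labels).

```python
# Generic pseudo-labelling: keep only the unlabeled samples on which the current
# model is confident, and reuse its predictions as training labels.
import numpy as np

def make_pseudo_labels(probs, threshold=0.9):
    """probs: (N, C) class probabilities from the current model."""
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = conf >= threshold                       # only confident predictions survive
    return np.flatnonzero(keep), labels[keep]

rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(3) * 0.2, size=8)    # peaky distributions -> some confident rows
idx, lbl = make_pseudo_labels(probs)
print(idx, lbl)                                    # indices kept and their pseudo-labels
```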

Keyword :

cross-domain model; feature extraction; thermal infrared tracking; unsupervised learning

Cite:

GB/T 7714 Shu, Xiu , Huang, Feng , Qiu, Zhaobing et al. Learning Unsupervised Cross-Domain Model for TIR Target Tracking [J]. | MATHEMATICS , 2024 , 12 (18) .
MLA Shu, Xiu et al. "Learning Unsupervised Cross-Domain Model for TIR Target Tracking" . | MATHEMATICS 12 . 18 (2024) .
APA Shu, Xiu , Huang, Feng , Qiu, Zhaobing , Zhang, Xinming , Yuan, Di . Learning Unsupervised Cross-Domain Model for TIR Target Tracking . | MATHEMATICS , 2024 , 12 (18) .

Version :

Learning Unsupervised Cross-Domain Model for TIR Target Tracking Scopus
Journal Article | 2024, 12 (18) | Mathematics