Research Output Search

Three-Band Spectral Camera Structure Design Based on the Topology Optimization Method SCIE
Journal Article | 2025, 15 (6) | APPLIED SCIENCES-BASEL

Abstract :

The housing and bracket are critical structural components of multispectral cameras; their mechanical properties significantly affect the stability of the optical system and the imaging quality. At the same time, their weight directly impacts the overall load capacity and functional expansion of the device. In this study, the housing and bracket structure of a three-band camera was optimized starting from the initial design. Using a combination of density-based topology optimization and multi-objective genetic algorithms in parametric optimization, redundant structures were removed to achieve a lightweight design. As a result, the total weight of the housing and bracket was reduced from 9.56 kg to 5.51 kg, a 42.4% weight reduction. In the optimized structure, under gravity, the maximum deformation along the z-axis did not exceed 7 nm, and the maximum amplification factor in the dynamic analysis was 1.42. The analysis demonstrates that the optimized housing and bracket exhibit excellent dynamic and static performance, meeting all testing requirements, and that under gravity the effects on the spot diagram and modulation transfer function are negligible. Furthermore, in a static environment, the detection range across all spectral bands reaches 18.5 km, satisfying the mission requirements. This optimized design provides a strong reference for the lightweight design of future optical equipment.
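
As a minimal illustration of the multi-objective trade-off the abstract describes (mass versus z-axis deformation), the sketch below screens candidate designs for Pareto optimality. It is not the authors' optimization code; apart from the reported masses and the 7 nm bound, the candidate values are invented for illustration.

```python
# Minimal sketch (not the authors' code): Pareto screening of candidate
# housing/bracket designs by two objectives produced by a parametric FEA sweep.
import numpy as np

# Each row: (mass_kg, max_z_deformation_nm) for one candidate design.
candidates = np.array([
    [9.56, 4.0],   # initial design mass from the abstract; deformation value hypothetical
    [6.10, 5.5],   # hypothetical intermediate candidate
    [5.51, 7.0],   # optimized design reported in the abstract
    [5.20, 12.0],  # hypothetical lighter but too-compliant candidate
])

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated points (both objectives minimized)."""
    keep = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        for j in range(len(points)):
            if i != j and np.all(points[j] <= points[i]) and np.any(points[j] < points[i]):
                keep[i] = False
                break
    return keep

mask = pareto_front(candidates)
print("Pareto-optimal designs (mass_kg, max_dz_nm):")
print(candidates[mask])
```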

Keyword :

finite element analysis; multispectral camera; structural optimization; structure; topology optimization

Cite:

GB/T 7714 Hu, Kai , Wan, Yuzhu , Guo, Jialong et al. Three-Band Spectral Camera Structure Design Based on the Topology Optimization Method [J]. | APPLIED SCIENCES-BASEL , 2025 , 15 (6) .
MLA Hu, Kai et al. "Three-Band Spectral Camera Structure Design Based on the Topology Optimization Method" . | APPLIED SCIENCES-BASEL 15 . 6 (2025) .
APA Hu, Kai , Wan, Yuzhu , Guo, Jialong , Zou, Chunbo , Zheng, Xiangtao . Three-Band Spectral Camera Structure Design Based on the Topology Optimization Method . | APPLIED SCIENCES-BASEL , 2025 , 15 (6) .

Context-Aware Local-Global Semantic Alignment for Remote Sensing Image-Text Retrieval SCIE
Journal Article | 2025, 63 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

Remote sensing image-text retrieval (RSITR) is a cross-modal task that integrates visual and textual information, attracting significant attention in remote sensing research. Remote sensing images typically contain complex scenes with abundant details, presenting significant challenges for accurate semantic alignment between images and texts. Despite advances in the field, achieving precise alignment in such intricate contexts remains a major hurdle. To address this challenge, this article introduces a novel context-aware local-global semantic alignment (CLGSA) method. The proposed method consists of two key modules: the local key feature alignment (LKFA) module and the cross-sample global semantic alignment (CGSA) module. The LKFA module incorporates a local image masking and reconstruction task to improve the alignment between image and text features. Specifically, this module masks certain regions of the image and uses text context information to guide the reconstruction of the masked areas, enhancing the alignment of local semantics and ensuring more accurate retrieval of region-specific content. The CGSA module employs a hard sample triplet loss to improve global semantic consistency. By prioritizing difficult samples during training, this module refines feature space distributions, helping the model better capture global semantics across the entire image-text pair. A series of extensive experiments demonstrates the effectiveness of the proposed method. The method achieves an mR score of 32.07% on the RSICD dataset and 46.63% on the RSITMD dataset, outperforming baseline methods and confirming the robustness and accuracy of the approach.
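
The hard-sample triplet loss used by the CGSA module is described only at a high level; the following is a minimal sketch of one common formulation (hardest in-batch negatives over cosine similarities), written in PyTorch and not taken from the paper's code.

```python
# Minimal sketch (assumed formulation): bidirectional triplet loss with the
# hardest in-batch negative for each image and each text embedding.
import torch
import torch.nn.functional as F

def hard_triplet_loss(img_emb, txt_emb, margin=0.2):
    img_emb = F.normalize(img_emb, dim=1)
    txt_emb = F.normalize(txt_emb, dim=1)
    sim = img_emb @ txt_emb.t()                    # (B, B) cosine similarities
    pos = sim.diag()                               # matched image-text pairs
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg_t = sim.masked_fill(mask, float('-inf')).max(dim=1).values  # hardest text per image
    neg_i = sim.masked_fill(mask, float('-inf')).max(dim=0).values  # hardest image per text
    loss_i = F.relu(margin + neg_t - pos).mean()
    loss_t = F.relu(margin + neg_i - pos).mean()
    return loss_i + loss_t

# Usage with random embeddings (batch of 8, dimension 512):
print(hard_triplet_loss(torch.randn(8, 512), torch.randn(8, 512)))
```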

Keyword :

Accuracy; Cross modal retrieval; Feature extraction; Hard sample triplet loss; Image reconstruction; local image masking; Remote sensing; remote sensing image-text retrieval (RSITR); semantic alignment; Semantics; Sensors; text-guided reconstruction; Training; Transformers; Visualization

Cite:

GB/T 7714 Chen, Xiumei , Zheng, Xiangtao , Lu, Xiaoqiang . Context-Aware Local-Global Semantic Alignment for Remote Sensing Image-Text Retrieval [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 .
MLA Chen, Xiumei et al. "Context-Aware Local-Global Semantic Alignment for Remote Sensing Image-Text Retrieval" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025) .
APA Chen, Xiumei , Zheng, Xiangtao , Lu, Xiaoqiang . Context-Aware Local-Global Semantic Alignment for Remote Sensing Image-Text Retrieval . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 .

Efficient Prompt Tuning of Large Vision-Language Model for Fine-Grained Ship Classification SCIE
Journal Article | 2025, 63 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

Remote-sensing fine-grained ship classification (RS-FGSC) poses a significant challenge due to the high similarity between classes and the limited availability of labeled data, limiting the effectiveness of traditional supervised classification methods. Recent advancements in large pretrained vision-language models (VLMs) have demonstrated impressive capabilities in few-shot or zero-shot learning, particularly in understanding image content. This study delves into harnessing the potential of VLMs to enhance classification accuracy for unseen ship categories, which holds considerable significance in scenarios with restricted data due to cost or privacy constraints. Directly fine-tuning VLMs for RS-FGSC often encounters the challenge of overfitting the seen classes, resulting in suboptimal generalization to unseen classes, which highlights the difficulty in differentiating complex backgrounds and capturing distinct ship features. To address these issues, we introduce a novel prompt tuning technique that employs a hierarchical, multigranularity prompt design. Our approach integrates remote sensing ship priors through bias terms, learned from a small trainable network. This strategy enhances the model's generalization capabilities while improving its ability to discern intricate backgrounds and learn discriminative ship features. Furthermore, we contribute to the field by introducing a comprehensive dataset, FGSCM-52, significantly expanding existing datasets with more extensive data and detailed annotations for less common ship classes. Extensive experimental evaluations demonstrate the superiority of our proposed method over current state-of-the-art techniques. The source code will be made publicly available.
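
A hedged sketch of the kind of prompt design the abstract outlines: learnable context tokens shifted by bias terms that a small trainable network produces from a ship-prior embedding. The module names, dimensions, and prior representation below are assumptions for illustration, not the released implementation.

```python
# Minimal sketch (assumed structure): learnable prompt context plus bias terms
# from a small trainable network, concatenated with class-name embeddings before
# a frozen text encoder would consume them.
import torch
import torch.nn as nn

class BiasedPrompt(nn.Module):
    def __init__(self, n_ctx=8, dim=512, prior_dim=64):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)   # learnable context tokens
        self.bias_net = nn.Sequential(                            # small trainable network
            nn.Linear(prior_dim, 128), nn.ReLU(), nn.Linear(128, dim)
        )

    def forward(self, prior, class_emb):
        # prior: (prior_dim,) remote-sensing ship prior; class_emb: (n_cls, dim)
        bias = self.bias_net(prior)                               # (dim,)
        ctx = self.ctx + bias                                     # bias-shifted context
        n_cls = class_emb.size(0)
        ctx = ctx.unsqueeze(0).expand(n_cls, -1, -1)              # (n_cls, n_ctx, dim)
        return torch.cat([ctx, class_emb.unsqueeze(1)], dim=1)    # one prompt per class

prompts = BiasedPrompt()(torch.randn(64), torch.randn(10, 512))
print(prompts.shape)   # torch.Size([10, 9, 512])
```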

Keyword :

Adaptation models; Computational modeling; Data models; Feature extraction; Generalization; Marine vehicles; Overfitting; prompt tuning; Remote sensing; remote sensing image; ship classification; Testing; Training; Tuning; vision-language models (VLMs)

Cite:

GB/T 7714 Lan, Long , Wang, Fengxiang , Zheng, Xiangtao et al. Efficient Prompt Tuning of Large Vision-Language Model for Fine-Grained Ship Classification [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 .
MLA Lan, Long et al. "Efficient Prompt Tuning of Large Vision-Language Model for Fine-Grained Ship Classification" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025) .
APA Lan, Long , Wang, Fengxiang , Zheng, Xiangtao , Wang, Zengmao , Liu, Xinwang . Efficient Prompt Tuning of Large Vision-Language Model for Fine-Grained Ship Classification . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 .

Integrating Local-Global Structural Interaction Using Siamese Graph Neural Network for Urban Land Use Change Detection From VHR Satellite Images SCIE
Journal Article | 2024, 62 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

Detecting land use changes in urban areas from very-high-resolution (VHR) satellite images presents two primary challenges: 1) traditional methods focus mainly on comparing changes in land cover-related features, which are insufficient for detecting changes in land use and are prone to pseudo-changes caused by illumination differences, seasonal variations, and subtle structural changes; and 2) spatial structural information, which is characterized by topological relationships among land cover objects, is crucial for urban land use classification but remains underexplored in change detection. To address these challenges, this study developed a local-global structural interaction network (LGSI-Net) based on a Siamese graph neural network (SGNN) that integrates high-level structural and semantic information to detect urban land use changes from bitemporal VHR images. We developed both a local structural feature interaction module (LSIM) and a global structural feature interaction module (GSIM) to enhance the representation of bitemporal structural features at the global scene graph and local object node levels. Experiments on the publicly available MtS-WH dataset and two generated datasets, LUCD-FZ and LUCD-HF, show that the proposed method outperforms the existing bag of visual words (BoVW)-based method and CorrFusionNet. Furthermore, we evaluated the detection performance for different semantic feature extraction strategies and structural feature extraction backbones. The results demonstrate that the proposed method, which integrates high-level semantic and graph isomorphism network (GIN)-derived structural features, achieves the best performance. The method trained on the LUCD-FZ dataset was successfully transferred to the LUCD-HF dataset with different urban landscapes, indicating its effectiveness in detecting land use changes from VHR satellite images, even in areas with relatively large imbalances between changed and unchanged samples.
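
To make the Siamese idea concrete, here is a minimal dense-adjacency GIN encoder with shared weights applied to two scene graphs and compared at the graph level. It is an assumption-level illustration, not LGSI-Net, and it omits the LSIM/GSIM interaction modules.

```python
# Minimal sketch: shared-weight (Siamese) GIN-style encoder over dense adjacency
# matrices; the L2 distance between graph readouts serves as a change score.
import torch
import torch.nn as nn

class DenseGIN(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, adj):
        # GIN update: h_v = MLP((1 + eps) * x_v + sum of neighbour features)
        return self.mlp((1 + self.eps) * x + adj @ x)

class SiameseGraphEncoder(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.gin1, self.gin2 = DenseGIN(dim), DenseGIN(dim)

    def encode(self, x, adj):
        h = torch.relu(self.gin1(x, adj))
        return self.gin2(h, adj).sum(dim=0)      # graph-level readout

    def forward(self, x1, adj1, x2, adj2):
        g1, g2 = self.encode(x1, adj1), self.encode(x2, adj2)   # shared weights
        return torch.norm(g1 - g2, p=2)          # change score between bitemporal graphs

model = SiameseGraphEncoder()
print(model(torch.randn(5, 32), torch.eye(5), torch.randn(6, 32), torch.eye(6)))
```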

Keyword :

Local-global structural interaction; Siamese graph neural networks (SGNNs); urban land use change detection; very-high-resolution (VHR) satellite images

Cite:

GB/T 7714 Lou, Kangkai , Li, Mengmeng , Li, Fashuai et al. Integrating Local-Global Structural Interaction Using Siamese Graph Neural Network for Urban Land Use Change Detection From VHR Satellite Images [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2024 , 62 .
MLA Lou, Kangkai et al. "Integrating Local-Global Structural Interaction Using Siamese Graph Neural Network for Urban Land Use Change Detection From VHR Satellite Images" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024) .
APA Lou, Kangkai , Li, Mengmeng , Li, Fashuai , Zheng, Xiangtao . Integrating Local-Global Structural Interaction Using Siamese Graph Neural Network for Urban Land Use Change Detection From VHR Satellite Images . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2024 , 62 .

Domain Mapping Network for Remote Sensing Cross-Domain Few-Shot Classification SCIE
Journal Article | 2024, 62 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

It is a challenging task to recognize novel categories with only a few labeled remote-sensing images. Currently, meta-learning solves the problem by learning prior knowledge from another dataset whose classes are disjoint. However, the existing methods assume the training dataset comes from the same domain as the test dataset. For remote-sensing images, test datasets may come from different domains, and it is impossible to collect a training dataset for each domain. Meta-learning and transfer learning are widely used to tackle few-shot classification and cross-domain classification, respectively. However, it is difficult to recognize novel categories from various domains with only a few images. In this article, a domain mapping network (DMN) is proposed to cope with few-shot classification under domain shift. DMN trains an efficient few-shot classification model on the source domain and then adapts the model to the target domain. Specifically, dual autoencoders are exploited to fit the source and target domain distributions. First, DMN learns an autoencoder on the source domain to fit the source domain distribution. Then, a target autoencoder is initialized from the source domain autoencoder and further updated with a few target images. To ensure distribution alignment, cycle-consistency losses are proposed to jointly train the source autoencoder and target autoencoder. Extensive experiments are conducted to validate the generalizability and superiority of the proposed method.
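
One plausible reading of the dual-autoencoder and cycle-consistency design, sketched below in PyTorch. The exact losses and architectures in DMN are not specified in the abstract, so every layer size and loss term here is an illustrative assumption.

```python
# Minimal sketch: two autoencoders (source and target), reconstruction losses,
# and a cycle term that maps a source latent through the target decoder/encoder
# and asks for it back. The target autoencoder is initialized from the source one.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_ae(dim=256, latent=64):
    enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, latent))
    dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim))
    return enc, dec

enc_s, dec_s = make_ae()
enc_t, dec_t = make_ae()
enc_t.load_state_dict(enc_s.state_dict())   # target AE starts from the source AE
dec_t.load_state_dict(dec_s.state_dict())

def dmn_losses(x_s, x_t):
    rec_s = F.mse_loss(dec_s(enc_s(x_s)), x_s)          # source reconstruction
    rec_t = F.mse_loss(dec_t(enc_t(x_t)), x_t)          # target reconstruction (few shots)
    z_s = enc_s(x_s)
    cyc = F.mse_loss(enc_t(dec_t(z_s)), z_s)            # cycle-consistency term
    return rec_s + rec_t + cyc

print(dmn_losses(torch.randn(16, 256), torch.randn(4, 256)))
```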

Keyword :

Adaptation models; Cross-domain classification; few-shot classification; Image recognition; Measurement; meta-learning; Metalearning; Remote sensing; remote sensing scene classification; Task analysis; Training; transfer learning

Cite:

GB/T 7714 Lu, Xiaoqiang , Gong, Tengfei , Zheng, Xiangtao . Domain Mapping Network for Remote Sensing Cross-Domain Few-Shot Classification [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2024 , 62 .
MLA Lu, Xiaoqiang et al. "Domain Mapping Network for Remote Sensing Cross-Domain Few-Shot Classification" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024) .
APA Lu, Xiaoqiang , Gong, Tengfei , Zheng, Xiangtao . Domain Mapping Network for Remote Sensing Cross-Domain Few-Shot Classification . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2024 , 62 .

Ship Target Detection Based on Multi-Model Collaboration (基于多模型协同的舰船目标检测)
Journal Article | 2024, 45 (14), 73-83 | 航空学报

Abstract :

With continuous advances in remote sensing imaging technology, ship target detection in remote sensing images has become a key means of ensuring the safety and efficiency of maritime transportation, and it is critical to maritime traffic, environmental protection, and national security. However, because ship targets vary widely in scale and appear against complex backgrounds, existing single-detector methods rely heavily on their training data and cannot adapt to ships of highly variable scale. This paper proposes a multi-model collaborative training framework that uses several pre-trained ship detection models as auxiliary networks and, through knowledge transfer, assists in optimizing the main network on the target data. First, a ternary relation constraint establishes the transfer of distribution knowledge from the auxiliary networks to the main network; second, a soft-label guidance strategy integrates the label knowledge of the auxiliary networks to improve ship detection accuracy. Experimental results show that, compared with existing mainstream methods, the proposed method achieves better performance on the DOTA and xView datasets, overcoming the limitations of a single model and offering a new approach to object detection in remote sensing images.
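
A minimal sketch of the soft-label guidance idea (not the paper's detector): classification logits from several frozen auxiliary models are averaged into soft targets that supervise the main network through a temperature-scaled KL term. The temperature and aggregation scheme are assumptions.

```python
# Minimal sketch: soft-label guidance from multiple auxiliary models via
# temperature-scaled KL divergence against the main network's class logits.
import torch
import torch.nn.functional as F

def soft_label_loss(main_logits, aux_logits_list, temperature=2.0):
    # Average auxiliary predictions into a soft target distribution.
    soft_target = torch.stack(
        [F.softmax(l / temperature, dim=1) for l in aux_logits_list]
    ).mean(dim=0)
    log_pred = F.log_softmax(main_logits / temperature, dim=1)
    return F.kl_div(log_pred, soft_target, reduction='batchmean') * temperature ** 2

main = torch.randn(8, 5)                      # main-network class logits (8 boxes, 5 classes)
aux = [torch.randn(8, 5) for _ in range(3)]   # three auxiliary detectors' logits
print(soft_label_loss(main, aux))
```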

Keyword :

multi-scale representation; multi-model collaboration; object detection; knowledge fusion; ship recognition

Cite:

GB/T 7714 肖欣林 , 施伟超 , 郑向涛 et al. 基于多模型协同的舰船目标检测 [J]. | 航空学报 , 2024 , 45 (14) : 73-83 .
MLA 肖欣林 et al. "基于多模型协同的舰船目标检测" . | 航空学报 45 . 14 (2024) : 73-83 .
APA 肖欣林 , 施伟超 , 郑向涛 , 高跃明 , 卢孝强 . 基于多模型协同的舰船目标检测 . | 航空学报 , 2024 , 45 (14) , 73-83 .

Deep Feature Reconstruction Learning for Open-Set Classification of Remote-Sensing Imagery SCIE
Journal Article | 2023, 20 | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS

Abstract :

Existing remote-sensing scene image (RSSI) classification methods usually rely on the static closed-set assumption that testing samples do not belong to unknown classes. However, practical applications usually pose an open-set classification problem, meaning that RSSIs from unknown classes will appear in the testing set. Most existing methods tend to forcibly misclassify RSSIs of unknown classes as known classes, resulting in poor practical performance. In this letter, a deep feature reconstruction learning (DFRL) framework is proposed for the open-set classification of RSSIs (OSC-RSSIs). The proposed DFRL unifies discriminative feature learning and feature reconstruction into an end-to-end network. First, a feature extraction module is utilized to project raw input data from the image space to the feature space to extract deep features. Then, the deep features are fed to a deep feature reconstruction module for distinguishing known and unknown classes based on feature-level reconstruction errors. The feature-level reconstruction can effectively suppress the interference of complex backgrounds. In addition, a sparse regularization is introduced to improve the discrimination of the image representation. Experiments on three RSSI datasets demonstrate the effectiveness of DFRL for OSC-RSSIs.
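
A hedged sketch of the feature-level rejection rule the abstract describes: an autoencoder over deep features, with samples whose reconstruction error exceeds a threshold labeled as unknown. The network sizes and threshold are assumptions, not DFRL's.

```python
# Minimal sketch: open-set decision by feature-level reconstruction error.
# A sample is assigned its argmax class unless its reconstruction error is
# above a threshold, in which case it is rejected as "unknown" (label -1).
import torch
import torch.nn as nn
import torch.nn.functional as F

recon = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 512))  # feature autoencoder

def classify_open_set(feat, classifier, threshold=0.5):
    err = F.mse_loss(recon(feat), feat, reduction='none').mean(dim=1)  # per-sample error
    pred = classifier(feat).argmax(dim=1)
    pred[err > threshold] = -1          # -1 denotes an unknown class
    return pred

classifier = nn.Linear(512, 10)         # 10 known scene classes
print(classify_open_set(torch.randn(4, 512), classifier))
```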

Keyword :

Deep learning; feature reconstruction; open-set classification; remote-sensing imagery

Cite:

GB/T 7714 Sun, Hao , Li, Qianqian , Yu, Jie et al. Deep Feature Reconstruction Learning for Open-Set Classification of Remote-Sensing Imagery [J]. | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS , 2023 , 20 .
MLA Sun, Hao et al. "Deep Feature Reconstruction Learning for Open-Set Classification of Remote-Sensing Imagery" . | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS 20 (2023) .
APA Sun, Hao , Li, Qianqian , Yu, Jie , Zhou, Dongbo , Chen, Wenjing , Zheng, Xiangtao et al. Deep Feature Reconstruction Learning for Open-Set Classification of Remote-Sensing Imagery . | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS , 2023 , 20 .

Identity Feature Disentanglement for Visible-Infrared Person Re-Identification SCIE
Journal Article | 2023, 19 (6) | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS

Abstract :

The visible-infrared person re-identification (VI-ReID) task aims to retrieve persons across cameras of different spectra (i.e., visible and infrared images). The biggest challenge of VI-ReID is the huge cross-modal discrepancy caused by the different imaging mechanisms. Many VI-ReID methods have been proposed that embed person images of different modalities into a shared feature space to narrow the cross-modal discrepancy. However, these methods ignore the purification of identity features, which results in identity features containing different modal information and failing to align well. In this article, an identity feature disentanglement method is proposed to disentangle the identity features from identity-irrelevant information, such as pose and modality. Specifically, images of different modalities are first processed to extract shared features that preliminarily reduce the cross-modal discrepancy. Then the extracted feature of each image is disentangled into a latent identity variable and an identity-irrelevant variable. To encourage the latent identity variable to contain as much identity information and as little identity-irrelevant information as possible, an ID-discriminative loss and an ID-swapping reconstruction process are additionally designed. Extensive quantitative and qualitative experiments on two popular public VI-ReID datasets, RegDB and SYSU-MM01, demonstrate the efficacy and superiority of the proposed method.
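
A minimal sketch, under assumed architecture choices, of splitting a shared feature into identity and identity-irrelevant codes and enforcing an ID-swapping reconstruction; it illustrates the idea only and is not the authors' model.

```python
# Minimal sketch: disentangle a shared feature into an identity code and an
# identity-irrelevant code; swapping identity codes between two modalities of
# the same person should still reconstruct each feature (L1 swap loss).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Disentangler(nn.Module):
    def __init__(self, dim=256, id_dim=128):
        super().__init__()
        self.id_enc = nn.Linear(dim, id_dim)          # latent identity variable
        self.irr_enc = nn.Linear(dim, dim - id_dim)   # identity-irrelevant variable
        self.dec = nn.Linear(dim, dim)                # reconstruct the shared feature

    def forward(self, f):
        return self.id_enc(f), self.irr_enc(f)

    def reconstruct(self, id_code, irr_code):
        return self.dec(torch.cat([id_code, irr_code], dim=1))

net = Disentangler()
f_vis, f_ir = torch.randn(8, 256), torch.randn(8, 256)   # same identities, two modalities
id_v, irr_v = net(f_vis)
id_r, irr_r = net(f_ir)
# ID-swapping reconstruction: swapped identity codes should still rebuild the features.
swap_loss = F.l1_loss(net.reconstruct(id_r, irr_v), f_vis) + \
            F.l1_loss(net.reconstruct(id_v, irr_r), f_ir)
print(swap_loss)
```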

Keyword :

cross-modal; deep learning; feature disentanglement; Visible-infrared person re-identification

Cite:

GB/T 7714 Chen, Xiumei , Zheng, Xiangtao , Lu, Xiaoqiang . Identity Feature Disentanglement for Visible-Infrared Person Re-Identification [J]. | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS , 2023 , 19 (6) .
MLA Chen, Xiumei et al. "Identity Feature Disentanglement for Visible-Infrared Person Re-Identification" . | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS 19 . 6 (2023) .
APA Chen, Xiumei , Zheng, Xiangtao , Lu, Xiaoqiang . Identity Feature Disentanglement for Visible-Infrared Person Re-Identification . | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS , 2023 , 19 (6) .

Hyperspectral and LiDAR Representation With Spectral-Spatial Graph Network SCIE
Journal Article | 2023, 16, 9446-9460 | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING
WoS CC Cited Count: 2

Abstract :

Land cover analysis has received significant attention in remote sensing-related fields. To take advantage of multimodal data, hyperspectral images (HSI) and light detection and ranging (LiDAR) are often combined. However, it is difficult to capture the intricate local and global spectral-spatial associations between HSI and LiDAR. To exploit the complementary information of multimodal data, a spectral-spatial graph network is proposed that integrates HSI and LiDAR data by modeling their intricate local and global spectral-spatial associations. Specifically, the network consists of a local module and a global module. The local module uses convolution techniques applied over image patches to preserve the local spatial relationships available in multimodal data. The global module constructs a spectral-spatial multimodal graph, which is used to preserve spectral-spatial proximity in multimodal data. Both the local and global modules are utilized to their utmost capacity to generate the final multimodal data representation. Experiments on multimodal remote sensing datasets reveal that the proposed network attains performance comparable to state-of-the-art methods.
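
A hedged sketch of a local-plus-global fusion of HSI and LiDAR features: a convolutional branch over stacked patches and a graph-propagation branch over patch nodes, concatenated at the end. Shapes and layer choices are assumptions for illustration, not the paper's network.

```python
# Minimal sketch: local patch convolutions fused with a global graph-propagation
# branch for stacked HSI (64 bands) + LiDAR (1 channel) inputs.
import torch
import torch.nn as nn

class LocalGlobalFusion(nn.Module):
    def __init__(self, bands=64, dim=32):
        super().__init__()
        self.local = nn.Sequential(                 # local module: patch convolutions
            nn.Conv2d(bands + 1, dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.global_fc = nn.Linear(bands + 1, dim)  # global module: graph propagation

    def forward(self, patches, nodes, adj):
        # patches: (N, bands+1, 7, 7) HSI+LiDAR patches; nodes: (N, bands+1); adj: (N, N)
        local_feat = self.local(patches)
        global_feat = torch.relu(adj @ self.global_fc(nodes))
        return torch.cat([local_feat, global_feat], dim=1)

net = LocalGlobalFusion()
out = net(torch.randn(10, 65, 7, 7), torch.randn(10, 65), torch.eye(10))
print(out.shape)   # torch.Size([10, 64])
```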

Keyword :

Data fusion; graph neural network; multimodal data; remote sensing classification

Cite:

GB/T 7714 Du, Xingqian , Zheng, Xiangtao , Lu, Xiaoqiang et al. Hyperspectral and LiDAR Representation With Spectral-Spatial Graph Network [J]. | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING , 2023 , 16 : 9446-9460 .
MLA Du, Xingqian et al. "Hyperspectral and LiDAR Representation With Spectral-Spatial Graph Network" . | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 16 (2023) : 9446-9460 .
APA Du, Xingqian , Zheng, Xiangtao , Lu, Xiaoqiang , Wang, Xin . Hyperspectral and LiDAR Representation With Spectral-Spatial Graph Network . | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING , 2023 , 16 , 9446-9460 .

Special issue on intelligence technology for remote sensing image SCIE
Journal Article | 2023, 8 (4), 1164-1165 | CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY

Keyword :

image analysis; image classification

Cite:

GB/T 7714 Zheng, Xiangtao , Vozel, Benoit , Hong, Danfeng . Special issue on intelligence technology for remote sensing image [J]. | CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY , 2023 , 8 (4) : 1164-1165 .
MLA Zheng, Xiangtao et al. "Special issue on intelligence technology for remote sensing image" . | CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY 8 . 4 (2023) : 1164-1165 .
APA Zheng, Xiangtao , Vozel, Benoit , Hong, Danfeng . Special issue on intelligence technology for remote sensing image . | CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY , 2023 , 8 (4) , 1164-1165 .
