Query:
Scholar name: Huang Zhanchao
Abstract :
To scientifically plan and accurately manage the coastal aquaculture industry, it is critical to extract raft aquaculture areas quickly and accurately. In this study, Raft-Former was designed to accurately extract coastal raft aquaculture in Sansha Bay using Sentinel-2 remote sensing imagery. Specifically, a Feature Enhancement Module (FEM) was designed to selectively learn features of interest, addressing the omission and mis-extraction caused by changes in the coastal environment. To tackle the boundary adhesion problems caused by the dense distribution of raft aquaculture areas, a Feature Alignment Module (FAM) was developed to enhance edge-aware ability. A Global-Local Fusion Module (GLFM) was introduced to effectively integrate local features with multi-scale and global features, overcoming significant scale differences among aquaculture areas. Extensive experiments show that our method outperforms state-of-the-art models: Raft-Former achieves 90.05% and 86.73% mIoU on the Sansha Bay dataset.
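As a concrete illustration of the mIoU metric quoted above, here is a minimal sketch of how mean Intersection-over-Union is commonly computed from predicted and reference label maps. The function name, signature, and toy arrays are illustrative, not taken from the paper.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union over classes present in either map.

    Illustrative sketch only; not the authors' implementation.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 1-D example: two classes (0 = water, 1 = raft)
pred = np.array([0, 1, 1, 0, 1])
target = np.array([0, 1, 0, 0, 1])
print(round(mean_iou(pred, target, 2), 4))  # -> 0.6667
```

Each class contributes IoU 2/3 here, so the mean is 0.6667; in segmentation papers this is typically averaged over all annotated classes of the test set.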
Keyword :
edge-aware; multi-scale; Raft extraction; Sentinel-2 remote sensing imagery
Cite:
GB/T 7714: Su, Hua, Liu, Yuxin, Huang, Zhanchao, et al. Edge-aware transformer for coastal raft aquaculture extraction in optical remote sensing imagery [J]. INTERNATIONAL JOURNAL OF DIGITAL EARTH, 2025, 18(1).
MLA: Su, Hua, et al. "Edge-aware transformer for coastal raft aquaculture extraction in optical remote sensing imagery." INTERNATIONAL JOURNAL OF DIGITAL EARTH 18.1 (2025).
APA: Su, Hua, Liu, Yuxin, Huang, Zhanchao, Wang, An, Hong, Wenjun, Cai, Junchao. Edge-aware transformer for coastal raft aquaculture extraction in optical remote sensing imagery. INTERNATIONAL JOURNAL OF DIGITAL EARTH, 2025, 18(1).
Abstract :
Sea ice change detection is vital for understanding climate dynamics and ensuring maritime safety. Existing deep learning methods often struggle with the significant impact of color variations in satellite imagery, which can lead to inaccurate detection results. Moreover, the scarcity of labeled sea ice change data limits the ability of models to generalize across diverse scenarios. To address these challenges, we propose SICNet, a sea ice change detection model with enhanced color robustness and data efficiency. A wavelet-guided color-robust fusion (WCF) module is introduced to reduce low-frequency color discrepancies while preserving high-frequency edge details. In addition, a novel change-sensitive CutMix (CSC) strategy is used to augment training samples by focusing on regions with moderate changes, effectively increasing data diversity. Experiments conducted on our constructed sea ice change dataset demonstrate that SICNet achieves superior performance and robustness under varying environmental and lighting conditions.
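A CutMix-style augmentation biased toward regions of moderate change, loosely following the CSC idea sketched in the abstract, could look like the following. The patch size and the `lo`/`hi` "moderate change" thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def change_sensitive_cutmix(img_a, img_b, change_mask, patch=8,
                            lo=0.2, hi=0.8, rng=None):
    """Paste one patch of img_b into img_a, chosen from patches whose
    fraction of changed pixels is moderate (between lo and hi).

    Hypothetical sketch of a change-sensitive CutMix; not the authors' code.
    """
    rng = np.random.default_rng(rng)
    h, w = change_mask.shape
    # Score every non-overlapping patch by its fraction of changed pixels.
    candidates = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            frac = change_mask[y:y + patch, x:x + patch].mean()
            if lo <= frac <= hi:  # keep "moderate" change regions only
                candidates.append((y, x))
    if not candidates:
        return img_a.copy()  # nothing suitable to mix
    y, x = candidates[rng.integers(len(candidates))]
    out = img_a.copy()
    out[y:y + patch, x:x + patch] = img_b[y:y + patch, x:x + patch]
    return out
```

Restricting candidates to moderately changed patches avoids pasting either pure background (no learning signal) or fully changed regions (trivially separable), which matches the stated goal of increasing useful data diversity.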
Keyword :
Change detection; Climate change; Color; Deep learning; Labeled data; Learning systems; Sampling; Satellite imagery; Sea ice; Wavelet transforms
Cite:
GB/T 7714: Hong, Wenjun, Huang, Zhanchao, Yang, Yongke, et al. Color-Robust Sea Ice Change Detection [J]. IEEE Geoscience and Remote Sensing Letters, 2025, 22.
MLA: Hong, Wenjun, et al. "Color-Robust Sea Ice Change Detection." IEEE Geoscience and Remote Sensing Letters 22 (2025).
APA: Hong, Wenjun, Huang, Zhanchao, Yang, Yongke, Cai, Junchao, Guan, Weiwang, Zhou, Jiajun, et al. Color-Robust Sea Ice Change Detection. IEEE Geoscience and Remote Sensing Letters, 2025, 22.
Abstract :
The development of in situ observations has significantly improved ocean heat content (OHC) estimation. However, high-resolution OHC data remain limited, hindering detailed studies on mesoscale oceanic warming variability. This study used a deep learning method, the Densely Deep Neural Network (DDNN), to reconstruct a high-resolution (0.25° × 0.25°) global OHC dataset for the upper 2000 m of the ocean from 1993 to 2023, named the Ocean Projection and Extension Neural Network 0.25° (OPEN0.25°) product. This deep ocean remote sensing approach integrates multi-source remote sensing data, including Sea Surface Temperature (SST), Absolute Dynamic Topography (ADT), and Sea Surface Wind (SSW), alongside spatiotemporal coordinates and in situ observations. The DDNN model was trained using Argo-based gridded data and EN4 profile data, initially undergoing pre-training to assimilate large-scale oceanic features, followed by fine-tuning to enhance its accuracy in capturing mesoscale thermal structures. Our results demonstrate that the DDNN model achieves high accuracy across various depths. In particular, OPEN0.25° can effectively capture detailed thermal variations in regions with complex dynamics, as well as heat transfer processes within the ocean interior, outperforming traditional methods in resolution. The research highlights that, influenced by strong El Niño-Southern Oscillation (ENSO) events, OHC in the upper 700 m of the Pacific Ocean may have far exceeded expectations over the past decade. Through this study, OPEN0.25° has demonstrated its critical role in detecting and monitoring long-term changes in global OHC at high resolution.
Keyword :
densely deep neural network; high-resolution; ocean heat content; OPEN0.25° dataset; satellite observations
Cite:
GB/T 7714: Su, Hua, Teng, Jianchen, Zhang, Feiyan, et al. Can satellite observations detect global ocean heat content change with high resolution by deep learning? [J]. ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2025, 225: 52-68.
MLA: Su, Hua, et al. "Can satellite observations detect global ocean heat content change with high resolution by deep learning?" ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING 225 (2025): 52-68.
APA: Su, Hua, Teng, Jianchen, Zhang, Feiyan, Wang, An, Huang, Zhanchao. Can satellite observations detect global ocean heat content change with high resolution by deep learning? ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2025, 225, 52-68.
Abstract :
Subsurface density (SD) is a crucial dynamic environment parameter reflecting a 3-D ocean process and stratification, with significant implications for the physical, chemical, and biological processes of the ocean environment. Thus, accurate SD retrieval is essential for studying dynamic processes in the ocean interior. However, complete spatiotemporally accurate SD retrieval remains a challenge in terms of the equation of state and physical methods. This study proposes a novel multiscale mixed residual transformer (MMRT) neural network method to compensate for the inadequacy of the existing methods in dealing with spatiotemporal nonlinear processes and dependence. Considering the spatial correlation and temporal dependence of dynamic processes within the ocean, the MMRT addresses temporal dependence by fully using the transformer's processing of time-series data and spatial correlation by compensating for deficiencies in spatial feature information through multiscale mixed residuals. The MMRT model was compared with the existing random forest (RF) and recurrent neural network (RNN) methods. The MMRT model achieves the best accuracy with an average determination coefficient (R²) of 0.988 and an average root mean square error (RMSE) of 0.050 kg/m³ for all layers. The MMRT model not only outperforms the RF and RNN methods regarding reliability and generalization ability when estimating global ocean SD from remote sensing data but also has a more interpretable encoding process. The MMRT model offers a new method for directly estimating SD using multisource satellite observations, providing significant technical support for future remote sensing super-resolution and prediction of subsurface parameters.
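The R² and RMSE figures used to evaluate MMRT (and several other entries below) follow the standard definitions, which can be sketched as follows; the function names and toy values are illustrative only.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between observations and predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy example: near-perfect predictions give R^2 close to 1
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
print(round(r2(y_true, y_pred), 2), round(rmse(y_true, y_pred), 3))  # -> 0.98 0.158
```

In the papers listed here these scores are typically averaged over depth layers, which is what "average R²" and "average RMSE for all layers" refer to.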
Keyword :
Global ocean; remote sensing observations; subsurface density (SD); transformer
Cite:
GB/T 7714: Su, Hua, Qiu, Junlong, Tang, Zhiwei, et al. Retrieving Global Ocean Subsurface Density by Combining Remote Sensing Observations and Multiscale Mixed Residual Transformer [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
MLA: Su, Hua, et al. "Retrieving Global Ocean Subsurface Density by Combining Remote Sensing Observations and Multiscale Mixed Residual Transformer." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024).
APA: Su, Hua, Qiu, Junlong, Tang, Zhiwei, Huang, Zhanchao, Yan, Xiao-Hai. Retrieving Global Ocean Subsurface Density by Combining Remote Sensing Observations and Multiscale Mixed Residual Transformer. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
Abstract :
Estimating high-resolution ocean subsurface temperature is of great importance for the refined study of ocean climate variability and change. However, the insufficient resolution and accuracy of subsurface temperature data greatly limit our comprehensive understanding of mesoscale and other fine-scale ocean processes. In this study, we integrated multiple remote sensing data and in situ observations to compare four models within two frameworks (gradient boosting and deep learning). The optimal model, Deep Forest, was selected to generate a high-resolution subsurface temperature dataset (DORS0.25°) for the upper 2000 m from 1993 to 2023. DORS0.25° exhibits excellent reconstruction accuracy, with an average R² of 0.980 and RMSE of 0.579 °C, and its monthly average accuracy is higher than that of the IAP and ORAS5 datasets. In particular, DORS0.25° can effectively capture detailed ocean warming characteristics in complex dynamic regions such as the Gulf Stream and the Kuroshio Extension, facilitating the study of mesoscale processes and warming within the global ocean. Moreover, the research highlights that the rate of warming over the past decade has been significant, and ocean warming has consistently reached new highs since 2019. This study demonstrates that DORS0.25° is a crucial dataset for understanding and monitoring the spatiotemporal characteristics and processes of global ocean warming, providing valuable data support for the sustainable development of the marine environment and climate change actions.
Keyword :
Deep forest; DORS0.25° dataset; High resolution; Ocean warming; Remote sensing observations; Subsurface temperature
Cite:
GB/T 7714: Su, Hua, Zhang, Feiyan, Teng, Jianchen, et al. Reconstructing high-resolution subsurface temperature of the global ocean using deep forest with combined remote sensing and in situ observations [J]. ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2024, 218: 389-404.
MLA: Su, Hua, et al. "Reconstructing high-resolution subsurface temperature of the global ocean using deep forest with combined remote sensing and in situ observations." ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING 218 (2024): 389-404.
APA: Su, Hua, Zhang, Feiyan, Teng, Jianchen, Wang, An, Huang, Zhanchao. Reconstructing high-resolution subsurface temperature of the global ocean using deep forest with combined remote sensing and in situ observations. ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2024, 218, 389-404.
Abstract :
The recognition of sea ice is of great significance for reflecting climate change and ensuring the safety of ship navigation. Recently, many deep-learning-based methods have been proposed and applied to segment and recognize sea ice regions. However, the huge differences in sea ice size and irregular edge profiles bring challenges to existing sea ice recognition. In this article, a global-local Transformer network, called SeaIceNet, is proposed for sea ice recognition in optical remote sensing images. In SeaIceNet, a dual global-attention head (DGAH) is proposed to capture global information. On this basis, a global-local feature fusion (GLFF) mechanism is designed to fuse global structural correlation features and local spatial detail features. Furthermore, a detail-guided decoder is developed to retain more high-resolution detail information during feature reconstruction, improving the performance of sea ice recognition. Extensive experiments on several sea ice datasets demonstrate that the proposed SeaIceNet outperforms existing methods on multiple evaluation metrics. Moreover, it excels at the main challenges of sea ice recognition in optical remote sensing images: accurately identifying irregular frozen ponds in complex environments, segmenting the broken and unclear boundaries between sea and thin ice, and retaining the high-resolution spatial details that are easily lost during model learning.
Keyword :
Accuracy; Climate change; Data mining; Deep learning; Feature extraction; Ice; Image segmentation; Integrated optics; Optical imaging; Optical sensors; Remote sensing; Sea ice; sea ice recognition; semantic segmentation; Transformer model
Cite:
GB/T 7714: Hong, Wenjun, Huang, Zhanchao, Wang, An, et al. SeaIceNet: Sea Ice Recognition via Global-Local Transformer in Optical Remote Sensing Images [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
MLA: Hong, Wenjun, et al. "SeaIceNet: Sea Ice Recognition via Global-Local Transformer in Optical Remote Sensing Images." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024).
APA: Hong, Wenjun, Huang, Zhanchao, Wang, An, Liu, Yuxin, Cai, Junchao, Su, Hua. SeaIceNet: Sea Ice Recognition via Global-Local Transformer in Optical Remote Sensing Images. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
Abstract :
Remote sensing scene classification (RSSC) is essential in Earth observation, with applications in land use, environmental status, urban development, and disaster risk assessment. However, redundant background interference, varying feature scales, and high interclass similarity in remote sensing images present significant challenges for RSSC. To address these challenges, this article proposes a novel hierarchical graph-enhanced transformer network (HGTNet) for RSSC. Initially, we introduce a dual attention (DA) module, which extracts key feature information from both the channel and spatial domains, effectively suppressing background noise. Subsequently, we meticulously design a three-stage hierarchical transformer extractor, incorporating a DA module at the bottleneck of each stage to facilitate information exchange between different stages, in conjunction with the Swin transformer block to capture multiscale global visual information. Moreover, we develop a fine-grained graph neural network extractor that constructs the spatial topological relationships of pixel-level scene images, thereby aiding in the discrimination of similar complex scene categories. Finally, the visual features and spatial structural features are fully integrated and input into the classifier by employing skip connections. HGTNet achieves classification accuracies of 98.47%, 95.75%, and 96.33% on the aerial image, NWPU-RESISC45, and OPTIMAL-31 datasets, respectively, demonstrating superior performance compared to other state-of-the-art models. Extensive experimental results indicate that our proposed method effectively learns critical multiscale visual features and distinguishes between similar complex scenes, thereby significantly enhancing the accuracy of RSSC.
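The channel-then-spatial gating behind a dual attention (DA) module of the kind described above can be sketched in a few lines. This is a minimal numpy illustration of the general pattern only: the actual HGTNet module presumably uses learned projection weights, which are omitted here, and the function name is an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(feat):
    """Gate a (C, H, W) feature map along channels, then along space.

    Parameter-free sketch of channel + spatial attention; real DA modules
    learn the gating weights rather than deriving them from raw means.
    """
    # Channel attention: weight each channel by its global average response.
    chan_w = sigmoid(feat.mean(axis=(1, 2)))     # shape (C,)
    feat = feat * chan_w[:, None, None]
    # Spatial attention: weight each position by its mean over channels.
    spat_w = sigmoid(feat.mean(axis=0))          # shape (H, W)
    return feat * spat_w[None, :, :]
```

The two gates act multiplicatively, so channels and positions with weak average responses (e.g. redundant background) are suppressed, which is the stated purpose of the DA module.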
Keyword :
Attention mechanism; Attention mechanisms; Data mining; Earth; Feature extraction; graph neural network (GNN); Graph neural networks; Remote sensing; remote sensing scene classification (RSSC); Scene classification; Sensors; spatial structural feature; transformer; Transformers; Visualization
Cite:
GB/T 7714: Li, Ziwei, Xu, Weiming, Yang, Shiyu, et al. A Hierarchical Graph-Enhanced Transformer Network for Remote Sensing Scene Classification [J]. IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17: 20315-20330.
MLA: Li, Ziwei, et al. "A Hierarchical Graph-Enhanced Transformer Network for Remote Sensing Scene Classification." IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 17 (2024): 20315-20330.
APA: Li, Ziwei, Xu, Weiming, Yang, Shiyu, Wang, Juan, Su, Hua, Huang, Zhanchao, et al. A Hierarchical Graph-Enhanced Transformer Network for Remote Sensing Scene Classification. IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17, 20315-20330.
Abstract :
Arbitrary-oriented object detection (AOOD) has been widely applied to locate and classify objects with diverse orientations in remote sensing images. However, the inconsistent features for the localization and classification tasks in AOOD models may lead to ambiguity and low-quality object predictions, which constrains the detection performance. In this article, an AOOD method called task-wise sampling convolutions (TS-Conv) is proposed. TS-Conv adaptively samples task-wise features from respective sensitive regions and maps these features together in alignment to guide a dynamic label assignment for better predictions. Specifically, sampling positions of the localization convolution in TS-Conv are supervised by the oriented bounding box (OBB) prediction associated with spatial coordinates, while sampling positions and convolutional kernel of the classification convolution are designed to be adaptively adjusted according to different orientations for improving the orientation robustness of features. Furthermore, a dynamic task-consistent-aware label assignment (DTLA) strategy is developed to select optimal candidate positions and assign labels dynamically according to ranked task-aware scores obtained from TS-Conv. Extensive experiments on several public datasets covering multiple scenes, multimodal images, and multiple categories of objects demonstrate the effectiveness, scalability, and superior performance of the proposed TS-Conv.
Keyword :
Arbitrary-oriented object detection (AOOD); convolutional neural network (CNN); Convolutional neural networks; dynamic label assignment; Feature extraction; Location awareness; Object detection; oriented bounding box (OBB); Remote sensing; Task analysis; task-wise sampling strategy; Training
Cite:
GB/T 7714: Huang, Zhanchao, Li, Wei, Xia, Xiang-Gen, et al. Task-Wise Sampling Convolutions for Arbitrary-Oriented Object Detection in Aerial Images [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 36(3): 5204-5218.
MLA: Huang, Zhanchao, et al. "Task-Wise Sampling Convolutions for Arbitrary-Oriented Object Detection in Aerial Images." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 36.3 (2024): 5204-5218.
APA: Huang, Zhanchao, Li, Wei, Xia, Xiang-Gen, Wang, Hao, Tao, Ran. Task-Wise Sampling Convolutions for Arbitrary-Oriented Object Detection in Aerial Images. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 36(3), 5204-5218.
Abstract :
The recognition of sea ice is of great significance for reflecting climate change and ensuring the safety of ship navigation. Recently, many deep-learning-based methods have been proposed and applied to segment and recognize sea ice regions. However, the diverse scales of sea ice areas, the zigzag and fine edge contours, and the difficulty in distinguishing different types of sea ice pose challenges to existing sea ice recognition models. In this paper, a Global-Local Detail Guided Transformer (GDGT) method is proposed for sea ice recognition in optical remote sensing images. In GDGT, a global-local feature fusion mechanism is designed to fuse global structural correlation features and local spatial detail features. Furthermore, a detail-guided decoder is developed to retain more high-resolution detail information during feature reconstruction, improving the performance of sea ice recognition. Experiments on the produced sea ice dataset demonstrate the effectiveness and superiority of GDGT.
Keyword :
deep learning; image segmentation; sea ice recognition; Transformer model
Cite:
GB/T 7714: Huang, Zhanchao, Hong, Wenjun, Su, Hua. GLOBAL-LOCAL DETAIL GUIDED TRANSFORMER FOR SEA ICE RECOGNITION IN OPTICAL REMOTE SENSING IMAGES [J]. IGARSS 2024 - 2024 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2024: 1768-1772.
MLA: Huang, Zhanchao, et al. "GLOBAL-LOCAL DETAIL GUIDED TRANSFORMER FOR SEA ICE RECOGNITION IN OPTICAL REMOTE SENSING IMAGES." IGARSS 2024 - 2024 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (2024): 1768-1772.
APA: Huang, Zhanchao, Hong, Wenjun, Su, Hua. GLOBAL-LOCAL DETAIL GUIDED TRANSFORMER FOR SEA ICE RECOGNITION IN OPTICAL REMOTE SENSING IMAGES. IGARSS 2024 - 2024 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2024, 1768-1772.
Abstract :
In large-scale disaster events, the planning of optimal rescue routes depends on the object detection ability at the disaster scene, with one of the main challenges being the presence of dense and occluded objects. Existing methods, which are typically based on the RGB modality, struggle to distinguish targets with similar colors and textures in crowded environments and are unable to identify obscured objects. To this end, we first construct two multimodal dense and occluded vehicle detection datasets for large-scale events, utilizing RGB and height map modalities. Based on these datasets, we propose a multimodal collaboration network (MuDet) for dense and occluded vehicle detection. MuDet hierarchically enhances the completeness of discriminable information within and across modalities and differentiates between simple and complex samples. MuDet includes three main modules: Unimodal Feature Hierarchical Enhancement (Uni-Enh), Multimodal Cross Learning (Mul-Lea), and the Hard-easy Discriminative (He-Dis) pattern. Uni-Enh and Mul-Lea enhance the features within each modality and facilitate the cross-integration of features from two heterogeneous modalities. He-Dis effectively separates densely occluded vehicle targets with significant intra-class differences and minimal inter-class differences by defining and thresholding confidence values, thereby suppressing the complex background. Experimental results on two re-labeled multimodal benchmark datasets, the 4K Stereo Aerial Imagery of a Large Camping Site (4K-SAI-LCS) dataset and the ISPRS Potsdam dataset, demonstrate the robustness and generalization of MuDet.
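The confidence-thresholding idea behind He-Dis can be illustrated with a minimal sketch that partitions detections into easy positives, hard (ambiguous) samples, and background. The function name and the `lo`/`hi` threshold values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hard_easy_split(scores, lo=0.3, hi=0.7):
    """Partition detection confidence scores into three disjoint groups.

    Hypothetical sketch echoing the He-Dis idea of separating simple and
    complex samples by defining and thresholding confidence values.
    """
    scores = np.asarray(scores, dtype=float)
    easy = scores >= hi                    # confident positives
    hard = (scores >= lo) & (scores < hi)  # ambiguous, needs extra handling
    background = scores < lo               # suppressed as background
    return easy, hard, background
```

In a training loop, the hard group could then receive extra attention (e.g. higher loss weight) while the background group is suppressed, matching the stated goal of separating densely occluded targets from complex backgrounds.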
Keyword :
Convolutional neural networks; Convolutional neural networks (CNNs); dense and occluded; Disasters; Feature extraction; hard-easy balanced attention; large-scale disaster events; multimodal vehicle detection (MVD); Object detection; Remote sensing; remote sensing (RS); Streaming media; Vehicle detection
Cite:
GB/T 7714: Wu, Xin, Huang, Zhanchao, Wang, Li, et al. Multimodal Collaboration Networks for Geospatial Vehicle Detection in Dense, Occluded, and Large-Scale Events [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
MLA: Wu, Xin, et al. "Multimodal Collaboration Networks for Geospatial Vehicle Detection in Dense, Occluded, and Large-Scale Events." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024).
APA: Wu, Xin, Huang, Zhanchao, Wang, Li, Chanussot, Jocelyn, Tian, Jiaojiao. Multimodal Collaboration Networks for Geospatial Vehicle Detection in Dense, Occluded, and Large-Scale Events. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.