Publication Search

Query:

Scholar Name: Yu Yuanlong


Results: 15 pages in total
Cross-View Fusion for Multi-View Clustering SCIE
Journal Article | 2025, 32, 621-625 | IEEE SIGNAL PROCESSING LETTERS

Abstract :

Multi-view clustering has attracted significant attention in recent years because it can leverage the consistent and complementary information of multiple views to improve clustering performance. However, effectively fusing this information and balancing the consistent and complementary information of multiple views are common challenges in multi-view clustering. Most existing multi-view fusion works focus on weighted-sum fusion and concatenation fusion, which are unable to fully fuse the underlying information and do not consider balancing the consistent and complementary information of multiple views. To this end, we propose Cross-view Fusion for Multi-view Clustering (CFMVC). Specifically, CFMVC combines a deep neural network and a graph convolutional network for cross-view information fusion, fully fusing the feature information and structural information of multiple views. To balance the consistent and complementary information of multiple views, CFMVC enhances the correlation among the same samples to maximize the consistent information while simultaneously reinforcing the independence among different samples to maximize the complementary information. Experimental results on several multi-view datasets demonstrate the effectiveness of CFMVC for the multi-view clustering task.
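The balancing idea described above can be sketched with a toy objective: pull the two views' embeddings of the same sample together (consistency) while encouraging independence across different samples (complementarity). This is an illustrative sketch under assumed names (`cross_view_loss`, cosine similarity as the correlation measure), not the authors' CFMVC implementation.

```python
import numpy as np

def cross_view_loss(z1, z2):
    """Toy consistency/complementarity objective over two views' embeddings."""
    # L2-normalize each embedding row so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T                          # cross-view pairwise similarities
    n = sim.shape[0]
    consistency = 1.0 - np.diag(sim).mean()  # same sample across views -> similar
    # different samples across views -> near-zero similarity (independence)
    complementarity = np.abs(sim[~np.eye(n, dtype=bool)]).mean()
    return consistency + complementarity

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
loss_same = cross_view_loss(z, z)  # consistency term vanishes for identical views
```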

Keyword :

Cross-view; deep neural network; graph convolutional network; multi-view clustering; multi-view fusion

Cite:

Copy from the list or export to your reference manager.

GB/T 7714 Huang, Zhijie, Huang, Binqiang, Zheng, Qinghai et al. Cross-View Fusion for Multi-View Clustering [J]. | IEEE SIGNAL PROCESSING LETTERS, 2025, 32: 621-625.
MLA Huang, Zhijie, et al. "Cross-View Fusion for Multi-View Clustering." | IEEE SIGNAL PROCESSING LETTERS 32 (2025): 621-625.
APA Huang, Zhijie, Huang, Binqiang, Zheng, Qinghai, Yu, Yuanlong. Cross-View Fusion for Multi-View Clustering. | IEEE SIGNAL PROCESSING LETTERS, 2025, 32, 621-625.

Version:

Cross-View Fusion for Multi-View Clustering Scopus
Journal Article | 2025, 32, 621-625 | IEEE Signal Processing Letters
Cross-View Fusion for Multi-View Clustering EI
Journal Article | 2025, 32, 621-625 | IEEE Signal Processing Letters
Trusted Cross-view Completion for incomplete multi-view classification SCIE
Journal Article | 2025, 629 | NEUROCOMPUTING

Abstract :

In real-world scenarios, missing views are common due to the complexity of data collection, so classifying incomplete multi-view data is unavoidable. Although substantial progress has been achieved, two challenging problems remain in incomplete multi-view classification: (1) Simply ignoring the missing views is often ineffective, especially under high missing rates, and can lead to incomplete analysis and unreliable results. (2) Most existing multi-view classification models primarily focus on maximizing consistency between different views; however, neglecting specific-view information may degrade performance. To solve these problems, we propose a novel framework called Trusted Cross-View Completion (TCVC) for incomplete multi-view classification. Specifically, TCVC consists of three modules: a Cross-view Feature Learning Module (CVFL), an Imputation Module (IM), and a Trusted Fusion Module (TFM). First, CVFL mines specific-view information to obtain cross-view reconstruction features. Then, IM restores each missing view by fusing the cross-view reconstruction features with weights guided by uncertainty-aware information, namely the quality assessment of the cross-view reconstruction features in TFM. Moreover, the recovered views are supervised in a cross-view neighborhood-aware manner. Finally, TFM effectively fuses the completed data to generate trusted classification predictions. Extensive experiments show that our method is effective and robust.
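The uncertainty-guided weighted fusion used for imputation can be illustrated with a minimal sketch: each candidate reconstruction of a missing view is weighted by the inverse of its estimated uncertainty, so more trustworthy reconstructions dominate. All names (`impute_missing_view`, the inverse-uncertainty weighting) are assumptions for illustration, not the paper's exact IM module.

```python
import numpy as np

def impute_missing_view(recon_feats, uncertainties):
    """Fuse cross-view reconstructions of a missing view, weighting each
    reconstruction by the inverse of its estimated uncertainty."""
    w = 1.0 / (np.asarray(uncertainties) + 1e-8)  # low uncertainty -> high weight
    w = w / w.sum()                               # normalize to a convex combination
    return np.tensordot(w, recon_feats, axes=1)   # weighted sum over views

recons = np.stack([np.full(4, 1.0), np.full(4, 3.0)])  # two candidate reconstructions
imputed = impute_missing_view(recons, [0.1, 0.1])      # equal confidence -> midpoint
```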

Keyword :

Cross-view feature learning; Incomplete multi-view classification; Uncertainty-aware

Cite:

GB/T 7714 Zhou, Liping, Chen, Shiyun, Song, Peihuan et al. Trusted Cross-view Completion for incomplete multi-view classification [J]. | NEUROCOMPUTING, 2025, 629.
MLA Zhou, Liping, et al. "Trusted Cross-view Completion for incomplete multi-view classification." | NEUROCOMPUTING 629 (2025).
APA Zhou, Liping, Chen, Shiyun, Song, Peihuan, Zheng, Qinghai, Yu, Yuanlong. Trusted Cross-view Completion for incomplete multi-view classification. | NEUROCOMPUTING, 2025, 629.

Version:

Trusted Cross-view Completion for incomplete multi-view classification Scopus
Journal Article | 2025, 629 | Neurocomputing
Trusted Cross-view Completion for incomplete multi-view classification EI
Journal Article | 2025, 629 | Neurocomputing
Attention-Based Mean-Max Balance Assignment for Oriented Object Detection in Optical Remote Sensing Images SCIE
Journal Article | 2025, 63 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

For objects with arbitrary angles in optical remote sensing (RS) images, the oriented bounding box regression task often faces the problem of ambiguous boundaries between positive and negative samples. A statistical analysis of existing label assignment strategies reveals that anchors with low Intersection over Union (IoU) with the ground truth (GT) may still accurately surround the GT after decoding. Therefore, this article proposes an attention-based mean-max balance assignment (AMMBA) strategy, which consists of two parts: a mean-max balance assignment (MMBA) strategy and a balance feature pyramid with attention (BFPA). MMBA employs mean-max assignment (MMA) and balance assignment (BA) to dynamically calculate a positive threshold and adaptively match better positive samples to each GT for training. Meanwhile, to meet MMBA's need for more accurate feature maps, we construct a BFPA module that integrates spatial and scale attention mechanisms to promote global information propagation. Combined with S2ANet, our AMMBA method achieves state-of-the-art performance, with a precision of 80.91% on the DOTA dataset, in a simple plug-and-play fashion. Extensive experiments on three challenging optical RS image datasets (DOTA-v1.0, HRSC, and DIOR-R) further demonstrate a balance between precision and speed in single-stage object detectors. AMMBA has the potential to help existing RS models achieve better detection performance in a simple way. The code is available at https://github.com/promisekoloer/AMMBA.
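A dynamic positive threshold in the spirit of MMBA can be sketched as follows, deriving each ground truth's positive cutoff from the statistics of its own anchor IoUs (mean plus standard deviation here, a common choice in dynamic assignment schemes such as ATSS; the paper's exact mean-max rule may differ).

```python
import numpy as np

def dynamic_positive_mask(ious):
    """Select positive anchors for one GT with an adaptive IoU threshold
    computed from that GT's own IoU distribution (illustrative rule)."""
    ious = np.asarray(ious, dtype=float)
    thr = ious.mean() + ious.std()  # per-GT adaptive threshold, not a fixed 0.5
    return ious >= thr, thr

# IoUs of five candidate anchors with one ground-truth box
ious = [0.05, 0.1, 0.12, 0.55, 0.6]
mask, thr = dynamic_positive_mask(ious)  # only the two well-aligned anchors pass
```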

Keyword :

Accuracy; Attention feature fusion; Detectors; Feature extraction; label assignment; Location awareness; Object detection; optical remote sensing (RS) images; Optical scattering; oriented object detection; Remote sensing; Semantics; Shape; Training

Cite:

GB/T 7714 Lin, Qifeng, Chen, Nuo, Huang, Haibin et al. Attention-Based Mean-Max Balance Assignment for Oriented Object Detection in Optical Remote Sensing Images [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63.
MLA Lin, Qifeng, et al. "Attention-Based Mean-Max Balance Assignment for Oriented Object Detection in Optical Remote Sensing Images." | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025).
APA Lin, Qifeng, Chen, Nuo, Huang, Haibin, Zhu, Daoye, Fu, Gang, Chen, Chuanxi et al. Attention-Based Mean-Max Balance Assignment for Oriented Object Detection in Optical Remote Sensing Images. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63.

Version:

Attention-Based Mean-Max Balance Assignment for Oriented Object Detection in Optical Remote Sensing Images EI
Journal Article | 2025, 63 | IEEE Transactions on Geoscience and Remote Sensing
Attention-Based Mean-Max Balance Assignment for Oriented Object Detection in Optical Remote Sensing Images Scopus
Journal Article | 2025, 63 | IEEE Transactions on Geoscience and Remote Sensing
Learning Interpretable Brain Functional Connectivity Via Self-Supervised Triplet Network With Depth-Wise Attention Scopus
Journal Article | 2024, 28 (11), 1-14 | IEEE Journal of Biomedical and Health Informatics

Abstract :

Brain functional connectivity has been routinely explored to reveal the functional interaction dynamics between brain regions. However, conventional functional connectivity measures rely on deterministic models fixed for all participants, usually demanding application-specific empirical analysis, while deep learning approaches focus on finding discriminative features for state classification and thus have limited capability to capture interpretable functional connectivity characteristics. To address these challenges, this study proposes a self-supervised triplet network with depth-wise attention (TripletNet-DA) to generate functional connectivity: 1) TripletNet-DA first utilizes channel-wise transformations for temporal data augmentation, where correlated and uncorrelated sample pairs are constructed for self-supervised training; 2) a channel encoder is designed with a convolutional network to extract deep features, while a similarity estimator is employed to generate the similarity pairs and the functional connectivity representations, with prominent patterns emphasized via a depth-wise attention mechanism; 3) TripletNet-DA applies a triplet loss with an anchor-negative similarity penalty for model training, where the similarities of uncorrelated sample pairs are minimized to enhance the model's learning capability.
Experimental results on pathological EEG datasets (Autism Spectrum Disorder, Major Depressive Disorder) indicate that 1) TripletNet-DA outperforms state-of-the-art counterparts in both ASD discrimination and MDD classification across various frequency bands; the connectivity features in the beta and gamma bands respectively achieve accuracies of 97.05% and 98.32% for ASD discrimination, 89.88% and 91.80% for MDD classification in the eyes-closed condition, and 90.90% and 92.26% for MDD classification in the eyes-open condition; 2) TripletNet-DA is able to uncover significant differences in functional connectivity between ASD EEGs and TD ones, and the prominent connectivity links are in accordance with the empirical findings that the frontal lobe exhibits more connectivity links and that significant frontal-temporal connectivity occurs in the beta band, thus providing potential biomarkers for clinical ASD analysis.
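The training objective in step 3 can be illustrated with a minimal triplet loss carrying an extra anchor-negative similarity penalty; the weight `lam` and the cosine form of the penalty are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def triplet_loss_with_neg_penalty(a, p, n, margin=1.0, lam=0.1):
    """Standard triplet hinge plus a penalty on anchor-negative similarity,
    mirroring the idea of minimizing similarity of uncorrelated pairs."""
    d_ap = np.linalg.norm(a - p)           # anchor-positive distance
    d_an = np.linalg.norm(a - n)           # anchor-negative distance
    base = max(0.0, d_ap - d_an + margin)  # standard triplet hinge
    cos_an = a @ n / (np.linalg.norm(a) * np.linalg.norm(n) + 1e-8)
    return base + lam * max(0.0, cos_an)   # penalize residual a-n similarity

a = np.array([1.0, 0.0])   # anchor embedding
p = np.array([0.9, 0.1])   # correlated (positive) sample
n = np.array([-1.0, 0.0])  # uncorrelated (negative) sample, well separated
loss = triplet_loss_with_neg_penalty(a, p, n)
```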

Keyword :

Analytical models; Brain Functional Connectivity; Brain modeling; Correlation; Depth-wise Attention; Electroencephalography; Self-supervised Learning; Task analysis; Time series analysis; Training; Triplet Network

Cite:

GB/T 7714 Tang, Y., Huang, W., Liu, R. et al. Learning Interpretable Brain Functional Connectivity Via Self-Supervised Triplet Network With Depth-Wise Attention [J]. | IEEE Journal of Biomedical and Health Informatics, 2024, 28 (11): 1-14.
MLA Tang, Y., et al. "Learning Interpretable Brain Functional Connectivity Via Self-Supervised Triplet Network With Depth-Wise Attention." | IEEE Journal of Biomedical and Health Informatics 28.11 (2024): 1-14.
APA Tang, Y., Huang, W., Liu, R., Yu, Y. Learning Interpretable Brain Functional Connectivity Via Self-Supervised Triplet Network With Depth-Wise Attention. | IEEE Journal of Biomedical and Health Informatics, 2024, 28 (11), 1-14.


Learning Nighttime Semantic Segmentation the Hard Way EI
Journal Article | 2024, 20 (7) | ACM Transactions on Multimedia Computing, Communications and Applications

Abstract :

Nighttime semantic segmentation is an important but challenging research problem for autonomous driving. The major challenges lie in small objects or regions that fall in under-/over-exposed areas or suffer from motion blur caused by cameras deployed on moving vehicles. To resolve this, we propose a novel hard-class-aware module that bridges the main network for full-class segmentation and the hard-class network for segmenting the aforementioned hard-class objects. Specifically, it exploits the shared focus of hard-class objects from the dual-stream network, enabling the contextual information flow to guide the model to concentrate on pixels that are hard to classify. In the end, the estimated hard-class segmentation results are utilized to infer the final results via an adaptive probabilistic fusion refinement scheme. Moreover, to overcome over-smoothing and noise caused by extreme exposures, our model is modulated by a carefully crafted pretext task of constructing an exposure-aware semantic gradient map, which guides the model to faithfully perceive the structural and semantic information of hard-class objects while mitigating the negative impact of noise and uneven exposure. In experiments, we demonstrate that our unique network design leads to superior segmentation performance over existing methods, featuring a strong ability to perceive hard-class objects under adverse conditions. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
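The probabilistic fusion step can be sketched as blending the hard-class branch's probabilities into the corresponding channels of the full-class prediction and renormalizing; the blending weight `alpha` and all names here are illustrative, not the paper's exact adaptive scheme.

```python
import numpy as np

def fuse_probs(full_probs, hard_probs, hard_idx, alpha):
    """Blend hard-class branch probabilities into the full-class
    prediction for the hard classes, then renormalize."""
    fused = full_probs.copy()
    fused[hard_idx] = (1 - alpha) * full_probs[hard_idx] + alpha * hard_probs
    return fused / fused.sum(axis=0, keepdims=True)  # keep a valid distribution

full_probs = np.array([0.5, 0.3, 0.2])  # main network, 3 classes, one pixel
hard_probs = np.array([0.1, 0.6])       # hard-class branch over classes 1 and 2
fused = fuse_probs(full_probs, hard_probs, [1, 2], alpha=0.5)
```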

Keyword :

Classification (of information); Semantics; Semantic Segmentation

Cite:

GB/T 7714 Liu, Wenxi, Cai, Jiaxin, Li, Qi et al. Learning Nighttime Semantic Segmentation the Hard Way [J]. | ACM Transactions on Multimedia Computing, Communications and Applications, 2024, 20 (7).
MLA Liu, Wenxi, et al. "Learning Nighttime Semantic Segmentation the Hard Way." | ACM Transactions on Multimedia Computing, Communications and Applications 20.7 (2024).
APA Liu, Wenxi, Cai, Jiaxin, Li, Qi, Liao, Chenyang, Cao, Jingjing, He, Shengfeng et al. Learning Nighttime Semantic Segmentation the Hard Way. | ACM Transactions on Multimedia Computing, Communications and Applications, 2024, 20 (7).


Partial multi-label feature selection via low-rank and sparse factorization with manifold learning EI
Journal Article | 2024, 296 | Knowledge-Based Systems

Abstract :

Feature selection is a commonly utilized methodology in multi-label learning (MLL) for tackling the challenge of high-dimensional data. Accurate annotation of relevant labels is crucial for successful multi-label feature selection (MFS). Nevertheless, real-world multi-label datasets frequently contain both ground-truth and noisy labels, giving rise to the partial multi-label learning (PML) problem. The inclusion of noisy labels makes it difficult for conventional MFS methods to accurately identify the optimal feature subset in such datasets. To tackle this issue, we propose a novel partial multi-label feature selection method with low-rank and sparse factorization and manifold learning, called PMFS-LRS. Specifically, we first decompose the candidate label matrix into two distinct components: a low-rank matrix corresponding to ground-truth labels and a sparse matrix corresponding to noisy labels. This decomposition allows PMFS-LRS to effectively distinguish noisy labels from ground-truth labels, thereby mitigating the impact of noisy data. Then, local label correlations are explored using a manifold learning framework to improve label disambiguation performance. Finally, an l2,1-norm regularization is integrated into the objective function to facilitate effective feature selection. Comprehensive experiments conducted on both real-world and synthetic PML datasets demonstrate that PMFS-LRS is superior to several existing state-of-the-art MFS methods. © 2024 Elsevier B.V.
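Two building blocks of this kind of formulation can be shown concretely: the l2,1-norm row-sparsity regularizer and the soft-thresholding (shrinkage) operator commonly used to induce the sparse noisy-label component. This is a generic sketch of these standard operators, not the PMFS-LRS solver itself.

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm: sum of l2 norms of the rows, which drives whole
    feature rows of a projection matrix toward zero."""
    return np.linalg.norm(W, axis=1).sum()

def shrink(X, tau):
    """Entrywise soft-thresholding, the proximal operator of the l1-norm,
    used to recover the sparse (noisy-label) part of a decomposition."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

W = np.array([[3.0, 4.0],   # active feature row, l2 norm 5
              [0.0, 0.0],   # pruned feature row
              [1.0, 0.0]])  # active feature row, l2 norm 1
```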

Keyword :

Clustering algorithms; Feature Selection; Learning systems; Matrix algebra; Matrix factorization

Cite:

GB/T 7714 Sun, Zhenzhen, Chen, Zexiang, Liu, Jinghua et al. Partial multi-label feature selection via low-rank and sparse factorization with manifold learning [J]. | Knowledge-Based Systems, 2024, 296.
MLA Sun, Zhenzhen, et al. "Partial multi-label feature selection via low-rank and sparse factorization with manifold learning." | Knowledge-Based Systems 296 (2024).
APA Sun, Zhenzhen, Chen, Zexiang, Liu, Jinghua, Chen, Yewang, Yu, Yuanlong. Partial multi-label feature selection via low-rank and sparse factorization with manifold learning. | Knowledge-Based Systems, 2024, 296.


Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement Scopus
Journal Article | 2024, 132 (11), 5030-5047 | International Journal of Computer Vision
SCOPUS Cited Count: 1

Abstract :

Ultra-high resolution image segmentation has attracted increasing interest in recent years due to its real-world applications. In this paper, we rework the widely used high-resolution image segmentation pipeline, in which an ultra-high resolution image is partitioned into regular patches for local segmentation and the local results are then merged into a high-resolution semantic mask. In particular, we introduce a novel locality-aware context fusion based segmentation model to process local patches, where the relevance between a local patch and its various contexts is jointly and complementarily utilized to handle semantic regions with large variations. Additionally, we present an alternating local enhancement module that restricts the negative impact of redundant information introduced from the contexts, and is thus able to refine the locality-aware features and produce refined results. Furthermore, in comprehensive experiments, we demonstrate that our model outperforms other state-of-the-art methods on public benchmarks and verify the effectiveness of the proposed modules. Our released code is available at: https://github.com/liqiokkk/FCtL. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
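The patch-based pipeline the paper builds on can be sketched in a few lines: tile the image into regular patches for local processing, then merge the local results back into a full-resolution mask (function names are illustrative; the paper's contribution lies in how each patch is segmented, which is omitted here).

```python
import numpy as np

def partition(img, ps):
    """Tile a 2-D array into regular ps x ps patches, row-major order."""
    h, w = img.shape[:2]
    return [img[i:i + ps, j:j + ps]
            for i in range(0, h, ps) for j in range(0, w, ps)]

def merge(patches, h, w, ps):
    """Reassemble row-major patches into the full-resolution array."""
    out = np.zeros((h, w), dtype=patches[0].dtype)
    k = 0
    for i in range(0, h, ps):
        for j in range(0, w, ps):
            out[i:i + ps, j:j + ps] = patches[k]
            k += 1
    return out

img = np.arange(16).reshape(4, 4)  # stand-in for an ultra-high-res image
tiles = partition(img, 2)          # local patches for per-patch segmentation
restored = merge(tiles, 4, 4, 2)   # merged back into the full mask
```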

Keyword :

Attention mechanism; Context-guided vision model; Geo-spatial image segmentation; Ultra-high resolution image segmentation

Cite:

GB/T 7714 Liu, W., Li, Q., Lin, X. et al. Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement [J]. | International Journal of Computer Vision, 2024, 132 (11): 5030-5047.
MLA Liu, W., et al. "Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement." | International Journal of Computer Vision 132.11 (2024): 5030-5047.
APA Liu, W., Li, Q., Lin, X., Yang, W., He, S., Yu, Y. Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement. | International Journal of Computer Vision, 2024, 132 (11), 5030-5047.


Verifiable Deep Learning Inference on Heterogeneous Edge Devices with Trusted Execution Environment Scopus
Journal Article | 2024, 24 (17), 1-1 | IEEE Sensors Journal

Abstract :

Deep learning inference on edge devices is susceptible to security threats, particularly Fault Injection Attacks (FIAs), which are easy to execute and pose a significant risk to the inference. These attacks can alter the memory of the edge device or cause errors in instruction execution. In particular, time-intensive convolution computation is considerably vulnerable in deep learning inference at the edge. To detect and defend against attacks on deep learning inference on heterogeneous edge devices, we propose an efficient hardware-based solution for verifiable model inference named DarkneTV. It leverages an asynchronous mechanism to conduct hash checking of convolution weights and verification of convolution computations within the Trusted Execution Environment (TEE) of the Central Processing Unit (CPU) while the integrated Graphics Processing Unit (GPU) runs model inference. It protects the integrity of convolution weights and the correctness of inference results, and effectively detects abnormal weight modifications and incorrect inference results regarding neural operators. Extensive experimental results show that DarkneTV identifies tiny FIAs against convolution weights and computation with over 99.03% accuracy and little extra time overhead. The asynchronous mechanism significantly improves the performance of verifiable inference: the speedups of GPU-accelerated verifiable inference on the HiKey 960 reach 8.50x-11.31x compared with the CPU-only mode.
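The hash-checking idea can be illustrated with a minimal sketch: digest the convolution weights and compare against a reference digest, so even a tiny fault-injected change is detected. This simplification omits the TEE boundary and the verification of convolution computations; all names are assumptions, not DarkneTV's implementation.

```python
import hashlib
import numpy as np

def weight_digest(weights):
    """SHA-256 digest of a weight tensor's raw bytes, standing in for the
    reference digest a TEE would hold for integrity checking."""
    return hashlib.sha256(np.ascontiguousarray(weights).tobytes()).hexdigest()

w = np.ones((3, 3), dtype=np.float32)  # toy convolution kernel
ref = weight_digest(w)                 # trusted reference digest

tampered = w.copy()
tampered[0, 0] += 1e-3                 # simulate a tiny fault injection
```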

Keyword :

Computational modeling; Convolution; Deep learning; deep learning inference; fault injection attacks; Graphics processing units; heterogeneous edge devices; Image edge detection; Inference algorithms; Performance evaluation; Trusted Execution Environment; verifiable learning

Cite:

GB/T 7714 Liao, L., Zheng, Y., Lu, H. et al. Verifiable Deep Learning Inference on Heterogeneous Edge Devices with Trusted Execution Environment [J]. | IEEE Sensors Journal, 2024, 24 (17): 1-1.
MLA Liao, L., et al. "Verifiable Deep Learning Inference on Heterogeneous Edge Devices with Trusted Execution Environment." | IEEE Sensors Journal 24.17 (2024): 1-1.
APA Liao, L., Zheng, Y., Lu, H., Liu, X., Chen, S., Yu, Y. Verifiable Deep Learning Inference on Heterogeneous Edge Devices with Trusted Execution Environment. | IEEE Sensors Journal, 2024, 24 (17), 1-1.


Monocular BEV Perception of Road Scenes Via Front-to-Top View Projection Scopus
Journal Article | 2024, 46 (9), 1-17 | IEEE Transactions on Pattern Analysis and Machine Intelligence

Abstract :

HD map reconstruction is crucial for autonomous driving. LiDAR-based methods are limited by expensive sensors and time-consuming computation. Camera-based methods usually need to perform road segmentation and view transformation separately, which often causes distortion and missing content. To push the limits of the technology, we present a novel framework that reconstructs a local map formed by road layout and vehicle occupancy in the bird's-eye view, given only a front-view monocular image. We propose a front-to-top view projection (FTVP) module, which takes the constraint of cycle consistency between views into account and makes full use of their correlation to strengthen view transformation and scene understanding. In addition, we apply multi-scale FTVP modules to propagate the rich spatial information of low-level features and mitigate spatial deviation of the predicted object locations. Experiments on public benchmarks show that our method handles road layout estimation, vehicle occupancy estimation, and multi-class semantic estimation at a performance level comparable to the state of the art, while maintaining superior efficiency.
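The cycle-consistency constraint can be sketched as a round-trip reconstruction error: map front-view features to the top view and back, then penalize the difference from the input. Here `f` and `g` are stand-ins for the learned projection networks; this is an illustrative sketch, not the paper's FTVP module.

```python
import numpy as np

def cycle_consistency(x, f, g):
    """Mean squared round-trip error: x -> f(x) (front-to-top)
    -> g(f(x)) (top-to-front) should reconstruct x."""
    return float(np.mean((g(f(x)) - x) ** 2))

x = np.random.default_rng(1).standard_normal((4, 4))  # toy front-view features
# A perfectly invertible pair of mappings yields zero cycle loss.
loss_identity = cycle_consistency(x, lambda v: 2 * v, lambda v: v / 2)
```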

Keyword :

Autonomous driving; BEV perception; Estimation; Feature extraction; Layout; Roads; segmentation; Task analysis; Three-dimensional displays; Transformers

Cite:

GB/T 7714 Liu, W., Li, Q., Yang, W. et al. Monocular BEV Perception of Road Scenes Via Front-to-Top View Projection [J]. | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46 (9): 1-17.
MLA Liu, W., et al. "Monocular BEV Perception of Road Scenes Via Front-to-Top View Projection." | IEEE Transactions on Pattern Analysis and Machine Intelligence 46.9 (2024): 1-17.
APA Liu, W., Li, Q., Yang, W., Cai, J., Yu, Y., Ma, Y. et al. Monocular BEV Perception of Road Scenes Via Front-to-Top View Projection. | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46 (9), 1-17.


Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement SCIE
Journal Article | 2024, 132 (11), 5030-5047 | INTERNATIONAL JOURNAL OF COMPUTER VISION

Abstract :

Ultra-high resolution image segmentation has attracted increasing interest in recent years due to its real-world applications. In this paper, we rework the widely used high-resolution image segmentation pipeline, in which an ultra-high resolution image is partitioned into regular patches for local segmentation and the local results are then merged into a high-resolution semantic mask. In particular, we introduce a novel locality-aware context fusion based segmentation model to process local patches, where the relevance between a local patch and its various contexts is jointly and complementarily utilized to handle semantic regions with large variations. Additionally, we present an alternating local enhancement module that restricts the negative impact of redundant information introduced from the contexts, and is thus able to refine the locality-aware features and produce refined results. Furthermore, in comprehensive experiments, we demonstrate that our model outperforms other state-of-the-art methods on public benchmarks and verify the effectiveness of the proposed modules. Our released code is available at: https://github.com/liqiokkk/FCtL.

Keyword :

Attention mechanism; Context-guided vision model; Geo-spatial image segmentation; Ultra-high resolution image segmentation

Cite:

GB/T 7714 Liu, Wenxi, Li, Qi, Lin, Xindai et al. Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement [J]. | INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (11): 5030-5047.
MLA Liu, Wenxi, et al. "Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement." | INTERNATIONAL JOURNAL OF COMPUTER VISION 132.11 (2024): 5030-5047.
APA Liu, Wenxi, Li, Qi, Lin, Xindai, Yang, Weixiang, He, Shengfeng, Yu, Yuanlong. Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement. | INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (11), 5030-5047.

Version:

Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement Scopus
Journal Article | 2024, 132 (11), 5030-5047 | International Journal of Computer Vision
Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement EI
Journal Article | 2024, 132 (11), 5030-5047 | International Journal of Computer Vision

Address: FZU Library (No. 2 Xuyuan Road, Fuzhou, Fujian, PRC; Post Code: 350116) | Contact: 0591-22865326
Copyright: FZU Library | Technical Support: Beijing Aegean Software Co., Ltd. | 闽ICP备05005463号-1