Query:
Scholar name: Yu Yuanlong (于元隆)
Abstract :
In real-world scenarios, missing views are common due to the complexity of data collection, so classifying incomplete multi-view data is unavoidable. Although substantial progress has been achieved, two challenging problems remain in incomplete multi-view classification: (1) Simply ignoring the missing views is often ineffective, especially under high missing rates, and can lead to incomplete analysis and unreliable results. (2) Most existing multi-view classification models focus primarily on maximizing consistency between different views; neglecting view-specific information may degrade performance. To solve these problems, we propose a novel framework called Trusted Cross-View Completion (TCVC) for incomplete multi-view classification. Specifically, TCVC consists of three modules: a Cross-view Feature Learning Module (CVFL), an Imputation Module (IM), and a Trusted Fusion Module (TFM). First, CVFL mines view-specific information to obtain cross-view reconstruction features. Then, IM restores each missing view by fusing the cross-view reconstruction features with weights guided by uncertainty-aware information, i.e., the quality assessment of the cross-view reconstruction features performed in TFM. Moreover, the recovered views are supervised in a cross-view neighborhood-aware manner. Finally, TFM fuses the completed data to generate trusted classification predictions. Extensive experiments show that our method is effective and robust.
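To make the uncertainty-guided imputation step concrete, here is a minimal sketch of how reconstructions of a missing view could be fused with uncertainty-derived weights (a hypothetical PyTorch illustration; the tensor names and the softmax weighting are assumptions for exposition, not the authors' implementation):

```python
import torch

def impute_missing_view(reconstructions: torch.Tensor,
                        uncertainties: torch.Tensor) -> torch.Tensor:
    """Fuse cross-view reconstructions of a missing view into one imputed view.

    reconstructions: (V, D) -- one reconstruction per available view.
    uncertainties:   (V,)   -- quality score per reconstruction (higher = less reliable).
    """
    # Lower uncertainty -> larger fusion weight; softmax keeps the weights normalized.
    weights = torch.softmax(-uncertainties, dim=0)                  # (V,)
    return (weights.unsqueeze(1) * reconstructions).sum(dim=0)      # (D,)

# Toy usage: three available views each propose a 4-dim reconstruction.
recs = torch.randn(3, 4)
unc = torch.tensor([0.2, 1.5, 0.7])
print(impute_missing_view(recs, unc))
```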
Keyword :
Cross-view feature learning; Incomplete multi-view classification; Uncertainty-aware
Cite:
GB/T 7714 | Zhou, Liping, Chen, Shiyun, Song, Peihuan, et al. Trusted Cross-view Completion for incomplete multi-view classification [J]. NEUROCOMPUTING, 2025, 629.
MLA | Zhou, Liping, et al. "Trusted Cross-view Completion for incomplete multi-view classification." NEUROCOMPUTING 629 (2025).
APA | Zhou, Liping, Chen, Shiyun, Song, Peihuan, Zheng, Qinghai, & Yu, Yuanlong. Trusted Cross-view Completion for incomplete multi-view classification. NEUROCOMPUTING, 2025, 629.
Abstract :
Multi-view clustering has attracted significant attention in recent years because it can leverage the consistent and complementary information of multiple views to improve clustering performance. However, effectively fusing this information and balancing the consistent and complementary information across views remain common challenges for multi-view clustering. Most existing multi-view fusion works focus on weighted-sum fusion and concatenation fusion, which cannot fully fuse the underlying information and do not consider balancing the consistent and complementary information of multiple views. To this end, we propose Cross-view Fusion for Multi-view Clustering (CFMVC). Specifically, CFMVC combines a deep neural network and a graph convolutional network for cross-view information fusion, fully fusing the feature information and structural information of multiple views. To balance the consistent and complementary information of multiple views, CFMVC enhances the correlation among the same samples to maximize consistent information while reinforcing the independence among different samples to maximize complementary information. Experimental results on several multi-view datasets demonstrate the effectiveness of CFMVC for the multi-view clustering task.
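As a rough illustration of balancing consistency and complementarity, the sketch below pulls together embeddings of the same sample from two views while pushing different samples toward independence (hypothetical PyTorch code; the loss form is an assumption for exposition, not CFMVC's exact objective):

```python
import torch
import torch.nn.functional as F

def cross_view_balance_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of the same N samples from two views."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t()                      # (N, N) cross-view cosine similarities
    pos = sim.diagonal()                   # same sample across views -> consistency
    off = sim - torch.diag_embed(pos)      # different samples -> complementarity
    # Maximize matched-pair similarity; drive mismatched pairs toward zero similarity.
    return (1.0 - pos).mean() + off.pow(2).mean()
```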
Keyword :
Cross-view; deep neural network; graph convolutional network; multi-view clustering; multi-view fusion
Cite:
GB/T 7714 | Huang, Zhijie, Huang, Binqiang, Zheng, Qinghai, et al. Cross-View Fusion for Multi-View Clustering [J]. IEEE SIGNAL PROCESSING LETTERS, 2025, 32: 621-625.
MLA | Huang, Zhijie, et al. "Cross-View Fusion for Multi-View Clustering." IEEE SIGNAL PROCESSING LETTERS 32 (2025): 621-625.
APA | Huang, Zhijie, Huang, Binqiang, Zheng, Qinghai, & Yu, Yuanlong. Cross-View Fusion for Multi-View Clustering. IEEE SIGNAL PROCESSING LETTERS, 2025, 32, 621-625.
Abstract :
For objects with arbitrary angles in optical remote sensing (RS) images, the oriented bounding box regression task often faces the problem of ambiguous boundaries between positive and negative samples. A statistical analysis of existing label assignment strategies reveals that anchors with low Intersection over Union (IoU) with the ground truth (GT) may still accurately surround the GT after decoding. Therefore, this article proposes an attention-based mean-max balance assignment (AMMBA) strategy, which consists of two parts: a mean-max balance assignment (MMBA) strategy and a balance feature pyramid with attention (BFPA). MMBA employs mean-max assignment (MMA) and balance assignment (BA) to dynamically calculate a positive threshold and adaptively match better positive samples to each GT for training. Meanwhile, to meet MMBA's need for more accurate feature maps, we construct a BFPA module that integrates spatial and scale attention mechanisms to promote global information propagation. Combined with S2ANet, our AMMBA method achieves state-of-the-art performance, with a precision of 80.91% on the DOTA dataset, in a simple plug-and-play fashion. Extensive experiments on three challenging optical RS image datasets (DOTA-v1.0, HRSC, and DIOR-R) further demonstrate the balance between precision and speed in single-stage object detectors. AMMBA has the potential to help existing RS models achieve better detection performance in a simple way. The code is available at https://github.com/promisekoloer/AMMBA.
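The dynamic positive threshold can be pictured as interpolating between the mean and max IoU of the candidate anchors for each ground truth (a hypothetical NumPy sketch; the balance factor `alpha` and the exact formula are illustrative assumptions, not the paper's definition):

```python
import numpy as np

def dynamic_positive_threshold(ious: np.ndarray, alpha: float = 0.5) -> float:
    """ious: IoUs between one ground-truth box and all candidate anchors."""
    # A per-GT threshold somewhere between the mean and the max IoU.
    return (1 - alpha) * ious.mean() + alpha * ious.max()

ious = np.array([0.05, 0.12, 0.33, 0.41, 0.58, 0.62])
thr = dynamic_positive_threshold(ious)
print(thr, ious >= thr)   # anchors above the threshold are taken as positives
```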
Keyword :
Accuracy; Attention feature fusion; Detectors; Feature extraction; label assignment; Location awareness; Object detection; optical remote sensing (RS) images; Optical scattering; oriented object detection; Remote sensing; Semantics; Shape; Training
Cite:
GB/T 7714 | Lin, Qifeng, Chen, Nuo, Huang, Haibin, et al. Attention-Based Mean-Max Balance Assignment for Oriented Object Detection in Optical Remote Sensing Images [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63.
MLA | Lin, Qifeng, et al. "Attention-Based Mean-Max Balance Assignment for Oriented Object Detection in Optical Remote Sensing Images." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025).
APA | Lin, Qifeng, Chen, Nuo, Huang, Haibin, Zhu, Daoye, Fu, Gang, Chen, Chuanxi, et al. Attention-Based Mean-Max Balance Assignment for Oriented Object Detection in Optical Remote Sensing Images. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63.
Abstract :
Objective: To meet needs arising in the execution of research projects, including task-scheduling optimization, real-time sharing of task data, work-efficiency analysis, and the mining of constraining factors, a big-data-driven research-process evaluation and analysis system, HTower, is designed. Methods: The system uses tasks as the basic unit for project scheduling and execution tracking. Through online collaborative editing of task execution plans and results, it solves the difficulty of collaboratively updating and sharing research-process data and reduces research time wasted on frequent meetings and instant messaging. Based on quantitative evaluation of work hours, work efficiency, and task progress, it mines the key factors constraining project execution efficiency and individual research efficiency, guiding project leaders and researchers toward higher research efficiency and better project quality. Results: HTower raises the early-completion rate of project tasks to 10.8% and the on-time rate to 79.5%; it can quantitatively analyze the factors constraining project execution efficiency and can also be used for quantitative evaluation of graduate students' research processes. Conclusion: Applying HTower not only improves project task scheduling and team research efficiency, but also helps identify the causes of graduate students' low research efficiency in a timely manner, optimize supervisors' academic guidance strategies, and promote the development of graduate students' research abilities.
Keyword :
task scheduling; collaborative editing; big data; research process evaluation; quantitative evaluation
Cite:
GB/T 7714 | 廖龙龙, 曾文滨, 方鑫, et al. 大数据驱动的科研过程评估分析系统 [J]. 河南科技学院学报(自然科学版), 2025, 53(3): 70-79.
MLA | 廖龙龙, et al. "大数据驱动的科研过程评估分析系统." 河南科技学院学报(自然科学版) 53.3 (2025): 70-79.
APA | 廖龙龙, 曾文滨, 方鑫, 郑志伟, & 于元隆. 大数据驱动的科研过程评估分析系统. 河南科技学院学报(自然科学版), 2025, 53(3), 70-79.
Abstract :
HD map reconstruction is crucial for autonomous driving. LiDAR-based methods are limited by expensive sensors and time-consuming computation. Camera-based methods usually need to perform road segmentation and view transformation separately, which often causes distortion and missing content. To push the limits of the technology, we present a novel framework that reconstructs a local map, formed by the road layout and vehicle occupancy in the bird's-eye view, given only a front-view monocular image. We propose a front-to-top view projection (FTVP) module, which takes the constraint of cycle consistency between views into account and makes full use of their correlation to strengthen view transformation and scene understanding. In addition, we apply multi-scale FTVP modules to propagate the rich spatial information of low-level features and mitigate spatial deviation of the predicted object locations. Experiments on public benchmarks show that our method handles road layout estimation, vehicle occupancy estimation, and multi-class semantic estimation at a performance level comparable to the state of the art, while maintaining superior efficiency.
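The cycle-consistency constraint between views can be summarized by the toy module below (hypothetical PyTorch code with plain linear projections standing in for the attention-based FTVP module; all names are illustrative):

```python
import torch
import torch.nn as nn

class FrontTopCycle(nn.Module):
    """Project front-view features to top view and back, penalizing the round-trip error."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.front_to_top = nn.Linear(dim, dim)   # stand-in for the FTVP projection
        self.top_to_front = nn.Linear(dim, dim)   # inverse projection used for the cycle

    def forward(self, front_feat: torch.Tensor):
        top_feat = self.front_to_top(front_feat)
        front_back = self.top_to_front(top_feat)
        cycle_loss = torch.mean((front_back - front_feat) ** 2)
        return top_feat, cycle_loss

top, loss = FrontTopCycle()(torch.randn(2, 64))
```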
Keyword :
autonomous driving; BEV perception; segmentation
Cite:
GB/T 7714 | Liu, Wenxi, Li, Qi, Yang, Weixiang, et al. Monocular BEV Perception of Road Scenes via Front-to-Top View Projection [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46(9): 6109-6125.
MLA | Liu, Wenxi, et al. "Monocular BEV Perception of Road Scenes via Front-to-Top View Projection." IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 46.9 (2024): 6109-6125.
APA | Liu, Wenxi, Li, Qi, Yang, Weixiang, Cai, Jiaxin, Yu, Yuanlong, Ma, Yuexin, et al. Monocular BEV Perception of Road Scenes via Front-to-Top View Projection. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46(9), 6109-6125.
Abstract :
With the growing significance of data privacy protection, Source-Free Domain Adaptation (SFDA) has gained attention as a research topic that aims to transfer knowledge from a labeled source domain to an unlabeled target domain without accessing source data. However, the absence of source data often leads to model collapse or restricts the performance improvements of SFDA methods, as there is insufficient true-labeled knowledge for each category. To tackle this, Source-Free Active Domain Adaptation (SFADA) has emerged as a new task that aims to improve SFDA by selecting a small set of informative target samples labeled by experts. Nevertheless, existing SFADA methods impose a significant burden on human labelers, requiring them to continuously label a substantial number of samples throughout the training period. In this paper, a novel approach is proposed to alleviate the labeling burden in SFADA by only necessitating the labeling of an extremely small number of samples on a one-time basis. Moreover, considering the inherent sparsity of these selected samples in the target domain, a Self-adaptive Clustering-based Active Learning (SCAL) method is proposed that propagates the labels of selected samples to other datapoints within the same cluster. To further enhance the accuracy of SCAL, a self-adaptive scale search method is devised that automatically determines the optimal clustering scale, using the entropy of the entire target dataset as a guiding criterion. The experimental evaluation presents compelling evidence of our method's supremacy. Specifically, it outstrips previous SFDA methods, delivering state-of-the-art (SOTA) results on standard benchmarks. Remarkably, it accomplishes this with less than 0.5% annotation cost, in stark contrast to the approximate 5% required by earlier techniques. The approach thus not only sets new performance benchmarks but also offers a markedly more practical and cost-effective solution for SFADA, making it an attractive choice for real-world applications where labeling resources are limited.
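A minimal sketch of the cluster-based label propagation idea, assuming k-means as the clustering step and `n_clusters` as the clustering scale the paper searches over (hypothetical scikit-learn/NumPy code, not the authors' SCAL implementation):

```python
import numpy as np
from sklearn.cluster import KMeans

def propagate_labels(features: np.ndarray, labeled_idx: np.ndarray,
                     labels: np.ndarray, n_clusters: int) -> np.ndarray:
    """Spread the labels of a few annotated samples to their cluster mates."""
    assign = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    pseudo = np.full(len(features), -1)           # -1 marks still-unlabeled points
    for idx, lab in zip(labeled_idx, labels):
        pseudo[assign == assign[idx]] = lab       # points in the same cluster inherit the label
    return pseudo
```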
Keyword :
computer vision; image recognition
Cite:
GB/T 7714 | Sun, Zhishu, Lin, Luojun, Yu, Yuanlong. You only label once: A self-adaptive clustering-based method for source-free active domain adaptation [J]. IET IMAGE PROCESSING, 2024, 18(5): 1268-1282.
MLA | Sun, Zhishu, et al. "You only label once: A self-adaptive clustering-based method for source-free active domain adaptation." IET IMAGE PROCESSING 18.5 (2024): 1268-1282.
APA | Sun, Zhishu, Lin, Luojun, & Yu, Yuanlong. You only label once: A self-adaptive clustering-based method for source-free active domain adaptation. IET IMAGE PROCESSING, 2024, 18(5), 1268-1282.
Abstract :
Ultra-high resolution image segmentation poses a formidable challenge for UAVs with limited computation resources. Moreover, with multiple deployed tasks (e.g., mapping, localization, and decision making), the demand for a memory-efficient model becomes more urgent. This letter delves into the intricate problem of achieving efficient and effective segmentation of ultra-high resolution UAV imagery while operating under a stringent GPU memory limitation. To address this problem, we propose a GPU memory-efficient and effective framework. Specifically, we introduce a novel and efficient spatial-guided high-resolution query module, which enables our model to effectively infer pixel-wise segmentation results by querying the nearest latent embeddings from low-resolution features. Additionally, we present a memory-based interaction scheme with linear complexity to rectify the semantic bias beneath the high-resolution spatial guidance by associating cross-image contextual semantics. For evaluation, we perform comprehensive experiments on public benchmarks under both small and large GPU memory usage limitations. Notably, our model gains around a 3% advantage over the SOTA in mIoU using comparable memory. Furthermore, we show that our model can be deployed on embedded platforms with less than 8 GB of memory, such as the Jetson TX2.
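The high-resolution query idea, where each output pixel fetches its nearest low-resolution latent embedding, could look roughly like this (hypothetical PyTorch sketch using grid_sample; the actual module additionally uses spatial guidance):

```python
import torch
import torch.nn.functional as F

def query_low_res_embeddings(low_res_feat: torch.Tensor,
                             query_xy: torch.Tensor) -> torch.Tensor:
    """Fetch the spatially nearest low-resolution embedding for each query pixel.

    low_res_feat: (C, h, w) feature map.
    query_xy:     (N, 2) query coordinates normalized to [-1, 1].
    Returns (N, C) embeddings.
    """
    grid = query_xy.view(1, 1, -1, 2)                               # (1, 1, N, 2)
    sampled = F.grid_sample(low_res_feat.unsqueeze(0), grid,
                            mode='nearest', align_corners=False)    # (1, C, 1, N)
    return sampled.squeeze(0).squeeze(1).t()                        # (N, C)
```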
Keyword :
Aerial Systems: Perception and Autonomy; Autonomous aerial vehicles; Deep Learning for Visual Perception; Graphics processing units; Image resolution; Memory management; Semantics; Semantic segmentation; Spatial resolution
Cite:
GB/T 7714 | Li, Qi, Cai, Jiaxin, Luo, Jiexin, et al. Memory-Constrained Semantic Segmentation for Ultra-High Resolution UAV Imagery [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9(2): 1708-1715.
MLA | Li, Qi, et al. "Memory-Constrained Semantic Segmentation for Ultra-High Resolution UAV Imagery." IEEE ROBOTICS AND AUTOMATION LETTERS 9.2 (2024): 1708-1715.
APA | Li, Qi, Cai, Jiaxin, Luo, Jiexin, Yu, Yuanlong, Gu, Jason, Pan, Jia, et al. Memory-Constrained Semantic Segmentation for Ultra-High Resolution UAV Imagery. IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9(2), 1708-1715.
Abstract :
Ultra-high resolution image segmentation has attracted increasing interest in recent years due to its realistic applications. In this paper, we innovate on the widely used high-resolution image segmentation pipeline, in which an ultra-high resolution image is partitioned into regular patches for local segmentation and the local results are then merged into a high-resolution semantic mask. In particular, we introduce a novel locality-aware context fusion based segmentation model to process local patches, where the relevance between a local patch and its various contexts is jointly and complementarily utilized to handle semantic regions with large variations. Additionally, we present an alternating local enhancement module that restricts the negative impact of redundant information introduced from the contexts and is thus able to refine the locality-aware features and produce improved results. Furthermore, in comprehensive experiments, we demonstrate that our model outperforms other state-of-the-art methods on public benchmarks and verify the effectiveness of the proposed modules. Our released code is available at: https://github.com/liqiokkk/FCtL.
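A toy rendering of the locality-aware context fusion, where each context feature is weighted by its relevance to the local patch before fusion (hypothetical PyTorch code; the real model applies attention over spatial feature maps rather than pooled vectors):

```python
import torch

def fuse_patch_with_context(patch_feat, context_feats):
    """patch_feat: (D,) pooled local-patch feature; context_feats: list of (D,) context features."""
    ctx = torch.stack(context_feats)                     # (K, D)
    relevance = torch.softmax(ctx @ patch_feat, dim=0)   # relevance of each context to the patch
    return patch_feat + (relevance.unsqueeze(1) * ctx).sum(dim=0)

fused = fuse_patch_with_context(torch.randn(8), [torch.randn(8) for _ in range(3)])
```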
Keyword :
Attention mechanism; Context-guided vision model; Geo-spatial image segmentation; Ultra-high resolution image segmentation
Cite:
GB/T 7714 | Liu, Wenxi, Li, Qi, Lin, Xindai, et al. Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132(11): 5030-5047.
MLA | Liu, Wenxi, et al. "Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement." INTERNATIONAL JOURNAL OF COMPUTER VISION 132.11 (2024): 5030-5047.
APA | Liu, Wenxi, Li, Qi, Lin, Xindai, Yang, Weixiang, He, Shengfeng, & Yu, Yuanlong. Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132(11), 5030-5047.
Abstract :
Brain functional connectivity has been widely explored to reveal the functional interaction dynamics between brain regions. However, conventional connectivity measures rely on deterministic models demanding application-specific empirical analysis, while deep learning approaches focus on finding discriminative features for state classification and have limited capability to capture interpretable connectivity characteristics. To address these challenges, this study proposes a self-supervised triplet network with depth-wise attention (TripletNet-DA) to generate functional connectivity: 1) TripletNet-DA first utilizes channel-wise transformations for temporal data augmentation, where correlated and uncorrelated sample pairs are constructed for self-supervised training; 2) a channel encoder is designed with a convolutional network to extract deep features, while a similarity estimator is employed to generate the similarity pairs and the functional connectivity representations; 3) TripletNet-DA applies a triplet loss with an anchor-negative similarity penalty for model training, where the similarities of uncorrelated sample pairs are minimized to enhance the model's learning capability. Experimental results on pathological EEG datasets (Autism Spectrum Disorder, Major Depressive Disorder) indicate that 1) TripletNet-DA outperforms state-of-the-art counterparts in both ASD discrimination and MDD classification, where the connectivity features in the beta and gamma bands respectively achieve accuracies of 97.05% and 98.32% for ASD discrimination, 89.88% and 91.80% for MDD classification in the eyes-closed condition, and 90.90% and 92.26% in the eyes-open condition; 2) TripletNet-DA uncovers significant differences in functional connectivity between ASD EEG and TD EEG, and the prominent connectivity links are in accordance with empirical findings, thus providing potential biomarkers for clinical ASD analysis.
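The triplet objective with an anchor-negative similarity penalty can be sketched as below (hypothetical PyTorch code; the margin and penalty weight are illustrative assumptions, not the paper's hyperparameters):

```python
import torch
import torch.nn.functional as F

def triplet_loss_with_neg_penalty(anchor, positive, negative,
                                  margin: float = 0.5, beta: float = 0.1):
    a, p, n = (F.normalize(x, dim=1) for x in (anchor, positive, negative))
    sim_ap = (a * p).sum(dim=1)                      # correlated (anchor-positive) similarity
    sim_an = (a * n).sum(dim=1)                      # uncorrelated (anchor-negative) similarity
    triplet = F.relu(sim_an - sim_ap + margin).mean()
    penalty = sim_an.pow(2).mean()                   # push uncorrelated-pair similarity toward zero
    return triplet + beta * penalty
```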
Keyword :
Analytical models; Brain functional connectivity; Brain modeling; Correlation; depth-wise attention; Electroencephalography; self-supervised learning; Task analysis; Time series analysis; Training; triplet network
Cite:
GB/T 7714 | Tang, Yunbo, Huang, Weirong, Liu, Rongchang, et al. Learning Interpretable Brain Functional Connectivity via Self-Supervised Triplet Network With Depth-Wise Attention [J]. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28(11): 6685-6698.
MLA | Tang, Yunbo, et al. "Learning Interpretable Brain Functional Connectivity via Self-Supervised Triplet Network With Depth-Wise Attention." IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS 28.11 (2024): 6685-6698.
APA | Tang, Yunbo, Huang, Weirong, Liu, Rongchang, & Yu, Yuanlong. Learning Interpretable Brain Functional Connectivity via Self-Supervised Triplet Network With Depth-Wise Attention. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28(11), 6685-6698.
Abstract :
Deep learning inference on edge devices is susceptible to security threats, particularly fault injection attacks (FIAs), which are easy to execute and pose a significant risk to inference. These attacks can alter the memory of the edge device or cause errors in instruction execution. In particular, time-intensive convolution computation is considerably vulnerable in deep learning inference at the edge. To detect and defend against attacks on deep learning inference on heterogeneous edge devices, we propose an efficient hardware-based solution for verifiable model inference named DarkneTV. It leverages an asynchronous mechanism to conduct hash checking of convolution weights and verification of convolution computations within the trusted execution environment (TEE) of the central processing unit (CPU) while the integrated graphics processing unit (GPU) runs model inference. It protects the integrity of convolution weights and the correctness of inference results, and effectively detects abnormal weight modifications and incorrect inference results regarding neural operators. Extensive experimental results show that DarkneTV identifies tiny FIAs against convolution weights and computation with over 99.03% accuracy and little extra time overhead. The asynchronous mechanism significantly improves the performance of verifiable inference. Typically, GPU-accelerated verifiable inference on the HiKey 960 achieves speedups of 8.50x-11.31x compared with the CPU-only mode.
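The weight-integrity check at the heart of the scheme amounts to comparing a cryptographic digest of the convolution weights against a trusted reference (a simplified sketch; in DarkneTV this check runs asynchronously inside the CPU TEE while the GPU performs inference):

```python
import hashlib
import numpy as np

def weight_digest(weights: np.ndarray) -> str:
    """SHA-256 digest of a convolution weight tensor."""
    return hashlib.sha256(weights.tobytes()).hexdigest()

def verify_weights(weights: np.ndarray, reference_digest: str) -> bool:
    """Detect tampering (e.g., a fault-injection bit flip) by comparing digests."""
    return weight_digest(weights) == reference_digest

w = np.random.rand(64, 3, 3, 3).astype(np.float32)
ref = weight_digest(w)
w_bad = w.copy()
w_bad[0, 0, 0, 0] += 1e-3          # a tiny injected fault flips the check
print(verify_weights(w, ref), verify_weights(w_bad, ref))   # True False
```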
Keyword :
Deep learning inference; fault injection attacks; heterogeneous edge devices; trusted execution environment (TEE); verifiable learning
Cite:
GB/T 7714 | Liao, Longlong, Zheng, Yuqiang, Lu, Hong, et al. Verifiable Deep Learning Inference on Heterogeneous Edge Devices With Trusted Execution Environment [J]. IEEE SENSORS JOURNAL, 2024, 24(17): 28351-28362.
MLA | Liao, Longlong, et al. "Verifiable Deep Learning Inference on Heterogeneous Edge Devices With Trusted Execution Environment." IEEE SENSORS JOURNAL 24.17 (2024): 28351-28362.
APA | Liao, Longlong, Zheng, Yuqiang, Lu, Hong, Liu, Xinqi, Chen, Shuguang, & Yu, Yuanlong. Verifiable Deep Learning Inference on Heterogeneous Edge Devices With Trusted Execution Environment. IEEE SENSORS JOURNAL, 2024, 24(17), 28351-28362.