Query:
Scholar name: 王舒 (Wang Shu)
Abstract :
Image super-resolution (SR) has recently gained traction in various fields, including remote sensing, biomedicine, and video surveillance. Nonetheless, most advances in SR have been achieved by scaling up convolutional neural network architectures, which inevitably increases computational complexity. In addition, most existing SR models struggle to capture high-frequency information effectively, resulting in overly smooth reconstructed images. To address this issue, we propose a lightweight Progressive Feature Aggregation Network (PFAN), which leverages Progressive Feature Aggregation Blocks to enhance different features through a progressive strategy. Specifically, we propose a Key Information Perception Module that captures high-frequency details across the spatial and channel dimensions to recover edge features. In addition, we design a Local Feature Enhancement Module, which effectively combines multi-scale convolutions for local feature extraction with a Transformer for long-range dependency modeling. Through the progressive fusion of rich edge details and texture features, PFAN achieves better reconstruction performance. Extensive experiments on five benchmark datasets demonstrate that PFAN outperforms state-of-the-art methods and strikes a better balance among SR performance, parameter count, and computational complexity. Code is available at https://github.com/handsomeyxk/PFAN.
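For readers who want a concrete picture of the progressive strategy described above, the following is a minimal, hypothetical PyTorch sketch of a block that first applies a key-information (channel/spatial) gate and then a multi-scale local enhancement step. The module names, channel widths, and fusion order are assumptions for illustration; the paper's actual KIPM/LFEM designs are not specified in this abstract.

```python
# Illustrative sketch only; not the authors' PFAN implementation.
import torch
import torch.nn as nn

class KeyInfoPerception(nn.Module):
    """Toy stand-in for a Key Information Perception Module:
    joint channel/spatial gating intended to emphasize high-frequency detail."""
    def __init__(self, channels):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return x * self.channel_gate(x) * self.spatial_gate(x)

class LocalFeatureEnhancement(nn.Module):
    """Toy stand-in for a Local Feature Enhancement Module:
    parallel multi-scale convolutions fused by a 1x1 projection."""
    def __init__(self, channels):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([self.branch3(x), self.branch5(x)], dim=1))

class ProgressiveFeatureAggregationBlock(nn.Module):
    """Refines features progressively: edge-oriented gating first, then
    local enhancement, with a residual connection at each stage."""
    def __init__(self, channels):
        super().__init__()
        self.kipm = KeyInfoPerception(channels)
        self.lfem = LocalFeatureEnhancement(channels)

    def forward(self, x):
        x = x + self.kipm(x)   # stage 1: recover edge / high-frequency cues
        x = x + self.lfem(x)   # stage 2: enrich local texture features
        return x

if __name__ == "__main__":
    block = ProgressiveFeatureAggregationBlock(64)
    print(block(torch.randn(1, 64, 48, 48)).shape)  # torch.Size([1, 64, 48, 48])
```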
Keyword :
CNN; Key information perception; Local feature enhancement; Progressive feature aggregation network; Super-resolution; Transformer
Cite:
GB/T 7714 | Chen, Liqiong, Yang, Xiangkun, Wang, Shu, et al. PFAN: progressive feature aggregation network for lightweight image super-resolution [J]. VISUAL COMPUTER, 2025.
MLA | Chen, Liqiong, et al. "PFAN: progressive feature aggregation network for lightweight image super-resolution." VISUAL COMPUTER (2025).
APA | Chen, Liqiong, Yang, Xiangkun, Wang, Shu, Shen, Ying, Wu, Jing, Huang, Feng, et al. PFAN: progressive feature aggregation network for lightweight image super-resolution. VISUAL COMPUTER, 2025.
Abstract :
Visual object tracking is crucial for unmanned aerial vehicles (UAVs). Despite substantial progress, most existing UAV trackers are designed for well-conditioned daytime data, while in challenging weather conditions, e.g. foggy or nighttime environments, the large domain gap leads to significant performance degradation. To address this issue, we propose a novel robust UAV tracker, LVPTrack, which conducts high-quality label-aligned visual prompt tuning to adapt to various challenging weather conditions. Specifically, we first synthesize sequential foggy and nighttime video frames to assist model training. A domain-adaptive teacher-student network is used to distill the hierarchical visual semantics of the target objects in cross-domain scenarios. We then propose a target-aware pseudo-label voting (PLV) strategy to alleviate target-level misalignment between the two domains. Furthermore, we propose a dynamic aggregated prompt (DAP) module to facilitate adaptation to appearance variations of the target object in challenging scenarios. Extensive experiments demonstrate that our tracker achieves superior performance over existing state-of-the-art UAV trackers. Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
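The pseudo-label voting idea can be illustrated with a small, hypothetical example: keep a teacher-predicted box only when enough other predictions (for instance, from different augmented views) agree with it by IoU. The thresholds and consensus rule below are assumptions, not the paper's PLV implementation.

```python
# Toy pseudo-label voting sketch; thresholds and rules are illustrative assumptions.
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def vote_pseudo_label(candidate_boxes, iou_thr=0.6, min_votes=2):
    """Keep a pseudo-box only if enough candidate predictions agree with it,
    and return the averaged (consensus) box for each survivor."""
    kept = []
    for i, box in enumerate(candidate_boxes):
        votes = [b for j, b in enumerate(candidate_boxes)
                 if j != i and iou(box, b) >= iou_thr]
        if len(votes) + 1 >= min_votes:
            kept.append(np.mean([box] + votes, axis=0))
    return kept

boxes = [(10, 10, 50, 50), (12, 11, 52, 49), (200, 200, 240, 240)]
print(vote_pseudo_label(boxes))  # the two overlapping boxes survive; the outlier is dropped
```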
Keyword :
Drones; Target drones
Cite:
GB/T 7714 | Wu, Hongjing, Yao, Siyuan, Huang, Feng, et al. LVPTrack: High Performance Domain Adaptive UAV Tracking with Label Aligned Visual Prompt Tuning [C]. 2025: 8395-8403.
MLA | Wu, Hongjing, et al. "LVPTrack: High Performance Domain Adaptive UAV Tracking with Label Aligned Visual Prompt Tuning." (2025): 8395-8403.
APA | Wu, Hongjing, Yao, Siyuan, Huang, Feng, Wang, Shu, Zhang, Linchao, Zheng, Zhuoran, et al. LVPTrack: High Performance Domain Adaptive UAV Tracking with Label Aligned Visual Prompt Tuning. (2025): 8395-8403.
Abstract :
Growing evidence highlights the roles of the glymphatic system and peripheral inflammation in Parkinson's disease (PD). We evaluated their interrelationship and potential mechanisms contributing to motor symptoms using DTI-ALPS and inflammatory markers (leukocyte, lymphocyte, and neutrophil counts, neutrophil-to-lymphocyte ratio [NLR], and platelet-to-lymphocyte ratio [PLR]) in 134 PD patients (52 tremor-dominant [TD], 62 postural instability and gait difficulty [PIGD]) and 81 healthy controls (HC, 33 with inflammatory markers). PD patients exhibited a lower DTI-ALPS index than HC (1.43 ± 0.19 vs. 1.52 ± 0.21, p = 0.001). DTI-ALPS was negatively correlated with NLR, PLR, and neutrophils in PD (all p < 0.05) and with neutrophils in PIGD (β = -0.043, p = 0.048), and positively correlated with lymphocytes in TD (β = 0.105, p = 0.034). DTI-ALPS mediated the relationship between peripheral inflammation (NLR and neutrophils) and the MDS-UPDRS III score in PD. Overall, glymphatic dysfunction correlates with peripheral inflammation and may mediate the effects of inflammation on motor symptoms in PD, with distinct inflammation profiles between TD and PIGD.
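As a rough illustration of the kind of mediation analysis reported (inflammation → glymphatic function → motor score), the sketch below estimates a product-of-coefficients indirect effect on synthetic data. The variable names, effect sizes, and simple OLS approach are assumptions; the study's actual statistical pipeline is not given in the abstract.

```python
# Synthetic-data mediation sketch; numbers are made up for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 134
nlr = rng.normal(2.0, 0.6, n)                            # neutrophil-to-lymphocyte ratio
alps = 1.6 - 0.05 * nlr + rng.normal(0, 0.1, n)          # DTI-ALPS index (mediator)
updrs = 30 - 8 * alps + 1.0 * nlr + rng.normal(0, 3, n)  # MDS-UPDRS III motor score

def ols(y, X):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(alps, [nlr])[1]          # path a: inflammation -> glymphatic index
b = ols(updrs, [alps, nlr])[1]   # path b: glymphatic index -> motor score (adjusted for NLR)
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
```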
Cite:
GB/T 7714 | Lin, Ruolan, Cai, Guoen, Chen, Ying, et al. Association of glymphatic system function with peripheral inflammation and motor symptoms in Parkinson's disease [J]. NPJ PARKINSONS DISEASE, 2025, 11(1).
MLA | Lin, Ruolan, et al. "Association of glymphatic system function with peripheral inflammation and motor symptoms in Parkinson's disease." NPJ PARKINSONS DISEASE 11.1 (2025).
APA | Lin, Ruolan, Cai, Guoen, Chen, Ying, Zheng, Jinmei, Wang, Shu, Xiao, Huinan, et al. Association of glymphatic system function with peripheral inflammation and motor symptoms in Parkinson's disease. NPJ PARKINSONS DISEASE, 2025, 11(1).
Abstract :
Polarization can improve the autonomous reconnaissance capability of unmanned aerial vehicles, but it is easily disturbed by variations in detection angle and target materials, which affects the robustness of polarization detection. In this paper, a real-time low-altitude camouflaged target detection algorithm, YOLO-Polarization, based on polarized images is proposed. A coded image fusing information from multiple polarization directions is used as input, a 3D convolution module is applied to extract the correlated features across the different polarization-direction images, and a feature enhancement module (FEM) is introduced to further enhance the multi-level features. In addition, a cross-level feature aggregation network is adopted to make full use of feature information at different scales and complete effective feature aggregation, and the detection results are finally output by combining multi-channel feature information. A dataset of polarized images of low-altitude camouflaged targets (PICO), covering 10 types of targets, is constructed. Experimental results on the PICO dataset show that the proposed method can effectively detect camouflaged targets, with mAP0.5:0.95 up to 52.0% and mAP0.5 up to 91.5%. The detection rate reaches 55.0 frames/s, which meets the requirement of real-time detection. © 2024 China Ordnance Industry Corporation. All rights reserved.
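To make the 3D-convolution step more concrete, here is a small, hypothetical PyTorch sketch that stacks images from several polarization directions along a depth axis and learns joint spatial-polarimetric features with Conv3d. The number of angles, kernel size, and the way the angle axis is collapsed are assumptions for illustration, not the paper's exact design.

```python
# Illustrative polarization 3D-convolution sketch; configuration values are assumed.
import torch
import torch.nn as nn

class PolarizationConv3D(nn.Module):
    """Treat the polarization directions (e.g. 0/45/90/135 degrees) as a depth
    axis and learn joint spatial-polarimetric features with Conv3d."""
    def __init__(self, out_channels=16):
        super().__init__()
        self.conv = nn.Conv3d(1, out_channels, kernel_size=(3, 3, 3), padding=(1, 1, 1))

    def forward(self, x):
        # x: (batch, num_angles, H, W) -> add a singleton channel dim for Conv3d
        x = x.unsqueeze(1)           # (batch, 1, num_angles, H, W)
        feat = self.conv(x)          # (batch, C, num_angles, H, W)
        return feat.mean(dim=2)      # collapse the angle axis -> (batch, C, H, W)

x = torch.randn(2, 4, 128, 128)      # four polarization-direction images
print(PolarizationConv3D()(x).shape)  # torch.Size([2, 16, 128, 128])
```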
Keyword :
Aircraft detection; Antennas; Deep learning; Feature extraction; Image enhancement; Polarization; Signal detection; Unmanned aerial vehicles (UAV)
Cite:
GB/T 7714 | Shen, Ying, Liu, Xiancai, Wang, Shu, et al. Real-time Detection of Low-altitude Camouflaged Targets Based on Polarization Encoded Images [J]. Acta Armamentarii, 2024, 45(5): 1374-1383.
MLA | Shen, Ying, et al. "Real-time Detection of Low-altitude Camouflaged Targets Based on Polarization Encoded Images." Acta Armamentarii 45.5 (2024): 1374-1383.
APA | Shen, Ying, Liu, Xiancai, Wang, Shu, Huang, Feng. Real-time Detection of Low-altitude Camouflaged Targets Based on Polarization Encoded Images. Acta Armamentarii, 2024, 45(5), 1374-1383.
Abstract :
Nuclei segmentation and classification play a crucial role in pathology diagnosis, enabling pathologists to analyze cellular characteristics accurately. Overlapping clustered nuclei, misdetection of small-scale nuclei, and misclassification induced by pleomorphic nuclei have long been major challenges in nuclei segmentation and classification. To this end, we introduce an auxiliary task of nuclei boundary-guided contrastive learning to enhance the representativeness and discriminative power of visual features, particularly for addressing the unclear contours of adherent nuclei and small nuclei. In addition, misclassifications resulting from pleomorphic nuclei often exhibit low classification confidence, indicating a high level of uncertainty. To mitigate misclassification, we capitalize on the characteristic clustering of similar cells and propose a locality-aware class embedding module, offering a regional perspective to capture category information. Moreover, we address uncertain classification in densely aggregated nuclei by designing a top-k uncertainty attention module that leverages deep features to enhance shallow features, thereby improving the learning of contextual semantic information. We demonstrate that the proposed network outperforms off-the-shelf methods in both nuclei segmentation and classification experiments, achieving state-of-the-art performance. © 2024 Elsevier Ltd
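The top-k uncertainty idea can be sketched as follows: compute per-pixel entropy from class logits and select the k most uncertain positions so that deeper features can be used to refine them. The entropy criterion and the value of k are assumptions; the paper's attention-module design is not detailed in this abstract.

```python
# Toy top-k uncertainty selection; criterion and k are illustrative assumptions.
import torch
import torch.nn.functional as F

def topk_uncertain_positions(logits, k=100):
    """logits: (num_classes, H, W). Returns flat indices of the k highest-entropy pixels."""
    probs = F.softmax(logits, dim=0)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=0)  # (H, W)
    return torch.topk(entropy.flatten(), k).indices

logits = torch.randn(5, 64, 64)              # e.g. 5 nucleus classes on a 64x64 map
idx = topk_uncertain_positions(logits, k=10)
print(idx.shape)                              # torch.Size([10])
```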
Keyword :
Classification (of information); Computer aided diagnosis; Deep learning; Image classification; Semantics; Semantic segmentation
Cite:
GB/T 7714 | Liu, Wenxi, Zhang, Qing, Li, Qi, et al. Contrastive and uncertainty-aware nuclei segmentation and classification [J]. Computers in Biology and Medicine, 2024, 178.
MLA | Liu, Wenxi, et al. "Contrastive and uncertainty-aware nuclei segmentation and classification." Computers in Biology and Medicine 178 (2024).
APA | Liu, Wenxi, Zhang, Qing, Li, Qi, Wang, Shu. Contrastive and uncertainty-aware nuclei segmentation and classification. Computers in Biology and Medicine, 2024, 178.
Abstract :
Camouflaged target detection aims to detect targets that blend into their surroundings, but RGB imagery has difficulty distinguishing targets from backgrounds. While methods using multispectral images (MSI) can separate targets from the background via spectral information, they are limited by imaging speed, resolution, and high cost for camouflaged target detection. Here, we propose a novel camouflaged target detection workflow based on MSI reconstructed from RGB images. Specifically, we propose a spectral reconstruction model, S2HFormer, which uses a deep neural network to fit the mapping from RGB images to MSI without additional information. The MSI reconstructed by S2HFormer achieves higher accuracy in both reconstruction and target detection, outperforming existing methods. Furthermore, we integrate a spectral band selection algorithm to optimize the number of bands used, improving detection efficiency. Experimental results show that the proposed method acquires MSI at 55 frames per second (FPS), well above the real-time threshold of 24 FPS, and achieves an F-score of 0.925. The evaluation indicates the effectiveness and efficiency of our method for camouflaged target detection.
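The band-selection step can be illustrated with a simple, hypothetical greedy rule that keeps bands with low correlation to those already selected. The starting criterion, greedy scoring, and band count below are assumptions for illustration, not the authors' algorithm.

```python
# Toy correlation-based band selection; the greedy rule is an illustrative assumption.
import numpy as np

def greedy_band_selection(cube, num_bands=12):
    """cube: (bands, H, W) multispectral image. Greedily pick bands whose
    maximum correlation with already-selected bands is smallest."""
    bands, h, w = cube.shape
    flat = cube.reshape(bands, -1)
    corr = np.abs(np.corrcoef(flat))               # (bands, bands) correlation magnitude
    selected = [int(np.argmax(flat.var(axis=1)))]  # start from the highest-variance band
    while len(selected) < num_bands:
        remaining = [b for b in range(bands) if b not in selected]
        scores = [corr[b, selected].max() for b in remaining]
        selected.append(remaining[int(np.argmin(scores))])
    return sorted(selected)

cube = np.random.rand(31, 32, 32)   # e.g. a 31-band reconstructed MSI
print(greedy_band_selection(cube, num_bands=12))
```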
Keyword :
Camouflaged target detection; Deep learning; Multispectral; Remote sensing; Spectral reconstruction
Cite:
GB/T 7714 | Wang, Shu, Xu, Yixuan, Zeng, Dawei, et al. Deep learning-based spectral reconstruction in camouflaged target detection [J]. INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2024, 126.
MLA | Wang, Shu, et al. "Deep learning-based spectral reconstruction in camouflaged target detection." INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION 126 (2024).
APA | Wang, Shu, Xu, Yixuan, Zeng, Dawei, Huang, Feng, Liang, Lingyu. Deep learning-based spectral reconstruction in camouflaged target detection. INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2024, 126.
Abstract :
Camouflaged people are extremely expert in actively concealing themselves by effectively utilizing cover and the surrounding environment. Despite advancements in optical detection capabilities through imaging systems, including spectral, polarization, and infrared technologies, there is still a lack of effective real-time methods for accurately detecting small-size, highly efficient camouflaged people in complex real-world scenes. Here, this study proposes a snapshot multispectral image-based camouflaged detection model, multispectral YOLO (MS-YOLO), which utilizes the SPD-Conv and SimAM modules to effectively represent targets and suppress background interference by exploiting the spatial-spectral target information. Besides, the study constructs the first real-shot multispectral camouflaged people dataset (MSCPD), which encompasses diverse scenes, target scales, and attitudes. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. Through experiments on the MSCPD, MS-YOLO achieves a mean Average Precision of 94.31% and real-time detection at 65 frames per second, which confirms the effectiveness and efficiency of our method in detecting camouflaged people in various typical desert and forest scenes. Our approach offers valuable support to improve the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel on the battlefield.
Cite:
GB/T 7714 | Shu Wang, Dawei Zeng, Yixuan Xu, et al. Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images [J]. 防务技术, 2024, 34(4): 269-281.
MLA | Shu Wang, et al. "Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images." 防务技术 34.4 (2024): 269-281.
APA | Shu Wang, Dawei Zeng, Yixuan Xu, Gonghan Yang, Feng Huang, Liqiong Chen. Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images. 防务技术, 2024, 34(4), 269-281.
Abstract :
Camouflaged people are extremely expert in actively concealing themselves by effectively utilizing cover and the surrounding environment. Despite advancements in optical detection capabilities through imaging systems, including spectral, polarization, and infrared technologies, there is still a lack of effective real-time methods for accurately detecting small-size, highly efficient camouflaged people in complex real-world scenes. Here, this study proposes a snapshot multispectral image-based camouflaged detection model, multispectral YOLO (MS-YOLO), which utilizes the SPD-Conv and SimAM modules to effectively represent targets and suppress background interference by exploiting the spatial-spectral target information. Besides, the study constructs the first real-shot multispectral camouflaged people dataset (MSCPD), which encompasses diverse scenes, target scales, and attitudes. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. Through experiments on the MSCPD, MS-YOLO achieves a mean Average Precision of 94.31% and real-time detection at 65 frames per second, which confirms the effectiveness and efficiency of our method in detecting camouflaged people in various typical desert and forest scenes. Our approach offers valuable support to improve the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel on the battlefield. (c) 2023 China Ordnance Society. Publishing services by Elsevier B.V. on behalf of KeAi Communications Co. Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
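As context for the SPD-Conv module mentioned above, the sketch below shows the general space-to-depth-plus-convolution idea: rearrange pixels into channels instead of strided downsampling, so small-target detail is not discarded. Channel sizes and the 12-band input are illustrative assumptions, not the authors' exact MS-YOLO configuration.

```python
# Generic SPD-Conv-style downsampling sketch; sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # space-to-depth: (B, C, H, W) -> (B, 4C, H/2, W/2), no information discarded
        self.space_to_depth = nn.PixelUnshuffle(downscale_factor=2)
        self.conv = nn.Conv2d(4 * in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.space_to_depth(x))

x = torch.randn(1, 12, 256, 256)    # e.g. a 12-band multispectral input
print(SPDConv(12, 64)(x).shape)      # torch.Size([1, 64, 128, 128])
```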
Keyword :
Camouflaged people detection; Complex remote sensing scenes; MS-YOLO; Optimal band selection; Snapshot multispectral imaging
Cite:
GB/T 7714 | Wang, Shu, Zeng, Dawei, Xu, Yixuan, et al. Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images [J]. DEFENCE TECHNOLOGY, 2024, 34: 269-281.
MLA | Wang, Shu, et al. "Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images." DEFENCE TECHNOLOGY 34 (2024): 269-281.
APA | Wang, Shu, Zeng, Dawei, Xu, Yixuan, Yang, Gonghan, Huang, Feng, Chen, Liqiong. Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images. DEFENCE TECHNOLOGY, 2024, 34, 269-281.
Abstract :
Ductal carcinoma in situ with microinvasion (DCISM) is a challenging subtype of breast cancer with controversial invasiveness and prognosis. Accurate diagnosis of DCISM from ductal carcinoma in situ (DCIS) is crucial for optimal treatment and improved clinical outcomes. However, there are often some suspicious small cancer nests in DCIS, and it is difficult to diagnose the presence of intact myoepithelium by conventional hematoxylin and eosin (H&E) stained images. Although a variety of biomarkers are available for immunohistochemical (IHC) staining of myoepithelial cells, no single biomarker is consistently sensitive to all tumor lesions. Here, we introduced a new diagnostic method that provides rapid and accurate diagnosis of DCISM using multiphoton microscopy (MPM). Suspicious foci in H&E-stained images were labeled as regions of interest (ROIs), and the nuclei within these ROIs were segmented using a deep learning model. MPM was used to capture images of the ROIs in H&E-stained sections. The intensity of two-photon excitation fluorescence (TPEF) in the myoepithelium was significantly different from that in tumor parenchyma and tumor stroma. Through the use of MPM, the myoepithelium and basement membrane can be easily observed via TPEF and second-harmonic generation (SHG), respectively. By fusing the nuclei in H&E-stained images with MPM images, DCISM can be differentiated from suspicious small cancer clusters in DCIS. The proposed method demonstrated good consistency with the cytokeratin 5/6 (CK5/6) myoepithelial staining method (kappa coefficient = 0.818). Accurate distinction between ductal carcinoma in situ with microinvasion (DCISM) and ductal carcinoma in situ (DCIS) is crucial for optimal treatment and improved clinical outcomes. However, current diagnostic methods are often unreliable or time-consuming. Here, the authors present a novel diagnostic method that allows rapid and accurate diagnosis of DCISM by fusing multiphoton microscopy images with H&E-stained nuclear images. Myoepithelium and basement membrane can be visualized directly on H&E-stained sections without the need for immunohistochemical staining. This approach could facilitate the clinical diagnosis of DCISM, and has the potential to optimize risk stratification and improve prognosis in DCIS patients.
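The reported agreement with CK5/6 staining (kappa coefficient = 0.818) refers to Cohen's kappa; the toy example below shows how that statistic is computed. The label sequences are made up purely to demonstrate the arithmetic and are not the study's data.

```python
# Cohen's kappa on made-up labels, to illustrate the agreement statistic only.
import numpy as np

mpm_fusion_call = ["DCISM", "DCIS", "DCISM", "DCIS", "DCISM", "DCIS"]
ck56_staining   = ["DCISM", "DCIS", "DCISM", "DCIS", "DCIS",  "DCIS"]

labels = sorted(set(mpm_fusion_call) | set(ck56_staining))
n = len(mpm_fusion_call)
po = np.mean([a == b for a, b in zip(mpm_fusion_call, ck56_staining)])  # observed agreement
pe = sum((mpm_fusion_call.count(l) / n) * (ck56_staining.count(l) / n)  # chance agreement
         for l in labels)
kappa = (po - pe) / (1 - pe)
print(f"kappa = {kappa:.3f}")  # 0.667 for these toy labels
```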
Keyword :
basement membrane; breast cancer; ductal carcinoma in situ; ductal carcinoma in situ with microinvasion; image fusion; multiphoton microscopy; myoepithelium
Cite:
GB/T 7714 | Han, Xiahui, Liu, Yulan, Zhang, Shichao, et al. Improving the diagnosis of ductal carcinoma in situ with microinvasion without immunohistochemistry: An innovative method with H&E-stained and multiphoton microscopy images [J]. INTERNATIONAL JOURNAL OF CANCER, 2024, 154(10): 1802-1813.
MLA | Han, Xiahui, et al. "Improving the diagnosis of ductal carcinoma in situ with microinvasion without immunohistochemistry: An innovative method with H&E-stained and multiphoton microscopy images." INTERNATIONAL JOURNAL OF CANCER 154.10 (2024): 1802-1813.
APA | Han, Xiahui, Liu, Yulan, Zhang, Shichao, Li, Lianhuang, Zheng, Liqin, Qiu, Lida, et al. Improving the diagnosis of ductal carcinoma in situ with microinvasion without immunohistochemistry: An innovative method with H&E-stained and multiphoton microscopy images. INTERNATIONAL JOURNAL OF CANCER, 2024, 154(10), 1802-1813.
Abstract :
Diagnostic pathology, historically dependent on visual scrutiny by experts, is essential for disease detection. Advances in digital pathology and developments in computer vision technology have led to the application of artificial intelligence (AI) in this field. Despite these advancements, variability in pathologists' subjective interpretations of diagnostic criteria can lead to inconsistent outcomes. To meet the need for precision in cancer therapies, there is an increasing demand for accurate pathological diagnoses. Consequently, traditional diagnostic pathology is evolving towards "next-generation diagnostic pathology", prioritizing the development of a multi-dimensional, intelligent diagnostic approach. Using nonlinear optical effects arising from the interaction of light with biological tissues, multiphoton microscopy (MPM) enables high-resolution label-free imaging of multiple intrinsic components across various human pathological tissues. AI-empowered MPM further improves the accuracy and efficiency of diagnosis, holding promise for providing auxiliary pathology diagnostic methods based on multiphoton diagnostic criteria. In this review, we systematically outline the applications of MPM in pathological diagnosis across various human diseases and summarize common multiphoton diagnostic features. Moreover, we examine the significant role of AI in enhancing multiphoton pathological diagnosis, including aspects such as image preprocessing, refined differential diagnosis, and the prognostication of outcomes. We also discuss the challenges and prospects facing the integration of MPM and AI, encompassing equipment, datasets, analytical models, and integration into existing clinical pathways. Finally, the review explores the synergy between AI and label-free MPM to forge novel diagnostic frameworks, aiming to accelerate the adoption and implementation of intelligent multiphoton pathology systems in clinical settings. AI-empowered multiphoton microscopy enhances diagnostic accuracy and efficiency for various human diseases, evolving towards next-generation diagnostic pathology with an endogenous, multi-dimensional, and intelligent approach.
Cite:
GB/T 7714 | Wang, Shu, Pan, Junlin, Zhang, Xiao, et al. Towards next-generation diagnostic pathology: AI-empowered label-free multiphoton microscopy [J]. LIGHT-SCIENCE & APPLICATIONS, 2024, 13(1).
MLA | Wang, Shu, et al. "Towards next-generation diagnostic pathology: AI-empowered label-free multiphoton microscopy." LIGHT-SCIENCE & APPLICATIONS 13.1 (2024).
APA | Wang, Shu, Pan, Junlin, Zhang, Xiao, Li, Yueying, Liu, Wenxi, Lin, Ruolan, et al. Towards next-generation diagnostic pathology: AI-empowered label-free multiphoton microscopy. LIGHT-SCIENCE & APPLICATIONS, 2024, 13(1).