Query:
Scholar name: Yang Mingjing
Abstract :
Unsupervised domain adaptation (UDA) aims to mitigate the performance drop of models tested on a target domain, caused by the domain shift between the source and target domains. Most UDA segmentation methods focus on the single-source scenario. However, in practice, data with gold-standard annotations may be available from multiple sources (domains), and such multi-source training data can provide more information for knowledge transfer; how best to exploit them for domain adaptation remains underexplored. This work investigates multi-source UDA and proposes a new framework for medical image segmentation. First, we employ a multi-level adversarial learning scheme to adapt features at different levels between each source domain and the target, improving segmentation performance. Then, we propose a multi-model consistency loss to transfer the learned multi-source knowledge to the target domain simultaneously. Finally, we validate the proposed framework on two applications, i.e., multi-modality cardiac segmentation and cross-modality liver segmentation. The results show that our method delivers promising performance and compares favorably to state-of-the-art approaches. © 2023 IEEE.
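The multi-model consistency loss described in this abstract can be sketched as follows; the paper does not spell out the exact form here, so the pairwise mean-squared-error formulation and all names below are illustrative assumptions:

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_model_consistency(logits_per_source):
    """Mean pairwise MSE between softmax maps of the per-source models.

    `logits_per_source`: list of (H, W, C) logit maps, one per
    source-domain segmentation model, all computed on the same
    unlabeled target-domain image. Returns 0 when all models agree.
    """
    probs = [softmax(l) for l in logits_per_source]
    total, pairs = 0.0, 0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            total += float(np.mean((probs[i] - probs[j]) ** 2))
            pairs += 1
    return total / max(pairs, 1)
```

Minimizing this term pushes the source-specific models toward a shared prediction on target images, which is the mechanism the abstract refers to as transferring multi-source knowledge "simultaneously".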
Keyword :
Job analysis; Knowledge management; Medical imaging; Semantics; Semantic Segmentation
Cite:
GB/T 7714 | Pei, Chenhao, Wu, Fuping, Yang, Mingjing, et al. Multi-Source Domain Adaptation for Medical Image Segmentation [J]. IEEE Transactions on Medical Imaging, 2024, 43(4): 1640-1651.
MLA | Pei, Chenhao, et al. "Multi-Source Domain Adaptation for Medical Image Segmentation." IEEE Transactions on Medical Imaging 43.4 (2024): 1640-1651.
APA | Pei, Chenhao, Wu, Fuping, Yang, Mingjing, Pan, Lin, Ding, Wangbin, Dong, Jinwei, et al. Multi-Source Domain Adaptation for Medical Image Segmentation. IEEE Transactions on Medical Imaging, 2024, 43(4), 1640-1651.
Abstract :
Abdominal organ segmentation helps doctors observe abdominal organ structure and tissue lesions more intuitively, thereby improving the accuracy of disease diagnosis. Accurate segmentation results provide valuable information for clinical diagnosis and follow-up, such as organ size, location, boundary status, and the spatial relationship of multiple organs. Manual labels are precious and difficult to obtain in medical segmentation, so the use of pseudo-labels is an attractive alternative. In this paper, we demonstrate that pseudo-labels are beneficial for enriching the learning samples and enhancing the model's feature-learning ability for abdominal organs and tumors. We propose a semi-supervised parallel segmentation model that simultaneously aggregates local and global information using parallel CNN and Transformer modules at high scales. A two-stage strategy and a lightweight network make our model extremely efficient. Our method achieved average DSC scores of 89.75% and 3.78% for the organs and tumors, respectively, on the testing set. The average NSD scores were 93.51% and 1.82% for the organs and tumors, respectively. The average running time was 14.85 s, and the area under the GPU memory-time curve was 15963 MB. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
Keyword :
Abdominal organ and tumor segmentation; Hybrid architecture; Pseudo-label
Cite:
GB/T 7714 | Chen, Y., Wu, Z., Chen, H., et al. Conformer: A Parallel Segmentation Network Combining Swin Transformer and Convolutional Neutral Network [unknown].
MLA | Chen, Y., et al. "Conformer: A Parallel Segmentation Network Combining Swin Transformer and Convolutional Neutral Network." [unknown].
APA | Chen, Y., Wu, Z., Chen, H., Yang, M. Conformer: A Parallel Segmentation Network Combining Swin Transformer and Convolutional Neutral Network [unknown].
Abstract :
Accurate and efficient segmentation of multiple abdominal organs from medical images is crucial for clinical applications such as disease diagnosis and treatment planning. In this paper, we propose a novel approach for abdominal organ segmentation using the U-Net architecture. Our method addresses the challenges posed by anatomical variations and the proximity of organs in the abdominal region. To improve segmentation accuracy, we introduce an attention mechanism into the U-Net architecture. This mechanism allows the network to focus on salient regions and suppress irrelevant background regions, enhancing overall segmentation performance. Additionally, we incorporate 3D information by connecting three consecutive slices as 3-dimensional inputs. This enables us to exploit the spatial context across the slices while minimizing the increase in GPU memory usage. We evaluate our proposed method on the MICCAI FLARE 2023 validation dataset, achieving a mean DSC of 0.3683 and a mean NSD of 0.3668. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
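The 2.5D input construction described above — three consecutive slices stacked as channels — can be sketched as follows (the function name and the edge-padding choice at volume borders are illustrative assumptions, not details from the paper):

```python
import numpy as np

def make_25d_inputs(volume):
    """Turn a (N, H, W) volume into (N, 3, H, W) network inputs.

    Each sample stacks a slice with its two neighbors as channels,
    giving the 2D network some through-plane context at almost no
    extra GPU memory cost. Border slices reuse their own values
    via edge padding.
    """
    padded = np.pad(volume, ((1, 1), (0, 0), (0, 0)), mode="edge")
    return np.stack([padded[i:i + 3] for i in range(volume.shape[0])])
```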
Keyword :
attention mechanism; organ segmentation; U-Net
Cite:
GB/T 7714 | Lei, R., Yang, M. 2.5D U-Net for Abdominal Multi-organ Segmentation [unknown].
MLA | Lei, R., et al. "2.5D U-Net for Abdominal Multi-organ Segmentation." [unknown].
APA | Lei, R., Yang, M. 2.5D U-Net for Abdominal Multi-organ Segmentation [unknown].
Abstract :
Given the diversity of medical images, traditional image segmentation models face the issue of domain shift. Unsupervised domain adaptation (UDA) methods have emerged as a pivotal strategy for cross-modality analysis. These methods typically utilize generative adversarial networks (GANs) for both image-level and feature-level domain adaptation through the transformation and reconstruction of images, assuming the features between domains are well-aligned. However, this assumption falters when there are significant gaps between medical image modalities, such as MRI and CT. These gaps hinder the effective training of segmentation networks with cross-modality images and can lead to misleading training guidance and instability. To address these challenges, this paper introduces a novel approach comprising a cross-modality feature alignment sub-network and a cross-pseudo-supervised dual-stream segmentation sub-network. These components work together to bridge domain discrepancies more effectively and ensure a stable training environment. The feature alignment sub-network is designed for the bidirectional alignment of features between the source and target domains, incorporating a self-attention module to aid in learning structurally consistent and relevant information. The segmentation sub-network leverages an enhanced cross-pseudo-supervised loss to harmonize the output of the two segmentation networks, assessing pseudo-distances between domains to improve the pseudo-label quality and thus enhance the overall learning efficiency of the framework. The method's effectiveness is demonstrated by notable gains in segmentation precision on the target domains of abdominal and brain tasks.
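The cross-pseudo-supervision idea underlying the segmentation sub-network can be illustrated with the plain (unenhanced) form of the loss, in which each stream is trained on the other stream's argmax pseudo-labels; the paper's pseudo-distance weighting is not reproduced here, and all names are illustrative:

```python
import numpy as np

def cross_entropy(probs, labels, eps=1e-8):
    """Mean pixel-wise cross-entropy of (H, W, C) probabilities
    against (H, W) integer labels."""
    h, w = labels.shape
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-np.log(np.clip(picked, eps, 1.0)).mean())

def cross_pseudo_loss(p1, p2):
    """Plain cross-pseudo supervision: stream 1 is supervised by
    stream 2's hard pseudo-labels and vice versa."""
    y1 = p1.argmax(axis=-1)
    y2 = p2.argmax(axis=-1)
    return cross_entropy(p1, y2) + cross_entropy(p2, y1)
```

The loss vanishes when both streams agree confidently, so minimizing it drives the two networks toward consistent segmentations of the unlabeled target images.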
Keyword :
cross modality segmentation; cross pseudo supervision; feature alignment; unsupervised domain adaptation
Cite:
GB/T 7714 | Yang, Mingjing, Wu, Zhicheng, Zheng, Hanyu, et al. Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning [J]. Diagnostics, 2024, 14(16).
MLA | Yang, Mingjing, et al. "Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning." Diagnostics 14.16 (2024).
APA | Yang, Mingjing, Wu, Zhicheng, Zheng, Hanyu, Huang, Liqin, Ding, Wangbin, Pan, Lin, et al. Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning. Diagnostics, 2024, 14(16).
Abstract :
Pedestrian Attribute Recognition (PAR) involves identifying the attributes of individuals in person images. Existing PAR methods typically rely on CNNs as the backbone network to extract pedestrian features. However, CNNs process only one adjacent region at a time, leading to the loss of long-range inter-relations between different attribute-specific regions. To address this limitation, we leverage the Vision Transformer (ViT) instead of CNNs as the backbone for PAR, aiming to model long-range relations and extract more robust features. However, PAR suffers from an inherent attribute imbalance issue, causing ViT to naturally focus more on attributes that appear frequently in the training set and underrepresent attributes that appear less frequently. The native features extracted by ViT cannot compensate for this imbalanced attribute distribution. To tackle this issue, we propose two novel components: the Selective Feature Activation Method (SFAM) and the Orthogonal Feature Activation Loss. SFAM suppresses the more informative attribute-specific features, compelling the PAR model to capture discriminative features from regions that are easily overlooked. The proposed loss enforces an orthogonal constraint on the original feature extracted by ViT and the suppressed features from SFAM, promoting the complementarity of features in space. We conduct experiments on several benchmark PAR datasets, including PETA, PA100K, RAPv1, and RAPv2, demonstrating the effectiveness of our method. Specifically, our method outperforms existing state-of-the-art approaches, including GRL, IAA-Caps, ALM, and SSC, in terms of mA on the four datasets. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
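One common way to realize the orthogonality constraint described above is to penalize the squared cosine similarity between the original and the suppressed feature vectors; this sketch is an assumption about the loss form, not the paper's implementation:

```python
import numpy as np

def orthogonal_activation_loss(feat, feat_suppressed, eps=1e-8):
    """Squared cosine similarity between the original ViT feature and
    the SFAM-suppressed feature. The loss is 0 when the two vectors
    are exactly orthogonal, i.e. fully complementary in feature space,
    and 1 when they are collinear."""
    f = feat / (np.linalg.norm(feat) + eps)
    g = feat_suppressed / (np.linalg.norm(feat_suppressed) + eps)
    return float(np.dot(f, g) ** 2)
```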
Keyword :
Artificial intelligence; Chemical activation
Cite:
GB/T 7714 | Wu, Junyi, Huang, Yan, Gao, Min, et al. Selective and Orthogonal Feature Activation for Pedestrian Attribute Recognition [C]. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol 38 No 6, 2024: 6039-6047.
MLA | Wu, Junyi, et al. "Selective and Orthogonal Feature Activation for Pedestrian Attribute Recognition." Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol 38 No 6 (2024): 6039-6047.
APA | Wu, Junyi, Huang, Yan, Gao, Min, Niu, Yuzhen, Yang, Mingjing, Gao, Zhipeng, et al. Selective and Orthogonal Feature Activation for Pedestrian Attribute Recognition. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol 38 No 6, 2024: 6039-6047.
Abstract :
Assessment of myocardial viability is essential in the diagnosis and treatment management of patients suffering from myocardial infarction, and classification of pathology on the myocardium is the key to this assessment. This work defines a new task of medical image analysis, i.e., to perform myocardial pathology segmentation (MyoPS) combining three-sequence cardiac magnetic resonance (CMR) images, which was first proposed in the MyoPS challenge, held in conjunction with MICCAI 2020. Note that MyoPS refers to both myocardial pathology segmentation and the challenge in this paper. The challenge provided 45 paired and pre-aligned CMR images, allowing algorithms to combine the complementary information from the three CMR sequences for pathology segmentation. In this article, we provide details of the challenge, survey the works from fifteen participants and interpret their methods according to five aspects, i.e., preprocessing, data augmentation, learning strategy, model architecture and post-processing. In addition, we analyze the results with respect to different factors, in order to examine the key obstacles and explore the potential of solutions, as well as to provide a benchmark for future research. The average Dice scores of submitted algorithms were 0.614 +/- 0.231 and 0.644 +/- 0.153 for myocardial scars and edema, respectively. We conclude that while promising results have been reported, the research is still in the early stage, and more in-depth exploration is needed before a successful application to the clinic. The MyoPS data and evaluation tool remain publicly available upon registration via the challenge homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/myops20/).
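The reported results use the standard Dice coefficient, 2|A∩B| / (|A| + |B|); a minimal implementation for binary masks:

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for binary masks.

    `pred` and `gt` are arrays of the same shape; any nonzero value
    counts as foreground. Returns 1.0 for a perfect match and 0.0
    for disjoint masks.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum() + eps))
```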
Keyword :
Benchmark; Cardiac magnetic resonance; Multi-sequence MRI; Multi-source images; Myocardial pathology segmentation
Cite:
GB/T 7714 | Li, Lei, Wu, Fuping, Wang, Sihan, et al. MyoPS: A benchmark of myocardial pathology segmentation combining three-sequence cardiac magnetic resonance images [J]. Medical Image Analysis, 2023, 87.
MLA | Li, Lei, et al. "MyoPS: A benchmark of myocardial pathology segmentation combining three-sequence cardiac magnetic resonance images." Medical Image Analysis 87 (2023).
APA | Li, Lei, Wu, Fuping, Wang, Sihan, Luo, Xinzhe, Martin-Isla, Carlos, Zhai, Shuwei, et al. MyoPS: A benchmark of myocardial pathology segmentation combining three-sequence cardiac magnetic resonance images. Medical Image Analysis, 2023, 87.
Abstract :
Automatic segmentation of left atrial (LA) scars from late gadolinium enhanced CMR images is a crucial step for atrial fibrillation (AF) recurrence analysis. However, delineating LA scars is tedious and error-prone due to the variation of scar shapes. In this work, we propose a boundary-aware LA scar segmentation network, which is composed of two branches to segment LA and LA scars, respectively. We explore the inherent spatial relationship between LA and LA scars. By introducing a Sobel fusion module between the two segmentation branches, the spatial information of LA boundaries can be propagated from the LA branch to the scar branch. Thus, LA scar segmentation can be performed conditioned on the LA boundary regions. In our experiments, 40 labeled images were used to train the proposed network, and the remaining 20 labeled images were used for evaluation. The network achieved an average Dice score of 0.608 for LA scar segmentation. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
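The boundary information propagated by the Sobel fusion module can be illustrated with a plain Sobel gradient-magnitude map computed from a binary LA mask; the helper names and the 'valid' filtering below are illustrative assumptions, not the module's actual design:

```python
import numpy as np

SOBEL_X = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
SOBEL_Y = SOBEL_X.T

def _filter3x3(img, kernel):
    # 'valid' 3x3 filtering via shifted slices (no SciPy dependency)
    h, w = img.shape[0] - 2, img.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * img[i:i + h, j:j + w]
    return out

def boundary_map(mask):
    """Sobel gradient magnitude of a binary LA mask: nonzero only
    along the mask boundary, which is the cue the scar branch can
    condition on."""
    m = mask.astype(float)
    gx = _filter3x3(m, SOBEL_X)
    gy = _filter3x3(m, SOBEL_Y)
    return np.hypot(gx, gy)
```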
Keyword :
Boundary-Aware; Left Atrial Scar; Multi-depth Segmentation
Cite:
GB/T 7714 | Wu, M., Ding, W., Yang, M., et al. Multi-depth Boundary-Aware Left Atrial Scar Segmentation Network [unknown].
MLA | Wu, M., et al. "Multi-depth Boundary-Aware Left Atrial Scar Segmentation Network." [unknown].
APA | Wu, M., Ding, W., Yang, M., Huang, L. Multi-depth Boundary-Aware Left Atrial Scar Segmentation Network [unknown].
Abstract :
Parkinson's disease (PD) is a serious neurological disease. Many studies have focused on regions of interest such as the substantia nigra (SN) for PD detection from magnetic resonance imaging (MRI). However, the SN is not the only region with remarkable tissue changes in PD MRIs. Patients with prodromal Parkinson's disease usually present with non-motor symptoms, and the associated brain regions may show varying degrees of damage on imaging. Therefore, exploring PD-related regions from whole-brain MRI is essential. In this study, we proposed an interpretable PD detection framework, including PD classification and feature region visualization. Specifically, we constructed a 3D ResNet model that could detect PD from whole-brain MRIs and discover other brain regions related to PD through 3D Gradient-weighted Class Activation Mapping (Grad-CAM) and the Unified Parkinson's Disease Rating Scale (UPDRS). We obtained T1-weighted MRIs from the Parkinson's Progression Markers Initiative (PPMI) database. The average classification accuracy reached 96.1% on 5-fold cross-validation and 94.5% on the held-out dataset. In addition, we used the 3D Grad-CAM framework to extract the weights of the feature maps and obtain a visual interpretation. The heat map highlighted the regions that were crucial for PD classification and revealed significant differences between PD patients and healthy controls (HC) in the frontal lobe, related to linguistic semantic disorders. The UPDRS scores of PD and HC on the linguistic semantic function items were also remarkably different. Combined with previous studies, this work verified the significance of the frontal lobe and showed that the correlation between the frontal lobe and the pathogenesis of PD is explainable.
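The 3D Grad-CAM step described above follows the standard recipe — per-channel weights from globally average-pooled gradients, then a ReLU-ed weighted sum of the activations. This sketch assumes (C, D, H, W) arrays and is an illustration of the technique, not the paper's exact code:

```python
import numpy as np

def grad_cam_3d(activations, gradients, eps=1e-8):
    """3D Grad-CAM heat map from a conv layer's activations and the
    gradients of the class score w.r.t. those activations.

    `activations`, `gradients`: (C, D, H, W) arrays for one sample.
    Returns a (D, H, W) map normalized to [0, 1]; high values mark
    voxels that drove the classification decision.
    """
    weights = gradients.mean(axis=(1, 2, 3))          # GAP per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over C
    cam = np.maximum(cam, 0.0)                        # ReLU
    return cam / (cam.max() + eps)
```

In practice the resulting map is upsampled to the input MRI resolution and overlaid on the brain volume, which is how region-level interpretations such as the frontal-lobe finding are read off.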
Keyword :
3D ResNet; Frontal lobe; Grad-CAM; MRI; Parkinson's diseases; Semantics
Cite:
GB/T 7714 | Yang, Mingjing, Huang, Xianbin, Huang, Liqin, et al. Diagnosis of Parkinson's disease based on 3D ResNet: The frontal lobe is crucial [J]. Biomedical Signal Processing and Control, 2023, 85.
MLA | Yang, Mingjing, et al. "Diagnosis of Parkinson's disease based on 3D ResNet: The frontal lobe is crucial." Biomedical Signal Processing and Control 85 (2023).
APA | Yang, Mingjing, Huang, Xianbin, Huang, Liqin, Cai, Guoen. Diagnosis of Parkinson's disease based on 3D ResNet: The frontal lobe is crucial. Biomedical Signal Processing and Control, 2023, 85.