Query:
Scholar name: Pan Lin (潘林)
Abstract :
MyoPS (myocardial pathology segmentation) supports the auxiliary diagnosis of myocardial infarction by accurately segmenting myocardial lesions such as scars and edema. However, CMR images are complex, manual segmentation is time-consuming and depends on expert knowledge, and imaging data from different centers vary, all of which increase the difficulty of segmentation. To this end, this study developed a domain generalization module that flexibly integrates LGE, T2-weighted, and cine sequences to improve cross-center and multi-sequence adaptability and robustness. Our method combines the domain generalization module with the nnUNet segmentation network and reduces the differences between data distributions by using the module for data-mixing (MixUp) augmentation, thereby enhancing the model's generalization ability and improving segmentation performance. In tests on the dataset of the MyoPS++ Challenge, our network segmented scars and edema well and clearly outperformed the native segmentation network, which verifies its effectiveness in handling multi-center, multi-sequence CMR data. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
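As a concrete illustration of the data-mixing augmentation described above, the sketch below shows a generic MixUp step applied to image/label pairs drawn from different centers. It is a minimal sketch under stated assumptions, not the authors' implementation; the function name, the Beta parameter alpha, and the use of soft one-hot masks are assumptions.

```python
# Illustrative MixUp sketch for cross-center segmentation data -- not the paper's code.
import numpy as np

def mixup_pair(img_a, mask_a, img_b, mask_b, alpha=0.4, rng=None):
    """Mix two image/one-hot-mask pairs drawn from different centers.

    img_*  : float arrays of identical shape, e.g. (C, H, W)
    mask_* : one-hot label arrays of identical shape, e.g. (K, H, W)
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)                 # mixing coefficient in (0, 1)
    img = lam * img_a + (1.0 - lam) * img_b      # blended image
    mask = lam * mask_a + (1.0 - lam) * mask_b   # soft labels after mixing
    return img, mask, lam
```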
Keyword :
Diagnosis; Diseases; Image segmentation; Pathology
Cite:
GB/T 7714 | Chen, Leyang, Tu, Yaosheng, Bai, Penggang, et al. Domain Generalization in Myocardial Pathology Segmentation with MixUp Augmentation [C]. 2025: 34-45.
MLA | Chen, Leyang, et al. "Domain Generalization in Myocardial Pathology Segmentation with MixUp Augmentation". (2025): 34-45.
APA | Chen, Leyang, Tu, Yaosheng, Bai, Penggang, Pan, Lin. Domain Generalization in Myocardial Pathology Segmentation with MixUp Augmentation. (2025): 34-45.
Abstract :
Multi-sequence cardiac magnetic resonance (MS-CMR) images can provide myocardial pathology information for patients with myocardial infarction. Precise segmentation of myocardial structure and pathology is of great importance for subsequent diagnosis and treatment. Nevertheless, traditional manual segmentation is not only time-consuming and labor-intensive but also has low accuracy, and it becomes even more challenging when identifying pathologies such as scars and edema that are small in volume and have low contrast with the surrounding tissues. To address this issue, this paper proposes an improved nn-UNet for fully automatic segmentation of myocardial pathologies. Building on nn-UNet, we use multi-modal data as input to compensate for the limited information in a single modality. For the multi-modal data, we apply cross normalization to improve generalization performance. Meanwhile, multi-scale attention modules are integrated to process features at different resolutions, improving the feature representation capability of the network. Through feature fusion and attention weighting, the model better captures the global and local information of myocardial pathologies and achieves more accurate segmentation. To verify the effectiveness of the proposed method, we conducted an evaluation using five-fold cross-validation on the dataset of the MyoPS++ challenge. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
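The abstract mentions cross normalization for multi-modal inputs without detailing it. The sketch below shows one plausible interpretation, re-normalizing one CMR sequence with the channel statistics of another; the function name and the statistic-swapping scheme are assumptions, not the paper's definition.

```python
# Hedged sketch of one possible "cross normalization" step between two CMR sequences.
import torch

def cross_normalize(x_src: torch.Tensor, x_ref: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """x_src, x_ref: tensors of shape (N, C, H, W) from two different sequences."""
    mu_s = x_src.mean(dim=(2, 3), keepdim=True)
    std_s = x_src.std(dim=(2, 3), keepdim=True) + eps
    mu_r = x_ref.mean(dim=(2, 3), keepdim=True)
    std_r = x_ref.std(dim=(2, 3), keepdim=True) + eps
    # Whiten with the source statistics, then re-color with the reference ones.
    return (x_src - mu_s) / std_s * std_r + mu_r
```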
Keyword :
Cardiology; Deep learning; Diagnosis; Image segmentation; Nuclear magnetic resonance; Pathology
Cite:
GB/T 7714 | Tang, Mengshi, Li, Nuoxi, Pan, Lin. Improved nn-UNet: Generalizable Multi-scale Attention-Driven Segmentation of Multi-sequence Myocardial Pathology [C]. 2025: 96-105.
MLA | Tang, Mengshi, et al. "Improved nn-UNet: Generalizable Multi-scale Attention-Driven Segmentation of Multi-sequence Myocardial Pathology". (2025): 96-105.
APA | Tang, Mengshi, Li, Nuoxi, Pan, Lin. Improved nn-UNet: Generalizable Multi-scale Attention-Driven Segmentation of Multi-sequence Myocardial Pathology. (2025): 96-105.
Abstract :
Automatic segmentation of the left atrial cavity and scar in late gadolinium enhanced magnetic resonance imaging has important clinical significance for the diagnosis of atrial fibrillation. Owing to inferior image quality, thin walls, surrounding enhancement regions, and the complex morphology of left atrial scars, their automatic quantitative analysis is extremely challenging. Manual segmentation of either the left atrial cavity or the atrial scar is very time-consuming and prone to subjective error. In this work, a deep neural network named ResCEAUNet has been developed and validated for automatic segmentation of left atrial scars. We adopt nnUNet as the baseline. To enhance segmentation accuracy, we introduce two key improvements to our model: a lightweight Convolutional Block Attention Module (CBAM) and an edge attention module. The edge attention module significantly improves the model's ability to delineate the intricate boundaries of the atrial wall and scar tissue, which is particularly beneficial for thin structures like the left atrium. Simultaneously, CBAM sharpens the model's focus on relevant features, enabling more precise localization and identification of scar tissue without substantially increasing computational complexity. These synergistic enhancements result in a robust and efficient segmentation model, whose effectiveness is demonstrated by a Dice score of 0.6181 on the LAScarQS++ 2024 validation dataset. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
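For readers unfamiliar with CBAM, the sketch below is a compact 2D channel-plus-spatial attention module in the spirit of the original CBAM formulation; the reduction ratio, kernel size, and 2D layout are illustrative and not necessarily the configuration used in ResCEAUNet.

```python
# Compact CBAM sketch (channel + spatial attention); sizes are illustrative only.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared MLP for channel attention
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))              # (N, C) from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))               # (N, C) from max pooling
        x = x * torch.sigmoid(avg + mx).view(n, c, 1, 1)
        s = torch.cat([x.mean(dim=1, keepdim=True),     # (N, 2, H, W) spatial descriptor
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```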
Keyword :
Deep neural networks; Diagnosis; Diffusion tensor imaging; Dynamic contrast enhanced MRI; Image segmentation; Nuclear magnetic resonance
Cite:
GB/T 7714 | Zhang, Yashuang , Cheng, Haiyan , Li, Douzhi et al. Left Atrial Scar Segmentation and Quantification Using Residual CBAM-EAM Attention UNet for LGE MRI [C] . 2025 : 149-157 . |
MLA | Zhang, Yashuang et al. "Left Atrial Scar Segmentation and Quantification Using Residual CBAM-EAM Attention UNet for LGE MRI" . (2025) : 149-157 . |
APA | Zhang, Yashuang , Cheng, Haiyan , Li, Douzhi , Pan, Lin . Left Atrial Scar Segmentation and Quantification Using Residual CBAM-EAM Attention UNet for LGE MRI . (2025) : 149-157 . |
Abstract :
Unsupervised domain adaptation (UDA) aims to mitigate the performance drop of models tested on a target domain that is subject to domain shift relative to the source domains. Most UDA segmentation methods focus on the scenario of a single source domain. However, in practical situations, data with gold-standard annotations may be available from multiple sources (domains), and such multi-source training data can provide more information for knowledge transfer. How to utilize them to achieve better domain adaptation remains to be explored. This work investigates multi-source UDA and proposes a new framework for medical image segmentation. Firstly, we employ a multi-level adversarial learning scheme to adapt features at different levels between each of the source domains and the target, improving segmentation performance. Then, we propose a multi-model consistency loss to transfer the learned multi-source knowledge to the target domain simultaneously. Finally, we validated the proposed framework on two applications, i.e., multi-modality cardiac segmentation and cross-modality liver segmentation. The results showed that our method delivered promising performance and compared favorably to state-of-the-art approaches.
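The multi-model consistency loss could, for example, be realized as an agreement term over the per-source models' predictions on unlabeled target images. The sketch below assumes mean pairwise MSE between softmax outputs; the exact formulation in the paper may differ.

```python
# Illustrative multi-model consistency loss for multi-source UDA (assumed form).
import itertools
import torch
import torch.nn.functional as F

def consistency_loss(target_logits: list[torch.Tensor]) -> torch.Tensor:
    """target_logits: one (N, K, H, W) logit map per source-domain model."""
    probs = [F.softmax(logits, dim=1) for logits in target_logits]
    pairs = list(itertools.combinations(range(len(probs)), 2))
    # Penalize disagreement between every pair of source-specific models.
    loss = sum(F.mse_loss(probs[i], probs[j]) for i, j in pairs)
    return loss / max(len(pairs), 1)
```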
Keyword :
Domain adaptation; medical image segmentation; multi-source; unsupervised learning
Cite:
GB/T 7714 | Pei, Chenhao, Wu, Fuping, Yang, Mingjing, et al. Multi-Source Domain Adaptation for Medical Image Segmentation [J]. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43(4): 1640-1651.
MLA | Pei, Chenhao, et al. "Multi-Source Domain Adaptation for Medical Image Segmentation". IEEE TRANSACTIONS ON MEDICAL IMAGING 43.4 (2024): 1640-1651.
APA | Pei, Chenhao, Wu, Fuping, Yang, Mingjing, Pan, Lin, Ding, Wangbin, Dong, Jinwei, et al. Multi-Source Domain Adaptation for Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43(4), 1640-1651.
Abstract :
Given the diversity of medical images, traditional image segmentation models face the issue of domain shift. Unsupervised domain adaptation (UDA) methods have emerged as a pivotal strategy for cross-modality analysis. These methods typically utilize generative adversarial networks (GANs) for both image-level and feature-level domain adaptation through the transformation and reconstruction of images, assuming the features between domains are well aligned. However, this assumption falters when there are significant gaps between medical image modalities, such as MRI and CT. These gaps hinder the effective training of segmentation networks with cross-modality images and can lead to misleading training guidance and instability. To address these challenges, this paper introduces a novel approach comprising a cross-modality feature alignment sub-network and a cross-pseudo-supervised dual-stream segmentation sub-network. These components work together to bridge domain discrepancies more effectively and ensure a stable training environment. The feature alignment sub-network is designed for the bidirectional alignment of features between the source and target domains, incorporating a self-attention module to aid in learning structurally consistent and relevant information. The segmentation sub-network leverages an enhanced cross-pseudo-supervised loss to harmonize the outputs of the two segmentation networks, assessing pseudo-distances between domains to improve pseudo-label quality and thus enhance the overall learning efficiency of the framework. The method's success is demonstrated by notable improvements in segmentation precision across target domains for abdomen and brain tasks.
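A plain cross-pseudo-supervision term, in which each segmentation stream is trained on the other's hard pseudo-labels, might look like the sketch below. The pseudo-distance weighting described in the abstract is omitted for brevity; function and variable names are illustrative.

```python
# Sketch of a basic cross-pseudo-supervision (CPS) loss between two streams.
import torch
import torch.nn.functional as F

def cps_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """logits_*: (N, K, H, W) outputs of the two segmentation streams."""
    pseudo_a = logits_a.argmax(dim=1).detach()   # hard pseudo-labels from stream A
    pseudo_b = logits_b.argmax(dim=1).detach()   # hard pseudo-labels from stream B
    # Each stream learns from the other's pseudo-labels.
    return F.cross_entropy(logits_a, pseudo_b) + F.cross_entropy(logits_b, pseudo_a)
```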
Keyword :
cross modality segmentation; cross pseudo supervision; feature alignment; unsupervised domain adaptation
Cite:
GB/T 7714 | Yang, Mingjing, Wu, Zhicheng, Zheng, Hanyu, et al. Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning [J]. DIAGNOSTICS, 2024, 14(16).
MLA | Yang, Mingjing, et al. "Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning". DIAGNOSTICS 14.16 (2024).
APA | Yang, Mingjing, Wu, Zhicheng, Zheng, Hanyu, Huang, Liqin, Ding, Wangbin, Pan, Lin, et al. Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning. DIAGNOSTICS, 2024, 14(16).
Abstract :
Purpose: The identification of early-stage Parkinson's disease (PD) is important for the effective management of patients, affecting their treatment and prognosis. Recently, structural brain networks (SBNs) have been used to diagnose PD. However, mining abnormal patterns from high-dimensional SBNs remains a challenge due to the complex topology of the brain. Meanwhile, the prediction mechanisms of existing deep learning models are often complicated, and it is difficult to extract effective interpretations. In addition, most works focus only on the classification of imaging and ignore clinical scores in practical applications, which limits the ability of the model. Inspired by the regional modularity of SBNs, we adopted graph learning from the perspective of node clustering to construct an interpretable framework for PD classification. Methods: In this study, a multi-task graph structure learning framework based on node clustering (MNC-Net) is proposed for the early diagnosis of PD. Specifically, we modeled complex SBNs as modular graphs that facilitate the representation learning of abnormal patterns. Traditional graph neural networks are optimized through graph structure learning based on node clustering, which identifies potentially abnormal brain regions and reduces the impact of irrelevant noise. Furthermore, we employed a regression task to link clinical scores to disease classification and incorporated latent domain information into model training through multi-task learning. Results: We validated the proposed approach on the Parkinson's Progression Markers Initiative dataset. Experimental results showed that our MNC-Net effectively separated early-stage PD from healthy controls (HC) with an accuracy of 95.5%. The t-SNE figures show that our graph structure learning method can capture more efficient and discriminatory features. Furthermore, node clustering parameters were used as importance weights to extract salient task-related brain regions (ROIs). These ROIs are involved in the development of mood disorders, tremors, imbalance and other symptoms, highlighting the importance of memory, language and mild motor function in early PD. In addition, statistical results from clinical scores confirmed that our model could capture abnormal connectivity that was significantly different between PD and HC. These results are consistent with previous studies, demonstrating the interpretability of our methods.
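The multi-task coupling of disease classification and clinical-score regression could be expressed as a weighted sum of a cross-entropy loss and a regression loss, as sketched below; the weighting factor and function names are assumptions, not the paper's settings.

```python
# Hedged sketch of a classification + clinical-score-regression multi-task objective.
import torch
import torch.nn.functional as F

def multitask_loss(class_logits: torch.Tensor, labels: torch.Tensor,
                   score_pred: torch.Tensor, score_true: torch.Tensor,
                   reg_weight: float = 0.5) -> torch.Tensor:
    cls = F.cross_entropy(class_logits, labels)   # PD vs. HC classification term
    reg = F.mse_loss(score_pred, score_true)      # clinical-score regression term
    return cls + reg_weight * reg
```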
Keyword :
Clinical scores; Early Parkinson's disease; Graph neural networks; Structural brain network
Cite:
GB/T 7714 | Huang, Liqin, Ye, Xiaofang, Yang, Mingjing, et al. MNC-Net: Multi-task graph structure learning based on node clustering for early Parkinson's disease diagnosis [J]. COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 152.
MLA | Huang, Liqin, et al. "MNC-Net: Multi-task graph structure learning based on node clustering for early Parkinson's disease diagnosis". COMPUTERS IN BIOLOGY AND MEDICINE 152 (2023).
APA | Huang, Liqin, Ye, Xiaofang, Yang, Mingjing, Pan, Lin, Zheng, Shaohua. MNC-Net: Multi-task graph structure learning based on node clustering for early Parkinson's disease diagnosis. COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 152.
Abstract :
Background: Automatic pulmonary artery-vein separation has considerable importance in the diagnosis and treatment of lung diseases. However, insufficient connectivity and spatial inconsistency have long been problems for artery-vein separation. Methods: A novel automatic method for artery-vein separation in CT images is presented in this work. Specifically, a multi-scale information aggregated network (MSIA-Net), including multi-scale fusion blocks and deep supervision, is proposed to learn artery-vein features and aggregate additional semantic information. The proposed method integrates nine MSIA-Net models for artery-vein separation, vessel segmentation, and centerline separation tasks across axial, coronal, and sagittal multi-view slices. First, preliminary artery-vein separation results are obtained by the proposed multi-view fusion strategy (MVFS). Then, a centerline correction algorithm (CCA) is used to correct the preliminary artery-vein separation results using the centerline separation results. Finally, the vessel segmentation results are utilized to reconstruct the artery-vein morphology. In addition, weighted cross-entropy and Dice loss are employed to address the class imbalance problem. Results: We constructed 50 manually labeled contrast-enhanced CT scans for five-fold cross-validation, and experimental results demonstrated that our method achieves superior segmentation performance of 97.7%, 85.1%, and 84.9% in terms of accuracy (ACC), precision (Pre), and Dice similarity coefficient (DSC), respectively. Additionally, a series of ablation studies demonstrate the effectiveness of the proposed components. Conclusion: The proposed method can effectively solve the problem of insufficient vascular connectivity and correct the spatial inconsistency of artery-vein separation.
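The class-imbalance objective mentioned above (weighted cross-entropy plus Dice loss) is a standard combination; a minimal sketch is shown below, with illustrative class weights supplied by the caller rather than the paper's values.

```python
# Sketch of a weighted cross-entropy + soft Dice loss for imbalanced segmentation.
import torch
import torch.nn.functional as F

def wce_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                  class_weights: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """logits: (N, K, ...); target: (N, ...) integer labels in [0, K)."""
    wce = F.cross_entropy(logits, target, weight=class_weights)
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1])   # (N, ..., K)
    one_hot = one_hot.movedim(-1, 1).float()                   # (N, K, ...)
    dims = tuple(range(2, logits.ndim))                        # spatial dimensions
    inter = (probs * one_hot).sum(dim=dims)
    union = probs.sum(dim=dims) + one_hot.sum(dim=dims)
    dice = 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
    return wce + dice
```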
Keyword :
Centerline correction; CT images; Multi-scale information aggregated; Pulmonary artery-vein separation
Cite:
GB/T 7714 | Pan, Lin, Li, Zhaopei, Shen, Zhiqiang, et al. Learning multi-view and centerline topology connectivity information for pulmonary artery-vein separation [J]. COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 155.
MLA | Pan, Lin, et al. "Learning multi-view and centerline topology connectivity information for pulmonary artery-vein separation". COMPUTERS IN BIOLOGY AND MEDICINE 155 (2023).
APA | Pan, Lin, Li, Zhaopei, Shen, Zhiqiang, Liu, Zheng, Huang, Liqin, Yang, Mingjing, et al. Learning multi-view and centerline topology connectivity information for pulmonary artery-vein separation. COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 155.
Abstract :
Background: With the wide application of CT scanning, the separation of pulmonary arteries and veins (A/V) based on CT images plays an important role in assisting surgeons with the preoperative planning of lung cancer surgery. However, distinguishing between arteries and veins in chest CT images remains challenging due to their complex structure and similar appearance. Methods: We propose a novel method for automatically separating pulmonary arteries and veins based on vessel topology information and a twin-pipe deep learning network. First, the vessel tree topology is constructed by combining scale-space particles and multi-stencils fast marching (MSFM) methods to ensure the continuity and authenticity of the topology. Second, a twin-pipe network is designed to learn the multiscale differences between arteries and veins and the characteristics of the small arteries that closely accompany bronchi. Finally, we design a topology optimizer that considers interbranch and intrabranch topological relationships to optimize the artery-vein classification results. Results: The proposed approach is validated on the public dataset CARVE14 and our private dataset. Compared with ground truth, the proposed method achieves an average accuracy of 90.1% on the CARVE14 dataset and 96.2% on our local dataset. Conclusions: The method can effectively separate pulmonary arteries and veins and generalizes well to chest CT images from different devices, as well as to enhanced and non-contrast CT image sequences from the same device.
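One simple form of intrabranch topological consistency is to assign every voxel of a branch the branch's majority artery/vein vote. The sketch below illustrates that idea only; branch extraction (scale-space particles and MSFM in the paper) is assumed to happen upstream, and the label convention is hypothetical.

```python
# Hedged sketch of intrabranch label consistency via per-branch majority voting.
import numpy as np

def enforce_branch_consistency(voxel_labels: np.ndarray, branch_ids: np.ndarray) -> np.ndarray:
    """voxel_labels: 1 = artery, 2 = vein; branch_ids: branch index per voxel (0 = background)."""
    out = voxel_labels.copy()
    for branch in np.unique(branch_ids):
        if branch == 0:
            continue                                   # skip background
        mask = branch_ids == branch
        votes = voxel_labels[mask]
        majority = 1 if np.sum(votes == 1) >= np.sum(votes == 2) else 2
        out[mask] = majority                           # relabel the whole branch
    return out
```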
Keyword :
Chest CT images; Preoperative planning; Pulmonary artery-vein segmentation; Topology reconstruction; Twin-pipe network
Cite:
GB/T 7714 | Pan, Lin, Yan, Xiaochao, Zheng, Yaoyong, et al. Automatic pulmonary artery-vein separation in CT images using a twin-pipe network and topology reconstruction [J]. PEERJ COMPUTER SCIENCE, 2023, 9.
MLA | Pan, Lin, et al. "Automatic pulmonary artery-vein separation in CT images using a twin-pipe network and topology reconstruction". PEERJ COMPUTER SCIENCE 9 (2023).
APA | Pan, Lin, Yan, Xiaochao, Zheng, Yaoyong, Huang, Liqin, Zhang, Zhen, Fu, Rongda, et al. Automatic pulmonary artery-vein separation in CT images using a twin-pipe network and topology reconstruction. PEERJ COMPUTER SCIENCE, 2023, 9.
Abstract :
Objectives: Patients with T4 obstructive colorectal cancer (OCC) have a high mortality rate. Therefore, an accurate distinction between T4 and T1-T3 (NT4) OCC is an important part of preoperative evaluation, especially in the emergency setting. This paper introduces three models (radiomics, deep learning, and deep learning-based radiomics) to identify T4 OCC. Methods: We established a dataset of computed tomography (CT) images of 164 patients with pathologically confirmed OCC, from which 2537 slices were extracted. First, since T4 tumors penetrate the bowel wall and involve adjacent organs, we explored whether the peritumoral region contributes to the assessment of T4 OCC. Furthermore, we visualized the radiomics and deep learning features using the t-distributed stochastic neighbor embedding (t-SNE) technique. Finally, we built a merged model by fusing radiomic features with deep learning features. In this experiment, the performance of each model was evaluated by the area under the receiver operating characteristic curve (AUC). Results: In the test cohort, the AUC of the radiomics model on the dilated region of interest (dROI) was 0.770, and the deep learning model with patches extended by 20 pixels reached an AUC of 0.936. Combining the characteristics of radiomics and deep learning, our method achieved an AUC of 0.947 in the T4 versus non-T4 (NT4) classification, which increased to 0.950 after the addition of clinical features. Conclusion: The prediction results of our merged deep learning radiomics model outperformed the deep learning model and significantly outperformed the radiomics model. The experimental results demonstrate that including the peritumoral region improves the prediction performance of both the radiomics model and the deep learning model.
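The fusion of radiomic and deep-learning features can be illustrated by simple feature concatenation followed by a linear classifier and AUC evaluation, as sketched below. This is not the paper's merged model; feature extraction, the classifier choice, and the train/test split are assumptions.

```python
# Hedged sketch: concatenate radiomic and deep features, fit a classifier, report AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fused_auc(radiomic_train, deep_train, y_train, radiomic_test, deep_test, y_test):
    x_train = np.concatenate([radiomic_train, deep_train], axis=1)  # fused feature vectors
    x_test = np.concatenate([radiomic_test, deep_test], axis=1)
    clf = LogisticRegression(max_iter=1000).fit(x_train, y_train)
    scores = clf.predict_proba(x_test)[:, 1]                        # predicted probability of T4
    return roc_auc_score(y_test, scores)
```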
Keyword :
Deep learning; Obstructive colorectal cancer; Peritumoral region; Radiomics; ResNet
Cite:
GB/T 7714 | Pan, Lin, He, Tian, Huang, Zihan, et al. Radiomics approach with deep learning for predicting T4 obstructive colorectal cancer using CT image [J]. ABDOMINAL RADIOLOGY, 2023, 48(4): 1246-1259.
MLA | Pan, Lin, et al. "Radiomics approach with deep learning for predicting T4 obstructive colorectal cancer using CT image". ABDOMINAL RADIOLOGY 48.4 (2023): 1246-1259.
APA | Pan, Lin, He, Tian, Huang, Zihan, Chen, Shuai, Zhang, Junrong, Zheng, Shaohua, et al. Radiomics approach with deep learning for predicting T4 obstructive colorectal cancer using CT image. ABDOMINAL RADIOLOGY, 2023, 48(4), 1246-1259.
Abstract :
Background and Objectives: Automatic airway segmentation from chest computed tomography (CT) scans plays an important role in pulmonary disease diagnosis and computer-assisted therapy. However, low contrast at peripheral branches and complex tree-like structures remain two main challenges for airway segmentation. Recent research has shown that deep learning methods perform well in segmentation tasks. Motivated by these works, a coarse-to-fine segmentation framework is proposed to obtain a complete airway tree. Methods: Our framework segments the overall airway and the small branches via a multi-information fusion convolution neural network (Mif-CNN) and CNN-based region growing, respectively. In Mif-CNN, atrous spatial pyramid pooling (ASPP) is integrated into a u-shaped network; it expands the receptive field and captures multi-scale information. Meanwhile, boundary and location information are incorporated into the semantic information. This information is fused to help Mif-CNN utilize additional context knowledge and useful features. To improve segmentation performance, the CNN-based region growing method is designed to focus on obtaining small branches. A voxel classification network (VCN), which can fully capture the rich information around each voxel, is applied to classify voxels into airway and non-airway. In addition, a shape reconstruction method is used to refine the airway tree. Results: We evaluate our method on a private dataset and a public dataset from EXACT09. Compared with the segmentation results of other methods, our method demonstrated promising accuracy in complete airway tree segmentation. On the private dataset, the Dice similarity coefficient (DSC), intersection over union (IoU), false positive rate (FPR), and sensitivity are 93.5%, 87.8%, 0.015%, and 90.8%, respectively. On the public dataset, the DSC, IoU, FPR, and sensitivity are 95.8%, 91.9%, 0.053%, and 96.6%, respectively. Conclusion: The proposed Mif-CNN and CNN-based region growing method segment the airway tree accurately and efficiently in CT scans. Experimental results also demonstrate that the framework is ready for application in computer-aided diagnosis systems for lung disease and other related work. (C) 2022 Elsevier B.V. All rights reserved.
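For reference, the ASPP block integrated into Mif-CNN's u-shaped network is typically a set of parallel atrous convolutions at several dilation rates; the sketch below is a generic 2D version with illustrative rates and channel sizes, not the paper's exact configuration.

```python
# Generic ASPP sketch: parallel atrous convolutions for multi-scale context.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [torch.relu(branch(x)) for branch in self.branches]  # multi-scale features
        return self.project(torch.cat(feats, dim=1))                 # fuse and project
```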
Keyword :
Airway segmentation; Multi-information fusion convolution neural network; Voxel classification network
Cite:
GB/T 7714 | Guo, Jinquan, Fu, Rongda, Pan, Lin, et al. Coarse-to-fine airway segmentation using multi information fusion network and CNN-based region growing [J]. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2022, 215.
MLA | Guo, Jinquan, et al. "Coarse-to-fine airway segmentation using multi information fusion network and CNN-based region growing". COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 215 (2022).
APA | Guo, Jinquan, Fu, Rongda, Pan, Lin, Zheng, Shaohua, Huang, Liqin, Zheng, Bin, et al. Coarse-to-fine airway segmentation using multi information fusion network and CNN-based region growing. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2022, 215.