Query:
Scholar name: Huang Liqin
Abstract :
Gastric cancer is a serious malignant tumor. The gold standard for diagnosing gastric cancer is identifying cancer cells in pathological slides under microscopic examination. While many approaches have been proposed for gastric cancer segmentation, it is still difficult to train large-scale segmentation networks with scant gastroscopy data. Recently, the Segment Anything Model (SAM) has attracted considerable interest for segmenting natural and medical images. However, its high computational complexity and cost limit its application in resource-limited embedded medical devices. In this paper, we propose GC-SAM, a lightweight model for tumor segmentation. The prompt encoder and mask decoder are fine-tuned to better address the challenge of segmenting pathological images of gastric cancer tissue. Evaluated on an internal dataset, GC-SAM achieved state-of-the-art performance compared to classical image segmentation networks, with a Dice coefficient of 0.8186. In addition, external validation confirmed its superior generalization ability. This study demonstrates the great potential of adapting GC-SAM to pathological image segmentation tasks in gastric cancer tissue and opens the possibility of transferring deep learning image segmentation to embedded medical devices. © 2024 IEEE.
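The Dice coefficient reported above measures volumetric overlap between a predicted mask and the ground truth. A minimal NumPy sketch (the function name and toy masks are ours, not the paper's):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# toy 4x4 masks: prediction overlaps ground truth in 3 of 4 foreground pixels
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
score = dice_coefficient(pred, gt)  # 2*3 / (4+3) ≈ 0.857
```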
Keyword :
External validation; Fine-tune; Gastric cancer; Image segmentation; Knowledge distillation; SAM
Cite:
GB/T 7714 | Li, L., Geng, Y., Huang, L. et al. Segmentation of Gastric Cancer Pathological Slice Cancerous Region Based on Lightweight Improved SAM [unknown]. |
MLA | Li, L. et al. "Segmentation of Gastric Cancer Pathological Slice Cancerous Region Based on Lightweight Improved SAM" [unknown]. |
APA | Li, L., Geng, Y., Huang, L., Li, J., Niu, D. Segmentation of Gastric Cancer Pathological Slice Cancerous Region Based on Lightweight Improved SAM [unknown]. |
Abstract :
Cine imaging serves as a vital approach for non-invasive assessment of cardiac functional parameters. The imaging process of cine cardiac MRI is inherently slow, necessitating the acquisition of data at multiple time points within each cardiac cycle to ensure adequate temporal resolution and motion information. Over prolonged data acquisition and during motion, cine images can degrade, leading to artifacts. Conventional image reconstruction methods often require expert knowledge for feature selection, which may result in information loss and suboptimal outcomes. In this paper, we employ a data-driven deep learning approach to address this issue. Using supervised learning, data acquired at different acceleration factors are compared against fully sampled spatial-domain data to train a context-aware network that reconstructs artifact-corrupted images. In our training strategy, we employ an adversarial approach to bring the reconstructed images closer to the ground truth, incorporating adversarial loss functions and introducing image quality assessment as a constraint. Our context-aware model efficiently accomplishes artifact removal and image reconstruction.
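The adversarial training objective described above typically augments a pixel-wise reconstruction loss with a term that rewards fooling the discriminator. A hedged NumPy sketch (the specific losses and weight here are our illustration, not the paper's exact formulation):

```python
import numpy as np

def generator_loss(disc_on_fake, recon, truth, adv_weight=0.01):
    """L1 reconstruction loss plus a non-saturating adversarial term.
    disc_on_fake: discriminator scores in (0, 1) for reconstructed images."""
    l1 = np.abs(recon - truth).mean()          # pixel-wise fidelity
    adv = -np.log(disc_on_fake + 1e-7).mean()  # push D(fake) toward 1
    return l1 + adv_weight * adv

truth = np.zeros((4, 4))
good = generator_loss(np.array([0.9]), truth, truth)        # accurate, convincing
bad = generator_loss(np.array([0.2]), truth + 0.5, truth)   # off-target, rejected by D
```

The weight on the adversarial term keeps it from overwhelming the data-fidelity term, a common trade-off in reconstruction GANs.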
Keyword :
Cine MRI; Context Encoder; Deep Learning; Generative Adversarial Networks; Image Reconstruction
Cite:
GB/T 7714 | Zhang, Weihua, Tang, Mengshi, Huang, Liqin et al. A Context-Encoders-Based Generative Adversarial Networks for Cine Magnetic Resonance Imaging Reconstruction [J]. | STATISTICAL ATLASES AND COMPUTATIONAL MODELS OF THE HEART. REGULAR AND CMRXRECON CHALLENGE PAPERS, STACOM 2023, 2024, 14507: 359-368. |
MLA | Zhang, Weihua et al. "A Context-Encoders-Based Generative Adversarial Networks for Cine Magnetic Resonance Imaging Reconstruction". | STATISTICAL ATLASES AND COMPUTATIONAL MODELS OF THE HEART. REGULAR AND CMRXRECON CHALLENGE PAPERS, STACOM 2023 14507 (2024): 359-368. |
APA | Zhang, Weihua, Tang, Mengshi, Huang, Liqin, Li, Wei. A Context-Encoders-Based Generative Adversarial Networks for Cine Magnetic Resonance Imaging Reconstruction. | STATISTICAL ATLASES AND COMPUTATIONAL MODELS OF THE HEART. REGULAR AND CMRXRECON CHALLENGE PAPERS, STACOM 2023, 2024, 14507, 359-368. |
Abstract :
Neural networks have found widespread application in medical image registration, although they typically assume access to the entire training dataset during training. In clinical scenarios, medical images of various anatomical targets, such as the heart, brain, and liver, may be obtained successively with advancements in imaging technologies and diagnostic procedures. The accuracy of registration on a new target may degrade over time, as the registration models become outdated due to domain shifts occurring at unpredictable intervals. In this study, we introduce a deep registration model based on continual learning to mitigate the issue of catastrophic forgetting during training with continuous data streams. To enable continuous network training, we propose a dynamic memory system based on a density-based clustering algorithm to retain representative samples from the data stream. Training the registration network on these representative samples enhances its generalization capabilities to accommodate new targets within the data stream. We evaluated our approach using the CHAOS dataset, which comprises multiple targets, such as the liver, left kidney, and spleen, to simulate a data stream. The experimental findings illustrate that the proposed continual registration network achieves comparable performance to a model trained with full data visibility.
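The dynamic memory above retains representative samples from the stream via density-based clustering. As a simplified stand-in (not the authors' algorithm), greedy farthest-point selection over feature vectors also spreads memory slots across distinct regions of the stream:

```python
import numpy as np

def select_representatives(features, k):
    """Greedily pick k mutually distant feature vectors as memory samples.
    Illustrative stand-in for the paper's density-based clustering."""
    features = np.asarray(features, dtype=float)
    chosen = [0]  # seed with the first sample seen
    # distance of every sample to the current memory set
    dist = np.linalg.norm(features - features[0], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dist))  # farthest point from the memory set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return chosen

# a stream with three well-separated regions
stream = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [0.0, 9.0]])
picked = select_representatives(stream, 3)  # one sample from each region
```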
Keyword :
continual learning; Data models; dynamic memory; Heuristic algorithms; Liver; Medical diagnostic imaging; Registration network; Streams; Task analysis; Training
Cite:
GB/T 7714 | Ding, Wangbin, Sun, Haoran, Pei, Chenhao et al. Multi-Organ Registration With Continual Learning [J]. | IEEE SIGNAL PROCESSING LETTERS, 2024, 31: 1204-1208. |
MLA | Ding, Wangbin et al. "Multi-Organ Registration With Continual Learning". | IEEE SIGNAL PROCESSING LETTERS 31 (2024): 1204-1208. |
APA | Ding, Wangbin, Sun, Haoran, Pei, Chenhao, Jia, Dengqiang, Huang, Liqin. Multi-Organ Registration With Continual Learning. | IEEE SIGNAL PROCESSING LETTERS, 2024, 31, 1204-1208. |
Abstract :
Background: With the wide application of CT scanning, the separation of pulmonary arteries and veins (A/V) based on CT images plays an important role in assisting surgeons with preoperative planning of lung cancer surgery. However, distinguishing between arteries and veins in chest CT images remains challenging due to their complex structure and mutual similarity. Methods: We proposed a novel method for automatically separating pulmonary arteries and veins based on vessel topology information and a twin-pipe deep learning network. First, the vessel tree topology is constructed by combining scale-space particles and multi-stencils fast marching (MSFM) methods to ensure the continuity and authenticity of the topology. Second, a twin-pipe network is designed to learn the multiscale differences between arteries and veins and the characteristics of the small arteries that closely accompany bronchi. Finally, we designed a topology optimizer that considers interbranch and intrabranch topological relationships to optimize the artery-vein classification results. Results: The proposed approach is validated on the public CARVE14 dataset and our private dataset. Compared with ground truth, the proposed method achieves an average accuracy of 90.1% on the CARVE14 dataset and 96.2% on our local dataset. Conclusions: The method can effectively separate pulmonary arteries and veins and generalizes well to chest CT images from different devices, as well as to enhanced and noncontrast CT image sequences from the same device.
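The topology optimizer above exploits the constraint that voxels on one vessel branch should share a single artery/vein label. A toy majority-vote sketch of that intrabranch idea (our illustration, not the paper's optimizer):

```python
import numpy as np

def harmonize_branch_labels(voxel_labels, branch_ids):
    """Assign each branch the majority artery (1) / vein (0) vote of its
    voxels; a toy stand-in for intrabranch topology optimization."""
    labels = np.asarray(voxel_labels)
    branches = np.asarray(branch_ids)
    out = labels.copy()
    for b in np.unique(branches):
        members = branches == b
        # majority label within the branch (ties go to artery = 1)
        out[members] = 1 if labels[members].mean() >= 0.5 else 0
    return out

# branch 0 is mostly artery (1), branch 1 is mostly vein (0)
labels = np.array([1, 1, 0, 0, 0, 1])
branch = np.array([0, 0, 0, 1, 1, 1])
harmonized = harmonize_branch_labels(labels, branch)  # [1 1 1 0 0 0]
```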
Keyword :
Chest CT images; Preoperative planning; Pulmonary artery-vein segmentation; Topology reconstruction; Twin-pipe network
Cite:
GB/T 7714 | Pan, Lin, Yan, Xiaochao, Zheng, Yaoyong et al. Automatic pulmonary artery-vein separation in CT images using a twin-pipe network and topology reconstruction [J]. | PEERJ COMPUTER SCIENCE, 2023, 9. |
MLA | Pan, Lin et al. "Automatic pulmonary artery-vein separation in CT images using a twin-pipe network and topology reconstruction". | PEERJ COMPUTER SCIENCE 9 (2023). |
APA | Pan, Lin, Yan, Xiaochao, Zheng, Yaoyong, Huang, Liqin, Zhang, Zhen, Fu, Rongda et al. Automatic pulmonary artery-vein separation in CT images using a twin-pipe network and topology reconstruction. | PEERJ COMPUTER SCIENCE, 2023, 9. |
Abstract :
Background: Automatic pulmonary artery-vein separation has considerable importance in the diagnosis and treatment of lung diseases. However, insufficient connectivity and spatial inconsistency have long been problems for artery-vein separation. Methods: A novel automatic method for artery-vein separation in CT images is presented in this work. Specifically, a multi-scale information aggregated network (MSIA-Net), including multi-scale fusion blocks and deep supervision, is proposed to learn artery-vein features and aggregate additional semantic information, respectively. The proposed method integrates nine MSIA-Net models for artery-vein separation, vessel segmentation, and centerline separation tasks along axial, coronal, and sagittal multi-view slices. First, preliminary artery-vein separation results are obtained by the proposed multi-view fusion strategy (MVFS). Then, a centerline correction algorithm (CCA) corrects the preliminary artery-vein separation results using the centerline separation results. Finally, the vessel segmentation results are utilized to reconstruct the artery-vein morphology. In addition, weighted cross-entropy and Dice loss are employed to solve the class imbalance problem. Results: We constructed 50 manually labeled contrast-enhanced CT scans for five-fold cross-validation, and experimental results demonstrated that our method achieves superior segmentation performance of 97.7%, 85.1%, and 84.9% on ACC, Pre, and DSC, respectively. Additionally, a series of ablation studies demonstrate the effectiveness of the proposed components. Conclusion: The proposed method can effectively solve the problem of insufficient vascular connectivity and correct the spatial inconsistency of artery-vein separation.
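The class-imbalance remedy above combines weighted cross-entropy with Dice loss. A compact NumPy sketch of that combination (the exact weighting scheme in the paper may differ):

```python
import numpy as np

def weighted_ce_dice_loss(probs, target, class_weights, eps=1e-7):
    """Class-weighted cross-entropy plus soft Dice loss, a common
    class-imbalance recipe. probs: (N, C) softmax outputs; target: (N,)
    integer labels; class_weights: per-class weights."""
    n, c = probs.shape
    one_hot = np.eye(c)[target]                # (N, C) targets
    w = np.asarray(class_weights)[target]      # per-sample class weight
    ce = -(w * np.log(probs[np.arange(n), target] + eps)).mean()
    inter = (probs * one_hot).sum(axis=0)
    dice = (2 * inter + eps) / (probs.sum(axis=0) + one_hot.sum(axis=0) + eps)
    return ce + (1 - dice.mean())

probs_good = np.array([[0.99, 0.01], [0.01, 0.99]])
probs_flat = np.array([[0.5, 0.5], [0.5, 0.5]])
target = np.array([0, 1])
good = weighted_ce_dice_loss(probs_good, target, [1.0, 1.0])
flat = weighted_ce_dice_loss(probs_flat, target, [1.0, 1.0])
```

Confident, correct predictions yield a lower combined loss than uniform ones, and raising a minority class's weight increases the penalty for misclassifying it.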
Keyword :
Centerline correction; CT images; Multi-scale information aggregated; Pulmonary artery-vein separation
Cite:
GB/T 7714 | Pan, Lin, Li, Zhaopei, Shen, Zhiqiang et al. Learning multi-view and centerline topology connectivity information for pulmonary artery-vein separation [J]. | COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 155. |
MLA | Pan, Lin et al. "Learning multi-view and centerline topology connectivity information for pulmonary artery-vein separation". | COMPUTERS IN BIOLOGY AND MEDICINE 155 (2023). |
APA | Pan, Lin, Li, Zhaopei, Shen, Zhiqiang, Liu, Zheng, Huang, Liqin, Yang, Mingjing et al. Learning multi-view and centerline topology connectivity information for pulmonary artery-vein separation. | COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 155. |
Abstract :
Cell segmentation is one of the most fundamental tasks in medical image analysis, assisting in cell recognition and counting. Segmentation results can be poor due to diverse cell morphology and the frequent presence of impurities in cell images. To address the cell segmentation challenge from a competition held by Neural Information Processing Systems (NIPS), we present a network that combines attention gates with U-Net++ to segment cells of varied sizes. Feature filtering by the attention gate adjusts the convolution block's output, improving the segmentation. Our method reached an F1 score of 0.5874 and a ranked running time of 2.5431 seconds. © 2023 SPIE.
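The attention gate mentioned above re-weights skip-connection features using a gating signal from a deeper layer (in the style of Attention U-Net). A minimal NumPy sketch of an additive attention gate, with illustrative weight matrices and shapes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate: gating signal g produces coefficients
    in (0, 1) that attenuate the skip features x."""
    q = np.maximum(x @ Wx + g @ Wg, 0.0)  # ReLU of combined projections
    alpha = sigmoid(q @ psi)              # attention coefficients, (N, 1)
    return x * alpha                      # filtered skip connection

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))   # skip features (4 positions, 8 channels)
g = rng.normal(size=(4, 8))   # gating signal from the deeper layer
Wx, Wg = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
psi = rng.normal(size=(8, 1))
out = attention_gate(x, g, Wx, Wg, psi)   # same shape as x, attenuated
```

Because the coefficients lie in (0, 1), the gate can only suppress features, never amplify them, which is what lets it filter out responses from impurity regions.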
Keyword :
Cells; Cytology; Image segmentation; Medical imaging
Cite:
GB/T 7714 | Yang, Xinye, Chen, Hao, Huang, Lihua et al. Multi-modal Cell Segmentation based on U-Net++ and Attention Gate [C]. 2023. |
MLA | Yang, Xinye et al. "Multi-modal Cell Segmentation based on U-Net++ and Attention Gate". (2023). |
APA | Yang, Xinye, Chen, Hao, Huang, Lihua, Zhang, Xuru, Huang, Liqin. Multi-modal Cell Segmentation based on U-Net++ and Attention Gate. (2023). |
Abstract :
Convolutional neural networks have obtained promising results in various medical image segmentation tasks. However, these methods ignore the problem of domain shift, which leads a model trained on a source domain to perform poorly when applied to different target domains. In this work, we propose a two-stage segmentation network and utilize histogram matching to eliminate domain shift. Specifically, the first stage obtains the region of interest by coarsely segmenting down-sampled images. The second stage then segments the left atrium (LA) based on that region of interest. The method is evaluated on the LAScarQS 2022 dataset, achieving an average Dice of 0.87790 for LA segmentation. Moreover, the two-stage network is about four times faster than a single-stage network in the test phase. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
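Histogram matching, used above to reduce domain shift, maps source intensities so their cumulative distribution follows a reference image. A self-contained NumPy sketch of the classic technique (the paper's full augmentation pipeline may differ):

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities so their CDF matches the reference CDF
    (classic quantile mapping between intensity distributions)."""
    src, ref = source.ravel(), reference.ravel()
    s_vals, s_idx, s_counts = np.unique(src, return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(ref, return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    matched = np.interp(s_cdf, r_cdf, r_vals)  # quantile-to-quantile mapping
    return matched[s_idx].reshape(source.shape)

# toy example: a 2x2 source mapped onto a reference with a shifted range
src = np.array([[0.0, 1.0], [2.0, 3.0]])
ref = np.array([10.0, 20.0, 30.0, 40.0])
out = match_histogram(src, ref)  # source quantiles take reference values
```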
Keyword :
Deep Learning; Domain Shift; Histogram Matching Augmentation; Left Atrial Segmentation
Cite:
GB/T 7714 | Zhang, X., Yang, X., Huang, L. et al. Two Stage of Histogram Matching Augmentation for Domain Generalization: Application to Left Atrial Segmentation [unknown]. |
MLA | Zhang, X. et al. "Two Stage of Histogram Matching Augmentation for Domain Generalization: Application to Left Atrial Segmentation" [unknown]. |
APA | Zhang, X., Yang, X., Huang, L., Huang, L. Two Stage of Histogram Matching Augmentation for Domain Generalization: Application to Left Atrial Segmentation [unknown]. |
Abstract :
Automatic segmentation of left atrial (LA) scars from late gadolinium enhanced CMR images is a crucial step for atrial fibrillation (AF) recurrence analysis. However, delineating LA scars is tedious and error-prone due to the variation of scar shapes. In this work, we propose a boundary-aware LA scar segmentation network composed of two branches that segment the LA and LA scars, respectively. We exploit the inherent spatial relationship between the LA and LA scars: by introducing a Sobel fusion module between the two segmentation branches, the spatial information of LA boundaries is propagated from the LA branch to the scar branch. LA scar segmentation can thus be performed conditioned on the LA boundary regions. In our experiments, 40 labeled images were used to train the proposed network, and the remaining 20 labeled images were used for evaluation. The network achieved an average Dice score of 0.608 for LA scar segmentation. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
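The Sobel fusion module propagates LA boundary information to the scar branch. The boundary extraction itself can be sketched with plain Sobel filtering over a binary mask (a naive loop version for clarity, not the paper's module):

```python
import numpy as np

def sobel_boundary(mask):
    """Gradient magnitude of a binary mask under the two Sobel kernels;
    responses are nonzero only near the mask boundary."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    m = np.pad(mask.astype(float), 1, mode='edge')
    h, w = mask.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = m[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

mask = np.zeros((5, 5))
mask[1:4, 1:4] = 1           # a filled 3x3 square
edges = sobel_boundary(mask)  # nonzero only around the square's border
```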
Keyword :
Boundary-Aware; Left Atrial Scar; Multi-depth Segmentation
Cite:
GB/T 7714 | Wu, M., Ding, W., Yang, M. et al. Multi-depth Boundary-Aware Left Atrial Scar Segmentation Network [unknown]. |
MLA | Wu, M. et al. "Multi-depth Boundary-Aware Left Atrial Scar Segmentation Network" [unknown]. |
APA | Wu, M., Ding, W., Yang, M., Huang, L. Multi-depth Boundary-Aware Left Atrial Scar Segmentation Network [unknown]. |
Abstract :
Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases. It allows a combination of complementary anatomical, morphological and functional information, increases diagnosis accuracy, and improves the efficacy of cardiovascular interventions and clinical outcomes. Fully automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management. However, these require overcoming significant challenges, including inter-modality misalignment and finding optimal methods to integrate information from different modalities. This paper aims to provide a comprehensive review of multi-modality imaging in cardiology, the computing methods, the validation strategies, the related clinical workflows and future perspectives. For the computing methodologies, we focus on three tasks, i.e., registration, fusion and segmentation, which generally involve multi-modality imaging data, either combining information from different modalities or transferring information across modalities. The review highlights that multi-modality cardiac imaging data has the potential for wide applicability in the clinic, such as trans-aortic valve implantation guidance, myocardial viability assessment, and catheter ablation therapy and its patient selection. Nevertheless, many challenges remain unsolved, such as missing modality, modality selection, combination of imaging and non-imaging data, and uniform analysis and representation of different modalities. There is also work to do in defining how the well-developed techniques fit into clinical workflows and how much additional and relevant information they introduce. These problems are likely to remain an active field of research, and the questions they raise will need to be answered in the future.
Keyword :
Cardiac; Fusion; Multi-modality imaging; Registration; Review; Segmentation
Cite:
GB/T 7714 | Li, Lei, Ding, Wangbin, Huang, Liqin et al. Multi-modality cardiac image computing: A survey [J]. | MEDICAL IMAGE ANALYSIS, 2023, 88. |
MLA | Li, Lei et al. "Multi-modality cardiac image computing: A survey". | MEDICAL IMAGE ANALYSIS 88 (2023). |
APA | Li, Lei, Ding, Wangbin, Huang, Liqin, Zhuang, Xiahai, Grau, Vicente. Multi-modality cardiac image computing: A survey. | MEDICAL IMAGE ANALYSIS, 2023, 88. |
Abstract :
Diabetic retinopathy (DR) is a common ocular disease in diabetic patients. In DR analysis, doctors first need to select excellent-quality images from ultra-wide optical coherence tomography angiography (UW-OCTA); only high-quality images can be used for lesion segmentation and proliferative diabetic retinopathy (PDR) detection. In practical applications, UW-OCTA yields a small number of poor-quality images, so datasets constructed from UW-OCTA face a class-imbalance problem. In this work, we employ a data augmentation strategy and develop a loss function to alleviate class imbalance. Specifically, we apply the Fourier transform to the limited poor-quality data, thus expanding this category. We also exploit the class-imbalance characteristics to improve the cross-entropy loss by weighting. Evaluated on the DRAC2022 dataset, this method achieved a Quadratic Weighted Kappa of 0.7647 and an AUC of 0.8458. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
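One plausible reading of the Fourier-transform augmentation above is to perturb an image's amplitude spectrum while preserving its phase, generating extra minority-class samples. A hedged NumPy sketch (the jitter scheme is our assumption, not the paper's exact transform):

```python
import numpy as np

def fourier_augment(image, rng, amp_jitter=0.1):
    """Augment a grayscale image by jittering its Fourier amplitude
    spectrum while keeping the phase, which preserves spatial layout."""
    f = np.fft.fft2(image)
    amp, phase = np.abs(f), np.angle(f)
    amp = amp * (1.0 + rng.uniform(-amp_jitter, amp_jitter, size=amp.shape))
    augmented = np.fft.ifft2(amp * np.exp(1j * phase))
    return np.real(augmented)

rng = np.random.default_rng(42)
img = rng.random((8, 8))
aug = fourier_augment(img, rng)  # same shape, slightly perturbed content
```

With zero jitter the round trip through the FFT returns the original image, so the perturbation strength directly controls how far each augmented sample strays.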
Keyword :
Class-aware weighted loss; Deep learning; Fourier transformation; Image quality assessment
Cite:
GB/T 7714 | Wu, Z., Chen, Y., Zhang, X. et al. Data Augmentation by Fourier Transformation for Class-Imbalance: Application to Medical Image Quality Assessment [unknown]. |
MLA | Wu, Z. et al. "Data Augmentation by Fourier Transformation for Class-Imbalance: Application to Medical Image Quality Assessment" [unknown]. |
APA | Wu, Z., Chen, Y., Zhang, X., Huang, L. Data Augmentation by Fourier Transformation for Class-Imbalance: Application to Medical Image Quality Assessment [unknown]. |