Query:
Scholar name: Zheng Shaohua (郑绍华)
Abstract :
Medical image sharing is the most important part of cloud-based medical information sharing, since more than 80% of medical information consists of medical images; however, sharing this information raises problems of data security, privacy protection, and information retrieval. Although many reversible data hiding in encrypted images (RDH-EI) schemes exist, they generally cannot be applied directly to DICOM medical images. To meet the privacy-protection and information-retrieval requirements of DICOM files in cloud services, this paper proposes an RDH-EI scheme for DICOM images based on ZUC additive homomorphism and multi-layer difference histogram shifting. The proposed scheme does not change the DICOM file format or increase the file size, and image decryption and data extraction are separable. Experimental results show that the proposed scheme offers good flexibility and computational efficiency and is well suited to cloud sharing.
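The core reversible-embedding primitive named in the abstract, difference histogram shifting, can be shown in a single-layer plaintext sketch. This is only an illustration of the general technique, not the paper's scheme: the actual method runs multiple layers in the ZUC-encrypted domain via additive homomorphism, and a real implementation must also handle overflow at pixel value 255, which is ignored here.

```python
import numpy as np

def embed(pixels, bits):
    """Single-layer difference histogram shifting over pixel pairs.
    Peak bin is difference d == 0; positive differences are shifted
    up by one to make room, and each d == 0 pair carries one bit."""
    out = np.array(pixels, dtype=np.int64)
    bi = 0
    for i in range(0, len(out) - 1, 2):
        d = out[i + 1] - out[i]
        if d >= 1:
            out[i + 1] += 1            # shift the positive half of the histogram
        elif d == 0 and bi < len(bits):
            out[i + 1] += bits[bi]     # embed one bit in the peak bin
            bi += 1
    return out

def extract(marked, n_bits):
    """Recover the message and restore the original pixels exactly."""
    out = np.array(marked, dtype=np.int64)
    bits = []
    for i in range(0, len(out) - 1, 2):
        d = out[i + 1] - out[i]
        if d in (0, 1) and len(bits) < n_bits:
            bits.append(int(d))        # d is the embedded bit itself
            out[i + 1] = out[i]        # restore the peak-bin pixel
        elif d >= 1:
            out[i + 1] -= 1            # undo the histogram shift
    return bits, out
```

Extraction here needs only the marked pixels and the message length; the paper achieves true separability of decryption and extraction through the homomorphic property of the ZUC-based encryption, which this sketch omits.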
Keyword :
DICOM; ZUC algorithm; additive homomorphism; reversible data hiding; multi-layer difference histogram shifting
Cite:
GB/T 7714 | 郑梓劲 , 宋志刚 , 杨文琴 et al. 基于国密加性同态的医学影像可逆信息隐藏方法 [J]. | 长江信息通信 , 2024 , 37 (03) : 1-6 . |
MLA | 郑梓劲 et al. "基于国密加性同态的医学影像可逆信息隐藏方法" . | 长江信息通信 37 . 03 (2024) : 1-6 . |
APA | 郑梓劲 , 宋志刚 , 杨文琴 , 李代松 , 郑绍华 . 基于国密加性同态的医学影像可逆信息隐藏方法 . | 长江信息通信 , 2024 , 37 (03) , 1-6 . |
Abstract :
Objectives: Patients with T4 obstructive colorectal cancer (OCC) have a high mortality rate, so accurately distinguishing T4 from T1-T3 (NT4) OCC is an important part of preoperative evaluation, especially in the emergency setting. This paper introduces three models (radiomics, deep learning, and deep learning-based radiomics) to identify T4 OCC. Methods: We established a dataset of computed tomography (CT) images of 164 patients with pathologically confirmed OCC, from which 2537 slides were extracted. First, since T4 tumors penetrate the bowel wall and involve adjacent organs, we explored whether the peritumoral region contributes to the assessment of T4 OCC. Furthermore, we visualized the radiomics and deep learning features using t-distributed stochastic neighbor embedding (t-SNE). Finally, we built a merged model by fusing radiomic features with deep learning features. The performance of each model was evaluated by the area under the receiver operating characteristic curve (AUC). Results: In the test cohort, the AUC value of the radiomics model in the dilated region of interest (dROI) was 0.770, and the AUC value of the deep learning model with patches extended by 20 pixels reached 0.936. Combining radiomics and deep learning features, our method achieved an AUC value of 0.947 for the T4 versus NT4 classification, which increased to 0.950 after the addition of clinical features. Conclusion: The merged deep learning-radiomics model outperformed the deep learning model and significantly outperformed the radiomics model. The experimental results demonstrate that including the peritumoral region improves the prediction performance of both the radiomics model and the deep learning model.
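The AUC reported throughout these results can be computed without plotting a ROC curve, as the Mann-Whitney rank statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A small self-contained sketch (the data in the test are made up, not the study's):

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic; ties count as half."""
    pos = scores[y_true == 1]          # scores of positive (e.g. T4) cases
    neg = scores[y_true == 0]          # scores of negative (NT4) cases
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```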
Keyword :
Deep learning; Obstructive colorectal cancer; Peritumoral region; Radiomics; ResNet
Cite:
GB/T 7714 | Pan, Lin , He, Tian , Huang, Zihan et al. Radiomics approach with deep learning for predicting T4 obstructive colorectal cancer using CT image [J]. | ABDOMINAL RADIOLOGY , 2023 , 48 (4) : 1246-1259 . |
MLA | Pan, Lin et al. "Radiomics approach with deep learning for predicting T4 obstructive colorectal cancer using CT image" . | ABDOMINAL RADIOLOGY 48 . 4 (2023) : 1246-1259 . |
APA | Pan, Lin , He, Tian , Huang, Zihan , Chen, Shuai , Zhang, Junrong , Zheng, Shaohua et al. Radiomics approach with deep learning for predicting T4 obstructive colorectal cancer using CT image . | ABDOMINAL RADIOLOGY , 2023 , 48 (4) , 1246-1259 . |
Abstract :
Background: With the wide application of CT scanning, the separation of pulmonary arteries and veins (A/V) in CT images plays an important role in assisting surgeons with preoperative planning of lung cancer surgery. However, distinguishing arteries from veins in chest CT images remains challenging due to their complex structure and similar appearance. Methods: We propose a novel method for automatically separating pulmonary arteries and veins based on vessel topology information and a twin-pipe deep learning network. First, the vessel tree topology is constructed by combining scale-space particles and multi-stencils fast marching (MSFM) methods to ensure the continuity and authenticity of the topology. Second, a twin-pipe network is designed to learn the multiscale differences between arteries and veins and the characteristics of the small arteries that closely accompany bronchi. Finally, we designed a topology optimizer that considers interbranch and intrabranch topological relationships to optimize the artery-vein classification results. Results: The proposed approach is validated on the public CARVE14 dataset and our private dataset. Compared with the ground truth, the proposed method achieves an average accuracy of 90.1% on the CARVE14 dataset and 96.2% on our local dataset. Conclusions: The method can effectively separate pulmonary arteries and veins and generalizes well to chest CT images from different devices, as well as to enhanced and noncontrast CT image sequences from the same device.
Keyword :
Chest CT images; Preoperative planning; Pulmonary artery-vein segmentation; Topology reconstruction; Twin-pipe network
Cite:
GB/T 7714 | Pan, Lin , Yan, Xiaochao , Zheng, Yaoyong et al. Automatic pulmonary artery-vein separation in CT images using a twin-pipe network and topology reconstruction [J]. | PEERJ COMPUTER SCIENCE , 2023 , 9 . |
MLA | Pan, Lin et al. "Automatic pulmonary artery-vein separation in CT images using a twin-pipe network and topology reconstruction" . | PEERJ COMPUTER SCIENCE 9 (2023) . |
APA | Pan, Lin , Yan, Xiaochao , Zheng, Yaoyong , Huang, Liqin , Zhang, Zhen , Fu, Rongda et al. Automatic pulmonary artery-vein separation in CT images using a twin-pipe network and topology reconstruction . | PEERJ COMPUTER SCIENCE , 2023 , 9 . |
Abstract :
Background: Automatic pulmonary artery-vein separation has considerable importance in the diagnosis and treatment of lung diseases. However, insufficient connectivity and spatial inconsistency have long been problems in artery-vein separation. Methods: A novel automatic method for artery-vein separation in CT images is presented in this work. Specifically, a multi-scale information aggregated network (MSIA-Net), including multi-scale fusion blocks and deep supervision, is proposed to learn artery-vein features and aggregate additional semantic information. The proposed method integrates nine MSIA-Net models for artery-vein separation, vessel segmentation, and centerline separation tasks along axial, coronal, and sagittal multi-view slices. First, preliminary artery-vein separation results are obtained by the proposed multi-view fusion strategy (MVFS). Then, a centerline correction algorithm (CCA) uses the centerline separation results to correct the preliminary artery-vein separation results. Finally, the vessel segmentation results are utilized to reconstruct the artery-vein morphology. In addition, weighted cross-entropy and dice loss are employed to address the class imbalance problem. Results: We constructed 50 manually labeled contrast-enhanced CT scans for five-fold cross-validation, and experimental results demonstrated that our method achieves superior segmentation performance of 97.7%, 85.1%, and 84.9% on ACC, Pre, and DSC, respectively. Additionally, a series of ablation studies demonstrates the effectiveness of the proposed components. Conclusion: The proposed method can effectively solve the problem of insufficient vascular connectivity and correct the spatial inconsistency of artery-vein separation.
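The class-imbalance losses named here, weighted cross-entropy plus dice loss, are standard and easy to sketch in NumPy for the binary case. The positive-class weight and mixing coefficient below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft dice loss: 1 - 2|P∩T| / (|P| + |T|), computed on soft predictions."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def weighted_bce(pred, target, pos_weight=5.0, eps=1e-7):
    """Weighted binary cross-entropy: up-weights the rare foreground class."""
    p = np.clip(pred, eps, 1.0 - eps)
    return -(pos_weight * target * np.log(p)
             + (1.0 - target) * np.log(1.0 - p)).mean()

def combined_loss(pred, target, alpha=0.5):
    """Convex combination of the two losses (alpha is an assumed setting)."""
    return alpha * weighted_bce(pred, target) + (1.0 - alpha) * dice_loss(pred, target)
```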
Keyword :
Centerline correction; CT images; Multi-scale information aggregated network; Pulmonary artery-vein separation
Cite:
GB/T 7714 | Pan, Lin , Li, Zhaopei , Shen, Zhiqiang et al. Learning multi-view and centerline topology connectivity information for pulmonary artery-vein separation [J]. | COMPUTERS IN BIOLOGY AND MEDICINE , 2023 , 155 . |
MLA | Pan, Lin et al. "Learning multi-view and centerline topology connectivity information for pulmonary artery-vein separation" . | COMPUTERS IN BIOLOGY AND MEDICINE 155 (2023) . |
APA | Pan, Lin , Li, Zhaopei , Shen, Zhiqiang , Liu, Zheng , Huang, Liqin , Yang, Mingjing et al. Learning multi-view and centerline topology connectivity information for pulmonary artery-vein separation . | COMPUTERS IN BIOLOGY AND MEDICINE , 2023 , 155 . |
Abstract :
Diabetic retinopathy (DR) is a common ocular complication in diabetic patients and a major cause of blindness. DR often leads to progressive changes in the structure of the vascular system and causes abnormalities. In DR analysis, image quality must be evaluated first and images with better imaging quality selected, followed by detection of proliferative diabetic retinopathy (PDR). Therefore, in this paper, the MixNet classification network was first used for image quality assessment (IQA), and the ResNet50-CBAM network was then used for DR grading; both networks were combined with a k-fold cross-validation strategy. We evaluated our method in the 2022 Diabetic Retinopathy Analysis Challenge (DRAC), where image quality was assessed on 1103 ultra-wide optical coherence tomography angiography (UW-OCTA) images and DR grading was performed on 997 UW-OCTA images. Our method achieved quadratic weighted kappa scores of 0.7547 and 0.8010 on the respective test tasks.
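The quadratic weighted kappa used for both DRAC tasks penalizes disagreements by the squared distance between ordinal grades. A minimal sketch of the metric itself (the labels in the test are toy values, not challenge data):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """QWK = 1 - sum(w*O) / sum(w*E), with quadratic disagreement weights."""
    O = np.zeros((n_classes, n_classes))              # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    w = np.array([[(i - j) ** 2 / (n_classes - 1) ** 2
                   for j in range(n_classes)] for i in range(n_classes)])
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()  # chance agreement
    return 1.0 - (w * O).sum() / (w * E).sum()
```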
Keyword :
Diabetic retinopathy grading; Image quality assessment; Ultra-wide optical coherence tomography angiography
Cite:
GB/T 7714 | Zhang, W. , Chen, H. , Li, D. et al. Automatic Image Quality Assessment and DR Grading Method Based on Convolutional Neural Network [unknown]. |
MLA | Zhang, W. et al. "Automatic Image Quality Assessment and DR Grading Method Based on Convolutional Neural Network" [unknown]. |
APA | Zhang, W. , Chen, H. , Li, D. , Zheng, S. . Automatic Image Quality Assessment and DR Grading Method Based on Convolutional Neural Network [unknown]. |
Abstract :
Purpose: The identification of early-stage Parkinson's disease (PD) is important for the effective management of patients, affecting their treatment and prognosis. Recently, structural brain networks (SBNs) have been used to diagnose PD. However, mining abnormal patterns from high-dimensional SBNs remains a challenge due to the complex topology of the brain. Meanwhile, the prediction mechanisms of existing deep learning models are often complicated, and it is difficult to extract effective interpretations. In addition, most works focus only on the classification of imaging and ignore clinical scores in practical applications, which limits the ability of the model. Inspired by the regional modularity of SBNs, we adopted graph learning from the perspective of node clustering to construct an interpretable framework for PD classification. Methods: In this study, a multi-task graph structure learning framework based on node clustering (MNC-Net) is proposed for the early diagnosis of PD. Specifically, we modeled complex SBNs as modular graphs that facilitate the representation learning of abnormal patterns. Traditional graph neural networks are optimized through graph structure learning based on node clustering, which identifies potentially abnormal brain regions and reduces the impact of irrelevant noise. Furthermore, we employed a regression task to link clinical scores to disease classification, and incorporated latent domain information into model training through multi-task learning. Results: We validated the proposed approach on the Parkinson's Progression Markers Initiative dataset. Experimental results showed that our MNC-Net effectively separated early-stage PD from healthy controls (HC) with an accuracy of 95.5%. The t-SNE figures show that our graph structure learning method can capture more efficient and discriminatory features. Furthermore, node clustering parameters were used as importance weights to extract salient task-related brain regions (ROIs). These ROIs are involved in the development of mood disorders, tremors, imbalances, and other symptoms, highlighting the importance of memory, language, and mild motor function in early PD. In addition, statistical results from clinical scores confirmed that our model could capture abnormal connectivity that was significantly different between PD and HC. These results are consistent with previous studies, demonstrating the interpretability of our methods.
Keyword :
Clinical scores; Early Parkinson's disease; Graph neural networks; Structural brain network
Cite:
GB/T 7714 | Huang, Liqin , Ye, Xiaofang , Yang, Mingjing et al. MNC-Net: Multi-task graph structure learning based on node clustering for early Parkinson's disease diagnosis [J]. | COMPUTERS IN BIOLOGY AND MEDICINE , 2023 , 152 . |
MLA | Huang, Liqin et al. "MNC-Net: Multi-task graph structure learning based on node clustering for early Parkinson's disease diagnosis" . | COMPUTERS IN BIOLOGY AND MEDICINE 152 (2023) . |
APA | Huang, Liqin , Ye, Xiaofang , Yang, Mingjing , Pan, Lin , Zheng, Shaohua . MNC-Net: Multi-task graph structure learning based on node clustering for early Parkinson's disease diagnosis . | COMPUTERS IN BIOLOGY AND MEDICINE , 2023 , 152 . |
Abstract :
Kidney cancer is one of the ten most common cancers in the world, and its incidence is still increasing. Early detection and accurate treatment are the most effective control measures. Precise, automatic segmentation of kidney tumors in computed tomography (CT) is an important prerequisite for medical procedures such as pathological localization and radiotherapy planning. However, due to the large differences in the shape, size, and location of kidney tumors, their accurate and automatic segmentation still faces great challenges. Recently, U-Net and its variants have been adopted to solve medical image segmentation problems. Although these methods achieve favorable performance, the long-range dependencies of feature maps learned by convolutional neural networks (CNNs) are overlooked, which leaves room for further improvement. In this paper, we propose a squeeze-and-excitation encoder-decoder network, named SeResUNet, for kidney and kidney tumor segmentation. SeResUNet is a U-Net-like architecture. The encoder contains a SeResNet that learns high-level semantic features and models the long-range dependencies among different channels of the learned feature maps. The decoder is the same as in the vanilla U-Net. The encoder and decoder are connected by skip connections for feature concatenation. We used the Kidney and Kidney Tumor Segmentation 2021 dataset to evaluate the proposed method. The dice, surface dice, and tumor dice scores of SeResUNet are 67.2%, 54.4%, and 54.5%, respectively.
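The channel recalibration that squeeze-and-excitation performs, and that SeResUNet's encoder relies on, is compact enough to write as a NumPy forward pass. The channel count and reduction ratio in the test are illustrative, and a trained network would learn the two weight matrices rather than draw them at random:

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-excitation forward pass.
    x: feature map of shape (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    z = x.mean(axis=(1, 2))                    # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z + b1, 0.0)           # excitation FC1 + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))   # excitation FC2 + sigmoid -> (C,)
    return x * s[:, None, None]                # recalibrate each channel
```

The sigmoid gate keeps every per-channel scale strictly between 0 and 1, so the block reweights channels without changing the spatial layout of the feature map.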
Keyword :
Kidney tumor segmentation; Squeeze-and-excitation network; U-Net
Cite:
GB/T 7714 | Wen, Jianhui , Li, Zhaopei , Shen, Zhiqiang et al. Squeeze-and-Excitation Encoder-Decoder Network for Kidney and Kidney Tumor Segmentation in CT Images [J]. | KIDNEY AND KIDNEY TUMOR SEGMENTATION, KITS 2021 , 2022 , 13168 : 71-79 . |
MLA | Wen, Jianhui et al. "Squeeze-and-Excitation Encoder-Decoder Network for Kidney and Kidney Tumor Segmentation in CT Images" . | KIDNEY AND KIDNEY TUMOR SEGMENTATION, KITS 2021 13168 (2022) : 71-79 . |
APA | Wen, Jianhui , Li, Zhaopei , Shen, Zhiqiang , Zheng, Yaoyong , Zheng, Shaohua . Squeeze-and-Excitation Encoder-Decoder Network for Kidney and Kidney Tumor Segmentation in CT Images . | KIDNEY AND KIDNEY TUMOR SEGMENTATION, KITS 2021 , 2022 , 13168 , 71-79 . |
Abstract :
Automatic segmentation of multiple organs is a challenging topic. Most existing approaches are based on 2D or 3D networks, which leads to insufficient contextual exploration in organ segmentation. In recent years, many automatic segmentation methods based on fully supervised deep learning have been proposed. However, it is very expensive and time-consuming for experienced medical practitioners to annotate a large number of pixels. In this paper, we propose a new 2.5D multi-slice semi-supervised method for abdominal organ segmentation. The network exploits information along the z-axis of CT images, preserving the useful inter-slice context. Besides, we combine cross-entropy loss and dice loss to improve the performance of our method. We apply a teacher-student model with an exponential moving average (EMA) strategy to leverage the unlabeled data. The student model is trained with labeled data, and the teacher model is obtained by smoothing the student model weights via EMA. The pseudo-labels of unlabeled images predicted by the teacher model are then used to train the student model as the final model. The mean DSC over all cases on the validation set was 0.5684, the mean NSD was 0.5971, and the total run time was 783.14 s.
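The teacher-update rule in the mean-teacher scheme described above is a per-parameter exponential moving average. A minimal sketch (the decay value in the test is illustrative; the abstract does not state one):

```python
import numpy as np

def ema_update(teacher, student, decay=0.99):
    """One EMA step: teacher <- decay * teacher + (1 - decay) * student,
    applied independently to every parameter tensor."""
    return {name: decay * teacher[name] + (1.0 - decay) * student[name]
            for name in teacher}
```

The teacher is never trained by gradient descent; it only tracks the student, which stabilizes the pseudo-labels it produces for the unlabeled scans.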
Keyword :
Computer aided instruction; Computerized tomography; Deep learning; Medical imaging; Students; Supervised learning
Cite:
GB/T 7714 | Chen, Hao , Zhang, Wen , Yan, Xiaochao et al. Multi-organ Segmentation Based on 2.5D Semi-supervised Learning [C] . 2022 : 74-86 . |
MLA | Chen, Hao et al. "Multi-organ Segmentation Based on 2.5D Semi-supervised Learning" . (2022) : 74-86 . |
APA | Chen, Hao , Zhang, Wen , Yan, Xiaochao , Chen, Yanbin , Chen, Xin , Wu, Mengjun et al. Multi-organ Segmentation Based on 2.5D Semi-supervised Learning . (2022) : 74-86 . |
Abstract :
Automatic segmentation and centerline extraction of blood vessels from retinal fundus images are essential steps in measuring the state of retinal blood vessels and supporting auxiliary diagnosis. Combining information from vessel segments and the centerline can help improve the continuity of results and overall performance. However, previous studies have usually treated these two tasks as separate research topics. Therefore, we propose a novel multitask learning network (MSC-Net) for retinal vessel segmentation and centerline extraction. The network uses a multibranch design to share information between the two tasks. A channel and atrous spatial fusion block (CAS-FB) is designed to fuse and correct the features of different branches and different scales. The clDice loss function is also used to constrain the topological continuity of vessel segments and the centerline. Experimental results on different fundus vessel datasets (DRIVE, STARE, and CHASE) show that our method obtains better segmentation and centerline extraction results at different scales and has better topological continuity than state-of-the-art methods.
Keyword :
centerline extraction; multitask learning; retinal fundus images; vessel segmentation
Cite:
GB/T 7714 | Pan, Lin , Zhang, Zhen , Zheng, Shaohua et al. MSC-Net: Multitask Learning Network for Retinal Vessel Segmentation and Centerline Extraction [J]. | APPLIED SCIENCES-BASEL , 2022 , 12 (1) . |
MLA | Pan, Lin et al. "MSC-Net: Multitask Learning Network for Retinal Vessel Segmentation and Centerline Extraction" . | APPLIED SCIENCES-BASEL 12 . 1 (2022) . |
APA | Pan, Lin , Zhang, Zhen , Zheng, Shaohua , Huang, Liqin . MSC-Net: Multitask Learning Network for Retinal Vessel Segmentation and Centerline Extraction . | APPLIED SCIENCES-BASEL , 2022 , 12 (1) . |
Abstract :
Automatic segmentation of kidney tumors and lesions in medical images is essential for clinical treatment and diagnosis. In this work, we propose a two-stage cascade network to segment three hierarchical regions from CT scans: kidney, kidney tumor, and cyst. The cascade decomposes the four-class segmentation problem into two segmentation subtasks. The kidney is obtained in the first stage using a modified 3D U-Net called Kidney-Net. In the second stage, we designed a fine segmentation model, named Masses-Net, to segment kidney tumor and cyst based on the kidney region obtained in the first stage. A multi-dimension feature (MDF) module is utilized to learn more spatial and contextual information, and the convolutional block attention module (CBAM) is introduced to focus on important features. Moreover, we adopt a deep supervision mechanism in the decoding part to regularize segmentation accuracy and feature learning. Experiments on the KiTS2021 test set show that our proposed method achieves dice, surface dice, and tumor dice scores of 0.650, 0.518, and 0.478, respectively.
Keyword :
Cascade framework; Deep learning; Kidney/tumor segmentation
Cite:
GB/T 7714 | Lin, Chaonan , Fu, Rongda , Zheng, Shaohua . Kidney and Kidney Tumor Segmentation Using a Two-Stage Cascade Framework [J]. | KIDNEY AND KIDNEY TUMOR SEGMENTATION, KITS 2021 , 2022 , 13168 : 59-70 . |
MLA | Lin, Chaonan et al. "Kidney and Kidney Tumor Segmentation Using a Two-Stage Cascade Framework" . | KIDNEY AND KIDNEY TUMOR SEGMENTATION, KITS 2021 13168 (2022) : 59-70 . |
APA | Lin, Chaonan , Fu, Rongda , Zheng, Shaohua . Kidney and Kidney Tumor Segmentation Using a Two-Stage Cascade Framework . | KIDNEY AND KIDNEY TUMOR SEGMENTATION, KITS 2021 , 2022 , 13168 , 59-70 . |