Publication Search

Query: 学者姓名 (Scholar name): 李兰兰 (Li Lanlan)

CVT-HNet: a fusion model for recognizing perianal fistulizing Crohn’s disease based on CNN and ViT Scopus
Journal Article | 2025, 25(1) | BMC Medical Imaging

Abstract :

Background: Accurate identification of anal fistulas is essential, as it directly impacts the severity of subsequent perianal infections, prognostic indicators, and overall treatment outcomes. Traditional manual recognition methods are inefficient. In response, computer vision methods have been adopted to improve efficiency. Convolutional neural networks (CNNs) are the main basis for detecting anal fistulas in current computer vision techniques. However, these methods often struggle to capture long-range dependencies effectively, which results in inadequate handling of images of anal fistulas. Methods: This study proposes a new fusion model, CVT-HNet, that integrates MobileNet with vision transformer technology. This design utilizes CNNs to extract local features and Transformers to capture long-range dependencies. In addition, the MobileNetV2 with Coordinate Attention mechanism and the encoder modules are optimized to improve the precision of detecting anal fistulas. Results: Comparative experimental results show that CVT-HNet achieves an accuracy of 80.66% with significant robustness, surpassing both pure Transformer architectures and other fusion networks. Internal validation demonstrates the reliability and consistency of CVT-HNet; external validation demonstrates commendable transportability and generalizability. In visualization analysis, CVT-HNet focuses more tightly on the region of interest in images of anal fistulas. Furthermore, the contribution of each CVT-HNet component module is evaluated by ablation experiments. Conclusion: The experimental results highlight the superior performance and practicality of CVT-HNet in detecting anal fistulas. By combining local and global information, the model achieves not only high accuracy and robustness but also strong generalizability, making it suitable for real-world applications where variability in data is common. These findings emphasize its effectiveness in clinical contexts. © The Author(s) 2025.
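The hybrid design the abstract describes — a CNN stem for local features feeding a Transformer encoder for long-range dependencies — can be sketched roughly as follows in PyTorch. This is an illustrative toy, not the published CVT-HNet; all layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class HybridCNNViT(nn.Module):
    """Toy CNN + Transformer hybrid: a small conv stem extracts local
    features, the feature map is flattened into a token sequence for a
    Transformer encoder, and a linear head produces class logits."""
    def __init__(self, num_classes=2, dim=64):
        super().__init__()
        # Conv stem: 3x224x224 input -> dim x 14x14 local-feature map
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(14),
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        f = self.stem(x)                       # (B, dim, 14, 14)
        tokens = f.flatten(2).transpose(1, 2)  # (B, 196, dim) token sequence
        z = self.encoder(tokens).mean(dim=1)   # pool over tokens
        return self.head(z)                    # (B, num_classes) logits

model = HybridCNNViT().eval()
logits = model(torch.randn(1, 3, 224, 224))
```

The conv stem keeps fine local texture while the attention layers relate distant image regions, which is the complementarity the abstract argues for.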

Keyword :

Convolutional neural network; Image classification; MRI; Perianal fistulizing Crohn's disease; Vision transformer

Cite:


GB/T 7714 Li, L. , Wang, Z. , Wang, C. et al. CVT-HNet: a fusion model for recognizing perianal fistulizing Crohn’s disease based on CNN and ViT [J]. | BMC Medical Imaging , 2025 , 25 (1) .
MLA Li, L. et al. "CVT-HNet: a fusion model for recognizing perianal fistulizing Crohn’s disease based on CNN and ViT" . | BMC Medical Imaging 25 . 1 (2025) .
APA Li, L. , Wang, Z. , Wang, C. , Chen, T. , Deng, K. , Wei, H. et al. CVT-HNet: a fusion model for recognizing perianal fistulizing Crohn’s disease based on CNN and ViT . | BMC Medical Imaging , 2025 , 25 (1) .


Deep learning model targeting cancer surrounding tissues for accurate cancer diagnosis based on histopathological images SCIE
Journal Article | 2025, 23(1) | JOURNAL OF TRANSLATIONAL MEDICINE
WoS CC Cited Count: 2

Abstract :

Accurate and fast histological diagnosis of cancers is crucial for successful treatment. Deep learning-based approaches have assisted pathologists in efficient cancer diagnosis. The remodeled microenvironment and field cancerization may enable cancer-specific features in images of the non-cancer regions surrounding cancer, which may provide additional information, not available in the cancer region, to improve cancer diagnosis. Here, we proposed a deep learning framework with a fine-tuned target proportion of cancer-surrounding tissue in histological images for gastric cancer diagnosis. By employing six deep learning-based models targeting regions of interest (ROI) with different proportions of non-cancer and cancer regions, we uncovered the diagnostic value of the non-cancer ROI, and model performance for cancer diagnosis depended on the proportion. Then, we constructed a model based on MobileNetV2 with optimized weights targeting non-cancer and cancer ROI to diagnose gastric cancer (DeepNCCNet). In the external validation, the optimized DeepNCCNet demonstrated excellent generalization abilities with an accuracy of 93.96%. In conclusion, we discovered a non-cancer ROI weight-dependent model performance, indicating the diagnostic value of non-cancer regions with a potentially remodeled microenvironment and field cancerization, which provides a promising image resource for cancer diagnosis. The DeepNCCNet could be readily applied to clinical diagnosis of gastric cancer, which is useful in clinical settings such as the absence or a minimal amount of tumor tissue in an insufficient biopsy.
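The proportion-controlled ROI sampling described above — six models trained on different mixes of non-cancer and cancer tiles — can be illustrated with a small helper. Names and the sampling scheme are hypothetical; the study's actual data pipeline is not reproduced here.

```python
import random

def mix_rois(cancer_tiles, noncancer_tiles, noncancer_frac, n, seed=0):
    """Draw a training list of n ROI tiles in which a fraction
    `noncancer_frac` comes from tissue surrounding the tumor and the
    rest from the tumor itself (sampling with replacement)."""
    rng = random.Random(seed)
    n_non = round(n * noncancer_frac)
    sample = [rng.choice(noncancer_tiles) for _ in range(n_non)] + \
             [rng.choice(cancer_tiles) for _ in range(n - n_non)]
    rng.shuffle(sample)
    return sample

# One mix per model: six target proportions of non-cancer ROI, as in
# the abstract's six-model comparison (values illustrative)
proportions = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
batch = mix_rois(["c1", "c2"], ["n1", "n2"], noncancer_frac=0.4, n=10)
```

Sweeping `noncancer_frac` and comparing validation accuracy is the experiment that revealed the proportion-dependent performance the abstract reports.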

Keyword :

Cancer-adjacent tissues; Cancer diagnosis; Deep learning; Field cancerization; Histological image

Cite:


GB/T 7714 Li, Lanlan , Geng, Yi , Chen, Tao et al. Deep learning model targeting cancer surrounding tissues for accurate cancer diagnosis based on histopathological images [J]. | JOURNAL OF TRANSLATIONAL MEDICINE , 2025 , 23 (1) .
MLA Li, Lanlan et al. "Deep learning model targeting cancer surrounding tissues for accurate cancer diagnosis based on histopathological images" . | JOURNAL OF TRANSLATIONAL MEDICINE 23 . 1 (2025) .
APA Li, Lanlan , Geng, Yi , Chen, Tao , Lin, Kaixin , Xie, Chengjie , Qi, Jing et al. Deep learning model targeting cancer surrounding tissues for accurate cancer diagnosis based on histopathological images . | JOURNAL OF TRANSLATIONAL MEDICINE , 2025 , 23 (1) .

Improved colorectal cancer lesion segmentation algorithm based on CG-Net (CG-Net改进的结直肠癌病灶分割算法) PKU
Journal Article | 2024, 45(1), 299-306 | 计算机工程与设计

Abstract :

To address deep learning segmentation algorithms' tendency to miss fine lesion details, and parameter counts too large for practical deployment, a lightweight deep segmentation network based on an improved CG-Net is proposed. An improved efficient pyramid split attention module and depthwise separable convolutions are added to the encoder blocks to learn rich multi-scale global features; following the residual design, the attention module is combined with the encoder block into an efficient pyramid context-guided module that helps the network learn global and local feature information. Experiments on colorectal tumor lesion segmentation, using an abdominal MRI image database provided by the Sixth Affiliated Hospital of Sun Yat-sen University, verify the effectiveness of the improved model in both segmentation accuracy and model lightweighting.
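The depthwise separable convolutions used here for lightweighting replace one full convolution with a per-channel filter plus a 1×1 channel mix; the parameter saving is easy to verify in PyTorch (channel counts below are illustrative, not the paper's):

```python
import torch.nn as nn

def n_params(m):
    """Total number of learnable parameters in a module."""
    return sum(p.numel() for p in m.parameters())

# Standard 3x3 convolution, 64 -> 64 channels
standard = nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False)

# Depthwise separable equivalent: per-channel 3x3 filter, then 1x1 mix
separable = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64, bias=False),  # depthwise
    nn.Conv2d(64, 64, kernel_size=1, bias=False),                        # pointwise
)

print(n_params(standard), n_params(separable))  # 36864 vs 4672
```

The roughly 8× reduction (64·64·9 = 36864 parameters down to 64·9 + 64·64 = 4672) is what makes such blocks attractive for lightweight encoder designs.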

Keyword :

Medical image segmentation; Attention mechanism; Depthwise separable convolution; Deep learning; Colorectal cancer; Encoder-decoder network; Lightweight

Cite:


GB/T 7714 李兰兰 , 胡益煌 , 王大彪 et al. CG-Net改进的结直肠癌病灶分割算法 [J]. | 计算机工程与设计 , 2024 , 45 (1) : 299-306 .
MLA 李兰兰 et al. "CG-Net改进的结直肠癌病灶分割算法" . | 计算机工程与设计 45 . 1 (2024) : 299-306 .
APA 李兰兰 , 胡益煌 , 王大彪 , 徐斌 , 李娟 . CG-Net改进的结直肠癌病灶分割算法 . | 计算机工程与设计 , 2024 , 45 (1) , 299-306 .

Colon polyp segmentation method based on a short-term dense concatenate attention network (基于短期密集连接注意网络的结肠息肉分割方法)
Journal Article | 2024, 52(8), 2469-2472, 2497 | 计算机与数字工程

Abstract :

Colonoscopy is operator-dependent and has a high miss rate, so a real-time polyp segmentation algorithm is needed to assist physicians in polyp detection. This paper therefore proposes the Short-Term Dense Concatenate Attention Network (STDCANet). The core layer of the encoder is a short-term dense concatenate attention module that integrates the strengths of conventional convolution, STDC, residual connections, and NAM, preserving a scalable receptive field and multi-scale information at low computational complexity. At the decoding end, a PD decoder is introduced that discards some low-level features to accelerate the model and aggregates high-level features to achieve good segmentation results. Compared with classic medical image segmentation networks on the CVC-ClinicDB dataset, STDCANet is superior in both performance and model complexity, showing potential for real-time clinical segmentation.

Keyword :

Medical image processing; Attention mechanism; Deep learning; Colonoscopy images

Cite:


GB/T 7714 李兰兰 , 张孝辉 , 王大彪 . 基于短期密集连接注意网络的结肠息肉分割方法 [J]. | 计算机与数字工程 , 2024 , 52 (8) : 2469-2472,2497 .
MLA 李兰兰 et al. "基于短期密集连接注意网络的结肠息肉分割方法" . | 计算机与数字工程 52 . 8 (2024) : 2469-2472,2497 .
APA 李兰兰 , 张孝辉 , 王大彪 . 基于短期密集连接注意网络的结肠息肉分割方法 . | 计算机与数字工程 , 2024 , 52 (8) , 2469-2472,2497 .

Development and validation of the MRI-based deep learning classifier for distinguishing perianal fistulizing Crohn's disease from cryptoglandular fistula: a multicenter cohort study SCIE
Journal Article | 2024, 78 | ECLINICALMEDICINE
WoS CC Cited Count: 2

Abstract :

Background A singular reliable modality for early distinguishing perianal fistulizing Crohn's disease (PFCD) from cryptoglandular fistula (CGF) is currently lacking. We aimed to develop and validate an MRI-based deep learning classifier to effectively discriminate between them. Methods The present study retrospectively enrolled 1054 patients with PFCD or CGF from three Chinese tertiary referral hospitals between January 1, 2015, and December 31, 2021. The patients were divided into four cohorts: training cohort (n = 800), validation cohort (n = 100), internal test cohort (n = 100) and external test cohort (n = 54). Two deep convolutional neural networks (DCNN), namely MobileNetV2 and ResNet50, were respectively trained using the transfer learning strategy on a dataset consisting of 44,871 MR images. The performance of the DCNN models was compared to that of radiologists using various metrics, including receiver operating characteristic curve (ROC) analysis, accuracy, sensitivity, and specificity. DeLong testing was employed for comparing the areas under the curves (AUCs). Univariate and multivariate analyses were conducted to explore potential factors associated with classifier performance. Findings A total of 532 PFCD and 522 CGF patients were included. Both pre-trained DCNN classifiers achieved encouraging performances in the internal test cohort (MobileNetV2 AUC: 0.962, 95% CI 0.903-0.990; ResNet50 AUC: 0.963, 95% CI 0.905-0.990), as well as the external test cohort (MobileNetV2 AUC: 0.885, 95% CI 0.769-0.956; ResNet50 AUC: 0.874, 95% CI 0.756-0.949). They had greater AUCs than the radiologists (all p <= 0.001), while having comparable AUCs to each other (p = 0.83 and p = 0.60 in the two test cohorts). None of the potential characteristics had a significant impact on the performance of the pre-trained MobileNetV2 classifier in etiologic diagnosis. Previous fistula surgery influenced the performance of the pre-trained ResNet50 classifier in the internal test cohort (OR 0.157, 95% CI 0.025-0.997, p = 0.05). Interpretation The developed DCNN classifiers exhibited superior robustness in distinguishing PFCD from CGF compared to artificial visual assessment, showing their potential for assisting in early detection of PFCD. Our findings highlight the promising generalized performance of MobileNetV2 over ResNet50, rendering it suitable for deployment on mobile terminals.

Keyword :

Deep convolutional neural network; Deep learning; Pelvic MRI; Perianal fistulizing Crohn's disease

Cite:


GB/T 7714 Zhang, Heng , Li, Wenru , Chen, Tao et al. Development and validation of the MRI-based deep learning classifier for distinguishing perianal fistulizing Crohn's disease from cryptoglandular fistula: a multicenter cohort study [J]. | ECLINICALMEDICINE , 2024 , 78 .
MLA Zhang, Heng et al. "Development and validation of the MRI-based deep learning classifier for distinguishing perianal fistulizing Crohn's disease from cryptoglandular fistula: a multicenter cohort study" . | ECLINICALMEDICINE 78 (2024) .
APA Zhang, Heng , Li, Wenru , Chen, Tao , Deng, Ke , Yang, Bolin , Luo, Jingen et al. Development and validation of the MRI-based deep learning classifier for distinguishing perianal fistulizing Crohn's disease from cryptoglandular fistula: a multicenter cohort study . | ECLINICALMEDICINE , 2024 , 78 .

Version :

Development and validation of the MRI-based deep learning classifier for distinguishing perianal fistulizing Crohn's disease from cryptoglandular fistula: a multicenter cohort study Scopus
期刊论文 | 2024 , 78 | eClinicalMedicine
Accurate tumor segmentation and treatment outcome prediction with DeepTOP SCIE
Journal Article | 2023, 183 | RADIOTHERAPY AND ONCOLOGY
WoS CC Cited Count: 3

Abstract :

Background: Accurate outcome prediction prior to treatment can facilitate trial design and clinical decision making to achieve better treatment outcomes. Method: We developed the DeepTOP tool with a deep learning approach for region-of-interest segmentation and clinical outcome prediction using magnetic resonance imaging (MRI). DeepTOP was constructed with an automatic pipeline from tumor segmentation to outcome prediction. In DeepTOP, the segmentation model used a U-Net with a codec structure, and the prediction model was built with a three-layer convolutional neural network. In addition, a weight distribution algorithm was developed and applied in the prediction model to optimize the performance of DeepTOP. Results: A total of 1889 MRI slices from 99 patients in the phase III multicenter randomized clinical trial (NCT01211210) on neoadjuvant treatment for rectal cancer were used to train and validate DeepTOP. We systematically optimized and validated DeepTOP with multiple devised pipelines in the clinical trial, demonstrating a better performance than other competitive algorithms in accurate tumor segmentation (Dice coefficient: 0.79; IoU: 0.75; slice-specific sensitivity: 0.98) and predicting pathological complete response to chemo/radiotherapy (accuracy: 0.789; specificity: 0.725; sensitivity: 0.812). DeepTOP is a deep learning tool that could avoid manual labeling and feature extraction and realize automatic tumor segmentation and treatment outcome prediction using the original MRI images. Conclusion: DeepTOP is open to provide a tractable framework for the development of other segmentation and prediction tools in clinical settings. DeepTOP-based tumor assessment can provide a reference for clinical decision making and facilitate imaging-marker-driven trial design. (c) 2023 Elsevier B.V. All rights reserved.
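The Dice coefficient and IoU reported for DeepTOP's segmentation are standard overlap metrics between a predicted and a reference binary mask; a minimal NumPy implementation:

```python
import numpy as np

def dice_and_iou(pred, target):
    """Overlap metrics for binary segmentation masks (0/1 arrays).
    Dice = 2|A∩B| / (|A|+|B|);  IoU = |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    iou = inter / union
    return dice, iou

# Toy 2x3 masks: predicted tumor region vs ground-truth delineation
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
dice, iou = dice_and_iou(pred, target)  # 2*2/(3+3) = 0.667, 2/4 = 0.5
```

Dice weights the overlap against the two mask sizes and IoU against their union, which is why Dice is always the larger of the two on imperfect predictions.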

Keyword :

Cancer treatment; Magnetic resonance image; Neural network; Treatment response

Cite:


GB/T 7714 Li, Lanlan , Xu, Bin , Zhuang, Zhuokai et al. Accurate tumor segmentation and treatment outcome prediction with DeepTOP [J]. | RADIOTHERAPY AND ONCOLOGY , 2023 , 183 .
MLA Li, Lanlan et al. "Accurate tumor segmentation and treatment outcome prediction with DeepTOP" . | RADIOTHERAPY AND ONCOLOGY 183 (2023) .
APA Li, Lanlan , Xu, Bin , Zhuang, Zhuokai , Li, Juan , Hu, Yihuang , Yang, Hui et al. Accurate tumor segmentation and treatment outcome prediction with DeepTOP . | RADIOTHERAPY AND ONCOLOGY , 2023 , 183 .


Automatic treatment outcome prediction with DeepInteg based on multimodal radiological images in rectal cancer SCIE
Journal Article | 2023, 9(2) | HELIYON
WoS CC Cited Count: 4

Abstract :

Neoadjuvant systemic treatment before surgery is a prevalent regimen for patients with advanced-stage or high-risk tumors, which has shaped treatment strategies and cancer survival in the past decades. However, some patients present with a poor response to neoadjuvant treatment. Therefore, it is of great significance to develop tools that help distinguish the patients who could achieve pathological complete response before surgery, to avoid inappropriate treatment. Here, this study demonstrated a multi-task deep learning tool called DeepInteg. In the DeepInteg framework, the segmentation module was constructed based on the CE-Net with a context extractor to achieve end-to-end delineation of the region of interest (ROI) from radiological images; then the features of segmented Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) images of each case were fused and input to the classification module, based on a convolutional neural network, for treatment outcome prediction. The dataset with 1700 MRI and CT slices collected from the prospectively randomized clinical trial (NCT01211210) on systemic treatment for rectal cancer was used to develop and systematically optimize DeepInteg. As a result, DeepInteg achieved automatic segmentation of the tumoral ROI with Dice scores of 0.766 and 0.719 and mIoUs of 0.788 and 0.756 in CT and MRI images, respectively. In addition, DeepInteg achieved an AUC of 0.833, accuracy of 0.826 and specificity of 0.856 in the prediction of pathological complete response after treatment, which showed better performance compared with models based on CT or MRI alone. This study provides a robust framework to develop disease-specific tools for automatic delineation of ROI and clinical outcome prediction. The well-trained DeepInteg could be readily applied in the clinic to predict pathological complete response after neoadjuvant therapy in rectal cancer patients.
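The multimodal fusion step — features extracted separately from MRI and CT, then combined for classification — can be illustrated with a late-fusion sketch in PyTorch. The encoders below are toy stand-ins, not the CE-Net-based modules of DeepInteg.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Toy multimodal fusion: per-modality encoders produce feature
    vectors that are concatenated and fed to a small classification
    head (e.g. pathological complete response yes/no)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 8, 3, 2, 1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(8 * 16, feat_dim))
        self.mri_enc = encoder()
        self.ct_enc = encoder()
        self.head = nn.Linear(2 * feat_dim, 2)  # fused features -> 2 classes

    def forward(self, mri, ct):
        fused = torch.cat([self.mri_enc(mri), self.ct_enc(ct)], dim=1)
        return self.head(fused)

model = LateFusionClassifier().eval()
out = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
```

Concatenation after separate encoders lets each modality keep its own feature statistics, which is one reason fused models can beat single-modality baselines, as the abstract reports.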

Keyword :

CT; Deep learning; MRI; Neoadjuvant therapy; Rectal cancer

Cite:


GB/T 7714 Hu, Yihuang , Li, Juan , Zhuang, Zhuokai et al. Automatic treatment outcome prediction with DeepInteg based on multimodal radiological images in rectal cancer [J]. | HELIYON , 2023 , 9 (2) .
MLA Hu, Yihuang et al. "Automatic treatment outcome prediction with DeepInteg based on multimodal radiological images in rectal cancer" . | HELIYON 9 . 2 (2023) .
APA Hu, Yihuang , Li, Juan , Zhuang, Zhuokai , Xu, Bin , Wang, Dabiao , Yu, Huichuan et al. Automatic treatment outcome prediction with DeepInteg based on multimodal radiological images in rectal cancer . | HELIYON , 2023 , 9 (2) .

Application of a CNN-ViT model built on multimodal images to diagnosing bone marrow involvement in diffuse large B-cell lymphoma (基于多模态图像构建CNN-ViT模型在弥漫性大B细胞淋巴瘤骨髓受累诊断中的应用) CSCD PKU
Journal Article | 2023, 31(4), 390-394 | 中国医学影像学杂志

Abstract :

Purpose: To design a deep learning model fusing multimodal images, CNN-ViT, for diagnosing bone marrow involvement in diffuse large B-cell lymphoma (DLBCL). Materials and Methods: 78 pathologically confirmed DLBCL cases treated at Fujian Provincial Hospital from November 2012 to June 2022 were retrospectively collected, including 46 without and 32 with bone marrow involvement; before chemotherapy, all patients underwent whole-body 18F-FDG PET/CT, bone marrow aspiration smears and/or bone marrow biopsy. A total of 9828 PET and CT images of the pelvic region were selected and randomly split 7:1:2 into a training set of 6858, a validation set of 982, and a test set of 1988 images. The CNN-ViT model was designed by combining a conventional convolutional neural network (CNN) with a Vision Transformer (ViT), extracting PET and CT image features separately to predict bone marrow involvement. Model performance was evaluated with the test-set confusion matrix, the evolution of the loss function, accuracy, sensitivity, specificity, and F1 score. Results: The accuracy, specificity, sensitivity, and F1 score of the CNN-ViT model for diagnosing DLBCL bone marrow involvement were 0.988, 0.971, 0.997, and 0.987, respectively. Conclusion: The CNN-ViT model can accurately assess bone marrow involvement in DLBCL.

Keyword :

B cells; Tomography, X-ray computed; Positron emission tomography; Lymphoma; Neural networks; Pelvis; Bone marrow

Cite:


GB/T 7714 李兰兰 , 周颖 , 林禹 et al. 基于多模态图像构建CNN-ViT模型在弥漫性大B细胞淋巴瘤骨髓受累诊断中的应用 [J]. | 中国医学影像学杂志 , 2023 , 31 (4) : 390-394 .
MLA 李兰兰 et al. "基于多模态图像构建CNN-ViT模型在弥漫性大B细胞淋巴瘤骨髓受累诊断中的应用" . | 中国医学影像学杂志 31 . 4 (2023) : 390-394 .
APA 李兰兰 , 周颖 , 林禹 , 尤梦翔 , 林美福 , 陈文新 . 基于多模态图像构建CNN-ViT模型在弥漫性大B细胞淋巴瘤骨髓受累诊断中的应用 . | 中国医学影像学杂志 , 2023 , 31 (4) , 390-394 .

Developing Preliminary MRI-based Classifier for Perianal Fistulizing Crohn's Disease by Using Deep Convolutional Neural Networks SCIE
Journal Article | 2023, 17, 474-475 | JOURNAL OF CROHNS & COLITIS

Cite:


GB/T 7714 Zhang, H. , Li, L. , Deng, K. et al. Developing Preliminary MRI-based Classifier for Perianal Fistulizing Crohn's Disease by Using Deep Convolutional Neural Networks [J]. | JOURNAL OF CROHNS & COLITIS , 2023 , 17 : 474-475 .
MLA Zhang, H. et al. "Developing Preliminary MRI-based Classifier for Perianal Fistulizing Crohn's Disease by Using Deep Convolutional Neural Networks" . | JOURNAL OF CROHNS & COLITIS 17 (2023) : 474-475 .
APA Zhang, H. , Li, L. , Deng, K. , Li, W. , Ren, D. . Developing Preliminary MRI-based Classifier for Perianal Fistulizing Crohn's Disease by Using Deep Convolutional Neural Networks . | JOURNAL OF CROHNS & COLITIS , 2023 , 17 , 474-475 .


A preliminary MRI-based diagnostic model for perianal fistulizing Crohn's disease using deep convolutional neural networks (基于深度卷积神经网络的克罗恩病肛瘘磁共振成像诊断模型初探)
Journal Article | 2023, 7(2), 144-150 | 中华炎性肠病杂志

Abstract :

Objective: To preliminarily explore the performance of magnetic resonance imaging (MRI)-based diagnostic models for perianal fistulizing Crohn's disease (CD) built with deep convolutional neural networks (DCNN). Methods: In this retrospective study, 200 patients with newly diagnosed CD anal fistula and 200 patients with newly diagnosed cryptoglandular anal fistula treated at the Sixth Affiliated Hospital of Sun Yat-sen University from January 2014 to December 2019 were randomly enrolled, with each group allocated 8:1:1 to training, validation, and test sets. Anal-canal MRI images of all patients were collected and preprocessed to enhance image quality. Using the PyTorch deep learning framework on Windows 10, MRI models for differentiating CD anal fistula from cryptoglandular anal fistula were built on four DCNNs (MobileNetV2, VGG11, ResNet18 and ResNet34), each in a transfer-learning (T) and a non-transfer-learning (U) variant. First, the training set (160 patients with CD anal fistula and 160 with cryptoglandular fistula; 78,321 MRI images in total) was used for iterative training until the loss was minimized. The best-trained model was then selected on the validation set (20 patients per group; 9,697 MRI images), and diagnostic performance was finally evaluated on the test set (20 patients per group; 9,260 MRI images). Receiver operating characteristic (ROC) curves were plotted for each model and the area under the curve (AUC) was calculated. DeLong tests compared AUCs between models, and between models and radiologists of different seniority. Results: With the transfer learning strategy, the four models achieved MobileNetV2-T AUC = 0.943 (95% CI: 0.820-0.991), VGG11-T AUC = 0.935 (95% CI: 0.810-0.988), ResNet18-T AUC = 0.920 (95% CI: 0.789-0.988) and ResNet34-T AUC = 0.929 (95% CI: 0.801-0.986). All four transfer-learning models had higher AUCs than junior radiologists (all P < 0.05) and showed no statistically significant difference from senior radiologists (all P > 0.05). Conclusion: Building an etiologic diagnostic model for CD anal fistula using DCNN-based deep learning combined with a transfer learning strategy and high-resolution anal-canal MRI is feasible.
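The AUCs compared above (via DeLong tests) can be computed directly from the rank statistic: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch with illustrative scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    case scores higher; ties count one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for diseased vs non-diseased cases
a = auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])  # 8 of 9 pairs ranked correctly
```

This pairwise form is also the quantity whose variance the DeLong test estimates when comparing two classifiers on the same cases.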

Keyword :

Artificial intelligence; Crohn's disease; Deep convolutional neural network; Deep learning; Magnetic resonance imaging; Anal fistula

Cite:


GB/T 7714 李兰兰 , 邓珂 , 张恒 et al. 基于深度卷积神经网络的克罗恩病肛瘘磁共振成像诊断模型初探 [J]. | 中华炎性肠病杂志 , 2023 , 07 (2) : 144-150 .
MLA 李兰兰 et al. "基于深度卷积神经网络的克罗恩病肛瘘磁共振成像诊断模型初探" . | 中华炎性肠病杂志 07 . 2 (2023) : 144-150 .
APA 李兰兰 , 邓珂 , 张恒 , 任东林 , 李文儒 . 基于深度卷积神经网络的克罗恩病肛瘘磁共振成像诊断模型初探 . | 中华炎性肠病杂志 , 2023 , 07 (2) , 144-150 .


Address: FZU Library (No. 2 Xuyuan Road, Fuzhou, Fujian, PRC; Post Code: 350116) | Contact: 0591-22865326
Copyright: FZU Library | Technical support: Beijing Aegean Software Co., Ltd. | 闽ICP备05005463号-1