Publication Search

Query:

Scholar name: Liu Wenxi (刘文犀)

Multi-UNet: An effective Multi-U convolutional networks for semantic segmentation EI
Journal Article | 2025, 309 | Knowledge-Based Systems

Abstract :

U-Net is a classic architecture for semantic segmentation. However, it has several limitations, such as difficulty in capturing complex image details due to its simple U structure, long convergence time arising from fixed network parameters, and suboptimal efficacy in decoding and restoring multi-scale information. To address these issues, we propose a Multiple U-shaped network (Multi-UNet), built on the assumption that constructing an appropriate U-shaped structure can achieve better segmentation performance. First, inspired by the concept of connecting multiple similar blocks, our Multi-UNet consists of multiple U-block modules, with each succeeding module directly connected to the previous one to facilitate data transmission between different U structures. We refer to the original bridge connections of U-Net as Intra-U connections and introduce a new type of connection called Inter-U connections. These Inter-U connections aim to retain as much detailed information as possible, enabling effective detection in complex images. Second, while maintaining Mean Intersection over Union (Mean-IoU), the up-sampling of each U applies uniformly small channel values to reduce the number of model parameters. Third, a Spatial-Channel Parallel Attention Fusion (SCPAF) module is placed at the initial layer of every subsampling module of the U-block architecture; it enhances feature extraction and alleviates the computational overhead associated with data transmission. Finally, we replace the final up-sampling module with an Atrous Spatial Pyramid Pooling Head (ASPPHead) to accomplish seamless multi-scale feature extraction. In experiments, we compare against advanced models on three public datasets and conclude that Multi-UNet is superior in both universality and accuracy.
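The abstract reports that Multi-UNet reduces parameters while maintaining Mean Intersection over Union (Mean-IoU). For readers unfamiliar with the metric, a minimal sketch of the standard per-class IoU average (this is the textbook definition, not code from the paper; inputs are toy examples):

```python
def mean_iou(pred, target, num_classes):
    """Compute mIoU from two flat lists of per-pixel class labels."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:                      # skip classes absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)

pred   = [0, 0, 1, 1, 2, 2]
target = [0, 1, 1, 1, 2, 0]
# class 0: 1/3, class 1: 2/3, class 2: 1/2 -> mean 0.5
print(mean_iou(pred, target, 3))  # 0.5
```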

Keyword :

Image reconstruction; Semantic segmentation; Semantic Web

Cite:


GB/T 7714 Zhao, Qiangwei, Cao, Jingjing, Ge, Junjie, et al. Multi-UNet: An effective Multi-U convolutional networks for semantic segmentation [J]. Knowledge-Based Systems, 2025, 309.
MLA Zhao, Qiangwei, et al. "Multi-UNet: An effective Multi-U convolutional networks for semantic segmentation." Knowledge-Based Systems 309 (2025).
APA Zhao, Qiangwei, Cao, Jingjing, Ge, Junjie, Zhu, Qi, Chen, Xiaoming, & Liu, Wenxi. Multi-UNet: An effective Multi-U convolutional networks for semantic segmentation. Knowledge-Based Systems, 2025, 309.

Multi-UNet: An effective Multi-U convolutional networks for semantic segmentation (additional indexed versions: SCIE, EI, Scopus)
Journal Article | 2025, 309 | Knowledge-Based Systems. Abstract and citation details as above; the SCIE record lists the keywords "Multiple U-shaped network; Semantic segmentation; U-net".
Towards next-generation diagnostic pathology: AI-empowered label-free multiphoton microscopy Scopus
Journal Article | 2024, 13 (1) | Light: Science and Applications
SCOPUS Cited Count: 1

Abstract :

Diagnostic pathology, historically dependent on visual scrutiny by experts, is essential for disease detection. Advances in digital pathology and developments in computer vision technology have led to the application of artificial intelligence (AI) in this field. Despite these advancements, the variability in pathologists’ subjective interpretations of diagnostic criteria can lead to inconsistent outcomes. To meet the need for precision in cancer therapies, there is an increasing demand for accurate pathological diagnoses. Consequently, traditional diagnostic pathology is evolving towards “next-generation diagnostic pathology”, which prioritizes the development of a multi-dimensional, intelligent diagnostic approach. Using nonlinear optical effects arising from the interaction of light with biological tissues, multiphoton microscopy (MPM) enables high-resolution label-free imaging of multiple intrinsic components across various human pathological tissues. AI-empowered MPM further improves the accuracy and efficiency of diagnosis, holding promise for providing auxiliary pathology diagnostic methods based on multiphoton diagnostic criteria. In this review, we systematically outline the applications of MPM in pathological diagnosis across various human diseases, and summarize common multiphoton diagnostic features. Moreover, we examine the significant role of AI in enhancing multiphoton pathological diagnosis, including aspects such as image preprocessing, refined differential diagnosis, and the prognostication of outcomes. We also discuss the challenges and perspectives faced by the integration of MPM and AI, encompassing equipment, datasets, analytical models, and integration into existing clinical pathways. Finally, the review explores the synergy between AI and label-free MPM to forge novel diagnostic frameworks, aiming to accelerate the adoption and implementation of intelligent multiphoton pathology systems in clinical settings.

Cite:


GB/T 7714 Wang, S., Pan, J., Zhang, X., et al. Towards next-generation diagnostic pathology: AI-empowered label-free multiphoton microscopy [J]. Light: Science and Applications, 2024, 13 (1).
MLA Wang, S., et al. "Towards next-generation diagnostic pathology: AI-empowered label-free multiphoton microscopy." Light: Science and Applications 13.1 (2024).
APA Wang, S., Pan, J., Zhang, X., Li, Y., Liu, W., Lin, R., et al. Towards next-generation diagnostic pathology: AI-empowered label-free multiphoton microscopy. Light: Science and Applications, 2024, 13 (1).

Learning Nighttime Semantic Segmentation the Hard Way EI
Journal Article | 2024, 20 (7) | ACM Transactions on Multimedia Computing, Communications and Applications

Abstract :

Nighttime semantic segmentation is an important but challenging research problem for autonomous driving. The major challenges lie in small objects or regions that fall within under-/over-exposed areas or suffer from motion blur caused by cameras deployed on moving vehicles. To resolve this, we propose a novel hard-class-aware module that bridges the main network for full-class segmentation and the hard-class network for segmenting the aforementioned hard-class objects. Specifically, it exploits the shared focus of hard-class objects from the dual-stream network, enabling the contextual information flow to guide the model to concentrate on the pixels that are hard to classify. In the end, the estimated hard-class segmentation results are utilized to infer the final results via an adaptive probabilistic fusion refinement scheme. Moreover, to overcome over-smoothing and noise caused by extreme exposures, our model is modulated by a carefully crafted pretext task of constructing an exposure-aware semantic gradient map, which guides the model to faithfully perceive the structural and semantic information of hard-class objects while mitigating the negative impact of noise and uneven exposure. In experiments, we demonstrate that our unique network design leads to superior segmentation performance over existing methods, featuring a strong ability to perceive hard-class objects under adverse conditions.
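The adaptive probabilistic fusion in the paper is learned end-to-end; as an illustration of the general idea of merging a full-class prediction with a hard-class prediction, here is a toy confidence-weighted version (the class names, the weight rule, and all numbers below are illustrative assumptions, not the authors' scheme):

```python
def fuse(main_probs, hard_probs, hard_classes):
    """main_probs: per-class probabilities from the full-class branch.
    hard_probs: probabilities over the hard classes only.
    Toy rule: blend by the hard branch's own peak confidence."""
    w = max(hard_probs.values())          # trust the hard branch when confident
    fused = dict(main_probs)
    for c in hard_classes:
        fused[c] = (1 - w) * main_probs[c] + w * hard_probs[c]
    total = sum(fused.values())           # renormalize to a distribution
    return {c: p / total for c, p in fused.items()}

main = {"road": 0.5, "pole": 0.2, "rider": 0.3}
hard = {"pole": 0.8, "rider": 0.2}
out = fuse(main, hard, ["pole", "rider"])
print(max(out, key=out.get))  # pole
```

The point of the sketch: a confident hard-class branch can flip a pixel's label away from the main branch's guess, which is the behavior the fusion refinement scheme aims for.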

Keyword :

Classification (of information); Semantics; Semantic segmentation

Cite:


GB/T 7714 Liu, Wenxi, Cai, Jiaxin, Li, Qi, et al. Learning Nighttime Semantic Segmentation the Hard Way [J]. ACM Transactions on Multimedia Computing, Communications and Applications, 2024, 20 (7).
MLA Liu, Wenxi, et al. "Learning Nighttime Semantic Segmentation the Hard Way." ACM Transactions on Multimedia Computing, Communications and Applications 20.7 (2024).
APA Liu, Wenxi, Cai, Jiaxin, Li, Qi, Liao, Chenyang, Cao, Jingjing, He, Shengfeng, et al. Learning Nighttime Semantic Segmentation the Hard Way. ACM Transactions on Multimedia Computing, Communications and Applications, 2024, 20 (7).

Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement Scopus
Journal Article | 2024, 132 (11), 5030-5047 | International Journal of Computer Vision
SCOPUS Cited Count: 1

Abstract :

Ultra-high resolution image segmentation has attracted increasing interest in recent years due to its real-world applications. In this paper, we innovate on the widely used high-resolution image segmentation pipeline, in which an ultra-high resolution image is partitioned into regular patches for local segmentation and the local results are then merged into a high-resolution semantic mask. In particular, we introduce a novel locality-aware context fusion based segmentation model to process local patches, where the relevance between a local patch and its various contexts is jointly and complementarily utilized to handle semantic regions with large variations. Additionally, we present an alternating local enhancement module that restricts the negative impact of redundant information introduced from the contexts, and is thus able to correct the locality-aware features to produce refined results. Furthermore, in comprehensive experiments, we demonstrate that our model outperforms other state-of-the-art methods on public benchmarks and verify the effectiveness of the proposed modules. Our released code is available at: https://github.com/liqiokkk/FCtL.
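The pipeline the paper builds on partitions an ultra-high-resolution image into regular tiles, segments each tile locally, and merges the local masks. A minimal sketch of just the tiling arithmetic (sizes are arbitrary examples; the paper's context-fusion model itself is not represented here):

```python
def tile(h, w, patch):
    """Yield (top, left) origins covering an h x w image with patch x patch
    tiles; the last row/column is shifted inward so tiles stay in bounds."""
    tops = list(range(0, h - patch, patch)) + [h - patch]
    lefts = list(range(0, w - patch, patch)) + [w - patch]
    return [(t, l) for t in tops for l in lefts]

coords = tile(1024, 1536, 512)
print(len(coords))      # 6  (2 rows x 3 cols)
print(coords[-1])       # (512, 1024)
```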

Keyword :

Attention mechanism; Context-guided vision model; Geo-spatial image segmentation; Ultra-high resolution image segmentation

Cite:


GB/T 7714 Liu, W., Li, Q., Lin, X., et al. Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement [J]. International Journal of Computer Vision, 2024, 132 (11): 5030-5047.
MLA Liu, W., et al. "Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement." International Journal of Computer Vision 132.11 (2024): 5030-5047.
APA Liu, W., Li, Q., Lin, X., Yang, W., He, S., & Yu, Y. Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement. International Journal of Computer Vision, 2024, 132 (11), 5030-5047.

Monocular BEV Perception of Road Scenes Via Front-to-Top View Projection Scopus
Journal Article | 2024, 46 (9), 1-17 | IEEE Transactions on Pattern Analysis and Machine Intelligence

Abstract :

HD map reconstruction is crucial for autonomous driving. LiDAR-based methods are limited by expensive sensors and time-consuming computation. Camera-based methods usually need to perform road segmentation and view transformation separately, which often causes distortion and missing content. To push the limits of the technology, we present a novel framework that reconstructs a local map, formed by the road layout and vehicle occupancy in the bird's-eye view, given only a front-view monocular image. We propose a front-to-top view projection (FTVP) module, which takes the constraint of cycle consistency between views into account and makes full use of their correlation to strengthen the view transformation and scene understanding. In addition, we apply multi-scale FTVP modules to propagate the rich spatial information of low-level features, mitigating spatial deviation of the predicted object locations. Experiments on public benchmarks show that our method performs road layout estimation, vehicle occupancy estimation, and multi-class semantic estimation at a level comparable to the state of the art, while maintaining superior efficiency.
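The FTVP module learns the front-to-top transformation end-to-end; the classical geometric baseline it improves upon is inverse perspective mapping under a flat-ground assumption. A toy pinhole-camera version of that mapping (all camera parameters below are made-up numbers, not from the paper):

```python
def pixel_to_ground(u, v, f, cx, cy, cam_height):
    """Map an image pixel below the horizon to (x, z) on the ground plane,
    for a camera at cam_height looking along +z with focal length f."""
    if v <= cy:
        raise ValueError("pixel is at or above the horizon")
    z = f * cam_height / (v - cy)   # similar triangles: ray meets the ground
    x = (u - cx) * z / f            # lateral offset scales with depth
    return x, z

x, z = pixel_to_ground(u=740, v=600, f=1000.0, cx=640, cy=400, cam_height=1.5)
print(round(z, 2), round(x, 3))  # 7.5 0.75
```

Distortion and missing content in such flat-ground warps for non-ground objects are exactly what motivates learning the projection instead.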

Keyword :

Autonomous driving; BEV perception; Estimation; Feature extraction; Layout; Roads; Segmentation; Task analysis; Three-dimensional displays; Transformers

Cite:


GB/T 7714 Liu, W., Li, Q., Yang, W., et al. Monocular BEV Perception of Road Scenes Via Front-to-Top View Projection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46 (9): 1-17.
MLA Liu, W., et al. "Monocular BEV Perception of Road Scenes Via Front-to-Top View Projection." IEEE Transactions on Pattern Analysis and Machine Intelligence 46.9 (2024): 1-17.
APA Liu, W., Li, Q., Yang, W., Cai, J., Yu, Y., Ma, Y., et al. Monocular BEV Perception of Road Scenes Via Front-to-Top View Projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46 (9), 1-17.

Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement (additional indexed versions: SCIE, Scopus, EI)
Journal Article | 2024, 132 (11), 5030-5047 | International Journal of Computer Vision. Abstract, keywords, and citation details as in the Scopus record above; the SCIE record lists full author names (Liu, Wenxi; Li, Qi; Lin, Xindai; Yang, Weixiang; He, Shengfeng; Yu, Yuanlong).
Attentive and Contrastive Image Manipulation Localization With Boundary Guidance SCIE
Journal Article | 2024, 19, 6764-6778 | IEEE Transactions on Information Forensics and Security
WoS CC Cited Count: 1

Abstract :

In recent years, the rapid advancement of image generation techniques has resulted in the widespread abuse of manipulated images, leading to a crisis of trust and affecting social equity. Thus, the goal of our work is to detect and localize tampered regions in images. Many deep learning based approaches have been proposed to address this problem, but they can hardly handle tampered regions that have been manually fine-tuned to blend into the image background. Observing that the boundaries of tampered regions are critical to separating tampered and non-tampered parts, we present a novel boundary-guided approach to image manipulation detection, which introduces an inherent bias towards exploiting the boundary information of tampered regions. Our model follows an encoder-decoder architecture with multi-scale localization mask prediction, and is guided to utilize prior boundary knowledge through an attention mechanism and contrastive learning. In particular, our model is unique in that 1) we propose a boundary-aware attention module in the network decoder, which predicts the boundary of tampered regions and uses it as crucial contextual cues to facilitate localization; and 2) we propose a multi-scale contrastive learning scheme with a novel boundary-guided sampling strategy, leading to more discriminative localization features. Our state-of-the-art performance on several public benchmarks demonstrates the superiority of our model over prior works.
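The multi-scale contrastive scheme builds on the standard InfoNCE-style objective; a minimal sketch of that general form is below. The boundary-guided sampling of anchors near predicted tamper boundaries is the paper's contribution and is not reproduced here; samples are given explicitly as toy 2-D vectors.

```python
import math

def info_nce(anchor, positive, negatives, tau=0.1):
    """-log( exp(sim(a,p)/tau) / (exp(sim(a,p)/tau) + sum_n exp(sim(a,n)/tau)) )."""
    def sim(u, v):  # cosine similarity
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.hypot(*u) * math.hypot(*v))
    pos = math.exp(sim(anchor, positive) / tau)
    neg = sum(math.exp(sim(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

a, p = (1.0, 0.0), (0.9, 0.1)
loss_far  = info_nce(a, p, negatives=[(-1.0, 0.0)])  # negative far from anchor
loss_near = info_nce(a, p, negatives=[(1.0, 0.1)])   # negative near anchor
print(loss_far < loss_near)  # True: hard negatives cost more
```

Sampling negatives from just across a tamper boundary yields such hard negatives, which is why boundary-guided sampling sharpens the learned features.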

Keyword :

Contrastive learning; Decoding; Deepfakes; Feature extraction; Image manipulation detection/localization; Location awareness; Task analysis; Visualization

Cite:


GB/T 7714 Liu, Wenxi, Zhang, Hao, Lin, Xinyang, et al. Attentive and Contrastive Image Manipulation Localization With Boundary Guidance [J]. IEEE Transactions on Information Forensics and Security, 2024, 19: 6764-6778.
MLA Liu, Wenxi, et al. "Attentive and Contrastive Image Manipulation Localization With Boundary Guidance." IEEE Transactions on Information Forensics and Security 19 (2024): 6764-6778.
APA Liu, Wenxi, Zhang, Hao, Lin, Xinyang, Zhang, Qing, Li, Qi, Liu, Xiaoxiang, et al. Attentive and Contrastive Image Manipulation Localization With Boundary Guidance. IEEE Transactions on Information Forensics and Security, 2024, 19, 6764-6778.
Additional versions: EI (Journal Article | 2024, 19, 6764-6778) and Scopus (Journal Article | 2024, 19, 1-1) | IEEE Transactions on Information Forensics and Security.
Tooth Motion Monitoring in Orthodontic Treatment by Mobile Device-based Multi-view Stereo Scopus
Journal Article | 2024 | IEEE Transactions on Visualization and Computer Graphics

Abstract :

Nowadays, orthodontics has become an important part of modern personal life, helping people improve mastication and raise self-esteem. However, the quality of orthodontic treatment still heavily relies on the empirical evaluation of experienced doctors, which lacks quantitative assessment and requires patients to visit clinics frequently for in-person examination. To resolve this problem, we propose a novel and practical mobile device-based framework for precisely measuring tooth movement during treatment, so as to simplify and strengthen the traditional tooth monitoring process. To this end, we formulate the tooth movement monitoring task as a multi-view multi-object pose estimation problem across different views that capture multiple texture-less and severely occluded objects (i.e., teeth). Specifically, we exploit a pre-scanned 3D tooth model and a sparse set of multi-view tooth images as inputs for our proposed tooth monitoring framework. After extracting tooth contours and localizing the initial camera pose of each view from the initial configuration, we propose a joint pose estimation scheme to precisely estimate the 3D pose of each individual tooth, so as to infer their relative offsets during treatment. Furthermore, we introduce the metric of Relative Pose Bias to evaluate individual tooth pose accuracy at a small scale. We demonstrate that our approach reaches the high accuracy and efficiency that practical orthodontic treatment monitoring requires.
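A toy illustration of the bookkeeping behind per-tooth offset estimation: given a tooth's estimated pose at two visits, the relative offset is the translation distance plus the rotation change. The simplified pose representation (a translation plus a single rotation angle) is an assumption for illustration only; the paper estimates full 3D poses and defines its own Relative Pose Bias metric, which is not reproduced here.

```python
import math

def tooth_offset(pose_before, pose_after):
    """Each pose is ((x, y, z) in mm, rotation angle in degrees).
    Returns (translation distance in mm, rotation change in degrees)."""
    (t0, r0), (t1, r1) = pose_before, pose_after
    dist = math.dist(t0, t1)        # Euclidean distance between positions
    return dist, abs(r1 - r0)

before = ((10.0, 2.0, 0.0), 5.0)
after  = ((10.3, 2.4, 0.0), 7.0)
d, dr = tooth_offset(before, after)
print(round(d, 1), dr)  # 0.5 2.0
```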

Keyword :

Motion monitoring; Multi-view contour fitting; Orthodontics; Pose estimation

Cite:


GB/T 7714 Xie, J., Zhang, C., Wei, G., et al. Tooth Motion Monitoring in Orthodontic Treatment by Mobile Device-based Multi-view Stereo [J]. IEEE Transactions on Visualization and Computer Graphics, 2024.
MLA Xie, J., et al. "Tooth Motion Monitoring in Orthodontic Treatment by Mobile Device-based Multi-view Stereo." IEEE Transactions on Visualization and Computer Graphics (2024).
APA Xie, J., Zhang, C., Wei, G., Wang, P., Liu, W., Gu, M., et al. Tooth Motion Monitoring in Orthodontic Treatment by Mobile Device-based Multi-view Stereo. IEEE Transactions on Visualization and Computer Graphics, 2024.

Towards next-generation diagnostic pathology: AI-empowered label-free multiphoton microscopy (additional version)
Journal Article | 2024, 13 (1) | Light: Science & Applications. Same abstract and citation details as the Scopus record above; this version lists full author names (Shu Wang, Junlin Pan, Xiao Zhang, Yueying Li, Wenxi Liu, Ruolan Lin, et al.).

Address: FZU Library (No. 2 Xuyuan Road, Fuzhou, Fujian, PRC; Post Code: 350116). Contact: 0591-22865326.
Copyright: FZU Library. Technical support: Beijing Aegean Software Co., Ltd. 闽ICP备05005463号-1