Publication Search

Query:

Scholar Name: 林丽群

LightViD: Efficient Video Deblurring with Spatial-Temporal Feature Fusion Scopus
Journal Article | 2024, 34 (8), 1-1 | IEEE Transactions on Circuits and Systems for Video Technology
SCOPUS Cited Count: 1

Abstract :

Natural video capture suffers from visual blurriness due to high motion of cameras or objects. The video blurriness removal task has been extensively explored for both human vision and machine processing; however, its computational cost remains a critical issue that has not yet been fully addressed. In this paper, we propose a novel Lightweight Video Deblurring (LightViD) method that achieves top-tier performance with an extremely low parameter count. The proposed LightViD consists of a blur detector and a deblurring network. In particular, the blur detector effectively separates blurry regions, thus avoiding both unnecessary computation and over-enhancement in non-blurry regions. The deblurring network is designed as a lightweight model: it employs a Spatial Feature Fusion Block (SFFB) to extract hierarchical spatial features, which are further fused by a ConvLSTM for effective spatial-temporal feature representation. Comprehensive experiments with quantitative and qualitative comparisons demonstrate the effectiveness of our LightViD method, which achieves competitive performance on the GoPro and DVD datasets at a reduced computational cost of 1.63M parameters and 96.8 GMACs. Trained model available: https://github.com/wgp/LightVid.
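
The architecture this abstract describes (a blur detector gating a lightweight deblurring branch, with SFFB features fused over time by a ConvLSTM) can be outlined in a few lines of PyTorch. The layer choices, channel widths, and detector design below are illustrative assumptions, not the authors' released code; see their repository for the trained model:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell for spatial-temporal feature fusion."""
    def __init__(self, ch):
        super().__init__()
        # One convolution emits all four gates (input, forget, cell, output).
        self.gates = nn.Conv2d(2 * ch, 4 * ch, kernel_size=3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class LightViDSketch(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.ch = ch
        # Blur detector: predicts a per-pixel blurriness mask in [0, 1].
        self.detector = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())
        # Stand-in for the Spatial Feature Fusion Block (SFFB).
        self.sffb = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.temporal = ConvLSTMCell(ch)
        self.restore = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, frames):  # frames: (B, T, 3, H, W)
        b, t, _, hgt, wid = frames.shape
        h = frames.new_zeros(b, self.ch, hgt, wid)
        c = frames.new_zeros(b, self.ch, hgt, wid)
        outputs = []
        for k in range(t):
            x = frames[:, k]
            mask = self.detector(x)             # where is the frame blurry?
            feat = self.sffb(x)                 # hierarchical spatial features
            h, c = self.temporal(feat, (h, c))  # fuse across time
            residual = self.restore(h)
            # Enhance only blurry regions; sharp regions pass through.
            outputs.append(x + mask * residual)
        return torch.stack(outputs, dim=1)
```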

Keyword :

Blur Detection; Computational efficiency; Computational modeling; Detectors; Feature extraction; Image restoration; Kernel; Spatial-Temporal Feature Fusion; Task analysis; Video Deblurring

Cite:

GB/T 7714 Lin, L., Wei, G., Liu, K. et al. LightViD: Efficient Video Deblurring with Spatial-Temporal Feature Fusion [J]. | IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34 (8): 1-1.
MLA Lin, L. et al. "LightViD: Efficient Video Deblurring with Spatial-Temporal Feature Fusion." | IEEE Transactions on Circuits and Systems for Video Technology 34.8 (2024): 1-1.
APA Lin, L., Wei, G., Liu, K., Feng, W., Zhao, T. LightViD: Efficient Video Deblurring with Spatial-Temporal Feature Fusion. | IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34 (8), 1-1.

Toward Efficient Video Compression Artifact Detection and Removal: A Benchmark Dataset Scopus
Journal Article | 2024, 26, 1-12 | IEEE Transactions on Multimedia
SCOPUS Cited Count: 1

Abstract :

Video compression introduces compression artifacts, among which Perceivable Encoding Artifacts (PEAs) degrade user perception. Most existing state-of-the-art Video Compression Artifact Removal (VCAR) methods indiscriminately process all artifacts, leading to over-enhancement in non-PEA regions. Therefore, accurate detection and localization of PEAs are crucial. In this paper, we propose the largest-ever Fine-grained PEA database (FPEA). First, we employ the popular video codecs VVC and AVS3, together with their common test settings, to generate four types of spatial PEAs (blurring, blocking, ringing and color bleeding) and two types of temporal PEAs (flickering and floating). Second, we design a labeling platform and recruit sufficient subjects to manually locate all of the above PEA types. Third, we propose a voting mechanism combined with feature matching to synthesize all subjective labels into final PEA labels with fine-grained locations. Moreover, we provide Mean Opinion Score (MOS) values for all compressed video sequences. Experimental results show the effectiveness of the FPEA database for both VCAR and compressed Video Quality Assessment (VQA). We envision that the FPEA database will benefit the future development of VCAR, VQA and perception-aware video encoders. The FPEA database has been made publicly available.
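
The label-fusion step described here (synthesizing many subjects' annotations by voting) can be illustrated with a minimal sketch. The per-pixel majority vote below is an assumption of how such a mechanism might look; the paper's actual pipeline also incorporates feature matching, which is omitted:

```python
import numpy as np

def fuse_labels(annotations: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """annotations: (num_subjects, H, W) binary masks marking one PEA type.
    Returns a binary (H, W) mask keeping pixels most subjects agreed on."""
    agreement = annotations.mean(axis=0)           # fraction of votes per pixel
    return (agreement >= threshold).astype(np.uint8)

# Example: 5 subjects label a 4x4 region; a pixel is kept if >= 3 mark it.
votes = np.random.default_rng(0).integers(0, 2, size=(5, 4, 4))
fused = fuse_labels(votes, threshold=0.6)
```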

Keyword :

Perceivable encoding artifact; video compression; video compression artifact removal; video quality assessment

Cite:

GB/T 7714 Lin, L., Wang, M., Yang, J. et al. Toward Efficient Video Compression Artifact Detection and Removal: A Benchmark Dataset [J]. | IEEE Transactions on Multimedia, 2024, 26: 1-12.
MLA Lin, L. et al. "Toward Efficient Video Compression Artifact Detection and Removal: A Benchmark Dataset." | IEEE Transactions on Multimedia 26 (2024): 1-12.
APA Lin, L., Wang, M., Yang, J., Zhang, K., Zhao, T. Toward Efficient Video Compression Artifact Detection and Removal: A Benchmark Dataset. | IEEE Transactions on Multimedia, 2024, 26, 1-12.

"5G+人工智能"时代的教学新挑战
期刊论文 | 2024 , (40) , 42-46 | 教育教学论坛

Abstract :

在"中国制造2025"的国家需求及福建省海西地方经济和产业升级需求的背景下,传统的信号与信息处理专业的培养方式对未来所需的人才品质存在不适应性.通过分析信号与信息处理专业教学体系现状,以福州大学为例,研究人工智能时代的信号专业教育教学改革机制,分别从学位点建设、课程建设、培养方案、培养目标、课程体系等方面探讨了教学改革机制,从而为高等院校培养信号与信息处理方向的综合型创新人才提供参考.

Keyword :

5G; artificial intelligence; Signal and Information Processing program; teaching reform; curriculum-based ideological and political education

Cite:

GB/T 7714 陈炜玲, 林丽群, 赵铁松. "5G+人工智能"时代的教学新挑战 [J]. | 教育教学论坛, 2024, (40): 42-46.
MLA 陈炜玲 et al. ""5G+人工智能"时代的教学新挑战." | 教育教学论坛 40 (2024): 42-46.
APA 陈炜玲, 林丽群, 赵铁松. "5G+人工智能"时代的教学新挑战. | 教育教学论坛, 2024, (40), 42-46.

Toward Efficient Video Compression Artifact Detection and Removal: A Benchmark Dataset SCIE
Journal Article | 2024, 26, 10816-10827 | IEEE TRANSACTIONS ON MULTIMEDIA
WoS CC Cited Count: 1

Abstract :

Video compression introduces compression artifacts, among which Perceivable Encoding Artifacts (PEAs) degrade user perception. Most existing state-of-the-art Video Compression Artifact Removal (VCAR) methods indiscriminately process all artifacts, leading to over-enhancement in non-PEA regions. Therefore, accurate detection and localization of PEAs are crucial. In this paper, we propose the largest-ever Fine-grained PEA database (FPEA). First, we employ the popular video codecs VVC and AVS3, together with their common test settings, to generate four types of spatial PEAs (blurring, blocking, ringing and color bleeding) and two types of temporal PEAs (flickering and floating). Second, we design a labeling platform and recruit sufficient subjects to manually locate all of the above PEA types. Third, we propose a voting mechanism combined with feature matching to synthesize all subjective labels into final PEA labels with fine-grained locations. Moreover, we provide Mean Opinion Score (MOS) values for all compressed video sequences. Experimental results show the effectiveness of the FPEA database for both VCAR and compressed Video Quality Assessment (VQA). We envision that the FPEA database will benefit the future development of VCAR, VQA and perception-aware video encoders. The FPEA database has been made publicly available.

Keyword :

Databases; Distortion; Encoding; Image coding; Perceivable encoding artifact; Quality assessment; video compression; video compression artifact removal; video quality assessment; Video recording

Cite:

GB/T 7714 Lin, Liqun, Wang, Mingxing, Yang, Jing et al. Toward Efficient Video Compression Artifact Detection and Removal: A Benchmark Dataset [J]. | IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 10816-10827.
MLA Lin, Liqun et al. "Toward Efficient Video Compression Artifact Detection and Removal: A Benchmark Dataset." | IEEE TRANSACTIONS ON MULTIMEDIA 26 (2024): 10816-10827.
APA Lin, Liqun, Wang, Mingxing, Yang, Jing, Zhang, Keke, Zhao, Tiesong. Toward Efficient Video Compression Artifact Detection and Removal: A Benchmark Dataset. | IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26, 10816-10827.

Versions:

Toward Efficient Video Compression Artifact Detection and Removal: A Benchmark Dataset Scopus
Journal Article | 2024, 26, 1-12 | IEEE Transactions on Multimedia
Toward Efficient Video Compression Artifact Detection and Removal: A Benchmark Dataset EI
Journal Article | 2024, 26, 10816-10827 | IEEE Transactions on Multimedia
Multi-feature fusion for efficient inter prediction in versatile video coding SCIE
Journal Article | 2024, 21 (6) | JOURNAL OF REAL-TIME IMAGE PROCESSING
WoS CC Cited Count: 1

Abstract :

Versatile Video Coding (VVC) introduces various advanced coding techniques and tools, such as the QuadTree with nested Multi-type Tree (QTMT) partition structure, and outperforms High Efficiency Video Coding (HEVC) in coding performance. However, this improvement comes at the cost of increased coding complexity. In this paper, we propose a multi-feature fusion framework that integrates rate-distortion-complexity optimization theory with deep learning techniques to reduce the complexity of QTMT partitioning for VVC inter prediction. First, the framework extracts luminance, motion, residual, and quantization features from video frames, then fuses them through a convolutional neural network to predict the minimum partition size of Coding Units (CUs). Next, a novel rate-distortion-complexity loss function is designed to balance computational complexity against compression performance. Through this loss function, we can adjust the distribution of rate-distortion-complexity costs; this adjustment shapes the prediction bias of the network and imposes constraints on different block partition sizes to facilitate complexity adjustment. Compared to the anchor VTM-13.0, the proposed method reduces encoding time by 10.14% to 56.62%, with the BDBR increase confined to a range of 0.31% to 6.70%. The proposed method achieves a broader range of complexity adjustment while preserving coding performance, surpassing both traditional and deep learning-based methods.
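
The rate-distortion-complexity loss described in this abstract can be sketched as a classification term for the minimum CU partition size plus an expected-complexity penalty. The weighting scheme and per-class complexity costs below are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def rdc_loss(logits, target, complexity_cost, lam=0.1):
    """logits: (B, K) scores over K candidate minimum partition sizes;
    target: (B,) ground-truth class indices; complexity_cost: (K,) relative
    cost of the encoder search implied by each class; lam trades prediction
    accuracy (rate-distortion) against expected complexity."""
    rd_term = F.cross_entropy(logits, target)
    # Expected complexity under the predicted class distribution.
    probs = logits.softmax(dim=1)
    c_term = (probs * complexity_cost).sum(dim=1).mean()
    return rd_term + lam * c_term

# Example: 4 candidate partition classes with increasing search cost.
loss = rdc_loss(torch.randn(8, 4), torch.randint(0, 4, (8,)),
                torch.tensor([1.0, 2.0, 4.0, 8.0]))
```

Raising `lam` biases the network toward cheaper partition decisions, which is how such a loss supports a range of complexity operating points.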

Keyword :

Block partition; CNN; Complexity optimization; Multi-feature fusion; Versatile video coding

Cite:

GB/T 7714 Wei, Xiaojie, Zeng, Hongji, Fang, Ying et al. Multi-feature fusion for efficient inter prediction in versatile video coding [J]. | JOURNAL OF REAL-TIME IMAGE PROCESSING, 2024, 21 (6).
MLA Wei, Xiaojie et al. "Multi-feature fusion for efficient inter prediction in versatile video coding." | JOURNAL OF REAL-TIME IMAGE PROCESSING 21.6 (2024).
APA Wei, Xiaojie, Zeng, Hongji, Fang, Ying, Lin, Liqun, Chen, Weiling, Xu, Yiwen. Multi-feature fusion for efficient inter prediction in versatile video coding. | JOURNAL OF REAL-TIME IMAGE PROCESSING, 2024, 21 (6).

Versions:

Multi-feature fusion for efficient inter prediction in versatile video coding Scopus
Journal Article | 2024, 21 (6) | Journal of Real-Time Image Processing
Multi-feature fusion for efficient inter prediction in versatile video coding EI
Journal Article | 2024, 21 (6) | Journal of Real-Time Image Processing
Face Super-Resolution Quality Assessment Based On Identity and Recognizability Scopus
Journal Article | 2024, 6 (3), 1-1 | IEEE Transactions on Biometrics, Behavior, and Identity Science
SCOPUS Cited Count: 1

Abstract :

Face Super-Resolution (FSR) plays a crucial role in enhancing low-resolution face images, which is essential for various face-related tasks. However, FSR may alter individuals' identities or introduce artifacts that affect recognizability. This problem has not been well assessed by existing Image Quality Assessment (IQA) methods. In this paper, we present both subjective and objective evaluations for FSR-IQA, resulting in a benchmark dataset and a reduced-reference quality metric, respectively. First, we incorporate a novel criterion of identity preservation and recognizability to develop our Face Super-resolution Quality Dataset (FSQD). Second, we analyze the correlation between identity preservation and recognizability, and investigate effective feature extraction for both. Third, we propose a training-free IQA framework called Face Identity and Recognizability Evaluation of Super-resolution (FIRES). Experimental results on FSQD demonstrate that FIRES achieves competitive performance.
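
The training-free evaluation idea (scoring identity preservation and recognizability separately, then combining them) might be sketched as follows. The `embed` and `recognize` callables and the fusion weight `alpha` are hypothetical stand-ins for illustration, not the actual FIRES components:

```python
import numpy as np

def fsr_quality_score(sr_img, ref_img, embed, recognize, alpha=0.5):
    """embed: maps a face image to a 1-D identity embedding;
    recognize: returns a recognition confidence in [0, 1] for the image.
    Identity preservation is the cosine similarity between embeddings of
    the super-resolved face and its reference."""
    e_sr, e_ref = embed(sr_img), embed(ref_img)
    identity = float(np.dot(e_sr, e_ref) /
                     (np.linalg.norm(e_sr) * np.linalg.norm(e_ref)))
    recognizability = float(recognize(sr_img))  # e.g. top-1 confidence
    # Weighted fusion of the two criteria into one quality score.
    return alpha * identity + (1 - alpha) * recognizability
```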

Keyword :

Biometrics; Face recognition; face super-resolution; Feature extraction; identity preservation; Image quality; Image recognition; Image reconstruction; Measurement; quality assessment; recognizability; Superresolution

Cite:

GB/T 7714 Chen, W., Lin, W., Xu, X. et al. Face Super-Resolution Quality Assessment Based On Identity and Recognizability [J]. | IEEE Transactions on Biometrics, Behavior, and Identity Science, 2024, 6 (3): 1-1.
MLA Chen, W. et al. "Face Super-Resolution Quality Assessment Based On Identity and Recognizability." | IEEE Transactions on Biometrics, Behavior, and Identity Science 6.3 (2024): 1-1.
APA Chen, W., Lin, W., Xu, X., Lin, L., Zhao, T. Face Super-Resolution Quality Assessment Based On Identity and Recognizability. | IEEE Transactions on Biometrics, Behavior, and Identity Science, 2024, 6 (3), 1-1.

Versions:

Face Super-Resolution Quality Assessment Based on Identity and Recognizability
Journal Article | 2024, 6 (3), 364-373 | IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE
Face Super-Resolution Quality Assessment Based on Identity and Recognizability EI
Journal Article | 2024, 6 (3), 364-373 | IEEE Transactions on Biometrics, Behavior, and Identity Science
UKD-Net: efficient image enhancement with knowledge distillation SCIE
Journal Article | 2024, 33 (2) | JOURNAL OF ELECTRONIC IMAGING
WoS CC Cited Count: 1

Abstract :

Underwater images often suffer from color distortion, blurred details, and low contrast, so underwater image enhancement (UIE) methods are being explored by a growing number of researchers. However, deep learning-based UIE models suffer from high computational complexity, which limits their integration into underwater devices. In this work, we propose a lightweight UIE network based on knowledge distillation (UKD-Net), which comprises a teacher network (T-Net) and a student network (S-Net). T-Net uses our designed multi-scale fusion block and parallel attention block to achieve excellent performance. We use knowledge distillation to transfer the rich knowledge of T-Net onto a deployable S-Net. Additionally, S-Net employs blueprint-separable convolutions and a multistage distillation block to reduce parameter count and computational complexity. Results demonstrate that UKD-Net achieves a lightweight model design while maintaining superior enhancement performance.
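
The teacher-student transfer described here follows the standard knowledge-distillation pattern. Below is a minimal sketch of such an objective for an enhancement task; the L1 terms and the weight `beta` are common choices assumed for illustration, not taken from the paper:

```python
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, ground_truth, beta=0.5):
    """Combine direct supervision with mimicry of the (frozen) teacher."""
    task = F.l1_loss(student_out, ground_truth)             # supervised term
    distill = F.l1_loss(student_out, teacher_out.detach())  # mimic teacher
    return task + beta * distill
```

At deployment only the small student runs; the teacher is used solely to shape the student's training signal.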

Keyword :

knowledge distillation; lightweight; underwater image enhancement

Cite:

GB/T 7714 Zhao, Xiaoyan, Cai, Xiaowen, Xue, Ying et al. UKD-Net: efficient image enhancement with knowledge distillation [J]. | JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (2).
MLA Zhao, Xiaoyan et al. "UKD-Net: efficient image enhancement with knowledge distillation." | JOURNAL OF ELECTRONIC IMAGING 33.2 (2024).
APA Zhao, Xiaoyan, Cai, Xiaowen, Xue, Ying, Liao, Yipeng, Lin, Liqun, Zhao, Tiesong. UKD-Net: efficient image enhancement with knowledge distillation. | JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (2).

Versions:

UKD-Net: efficient image enhancement with knowledge distillation EI
Journal Article | 2024, 33 (2) | Journal of Electronic Imaging
UKD-Net: efficient image enhancement with knowledge distillation Scopus
Journal Article | 2024, 33 (2) | Journal of Electronic Imaging
Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory [基于感知和记忆的视频动态质量评价] CSCD PKU
Journal Article | 2024 | Acta Electronica Sinica (电子学报)

Abstract :

Owing to the variability of the network environment, video playback is prone to stalling and bit-rate fluctuations, which severely degrade the end user's quality of experience. Accurately evaluating video quality is therefore essential for optimizing network resource allocation and improving the viewing experience. Existing video quality evaluation methods mainly target short videos and generally focus on the characteristics of human visual perception; they give little consideration to human memory's capacity to store and express visual information, or to the interaction between visual perception and memory. When users watch long videos, however, quality must be evaluated dynamically, introducing memory factors alongside perceptual ones. To better measure the quality of long videos, this paper introduces a deep network model, investigates in depth how perceptual and memory characteristics affect the viewing experience, and proposes a dynamic quality evaluation model for long videos based on both. First, we design subjective experiments to explore how visual perception and human memory affect quality of experience under different video playback modes, and construct a Video Quality Database with Perception and Memory (PAM-VQD). Second, based on PAM-VQD, we use a deep learning method combined with a visual attention mechanism to extract deep perceptual features of videos and accurately assess the impact of perception on quality of experience. Finally, the perceptual quality score output by the front-end network, the playback status, and the self-stall interval are fed as three features into a long short-term memory network to establish the temporal dependency between visual perception and memory. Experimental results show that the proposed model accurately predicts quality of experience under different video playback modes and generalizes well.

Keyword :

quality of experience; attention mechanism; deep learning; visual perception characteristics; memory effect

Cite:

GB/T 7714 林丽群, 暨书逸, 何嘉晨 et al. 基于感知和记忆的视频动态质量评价 [J]. | 电子学报, 2024.
MLA 林丽群 et al. "基于感知和记忆的视频动态质量评价." | 电子学报 (2024).
APA 林丽群, 暨书逸, 何嘉晨, 赵铁松, 陈炜玲, 郭宗明. 基于感知和记忆的视频动态质量评价. | 电子学报, 2024.
Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory EI
Journal Article | 2024, 52 (11), 3727-3740 | Acta Electronica Sinica

Abstract :

Due to the variability of the network environment, video playback is prone to stalling and bit-rate fluctuations, which seriously degrade the end user's quality of experience. Accurate evaluation of video quality is therefore crucial for optimizing network resource allocation and enhancing the viewing experience. Existing video quality evaluation methods mainly address the visual perception characteristics of short videos; they give less consideration to human memory's capacity to store and express visual information, and to the interaction between visual perception and memory. In contrast, when users watch long videos, quality evaluation must be dynamic, considering both perceptual and memory elements. To better measure the quality of long videos, we introduce a deep network model, explore in depth the impact of video perception and memory characteristics on users' viewing experience, and propose a dynamic quality evaluation model for long videos based on these two characteristics. First, we design subjective experiments to investigate the influence of visual perception and human memory on quality of experience under different video playback modes, and construct a video quality database with perception and memory (PAM-VQD). Second, based on the PAM-VQD database, we employ a deep learning methodology combined with a visual attention mechanism to extract deep perceptual features of videos and accurately evaluate the impact of perception on quality of experience. Finally, the perceptual quality score output by the front-end network, the playback status, and the self-lag interval are fed as three features into a long short-term memory network to establish the temporal dependency between visual perception and memory. Experimental results show that the proposed quality assessment model accurately predicts quality of experience under different video playback modes, with good generalization performance.
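
The temporal stage described here (an LSTM fed with the per-segment perceptual quality score, playback status, and self-lag interval) can be sketched compactly. The hidden size and regression head below are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class DynamicQoESketch(nn.Module):
    """LSTM over per-segment triples: (quality score, playback status,
    self-lag interval) -> a dynamic quality estimate per time step."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, feats):  # feats: (B, T, 3)
        out, _ = self.lstm(feats)
        return self.head(out).squeeze(-1)  # (B, T) quality trajectory

# Example: one video observed over 8 segments.
model = DynamicQoESketch()
scores = model(torch.rand(1, 8, 3))
```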

Keyword :

Long short-term memory; Memory architecture; Resource allocation; Video analysis; Video recording

Cite:

GB/T 7714 Lin, Li-Qun, Ji, Shu-Yi, He, Jia-Chen et al. Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory [J]. | Acta Electronica Sinica, 2024, 52 (11): 3727-3740.
MLA Lin, Li-Qun et al. "Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory." | Acta Electronica Sinica 52.11 (2024): 3727-3740.
APA Lin, Li-Qun, Ji, Shu-Yi, He, Jia-Chen, Zhao, Tie-Song, Chen, Wei-Ling, Guo, Chong-Ming. Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory. | Acta Electronica Sinica, 2024, 52 (11), 3727-3740.

Versions:

Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory; [基于感知和记忆的视频动态质量评价] Scopus
Journal Article | 2024, 52 (11), 3727-3740 | Acta Electronica Sinica
LightViD: Efficient Video Deblurring With Spatial-Temporal Feature Fusion SCIE
Journal Article | 2024, 34 (8), 7430-7439 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
WoS CC Cited Count: 2

Abstract :

Natural video capture suffers from visual blurriness due to high motion of cameras or objects. The video blurriness removal task has been extensively explored for both human vision and machine processing; however, its computational cost remains a critical issue that has not yet been fully addressed. In this paper, we propose a novel Lightweight Video Deblurring (LightViD) method that achieves top-tier performance with an extremely low parameter count. The proposed LightViD consists of a blur detector and a deblurring network. In particular, the blur detector effectively separates blurry regions, thus avoiding both unnecessary computation and over-enhancement in non-blurry regions. The deblurring network is designed as a lightweight model: it employs a Spatial Feature Fusion Block (SFFB) to extract hierarchical spatial features, which are further fused by a ConvLSTM for effective spatial-temporal feature representation. Comprehensive experiments with quantitative and qualitative comparisons demonstrate the effectiveness of our LightViD method, which achieves competitive performance on the GoPro and DVD datasets at a reduced computational cost of 1.63M parameters and 96.8 GMACs. Trained model available: https://github.com/wgp/LightVid.

Keyword :

blur detection; Computational efficiency; Computational modeling; Detectors; Feature extraction; Image restoration; Kernel; spatial-temporal feature fusion; Task analysis; Video deblurring

Cite:

GB/T 7714 Lin, Liqun, Wei, Guangpeng, Liu, Kanglin et al. LightViD: Efficient Video Deblurring With Spatial-Temporal Feature Fusion [J]. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (8): 7430-7439.
MLA Lin, Liqun et al. "LightViD: Efficient Video Deblurring With Spatial-Temporal Feature Fusion." | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 34.8 (2024): 7430-7439.
APA Lin, Liqun, Wei, Guangpeng, Liu, Kanglin, Feng, Wanjian, Zhao, Tiesong. LightViD: Efficient Video Deblurring With Spatial-Temporal Feature Fusion. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (8), 7430-7439.

Versions:

LightViD: Efficient Video Deblurring With Spatial-Temporal Feature Fusion EI
Journal Article | 2024, 34 (8), 7430-7439 | IEEE Transactions on Circuits and Systems for Video Technology
LightViD: Efficient Video Deblurring with Spatial-Temporal Feature Fusion Scopus
Journal Article | 2024, 34 (8), 1-1 | IEEE Transactions on Circuits and Systems for Video Technology