Search Results

Query:

Scholar name: Chen Weiling

< Page ,Total 6 >
STFF: Spatio-Temporal and Frequency Fusion for Video Compression Artifact Removal SCIE
Journal Article | 2025, 71 (2), 542-554 | IEEE TRANSACTIONS ON BROADCASTING

Abstract :

Video compression artifact removal focuses on enhancing the visual quality of compressed videos by mitigating visual distortions. However, existing methods often struggle to effectively capture spatio-temporal features and recover high-frequency details, due to their suboptimal adaptation to the characteristics of compression artifacts. To overcome these limitations, we propose a novel Spatio-Temporal and Frequency Fusion (STFF) framework. STFF incorporates three key components: Feature Extraction and Alignment (FEA), which employs SRU for effective spatiotemporal feature extraction; Bidirectional High-Frequency Enhanced Propagation (BHFEP), which integrates HCAB to restore high-frequency details through bidirectional propagation; and Residual High-Frequency Refinement (RHFR), which further enhances high-frequency information. Extensive experiments demonstrate that STFF achieves superior performance compared to state-of-the-art methods in both objective metrics and subjective visual quality, effectively addressing the challenges posed by video compression artifacts. Trained model available: https://github.com/Stars-WMX/STFF.

Keyword :

Degradation; Feature extraction; Image coding; Image restoration; Motion compensation; Optical flow; Quality assessment; Spatiotemporal phenomena; Transformers; video coding; Video compression; Video compression artifact removal; video enhancement; video quality

Cite:


GB/T 7714 Wang, Mingxing , Liao, Yipeng , Chen, Weiling et al. STFF: Spatio-Temporal and Frequency Fusion for Video Compression Artifact Removal [J]. | IEEE TRANSACTIONS ON BROADCASTING , 2025 , 71 (2) : 542-554 .
MLA Wang, Mingxing et al. "STFF: Spatio-Temporal and Frequency Fusion for Video Compression Artifact Removal" . | IEEE TRANSACTIONS ON BROADCASTING 71 . 2 (2025) : 542-554 .
APA Wang, Mingxing , Liao, Yipeng , Chen, Weiling , Lin, Liqun , Zhao, Tiesong . STFF: Spatio-Temporal and Frequency Fusion for Video Compression Artifact Removal . | IEEE TRANSACTIONS ON BROADCASTING , 2025 , 71 (2) , 542-554 .

Prototype Alignment With Dedicated Experts for Test-Agnostic Long-Tailed Recognition SCIE
Journal Article | 2025, 27, 455-465 | IEEE TRANSACTIONS ON MULTIMEDIA

Abstract :

Unlike vanilla long-tailed recognition, which trains on imbalanced data but assumes a uniform test class distribution, test-agnostic long-tailed recognition aims to handle arbitrary test class distributions. Existing methods require prior knowledge of test sets for post-adjustment through multi-stage training, resulting in static decisions at the dataset level. This pipeline overlooks instance diversity and is impractical in real situations. In this work, we introduce Prototype Alignment with Dedicated Experts (PADE), a one-stage framework for test-agnostic long-tailed recognition. PADE tackles unknown test distributions at the instance level, without depending on test priors. It reformulates the task as a domain detection problem, dynamically adjusting the model for each instance. PADE comprises three main strategies: 1) a parameter customization strategy for multiple experts skilled at different categories; 2) normalized target knowledge distillation for mutual guidance among experts while maintaining diversity; 3) re-balanced compactness learning with momentum prototypes, promoting instance alignment with the corresponding class centroid. We evaluate PADE on various long-tailed recognition benchmarks with diverse test distributions. The results verify its effectiveness in both vanilla and test-agnostic long-tailed recognition.

Keyword :

Long-tailed classification; prototypical learning; test-agnostic recognition

Cite:


GB/T 7714 Guo, Chen , Chen, Weiling , Huang, Aiping et al. Prototype Alignment With Dedicated Experts for Test-Agnostic Long-Tailed Recognition [J]. | IEEE TRANSACTIONS ON MULTIMEDIA , 2025 , 27 : 455-465 .
MLA Guo, Chen et al. "Prototype Alignment With Dedicated Experts for Test-Agnostic Long-Tailed Recognition" . | IEEE TRANSACTIONS ON MULTIMEDIA 27 (2025) : 455-465 .
APA Guo, Chen , Chen, Weiling , Huang, Aiping , Zhao, Tiesong . Prototype Alignment With Dedicated Experts for Test-Agnostic Long-Tailed Recognition . | IEEE TRANSACTIONS ON MULTIMEDIA , 2025 , 27 , 455-465 .

Unified No-Reference Quality Assessment for Sonar Imaging and Processing SCIE
Journal Article | 2025, 63 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

Sonar technology has been widely used in underwater surface mapping and remote object detection for its light-independent characteristics. Recently, the boom in artificial intelligence has further spurred sonar image (SI) processing and understanding techniques. However, intricate marine environments and diverse nonlinear postprocessing operations may degrade the quality of SIs, impeding accurate interpretation of underwater information. Efficient image quality assessment (IQA) methods are crucial for quality monitoring in sonar imaging and processing. Existing IQA methods overlook the unique characteristics of SIs or focus solely on typical distortions in specific scenarios, which limits their generalization capability. In this article, we propose a unified sonar IQA method, which overcomes the challenges posed by diverse distortions. Though degradation conditions are changeable, ideal SIs consistently require certain properties: they must be task-centered and exhibit attribute consistency. We derive a comprehensive set of quality attributes from both the task background and visual content of SIs. These attribute features are represented in just ten dimensions and ultimately mapped to the quality score. To validate the effectiveness of our method, we construct the first comprehensive SI dataset. Experimental results demonstrate the superior performance and robustness of the proposed method.

Keyword :

Attribute consistency; Degradation; Distortion; Image quality; image quality assessment (IQA); Imaging; Noise; Nonlinear distortion; no-reference (NR); Quality assessment; Silicon; Sonar; sonar imaging and processing; Sonar measurements

Cite:


GB/T 7714 Cai, Boqin , Chen, Weiling , Zhang, Jianghe et al. Unified No-Reference Quality Assessment for Sonar Imaging and Processing [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 .
MLA Cai, Boqin et al. "Unified No-Reference Quality Assessment for Sonar Imaging and Processing" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025) .
APA Cai, Boqin , Chen, Weiling , Zhang, Jianghe , Junejo, Naveed Ur Rehman , Zhao, Tiesong . Unified No-Reference Quality Assessment for Sonar Imaging and Processing . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 .

Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory EI
Journal Article | 2024, 52 (11), 3727-3740 | Acta Electronica Sinica

Abstract :

Due to the variability of the network environment, video playback is prone to stalling and bit-rate fluctuations, which seriously affect the quality of the end-user experience. To optimize network resource allocation and enhance the user viewing experience, it is crucial to accurately evaluate video quality. Existing video quality evaluation methods mainly focus on the visual perception characteristics of short videos, with less consideration of the ability of human memory to store and express visual information, or of the interaction between visual perception and memory. In contrast, when users watch long videos, quality evaluation must be dynamic and consider both perceptual and memory elements. To better evaluate the quality of long videos, we introduce a deep network model to explore the impact of video perception and memory characteristics on users' viewing experience, and propose a dynamic quality evaluation model for long videos based on these two characteristics. Firstly, we design subjective experiments to investigate the influence of visual perceptual features and human memory features on user experience quality under different video playback modes, and construct a video quality database with perception and memory (PAM-VQD). Secondly, based on the PAM-VQD database, a deep learning methodology is utilized to extract deep perceptual features of videos, combined with a visual attention mechanism, to accurately evaluate the impact of perception on user experience quality. Finally, three features output by the front-end network, namely the perceptual quality score, the playback status, and the self-lag interval, are fed into a long short-term memory network to establish the temporal dependency between visual perception and memory. The experimental results show that the proposed quality assessment model can accurately predict user experience quality under different video playback modes with good generalization performance. © 2024 Chinese Institute of Electronics. All rights reserved.

Keyword :

Long short-term memory; Memory architecture; Resource allocation; Video analysis; Video recording

Cite:


GB/T 7714 Lin, Li-Qun , Ji, Shu-Yi , He, Jia-Chen et al. Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory [J]. | Acta Electronica Sinica , 2024 , 52 (11) : 3727-3740 .
MLA Lin, Li-Qun et al. "Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory" . | Acta Electronica Sinica 52 . 11 (2024) : 3727-3740 .
APA Lin, Li-Qun , Ji, Shu-Yi , He, Jia-Chen , Zhao, Tie-Song , Chen, Wei-Ling , Guo, Chong-Ming . Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory . | Acta Electronica Sinica , 2024 , 52 (11) , 3727-3740 .

Perception-and-Cognition-Inspired Quality Assessment for Sonar Image Super-Resolution SCIE
Journal Article | 2024, 26, 6398-6410 | IEEE TRANSACTIONS ON MULTIMEDIA
WoS CC Cited Count: 7

Abstract :

Due to the light-independent imaging characteristics, sonar images play a crucial role in fields such as underwater detection and rescue. However, the resolution of sonar images is negatively correlated with the imaging distance. To overcome this limitation, Super-Resolution (SR) techniques have been introduced into sonar image processing. Nevertheless, it is not always guaranteed that SR maintains the utility of the image. Therefore, quantifying the utility of SR reconstructed Sonar Images (SRSIs) can facilitate their optimization and usage. Existing Image Quality Assessment (IQA) methods are inadequate for evaluating SRSIs as they fail to consider both the unique characteristics of sonar images and reconstruction artifacts while meeting task requirements. In this paper, we propose a Perception-and-Cognition-inspired quality Assessment method for Sonar image Super-resolution (PCASS). Our approach incorporates a hierarchical feature fusion-based framework inspired by the cognitive process in the human brain to comprehensively evaluate SRSIs' quality under object recognition tasks. Additionally, we select features at each level considering visual perception characteristics introduced by SR reconstruction artifacts such as texture abundance, contour details, and semantic information to measure image quality accurately. Importantly, our method does not require training data and is suitable for scenarios with limited available images. Experimental results validate its superior performance.

Keyword :

hierarchical feature fusion; image quality assessment (IQA); Sonar image; super-resolution (SR); task-oriented

Cite:


GB/T 7714 Chen, Weiling , Cai, Boqin , Zheng, Sumei et al. Perception-and-Cognition-Inspired Quality Assessment for Sonar Image Super-Resolution [J]. | IEEE TRANSACTIONS ON MULTIMEDIA , 2024 , 26 : 6398-6410 .
MLA Chen, Weiling et al. "Perception-and-Cognition-Inspired Quality Assessment for Sonar Image Super-Resolution" . | IEEE TRANSACTIONS ON MULTIMEDIA 26 (2024) : 6398-6410 .
APA Chen, Weiling , Cai, Boqin , Zheng, Sumei , Zhao, Tiesong , Gu, Ke . Perception-and-Cognition-Inspired Quality Assessment for Sonar Image Super-Resolution . | IEEE TRANSACTIONS ON MULTIMEDIA , 2024 , 26 , 6398-6410 .

Underwater image quality optimization: Researches, challenges, and future trends SCIE
Journal Article | 2024, 146 | IMAGE AND VISION COMPUTING

Abstract :

Underwater images serve as crucial mediums for conveying marine information. Nevertheless, due to the inherent complexity of the underwater environment, underwater images often suffer from various quality degradation phenomena such as color deviation, low contrast, and non-uniform illumination. These degraded underwater images fail to meet the requirements of underwater computer vision applications. Consequently, effective quality optimization of underwater images is of paramount research and analytical value. Based on whether they rely on underwater physical imaging models, underwater image quality optimization techniques can be categorized into underwater image enhancement and underwater image restoration methods. This paper provides a comprehensive review of underwater image enhancement and restoration algorithms, accompanied by a brief introduction to the underwater imaging model. Then, we systematically analyze publicly available underwater image datasets and commonly used quality assessment methodologies. Furthermore, extensive experimental comparisons are carried out to assess the performance of underwater image optimization algorithms and their practical impact on high-level vision tasks. Finally, the challenges and future development trends in this field are discussed. We hope that the efforts made in this paper will provide valuable references for future research and contribute to the innovative advancement of underwater image optimization.

Keyword :

Image quality assessment; Underwater image datasets; Underwater image enhancement; Underwater image restoration

Cite:


GB/T 7714 Wang, Mingjie , Zhang, Keke , Wei, Hongan et al. Underwater image quality optimization: Researches, challenges, and future trends [J]. | IMAGE AND VISION COMPUTING , 2024 , 146 .
MLA Wang, Mingjie et al. "Underwater image quality optimization: Researches, challenges, and future trends" . | IMAGE AND VISION COMPUTING 146 (2024) .
APA Wang, Mingjie , Zhang, Keke , Wei, Hongan , Chen, Weiling , Zhao, Tiesong . Underwater image quality optimization: Researches, challenges, and future trends . | IMAGE AND VISION COMPUTING , 2024 , 146 .

Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment SCIE
Journal Article | 2024, 34 (7), 5897-5907 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract :

Super-Resolution (SR) algorithms aim to enhance the resolutions of images. Massive deep-learning-based SR techniques have emerged in recent years. In such cases, a visually appealing output may contain additional details compared with its reference image. Accordingly, fully referenced Image Quality Assessment (IQA) cannot work well; however, reference information remains essential for evaluating the qualities of SR images. This poses a challenge to SR-IQA: how to balance the referenced and no-reference scores for user perception? In this paper, we propose a Perception-driven Similarity-Clarity Tradeoff (PSCT) model for SR-IQA. Specifically, we investigate this problem from both referenced and no-reference perspectives, and design two deep-learning-based modules to obtain referenced and no-reference scores. We present a theoretical analysis of their tradeoff based on Human Visual System (HVS) properties and also calculate adaptive weights for them. Experimental results indicate that our PSCT model is superior to state-of-the-art methods on SR-IQA. In addition, the proposed PSCT model is also capable of evaluating quality scores in other image enhancement scenarios, such as deraining, dehazing and underwater image enhancement. The source code is available at https://github.com/kekezhang112/PSCT.
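At its core, the referenced/no-reference balance described above is an adaptively weighted combination of a similarity score and a clarity score. The sketch below illustrates only that general idea: in PSCT the weight is derived from HVS properties, whereas here `alpha` is a caller-supplied stand-in, so this is not the paper's model.

```python
def blended_sr_score(similarity, clarity, alpha):
    """Blend a referenced similarity score with a no-reference clarity score.

    alpha in [0, 1] is the adaptive weight. In PSCT it is computed from
    Human Visual System properties; here it is simply an input, making
    this an illustration of the tradeoff rather than an implementation.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * similarity + (1.0 - alpha) * clarity

# alpha = 1 trusts the reference entirely; alpha = 0 ignores it.
full_ref = blended_sr_score(0.8, 0.4, 1.0)   # -> 0.8
no_ref = blended_sr_score(0.8, 0.4, 0.0)     # -> 0.4
```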

Keyword :

Adaptation models; Distortion; Feature extraction; Image quality assessment; image super-resolution; Measurement; perception-driven; Quality assessment; similarity-clarity tradeoff; Superresolution; Task analysis

Cite:


GB/T 7714 Zhang, Keke , Zhao, Tiesong , Chen, Weiling et al. Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment [J]. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2024 , 34 (7) : 5897-5907 .
MLA Zhang, Keke et al. "Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment" . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 34 . 7 (2024) : 5897-5907 .
APA Zhang, Keke , Zhao, Tiesong , Chen, Weiling , Niu, Yuzhen , Hu, Jinsong , Lin, Weisi . Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2024 , 34 (7) , 5897-5907 .

Distillation-Based Utility Assessment for Compacted Underwater Information SCIE
Journal Article | 2024, 31, 481-485 | IEEE SIGNAL PROCESSING LETTERS

Abstract :

The limited bandwidth of underwater acoustic channels poses a challenge to the efficiency of multimedia information transmission. To improve efficiency, the system aims to transmit less data while maintaining image utility at the receiving end. Although assessing utility within compressed information is essential, current methods exhibit limitations in addressing utility-driven quality assessment. Therefore, this letter builds a Utility-oriented compacted Image Quality Dataset (UCIQD) that contains utility qualities of reference images and their corresponding compacted information at different levels. The utility score is derived from the average confidence of various object detection models. Then, based on UCIQD, we introduce a Distillation-based Compacted Information Quality assessment metric (DCIQ) for utility-oriented quality evaluation in the context of underwater machine vision. In DCIQ, utility features of compacted information are acquired through transfer learning and mapped using a Transformer. Besides, we propose a utility-oriented cross-model feature fusion mechanism to address different detection algorithm preferences. After that, a utility-oriented feature quality measure assesses compacted feature utility. Finally, we utilize distillation to compress the model by reducing its parameters by 55%. Experimental results effectively demonstrate that our proposed DCIQ can predict utility-oriented quality within compressed underwater information.
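The distillation step mentioned at the end (compressing the metric to 55% fewer parameters) is, in its general form, soft-target knowledge distillation. A minimal, dependency-free sketch of that general recipe follows; it is a textbook illustration under stated assumptions, not the DCIQ training code:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature yields softer targets."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp((z - m) / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student distributions.

    Scaling by temperature**2 keeps gradient magnitudes comparable
    across temperatures (the usual soft-target convention).
    """
    p = softmax(teacher_logits, temperature)  # teacher: soft targets
    q = softmax(student_logits, temperature)  # student: predictions
    kl = sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# A student that matches the teacher incurs zero loss; a diverging one does not.
matched = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
diverged = distillation_loss([0.1, 1.0, 2.0], [2.0, 1.0, 0.1])
```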

Keyword :

Compacted underwater information; distillation; utility-oriented quality assessment

Cite:


GB/T 7714 Liao, Honggang , Jiang, Nanfeng , Chen, Weiling et al. Distillation-Based Utility Assessment for Compacted Underwater Information [J]. | IEEE SIGNAL PROCESSING LETTERS , 2024 , 31 : 481-485 .
MLA Liao, Honggang et al. "Distillation-Based Utility Assessment for Compacted Underwater Information" . | IEEE SIGNAL PROCESSING LETTERS 31 (2024) : 481-485 .
APA Liao, Honggang , Jiang, Nanfeng , Chen, Weiling , Wei, Hongan , Zhao, Tiesong . Distillation-Based Utility Assessment for Compacted Underwater Information . | IEEE SIGNAL PROCESSING LETTERS , 2024 , 31 , 481-485 .

Multi-feature fusion for efficient inter prediction in versatile video coding SCIE
Journal Article | 2024, 21 (6) | JOURNAL OF REAL-TIME IMAGE PROCESSING
WoS CC Cited Count: 1

Abstract :

Versatile Video Coding (VVC) introduces various advanced coding techniques and tools, such as the QuadTree with nested Multi-type Tree (QTMT) partition structure, and outperforms High Efficiency Video Coding (HEVC) in terms of coding performance. However, the improvement in coding performance leads to an increase in coding complexity. In this paper, we propose a multi-feature fusion framework that integrates rate-distortion-complexity optimization theory with deep learning techniques to reduce the complexity of QTMT partitioning for VVC inter-prediction. Firstly, the proposed framework extracts features of luminance, motion, residuals, and quantization information from video frames and then performs feature fusion through a convolutional neural network to predict the minimum partition size of Coding Units (CUs). Next, a novel rate-distortion-complexity loss function is designed to balance computational complexity and compression performance. Then, through this loss function, we can adjust various distributions of rate-distortion-complexity costs. This adjustment impacts the prediction bias of the network and sets constraints on different block partition sizes to facilitate complexity adjustment. Compared to the anchor VTM-13.0, the proposed method saves encoding time by 10.14% to 56.62%, with the BDBR increase confined to a range of 0.31% to 6.70%. The proposed method achieves a broader range of complexity adjustments while ensuring coding performance, surpassing both traditional methods and deep-learning-based methods.
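The rate-distortion-complexity cost described above generalizes the classic Lagrangian cost J = D + λR with an extra complexity term. The toy sketch below shows how shifting the complexity weight changes which CU partition wins; all weights and cost values here are invented for illustration and are not the paper's:

```python
def rdc_cost(distortion, rate, complexity, lam=0.5, gamma=0.1):
    """Lagrangian-style cost: J = D + lambda * R + gamma * C."""
    return distortion + lam * rate + gamma * complexity

# Hypothetical candidate CU partition sizes: (distortion, rate, complexity).
CANDIDATES = {
    "64x64": (9.0, 10.0, 1.0),   # coarse: cheap to encode, higher distortion
    "32x32": (6.0, 14.0, 4.0),
    "16x16": (5.0, 18.0, 9.0),   # fine: costly to encode, lower distortion
}

def best_partition(gamma):
    """Pick the partition minimizing the R-D-C cost at complexity weight gamma."""
    return min(CANDIDATES, key=lambda k: rdc_cost(*CANDIDATES[k], gamma=gamma))
```

Raising `gamma` steers the minimum toward cheaper, coarser partitions at some loss in compression efficiency, which is the trade-off the loss function exposes.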

Keyword :

Block partition; CNN; Complexity optimization; Multi-feature fusion; Versatile video coding

Cite:


GB/T 7714 Wei, Xiaojie , Zeng, Hongji , Fang, Ying et al. Multi-feature fusion for efficient inter prediction in versatile video coding [J]. | JOURNAL OF REAL-TIME IMAGE PROCESSING , 2024 , 21 (6) .
MLA Wei, Xiaojie et al. "Multi-feature fusion for efficient inter prediction in versatile video coding" . | JOURNAL OF REAL-TIME IMAGE PROCESSING 21 . 6 (2024) .
APA Wei, Xiaojie , Zeng, Hongji , Fang, Ying , Lin, Liqun , Chen, Weiling , Xu, Yiwen . Multi-feature fusion for efficient inter prediction in versatile video coding . | JOURNAL OF REAL-TIME IMAGE PROCESSING , 2024 , 21 (6) .

FUVC: A Flexible Codec for Underwater Video Transmission SCIE
Journal Article | 2024, 62, 18-18 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
WoS CC Cited Count: 5

Abstract :

Smart oceanic exploration has greatly benefitted from AI-driven underwater image and video processing. However, the volume of underwater video content is subject to narrow-band and time-varying underwater acoustic channels. How to support high-utility video transmission at such a limited capacity is still an open issue. In this article, we propose a Flexible Underwater Video Codec (FUVC) with separate designs for targets-of-interest regions and backgrounds. The encoder locates all targets of interest, compresses their corresponding regions with x.265, and, if bandwidth allows, compresses the background with a lower bitrate. The decoder reconstructs both streams, identifies clean targets of interest, and fuses them with the background via a mask detection and background recovery (MDBR) network. When the background stream is unavailable, the decoder adapts all targets of interest to a virtual background via Poisson blending. Experimental results show that FUVC outperforms other codecs with a lower bitrate at the same quality. It also supports a flexible codec for underwater acoustic channels. The database and the source code are available at https://github.com/z21110008/FUVC.

Keyword :

Ocean exploration; smart oceans; underwater image processing; video coding; video compression

Cite:


GB/T 7714 Zheng, Yannan , Luo, Jiawei , Chen, Weiling et al. FUVC: A Flexible Codec for Underwater Video Transmission [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2024 , 62 : 18-18 .
MLA Zheng, Yannan et al. "FUVC: A Flexible Codec for Underwater Video Transmission" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024) : 18-18 .
APA Zheng, Yannan , Luo, Jiawei , Chen, Weiling , Li, Zuoyong , Zhao, Tiesong . FUVC: A Flexible Codec for Underwater Video Transmission . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2024 , 62 , 18-18 .


Address: FZU Library (No. 2 Xuyuan Road, Fuzhou, Fujian, PRC; Post Code: 350116). Contact: 0591-22865326
Copyright: FZU Library. Technical support: Beijing Aegean Software Co., Ltd. 闽ICP备05005463号-1