Publication Search

Query:

Scholar name: Yin Jia-Li (印佳丽)

Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks EI
Conference Paper | 2025, 39 (9), 9508-9516 | 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025

Abstract :

Backdoor attacks and adversarial attacks are two major security threats to deep neural networks (DNNs): the former is a training-time data-poisoning attack that implants backdoor triggers into models by injecting trigger patterns into training samples, while the latter is a testing-time attack that generates adversarial examples (AEs) from benign images to mislead a well-trained model. Previous works generally treat these two attacks separately, and the inherent connection between them is rarely explored. In this paper, we focus on bridging backdoor and adversarial attacks and observe two intriguing phenomena when applying adversarial attacks to an infected model implanted with backdoors: 1) a sample is harder to turn into an AE when the trigger is present; 2) AEs generated from backdoor samples are highly likely to be predicted as their true labels. Inspired by these observations, we propose a novel backdoor defense method, dubbed Adversarial-Inspired Backdoor Defense (AIBD), which isolates backdoor samples by leveraging a progressive top-q scheme and breaks the correlation between backdoor samples and their target labels using adversarial labels. Through extensive experiments on various datasets against six state-of-the-art backdoor attacks, AIBD-trained models on poisoned data demonstrate superior performance over existing defense methods. Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
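The progressive top-q isolation mentioned above can be sketched in a few lines. The score definition, function name, and fixed q-schedule below are illustrative assumptions, not the paper's implementation: per observation 1), samples that are harder to turn into AEs are treated as more suspicious, and an increasing fraction q of the highest-scoring samples is isolated each round.

```python
def progressive_top_q_isolation(scores, q_schedule):
    """Toy sketch of a progressive top-q isolation scheme.

    scores: per-sample suspicion scores (higher = harder to perturb into
            an adversarial example, hence more likely backdoored).
    q_schedule: increasing fractions, one per round; each round isolates
            the top-q fraction of samples by score.
    Returns the indices isolated at the final round.
    """
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    isolated = []
    for q in q_schedule:
        k = max(1, int(round(q * len(scores))))
        isolated = ranked[:k]  # progressively larger suspected-backdoor set
    return isolated

# Example: samples 2 and 0 are hardest to perturb, so they are isolated first.
scores = [0.9, 0.1, 0.95, 0.2, 0.3]
print(progressive_top_q_isolation(scores, q_schedule=[0.2, 0.4]))  # -> [2, 0]
```

The growing schedule mirrors the "progressive" aspect described in the abstract: early rounds isolate only the most confident suspects, later rounds expand the set.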

Keyword :

Backpropagation; Deep neural networks

Cite:


GB/T 7714 Yin, Jia-Li , Wang, Weijian , Lyhwa et al. Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks [C] . 2025 : 9508-9516 .
MLA Yin, Jia-Li et al. "Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks" . (2025) : 9508-9516 .
APA Yin, Jia-Li , Wang, Weijian , Lyhwa , Lin, Wei , Liu, Ximeng . Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks . (2025) : 9508-9516 .

Version :

Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks Scopus
Other | 2025, 39 (9), 9508-9516 | Proceedings of the AAAI Conference on Artificial Intelligence
Adversarial Example Quality Assessment: A Large-scale Dataset and Strong Baseline EI
Conference Paper | 2024, 4786-4794 | 32nd ACM International Conference on Multimedia, MM 2024

Abstract :

Adversarial examples (AEs), which are maliciously hand-crafted by adding perturbations to benign images, reveal the vulnerability of deep neural networks (DNNs) and have been used as a benchmark for evaluating model robustness. While great effort has been devoted to generating AEs with stronger attack ability, the visual quality of AEs has generally been neglected in previous studies. The lack of a good quality measure for AEs makes it very hard to compare the relative merits of attack techniques and hinders technological advancement. How to evaluate the visual quality of AEs remains an understudied and unsolved problem. In this work, we make the first attempt to fill this gap by presenting an image quality assessment method specifically designed for AEs. Toward this goal, we first construct a new database, called AdvDB, built on diverse adversarial examples with elaborate annotations. We also propose a detection-based structural similarity index (AdvDSS) for adversarial example perceptual quality assessment. Specifically, the visual saliency for capturing near-threshold adversarial distortions is first detected via human visual system (HVS) techniques, and then the structural similarity is extracted to predict the quality score. Moreover, we propose AEQA for overall adversarial example quality assessment by integrating the perceptual quality and attack intensity of AEs. Extensive experiments validate that the proposed AdvDSS achieves state-of-the-art performance that is more consistent with human opinions. © 2024 ACM.

Cite:


GB/T 7714 Yin, Jia-Li , Chen, Menghao , Han, Jin et al. Adversarial Example Quality Assessment: A Large-scale Dataset and Strong Baseline [C] . 2024 : 4786-4794 .
MLA Yin, Jia-Li et al. "Adversarial Example Quality Assessment: A Large-scale Dataset and Strong Baseline" . (2024) : 4786-4794 .
APA Yin, Jia-Li , Chen, Menghao , Han, Jin , Chen, Bo-Hao , Liu, Ximeng . Adversarial Example Quality Assessment: A Large-scale Dataset and Strong Baseline . (2024) : 4786-4794 .

Version :

Adversarial Example Quality Assessment: A Large-scale Dataset and Strong Baseline Scopus
Other | 2024, 4786-4794 | MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
基于一致性感知特征融合的高动态范围成像方法 (Coherence-Aware Feature Aggregation for High Dynamic Range Imaging)
Journal Article | 2024, 47 (10), 2352-2367 | 计算机学报 (Chinese Journal of Computers)

Abstract :

High Dynamic Range Imaging (HDRI) refers to methods that fuse multiple Low Dynamic Range (LDR) images to extend the dynamic range and complete the image content, providing a practical remedy for the content loss caused by the limited dynamic range of camera sensors. Decades of research have produced many effective HDRI methods that achieve near-optimal performance in static scenes with well-exposed content and no object motion. In real scenes, however, object movement and camera shift are unavoidable, and directly applying conventional HDRI methods produces severe ghosting and artifacts in the fused HDR image. HDRI methods containing only a simple fusion step are therefore unsuitable for practical use, and HDRI in real scenes remains challenging, so research on HDRI for dynamic scenes has developed rapidly. Recent methods leverage deep Convolutional Neural Networks (CNNs) in pursuit of better performance. In these CNN-based methods, feature aggregation plays a crucial role in recovering complete image content and removing artifacts. Conventional feature aggregation first concatenates the features of the LDR images with the help of skip connections or attention modules and then gradually attends to different local features through stacked convolutions. However, such schemes usually ignore the rich contextual dependencies across the LDR image sequence and do not fully exploit the textural coherence between features. To address this, this paper proposes a novel Coherence-Aware Feature Aggregation (CAFA) scheme, which during convolution samples feature information that lies at different spatial positions but shares the same contextual information, thereby explicitly incorporating contextual coherence into feature aggregation. Building on CAFA, the paper further proposes CAHDRNet, a coherence-aware HDRI network for dynamic scenes. To better integrate the CAFA scheme, CAHDRNet is constructed with three additional learnable modules. First, a learnable feature extractor is built on a VGG-19 pre-trained on ImageNet, and its parameters are continually updated during training; this enables joint feature learning of the LDR images and lays a solid foundation for the contextual-coherence assessment in CAFA. Next, the proposed CAFA module performs feature aggregation by sampling information with the same context in the image features. Finally, a multi-scale residual completion module processes the aggregated features, learning with different dilation rates to obtain stronger feature representations and fill plausible details into missing image regions. A soft attention module is also designed to learn the importance of different image regions, so that features complementary to the reference image are obtained during skip connections. Extensive experiments validate the effectiveness of CAHDRNet and confirm that it outperforms existing state-of-the-art methods. In particular, on the Kalantari dataset, CAHDRNet improves HDR-VDP-2 and PSNR-L by 1.61 and 0.68, respectively, over the second-best method, AHDRNet.

Keyword :

Contextual coherence; Convolutional sampling; Image fusion; Feature aggregation; High dynamic range imaging

Cite:


GB/T 7714 印佳丽 , 韩津 , 陈斌 et al. 基于一致性感知特征融合的高动态范围成像方法 [J]. | 计算机学报 , 2024 , 47 (10) : 2352-2367 .
MLA 印佳丽 et al. "基于一致性感知特征融合的高动态范围成像方法" . | 计算机学报 47 . 10 (2024) : 2352-2367 .
APA 印佳丽 , 韩津 , 陈斌 , 刘西蒙 . 基于一致性感知特征融合的高动态范围成像方法 . | 计算机学报 , 2024 , 47 (10) , 2352-2367 .

Version :

基于一致性感知特征融合的高动态范围成像方法 (Coherence-Aware Feature Aggregation for High Dynamic Range Imaging)
Journal Article | 2024, 47 (10), 2352-2367 | 计算机学报 (Chinese Journal of Computers)
An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability CPCI-S
Journal Article | 2023, 4466-4475 | CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV
WoS CC Cited Count: 18

Abstract :

Because the transferability of adversarial examples allows an adversary to perform black-box attacks (i.e., attacks in which the attacker has no knowledge of the target model), transfer-based adversarial attacks have gained great attention. Previous works mostly study gradient variation or image transformations to amplify the distortion on critical parts of inputs. These methods can transfer across models with limited differences (e.g., from CNNs to CNNs) but always fail to transfer across models with wide differences, such as from CNNs to ViTs. Alternatively, model ensemble adversarial attacks fuse the outputs of surrogate models with diverse architectures into an ensemble loss, making the generated adversarial example more likely to transfer to other models because it can fool multiple models concurrently. However, existing ensemble attacks simply fuse the outputs of the surrogate models evenly and thus cannot effectively capture and amplify the intrinsic transfer information of adversarial examples. In this paper, we propose an adaptive ensemble attack, dubbed AdaEA, that adaptively controls the fusion of the outputs from each model by monitoring the discrepancy ratio of their contributions toward the adversarial objective. Furthermore, an extra disparity-reduced filter is introduced to further synchronize the update direction. As a result, we achieve considerable improvement over existing ensemble attacks on various datasets, and the proposed AdaEA can also boost existing transfer-based attacks, which further demonstrates its efficacy and versatility. The source code is available at: https://github.com/CHENBIN99/AdaEA
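The adaptive-fusion idea, weighting each surrogate's output by its measured contribution instead of averaging evenly, can be illustrated with a minimal numpy sketch. The softmax weighting and the contribution measure here are assumptions for illustration, and the paper's disparity-reduced filter is omitted:

```python
import numpy as np

def adaptive_ensemble_logits(logits_list, contributions, temperature=1.0):
    """Toy sketch of adaptively fusing surrogate-model outputs.

    Instead of averaging surrogate logits evenly, each model is weighted by
    a softmax over its measured contribution to the adversarial objective
    (e.g., its current loss increase). Both the contribution measure and
    the softmax weighting are illustrative assumptions.
    """
    c = np.asarray(contributions, dtype=float) / temperature
    w = np.exp(c - c.max())
    w /= w.sum()                      # normalized fusion weights
    stacked = np.stack(logits_list)   # (n_models, n_classes)
    return np.tensordot(w, stacked, axes=1)  # weighted ensemble logits

# Equal contributions reduce to a plain average of the surrogate logits.
l1, l2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(adaptive_ensemble_logits([l1, l2], [0.5, 0.5]))  # -> [0.5 0.5]
```

With unequal contributions, the fusion shifts toward the surrogate that currently contributes most, which is the behavior the abstract attributes to AdaEA's monitoring of the discrepancy ratio.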

Cite:


GB/T 7714 Chen, Bin , Yin, Jiali , Chen, Shukai et al. An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability [J]. | CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV , 2023 : 4466-4475 .
MLA Chen, Bin et al. "An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability" . | CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV (2023) : 4466-4475 .
APA Chen, Bin , Yin, Jiali , Chen, Shukai , Chen, Bohao , Liu, Ximeng . An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability . | CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV , 2023 , 4466-4475 .

Version :

An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability EI
Conference Paper | 2023, 4466-4475
MetaFBP: Learning to Learn High-Order Predictor for Personalized Facial Beauty Prediction CPCI-S
Journal Article | 2023, 6072-6080 | PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023
WoS CC Cited Count: 1

Abstract :

Predicting individual aesthetic preferences holds significant practical applications and academic implications for human society. However, existing studies mainly focus on learning and predicting the commonality of facial attractiveness, with little attention given to Personalized Facial Beauty Prediction (PFBP). PFBP aims to develop a machine that can adapt to individual aesthetic preferences from only a few images rated by each user. In this paper, we formulate this task from a meta-learning perspective in which each user corresponds to a meta-task. To address the PFBP task, we draw inspiration from the human aesthetic mechanism, in which visual aesthetics in society follows a Gaussian distribution; this motivates us to disentangle user preferences into a commonality part and an individuality part. To this end, we propose a novel MetaFBP framework, in which we devise a universal feature extractor to capture aesthetic commonality and then adapt to aesthetic individuality by shifting the decision boundary of the predictor via a meta-learning mechanism. Unlike conventional meta-learning methods, which may struggle with slow adaptation or overfitting to tiny support sets, we propose a novel approach that optimizes a high-order predictor for fast adaptation. To validate the performance of the proposed method, we build several PFBP benchmarks using existing facial beauty prediction datasets rated by numerous users. Extensive experiments on these benchmarks demonstrate the effectiveness of the proposed MetaFBP method.

Keyword :

Dynamic Network; Facial Beauty Prediction; Meta Learning; Personalized Recommendation

Cite:


GB/T 7714 Lin, Luojun , Shen, Zhifeng , Yin, Jia-Li et al. MetaFBP: Learning to Learn High-Order Predictor for Personalized Facial Beauty Prediction [J]. | PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 , 2023 : 6072-6080 .
MLA Lin, Luojun et al. "MetaFBP: Learning to Learn High-Order Predictor for Personalized Facial Beauty Prediction" . | PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 (2023) : 6072-6080 .
APA Lin, Luojun , Shen, Zhifeng , Yin, Jia-Li , Liu, Qipeng , Yu, Yuanlong , Chen, Weijie . MetaFBP: Learning to Learn High-Order Predictor for Personalized Facial Beauty Prediction . | PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 , 2023 , 6072-6080 .

Version :

MetaFBP: Learning to Learn High-Order Predictor for Personalized Facial Beauty Prediction Scopus
Other | 2023, 6072-6080 | MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia
MetaFBP: Learning to Learn High-Order Predictor for Personalized Facial Beauty Prediction EI
Conference Paper | 2023, 6072-6080
SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation CPCI-S
Journal Article | 2023, 3852-3860 | THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3

Abstract :

As acquiring manual labels for data can be costly, unsupervised domain adaptation (UDA), which transfers knowledge learned from a richly labeled dataset to an unlabeled target dataset, is gaining increasing popularity. While extensive studies have been devoted to improving model accuracy on the target domain, the important issue of model robustness is neglected. To make things worse, conventional adversarial training (AT) methods for improving model robustness are inapplicable in the UDA scenario, since they train models on adversarial examples generated by a supervised loss function. In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving the adversarial robustness of UDA models. Based on the self-training paradigm, SRoUDA starts by pre-training a source model, applying a UDA baseline to source labeled data and target unlabeled data with a developed random masked augmentation (RMA), and then alternates between adversarial target-model training on pseudo-labeled target data and fine-tuning the source model via a meta step. While self-training allows the direct incorporation of AT into UDA, the meta step in SRoUDA further helps mitigate error propagation from noisy pseudo labels. Extensive experiments on various benchmark datasets demonstrate the state-of-the-art performance of SRoUDA, which achieves significant model-robustness improvement without harming clean accuracy.

Cite:


GB/T 7714 Zhu, Wanqing , Yin, Jia-Li , Chen, Bo-Hao et al. SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation [J]. | THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3 , 2023 : 3852-3860 .
MLA Zhu, Wanqing et al. "SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation" . | THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3 (2023) : 3852-3860 .
APA Zhu, Wanqing , Yin, Jia-Li , Chen, Bo-Hao , Liu, Ximeng . SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation . | THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3 , 2023 , 3852-3860 .


Global Learnable Attention for Single Image Super-Resolution SCIE
Journal Article | 2023, 45 (7), 8453-8465 | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
WoS CC Cited Count: 13

Abstract :

Self-similarity is valuable for exploring non-local textures in single image super-resolution (SISR). Researchers usually assume that the importance of non-local textures is positively related to their similarity scores. In this paper, we find, surprisingly, that when repairing severely damaged query textures, some low-similarity non-local textures that are closer to the target can provide more accurate and richer details than high-similarity ones. In these cases, low similarity does not mean inferior quality but is usually caused by different scales or orientations. Utilizing this finding, we propose a Global Learnable Attention (GLA) that adaptively modifies the similarity scores of non-local textures during training, instead of only using a fixed similarity scoring function such as the dot product. The proposed GLA can exploit non-local textures with low similarity but more accurate details to repair severely damaged textures. Furthermore, we propose adopting Super-Bit Locality-Sensitive Hashing (SB-LSH) as a preprocessing step for our GLA. With SB-LSH, the computational complexity of our GLA is reduced from quadratic to asymptotically linear with respect to the image size. In addition, the proposed GLA can be integrated into existing deep SISR models as an efficient general building block. Based on the GLA, we construct a Deep Learnable Similarity Network (DLSN), which achieves state-of-the-art performance on SISR tasks with different degradation types (e.g., blur and noise). Our code and a pre-trained DLSN have been uploaded to GitHub for validation.
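The core idea of replacing a fixed similarity scoring function with a learnable one can be sketched as attention whose dot-product scores are shifted by a trainable offset. The shape and placement of the offset `delta` are illustrative assumptions (the paper does not specify this exact parameterization), and the SB-LSH acceleration is omitted:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_learnable_attention(q, k, v, delta):
    """Toy sketch of attention with learnable similarity adjustments.

    The fixed scoring function dot(q, k) is replaced by dot(q, k) + delta,
    where delta is a trainable offset (broadcastable over the score matrix)
    that training can use to up-weight low-similarity but useful textures.
    """
    scores = q @ k.T + delta          # (n_query, n_key) adjusted similarities
    attn = softmax(scores, axis=-1)
    return attn @ v

# With delta = 0 this reduces to ordinary dot-product attention; a large
# learned offset lets a low-similarity key dominate instead.
```

The point of the sketch is only that the score matrix becomes a learnable quantity rather than a fixed function of q and k, which is the mechanism the abstract describes.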

Keyword :

Computational modeling; Convolution; deep learning; Degradation; Feature extraction; Image reconstruction; non-local attention; Self-similarity; single image super-resolution; Superresolution; Task analysis

Cite:


GB/T 7714 Su, Jian-Nan , Gan, Min , Chen, Guang-Yong et al. Global Learnable Attention for Single Image Super-Resolution [J]. | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE , 2023 , 45 (7) : 8453-8465 .
MLA Su, Jian-Nan et al. "Global Learnable Attention for Single Image Super-Resolution" . | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 45 . 7 (2023) : 8453-8465 .
APA Su, Jian-Nan , Gan, Min , Chen, Guang-Yong , Yin, Jia-Li , Chen, C. L. Philip . Global Learnable Attention for Single Image Super-Resolution . | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE , 2023 , 45 (7) , 8453-8465 .

Version :

Global Learnable Attention for Single Image Super-Resolution EI
Journal Article | 2023, 45 (7), 8453-8465 | IEEE Transactions on Pattern Analysis and Machine Intelligence
Global Learnable Attention for Single Image Super-Resolution Scopus
Journal Article | 2023, 45 (7), 8453-8465 | IEEE Transactions on Pattern Analysis and Machine Intelligence
Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness SCIE
Journal Article | 2023, 18, 2119-2131 | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY

Abstract :

In response to the threat of adversarial examples, adversarial training provides an attractive option for improving robustness by training models on online-augmented adversarial examples. However, most existing adversarial training methods focus on improving the model's robust accuracy by strengthening the adversarial examples while neglecting the increasing shift between natural data and adversarial examples, which leads to a decrease in natural accuracy. To maintain the trade-off between natural and robust accuracy, we alleviate the shift from the perspective of feature adaptation and propose Feature Adaptive Adversarial Training (FAAT), which optimizes class-conditional feature adaptation across natural data and adversarial examples. Specifically, we incorporate a class-conditional discriminator to encourage the features to become (1) class-discriminative and (2) invariant to the change of adversarial attacks. The novel FAAT framework enables the trade-off between natural and robust accuracy by generating features with similar distributions across natural and adversarial data within the same class, and it achieves higher overall robustness by benefiting from the class-discriminative feature characteristics. Experiments on various datasets demonstrate that FAAT produces more discriminative features and performs favorably against state-of-the-art methods.

Keyword :

Adaptation models; Adversarial example; adversarial training; Data models; feature adaption; model robustness; Robustness; Training

Cite:


GB/T 7714 Yin, Jia-Li , Chen, Bin , Zhu, Wanqing et al. Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness [J]. | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY , 2023 , 18 : 2119-2131 .
MLA Yin, Jia-Li et al. "Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness" . | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 18 (2023) : 2119-2131 .
APA Yin, Jia-Li , Chen, Bin , Zhu, Wanqing , Chen, Bo-Hao , Liu, Ximeng . Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness . | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY , 2023 , 18 , 2119-2131 .

Version :

Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness Scopus
Journal Article | 2023, 18, 2119-2131 | IEEE Transactions on Information Forensics and Security
Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness EI
Journal Article | 2023, 18, 2119-2131 | IEEE Transactions on Information Forensics and Security
Two Exposure Fusion Using Prior-Aware Generative Adversarial Network SCIE
Journal Article | 2022, 24, 2841-2851 | IEEE TRANSACTIONS ON MULTIMEDIA
WoS CC Cited Count: 12

Abstract :

Producing a high dynamic range (HDR) image from two low dynamic range (LDR) images with extreme exposures is challenging due to the lack of well-exposed content. Existing works either use pixel fusion based on weighted quantization or conduct feature fusion using deep learning techniques. In contrast to these methods, our core idea is to progressively incorporate the pixel-domain knowledge of LDR images into the feature fusion process. Specifically, we propose a novel Prior-Aware Generative Adversarial Network (PA-GAN), along with a new dual-level loss, for two-exposure fusion. The proposed PA-GAN is composed of a content-prior-guided encoder and a detail-prior-guided decoder, respectively in charge of content fusion and detail calibration. We further train the network using a dual-level loss that combines a semantic-level loss and a pixel-level loss. Extensive qualitative and quantitative evaluations on diverse image datasets demonstrate that the proposed PA-GAN achieves superior performance to state-of-the-art methods.

Keyword :

Calibration; Decoding; deep learning; Dynamic range; exposure fusion; Generative adversarial networks; High dynamic range image; Image fusion; Quantization (signal); Semantics

Cite:


GB/T 7714 Yin, Jia-Li , Chen, Bo-Hao , Peng, Yan-Tsung . Two Exposure Fusion Using Prior-Aware Generative Adversarial Network [J]. | IEEE TRANSACTIONS ON MULTIMEDIA , 2022 , 24 : 2841-2851 .
MLA Yin, Jia-Li et al. "Two Exposure Fusion Using Prior-Aware Generative Adversarial Network" . | IEEE TRANSACTIONS ON MULTIMEDIA 24 (2022) : 2841-2851 .
APA Yin, Jia-Li , Chen, Bo-Hao , Peng, Yan-Tsung . Two Exposure Fusion Using Prior-Aware Generative Adversarial Network . | IEEE TRANSACTIONS ON MULTIMEDIA , 2022 , 24 , 2841-2851 .

Version :

Two Exposure Fusion Using Prior-Aware Generative Adversarial Network EI
Journal Article | 2022, 24, 2841-2851 | IEEE Transactions on Multimedia
Actor-Critic Bilateral Filter for Noise-Robust Image Smoothing EI
Conference Paper | 2022, 273-277 | 24th IEEE International Symposium on Multimedia, ISM 2022

Abstract :

Bilateral filters have been used to achieve excellent edge-preserving image smoothing. However, most studies have focused on accelerating bilateral filtering rather than on the stability of the filtering process under small perturbations to its inputs. In this paper, we propose a novel actor-critic bilateral filter trained with a multistep learning scheme for high-stability edge-preserving image smoothing. We first designed the edge-preserving smoothing process as a Markov decision process that involves adjusting the width setting of the range kernel of a bilateral filter. Next, we trained our actor-critic bilateral filter in a multistep manner to learn the optimal sequence of width settings. Through extensive experiments on five benchmark datasets, we determined that the proposed actor-critic bilateral filter produces satisfactory edge-preserving smoothing results. © 2022 IEEE.
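The operation that the actor-critic policy tunes is a standard bilateral filter, whose range-kernel width sigma_r controls how aggressively intensity differences are smoothed. The sketch below shows only that underlying filter on a 1-D signal; the reinforcement-learning policy that sequences the width settings is omitted:

```python
import numpy as np

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Standard bilateral filter on a 1-D signal.

    sigma_s controls the spatial kernel, sigma_r the range kernel; the
    paper's agent repeatedly adjusts sigma_r, which is what makes the
    filter edge-preserving to varying degrees.
    """
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        spatial = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
        rng = np.exp(-((signal[idx] - signal[i]) ** 2) / (2 * sigma_r ** 2))
        w = spatial * rng
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

# A small sigma_r preserves a step edge: values across the edge receive
# near-zero range weights, so each side is smoothed only within itself.
step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(bilateral_filter_1d(step, sigma_r=0.05))
```

A larger sigma_r would blur the edge, which is why choosing the width setting well matters and why the paper casts that choice as a sequential decision problem.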

Keyword :

Markov processes; Nonlinear filtering; Reinforcement learning

Cite:


GB/T 7714 Chen, Yi-Jie , Wang, Yen-Chiao , Chen, Bo-Hao et al. Actor-Critic Bilateral Filter for Noise-Robust Image Smoothing [C] . 2022 : 273-277 .
MLA Chen, Yi-Jie et al. "Actor-Critic Bilateral Filter for Noise-Robust Image Smoothing" . (2022) : 273-277 .
APA Chen, Yi-Jie , Wang, Yen-Chiao , Chen, Bo-Hao , Cheng, Hsiang-Yin , Yin, Jia-Li . Actor-Critic Bilateral Filter for Noise-Robust Image Smoothing . (2022) : 273-277 .


Address: FZU Library (No. 2 Xuyuan Road, Fuzhou, Fujian, PRC; Post Code: 350116). Contact: 0591-22865326
Copyright: FZU Library. Technical support: Beijing Aegean Software Co., Ltd. 闽ICP备05005463号-1