Publication Search

Query:

Scholar Name: 印佳丽 (Yin, Jia-Li)

Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks CPCI-S
Journal Article | 2025, 9508-9516 | THIRTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, AAAI-25, VOL 39 NO 9

Abstract:

Backdoor attacks and adversarial attacks are two major security threats to deep neural networks (DNNs): the former is a training-time data-poisoning attack that implants backdoor triggers into models by injecting trigger patterns into training samples, while the latter is a testing-time attack that generates adversarial examples (AEs) from benign images to mislead a well-trained model. Previous works generally treat these two attacks separately, and the inherent connection between them is rarely explored. In this paper, we focus on bridging backdoor and adversarial attacks and observe two intriguing phenomena when applying adversarial attacks to an infected model implanted with backdoors: 1) a sample is harder to turn into an AE when the trigger is present; 2) AEs generated from backdoor samples are highly likely to be predicted as their true labels. Inspired by these observations, we propose a novel backdoor defense method, dubbed Adversarial-Inspired Backdoor Defense (AIBD), which isolates backdoor samples using a progressive top-q scheme and breaks the correlation between backdoor samples and their target labels using adversarial labels. Through extensive experiments on various datasets against six state-of-the-art backdoor attacks, AIBD-trained models on poisoned data demonstrate superior performance over existing defense methods.
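The isolation mechanism lends itself to a short sketch. The following is a minimal, hypothetical illustration (not the authors' code) of how the two observations could be operationalized: rank training samples by how much a PGD attack can raise their loss, on the premise that backdoor samples resist being turned into AEs, and flag the top-q fraction with the smallest gain as suspected poisons. The function names, the PGD budget, and the use of loss gain as the ranking signal are all assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_loss_gain(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Per-sample loss increase achieved by a PGD attack; a small gain
    suggests the sample is hard to turn into an AE, the backdoor
    indicator described in the abstract."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv.detach() + alpha * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    with torch.no_grad():
        clean = F.cross_entropy(model(x), y, reduction="none")
        adv = F.cross_entropy(model(x_adv), y, reduction="none")
    return adv - clean

def isolate_top_q(model, loader, q=0.05):
    """Flag the q fraction of samples whose adversarial loss gain is
    smallest. `loader` is assumed to yield (sample_indices, (images, labels))."""
    gains, indices = [], []
    for idx, (x, y) in loader:
        gains.append(pgd_loss_gain(model, x, y))
        indices.append(idx)
    gains, indices = torch.cat(gains), torch.cat(indices)
    k = max(1, int(q * gains.numel()))
    return set(indices[gains.argsort()[:k]].tolist())  # hardest-to-attack samples
```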

Cite:


GB/T 7714 Yin, Jia-Li, Wang, Weijian, Lyhwa, et al. Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks [J]. THIRTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, AAAI-25, VOL 39 NO 9, 2025: 9508-9516.
MLA Yin, Jia-Li, et al. "Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks." THIRTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, AAAI-25, VOL 39 NO 9 (2025): 9508-9516.
APA Yin, Jia-Li, Wang, Weijian, Lyhwa, Lin, Wei, Liu, Ximeng. Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks. THIRTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, AAAI-25, VOL 39 NO 9, 2025, 9508-9516.

Towards Adversarial-Robust Class-Incremental Learning via Progressively Volume-Up Perturbation Generation CPCI-S
Journal Article | 2025, 15032, 61-75 | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT II

Abstract:

Class-incremental learning (CIL) has been widely applied in the real world due to its flexibility and scalability, and recent advances in CIL have achieved outstanding performance. However, deep neural networks, including CIL models, struggle to resist adversarial attacks. At present, the majority of CIL research focuses on alleviating catastrophic forgetting, with little comprehensive exploration of enhancing adversarial robustness. To this end, we introduce a novel CIL framework called the Perturbation Volume-up Framework (PVF). This framework divides each epoch into multiple iterations, in which three main tasks are performed sequentially: intensifying adversarial data, extracting new knowledge, and reinforcing old knowledge. To intensify adversarial data, we propose the Fused Robustness Augmentation (FRA) approach, which incorporates more generalized knowledge into the adversarial data by randomly blending data and leveraging a finely tuned Jensen-Shannon (JS) divergence. For the remaining two tasks, we introduce a set of regularization techniques known as Knowledge Inspiration Regularization (KIR), which employs novel classification and distillation losses to enhance the model's generalization while preserving previously learned knowledge. Extensive experiments demonstrate the effectiveness of our method in enhancing the adversarial robustness of CIL models.
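Of the components above, the JS-divergence consistency term is the easiest to make concrete. Below is a minimal sketch under the assumption that FRA enforces agreement between predictions on clean, randomly blended, and adversarial views of a batch; the function names, mixing coefficient, and loss weight are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def js_divergence(p_list):
    """Jensen-Shannon divergence among a list of probability tensors."""
    m = torch.stack(p_list).mean(dim=0)
    return sum(F.kl_div(m.log(), p, reduction="batchmean") for p in p_list) / len(p_list)

def fra_style_loss(model, x, x_adv, y, beta=0.5, lam=12.0):
    # Randomly blend pairs of samples within the batch (mixup-style).
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = beta * x + (1 - beta) * x[perm]
    logits_adv = model(x_adv)
    p_clean = F.softmax(model(x), dim=1)
    p_mix = F.softmax(model(x_mix), dim=1)
    p_adv = F.softmax(logits_adv, dim=1)
    ce = F.cross_entropy(logits_adv, y)  # robust classification term
    return ce + lam * js_divergence([p_clean, p_mix, p_adv])
```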

Keyword:

Adversarial robustness; Class-incremental learning; Data Augmentation; Knowledge distillation

Cite:


GB/T 7714 You, Yeliang, Chen, Bin, Yin, Jia-li, et al. Towards Adversarial-Robust Class-Incremental Learning via Progressively Volume-Up Perturbation Generation [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT II, 2025, 15032: 61-75.
MLA You, Yeliang, et al. "Towards Adversarial-Robust Class-Incremental Learning via Progressively Volume-Up Perturbation Generation." PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT II 15032 (2025): 61-75.
APA You, Yeliang, Chen, Bin, Yin, Jia-li, Liu, Ximeng, Lin, Wei. Towards Adversarial-Robust Class-Incremental Learning via Progressively Volume-Up Perturbation Generation. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT II, 2025, 15032, 61-75.

Version:

Towards Adversarial-Robust Class-Incremental Learning via Progressively Volume-Up Perturbation Generation EI
Conference Paper | 2025, 15032 LNCS, 61-75
Towards Adversarial-Robust Class-Incremental Learning via Progressively Volume-Up Perturbation Generation Scopus
Other | 2025, 15032 LNCS, 61-75 | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks EI
Conference Paper | 2025, 39 (9), 9508-9516 | 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025

Abstract:

Backdoor attacks and adversarial attacks are two major security threats to deep neural networks (DNNs): the former is a training-time data-poisoning attack that implants backdoor triggers into models by injecting trigger patterns into training samples, while the latter is a testing-time attack that generates adversarial examples (AEs) from benign images to mislead a well-trained model. Previous works generally treat these two attacks separately, and the inherent connection between them is rarely explored. In this paper, we focus on bridging backdoor and adversarial attacks and observe two intriguing phenomena when applying adversarial attacks to an infected model implanted with backdoors: 1) a sample is harder to turn into an AE when the trigger is present; 2) AEs generated from backdoor samples are highly likely to be predicted as their true labels. Inspired by these observations, we propose a novel backdoor defense method, dubbed Adversarial-Inspired Backdoor Defense (AIBD), which isolates backdoor samples using a progressive top-q scheme and breaks the correlation between backdoor samples and their target labels using adversarial labels. Through extensive experiments on various datasets against six state-of-the-art backdoor attacks, AIBD-trained models on poisoned data demonstrate superior performance over existing defense methods. Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Keyword:

Backpropagation; Deep neural networks

Cite:


GB/T 7714 Yin, Jia-Li, Wang, Weijian, Lyhwa, et al. Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks [C]. 2025: 9508-9516.
MLA Yin, Jia-Li, et al. "Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks." (2025): 9508-9516.
APA Yin, Jia-Li, Wang, Weijian, Lyhwa, Lin, Wei, Liu, Ximeng. Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks. (2025): 9508-9516.

Version:

Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks Scopus
Other | 2025, 39 (9), 9508-9516 | Proceedings of the AAAI Conference on Artificial Intelligence
Q-TrHDRI: A Query-Based Transformer for High Dynamic Range Imaging with Dynamic Scenes CPCI-S
Journal Article | 2024, 14435, 301-312 | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XI

Abstract:

In the absence of well-exposed content in images, high dynamic range imaging (HDRI) provides an attractive option that fuses a stack of low dynamic range (LDR) images into an HDR image. Existing HDRI methods utilize convolutional neural networks (CNNs) to model local correlations, which perform well on LDR images of static scenes but often fail on dynamic scenes with large motions. Here we focus on dynamic scenarios in HDRI and propose a query-based Transformer framework, called Q-TrHDRI. To avoid ghosting artifacts induced by fusing moving content, Q-TrHDRI uses a Transformer instead of CNNs for feature enhancement and fusion, allowing global interactions across different LDR images. To further improve performance, we comprehensively investigate different Transformer strategies and propose a query-attention scheme for finding related content across LDR images and a linear fusion scheme for skillfully borrowing complementary content from LDR images. All these efforts make Q-TrHDRI a simple yet solid Transformer-based HDRI baseline. Thorough experiments also validate the effectiveness of the proposed Q-TrHDRI, which achieves superior performance over state-of-the-art methods on various challenging datasets.
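As a rough illustration of the query-attention and linear-fusion ideas, the sketch below treats the reference exposure's feature tokens as queries and attends over tokens from the other LDR images, then linearly fuses the attended content with the query features. The module name, head count, and fusion layout are assumptions; this is not the published Q-TrHDRI architecture.

```python
import torch
import torch.nn as nn

class QueryFusion(nn.Module):
    """Reference-exposure tokens query the other LDR exposures, so fused
    content is gathered globally rather than from a local CNN window."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)  # linear fusion of query + attended

    def forward(self, ref_feat, other_feats):
        # ref_feat: (B, N, C) tokens of the reference exposure
        # other_feats: (B, M, C) tokens concatenated from the other LDR images
        attended, _ = self.attn(ref_feat, other_feats, other_feats)
        return self.proj(torch.cat([ref_feat, attended], dim=-1))
```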

Cite:


GB/T 7714 Chen, Bin, Yin, Jia-Li, Chen, Bo-Hao, et al. Q-TrHDRI: A Query-Based Transformer for High Dynamic Range Imaging with Dynamic Scenes [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XI, 2024, 14435: 301-312.
MLA Chen, Bin, et al. "Q-TrHDRI: A Query-Based Transformer for High Dynamic Range Imaging with Dynamic Scenes." PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XI 14435 (2024): 301-312.
APA Chen, Bin, Yin, Jia-Li, Chen, Bo-Hao, Liu, Ximeng. Q-TrHDRI: A Query-Based Transformer for High Dynamic Range Imaging with Dynamic Scenes. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XI, 2024, 14435, 301-312.

Version:

Q-TrHDRI: A Query-Based Transformer for High Dynamic Range Imaging with Dynamic Scenes EI
Conference Paper | 2024, 14435 LNCS, 301-312
Q-TrHDRI: A Query-Based Transformer for High Dynamic Range Imaging with Dynamic Scenes Scopus
Other | 2024, 14435 LNCS, 301-312 | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
基于一致性感知特征融合的高动态范围成像方法 (High Dynamic Range Imaging Based on Coherence-Aware Feature Aggregation)
Journal Article | 2024, 47 (10), 2352-2367 | 计算机学报 (Chinese Journal of Computers)

Abstract:

High dynamic range imaging (HDRI) fuses multiple low dynamic range (LDR) images to extend the dynamic range and recover complete image content, providing a practical remedy for the content loss caused by the limited dynamic range of camera sensors. Decades of research have produced many effective HDRI methods that achieve near-optimal performance in static, well-exposed scenes. In real scenes, however, object motion and camera shifts are unavoidable, and directly applying conventional HDRI methods produces severe ghosting and artifacts in the fused HDR image. Fusion-only HDRI pipelines are therefore unsuitable for practical use, and HDRI in dynamic scenes remains challenging. Research on HDRI for dynamic scenes has consequently advanced rapidly, with recent methods leveraging deep convolutional neural networks (CNNs) for better performance. In these CNN-based methods, feature aggregation plays a crucial role in recovering complete content and removing artifacts. Conventional feature aggregation first concatenates LDR image features via skip connections or attention modules and then gradually attends to different local features through stacked convolutions. Such schemes, however, usually ignore the rich contextual dependencies across the LDR image sequence and underuse the texture coherence among features. To address this, this paper proposes a novel Coherence-Aware Feature Aggregation (CAFA) scheme which, during convolution, samples feature information that lies at different spatial positions but shares the same contextual information, thereby explicitly incorporating contextual coherence into feature aggregation. Building on CAFA, the paper further presents CAHDRNet, a coherence-aware HDR imaging network for dynamic scenes. To better integrate CAFA, CAHDRNet is built with three additional learnable modules. First, a learnable feature extractor is constructed from VGG-19 pre-trained on ImageNet, with its parameters continually updated during training; this enables joint feature learning over the LDR images and lays a solid foundation for assessing contextual coherence in CAFA. Next, the proposed CAFA module aggregates features by sampling information with the same context in the image features. Finally, a multi-scale residual completion module processes the fused features, learning with different dilation rates to obtain stronger feature representations and to fill in plausible details in missing regions. A soft-attention module is also designed to learn the importance of different image regions, so that features complementary to the reference image are obtained during skip connections. Extensive experiments validate the effectiveness of CAHDRNet and show that it outperforms existing state-of-the-art methods. Specifically, on the Kalantari dataset, CAHDRNet improves HDR-VDP-2 and PSNR-L by 1.61 and 0.68, respectively, over the next-best method, AHDRNet.
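Among the modules described, the multi-scale residual completion step translates most directly into code. The sketch below is an assumed reading, not the published implementation: parallel 3x3 convolutions with different dilation rates are fused by a 1x1 convolution and added back as a residual, so missing regions are completed from progressively larger contexts.

```python
import torch
import torch.nn as nn

class MultiScaleResidualCompletion(nn.Module):
    """Parallel dilated convolutions fused and applied as a residual;
    the dilation rates (1, 2, 4) are an illustrative choice."""
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(len(rates) * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        multi = torch.cat([self.act(b(x)) for b in self.branches], dim=1)
        return x + self.fuse(multi)  # fill in details on top of the input features
```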

Keyword:

Contextual coherence; Convolutional sampling; Image fusion; Feature aggregation; High dynamic range imaging

Cite:


GB/T 7714 印佳丽, 韩津, 陈斌, et al. 基于一致性感知特征融合的高动态范围成像方法 [J]. 计算机学报, 2024, 47 (10): 2352-2367.
MLA 印佳丽, et al. "基于一致性感知特征融合的高动态范围成像方法." 计算机学报 47.10 (2024): 2352-2367.
APA 印佳丽, 韩津, 陈斌, 刘西蒙. 基于一致性感知特征融合的高动态范围成像方法. 计算机学报, 2024, 47 (10), 2352-2367.

Version:

基于一致性感知特征融合的高动态范围成像方法 (High Dynamic Range Imaging Based on Coherence-Aware Feature Aggregation)
Journal Article | 2024, 47 (10), 2352-2367 | 计算机学报 (Chinese Journal of Computers)
Adversarial Example Quality Assessment: A Large-scale Dataset and Strong Baseline EI
Conference Paper | 2024, 4786-4794 | 32nd ACM International Conference on Multimedia, MM 2024

Abstract:

Adversarial examples (AEs), which are maliciously crafted by adding perturbations to benign images, reveal the vulnerability of deep neural networks (DNNs) and have been used as a benchmark for evaluating model robustness. While great effort has been devoted to generating AEs with stronger attack ability, the visual quality of AEs has generally been neglected in previous studies. The lack of a good quality measure for AEs makes it very hard to compare the relative merits of attack techniques and hinders technological advancement. How to evaluate the visual quality of AEs remains an understudied and unsolved problem. In this work, we make the first attempt to fill this gap by presenting an image quality assessment method specifically designed for AEs. Toward this goal, we first construct a new database, called AdvDB, built on diverse adversarial examples with elaborate annotations. We also propose a detection-based structural similarity index (AdvDSS) for adversarial example perceptual quality assessment. Specifically, the visual saliency for capturing near-threshold adversarial distortions is first detected via human visual system (HVS) techniques, and then the structural similarity is extracted to predict the quality score. Moreover, we further propose AEQA for overall adversarial example quality assessment by integrating the perceptual quality and attack intensity of AEs. Extensive experiments validate that the proposed AdvDSS achieves state-of-the-art performance that is more consistent with human opinions. © 2024 ACM.
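To make the saliency-weighted structural-similarity idea tangible, here is a stand-in sketch, not the authors' AdvDSS: an SSIM map between the benign image and its AE is weighted by a saliency map, so near-threshold distortions in salient regions dominate the score. The saliency detector itself is assumed to be supplied externally.

```python
import torch
import torch.nn.functional as F

def ssim_map(x, y, c1=0.01 ** 2, c2=0.03 ** 2, win=11):
    """Per-pixel SSIM between images in [0, 1] with shape (B, C, H, W)."""
    pad = win // 2
    mu_x = F.avg_pool2d(x, win, 1, pad)
    mu_y = F.avg_pool2d(y, win, 1, pad)
    var_x = F.avg_pool2d(x * x, win, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, pad) - mu_x * mu_y
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def saliency_weighted_quality(benign, adv, saliency):
    """Quality score: saliency-weighted mean of the SSIM map (higher = better).
    `saliency` is a (B, 1, H, W) map from any HVS-style detector."""
    w = saliency / saliency.sum(dim=(-2, -1), keepdim=True).clamp_min(1e-8)
    return (ssim_map(benign, adv) * w).sum(dim=(-2, -1)).mean()
```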

Cite:


GB/T 7714 Yin, Jia-Li, Chen, Menghao, Han, Jin, et al. Adversarial Example Quality Assessment: A Large-scale Dataset and Strong Baseline [C]. 2024: 4786-4794.
MLA Yin, Jia-Li, et al. "Adversarial Example Quality Assessment: A Large-scale Dataset and Strong Baseline." (2024): 4786-4794.
APA Yin, Jia-Li, Chen, Menghao, Han, Jin, Chen, Bo-Hao, Liu, Ximeng. Adversarial Example Quality Assessment: A Large-scale Dataset and Strong Baseline. (2024): 4786-4794.

Version:

Adversarial Example Quality Assessment: A Large-scale Dataset and Strong Baseline Scopus
Other | 2024, 4786-4794 | MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
MEAT: MEDIAN-ENSEMBLE ADVERSARIAL TRAINING FOR IMPROVING ROBUSTNESS AND GENERALIZATION CPCI-S
Journal Article | 2024, 5600-5604 | 2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024

Abstract:

Self-ensemble adversarial training methods improve model robustness by ensembling models from different training epochs, e.g., by model weight averaging (WA). However, previous research has shown that self-ensemble defense methods in adversarial training (AT) still suffer from robust overfitting, which severely affects generalization performance. Empirically, in the late phases of training, AT overfits to the extent that the individual models used for weight averaging also suffer from overfitting and produce anomalous weight values; the self-ensemble model then continues to undergo robust overfitting because the weight anomalies are not removed. To solve this problem, we tackle the influence of outliers in weight space and propose an easy-to-operate and effective Median-Ensemble Adversarial Training (MEAT) method that addresses robust overfitting in self-ensemble defenses at the source by searching for the median of the historical model weights. Experimental results show that MEAT achieves the best robustness against the powerful AutoAttack and can effectively alleviate robust overfitting. We further demonstrate that most defense methods can improve robust generalization and robustness by combining with MEAT.
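The median-of-historical-weights idea can be sketched in a few lines. The snippet below is illustrative (the checkpointing cadence, names, and the decision to take the median over every parameter and buffer are assumptions): it builds an ensemble whose parameters are the element-wise median over saved checkpoints, which, unlike averaging, is insensitive to a few anomalous weight values.

```python
import copy
import torch

def median_ensemble(model, history_state_dicts):
    """Return a copy of `model` whose weights are the element-wise median
    of the checkpoints in `history_state_dicts` (a list of state dicts
    saved at different epochs)."""
    ensemble = copy.deepcopy(model)
    merged = {}
    for name in ensemble.state_dict():
        stacked = torch.stack([sd[name].float() for sd in history_state_dicts])
        merged[name] = stacked.median(dim=0).values  # robust to outlier weights
    ensemble.load_state_dict(merged)
    return ensemble
```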

Keyword:

Adversarial robustness; adversarial training; robust generalization; self-ensemble

Cite:


GB/T 7714 Hu, Zhaozhe, Yin, Jia-Li, Chen, Bin, et al. MEAT: MEDIAN-ENSEMBLE ADVERSARIAL TRAINING FOR IMPROVING ROBUSTNESS AND GENERALIZATION [J]. 2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024: 5600-5604.
MLA Hu, Zhaozhe, et al. "MEAT: MEDIAN-ENSEMBLE ADVERSARIAL TRAINING FOR IMPROVING ROBUSTNESS AND GENERALIZATION." 2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024 (2024): 5600-5604.
APA Hu, Zhaozhe, Yin, Jia-Li, Chen, Bin, Lin, Luojun, Chen, Bo-Hao, Liu, Ximeng. MEAT: MEDIAN-ENSEMBLE ADVERSARIAL TRAINING FOR IMPROVING ROBUSTNESS AND GENERALIZATION. 2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, 5600-5604.

Version:

MEAT: MEDIAN-ENSEMBLE ADVERSARIAL TRAINING FOR IMPROVING ROBUSTNESS AND GENERALIZATION EI
Conference Paper | 2024, 5600-5604
MEAT: MEDIAN-ENSEMBLE ADVERSARIAL TRAINING FOR IMPROVING ROBUSTNESS AND GENERALIZATION Scopus
Other | 2024, 5600-5604 | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Global Learnable Attention for Single Image Super-Resolution SCIE
Journal Article | 2023, 45 (7), 8453-8465 | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
WoS CC Cited Count: 13

Abstract:

Self-similarity is valuable for exploring non-local textures in single image super-resolution (SISR). Researchers usually assume that the importance of non-local textures is positively related to their similarity scores. In this paper, we find, surprisingly, that when repairing severely damaged query textures, some low-similarity non-local textures that are closer to the target can provide more accurate and richer details than high-similarity ones. In these cases, low similarity does not mean inferior quality; it is usually caused by different scales or orientations. Utilizing this finding, we propose a Global Learnable Attention (GLA) that adaptively modifies the similarity scores of non-local textures during training, instead of using only a fixed similarity scoring function such as the dot product. The proposed GLA can exploit non-local textures with low similarity but more accurate details to repair severely damaged textures. Furthermore, we adopt Super-Bit Locality-Sensitive Hashing (SB-LSH) as a preprocessing step for GLA; with SB-LSH, the computational complexity of GLA is reduced from quadratic to asymptotically linear in the image size. In addition, GLA can be integrated into existing deep SISR models as an efficient general building block. Based on GLA, we construct a Deep Learnable Similarity Network (DLSN), which achieves state-of-the-art performance on SISR tasks with different degradation types (e.g., blur and noise). Our code and a pre-trained DLSN have been uploaded to GitHub for validation.
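A toy version of the learnable-similarity idea is sketched below under stated assumptions: attention scores start as scaled dot products, and a small learnable layer re-scores them during training, so low-similarity but useful textures can be up-weighted. This omits the SB-LSH bucketing that makes the real GLA near-linear; the full N-by-N attention here remains quadratic.

```python
import torch
import torch.nn as nn

class LearnableSimilarityAttention(nn.Module):
    """Non-local attention whose similarity scores are adjusted by a
    learned scalar mapping instead of being fixed dot products."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.rescore = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x):
        # x: (B, N, C) feature tokens
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5     # (B, N, N)
        scores = self.rescore(scores.unsqueeze(-1)).squeeze(-1)  # learned re-scoring
        return torch.softmax(scores, dim=-1) @ v
```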

Keyword:

Computational modeling; Convolution; deep learning; Degradation; Feature extraction; Image reconstruction; non-local attention; Self-similarity; single image super-resolution; Superresolution; Task analysis

Cite:


GB/T 7714 Su, Jian-Nan, Gan, Min, Chen, Guang-Yong, et al. Global Learnable Attention for Single Image Super-Resolution [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (7): 8453-8465.
MLA Su, Jian-Nan, et al. "Global Learnable Attention for Single Image Super-Resolution." IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 45.7 (2023): 8453-8465.
APA Su, Jian-Nan, Gan, Min, Chen, Guang-Yong, Yin, Jia-Li, Chen, C. L. Philip. Global Learnable Attention for Single Image Super-Resolution. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (7), 8453-8465.

Version:

Global Learnable Attention for Single Image Super-Resolution EI
Journal Article | 2023, 45 (7), 8453-8465 | IEEE Transactions on Pattern Analysis and Machine Intelligence
Global Learnable Attention for Single Image Super-Resolution Scopus
Journal Article | 2023, 45 (7), 8453-8465 | IEEE Transactions on Pattern Analysis and Machine Intelligence
SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation EI
Conference Paper | 2023, 37, 3852-3860 | 37th AAAI Conference on Artificial Intelligence, AAAI 2023

Abstract:

As acquiring manual labels for data can be costly, unsupervised domain adaptation (UDA), which transfers knowledge learned from a richly labeled dataset to an unlabeled target dataset, is gaining popularity. While extensive studies have been devoted to improving model accuracy on the target domain, the important issue of model robustness is neglected. To make things worse, conventional adversarial training (AT) methods for improving model robustness are inapplicable in the UDA scenario, since they train models on adversarial examples generated by a supervised loss function. In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving the adversarial robustness of UDA models. Based on the self-training paradigm, SRoUDA starts by pre-training a source model, applying a UDA baseline to source labeled data and target unlabeled data with a developed random masked augmentation (RMA), and then alternates between adversarial target-model training on pseudo-labeled target data and fine-tuning the source model via a meta step. While self-training allows the direct incorporation of AT into UDA, the meta step in SRoUDA further helps mitigate error propagation from noisy pseudo labels. Extensive experiments on various benchmark datasets demonstrate the state-of-the-art performance of SRoUDA, which achieves significant model-robustness improvement without harming clean accuracy. Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
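The random masked augmentation (RMA) is the most self-contained piece to illustrate. Below is a small sketch under assumed settings (the patch size, mask ratio, and grid-masking scheme are guesses, not the paper's specification): it zeroes a random fraction of non-overlapping patches so pre-training sees occluded views of each image.

```python
import torch

def random_masked_augmentation(x, patch=16, ratio=0.3):
    """Randomly zero out `ratio` of non-overlapping patches in a batch
    of images (B, C, H, W); assumes H and W are multiples of `patch`."""
    b, _, h, w = x.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=x.device) > ratio).float()
    mask = keep.repeat_interleave(patch, dim=-2).repeat_interleave(patch, dim=-1)
    return x * mask  # masked view; broadcasts over the channel dimension
```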

Keyword:

Artificial intelligence; Benchmarking

Cite:


GB/T 7714 Zhu, Wanqing, Yin, Jia-Li, Chen, Bo-Hao, et al. SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation [C]. 2023: 3852-3860.
MLA Zhu, Wanqing, et al. "SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation." (2023): 3852-3860.
APA Zhu, Wanqing, Yin, Jia-Li, Chen, Bo-Hao, Liu, Ximeng. SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation. (2023): 3852-3860.

Version:

SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation Scopus
Other | 2023, 37, 3852-3860

