Abstract:
Deep neural networks typically require a large number of accurately labeled images for training with the cross-entropy loss, and they readily overfit noisy labels. Contrastive learning has proven effective for learning with noisy labels because it can learn discriminative representations. However, it captures only a weak correlation between samples and their semantic classes: it ignores the correlation between instances and labels, as well as the semantic divergence among instances sharing the same label. This may inevitably lead to class collisions and hamper label correction. To address these problems, this study proposes a noisy-label learning framework that performs label correction and constructs a contrastive prototypical classifier cooperatively. In particular, the prototypical classifier pulls each instance toward its class prototype and pushes it away from the other prototypes via a contrastive prototypical loss, improving intraclass compactness. Furthermore, we provide a theoretical guarantee that the contrastive prototypical loss has a smaller Lipschitz constant and thus boosts robustness. Motivated by this theoretical analysis, the framework performs label correction using the predictions of the contrastive prototypical classifier. Extensive experiments demonstrate that the proposed framework achieves superior classification accuracy on synthetic datasets with various noise patterns and levels.
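The record does not specify the form of the contrastive prototypical loss. Below is a minimal sketch of one plausible version, assuming L2-normalized features, one prototype per class, and an InfoNCE-style objective with a temperature hyperparameter; the function name and the default temperature are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_prototypical_loss(embeddings, prototypes, labels, temperature=0.1):
    """Illustrative contrastive prototypical loss (not the paper's exact form).

    Pulls each instance toward its own class prototype and pushes it away
    from the other prototypes via a softmax over instance-prototype
    similarities, which encourages intraclass compactness.

    embeddings: (N, D) instance features
    prototypes: (C, D) one prototype per class
    labels:     (N,)  class index for each instance
    """
    # Cosine similarity between each instance and every class prototype
    embeddings = F.normalize(embeddings, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    logits = embeddings @ prototypes.t() / temperature  # (N, C)
    # Cross-entropy over prototypes: the own-class prototype is the positive,
    # all remaining prototypes act as negatives.
    return F.cross_entropy(logits, labels)

# Toy usage: 8 instances, 3 classes, 16-dim features
feats = torch.randn(8, 16)
protos = torch.randn(3, 16)
labs = torch.randint(0, 3, (8,))
loss = contrastive_prototypical_loss(feats, protos, labs)
```

In the framework described by the abstract, the predictions of such a prototype-based classifier would then drive label correction; the sketch above covers only the loss term.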
Source:
INFORMATION SCIENCES
ISSN: 0020-0255
Year: 2023
Volume: 649
CAS Journal Grade: 1
Cited Count:
WoS CC Cited Count: 1
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0