Abstract:
Self-attention-based approaches that leverage global context for hyperspectral image (HSI) classification have gained increasing prominence. Nevertheless, because equivalent attention weight is assigned to all tokens (pixels or patches), the existing self-attention mechanism inadvertently prioritizes non-label-specific information over the intrinsic label-specific information, which produces attention shift and redundancy in HSI classification. To alleviate this barrier, we propose the center-specific perception transformer (CP-Transformer) network, the first attempt to perform class-guided attention and filter out interference factors in HSI classification feature representation. Specifically, the central-pixel focus attention (CFA) module computes label-related attention between the center pixel and the other pixels. In this manner, CFA reduces computational complexity and closely aligns with the center-pixel labeling strategy. In addition, the spectral saliency focus attention (SSFA) module captures spectral correlation by focusing on salient bands, providing a beneficial supplement to the spatial features. Moreover, the hierarchical integration network (HIN) constructs an inference network that integrates and rectifies spatial-spectral features for HSI classification. Experimental results on four popular HSI datasets demonstrate that the proposed method achieves robust performance compared with other state-of-the-art methods. Our code will be released at https://github.com/Chirsycy/CP-Transformer
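The abstract's core idea behind CFA can be illustrated with a minimal sketch: instead of full self-attention, only the center pixel of a patch acts as the query, so one attention row of cost O(N·d) replaces the full O(N²·d) score matrix. This is an assumption-laden toy in NumPy (random projections stand in for learned weights; the function name `center_focus_attention` and the choice of `tokens[n // 2]` as the labeled center are hypothetical, not taken from the paper):

```python
import numpy as np

def center_focus_attention(tokens, d_k=None):
    """Toy sketch of center-pixel attention: the center token is the
    sole query, yielding one (n,) attention row instead of (n, n)."""
    n, d = tokens.shape
    d_k = d_k or d
    rng = np.random.default_rng(0)
    # Hypothetical "learned" projections, random here for illustration.
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    center = tokens[n // 2]             # assumed center pixel of the patch
    q = center @ Wq                     # (d_k,)  single query
    K = tokens @ Wk                     # (n, d_k) keys for all pixels
    V = tokens @ Wv                     # (n, d_k) values for all pixels
    scores = K @ q / np.sqrt(d_k)       # (n,) scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax over the single row
    return weights @ V                  # (d_k,) center representation
```

In the full attention variant the score matrix is `Q @ K.T` with shape (n, n); keeping only the center's query row is what makes the complexity reduction claimed in the abstract possible.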
Source:
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
ISSN: 0196-2892
Year: 2025
Volume: 63
JCR Impact Factor (2023): 7.500