Abstract:
Attention mechanisms have been introduced to exploit deep-level information for image restoration by capturing feature dependencies. However, existing attention mechanisms often have limited perceptual capabilities and are incompatible with low-power devices due to computational resource constraints. Therefore, we propose a feature enhanced cascading attention network (FECAN) that introduces a novel feature enhanced cascading attention (FECA) mechanism, consisting of enhanced shuffle attention (ESA) and multi-scale large separable kernel attention (MLSKA). Specifically, ESA enhances high-frequency texture features in the feature maps, and MLSKA then performs further extraction on the enhanced features. Rich, fine-grained high-frequency information is extracted and fused across multiple perceptual layers, improving super-resolution (SR) performance. To validate FECAN's effectiveness, we evaluate variants of different complexity, built by stacking different numbers of high-frequency enhancement modules (HFEMs) that contain FECA. Extensive experiments on benchmark datasets demonstrate that FECAN outperforms state-of-the-art lightweight SR networks in terms of objective evaluation metrics and subjective visual quality. Specifically, at the ×4 scale with a 121K-parameter model, FECAN achieves a 0.07 dB improvement in average peak signal-to-noise ratio (PSNR) over the second-ranked MAN-tiny, while reducing network parameters by approximately 19% and FLOPs by 20%. This demonstrates a better trade-off between SR performance and model complexity.
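The cascade described in the abstract (ESA first re-weighting high-frequency texture features, MLSKA then extracting multi-scale context from the enhanced maps) can be illustrated with a minimal PyTorch sketch. The module internals below, including the channel-shuffle grouping, the gating branch, and the kernel sizes (7, 11, 21), are illustrative assumptions, not the paper's actual ESA/MLSKA designs:

```python
import torch
import torch.nn as nn

class ESA(nn.Module):
    """Sketch of an enhanced shuffle attention block: channel shuffle
    followed by a lightweight gate that re-weights feature channels.
    (Hypothetical structure; the paper specifies the real ESA design.)"""
    def __init__(self, channels, groups=8):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # ShuffleNet-style channel shuffle to mix information across groups
        x = x.view(b, self.groups, c // self.groups, h, w)
        x = x.transpose(1, 2).reshape(b, c, h, w)
        return x * self.gate(x)

class MLSKA(nn.Module):
    """Sketch of multi-scale large separable kernel attention: parallel
    depth-wise separable large-kernel branches (1xk then kx1), summed,
    fused by a 1x1 conv, and applied as a multiplicative attention map.
    (Kernel sizes are assumptions for illustration.)"""
    def __init__(self, channels, kernel_sizes=(7, 11, 21)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, (1, k),
                          padding=(0, k // 2), groups=channels),
                nn.Conv2d(channels, channels, (k, 1),
                          padding=(k // 2, 0), groups=channels),
            )
            for k in kernel_sizes
        ])
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        attn = sum(branch(x) for branch in self.branches)
        return x * self.fuse(attn)

class FECA(nn.Module):
    """Cascade: ESA sharpens high-frequency detail, then MLSKA extracts
    multi-scale context from the enhanced features."""
    def __init__(self, channels):
        super().__init__()
        self.esa = ESA(channels)
        self.mlska = MLSKA(channels)

    def forward(self, x):
        return self.mlska(self.esa(x))

# Usage sketch: shape-preserving attention over a 48-channel feature map
x = torch.randn(1, 48, 64, 64)
y = FECA(48)(x)   # -> torch.Size([1, 48, 64, 64])
```

In the full network, blocks like this would sit inside each stacked HFEM; the stack depth controls the parameter/FLOPs budget that the abstract's complexity comparison refers to.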
Source: SCIENTIFIC REPORTS
ISSN: 2045-2322
Year: 2025
Volume: 15
Issue: 1
Impact Factor: 3.800 (JCR@2023)
Cited Count:
WoS CC Cited Count: 1
ESI Highly Cited Papers on the List: 0