Abstract:
Blind super-resolution (BlindSR) has recently attracted attention in the field of remote sensing. Due to the lack of paired data, most works assume that the acquired remote sensing images are high-resolution (HR) and use predefined degradation models to synthesize low-resolution (LR) images for training and evaluation. However, these acquired remote sensing images are often degraded by various factors and still require super-resolution (SR) reconstruction to meet practical needs. Using them as ground-truth (GT) images limits the model's ability to restore fine details, resulting in blurry and noisy reconstructions. To overcome these limitations, we propose an unsupervised degradation-aware network that transforms natural images into the same degraded domain as real-world remote sensing images. These texture-rich natural images then serve as references for fine-grained restoration, enabling the network to produce clearer reconstructions. Furthermore, we discovered the remarkable capability of the patchwise discriminator to perceive the degradation type of different regions within an acquired remote sensing image. Inspired by this finding, we design a novel degradation representation module (DRM) that estimates the degradation information from LR images and guides the network to perform adaptive restoration. Comprehensive experimental results demonstrate that our proposed unsupervised blind super-resolution framework achieves state-of-the-art (SOTA) restoration performance. Our code and pretrained models have been uploaded to GitHub (https://github.com/55Dupup/UDASR) for validation.
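The abstract names two components, a patchwise discriminator and a degradation representation module (DRM), without giving their configurations. The PyTorch sketch below is only an illustrative approximation of those ideas, not the authors' implementation: the class names, channel widths, and output dimensions are assumptions. It shows how a PatchGAN-style discriminator produces a per-region score map (so different regions of an image can be judged independently) and how an LR image can be encoded into a compact degradation code that could condition an adaptive SR branch.

```python
import torch
import torch.nn as nn


class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator (illustrative): each spatial entry of the
    output map scores a local patch of the input, so region-wise differences
    in degradation can be perceived separately."""

    def __init__(self, in_ch=3, base_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_ch, base_ch * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base_ch * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_ch * 2, 1, 4, stride=1, padding=1),  # per-patch logits
        )

    def forward(self, x):
        return self.net(x)  # (B, 1, H', W') patchwise score map


class DegradationRepresentationModule(nn.Module):
    """Hypothetical DRM sketch: encode an LR image into a compact vector of
    degradation statistics that can guide adaptive restoration."""

    def __init__(self, in_ch=3, feat_ch=64, rep_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.AdaptiveAvgPool2d(1),  # pool to global degradation statistics
        )
        self.fc = nn.Linear(feat_ch, rep_dim)

    def forward(self, lr):
        feat = self.encoder(lr).flatten(1)  # (B, feat_ch)
        return self.fc(feat)                # (B, rep_dim) degradation code


if __name__ == "__main__":
    lr = torch.randn(2, 3, 64, 64)                        # toy LR batch
    print(PatchDiscriminator()(lr).shape)                 # torch.Size([2, 1, 15, 15])
    print(DegradationRepresentationModule()(lr).shape)    # torch.Size([2, 128])
```

In this kind of design, the degradation code would typically modulate the SR network's features (e.g., via channel-wise conditioning) so that regions with different degradation types receive different restoration behavior; the exact conditioning mechanism used in the paper is not specified here.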
Source:
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
ISSN: 0196-2892
Year: 2024
Volume: 62
Impact Factor: 7.500 (JCR@2023)
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0