Abstract:
Haze removal from images is a fundamental computer vision challenge with significant implications for various applications. Currently, there is a growing demand for 4K image processing methods on mobile and IoT (Internet of Things) devices. The absence of large-scale 4K benchmark datasets hampers progress in this area, especially for the task of dehazing. The challenges of building an ultra-high-definition (UHD) dehazing dataset are the lack of estimation methods for UHD depth maps and the lack of a strategy for migrating synthetic UHD hazy images to the real domain under constrained computational resources. To address these problems, we develop a novel synthetic method that simulates 4K hazy images (including nighttime and daytime scenes) from clear images: it first estimates the scene depth, simulates the light rays and object reflectance, then migrates the synthetic images to the real domain using a GAN, and finally yields hazy effects on 4K-resolution images. We wrap these synthesized images into a benchmark called the 4K-HAZE dataset. Specifically, we design the CS-Mixer (an MLP-based model that integrates the channel and spatial domains) to estimate the depth map of 4K clear images, and the GU-Net to migrate a 4K synthetic image to the real hazy domain. The most appealing aspect of our approach is its ability to process a 4K image in real time (33 fps) on a single GPU with 24 GB of RAM. Additionally, this work presents an objective assessment of several state-of-the-art single-image dehazing methods evaluated on the 4K-HAZE dataset. At the end of the paper, we discuss the limitations of the 4K-HAZE dataset and its social implications. © 2025 Elsevier B.V.
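The abstract describes synthesizing haze from a clear image and an estimated depth map. As a point of reference, the sketch below shows the standard atmospheric scattering model commonly used for such synthesis; the function name, parameter defaults, and depth normalization are illustrative assumptions, and the paper's CS-Mixer depth estimator and GU-Net domain migration are not reproduced here.

```python
# Hypothetical sketch of haze synthesis with the atmospheric scattering model:
#   I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta * d(x))
# where J is the clear scene radiance, d the depth, A the atmospheric light.
import numpy as np

def synthesize_haze(clear: np.ndarray, depth: np.ndarray,
                    beta: float = 1.0, airlight: float = 0.8) -> np.ndarray:
    """Apply the scattering model to a clear HxWx3 image with values in [0, 1].

    clear    : clear-scene radiance J
    depth    : HxW depth map d (assumed normalized to [0, 1] for simplicity)
    beta     : scattering coefficient controlling haze density
    airlight : global atmospheric light A
    """
    t = np.exp(-beta * depth)[..., None]      # transmission map t(x)
    hazy = clear * t + airlight * (1.0 - t)   # blend scene radiance with airlight
    return np.clip(hazy, 0.0, 1.0)

# Usage (illustrative): hazy = synthesize_haze(img, depth_map, beta=1.2, airlight=0.9)
```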
Source: Neurocomputing
ISSN: 0925-2312
Year: 2025
Volume: 650
Impact Factor: 5.500 (JCR@2023)
ESI Highly Cited Papers on the List: 0