Abstract:
The task of low-light image enhancement aims to reconstruct details and visual information from degraded low-light images. However, existing deep learning methods for feature processing usually lack feature differentiation or fail to handle differentiated features reasonably, which can limit the quality of the enhanced images, leading to issues such as color distortion and blurred details. To address these limitations, we propose the Dual-Range Information Guidance Network (DRIGNet). Specifically, we develop an efficient U-shaped architecture, the Dual-Range Information Guided Framework (DGF). DGF decouples traditional image features into dual-range information while integrating stage-specific feature properties with the proposed dual-range information. We design the Global Dynamic Enhancement Module (GDEM) using channel interaction and the Detail Focus Module (DFM) with a three-directional filter, both embedded in DGF to model long-range and short-range features, respectively. Additionally, we introduce a feature fusion strategy, the Attention-Guided Fusion Module (AGFM), which merges dual-range information, facilitating complementary enhancement. In the encoder, DRIGNet extracts coherent long-range information and enhances the global structure of the image; in the decoder, DRIGNet captures short-range information and fuses dual-range information to restore detailed areas. Finally, we conduct extensive quantitative and qualitative experiments demonstrating that the proposed DRIGNet outperforms current State-of-the-Art (SOTA) methods across ten datasets. © 2025 Elsevier B.V.
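The abstract's three components can be illustrated with a toy sketch. This is a minimal, hypothetical reading of the described design, not the paper's implementation: `global_dynamic_enhance` stands in for GDEM (long-range channel interaction via global statistics), `detail_focus` for DFM (three-directional local difference filters), and `attention_guided_fuse` for AGFM (per-pixel attention over the two ranges). All function names and the specific operations are assumptions for illustration only.

```python
import numpy as np

def global_dynamic_enhance(x):
    """Toy long-range branch (hypothetical GDEM): gate each channel
    by a sigmoid of its global average, a simple channel interaction."""
    gap = x.mean(axis=(1, 2))                       # (C,) global stats
    w = 1.0 / (1.0 + np.exp(-gap))                  # sigmoid gating
    return x * w[:, None, None]

def detail_focus(x):
    """Toy short-range branch (hypothetical DFM): differences along
    three directions (horizontal, vertical, diagonal)."""
    h = np.abs(np.diff(x, axis=2, prepend=x[:, :, :1]))
    v = np.abs(np.diff(x, axis=1, prepend=x[:, :1, :]))
    d = np.abs(x - np.roll(np.roll(x, 1, axis=1), 1, axis=2))
    return x + (h + v + d) / 3.0

def attention_guided_fuse(long_f, short_f):
    """Toy AGFM: softmax attention over the two ranges, per pixel,
    so the branches complement rather than overwrite each other."""
    stack = np.stack([long_f, short_f])             # (2, C, H, W)
    att = np.exp(stack) / np.exp(stack).sum(axis=0, keepdims=True)
    return (att * stack).sum(axis=0)

rng = np.random.default_rng(0)
img = rng.random((3, 8, 8))                         # fake (C, H, W) image
out = attention_guided_fuse(global_dynamic_enhance(img), detail_focus(img))
print(out.shape)
```

In the actual network these would be learned convolutional modules inside a U-shaped encoder-decoder; the sketch only shows the information flow: two range-specific branches whose outputs are merged by attention.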
Source: Displays
ISSN: 0141-9382
Year: 2025
Volume: 90
Impact Factor: 3.700 (JCR@2023)