
Abstract

Low-light image enhancement (LLIE) aims to improve the illumination and visibility of dark images corrupted by noise. To handle real-world low-light images, which often contain heavy and complex noise, some efforts have been made toward joint LLIE and denoising, yet they achieve only inferior restoration performance. We attribute this to two challenges: 1) in real-world low-light images, noise is partially masked by the low illumination, and the residual noise left after denoising is inevitably amplified during enhancement; 2) converting raw data to sRGB causes information loss and introduces additional noise, so prior LLIE methods trained on raw data are unsuitable for the more common sRGB images. In this work, we propose a novel Low-light Enhancement & Denoising Network for real-world low-light images (RLED-Net) in the sRGB color space. In RLED-Net, we apply a plug-and-play, differentiable Latent Subspace Reconstruction Block (LSRB) that embeds real-world images into low-rank subspaces to suppress noise and rectify errors, so that the impact of noise during enhancement is effectively reduced. We then present an efficient Crossed-channel & Shift-window Transformer (CST) layer with two branches that compute window and channel attention, respectively, to resist the degradations (e.g., speckle noise and blur) caused by noise in the input images. Building on the CST layers, we further present a U-shaped network, CSTNet, as the backbone for deep feature recovery, and construct a feature refinement block to refine the final features. Extensive experiments on both real noisy images and public image databases verify the effectiveness of the proposed RLED-Net for simultaneous real-world LLIE and denoising.
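The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the two-branch attention idea it describes: one branch applies self-attention within spatial windows, the other applies channel (transposed) attention, and the two outputs are fused before a residual connection. All names (`CSTLayer`, `WindowAttention`, `ChannelAttention`), the 1x1-convolution fusion, and the normalization choice are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a dual-branch window/channel attention layer.
# Assumed structure; not the paper's official code.
import torch
import torch.nn as nn


class WindowAttention(nn.Module):
    """Self-attention inside non-overlapping spatial windows."""

    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                      # x: (B, C, H, W)
        B, C, H, W = x.shape
        ws = self.window_size
        # Partition the feature map into (ws x ws) windows of tokens.
        x = x.view(B, C, H // ws, ws, W // ws, ws)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, C)
        x, _ = self.attn(x, x, x)              # attention within each window
        # Reverse the window partition back to (B, C, H, W).
        x = x.view(B, H // ws, W // ws, ws, ws, C)
        x = x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)
        return x


class ChannelAttention(nn.Module):
    """Transposed attention: tokens are channels, so the attention map is C x C."""

    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):                      # x: (B, C, H, W)
        B, C, H, W = x.shape
        q, k, v = self.qkv(x).flatten(2).chunk(3, dim=1)   # each (B, C, H*W)
        q = nn.functional.normalize(q, dim=-1)
        k = nn.functional.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)   # (B, C, C)
        out = (attn @ v).view(B, C, H, W)
        return self.proj(out)


class CSTLayer(nn.Module):
    """Two-branch layer: window attention + channel attention, fused by a 1x1 conv."""

    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.norm = nn.GroupNorm(1, dim)       # one-group GroupNorm as a LayerNorm stand-in
        self.win_branch = WindowAttention(dim, window_size, num_heads)
        self.ch_branch = ChannelAttention(dim)
        self.fuse = nn.Conv2d(dim * 2, dim, kernel_size=1)

    def forward(self, x):
        y = self.norm(x)
        y = self.fuse(torch.cat([self.win_branch(y), self.ch_branch(y)], dim=1))
        return x + y                           # residual connection


if __name__ == "__main__":
    feat = torch.randn(1, 32, 64, 64)          # H and W divisible by window_size
    out = CSTLayer(dim=32)(feat)
    print(out.shape)                           # torch.Size([1, 32, 64, 64])
```

In a U-shaped backbone such as the CSTNet described above, layers like this would be stacked at each encoder/decoder scale; the shifted-window scheduling and the exact feature-refinement block are further details of the paper not reproduced here.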
