Low-light images generally result from shooting in dim environments or at unfavorable angles; they not only impair human perception but also degrade the performance of downstream artificial intelligence algorithms such as object detection and super-resolution. Low-light enhancement faces two main difficulties: first, applying image processing algorithms independently to each low-light image often causes color distortion; second, the texture of extremely dark regions must be restored. To address these issues, we present two novel and general approaches: first, we propose a new loss function that constrains the ratio between corresponding RGB pixel values in the low-light image and the high-light image; second, we propose a new framework named GLNet, which uses dense residual connection blocks to extract deep features from low-light images and adds a grayscale channel branch that enhances the grayscale image to guide texture restoration in the RGB channels. Ablation experiments demonstrate the effectiveness of the proposed modules. Extensive quantitative and perceptual experiments show that our approach achieves state-of-the-art performance on a public dataset.
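One possible reading of the ratio constraint, sketched in PyTorch below, is that the channel-wise ratio between the two images should agree across R, G, and B at every pixel, so that any residual intensity change is achromatic and does not shift color. The class name, the epsilon stabilizer, and the L1 penalty are all our assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn

class RatioLoss(nn.Module):
    """Hedged sketch of a ratio-constraining loss (one possible reading of
    the abstract, not the paper's exact formulation). Per pixel, it takes
    the channel-wise ratio between the two images and penalizes disagreement
    among the R, G, and B ratios, so that any remaining intensity mismatch
    is achromatic."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps  # stabilizer for near-zero pixels (our assumption)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # low, high: (N, 3, H, W) tensors in [0, 1]
        ratio = (low + self.eps) / (high + self.eps)  # per-channel ratio maps
        mean_ratio = ratio.mean(dim=1, keepdim=True)  # per-pixel gray ratio
        # If all three channel ratios match their mean, the intensity change
        # is gray, preserving the reference image's color balance.
        return torch.mean(torch.abs(ratio - mean_ratio))
```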
Defocus blur, which arises from a camera's finite aperture size and exposure time, is a fundamental problem in photography that seriously degrades image quality. Studies of defocus deblurring on monocular images have yielded good results, but work on binocular images remains rare. Current methods directly merge the left and right views regardless of their distinctive features: objects within the camera's depth of field (DoF) exhibit no phase difference, while light rays from outside the DoF show a relative shift that is directly correlated with the amount of defocus blur. In this paper, we propose an enhanced multi-stage network for defocus deblurring using dual-pixel images. Taking the parallax between the left and right views into account, the first two stages learn each view separately and correct its deviation under the supervision of the ground truth. The third stage consists of EERG and ERGS modules; it merges the feature maps from the previous stages so that the left and right views enhance each other, yielding a well-restored image. ERGS uses the residual block as its basic unit to recover details in blurred regions while preserving regions that are already sharp. Experimental results show that our proposed network achieves better accuracy than state-of-the-art approaches on the public DPD dataset.
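The residual block that the abstract names as ERGS's basic unit is a standard construction; a minimal PyTorch sketch follows (the channel count, kernel size, and activation are assumptions, not the paper's exact settings):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Plain residual block of the kind described as the basic unit of ERGS
    (a sketch under our own assumptions)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip connection lets the block learn a residual correction:
        # it can add detail in blurred regions, while in already-sharp
        # regions the learned residual can stay near zero.
        return x + self.body(x)
```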
A moiré pattern refers to the interference fringes generated by two equal-amplitude sinusoids of close frequencies. In digital imaging, images captured in certain scenes, such as photographs of knitted fabrics or of LED screens, are vulnerable to moiré, which severely damages their visual quality. The difficulty of demoiréing lies in the fact that moiré patterns are distributed across different frequency bands and vary in color and shape. To fully learn the global information of moiré images and remove moiré patterns across a wide range of frequency bands, we propose a multi-stage, multi-patch network that recovers non-homogeneous moiré images by aggregating features from patches of different spatial regions at different stages. To enlarge the receptive field, we also introduce a novel Atrous Fusion Module with different atrous rates to learn multi-scale information. With these improvements, our proposed network achieves superior accuracy to state-of-the-art approaches on the public dataset of the NTIRE 2020 Single Image Demoiréing Challenge.
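A fusion block built from parallel dilated convolutions is the usual way to combine several atrous rates; the sketch below illustrates the idea under our own assumptions (the specific rates and the 1x1 fusion convolution are not taken from the paper):

```python
import torch
import torch.nn as nn

class AtrousFusionModule(nn.Module):
    """Sketch of an atrous-fusion block in the spirit of the abstract:
    parallel dilated convolutions at several rates, fused by a 1x1
    convolution. Rates and fusion scheme are assumptions."""

    def __init__(self, channels: int = 64, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            # dilation=r with padding=r keeps the spatial size while
            # enlarging the receptive field of each 3x3 branch
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch sees the input at a different effective scale;
        # concatenating and fusing them mixes multi-scale context.
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(multi_scale)
```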