Model-based methods utilize the atmospheric scattering model to dehaze images effectively but often introduce unwanted artifacts. By contrast, recent model-free methods directly restore dehazed images with an end-to-end network and avoid such artificial errors; however, their dehazing ability is limited. To address this problem, we combine the advantages of supervised and unsupervised learning and propose a semisupervised knowledge distillation network for single-image dehazing, named SSKDN. Specifically, we build a supervised learning branch and an unsupervised learning branch, each composed of four attention-guided feature extraction blocks. In the supervised learning branch, the network is optimized on synthetic images. In the unsupervised learning branch, we dehaze real-world images with the dark channel prior and RefineDNet (an existing dehazing method) and use these dehazed images as fake ground truths to optimize the network through prior information and knowledge distillation. Experimental results on synthetic and real-world images demonstrate that the proposed SSKDN outperforms state-of-the-art methods and exhibits strong generalization ability.
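The dark channel prior used above to generate fake ground truths can be sketched roughly as follows; this is a minimal illustration of the classical prior, not the authors' implementation, and the function names, patch size, and top-fraction parameter are assumptions:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """Dark channel prior: per-pixel minimum over RGB, then a local minimum filter.

    image: float array of shape (H, W, 3) with values in [0, 1].
    """
    per_pixel_min = image.min(axis=2)                 # min over color channels
    return minimum_filter(per_pixel_min, size=patch_size)

def estimate_atmospheric_light(image, dark, top_fraction=0.001):
    """Estimate atmospheric light A from the brightest dark-channel pixels."""
    h, w = dark.shape
    n = max(1, int(h * w * top_fraction))
    flat_idx = np.argsort(dark.ravel())[-n:]          # brightest dark-channel pixels
    candidates = image.reshape(-1, 3)[flat_idx]
    return candidates.max(axis=0)
```

In haze-free regions the dark channel is close to zero, so its magnitude gives a rough per-pixel haze estimate that can drive transmission estimation and dehazing.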
Single-image deraining is a classical problem in low-level computer vision. Most recent state-of-the-art rain removal methods are trained on synthetic images and therefore suffer from incomplete rain removal on real images and an inability to handle complex rain conditions. Motivated by these limitations, we propose a single-image deraining network based on multistage feature fusion (SIDNMFF). The network removes rain in four stages: the first three stages use an improved encoder–decoder subnetwork to fuse features and extract global and local information from the image, yielding more detailed texture information. In the last stage, the network fuses the features extracted in the first three stages, performs feature extraction at the original resolution, and outputs the final multistage deraining result. We conduct extensive comparative experiments on synthetic and real datasets, evaluating the real datasets with no-reference metrics and an object detection method. Experimental results confirm that the proposed method achieves a satisfactory rain removal effect and outperforms competing methods.
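The multistage idea of combining features computed at different stages before a final full-resolution fusion can be sketched as a toy example; this is only a schematic of the fusion pattern under assumed scale factors and average-pool/nearest-neighbor operators, not the SIDNMFF architecture:

```python
import numpy as np

def downsample(x, factor):
    """Average-pool a (H, W) map by an integer factor (crude stage encoder)."""
    h, w = x.shape
    return x[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(x, factor):
    """Nearest-neighbor upsample back to the working resolution."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def multistage_fuse(image):
    """Toy multistage fusion: maps from three scales merged at full resolution."""
    factors = (4, 2, 1)                                  # coarse-to-fine stages
    stages = [downsample(image, f) for f in factors]
    restored = [upsample(s, f) for s, f in zip(stages, factors)]
    return np.mean(restored, axis=0)                     # final-stage fusion
```

The coarse stages contribute global context while the full-resolution stage preserves local texture; the real network replaces the pooling and averaging here with learned encoder–decoder subnetworks.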
Single-image dehazing is a critical problem since haze degrades image quality and hinders most advanced computer vision tasks. Early methods solve this problem via the atmospheric scattering model, estimating its intermediate parameters and then recovering a clear image using low-level priors or learning on synthetic datasets. However, these model-based methods do not hold in various scenes. Recently, many learning-based methods have recovered dehazed images directly from the inputs, but they fail to deal with dense haze and often lead to color distortion. To solve this problem, we build a recurrent grid network with an attention mechanism, named RGNAM. Specifically, we propose a recurrent feature extraction block, which repeats a local residual structure to enhance feature representation and adopts a spatial attention module to focus on dense haze. To alleviate color distortion, we extract local features (e.g., structures and edges) and global features (e.g., colors and textures) from a grid network and propose a feature fusion module that combines trainable weights and channel attention to merge these complementary features effectively. We train our model with a smooth L1 loss and a structural similarity loss. Experimental results demonstrate that the proposed RGNAM surpasses previous state-of-the-art single-image dehazing methods on both synthetic and real haze datasets.
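The combined training objective (smooth L1 plus a structural-similarity term) can be sketched as follows; this is a minimal illustration with a simplified single-window SSIM rather than the usual windowed SSIM, and the weighting `alpha` is an assumption, not the paper's setting:

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber) loss: quadratic near zero, linear for large errors."""
    diff = np.abs(pred - target)
    loss = np.where(diff < beta, 0.5 * diff**2 / beta, diff - 0.5 * beta)
    return loss.mean()

def global_ssim(pred, target, c1=0.01**2, c2=0.03**2):
    """Simplified (single-window) SSIM computed over the whole image."""
    mu_x, mu_y = pred.mean(), target.mean()
    var_x, var_y = pred.var(), target.var()
    cov = ((pred - mu_x) * (target - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))

def total_loss(pred, target, alpha=0.5):
    """Weighted sum: smooth L1 plus (1 - SSIM) as a dissimilarity term."""
    return smooth_l1(pred, target) + alpha * (1.0 - global_ssim(pred, target))
```

Smooth L1 keeps the pixel-wise error robust to outliers, while the SSIM term rewards structural agreement that a purely pixel-wise loss can miss.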
Traditional shadow detection research is mainly based on stationary cameras. Because a dual-PTZ-camera system can obtain both multi-view and multi-resolution information, it has received growing attention in real surveillance applications; however, few works on shadow detection and removal with such a system can be found in the literature. In this paper, we propose a novel framework to automatically detect and remove shadow regions in real surveillance scenes captured by a dual-PTZ-camera system. Our method consists of two stages. (1) In the first stage, initial shadow regions are detected by comparing the gray-level similarity between the two camera images after a homography transformation. We demonstrate that the corresponding shadow points on a reference plane are related by a time-variant homography constraint as the camera parameters change. (2) In the second stage, shadow-region detection is treated as a superpixel classification problem: the shadow candidates predicted in the first stage are fed to a statistical model based on multi-feature fusion. We verify the effectiveness of the proposed shadow detection method by incorporating it into a dual-PTZ-camera tracking system.
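The first-stage comparison across views relies on warping points through a planar homography; a minimal sketch of that mapping and a toy gray-level similarity test is below. The function names and threshold are assumptions, and the real system estimates the time-variant homography from the PTZ parameters rather than taking it as given:

```python
import numpy as np

def apply_homography(H, points):
    """Map (N, 2) pixel coordinates through a 3x3 homography H."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                   # back to Cartesian

def gray_similarity_mask(gray_a, gray_b_warped, points, threshold=0.1):
    """Toy criterion: flag points whose gray values agree across the two views
    after warping (candidates for the reference-plane/shadow comparison)."""
    xs = points[:, 0].astype(int)
    ys = points[:, 1].astype(int)
    diff = np.abs(gray_a[ys, xs] - gray_b_warped[ys, xs])
    return diff < threshold
```

Points that satisfy the cross-view consistency check lie on the reference plane and feed the second-stage superpixel classifier as shadow candidates.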