Vision gets obscured in adverse weather conditions, such as heavy downpours, dense fog, haze, and snowfall, which increases the number of road accidents each year. Modern methodologies are being developed at various academic institutions and laboratories to enhance visibility in such adverse weather with the help of technology. We review different dehazing techniques across many applications, such as outdoor surveillance, underwater navigation, intelligent transportation systems, and object detection. Dehazing is achieved in four primary steps: capture of the hazy image, estimation of atmospheric light together with the transmission map, image enhancement, and restoration. These four procedures provide a step-by-step approach to the complicated haze-removal problem. Furthermore, the review explores the limitations of existing deep learning-based methods on the available datasets and the challenges faced by visibility-enhancement algorithms in adverse weather. The reviewed techniques reveal gaps in remote sensing, satellite, and telescopic imaging. From the experimental analysis of various image dehazing approaches, one can learn the effectiveness of each phase of the image dehazing process and create more effective dehazing techniques.
1. Introduction
Vision through adverse weather is one of the most important issues to be resolved for any vision-based application, be it a transportation system, navigation, or surveillance. During winter, land transport faces a tremendous problem of low visibility due to fog, which affects people's daily lives.1 During low visibility, drivers rely mainly on the headlights of the vehicle. But in unfavorable weather (rain, haze, snowfall, and fog), the headlights fail to improve visibility because light is scattered by the precipitation. Precipitation scatters light across a wide range of angles, which disturbs the driver's vision.1–3 As a result, accidents happen, sometimes causing loss of life. Hence, there is a compelling demand for vision algorithms capable of maintaining robust performance in challenging real-world scenarios characterized by adverse weather and varying lighting conditions.

1.1. Constituents of Adverse Weather
Knowledge of the constituents of adverse weather, their formation, and their concentrations is essential to carrying out research in this domain.2–5 The major constituents are air, haze, fog, cloud, and rain. Weather conditions differ mainly due to the size, type, and concentration of particles in the atmosphere. Particle size and concentration are the two most important parameters governing the variation in weather conditions: particles of larger size at lower concentration can produce conditions similar to those caused by smaller particles at higher concentration. Weather conditions can degrade or fluctuate because of the larger particles present in the atmosphere, as presented in Table 1.

Table 1. Adverse weather, as well as the varieties and concentrations of the particles associated with it.
Haze produces a dark or pale blue tint that affects visibility. It is a mixture of smoke and road dust dispersed from varied sources, including plant hydrolysis, volcanic dust, sea salt, and combustible materials.2 Fog forms when water vapor condenses into tiny water droplets that remain suspended in the air.3,4 The features that distinguish cloud from fog are observed at higher elevations rather than at ground level. The majority of clouds are composed of haze-like water crystals, whereas others are formed of long snow chunks and glacier dust particles.5 Rain consists of water droplets that condense from atmospheric water vapor and become heavy enough to fall to the earth's surface under gravity. The optical features of the weather particles cause irregular spatial and temporal changes in images.6 This change is particularly challenging to analyze in the case of heavy rain and snowfall.

1.2. Mathematical Expression of Atmospheric Scattering for the Creation of Haze
Clear vision depends on two important factors: contrast and brightness. Airlight and attenuation are the phenomena that result from the scattering of light by atmospheric particles. Airlight affects the brightness of the scene, whereas attenuation diminishes the color contrast of the region of interest. The inverse relationship between airlight and attenuation provides a theoretical basis for the degradation mechanism of hazy images.2 That is why vision in adverse weather should be described using the airlight model [Fig. 1(a)] and the light attenuation model. Scattering is the redirection of electromagnetic energy by small particles suspended in a medium of different refractive index.2,4,5 There are three types of scattering, as described in Table 2. The ratio of particle diameter to light wavelength determines the type and efficiency of scattering.4

Table 2. Different types of scattering.
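To illustrate how the ratio of particle diameter to wavelength selects the scattering regime, the short sketch below computes the size parameter for a few particle diameters; the wavelength, diameters, and regime thresholds are illustrative assumptions rather than values taken from Table 2.

```python
import math

def scattering_regime(particle_diameter_um: float, wavelength_um: float = 0.55) -> str:
    """Classify the dominant scattering type from the size parameter x = pi * D / lambda.
    The cut-off values (0.1 and 50) are approximate, illustrative thresholds."""
    x = math.pi * particle_diameter_um / wavelength_um
    if x < 0.1:
        return "Rayleigh scattering (particle much smaller than the wavelength)"
    if x < 50:
        return "Mie scattering (particle size comparable to the wavelength)"
    return "Geometric (non-selective) scattering (particle much larger than the wavelength)"

# Illustrative diameters: an air molecule, a haze aerosol, and a fog droplet (orders of magnitude only)
for d in (0.0003, 1.0, 10.0):
    print(f"D = {d} um -> {scattering_regime(d)}")
```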
Figure 1(b) gives an overview of a simple illumination and detection geometry. Consider a unit volume of the scattering medium, containing suspended particles, that is illuminated with spectral irradiance $E(\lambda)$ per unit cross-sectional area. The radiant intensity of the unit volume in the direction $\theta$ of the observer is

$I(\theta,\lambda) = \beta(\theta,\lambda)\,E(\lambda),$  (1)

where the radiant intensity is the flux radiated per unit solid angle, per unit volume of the medium, $\beta(\theta,\lambda)$ is the angular scattering coefficient,2 and $E(\lambda)$ denotes the spectral irradiance per cross-sectional area.2 The total flux scattered over all directions can be described as

$\phi(\lambda) = \beta(\lambda)\,E(\lambda).$  (2)

Airlight is a phenomenon caused by the scattering of environmental illumination by the particles suspended in the atmosphere, whereby the atmosphere itself acts as a light source.7 Environmental illumination can come from a variety of sources, including diffused skylight, direct sunshine, and light reflected from the ground. Airlight increases the apparent brightness of a scene point with depth. Attenuation refers to the decrease in the brightness of a light source as the distance through the transmission medium increases.4,6,8 Consider a collimated beam of light incident on the atmospheric medium that passes through a very thin sheet of thickness $dx$. The fractional change in irradiance at $x$ can be written as

$\frac{dE(x,\lambda)}{E(x,\lambda)} = -\beta(\lambda)\,dx.$  (3)

Integrating both sides between the boundaries $x=0$ and $x=d$ gives

$E(d,\lambda) = E_{0}(\lambda)\,e^{-\beta(\lambda)d},$  (4)

where $E_{0}(\lambda)$ is the irradiance at the source ($x=0$). This is Bouguer's exponential attenuation law.2 Attenuation owing to scattering is sometimes described in terms of the optical thickness $T=\beta(\lambda)d$. It is commonly assumed that the coefficient $\beta(\lambda)$ is constant for horizontal paths; the scattering coefficient is then independent of distance, and the attenuation law for a point source of radiant intensity4,5 $I_{0}(\lambda)$ can be modified as

$E(d,\lambda) = \frac{I_{0}(\lambda)\,e^{-\beta(\lambda)d}}{d^{2}}.$  (5)

All scattered flux is assumed to be removed from the beam, i.e., absorption is folded into the attenuation coefficient, and the transmission rate, the fraction of energy that survives the path, is represented by Eq. (5). Hence, the mitigation of haze in images requires the estimation of an airlight map, as highlighted.2–5,8 Airlight estimation is essential for determining the depth information within hazy images, relying on scene-specific characteristics. The initial stage of any haze removal method typically involves contrast enhancement and image restoration.9,10 The second category of image dehazing methods leverages conventional, non-learning-based image degradation processes for depth information.9,11 These approaches capture multiple real-time images of the same scene under different weather conditions and enhance the actual image by reversing the degradation process.9–13 This paper presents a review of prior research on the various image enhancement and restoration techniques employed to ensure visual clarity under hazy conditions. Apart from this, the article also reviews various sensor-based atmospheric scattering models of a hazy image. In addition, different learning-based methodologies have been assessed based on their outcomes and thoroughly examined on different haze-removal datasets. The significant contributions of this review are the following:
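The following minimal sketch evaluates Bouguer's attenuation law and the complementary airlight buildup for a few depths; the scattering coefficient, depths, and radiance values are assumed purely for illustration.

```python
import numpy as np

def direct_transmission(E0, beta, d):
    """Bouguer's law: irradiance surviving scattering over a path of length d."""
    return E0 * np.exp(-beta * d)

def airlight(A_inf, beta, d):
    """Airlight builds up with depth and saturates toward the horizon brightness A_inf."""
    return A_inf * (1.0 - np.exp(-beta * d))

beta = 0.05                       # assumed scattering coefficient per metre (dense-fog-like, illustrative)
E0, A_inf = 1.0, 0.8              # normalized source irradiance and horizon radiance (assumed)
for d in (10.0, 50.0, 100.0, 200.0):
    print(f"d = {d:5.0f} m : direct = {direct_transmission(E0, beta, d):.3f}, "
          f"airlight = {airlight(A_inf, beta, d):.3f}")
```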
The remainder of the article is organized as follows: Sec. 2 elaborates on the different aspects of image dehazing and a basic framework for haze removal. A complete review and the mathematical models of haze removal techniques, with their limitations, are discussed in Sec. 3. An experimental evaluation of different image dehazing techniques on benchmark datasets is provided in Sec. 4. Finally, challenges and future research trends are discussed in Secs. 5 and 6, respectively.

2. Aspects of Image Dehazing
Outdoor images are generally susceptible to different atmospheric conditions, especially haze, fog, and heavy rainfall. The images produced under these atmospheric conditions have low contrast, distorted colors, and reduced scene detail. Image enhancement with depth map estimation9 is a very active research area and provides the basic framework for haze removal, as shown in Fig. 2. The depth map can be derived in terms of an airlight map and a transmission map. Some methods estimate the depth information from scene properties, as shown in Fig. 2; these properties can be a shading function or a contrast-based cost function. Once depth information is estimated, it is easier to restore the image using the fog model.9 The first stage in any haze removal process is to capture a picture of the real world. A camera and image sensors are used to capture this picture, and the acquisition procedure takes advantage of the sensor plane.1 Several sensors have been used along with camera modules to improve visibility, as shown in Fig. 3, and to build environmental models for adverse weather conditions.5 Modern automotive navigation systems depend on a large number of different sensors to increase visibility. In adverse weather conditions, sensors usually provide reliable scanned data that can be fed to vision-based algorithms for object detection, depth estimation, or semantic scene understanding in order to improve safety and avoid accidents. Dannheim et al.7 used LiDAR technology to enhance visibility in different weather conditions. This technology is highly sensitive to light and offers a robust way of providing information to the command station for controlling an autonomous vehicle in adverse weather. LiDAR and IR cameras first acquire data from the environment in adverse weather; sensor fusion is then applied to obtain a clear view. Dong et al.6 proposed a methodology that has a large positive impact on advanced driver-assistance systems and autonomous navigation systems; the method uses an extended Kalman filter for detecting and tracking obstacles through sensor fusion. Rablau et al.14 introduced an image-processing technique to detect vehicles in a foggy environment for collision avoidance. LiDAR and IR cameras have been used to increase the performance ratio. Image frames captured by the camera in adverse weather are enhanced using adaptive Gaussian filtering, which yields a clearer image by adjusting the threshold values. Loce et al.15 proposed a sensor fusion technology that can improve sensor performance and efficiency by minimizing the mathematical error of the sensor readings. The fusion of multiple-sensor data always provides better accuracy than single-sensor data when controlling an autonomous vehicle.
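As a hedged illustration of why fusing readings from several sensors improves on any single sensor (a generic minimum-variance fusion, not the specific formulation of Loce et al.15), the sketch below combines hypothetical range measurements by inverse-variance weighting.

```python
import numpy as np

def fuse_inverse_variance(readings, variances):
    """Minimum-variance linear fusion of independent range readings.
    Weights are proportional to 1/variance; the fused variance is smaller
    than that of any individual sensor."""
    readings = np.asarray(readings, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused = float(np.sum(weights * readings))
    fused_var = float(1.0 / np.sum(1.0 / variances))
    return fused, fused_var

# Hypothetical range-to-obstacle readings (metres) from radar, lidar, and a camera in fog
estimate, variance = fuse_inverse_variance([42.1, 40.8, 44.0], [0.25, 0.04, 1.0])
print(f"fused range = {estimate:.2f} m, fused variance = {variance:.3f}")
```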
Rasshofer et al.16 proposed a multisensory mechanism for autonomous driving assistance systems. Laser, radar, and lidar range finders have been used in this mechanism to obtain more accurate distances; these long-range finders operate by RF-based signal transmission and reception, and transmission models are applied for signal propagation in the new model-based sensor system. Pinchon et al.17 presented a comparative study on vision systems, traffic signal control, and lane detection for safe autonomous driving. The authors draw attention to visualization through cameras and distance measurement using sensors (such as lidar and radar) for experiments in adverse weather conditions. The sensors mentioned in Table 3 are used to obtain the values of the standard parameters for constructing an environmental model that enhances visibility in degraded weather, where climate factors such as fog, rain, and snow have a significant impact. In adverse weather conditions, the sensor data provide a quick interpretation of the environment so that drivers can respond in a variety of ways, including accelerating and keeping to the proper lane. The challenge lies in interpreting the sensor data, minimizing the driver's response time, and providing a precise view of the road rather than simply installing the sensors.

Table 3. Application of some environment monitoring sensors.
After image acquisition, the atmospheric scattering model is commonly applied in image processing and computer vision to characterize the formation of a hazy image, as shown in Fig. 1(a). The mathematical model of the haze component is given as

$I^{c}(x) = J^{c}(x)\,t(x) + A^{c}\,[1 - t(x)], \qquad t(x) = e^{-\beta d(x)},$

where $c$ is the color index of the RGB channel, $A$ indicates the atmospheric light, $J$ is the color image without haze, $t$ is the medium transmission, $J(x)t(x)$ gives the attenuation term, $A[1-t(x)]$ is the airlight, $d(x)$ is the unknown depth information, and $\beta$ refers to the transmission (scattering) factor.

3. Haze Removal Methods
This section provides a thorough analysis of the most effective haze reduction methods. The classification of the haze removal methods is shown in Fig. 4. Different approaches have been utilized for image dehazing, where the dehazing algorithm estimates the scene depth and quantifies the haze thickness.18–20 Single and multiple image dehazing are the two main categories into which image dehazing is divided, as shown in Fig. 4. Two or more frames are utilized in multiple image dehazing to estimate scene depth and other parameters from images taken under various environmental conditions.15 Recovering the depth from a single image is the most difficult case, and single image dehazing addresses it.

3.1. Conventional Single and Multiple-Image Dehazing Methods
Dehazing techniques are designed to deal with the problems faced by the transport industry so that accidents can be avoided. In this sense, a system can be designed to support various kinds of traffic management and reduce accidents. The relationship between the airlight and direct attenuation models has given rise to numerous conventional techniques; nevertheless, the primary problem with most of them is the requirement for multiple images of the same scene. It is necessary to identify recent research trends on which new improvements can be built, as shown in Tables 4–6. Nayar et al.2 described the development of vision systems for outdoor applications. Knowledge of atmospheric optics informed their assumptions about the size of atmospheric particles and their scattering model. Light scattering, absorption, and radiation are the three basic categories into which the components can be classified, and based on this classification the properties of the atmospheric conditions (such as color, intensity, and brightness) are measured. The algorithm introduces a model for detecting objects in the scene in adverse weather without making assumptions about the particular atmospheric condition. The proposed model is based on the dichromatic atmospheric scattering model. It produces fog removal and depth estimation techniques by determining the spectral distribution; the final spectral distribution received by the observer is estimated as the sum of the airlight and the directly transmitted light.

Table 4. Comparison of various restoration-based methods and extracting techniques for haze removal.
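The following minimal sketch renders a synthetic hazy image directly from this model; the clear image, depth map, atmospheric light, and scattering coefficient are toy values chosen only for illustration.

```python
import numpy as np

def synthesize_haze(J, depth, A=(0.9, 0.9, 0.9), beta=1.0):
    """Render a hazy image from a clear image J (H x W x 3, values in [0, 1])
    and a depth map using I = J * t + A * (1 - t), with t = exp(-beta * depth)."""
    t = np.exp(-beta * depth)[..., None]      # per-pixel transmission, shape (H, W, 1)
    A = np.asarray(A).reshape(1, 1, 3)        # global atmospheric light
    return J * t + A * (1.0 - t)

# Toy example: a random clear image and a depth ramp increasing from left to right
H, W = 4, 6
J = np.random.rand(H, W, 3)
depth = np.linspace(0.1, 3.0, W)[None, :].repeat(H, axis=0)
I_hazy = synthesize_haze(J, depth)
print(I_hazy.shape, float(I_hazy.min()), float(I_hazy.max()))
```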
Table 5. Comparison of various restoration-based non-learning methods for fog removal.
Table 6. Techniques adopted for different non-learning-based haze removal methods and their applications.
Narasimhan et al.3 developed a system to enhance vision in bad weather. The system is made robust by incorporating knowledge of the atmospheric scattering model and the size of the atmospheric particles (haze, dust, rain, fog, etc.). A color model is used to account for the change in the scattered light in the atmosphere. The authors improved the dichromatic model to make it applicable even when scene color varies under different but unknown atmospheric conditions.2 The color of the airlight is calculated by averaging over a dark object on a hazy or foggy day, or from scene points with a black direct-transmission color.3 The proposed methods were applied to photographs of sites taken under extreme weather conditions, and the structure of an arbitrary scene was reconstructed from two unknown adverse weather conditions with two different horizon brightness values. Cozman et al.4 proposed using depth-from-scattering methods applicable to both indoor and outdoor environments. Attenuation of power and sky intensity are the two phenomena on which atmospheric scattering estimation most depends. Owing to the linear characteristics of light propagation, the combined effects of scattering are used to measure the object intensity. This method provides better results but fails to cover outdoor environments of larger dimensions. Sun et al.55 drew the researchers' attention to a survey of vision-based vehicle detection for transportation systems using optical sensors. According to this survey, the use of optical sensors for on-road vehicle detection can reduce injury and death statistics in vehicle crashes. Singh et al.56 used an image-restoration technique in a vision system for different adverse weather conditions; it is applicable to image extraction in outdoor transportation systems, object tracking, and detection. Samer et al.57 described detecting rain streaks and restoring an image from the camera using a bilateral filter, a guided filter, and morphological component analysis (MCA),23 as summarized in Table 4. Zheng et al.10 pointed out that clear weather is an important condition for navigation and tracking applications; their paper focuses on removing fog from hazy weather by applying image enhancement followed by image restoration. Rakovic et al.42 proposed a polarization method based on the effects of scattering on light polarization, which is utilized as a cue to remove fog in images. Coulson et al.18 mentioned that photographers often employ polarizing filters to eliminate haziness in landscape photographs because the illumination from the landscape is usually unpolarized. Jobson et al.19 incorporated Retinex theory, one of the most common enhancement approaches alongside histogram equalization and its variants; however, this method does not always maintain good color fidelity. Oakley et al.20 described a contrast enhancement-based model in which the scene geometry is known. The main cause of contrast degradation is atmospheric particles such as haze and fog. To address this, a temporal filter structure is presented; if an enhancement-based method is applied locally, the low spatial frequencies are removed, and a considerable improvement in image quality is achieved using the contrast enhancement approach with a temporal filter. Walker et al.58 focused on the polarization-based method to reduce haze in images. The main area of interest is underwater imaging; in most underwater imaging systems, the object is illuminated by the light source, which reduces visibility.
An image-subtraction approach has been adopted here, assuming that the inverse image is primarily degraded by extra back-scattered light. Bu et al.59 proposed a statistical model to detect the existence of airlight in any image. The method can be applied to both gray and color images and corrects the contrast loss by estimating the airlight level. A Monte Carlo simulation with a synthetic image model was used to validate the accuracy of the method. The algorithm produces an accurate result under the assumption that the airlight is constant throughout the image, but it fails in the case of non-uniform airlight over the image. Hautiere et al.33 developed a fast visibility restoration algorithm that addresses the ill-posed depth-map problem. The optimization is formalized under the constraint $0 \le V(x) \le W(x)$, where $V(x)$ is the airlight (atmospheric veil) and $W(x)$ is the minimal intensity component of each pixel. The speed of this algorithm is its primary benefit: its complexity is only on the order of the number of image pixels, which makes it suitable for real-time implementation of fog removal. However, the recovered image quality is insufficient when there are discontinuities in the scene depth. Narasimhan et al.11 proposed an interactive method using user-defined depth and sky intensity information. Two types of user input have to be provided to interpolate the scene point distances: the approximate location of the most distant point together with the direction of increasing distance, and the approximate minimum and maximum distances. The distances of the other scene points are then interpolated from the image distance of each pixel to the vanishing point. These interactive methods are practically inapplicable to images for which no depth information is available. Kumar et al.34 improved the method proposed by Oakley and Bu59 for the case of non-uniform airlight over the image. As the airlight contribution can vary by region, this method uses region segmentation to estimate the airlight for each region. The RGB color is required during airlight estimation: the three color components are fused to generate a luminance image, and a human visual model-based cost function is then applied to the luminance image to estimate the airlight. This estimate reflects the depth variation within the image. Linear regression is used to generate the airlight map, which is subtracted from the foggy image to restore it. The algorithm gives better results but does not cover a wide range of scene depths. Xuan et al.35 used graph-based image segmentation to obtain segments of an underwater color image. Following blackbody theory, an initial transmission map is derived and then refined with a bilateral filter; however, the choice of control parameters for segmentation becomes difficult for foggy images. Tables 4 and 5 summarize the comparison of different conventional restoration methods with their outcomes and extraction techniques for image dehazing. Table 6 lists the other techniques adopted for non-learning-based haze removal together with their applications. Zhang et al.43 applied polarization as a fog removal technique: two or more pictures with varying degrees of polarization are selected, and the fog removal technique is applied to them. Weather conditions have little effect on this method.
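As a generic illustration of polarization-based dehazing (not the specific procedure of Zhang et al.43), the following sketch combines two frames captured through a polarizer at the orientations of least and most airlight; the degree of polarization and the airlight at infinity are assumed values that would normally be estimated from a sky region.

```python
import numpy as np

def dehaze_polarization(I_best, I_worst, p=0.3, A_inf=1.0, eps=1e-6):
    """Sketch of polarization-based haze removal: I_best and I_worst are images
    taken through a polarizer at the orientations giving the least and most
    airlight. p is the degree of polarization of the airlight and A_inf the
    airlight at infinity, both normally estimated from a sky region."""
    I_total = I_best + I_worst                     # unpolarized total intensity
    A = np.clip((I_worst - I_best) / p, 0, None)   # estimated airlight map
    t = np.clip(1.0 - A / A_inf, eps, 1.0)         # transmission inferred from airlight
    return (I_total - A) / t                       # restored scene radiance

# Toy example with synthetic single-channel frames in [0, 1]
I_best = np.full((4, 4), 0.45)
I_worst = np.full((4, 4), 0.55)
print(float(dehaze_polarization(I_best, I_worst).mean()))
```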
He et al.36 proposed an approach based on the dark channel prior (DCP) and soft matting, as summarized in Table 5. The DCP is based on the observation that most local patches in haze-free outdoor images contain pixels with low intensities in at least one color channel. Accordingly (Fig. 5), the dark channel of an image $J$ is defined as

$J^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)}\Big(\min_{c \in \{r,g,b\}} J^{c}(y)\Big),$

where $\Omega(x)$ is a local patch centered at $x$ and, for a haze-free outdoor image, $J^{\mathrm{dark}} \to 0$. The assumptions of this algorithm become invalid if the intensity of the scene objects is similar to the ambient light. The color attenuation prior60 technique estimates the transmission map and removes haze through the atmospheric scattering model. Color changes and sky regions in the dehazed image appear significantly noisy when using the DCP. Zhang et al.61 proposed an improved DCP-based technique that resolves this problem by identifying the sky regions of the hazy image and computing the variability of the atmospheric light and the DCP; the brightness and contrast of the reconstructed images are then increased, and the transmissions of the non-sky and sky regions are evaluated separately, as shown in Fig. 6. Xu et al.37 proposed a strategy based on a three-dimensional (3D) model of the scene, as summarized in Table 5. After the depth at each pixel is measured, the influence of haze can be reduced by applying the model

$I(x) = I_{0}(x)\,e^{-\beta d(x)} + A_{\infty}\big(1 - e^{-\beta d(x)}\big),$

where $I_{0}(x)$ is the initial intensity reflected toward the camera from the corresponding scene point, $A_{\infty}$ is the airlight, and $e^{-\beta d(x)}$ is the depth-dependent attenuation expressed as a function of distance owing to scattering. González et al.44 developed an approach that involves taking many photographs of the same scene in various weather conditions; changes in scene pixel intensities across the different conditions give simple constraints for detecting scene depth discontinuities and computing image structure. Tripathi et al.38 developed a fog removal algorithm with two major steps, airlight map estimation followed by map refinement, as summarized in Table 5. The DCP is used for estimating the airlight, and refinement is carried out using anisotropic diffusion, as presented in Fig. 7. Different objects may be at various angles and distances from the camera, and the airlight should vary accordingly; the required distance inequality and inter-region smoothing can be fulfilled by anisotropic diffusion. The algorithm requires histogram equalization and stretching as pre-processing and post-processing, respectively; for an extreme color image (with pixel intensities of 0 and 255 present), histogram stretching fails to produce a processed image. Tan et al.62 suggested a method for restoring a single color image based on spatial regularization, as shown in Fig. 8. The author removed the fog by maximizing the contrast of the direct transmission while assuming a smooth layer of airlight. Here, the fog model is assumed as

$I(x) = J(x)\,t(x) + A\,[1 - t(x)],$

where $I(x)$ is the observed intensity of the foggy image, $J(x)$ is the scene radiance, $A$ is the atmospheric light, and $t(x)$ is the medium transmission. Based on this model, the author assumed that for a patch with uniform transmission $t$, visibility is reduced by the fog since $0 < t < 1$. The result is regularized using a Markov random field model. However, the restored image looks saturated and exhibits halos at depth discontinuities in the scene.
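As an illustration of the dark channel prior and the coarse transmission estimate it yields (omitting the soft-matting or guided-filter refinement), the following minimal sketch assumes a typical 15-pixel patch and a parameter omega of 0.95, which are common but illustrative choices.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """Dark channel: per-pixel minimum over RGB followed by a local minimum filter."""
    return minimum_filter(image.min(axis=2), size=patch)

def estimate_transmission(hazy, A, patch=15, omega=0.95):
    """Coarse DCP transmission estimate: t = 1 - omega * dark_channel(I / A)."""
    normalized = hazy / A.reshape(1, 1, 3)
    return 1.0 - omega * dark_channel(normalized, patch)

# Toy usage with a random image; A would normally come from the brightest dark-channel pixels
hazy = np.random.rand(64, 64, 3)
A = np.array([0.95, 0.95, 0.95])
t = estimate_transmission(hazy, A)
print(t.shape, float(t.min()), float(t.max()))
```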
3.2. Learning-Based Methods for Single-Image Dehazing
Learning-based image dehazing is a crucial technique for eliminating fog or haze from images. It plays a significant role in various applications, including visualization, surveillance, outdoor photography, and autonomous driving. This article also outlines some potential future directions for the advancement of image dehazing.

3.2.1. Deep learning-based methods
The field of image dehazing has advanced significantly with the emergence of deep learning techniques, and the results achieved so far have paved the way for even more advanced deep-learning models for this task. The transmission map is estimated using a learning-based algorithm without prior knowledge. DehazeNet uses a convolutional neural network (CNN) to estimate the transmission map.63 In the hybrid multi-scale convolutional neural network (MSCNN),64 transmission maps predicted by a coarse-scale network and a fine-scale network are estimated locally. The proposed single-image dehazing algorithm creates hazy images and associated transmission maps from a depth-image dataset in order to train the multi-scale network.64 In the test stage, the trained model estimates the transmission map of the input hazy image, and this transmission map is used to produce the dehazed image. MSCNN64 uses a coarse-scale network to predict a holistic transmission map from a hazy image and passes it to the fine-scale network to produce a refined transmission map. The coarse-scale network has four major parts: convolution, max-pooling, up-sampling, and linear combination. The convolutional module producing a feature map can be defined as

$f_{n}^{l+1} = \sigma\Big(\sum_{m} f_{m}^{l} * k_{m,n}^{l+1} + b_{n}^{l+1}\Big),$

where $f^{l}$ and $f^{l+1}$ denote the feature maps of layer $l$ and the next layer, $k$ is the kernel, $b$ denotes the bias, and $\sigma$ is the rectified linear unit (ReLU)65 activation function; in the up-sampling layer, the feature-map value at each pixel location is obtained from the pooled map according to the max-pooling size. This CNN architecture contrasts with end-to-end mapping.66 The all-in-one dehazing (AOD) net employs a linear mapping to integrate the transmission map $t(x)$ and the atmospheric light $A$ and uses a CNN to learn its parameters.67 The AOD-Net67 formulation is

$J(x) = K(x)\,I(x) - K(x) + b,$

where $t(x)$ and $A$ are combined into the new module $K(x)$, which depends on the input image $I(x)$, $b$ is a constant bias, and $J(x)$ is the output dehazed image. A multiclass CNN68 is employed to select an optimal window range. Subsequently, a vectored minimum mean value-based detection technique is applied to the pixel currently under operation within a specific image kernel to identify noise and choose the best window size around the pixel. The affected pixel is processed using an adaptive vector median filter69 integrated with particle swarm optimization (PSO)70 if the pixel is determined to be corrupted after differentiating between haze and haze-free pixels. A novel fusion method directly restores a clean image from a foggy input image.71 Frants et al.72 developed a quaternion neural network (QCNN-H) that demonstrated improved performance for single image dehazing. The method provides a novel quaternion encoder–decoder structure with multilevel feature fusion and quaternion instance normalization.
Quaternion operations73 enable modeling spatial relations involving rotation for real-time computer vision and deep learning applications. The advantages of QCNNs74 make them a desirable option for improving efficiency on computational image processing and visualization tasks, especially when combined with a recently developed, effective quaternion convolution technique75,76 built around matrix decompositions. Quaternion convolution is characterized by the Hamilton product of a quaternion input $q$ and a quaternion convolutional kernel $w$. Real-valued convolution operates on the constituent parts of the quaternion feature map: the first group of feature maps captures the real component of the input quaternion feature maps, and three further groups of feature maps represent the imaginary components corresponding to the $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ quaternion elements, respectively. The method performs quaternion-valued convolution on the input quaternion, written as $q = q_{0} + q_{1}\mathbf{i} + q_{2}\mathbf{j} + q_{3}\mathbf{k}$, with the kernel defined as $w = w_{0} + w_{1}\mathbf{i} + w_{2}\mathbf{j} + w_{3}\mathbf{k}$, through the Hamilton product $w \otimes q$. The method increases training speed and decreases the risk of vanishing gradients by correctly randomizing the network weights, and it enhances efficiency in both uniform and nonuniform hazy weather conditions. A Retinex-based13 method, the robust Retinex decomposition network (RRDNet),77 has been implemented for image restoration. The reflectance, illumination, and haze are each estimated by one of the three branches of RRDNet. A gamma transformation78,79 is computed to modify the brightness map and the haze-free reflectance, and the restored output is produced when the modified illumination and the restored reflectance are combined. The robust Retinex model is defined as

$I = R \circ L + H,$

where $I$, $R$, $L$, and $H$ denote the underexposed image, reflectance, illumination, and haze, respectively. The illumination component is modified during the restoration process using a gamma transformation

$\hat{L} = L^{1/\gamma},$

where $\gamma$ denotes the adjustable parameter. The haze-free reflectance can be defined as $\hat{R} = (I - H) \oslash L$ (element-wise division), and after combining it with the modified illumination the outcome is calculated as $\hat{I} = \hat{R} \circ \hat{L}$. In a related line of work, hazy images are divided into detail and base components that are further improved, and hazy and haze-free base parts are mapped to one another.80 These models are expected to incorporate more advanced architectures, such as attention mechanisms and generative adversarial networks, to enhance their performance further.

Attention mechanisms
Incorporating attention mechanisms into dehazing models can prove highly beneficial since they enable the model to concentrate selectively on specific regions of the input image. Different approaches, such as spatial and channel attention, can be employed to integrate attention mechanisms into dehazing models and minimize the feature loss between encoder and decoder modules.81–83 Illustrations of attention mechanism-based dehazing methods include AOD-Net67 and AdaFM-Net.12 AOD-Net67 adopts a multi-scale CNN architecture64 that incorporates a spatial attention module, allowing the model to selectively attend to significant regions of the input image.
In contrast, AdaFM-Net works as an adaptive feature-based modulation technique that amplifies or suppresses features according to their relevance, giving the model the flexibility to adjust to changing levels of importance in different regions of the image.12 The methodology suggests a continuous modulation technique by incorporating an adaptive feature modification layer associated with the modulating approach; a module is also added to the system for adjusting the statistics of the filters to a different restoration level. Following each convolution layer, and before applying the ReLU65 activation function, a depth-wise convolution layer is included, which is formulated as

$\mathrm{AdaFM}(x_{i}) = g_{i} * x_{i} + b_{i}, \qquad i = 1, \ldots, N,$

where $x_{i}$ denotes the $i$'th input feature map of the image, $g_{i}$ and $b_{i}$ are the depth-wise filter and bias, and $N$ denotes the number of feature maps. A batch normalization (BN) layer is also incorporated in the AdaFM-Net module, which is formulated as

$\mathrm{BN}(x) = \gamma\,\frac{x - \mu}{\sigma} + \beta,$

where $\sigma$ and $\mu$ are the standard deviation and mean of the batch, and $\gamma$ and $\beta$ are affine parameters. Zhou et al.84 proposed an attention-based feature fusion network (AFF-Net) for low-light image dehazing. The AFF-Net comprises a feature extraction module, an attention-based residual dense block (ABRDB), and a reconstruction module. The ABRDB includes a spatial attention mechanism that focuses on selected regions of the input image while suppressing the others; the attention mechanism is learned through trainable weights that are updated during training. The light invariant dehazing network (LIDN)85 has been introduced for end-to-end real-time image dehazing. To train the LIDN, a quadruplet loss86 is employed, which results in a sharper dehazed image with fewer artifacts; the method has a fast inference time and high accuracy. Xiao et al.87 developed a blind image dehazing technique based on a deep CNN. The network consists of three modules: perceptual enhancement, feature extraction, and a regression network. The technique can estimate quality scores for real-time image dehazing and effectively learns visual feature representations. The perceptual section, with an attention module and a multiscale convolution module, is used to extract the perceptual features for image dehazing prediction. For the perceptual enhancement module with an input feature map $F$ of size $H \times W \times C$, the channel-wise statistics obtained by average pooling can be defined as

$z_{c} = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} F_{c}(i,j), \qquad \hat{F} = F \odot W_{a}(z),$

where $W_{a}$ denotes the weights of the attention module and $\odot$ is an element-wise multiplication operator. The final score of the feature fusion module for image dehazing is then calculated as a weighted combination of the quality scores of the sample image features, with the weights provided by the image feature module. Yin et al.88 proposed a spatial and channel-wise feature fusion model based on an Adams hierarchy for image dehazing. The network consists of a lightweight spatial attention module followed by an Adams module and a hierarchical feature fusion module. The Adams module (a multi-step optimal control method) uses a gating mechanism to selectively filter out the haze in the input image, whereas the channel-wise attention module enhances the features in the input image by selectively weighting the feature channels. Liu et al.89 introduced an attention-based local multi-scale feature aggregation network (LMFAN) for image dehazing.
The LMFAN consists of three main components: a multi-scale feature extraction module, a multi-scale attention module, and a reconstruction module. The multi-scale attention module comprises a global attention mechanism that detects the overall haze distribution in the input image and a local attention mechanism that identifies local texture information. For each channel, distinct horizontal and vertical encodings are applied to a given input $x$, using spatial pooling kernels of size $(H,1)$ or $(1,W)$. Consequently, the output of channel $c$ at height $h$ can be expressed as

$z_{c}^{h}(h) = \frac{1}{W}\sum_{0 \le i < W} x_{c}(h, i),$

where $h$ and $i$ are the positions of the pixel and $W$ is the width of channel $c$. The feature maps of the network are cascaded with a convolution, and the fused feature map $F$ is defined as

$F = \sigma\big(\mathrm{Conv}([z^{h}, z^{w}])\big),$

where $\sigma$ is the ReLU activation function and $[z^{h}, z^{w}]$ is the concatenated intermediate feature map. Attention mechanism-based methods have exhibited promising results, and continued research in this field has the potential to enhance real-time image dehazing techniques for numerous applications. The significant advantages and limitations are listed in Table 7.

Table 7. Advantages and limitations of different attention-based networks for image enhancement and restoration.
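To make the channel-wise statistics and elementwise reweighting used by these attention modules concrete, the following is a minimal, generic channel-attention block in PyTorch; it is a squeeze-and-excitation-style sketch, not the exact module of any cited network.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic channel attention: global average pooling produces channel-wise
    statistics, a small bottleneck predicts per-channel weights, and the input
    feature map is reweighted elementwise."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # channel-wise statistics -> attention weights
        return x * w.view(b, c, 1, 1)     # elementwise reweighting of the feature map

# Toy usage on a batch of feature maps
features = torch.randn(2, 32, 16, 16)
print(ChannelAttention(32)(features).shape)
```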
In real-time scenarios where the fog is excessively dense, attention-based mechanisms may not perform well since they depend on the visible features in the input image. The lack of adequate information in real-time input images can hinder the ability of the attention mechanism to dehaze effectively, which is particularly challenging in extreme all-weather conditions.

Generative adversarial network-based image dehazing
Generative adversarial networks (GANs) have several uses, including text-to-image and image-to-image translation.66,95,96 A U-net architecture97 has been suggested for the generator, which directly maps the input to the output image and helps restore the signal independently of noise. WaterGAN is a technique that produces an accurate depth map from an underwater image.98 Hierarchically nested GANs enhance both picture fidelity and visual constancy.99 Dehaze-GAN, a reformulation of GANs around an atmospheric scattering model, has been developed by Zhu et al.100 Dehaze-GAN comprises a generator, denoted $G$, and a discriminator, denoted $D$, which undergo alternating training in a competitive process: $D$ is refined to effectively discern synthesized images from genuine ones, and $G$ is trained to fool $D$ by generating counterfeit images. More specifically, the optimal states of $G$ and $D$ are achieved through the following two-player minimax game:

$\min_{G}\max_{D} V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_{z}(z)}[\log(1 - D(G(z)))],$

where $x$ and $z$ denote the input image and random noise, and $G(z)$ is the output of the generator. CycleGAN, which enhances visual quality, has been created by combining perceptual loss and cycle consistency.101 An integrated GAN boosted by an enhancer, EPDN, has been proposed by Qu et al.102 An extension of information-theoretic GANs that can develop disentangled representations without supervision has also been proposed.103 Disentangled representation modeling with GANs allows discriminative and generative representations to be learned and is frequently employed in face and emotion recognition. Owing to the limitations of one-to-one image translation, a multimodal unsupervised image-to-image translation architecture has been developed that decomposes images into style and content codes. A novel strategy that leverages adversarial training for physical-model translation has been presented to address the shortcomings of image-to-image translation.104 Yang et al.105 proposed a deep-learning-based technique using the dark channel and transmission map in the haze model, achieved by creating an energy model in a proximal dehazeNet, as shown in Fig. 9. Cai et al.63 described an end-to-end system, DehazeNet, for transmission estimation. The CNN layers of DehazeNet are designed to embody image dehazing priors, and BReLU (bilateral rectified linear unit) non-linear activation functions are used to enhance the quality of the recovered images. The medium transmission map estimation is introduced to achieve haze removal in dense haze conditions, as presented in Fig. 10. GAN-based techniques have shown great potential for generating high-quality dehazed images of simple scenes and have demonstrated encouraging outcomes in image dehazing tasks. However, GAN-powered methods for image dehazing possess certain limitations: their performance may not be as impressive for more complex scenes containing multiple objects or structures, because the generator employed in such cases may not capture all the intricate details in the input image.
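A minimal PyTorch sketch of the alternating adversarial training described above; the generator and discriminator are tiny placeholder networks, random tensors stand in for real paired data, and the non-saturating generator loss is used in place of the raw minimax form.

```python
import torch
import torch.nn as nn

# Placeholder generator (hazy -> dehazed) and discriminator (image -> real/fake score);
# real dehazing architectures would be encoder-decoders with attention, skip connections, etc.
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 3, padding=1))
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Flatten(), nn.LazyLinear(1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

hazy = torch.rand(2, 3, 32, 32)    # stand-in minibatch of hazy inputs
clear = torch.rand(2, 3, 32, 32)   # stand-in paired haze-free images

# Discriminator step: real images labelled 1, generated images labelled 0
fake = G(hazy).detach()
loss_d = bce(D(clear), torch.ones(2, 1)) + bce(D(fake), torch.zeros(2, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool the discriminator (non-saturating objective)
loss_g = bce(D(G(hazy)), torch.ones(2, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))
```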
4. Experimental Evaluation and Dehazing Datasets
The study of different dehazing techniques provides experimental measurements of the effectiveness of state-of-the-art methods. Quantitative and qualitative measures are evaluated to compare the effectiveness of the dehazing methods. Among the different haze removal approaches, the most frequently used methods, including Fattal,39 Tarel,40 He,36 MSCNN,64 AOD Net,67 dehazeNet,63 Dehaze-GAN,100 SCR-Net,93 QCNN-H,72 Deep CNN,87 LIDN,85 and RRANet,13 have been reviewed and experimentally evaluated on the available dehazing datasets.

4.1. Dehazing Datasets
Dehazing datasets comprise sets of indoor and outdoor images used to train and evaluate algorithms that eliminate fog or haze from real-time images. Usually, these datasets contain a series of paired images, consisting of hazy or foggy versions and corresponding ground-truth clear images, which are employed for training and evaluation. However, obtaining a substantial real-world dataset for dehazing is challenging owing to the difficulty of collecting haze-free counterparts, which limits the size of the available datasets. Table 8 presents the commonly employed dehazing datasets with their respective details.

Table 8. Summary of the most relevant dehazing datasets.
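As a concrete reference for how such paired data are typically consumed, the following minimal PyTorch Dataset sketch assumes a hypothetical directory layout (root/hazy and root/clear with matching filenames); it does not reflect the structure of any specific benchmark listed in Table 8.

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedHazeDataset(Dataset):
    """Paired dehazing dataset: each hazy image has a ground-truth clear image
    with the same filename (assumed layout: root/hazy/*.png and root/clear/*.png)."""
    def __init__(self, root: str, size: int = 256):
        self.hazy_dir = os.path.join(root, "hazy")
        self.clear_dir = os.path.join(root, "clear")
        self.names = sorted(os.listdir(self.hazy_dir))
        self.tf = transforms.Compose([transforms.Resize((size, size)),
                                      transforms.ToTensor()])

    def __len__(self) -> int:
        return len(self.names)

    def __getitem__(self, i: int):
        name = self.names[i]
        hazy = self.tf(Image.open(os.path.join(self.hazy_dir, name)).convert("RGB"))
        clear = self.tf(Image.open(os.path.join(self.clear_dir, name)).convert("RGB"))
        return hazy, clear
```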
4.2. Implementation Details
The different state-of-the-art methods are compared on a Windows 10 operating system, utilizing an Intel® Core™ i5-8265U CPU, 16 GB RAM, and an NVIDIA GeForce MX250 GPU with a 32 GB memory capacity. The deep learning library PyTorch 1.8.1 is used, with Adam serving as the model optimizer. We set the initial learning rate to 0.0001 and the total number of training epochs to 60. To optimize GPU memory and computational efficiency, we set the batch size to 20. Upon completion of the training phase, we conduct model inference in half precision to conserve GPU memory and enhance processing efficiency.

4.3. Quantitative Measurements
Adverse weather causes a number of road and rail accidents each year. For example, seven people were killed when vehicles collided in heavy fog in Haryana: two cars coming from Chandigarh were hit by another car, and the accident took place because visibility was limited by the fog.124 Figure 11 presents the number of accidents caused by adverse weather in India from 2017 to 2021.125–128 Image dehazing is a challenging problem, requiring refined algorithms that can effectively estimate the depth of the scene and remove the scattering and absorption effects of the haze. These algorithms must operate in real time, with limited computational resources, and be robust to variations in lighting, weather conditions, and other environmental factors. The quality of a haze removal algorithm is evaluated using quantitative measurements, as shown in Table 9.

Table 9. Comparison of the quantitative experiments of different dehazing methods for single image dehazing.
In haze removal evaluation, quantitative measurements are divided into two types: those that require a ground-truth image and those that do not. Quality measurements such as the peak signal-to-noise ratio (PSNR),129,130 mean squared error (MSE),130 structural similarity index metric (SSIM),61,129 natural image quality evaluator (NIQE),131 visibility index (VI),132–137 and realness index (RI)72,85,87 fall into one or the other category. The ground-truth image is a haze-free image of the same scene as the original hazy image, whereas the dehazed image is produced when a haze removal technique is applied to the hazy image. The MSE130 measures the error between the ground-truth and dehazed images and is expressed as

$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big[G(i,j) - D(i,j)\big]^{2},$

where $G(i,j)$ represents the pixel intensities of the ground-truth image, $D(i,j)$ the pixel values of the dehazed image, and $i$ and $j$ the image pixel coordinates. The PSNR129,130 is computed from the MSE after applying the dehazing method; higher PSNR values indicate that the visibility of the image is enhanced, and the PSNR can be represented as

$\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{L^{2}}{\mathrm{MSE}}\right),$

where $L$ is the maximum possible pixel intensity. The SSIM129 measures the similarity between images with and without haze and always lies between 0 and 1; when the SSIM value is close to 1, the two images are largely similar. The SSIM score around the pixels can be calculated as

$\mathrm{SSIM}(x,y) = \frac{(2\mu_{x}\mu_{y} + C_{1})(2\sigma_{xy} + C_{2})}{(\mu_{x}^{2} + \mu_{y}^{2} + C_{1})(\sigma_{x}^{2} + \sigma_{y}^{2} + C_{2})},$

where $\mu_{x}$ and $\sigma_{x}$ represent the mean and standard deviation of the dehazed image $x$, $\mu_{y}$ and $\sigma_{y}$ represent the mean and standard deviation of the reference image $y$, $\sigma_{xy}$ is the covariance between $x$ and $y$, and $C_{1}$ and $C_{2}$ are stabilizing constants. The NIQE131 is built around a quality-aware set of statistical attributes derived from a simple space-domain model of typical natural scenes. It evaluates an image's naturalness using a model of the human visual system's response to natural scene images, comparing the statistical features of the image (the expected values, scaling factors, and standard deviations of the features) with those of natural scene images. The VI132–138 evaluates the quality of a hazy or dehazed image by comparing its visibility to that of a clear reference image; this resemblance is calculated by analyzing the transmission and the gradients. Koschmieder's law133,134 reveals that the degree of haze is directly related to the transmission, so the similarity between the transmission maps of the hazy image and the reference can be used to estimate the amount of haze present. If the transmission maps135 of the dehazed and hazy images are $t_{d}$ and $t_{h}$ at pixel $x$, the transmission similarity is defined as

$S_{t}(x) = \frac{2\,t_{d}(x)\,t_{h}(x) + c}{t_{d}(x)^{2} + t_{h}(x)^{2} + c},$

where $c$ is a constant with a positive value chosen to improve stability. The transmission map itself is defined as

$t(x) = e^{-\beta d(x)},$

where $\beta$ and $d(x)$ are the extinction coefficient and the observation distance at $x$. The gradient features of the images are explored through the gradient module, defined as

$G(x) = \sqrt{G_{h}(x)^{2} + G_{v}(x)^{2}},$

where $G_{h}$ and $G_{v}$ are the horizontal and vertical partial derivatives of the image at $x$. The gradient modules of the dehazed and hazy images are compared in the same way as the transmissions, and an adjustable parameter balances the contributions of the gradient module and the transmission map in the final VI score.
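The full-reference metrics defined above can be summarized in a short NumPy sketch; the SSIM here is a simplified global variant computed without local windows, and the toy images are for illustration only.

```python
import numpy as np

def mse(gt, pred):
    return float(np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2))

def psnr(gt, pred, peak=1.0):
    """PSNR in dB; peak is 1.0 for images scaled to [0, 1] (255 for 8-bit images)."""
    return float(10.0 * np.log10(peak ** 2 / mse(gt, pred)))

def ssim_global(gt, pred, peak=1.0):
    """Simplified SSIM computed over the whole image instead of local windows;
    C1 and C2 are the usual stabilizing constants."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_x, mu_y = gt.mean(), pred.mean()
    var_x, var_y = gt.var(), pred.var()
    cov = ((gt - mu_x) * (pred - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

# Toy ground truth and a slightly perturbed "dehazed" result
gt = np.random.rand(64, 64, 3)
dehazed = np.clip(gt + 0.02 * np.random.randn(*gt.shape), 0, 1)
print(psnr(gt, dehazed), ssim_global(gt, dehazed))
```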
Several dehazing techniques13,72,85,87,132–137,139–147 introduce artifacts or distort the image, which reduces visibility. Consequently, since the initial hazy images are original images without these degradations, the evaluation of a dehazing technique must also consider the realism of the dehazing outcomes. Thus, the RI72,85,87,138 evaluates the dehazed image's realism by utilizing the similarity between the dehazed image and the haze-free reference in feature spaces. The RI can be written in the pooled form

$\mathrm{RI} = \frac{\sum_{x} \big[S_{C}(x)\big]^{\lambda}\,\mathrm{PC}_{m}(x)}{\sum_{x} \mathrm{PC}_{m}(x)},$

where $\mathrm{PC}_{m}(x)$ is the phase congruency module145 of the feature similarity index (FSIM),146 taken as the maximum phase congruency over the two images, $\lambda$ is the adjustable parameter between the phase congruency and chrominance feature modules, and $S_{C}(x)$ is the total similarity of the chrominance features, defined as

$S_{C}(x) = \frac{2 I_{1}(x) I_{2}(x) + T_{1}}{I_{1}^{2}(x) + I_{2}^{2}(x) + T_{1}} \cdot \frac{2 Q_{1}(x) Q_{2}(x) + T_{2}}{Q_{1}^{2}(x) + Q_{2}^{2}(x) + T_{2}},$

where $I_{1}$, $Q_{1}$ and $I_{2}$, $Q_{2}$ are chrominance features extracted from the two different images and $T_{1}$ and $T_{2}$ are positive constants. The quantitative measurements of the different dehazing techniques are described in Table 9. A comparison of a wide range of cutting-edge methods, including Fattal,39 Tarel,40 He,36 MSCNN,64 AOD Net,67 dehazeNet,63 Dehaze-GAN,100 SCR-Net,93 QCNN-H,72 Deep CNN,87 LIDN,85 and RRANet,13 was conducted on different benchmark datasets. The authors of these methods provided models trained on indoor and outdoor dehazing datasets. The NIQE131 has been employed to measure the naturalness of the haze-free image, where lower results indicate better visual quality, and the VI132–137 and RI72,85,87 were additionally employed to measure the accuracy of real-time dehazed images. Higher SSIM, PSNR, VI, and RI scores demonstrate greater efficiency, whereas for NIQE a lower result indicates improved visibility. Table 9 highlights the methods with the highest visibility restoration in terms of PSNR, SSIM, VI, and RI on the different benchmark datasets: He,36 dehazeNet,63 Dehaze-GAN,100 SCR-Net,93 LIDN,85 QCNN-H,72 RRANet,13 and Deep CNN87 attain higher SSIM and PSNR values (bold) and also outperform the others on the perception metrics NIQE, VI, and RI (bold) on the NYU2,67,105,106 Make3D,107 RESIDE,71,108–113 HazeRD,114–116 SOTS,117,118 O-Haze,113,119–121 D-Hazy,113 I-Haze,113,120,121 and NH-Haze120–122 datasets. Table 9 also demonstrates that recently introduced methodologies, such as QCNN-H,72 Deep CNN,87 LIDN,85 and RRANet,13 surpass the other dehazing methods (Fattal,39 Tarel,40 He,36 AOD Net,67 dehazeNet63) in terms of VI and RI by effectively handling non-homogeneous weather and enhancing sharpness through their training datasets. A comparison of the time consumption of the benchmark dehazing methods for images of resolution 640 × 480 is also provided in Table 9 for the different indoor and outdoor datasets. The RRANet13 method has a faster processing time on the NVIDIA GeForce MX250 GPU for the NYU2,67,105,106 I-Haze,113,120,121 and NH-Haze120–122 datasets. Similarly, the QCNN-H72 and SCR-Net93 methods have faster processing times on the Make3D,107 SOTS,117,118 RESIDE,71,108–113 and HazeRD114–116 datasets, executed on the same GPU, for real-time dehazing applications.

4.4. Qualitative Measurements
Several haze removal techniques from the aforementioned categories have been chosen for performance analysis, to compare the effects of the various techniques and to test them through qualitative evaluation (Figs. 5–10).
Different real-time outdoor hazy images have been selected for the experimental evaluation, comparing the methods Fattal,39 Tarel,40 He,36 AOD Net,67 dehazeNet,63 Dehaze-GAN,100 SCR-Net,93 QCNN-H,72 Deep CNN,87 LIDN,85 and RRANet13 in Figs. 12–18 on the NYU2,67,105,106 RESIDE,71,108–113 HazeRD,114–116 SOTS,117,118 O-Haze,113,119–121 D-Hazy,113 and NH-Haze120–122 datasets, respectively. Figures 12(c) and 12(e) demonstrate that He36 and AOD Net67 generate unwanted noise while also losing the original colors of the dehazed images. On the other hand, the Dehaze-GAN,100 QCNN-H,72 and Deep CNN87 methods effectively remove haze from the real-time image without color loss, as evidenced by Figs. 12(g), 12(i), and 12(j), but these methodologies do not work correctly under non-uniform weather conditions on the NYU267,105,106 dataset. LIDN85 frequently leaves hazy areas behind and is inconsistent in removing haze, as shown in Fig. 12(k), and RRANet13 has trouble when the haze is dense, as shown in Fig. 12(l). As shown in Figs. 13(b)–13(d), non-CNN methods such as He,36 Fattal,39 and Tarel40 are more likely to boost the contrast of hazy images excessively; as a result, these methods generate a large number of artifacts that significantly reduce the realism of the restored images. On the other hand, as shown in Figs. 13(e)–13(l), CNN-based techniques such as AOD Net,67 QCNN-H,72 LIDN,85 Deep CNN,87 RRANet,13 dehazeNet,63 SCR-Net,93 and Dehaze-GAN100 can generate outcomes that are very similar to real-world images; accordingly, virtually all CNN-based methods perform better than non-CNN-based methods in terms of RI and NIQE. In Figs. 13(d), 13(f), 12(i), and 12(j), the Tarel,40 dehazeNet,63 QCNN-H,72 and Deep CNN87 methods were visually evaluated and demonstrate their ability to eliminate the color cast and fog from the real-time image. The fast visibility restoration40 method is based on optimized changes to the parameters of the transfer function. The dehazeNet63 method produces an enhanced image that only partially removes the color cast and is ineffective in recovering the genuine color information. As observed in Figs. 14(e)–14(h), which use the AOD Net,67 dehazeNet,63 Dehaze-GAN,100 and SCR-Net93 methodologies, it becomes evident that the visibility of the images is significantly enhanced; in contrast, when evaluated using SSIM and PSNR, most other measurement approaches fail to rank these improvements effectively. In addition, VI and RI are measured for QCNN-H,72 LIDN,85 Deep CNN,87 and RRANet,13 which effectively increase visibility in Figs. 14(i)–14(l) on outdoor heterogeneous images, although the images in Figs. 14(i) and 14(k) have some artifacts and are slightly degraded. The attention-based enhancement approach eliminates the color cast and reinstates the original color features, as it considers both color and contrast as the primary parameters for enhancement. However, the methods fall short of enhancing the overall brightness of the degraded input image, as shown in Figs. 14(c), 14(d), and 14(g), and the images in Figs. 14(h) and 14(j) also experience severe degradation and glaring halo effects. In contrast to other metrics, the RI can evaluate these differences and produce results that are consistent with human perception. Figure 15 shows the evaluation of real-time non-homogeneous images by different state-of-the-art methods on the SOTS117,118 dataset.
However, Fattal,39 He,36 and Tarel40 fail to produce fog-free images under non-homogeneous conditions, as shown in Figs. 15(b) and 15(d). Conventional enhancement methods are not appropriate for defogging because they cannot address the degradation caused by fog, which is closely associated with the depth of the scene. Although the QCNN-H,72 RRANet,13 and LIDN85 techniques exhibit strong performance on the O-Haze113,119–121 dataset, as shown in Figs. 16(i), 16(l), and 16(k), this success is primarily attributable to overfitting; when applied to the genuine SOTS117,118 dataset, these methods prove to be less effective. Figures 16 and 17 show the real-time qualitative evaluation on dense foggy images, where the Tarel40 and AOD Net67 results have a visibility range of no more than 100 m and strongly distort the original image's color. The SCR-Net93 and RRANet13 methods successfully eliminate fog while exhibiting fewer color distortions, as shown in Figs. 16(h), 16(l), 17(h), and 17(l). Moreover, the dehazed images produced by these methods (Figs. 16 and 17) resemble the ground-truth haze-free images of the O-Haze113,119–121 and D-Hazy113 datasets, respectively. A qualitative analysis of the various methods on real-time dense foggy images from the NH-Haze120–122 dataset is presented in Fig. 18. The outcomes of Fattal39 and dehazeNet63 exhibit color distortion, and the result produced by Dehaze-GAN100 in Fig. 18(g) suffers from over-brightening compared with the original haze-free image. Although AOD Net67 and SCR-Net93 successfully remove the fog, some fog residue remains in the defogged output. It can be seen that the QCNN-H,72 LIDN,85 Deep CNN,87 and RRANet13 techniques perform better than all other compared approaches and are able to preserve the image's color and contrast, as shown in Figs. 18(i)–18(l) on the NH-Haze120–122 dataset. The main focus of the qualitative experiment is to restore image visibility and enhance image quality using the available datasets described in Table 8. All of the methods improve visibility to some degree under different haze conditions. The literature also provides some dehazing applications on standard datasets that are used in learning-based end-to-end haze removal procedures; the applications and datasets used in the learning-based haze removal methods are displayed in Table 10.

Table 10. Different dehazing methods validated on benchmark datasets and applications.
5.Challenges and DiscussionsHaze removal techniques are suitable for various vision-based applications. The limitations of the existing techniques have already been mentioned in Tables 4–7. Thus, the dehazing process alone is insufficient to produce a clear vision in adverse weather conditions. A clear view can be obtained only through airlight estimation and by creating an environmental model based on different weather sensors. The following assumptions have been made in all the reviewed image dehazing methods for clear visualization in adverse weather.
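As an illustration of the environment-modeling point above, depth-based benchmark datasets (e.g., D-Hazy and HazeRD) synthesize haze from scene depth with the standard scattering formulation; a minimal sketch is given below, where the scattering coefficient beta and the airlight value are illustrative assumptions rather than values prescribed by any of the reviewed methods.

```python
# Hedged sketch: synthesizing a hazy image from a clear image and a depth map with
# I = J * t + A * (1 - t), where t = exp(-beta * depth). beta and A are illustrative.
import numpy as np

def synthesize_haze(clear, depth, beta=1.2, airlight=0.8):
    """clear: float RGB in [0, 1]; depth: per-pixel scene depth (same H x W)."""
    t = np.exp(-beta * depth)[..., None]        # transmission falls with depth
    hazy = clear * t + airlight * (1.0 - t)     # attenuation plus airlight
    return np.clip(hazy, 0.0, 1.0), t.squeeze()
```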
6.Conclusions and Future WorkThe progress of methods for removing haze from images is discussed in this study. The limitations and advantages of haze removal methods have been presented, which motivate future research. Haze removal techniques are applicable to many image processing tasks under adverse weather conditions. Satellite imagery, intelligent transportation systems, underwater computer vision, image recognition, outdoor monitoring, object recognition, and information extraction are some important broad areas where haze removal methods are used. This review divides dehazing methods into two major categories, single-image and multiple-image dehazing, each of which is further divided into two sub-categories: single-image approaches are classified as non-learning-based and learning-based, whereas multiple-image dehazing is categorized into polarization-based and scene-depth-based methods. Furthermore, a step-by-step evaluation of standard methodologies is described for analyzing image dehazing and defogging performance, and a survey of recently released image dehazing datasets is summarized. The in-depth review and experimental results will assist readers in understanding the various dehazing approaches and will aid the creation of more advanced dehazing procedures. Future research will therefore concentrate on improving depth estimation and visual quality restoration. Fast and accurate estimation of airlight increases both processing speed and perceptual image quality. CNNs and GANs have achieved significant success in several higher-level image-processing applications. Recent research is no longer based solely on the atmospheric scattering model of airlight and attenuation; instead, it employs end-to-end attention-based models that learn a direct mapping from hazy to haze-free images. However, current learning-based techniques are unable to restore the fine details of sky regions in foggy images, particularly in non-homogeneous fog. In the future, two different deep neural networks will be combined with a transformer and an end-to-end attention module to recover a clear scene from a hazy image without feature loss. Compliance with Ethical StandardsThe authors have no conflicts of interest to declare that are relevant to the content of this article. Data, Code, and Materials Availability StatementThe results presented in this article were created through real-time experiments using benchmark datasets such as NYU2, Make3D, RESIDE, HazeRD, SOTS, O-Haze, D-Hazy, I-Haze, and NH-Haze, which are publicly available and can be accessed after prior registration. Since this article is an outcome of an ongoing R&D project and owing to certain intellectual property right restrictions, the code for the experimental evaluation and relevant materials will be shared in a GitHub repository at https://github.com/sahadeb73 only after the effective completion of the project. ReferencesF. Hu et al.,
“Dehazing for images with sun in the sky,”
J. Electron. Imaging, 28 043016 https://doi.org/10.1117/1.JEI.28.4.043016 JEIME5 1017-9909
(2019).
Google Scholar
S. K. Nayar and S. G. Narasimhan,
“Vision in bad weather,”
in Proc. Seventh IEEE Int. Conf. Comput. Vis.,
820
–827
(1999). https://doi.org/10.1109/ICCV.1999.790306 Google Scholar
S. G. Narasimhan and S. K. Nayar,
“Chromatic framework for vision in bad weather,”
in Proc. IEEE Conf. Comput. Vis. and Pattern Recognit. (CVPR 2000) (Cat. No. PR00662),
598
–605
(2000). https://doi.org/10.1109/CVPR.2000.855874 Google Scholar
F. Cozman and E. Krotkov,
“Depth from scattering,”
in Proc. IEEE Comput. Soc. Conf. Comput. Vis. and Pattern Recognit.,
801
–806
(1997). https://doi.org/10.1109/CVPR.1997.609419 Google Scholar
J. A. Ibáñez, S. Zeadally and J. Contreras-Castillo,
“Sensor technologies for intelligent transportation systems,”
Sensors, 18
(4), 1212 https://doi.org/10.3390/s18041212 SNSRES 0746-9462
(2018).
Google Scholar
Y. Dong et al.,
“Framework of degraded image restoration and simultaneous localization and mapping for multiple bad weather conditions,”
Opt. Eng., 62
(4), 048102 https://doi.org/10.1117/1.OE.62.4.048102
(2023).
Google Scholar
C. Dannheim et al.,
“Weather detection in vehicles by means of camera and LIDAR systems,”
in Sixth Int. Conf. Comput. Intell., Commun. Syst. and Netw.,
186
–191
(2014). https://doi.org/10.1109/CICSyN.2014.47 Google Scholar
A. M. Kurup and J. P. Bos,
“Winter adverse driving dataset for autonomy in inclement winter weather,”
Opt. Eng., 62
(3), 031207 https://doi.org/10.1117/1.OE.62.3.031207
(2023).
Google Scholar
S. Yang, G. Cui and J. Zhao,
“Remote sensing image uneven haze removal based on correction of saturation map,”
J. Electron. Imaging, 30
(6), 063033 https://doi.org/10.1117/1.JEI.30.6.063033 JEIME5 1017-9909
(2021).
Google Scholar
Z. Zheng et al.,
“Image restoration of hybrid time delay and integration camera system with residual motion,”
Opt. Eng., 50
(6), 067012 https://doi.org/10.1117/1.3593156
(2011).
Google Scholar
S. G. Narasimhan and S. K. Nayar,
“Interactive (De)weathering of an image using physics models,”
in IEEE Workshop on Color and Photometr. Methods in Comput. Vis., in Conjunction with ICCV,
(2003). Google Scholar
J. He, C. Dong and Y. Qiao,
“Modulating image restoration with continual levels via adaptive feature modification layers,”
in IEEE/CVF Conf. Comput. Vis. and Pattern Recognit. (CVPR),
11048
–11056
(2019). https://doi.org/10.1109/CVPR.2019.01131 Google Scholar
H. Du, Y. Wei and B. Tang,
“RRANet: low-light image enhancement based on Retinex theory and residual attention,”
Proc. SPIE, 12610 126101Q https://doi.org/10.1117/12.2671262 PSISDG 0277-786X
(2023).
Google Scholar
C. Rablau,
“LIDAR: a new (self-driving) vehicle for introducing optics to broader engineering and non-engineering audiences,”
in Educ. and Train. in Opt. and Photonics,
11143_138
(2019). Google Scholar
R. P. Loce et al.,
“Computer vision in roadway transportation systems: a survey,”
J. Electron. Imaging, 22
(4), 041121 https://doi.org/10.1117/1.JEI.22.4.041121 JEIME5 1017-9909
(2013).
Google Scholar
H. R. Rasshofer, M. Spies and H. Spies,
“Influences of weather phenomena on automotive laser radar systems,”
Adv. Radio Sci., 9 49
–60 https://doi.org/10.5194/ars-9-49-2011
(2011).
Google Scholar
N. Pinchon et al.,
“All-weather vision for automotive safety: which spectral band?,”
in Advanced Microsystems for Automotive Applications 2018,
(2019). https://doi.org/10.1007/978-3-319-99762-9_1 Google Scholar
K. L. Coulson,
“Polarization of light in the natural environment,”
Proc. SPIE, 1166 2
–10 https://doi.org/10.1117/12.962873 PSISDG 0277-786X
(1989).
Google Scholar
D. J. Jobson, Z. Rahman and G. A. Woodell,
“A multiscale retinex for bridging the gap between color images and the human observation of scenes,”
IEEE Trans. Image Process., 6
(7), 965
–976 https://doi.org/10.1109/83.597272 IIPRE4 1057-7149
(1997).
Google Scholar
J. P. Oakley and B. L. Satherley,
“Improving image quality in poor visibility conditions using a physical model for degradation,”
IEEE Trans. Image Process., 7
(2), 167
–179 https://doi.org/10.1109/83.660994 IIPRE4 1057-7149
(1998).
Google Scholar
Y. H. Fu et al.,
“Single-frame-based rain removal via image decomposition,”
in IEEE Int. Conf. Acoust., Speech and Signal Process. (ICASSP),
1453
–1456
(2011). https://doi.org/10.1109/ICASSP.2011.5946766 Google Scholar
T. W. Huang and G. M. Su,
“Revertible guidance image based image detail enhancement,”
in IEEE Int. Conf. Image Process. (ICIP),
1704
–1708
(2021). https://doi.org/10.1109/ICIP42928.2021.9506374 Google Scholar
J. L. Starck et al.,
“Morphological component analysis,”
Proc. SPIE, 5914 209
–223 https://doi.org/10.1117/12.615237 PSISDG 0277-786X
(2005).
Google Scholar
D. A. Huang et al.,
“Context-aware single image rain removal,”
in IEEE Int. Conf. on Multimedia and Expo (ICME),
164
–169
(2012). https://doi.org/10.1109/ICME.2012.92 Google Scholar
J. Xu et al.,
“Removing rain and snow in a single image using guided filter,”
in IEEE Int. Conf. Comput. Sci. and Autom. Eng. (CSAE),
304
–307
(2012). https://doi.org/10.1109/CSAE.2012.6272780 Google Scholar
D. Y. Chen, C. C. Chen and L. W. Kang,
“Visual depth guided image rain streaks removal via sparse coding,”
in Int. Symp. Intell. Signal Process. and Commun. Syst.,
151
–156
(2012). https://doi.org/10.1109/ISPACS.2012.6473471 Google Scholar
L. Zhang et al.,
“Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,”
J. Electron. Imaging, 20
(2), 023016 https://doi.org/10.1117/1.3600632 JEIME5 1017-9909
(2011).
Google Scholar
D. Eigen, D. Krishnan and R. Fergus,
“Restoring an image taken through a window covered with dirt or rain,”
in IEEE Int. Conf. Comput. Vis.,
633
–640
(2013). https://doi.org/10.1109/ICCV.2013.84 Google Scholar
F. Sun et al.,
“Single-image dehazing based on dark channel prior and fast weighted guided filtering,”
J. Electron. Imaging, 30
(2), 021005 https://doi.org/10.1117/1.JEI.30.2.021005 JEIME5 1017-9909
(2021).
Google Scholar
Q. Zhang et al.,
“Dictionary learning method for joint sparse representation-based image fusion,”
Opt. Eng., 52
(5), 057006 https://doi.org/10.1117/1.OE.52.5.057006
(2013).
Google Scholar
Z. Chen, T. Ellis and S. A. Velastin,
“Vision-based traffic surveys in urban environments,”
J. Electron. Imaging, 25
(5), 051206 https://doi.org/10.1117/1.JEI.25.5.051206 JEIME5 1017-9909
(2016).
Google Scholar
Z. Zhao and G. Feng,
“Efficient algorithm for sparse coding and dictionary learning with applications to face recognition,”
J. Electron. Imaging, 24
(2), 023009 https://doi.org/10.1117/1.JEI.24.2.023009 JEIME5 1017-9909
(2015).
Google Scholar
N. Hautiere et al.,
“Blind contrast enhancement assessment by gradient ratioing at visible edges,”
Image Anal. Stereol. J., 27
(2), 87
–95 https://doi.org/10.5566/ias.v27.p87-95
(2008).
Google Scholar
A. Kumar, R. K. Jha and N. K. Nishchal,
“Dynamic stochastic resonance and image fusion based model for quality enhancement of dark and hazy images,”
J. Electron. Imaging, 30
(6), 063008 https://doi.org/10.1117/1.JEI.30.6.063008 JEIME5 1017-9909
(2021).
Google Scholar
L. Xuan and Z. Mingjun,
“Underwater color image segmentation method via RGB channel fusion,”
Opt. Eng., 56
(2), 023101 https://doi.org/10.1117/1.OE.56.2.023101
(2017).
Google Scholar
K. He, J. Sun and X. Tang,
“Single image haze removal using dark channel prior,”
IEEE Trans. Pattern Anal. Mach. Intell., 33
(12), 2341
–2353 https://doi.org/10.1109/TPAMI.2010.168 ITPIDJ 0162-8828
(2010).
Google Scholar
S. Xu and X. P. Liu,
“Adaptive image contrast enhancement algorithm for point-based rendering,”
J. Electron. Imaging, 24
(2), 023033 https://doi.org/10.1117/1.JEI.24.2.023033 JEIME5 1017-9909
(2015).
Google Scholar
A. K. Tripathi and S. Mukhopadhyay,
“Single image fog removal using anisotropic diffusion,”
IET Image Process., 6
(7), 966
–975 https://doi.org/10.1049/iet-ipr.2011.0472
(2012).
Google Scholar
R. Fattal,
“Single image dehazing,”
in Int. Conf. on Comput. Graph. and Interactive Tech. Arch. ACM SIGGRAPH,
1
–9
(2008). Google Scholar
J. P. Tarel and N. Hautiere,
“Fast visibility restoration from a single color or grey level image,”
in IEEE Int. Conf. on Comput. Vis.,
2201
–2208
(2009). https://doi.org/10.1109/ICCV.2009.5459251 Google Scholar
J. Zhang et al.,
“Local albedo-insensitive single image dehazing,”
Vis. Comput., 26
(6–8), 761
–768 https://doi.org/10.1007/s00371-010-0444-z VICOE5 0178-2789
(2010).
Google Scholar
M. J. Rakovic et al.,
“Light backscattering polarization patterns from turbid media: theory and experiment,”
Appl. Opt., 38
(15), 3399
–3408 https://doi.org/10.1364/AO.38.003399 APOPAI 0003-6935
(1999).
Google Scholar
W. Zhang et al.,
“Review of passive polarimetric dehazing methods,”
Opt. Eng., 60
(3), 030901 https://doi.org/10.1117/1.OE.60.3.030901
(2021).
Google Scholar
R. Luzón-González, J. Nieves and J. Romero,
“Recovering of weather degraded images based on RGB response ratio constancy,”
Appl. Opt., 54 B222
–B231 https://doi.org/10.1364/AO.54.00B222 APOPAI 0003-6935
(2015).
Google Scholar
Y. Wang and C. Fan,
“Multiscale fusion of depth estimations for haze removal,”
in IEEE Int. Conf. Digit. Signal Process. (DSP),
882
–886
(2015). https://doi.org/10.1109/ICDSP.2015.7252003 Google Scholar
Z. Rong and W. L. Jun,
“Improved wavelet transform algorithm for single image dehazing,”
Optik-Int. J. Light Electron. Opt., 125
(13), 3064
–3066 https://doi.org/10.1016/j.ijleo.2013.12.077
(2014).
Google Scholar
F. A. Dharejo et al.,
“A color enhancement scene estimation approach for single image haze removal,”
IEEE Geosci. Remote Sens. Lett., 17
(9), 1613
–1617 https://doi.org/10.1109/LGRS.2019.2951626
(2020).
Google Scholar
G. Mandal, P. De and D. Bhattacharya,
“A real-time fast defogging system to clear the vision of driver in foggy highway using minimum filter and gamma correction,”
Sādhanā, 45 40 https://doi.org/10.1007/s12046-020-1282-y
(2020).
Google Scholar
C. Xiao and J. Gan,
“Fast image dehazing using guided joint bilateral filter,”
Vis. Comput., 28
(6), 713
–721 https://doi.org/10.1007/s00371-012-0679-y VICOE5 0178-2789
(2012).
Google Scholar
Z. Li et al.,
“Weighted guided image filtering,”
IEEE Trans. Image Process., 24
(1), 120
–129 https://doi.org/10.1109/TIP.2014.2371234 IIPRE4 1057-7149
(2015).
Google Scholar
I. Riaz et al.,
“Single image dehazing via reliability guided fusion,”
J. Vis. Commun. Image Represent., 40 85
–97 https://doi.org/10.1016/j.jvcir.2016.06.011 JVCRE7 1047-3203
(2016).
Google Scholar
F. Fang, F. Li and T. Zeng,
“Single image dehazing and denoising: a fast variational approach,”
SIAM J. Imaging Sci., 7
(2), 969
–996 https://doi.org/10.1137/130919696
(2014).
Google Scholar
A. Galdran, J. Vazquez-Corral and D. Pardo,
“Enhanced variational image dehazing,”
SIAM J. Imaging Sci., 8
(3), 1519
–1546 https://doi.org/10.1137/15M1008889
(2015).
Google Scholar
F. Guo, H. Peng and J. Tang,
“Genetic algorithm-based parameter selection approach to single image defogging,”
Inf. Process. Lett., 116
(10), 595
–602 https://doi.org/10.1016/j.ipl.2016.04.013 IFPLAT 0020-0190
(2016).
Google Scholar
Z. Sun, G. Bebis and R. Miller,
“On-road vehicle detection: a review,”
IEEE Trans. Pattern Anal. Mach. Intell., 28
(5), 694
–711 https://doi.org/10.1109/TPAMI.2006.104 ITPIDJ 0162-8828
(2006).
Google Scholar
R. Singh, A. K. Dubey and R. Kapoor,
“A review on image restoring techniques of bad weather images,”
in IJCA Proc. Int. Conf. Comput. Syst. and Math. Sci.,
23
–26
(2017). Google Scholar
S. M. Shorman and S. A. Pitchay,
“A review of rain streaks detection and removal techniques for outdoor single image,”
ARPN J. Eng. Appl. Sci., 11
(10), 6303
–6308
(2016).
Google Scholar
J. G. Walker, P. C. Y. Chang and K. I. Hopcraft,
“Visibility depth improvement in active polarization imaging in scattering media,”
Appl. Opt., 39 4933
–4941 https://doi.org/10.1364/AO.39.004933 APOPAI 0003-6935
(2000).
Google Scholar
H. Bu and J. P. Oakley,
“Correction of simple contrast lost in color images,”
IEEE Trans. Image Process., 16
(2), 511
–522 https://doi.org/10.1109/TIP.2006.887736 IIPRE4 1057-7149
(2007).
Google Scholar
Q. Zhu, J. Mai and L. Shao,
“A fast single image haze removal algorithm using color attenuation prior,”
IEEE Trans. Image Process., 24
(11), 3522
–3533 https://doi.org/10.1109/TIP.2015.2446191 IIPRE4 1057-7149
(2015).
Google Scholar
T. Zhang and Y. Chen,
“Single image dehazing based on improved dark channel prior,”
Lect. Notes Comput. Sci., 9142 205
–212 https://doi.org/10.1007/978-3-319-20469-7_23 LNCSD9 0302-9743
(2015).
Google Scholar
R. T. Tan,
“Visibility in bad weather from a single image,”
in IEEE Conf. Comput. Vis. and Pattern Recognit.,
1
–8
(2008). https://doi.org/10.1109/CVPR.2008.4587643 Google Scholar
B. Cai et al.,
“DehazeNet: an end-to-end system for single image haze removal,”
IEEE Trans. Image Process., 25
(11), 5187–5198 https://doi.org/10.1109/TIP.2016.2598681 IIPRE4 1057-7149
(2016).
Google Scholar
W. Ren, S. Liu and H. Zhang,
“Single image dehazing via multiscale convolutional neural network,”
Lect. Notes Comput. Sci., 9906 154
–169 https://doi.org/10.1007/978-3-319-46475-6_10 LNCSD9 0302-9743
(2016).
Google Scholar
S. Dittmer, E. J. King and P. Maass,
“Singular values for ReLU layers,”
IEEE Trans. Neural Netw. Learn. Syst., 31
(9), 3594
–3605 https://doi.org/10.1109/TNNLS.2019.2945113
(2020).
Google Scholar
P. Kaushik et al.,
“Design and analysis of high-performance real-time image dehazing using convolutional neural and generative adversarial networks,”
Proc. SPIE, 12438 163
–170 https://doi.org/10.1117/12.2651023 PSISDG 0277-786X
(2023).
Google Scholar
B. Li et al.,
“All in one network for dehazing and beyond,”
in ICCV,
(2017). Google Scholar
A. Roy, L. D. Sharma and A. K. Shukla,
“Multiclass CNN-based adaptive optimized filter for removal of impulse noise from digital images,”
Vis. Comput., https://doi.org/10.1007/s00371-022-02697-7 VICOE5 0178-2789
(2022).
Google Scholar
A. Roy et al.,
“Combination of adaptive vector median filter and weighted mean filter for removal of high-density impulse noise from colour images,”
IET Image Process., 11 352
–361 https://doi.org/10.1049/iet-ipr.2016.0320
(2017).
Google Scholar
D. Tian and Z. Shi,
“MPSO: modified particle swarm optimization and its applications,”
Swarm Evol. Comput., 41 49
–68 https://doi.org/10.1016/j.swevo.2018.01.011
(2018).
Google Scholar
W. Ren et al.,
“Gated fusion network for single image dehazing,”
in Proc. IEEE Conf. Comput. Vis. and Pattern Recognit.,
3253
–3261
(2018). https://doi.org/10.1109/CVPR.2018.00343 Google Scholar
V. Frants, S. Agaian and K. Panetta,
“QCNN-H: single-image dehazing using quaternion neural networks,”
IEEE Trans. Cybern., 53
(9), 5448
–5458 https://doi.org/10.1109/TCYB.3238640
(2023).
Google Scholar
V. Frants and S. Agaian,
“Weather removal with a lightweight quaternion Chebyshev neural network,”
Proc. SPIE, 12526 125260V https://doi.org/10.1117/12.2664858 PSISDG 0277-786X
(2023).
Google Scholar
V. Frants, S. Agaian and K. Panetta,
“QSAM-Net: rain streak removal by quaternion neural network with self-attention module,”
IEEE Trans. Multimedia, https://doi.org/10.1109/TMM.2023.3271829
(2023).
Google Scholar
A. Cariow and G. Cariowa,
“Fast algorithms for quaternion-valued convolutional neural networks,”
IEEE Trans. Neural Netw. Learn. Syst., 32
(1), 457
–462 https://doi.org/10.1109/TNNLS.2020.2979682
(2021).
Google Scholar
A. P. Giotis, G. Retsinas and C. Nikou,
“Quaternion generative adversarial networks for inscription detection in Byzantine monuments,”
in Proc. Pattern Recognit. ICPR Int. Workshops Challenges,
171
–184
(2021). https://doi.org/10.1007/978-3-030-68787-8_12 Google Scholar
A. Zhu et al.,
“Zero-shot restoration of underexposed images via robust retinex decomposition,”
in IEEE Int. Conf. Multimedia and Expo (ICME),
1
–6
(2020). https://doi.org/10.1109/ICME46284.2020.9102962 Google Scholar
P. Wang et al.,
“Parameter estimation of image gamma transformation based on zero-value histogram bin locations,”
Signal Process. Image Commun., 64 33
–45 https://doi.org/10.1016/j.image.2018.02.011 SPICEF 0923-5965
(2018).
Google Scholar
H. Zhou et al.,
“Image illumination adaptive correction algorithm based on a combined model of bottom-hat and improved gamma transformation,”
Arab. J. Sci. Eng., 48 3947
–3960 https://doi.org/10.1007/s13369-022-07368-2
(2023).
Google Scholar
C. H. Yeh et al.,
“Single image dehazing via deep learning based image restoration,”
in Proc. APSIPA Annu. Summit and Conf.,
(2018). https://doi.org/10.23919/APSIPA.2018.8659733 Google Scholar
S. Shit et al.,
“Real-time emotion recognition using end-to-end attention-based fusion network,”
J. Electron. Imaging, 32
(1), 013050 https://doi.org/10.1117/1.JEI.32.1.013050 JEIME5 1017-9909
(2023).
Google Scholar
S. Shit et al.,
“Encoder and decoder-based feature fusion network for single image dehazing,”
in 3rd Int. Conf. Artif. Intell. and Signal Process. (AISP),
1
–5
(2023). https://doi.org/10.1109/AISP57993.2023.10135067 Google Scholar
X. Li, Z. Hua and J. Li,
“Attention-based adaptive feature selection for multi-stage image dehazing,”
Vis. Comput., 39 663
–678 https://doi.org/10.1007/s00371-021-02365-2 VICOE5 0178-2789
(2023).
Google Scholar
Y. Zhou et al.,
“AFF-dehazing: attention-based feature fusion network for low-light image dehazing,”
Comput. Animation Virtual Worlds, 32
(3–4), e2011 https://doi.org/10.1002/cav.2011
(2021).
Google Scholar
A. Ali, A. Ghosh and S. S. Chaudhuri,
“LIDN: a novel light invariant image dehazing network,”
Eng. Appl. Artif. Intell., 126, Part A 106830 https://doi.org/10.1016/j.engappai.2023.106830 EAAIE6 0952-1976
(2023).
Google Scholar
W. Huang and Y. Wei,
“Single image dehazing via color balancing and quad-decomposition atmospheric light estimation,”
Optik, 275 170573 https://doi.org/10.1016/j.ijleo.2023.170573 OTIKAJ 0030-4026
(2023).
Google Scholar
X. Lv et al.,
“Blind dehazed image quality assessment: a deep CNN-based approach,”
IEEE Trans. Multimedia, https://doi.org/10.1109/TMM.2023.3252267
(2023).
Google Scholar
S. Yin et al.,
“Adams-based hierarchical features fusion network for image dehazing,”
Neural Netw., 163 379
–394 https://doi.org/10.1016/j.neunet.2023.03.021 NNETEB 0893-6080
(2023).
Google Scholar
Y. Liu and X. Hou,
“Local multi-scale feature aggregation network for real-time image dehazing,”
Pattern Recognit., 141 109599 https://doi.org/10.1016/j.patcog.2023.109599
(2023).
Google Scholar
D. Zhou et al.,
“MCRD-Net: an unsupervised dense network with multi-scale convolutional block attention for multi-focus image fusion,”
IET Image Process., 16 1558
–1574 https://doi.org/10.1049/ipr2.12430
(2022).
Google Scholar
Q. Qi,
“A multi-path attention network for non-uniform blind image deblurring,”
Multimedia Tools Appl., https://doi.org/10.1007/s11042-023-14470-6
(2023).
Google Scholar
J. Go and J. Ryu,
“Spatial bias for attention-free non-local neural networks,”
(2023). Google Scholar
D. Lei et al.,
“SCRNet: an efficient spatial channel attention residual network for spatiotemporal fusion,”
J. Appl. Remote Sens., 16
(3), 036512 https://doi.org/10.1117/1.JRS.16.036512
(2022).
Google Scholar
F. Yu et al.,
“Deep layer aggregation,”
in Proc. of the IEEE Conf. on Comput. Vis. and Pattern Recognit.,
2403
–2412
(2018). https://doi.org/10.1109/CVPR.2018.00255 Google Scholar
I. Goodfellow et al.,
“Generative adversarial nets,”
in NIPS,
2672
–2680
(2014). Google Scholar
X. Su et al.,
“Enhancing haze removal and super-resolution in real-world images: a cycle generative adversarial network-based approach for synthesizing paired hazy and clear images,”
Opt. Eng., 62
(6), 063101 https://doi.org/10.1117/1.OE.62.6.063101
(2023).
Google Scholar
P. Isola et al.,
“Image to image translation with conditional adversarial networks,”
in CVPR,
5967
–5976
(2017). https://doi.org/10.1109/CVPR.2017.632 Google Scholar
L. Jie et al.,
“WaterGAN: unsupervised generative network to enable real-time color correction of monocular underwater images,”
(2017). Google Scholar
Z. Zhang, Y. Xie and L. Yang,
“Photographic text-to-image synthesis with a hierarchically nested adversarial network,”
in Conf. Comput. Vis. and Pattern Recognit.,
(2018). https://doi.org/10.1109/CVPR.2018.00649 Google Scholar
H. Zhu et al.,
“DehazeGAN: when image dehazing meets differential programming,”
in Proc. Twenty-Seventh Int. Joint Conf. Artif. Intell. (IJCAI-18),
(2018). Google Scholar
D. Engin, A. Genc and H. K. Eknel,
“Cycle dehaze: enhanced CycleGAN for single image dehazing,”
in Proc. IEEE Conf. Comput. Vis. and Pattern Recognit. Workshops,
825
–833
(2018). https://doi.org/10.1109/CVPRW.2018.00127 Google Scholar
Y. Qu et al.,
“Enhanced Pix2Pix dehazing network,”
in IEEE Conf. Comput. Vis. and Pattern Recognit. (CVPR),
8160
–8168
(2019). https://doi.org/10.1109/CVPR.2019.00835 Google Scholar
X. Chen et al.,
“InfoGAN: interpretable representation learning by information maximizing generative adversarial nets,”
(2016). Google Scholar
X. Huang et al.,
“Multimodal unsupervised image to image translation,”
in Comput. Vis. and Pattern Recognit.,
(2018). https://doi.org/10.1109/IJCNN55064.2022.9892018 Google Scholar
D. Yang and J. Sun,
“Proximal dehaze-net: a prior learning-based deep network for single image dehazing,”
in Proc. Eur. Conf. Comput. Vis. (ECCV),
702
–717
(2018). Google Scholar
X. Zhang et al.,
“Single image dehazing via dual-path recurrent network,”
IEEE Trans. Image Process., 30 5211
–5222 https://doi.org/10.1109/TIP.2021.3078319 IIPRE4 1057-7149
(2021).
Google Scholar
Y. Ding and S. Guo,
“Conditional generative adversarial networks: introduction and application,”
Proc. SPIE, 12348 258
–266 https://doi.org/10.1117/12.2641409 PSISDG 0277-786X
(2022).
Google Scholar
M. H. Sheu et al.,
“FIBS-Unet: feature integration and block smoothing network for single image dehazing,”
IEEE Access, 10 71764
–71776 https://doi.org/10.1109/ACCESS.2022.3188860
(2022).
Google Scholar
S. Zhao et al.,
“RefineDNet: a weakly supervised refinement framework for single image dehazing,”
IEEE Trans. Image Process., 30 3391
–3404 https://doi.org/10.1109/TIP.2021.3060873 IIPRE4 1057-7149
(2021).
Google Scholar
H. H. Yang and Y. Fu,
“Wavelet U-net and the chromatic adaptation transform for single image dehazing,”
in IEEE Int. Conf. Image Process. (ICIP),
2736
–2740
(2019). https://doi.org/10.1109/ICIP.2019.8803391 Google Scholar
K. Yuan et al.,
“Single image dehazing via NIN-DehazeNet,”
IEEE Access, 7 181348
–181356 https://doi.org/10.1109/ACCESS.2019.2958607
(2019).
Google Scholar
H. H. Yang, C. H. H. Yang and Y. C. J. Tsai,
“Y-Net: multi-scale feature aggregation network with wavelet structure similarity loss function for single image dehazing,”
in ICASSP 2020-2020 IEEE Int. Conf. Acoust., Speech and Signal Process. (ICASSP),
2628
–2632
(2020). https://doi.org/10.1109/ICASSP40776.2020.9053920 Google Scholar
T. Wang et al.,
“Haze concentration adaptive network for image dehazing,”
Neurocomputing, 439 75
–85 https://doi.org/10.1016/j.neucom.2021.01.042 NRCGEO 0925-2312
(2021).
Google Scholar
L. Li et al.,
“Semi-supervised image dehazing,”
IEEE Trans. Image Process., 29 2766
–2779 https://doi.org/10.1109/TIP.2019.2952690 IIPRE4 1057-7149
(2019).
Google Scholar
Z. Deng et al.,
“Deep multi-model fusion for single-image dehazing,”
in Proc. IEEE Int. Conf. Comput. Vis.,
2453
–2462
(2019). https://doi.org/10.1109/ICCV.2019.00254 Google Scholar
T. Guo and V. Monga,
“Reinforced depth-aware deep learning for single image dehazing,”
in ICASSP 2020-2020 IEEE Int. Conf. Acoust., Speech and Signal Process. (ICASSP),
8891
–8895
(2020). https://doi.org/10.1109/ICASSP40776.2020.9054504 Google Scholar
D. Yang and J. Sun,
“A model-driven deep dehazing approach by learning deep priors,”
IEEE Access, 9 108542
–108556 https://doi.org/10.1109/ACCESS.2021.3101319
(2021).
Google Scholar
Z. Chen, Z. He and Z. Lu,
“DEA-Net: single image dehazing based on detail-enhanced convolution and content-guided attention,”
(2023). Google Scholar
B. Li et al.,
“You only look yourself: unsupervised and untrained single image dehazing neural network,”
Int. J. Comput. Vis., 129
(5), 1754
–1767 https://doi.org/10.1007/s11263-021-01431-5 IJCVEQ 0920-5691
(2021).
Google Scholar
Y. Jin et al.,
“Structure representation network and uncertainty feedback learning for dense non-uniform fog removal,”
in Asian Conf. Comput. Vis.,
(2022). https://doi.org/10.1007/978-3-031-26313-2_10 Google Scholar
L. Tran, S. Moon and D. Park,
“A novel encoder-decoder network with guided transmission map for single image dehazing,”
(2022). Google Scholar
Y. Yu et al.,
“A two-branch neural network for nonhomogeneous dehazing via ensemble learning,”
in Proc. IEEE/CVF Conf. Comput. Vis. and Pattern Recognit.,
193
–202
(2021). https://doi.org/10.1109/CVPRW53098.2021.00028 Google Scholar
Y. Song et al.,
“Rethinking performance gains in image dehazing networks,”
(2022). Google Scholar
S. Chaurasia and B. S. Gohil,
“Detection of day time fog over India using INSAT-3D data,”
IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens., 8
(9), 4524
–4530 https://doi.org/10.1109/JSTARS.2015.2493000
(2015).
Google Scholar
R. Malik,
“Modeling and accident analysis on NH-10 (INDIA),”
Int. J. Eng. Manage. Res., 5
(2), 880
–882
(2015).
Google Scholar
M. K. Mondal et al.,
“Design and development of a fog-assisted elephant corridor over a railway track,”
Sustainability, 15
(7), 5944 https://doi.org/10.3390/su15075944
(2023).
Google Scholar
M. Srivastava, P. Dixit and P. Ranjan,
“Accident detection using fog computing,”
in 6th Int. Conf. Inf. Syst. and Comput. Netw. (ISCON),
1
–5
(2023). https://doi.org/10.1109/ISCON57294.2023.10111980 Google Scholar
G. Mahendra and H. R. Roopashree,
“Prediction of road accidents in the different states of India using machine learning algorithms,”
in IEEE Int. Conf. Integr. Circuits and Commun. Syst. (ICICACS),
1
–6
(2023). https://doi.org/10.1109/ICICACS57338.2023.10099519 Google Scholar
A. Roy, L. Manam and R. H. Laskar,
“Region adaptive fuzzy filter: an approach for removal of random-valued impulse noise,”
IEEE Trans. Ind. Electron., 65
(9), 7268
–7278 https://doi.org/10.1109/TIE.2018.2793225
(2018).
Google Scholar
A. Roy et al.,
“Removal of impulse noise for multimedia-IoT applications at gateway level,”
Multimedia Tools Appl., 81 34463
–34480 https://doi.org/10.1007/s11042-021-11832-w
(2022).
Google Scholar
A. Mittal, R. Soundararajan and A. C. Bovik,
“Making a “completely blind” image quality analyzer,”
IEEE Signal Process. Lett., 20
(3), 209
–212 https://doi.org/10.1109/LSP.2012.2227726 IESPEJ 1070-9908
(2013).
Google Scholar
L. Zhang, Y. Shen and H. Li,
“VSI: a visual saliency-induced index for perceptual image quality assessment,”
IEEE Trans. Image Process., 23
(10), 4270
–4281 https://doi.org/10.1109/TIP.2014.2346028 IIPRE4 1057-7149
(2014).
Google Scholar
W. E. K. Middleton,
“Vision through the atmosphere,”
Geophysik II/Geophysics II, Springer, Berlin, Germany
(1957). Google Scholar
A. Kar et al.,
“Zero-shot single image restoration through controlled perturbation of Koschmieder’s model,”
in IEEE/CVF Conf. Comput. Vis. and Pattern Recognit. (CVPR),
16200
–16210
(2021). https://doi.org/10.1109/CVPR46437.2021.01594 Google Scholar
X. Yan et al.,
“Underwater image dehazing using a novel color channel based dual transmission map estimation,”
Multimedia Tools Appl., 1
–24 https://doi.org/10.1007/s11042-023-15708-z
(2023).
Google Scholar
Y. Gao, W. Xu and Y. Lu,
“Let you see in haze and sandstorm: two-in-one low-visibility enhancement network,”
IEEE Trans. Instrum. Meas., 72 5023712 https://doi.org/10.1109/TIM.2023.3304668 IEIMAO 0018-9456
(2023).
Google Scholar
Y. Guo et al.,
“Haze visibility enhancement for promoting traffic situational awareness in vision-enabled intelligent transportation,”
IEEE Trans. Veh. Technol., 1
–15 https://doi.org/10.1109/TVT.2023.3298041
(2023).
Google Scholar
S. Zhao et al.,
“Dehazing evaluation: real-world benchmark datasets, criteria, and baselines,”
IEEE Trans. Image Process., 29 6947
–6962 https://doi.org/10.1109/TIP.2020.2995264 IIPRE4 1057-7149
(2020).
Google Scholar
L. Sun et al.,
“Adaptive image dehazing and object tracking in UAV videos based on the template updating Siamese network,”
IEEE Sens. J., 23
(11), 12320
–12333 https://doi.org/10.1109/JSEN.2023.3266653 ISJEAZ 1530-437X
(2023).
Google Scholar
S. Tian et al.,
“DHIQA: quality assessment of dehazed images based on attentive multi-scale feature fusion and rank learning,”
Displays, 79 102495 https://doi.org/10.1016/j.displa.2023.102495 DISPDP 0141-9382
(2023).
Google Scholar
G. Verma, M. Kumar and S. Raikwar,
“FCNN: fusion-based underwater image enhancement using multilayer convolution neural network,”
J. Electron. Imaging, 31
(6), 063039 https://doi.org/10.1117/1.JEI.31.6.063039 JEIME5 1017-9909
(2022).
Google Scholar
M. Guo et al.,
“DFBDehazeNet: an end-to-end dense feedback network for single image dehazing,”
J. Electron. Imaging, 30
(3), 033004 https://doi.org/10.1117/1.JEI.30.3.033004 JEIME5 1017-9909
(2021).
Google Scholar
Q. Wang et al.,
“Variant-depth neural networks for deblurring traffic images in intelligent transportation systems,”
IEEE Trans. Intell. Transport. Syst., 24
(6), 5792
–5802 https://doi.org/10.1109/TITS.2023.3255839
(2023).
Google Scholar
A. Filin, I. Gracheva and A. Kopylov,
“Haze removal method based on joint transmission map estimation and atmospheric-light extraction,”
in Proc. 4th Int. Conf. Future Netw. and Distrib. Syst.,
(2020). Google Scholar
X. Liu, T. Zhang and J. Zhang,
“Toward visual quality enhancement of dehazing effect with improved Cycle-GAN,”
Neural Comput. Appl., 35 5277
–5290 https://doi.org/10.1007/s00521-022-07964-1
(2023).
Google Scholar
L. Zhang et al.,
“FSIM: a feature similarity index for image quality assessment,”
IEEE Trans. Image Process., 20
(8), 2378
–2386 https://doi.org/10.1109/TIP.2011.2109730 IIPRE4 1057-7149
(2011).
Google Scholar
A. Filin et al.,
“Hazy images dataset with localized light sources for experimental evaluation of dehazing methods,”
in Proc. 6th Int. Workshop on Deep Learn. in Comput. Phys.— PoS (DLCP),
(2022). Google Scholar
Y. Shao et al.,
“Domain adaptation for image dehazing,”
in Proc. IEEE/CVF Conf. Comput. Vis. and Pattern Recognit.,
2808
–2817
(2020). https://doi.org/10.1109/CVPR42600.2020.00288 Google Scholar
X. Zhang et al.,
“Pyramid channel-based feature attention network for image dehazing,”
Comput. Vis. Image Underst., 197 103003 https://doi.org/10.1016/j.cviu.2020.103003 CVIUF4 1077-3142
(2020).
Google Scholar
R. Jing et al.,
“Cloud removal for optical remote sensing imagery using the SPA-CycleGAN network,”
J. Appl. Remote Sens., 16
(3), 034520 https://doi.org/10.1117/1.JRS.16.034520
(2022).
Google Scholar
BiographySahadeb Shit is currently a PhD scholar at AcSIR (CSIR-CMERI) in Durgapur, West Bengal. He received his MTech degree in telecommunication engineering from MAKAUT, West Bengal, in 2015, and his BTech degree in electronics and communication engineering from WBUT, West Bengal, in 2013. His primary areas of interest encompass computer vision, image processing, machine learning, and sensor fusion. Dip Narayan Ray currently holds the position of Senior Principal Scientist at CSIR-CMERI, Durgapur, West Bengal. He received his bachelor's degree in mechanical engineering from NIT Durgapur in 2002 and his PhD in mechanical engineering from the same institution in 2012. His professional expertise lies in the fields of machine vision, image processing, robotics, and machine learning.