KEYWORDS: Rain, Cameras, Education and training, Convolution, Network architectures, Depth of field, Data modeling, Image restoration, Model based design, Deep learning
Traditional rain models typically describe images in terms of the physical characteristics and formation of rain streaks, and they often achieve good results on synthetic datasets. However, in real-world scenarios, variations in the camera depth of field (DOF) and focusing position produce different degrees of blur across the image. This variability makes it difficult to extract and process rain streaks that lie far from the camera's focal plane. Furthermore, current image deraining models have grown increasingly complex, making real-time deraining difficult in practical applications. To address these challenges, we take a novel approach grounded in the camera imaging principle. We exploit the depth information concealed within blurred regions of the image, introduce a raindrop model, and propose the progressive depth information network (PDIN). Specifically, PDIN jointly learns the directional characteristics and the depth information of rain streaks and removes the streaks progressively in a recurrent fashion. Our method innovatively considers the blur in rainy images from the perspective of camera imaging. Importantly, our model achieves excellent rain removal performance at relatively low time complexity. Extensive experiments on synthetic and real datasets show that PDIN delivers better image deraining performance, which also supports the soundness of the proposed model.
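For concreteness, the recurrent, depth-aware removal described above can be pictured as the following minimal PyTorch sketch. The stage design, channel widths, depth cue, and number of iterations are our assumptions for illustration, not the authors' PDIN implementation.

```python
# Hypothetical sketch of progressive, recurrent deraining conditioned on a
# depth/blur cue, in the spirit of PDIN; all names and sizes are assumptions.
import torch
import torch.nn as nn

class DerainStage(nn.Module):
    """One recurrent stage: predicts a rain-streak residual from the
    current image estimate concatenated with a single-channel depth cue."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3 + 1, channels, 3, padding=1),  # RGB + depth cue
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),      # estimated rain layer
        )

    def forward(self, x: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        rain = self.body(torch.cat([x, depth], dim=1))
        return x - rain  # progressively subtract the estimated streaks

class ProgressiveDerainer(nn.Module):
    def __init__(self, stages: int = 4):
        super().__init__()
        self.stage = DerainStage()  # weights shared across iterations
        self.stages = stages

    def forward(self, rainy: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        x = rainy
        for _ in range(self.stages):
            x = self.stage(x, depth)
        return x

# Usage: derained = ProgressiveDerainer()(rainy_batch, depth_map)
```

Sharing one stage's weights across iterations keeps the parameter count, and hence the inference cost, low, which matches the abstract's emphasis on low time complexity.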
Terrestrial saliency detection models lose effectiveness when applied to underwater vehicles because of factors such as turbid water and unstable lighting, which degrade their performance on underwater images. To overcome these issues, we propose an underwater saliency detection model based on an improved attention feedback mechanism. Top- and bottom-level features are effectively fused by a cascaded feedback decoder built from a cross feature module with added channel-spatial attention, after which a residual refinement module performs further refinement. Training and testing use public underwater datasets. Experimental results on four underwater image datasets, compared against four general saliency detection methods and two underwater saliency detection methods, show that our method outperforms the alternatives, demonstrating the model's viability and efficacy.
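As an illustration of the fusion step, below is a minimal PyTorch sketch of a channel-spatial attention block applied to fused top- and bottom-level features. The module names, reduction ratio, and the exact fusion rule are assumptions for illustration, not the paper's cross feature module.

```python
# Hypothetical channel + spatial attention for feature fusion in a
# feedback decoder; CBAM-style design assumed, sizes are placeholders.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_gate = nn.Sequential(        # squeeze-and-excite gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(        # 2-channel pooled map -> mask
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)               # channel attention
        avg = x.mean(dim=1, keepdim=True)
        mx = x.max(dim=1, keepdim=True)[0]
        mask = self.spatial_gate(torch.cat([avg, mx], dim=1))
        return x * mask                            # spatial attention

def fuse(high: torch.Tensor, low: torch.Tensor,
         attn: ChannelSpatialAttention) -> torch.Tensor:
    """Upsample the top-level feature to the bottom-level resolution,
    sum the two, then reweight the result with attention."""
    high_up = nn.functional.interpolate(
        high, size=low.shape[-2:], mode="bilinear", align_corners=False)
    return attn(high_up + low)
```

In a cascaded feedback decoder, the output of such a fusion would be fed back to refine earlier decoding stages before the residual refinement module produces the final saliency map.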
Underwater images play a crucial role in underwater exploration tasks. However, due to the unique physical and chemical properties of the underwater environment, underwater images often suffer from low contrast, color cast, and blurriness. To address these challenges, this paper proposes a dual-path fusion model (UW-DSFNet) for underwater image enhancement. The model comprehensively extracts both color and texture features from underwater images by working in the spatial and frequency domains. In the spatial-domain path, a low-complexity NAFNet network with gated residual connections and GELU activation functions extracts color features from the images. In the frequency-domain path, an MLP framework is used, and the Fourier transform is applied to obtain frequency-domain texture feature maps. Finally, the features extracted in the two domains are fused and passed through a detail enhancement step. Experimental results demonstrate that the proposed model effectively enhances underwater images, producing clear, visually appealing results with rich colors.
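The dual-path idea can be sketched as follows in PyTorch. The layer widths, the amplitude-only frequency processing, and the concatenation-based fusion are assumptions made for illustration, not the UW-DSFNet implementation.

```python
# Hypothetical dual-path sketch: a spatial convolutional path and a
# frequency-domain path fused at the end; all design details are assumed.
import torch
import torch.nn as nn

class FrequencyPath(nn.Module):
    """Process the Fourier amplitude with a 1x1-conv 'MLP' and invert."""
    def __init__(self, channels: int = 3, hidden: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, hidden, 1), nn.GELU(),
            nn.Conv2d(hidden, channels, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.rfft2(x, norm="ortho")
        amp, phase = spec.abs(), spec.angle()
        amp = self.mlp(amp)                  # texture cues in the amplitude
        return torch.fft.irfft2(torch.polar(amp, phase),
                                s=x.shape[-2:], norm="ortho")

class SpatialPath(nn.Module):
    """Residual conv path with GELU, standing in for the NAFNet branch."""
    def __init__(self, channels: int = 3, hidden: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.GELU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)

class DualPathFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.spatial, self.freq = SpatialPath(), FrequencyPath()
        self.fuse = nn.Conv2d(6, 3, 1)       # fuse concatenated paths

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([self.spatial(x), self.freq(x)], dim=1))

# Usage: enhanced = DualPathFusion()(underwater_batch)  # (B, 3, H, W)
```

Editing only the amplitude while preserving the phase is a common way to adjust global texture and contrast without distorting image structure, which is why the sketch keeps the phase untouched.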