Segmentation of offshore farm areas in high-resolution SAR images is of great significance for computing farming-area statistics and analyzing the rationality of the farming layout. However, SAR images are noisy and their features are inconspicuous, so precise segmentation is difficult to achieve with non-learning image segmentation methods alone. We therefore propose a precise segmentation scheme for offshore farms in high-resolution SAR images based on an improved UNet++. First, we adopt a simulated annealing strategy to update the learning rate during network training; by re-initializing the learning rate multiple times, we keep the network from settling into a local optimum. Second, for the dataset studied, we verify that resizing images to 256×256 pixels yields better segmentation performance than resizing to 512×512 pixels. Finally, we propose an improved UNet++ that uses SE-Net as the feature extraction network to strengthen feature learning. Extensive experiments show that, compared with several state-of-the-art methods, the proposed scheme achieves superior performance, with a frequency-weighted intersection over union (FWIoU) of 0.9853 on the high-resolution SAR offshore farm dataset.
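The abstract does not specify the exact annealing schedule, but the described strategy of re-initializing the learning rate multiple times can be sketched as a cosine-annealed schedule with periodic restarts. The function name, cycle length, and learning-rate bounds below are illustrative assumptions, not values from the paper:

```python
import math

def annealed_lr(step, cycle_len=1000, lr_max=1e-3, lr_min=1e-5):
    """Cosine-annealed learning rate with periodic restarts.

    The learning rate decays from lr_max to lr_min within each
    cycle; the modulo "re-initializes" it at every cycle boundary,
    helping the optimizer escape local optima. All parameter
    values here are hypothetical defaults for illustration.
    """
    # Position within the current cycle, normalized to [0, 1)
    t = (step % cycle_len) / cycle_len
    # Smooth cosine decay between lr_max and lr_min
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))
```

In a training loop, the returned value would be assigned to the optimizer's learning rate each step; at steps 0, 1000, 2000, … the rate jumps back to `lr_max`, giving the repeated re-initialization the abstract describes.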
In aquaculture, the normal growth of fish is closely related to stocking density, so it is of great significance to use remote sensing images to accurately segment the cages in a given sea area at a macro level. This research proposes an accurate segmentation scheme for remote sensing cages based on U-Net and a voting mechanism. First, a remote sensing cage segmentation (RSCS) dataset is produced, comprising fifty-three high-resolution cage images of varying resolution. Second, by applying random cropping and data augmentation to the training samples, three training sets with image block sizes of 256×256, 512×512, and 1024×1024 pixels are created; a U-Net is trained separately on each, yielding three trained models. Then, after the test image is suitably padded, a sliding-window overlapped cropping method is adopted: each high-resolution remote sensing test image is cut sequentially into image blocks for segmentation, and the segmented blocks are stitched into a binary segmentation image by averaging the overlapping predictions. Finally, for each image, the three binary segmentation images generated by the different trained models vote on each pixel. Experiments on three remote sensing images of Li'an Port, Xincun Port, and Potou Port show a mean intersection over union (mIoU) of 0.865. Our data and code are available online.
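The sliding-window cropping and per-pixel voting steps above can be sketched as follows. This is a minimal illustration, not the authors' released code; the function names, stride choice, and two-of-three vote threshold are assumptions consistent with the abstract's description of three models and majority voting:

```python
import numpy as np

def sliding_windows(h, w, size, stride):
    """Yield top-left corners of overlapping size×size windows
    that fully cover an h×w image (assumes h >= size, w >= size,
    e.g. after the padding step the abstract mentions)."""
    ys = list(range(0, h - size + 1, stride))
    xs = list(range(0, w - size + 1, stride))
    # Ensure the final row/column of windows reaches the image edge
    if ys[-1] != h - size:
        ys.append(h - size)
    if xs[-1] != w - size:
        xs.append(w - size)
    for y in ys:
        for x in xs:
            yield y, x

def vote_merge(prob_maps, threshold=0.5):
    """Majority vote per pixel across the binary maps produced by
    the three trained models: a pixel is foreground if at least
    two of the three models mark it as foreground."""
    votes = np.stack([(p >= threshold).astype(np.uint8) for p in prob_maps])
    return (votes.sum(axis=0) >= 2).astype(np.uint8)
```

Within each model, overlapping window predictions would first be accumulated and averaged per pixel (the "mean method"); `vote_merge` then combines the three resulting binary maps into the final segmentation.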