As a key technology for maritime applications, trajectory prediction can effectively help ships reduce risks such as collisions and groundings at sea. Although the combination of rich automatic identification system (AIS) data and deep learning brings new possibilities for ship trajectory prediction, the task remains highly challenging due to the complexity of ship motion. In this paper, we propose an improved ship trajectory prediction model based on TrAISformer. On the one hand, we sparsify the multi-dimensional AIS data through dictionary encoding, map it into a probability space, and use a new loss function to measure network performance. On the other hand, we propose an MLP module that enables the model to effectively learn the temporal characteristics of AIS data while avoiding destroying the encoding correlation. Finally, we conducted experiments on public AIS data, which show that the improved model outperforms TrAISformer in prediction performance by about 12.5%.
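The dictionary encoding above can be sketched as equal-width binning of each AIS attribute (latitude, longitude, speed over ground, course over ground) into discrete indices. The bin counts and value ranges below are illustrative assumptions, not the paper's actual dictionary resolution:

```python
# Hypothetical dictionary sizes and value ranges for each AIS attribute;
# the abstract does not give the actual resolutions used in the paper.
BINS = {"lat": 100, "lon": 100, "sog": 30, "cog": 72}
RANGES = {"lat": (55.0, 60.0), "lon": (10.0, 15.0),
          "sog": (0.0, 30.0), "cog": (0.0, 360.0)}

def encode(point):
    """Map a continuous AIS point to one dictionary index per attribute."""
    idx = {}
    for key, n_bins in BINS.items():
        lo, hi = RANGES[key]
        # Clip to the valid range, then discretize into n_bins equal-width bins.
        b = int((min(max(point[key], lo), hi) - lo) / (hi - lo) * n_bins)
        idx[key] = min(b, n_bins - 1)
    return idx

p = {"lat": 57.5, "lon": 12.0, "sog": 8.4, "cog": 180.0}
print(encode(p))  # → {'lat': 50, 'lon': 40, 'sog': 8, 'cog': 36}
```

Each index can then be one-hot encoded and concatenated, so the model predicts a probability distribution over discrete bins rather than regressing continuous coordinates.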
To address the false and missed detections caused by complex offshore scenes in remote sensing ship detection, a lightweight ship classification and detection method is proposed based on an improved YOLOv7-tiny. On the one hand, the method stacks a lightweight feature extraction module in the backbone feature extraction network, which significantly reduces the parameter count and computational complexity without weakening the network's feature extraction ability. On the other hand, the method introduces spatial information into the feature pyramid, raising the discriminability of features at different scales and thus improving the network's classification and detection ability. The method was tested on a remote sensing ship dataset. Experimental results show that the average accuracy of ship classification and detection with the improved network increases by 2.9%. Meanwhile, the model is lighter than YOLOv7-tiny, with a 15% reduction in parameter count and a 24% reduction in computational complexity.
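The abstract does not name the lightweight feature extraction module; assuming it is built on depthwise separable convolutions (a common choice in lightweight backbones), the parameter saving over a standard convolution can be sketched as:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dws_conv_params(c_in, c_out, k):
    """Depthwise separable convolution: depthwise k x k + pointwise 1 x 1."""
    return c_in * k * k + c_in * c_out

# Illustrative layer sizes, not taken from the paper.
c_in, c_out, k = 128, 256, 3
std = conv_params(c_in, c_out, k)       # 294912
dws = dws_conv_params(c_in, c_out, k)   # 33920
print(f"reduction: {1 - dws / std:.1%}")  # → reduction: 88.5%
```

A single layer saves far more than 15%; the overall reduction reported in the abstract is smaller because only part of the backbone is replaced and other layers dominate the total count.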
With the development of computer vision and deep learning, convolutional neural networks have been widely used in image processing tasks such as object detection and semantic segmentation, and have achieved breakthrough results. However, when training samples are insufficient, conventional neural networks usually exhibit unsatisfactory robustness. To solve this problem, we improve the generalization performance of few-shot detectors by focusing on the object center, enabling them to identify novel categories. This paper proposes a new attention mechanism based on an auxiliary circle feature map of the object center. The auxiliary circle feature map takes the object center as the center of the circle and the smaller of the object's height and width as the diameter; it is added to the anchor-free CenterNet network as soft attention to promote training. Experiments on the PASCAL VOC2007/2012 datasets show that the proposed method achieves state-of-the-art accuracy and standard deviation in few-shot object detection, indicating the algorithm's effectiveness.
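The auxiliary circle feature map can be sketched as a 2D soft-attention mask on the feature grid; the linear falloff from center to rim below is an assumption, since the abstract does not specify how values inside the circle are weighted:

```python
import numpy as np

def circle_attention(h_map, w_map, center, box_h, box_w):
    """Soft-attention mask: a disc at the object center whose diameter is
    min(box_h, box_w). Values fall off linearly toward the rim and are
    zero outside the circle (the falloff shape is an assumption)."""
    cy, cx = center
    radius = min(box_h, box_w) / 2.0
    ys, xs = np.mgrid[0:h_map, 0:w_map]
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    return np.clip(1.0 - dist / radius, 0.0, 1.0)

# Object centered at (8, 8) with a 6 x 10 box -> circle of diameter 6.
att = circle_attention(16, 16, center=(8, 8), box_h=6, box_w=10)
print(float(att[8, 8]), float(att[8, 11]))  # → 1.0 0.0
```

Multiplying (or adding) such a mask onto the feature maps concentrates the network's response around annotated object centers, which fits CenterNet's center-point formulation.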