Object detection in blind zones is critical to ensuring the driving safety of heavy trucks. We propose a scheme for object detection in the blind zones of heavy trucks based on an improved you-only-look-once (YOLO) v3 network. First, according to the actual detection requirements, the detection targets are determined, and a new data set of persons, cars, and fallen pedestrians is established, with a focus on small and medium objects. Subsequently, the network structure is optimized, and the features are enhanced by combining the shallow and deep convolution information of the Darknet backbone. In this way, feature propagation is effectively enhanced, feature reuse is promoted, and the network performance for small-object detection is improved. Furthermore, new anchors are obtained by clustering the data set with the K-means technique to improve the positioning accuracy of the detection boxes. In the test stage, detection is performed using the trained model. The test results demonstrate that the proposed improved YOLO v3 network is superior to the original YOLO v3 model in blind-zone detection and satisfies the accuracy and real-time requirements, with an accuracy of 94% and a runtime of 13.792 ms/frame. Moreover, the mean average precision of the improved model is 87.82%, which is 2.79% higher than that of the original YOLO v3 network.
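The anchor-clustering step mentioned above follows the standard YOLO practice of running K-means over the ground-truth box dimensions with an IoU-based distance (1 − IoU) rather than Euclidean distance, so that large boxes do not dominate the clustering. The paper does not give its implementation details; the sketch below is a minimal, hypothetical version of that idea, with function names and the averaging rule (cluster mean) chosen for illustration only.

```python
import numpy as np

def iou_wh(boxes, clusters):
    """IoU between boxes (N, 2) and cluster centers (K, 2),
    treating every box as anchored at the origin (width/height only)."""
    inter_w = np.minimum(boxes[:, None, 0], clusters[None, :, 0])
    inter_h = np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    inter = inter_w * inter_h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
          + (clusters[:, 0] * clusters[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster (width, height) pairs with distance = 1 - IoU.
    Returns k anchors sorted by area (small to large)."""
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)].copy()
    assign = np.full(len(boxes), -1)
    for _ in range(iters):
        # minimizing (1 - IoU) is the same as maximizing IoU
        new_assign = np.argmax(iou_wh(boxes, clusters), axis=1)
        if np.array_equal(new_assign, assign):
            break
        assign = new_assign
        for j in range(k):
            members = boxes[assign == j]
            if len(members):
                clusters[j] = members.mean(axis=0)
    return clusters[np.argsort(clusters[:, 0] * clusters[:, 1])]

# Example: synthetic box sizes drawn around two scales
rng = np.random.default_rng(1)
boxes = np.vstack([rng.normal([30, 40], 2, (50, 2)),
                   rng.normal([100, 120], 5, (50, 2))])
anchors = kmeans_anchors(boxes, k=2)
```

In practice, the number of clusters would match the number of anchors the detection heads expect (nine for the standard three-scale YOLO v3), and clustering would be run on the truck blind-zone data set so that the anchors reflect its bias toward small and medium objects.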