Vehicle detection is a fundamental problem in object detection and plays a significant role in intelligent transportation and smart driving. To enhance the accuracy of vehicle detection and the robustness of the model when vehicles are occluded, we propose an improved vehicle detection method based on you only look once v5s (YOLOv5s). First, we introduce a coordinate attention module into the backbone of the model; this module guides the model to attend more strongly to the location information and channel features of vehicles under occlusion. Second, the feature fusion component of the model is improved by incorporating bidirectional cross-scale connections and weighted feature fusion. Finally, the prediction head of YOLOv5s is decoupled, assigning the regression and classification tasks to two separate branches. Experimental results show that the proposed method achieves 2% and 2.5% higher average precision than YOLOv5s on the common objects in context (COCO) vehicle dataset and the University at Albany detection and tracking (UA-DETRAC) dataset, respectively.
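The weighted feature fusion mentioned above can be illustrated with a minimal sketch. Assuming the fusion follows the fast normalized fusion rule popularized by BiFPN (learnable per-input weights, clamped to be non-negative and normalized to sum to roughly one), a scalar-level toy version looks like this; the function name `fuse` and the use of plain Python lists in place of feature tensors are illustrative simplifications, not the authors' implementation:

```python
def fuse(features, raw_weights, eps=1e-4):
    """Fast normalized weighted fusion of same-sized feature maps.

    features    : list of feature maps, each a flat list of floats
                  (stand-ins for same-resolution tensors).
    raw_weights : one learnable scalar per input feature map.
    eps         : small constant to avoid division by zero.
    """
    # Clamp weights to be non-negative (ReLU), then normalize.
    w = [max(0.0, r) for r in raw_weights]
    total = sum(w) + eps
    n = len(features[0])
    # Elementwise weighted sum across the input feature maps.
    return [sum(wi * f[k] for wi, f in zip(w, features)) / total
            for k in range(n)]


# Fusing two feature maps with equal weights averages them:
out = fuse([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0])
```

Normalizing by the sum of clamped weights (rather than a softmax) keeps the fusion cheap while still letting the network learn how much each scale contributes, which is the usual motivation for this scheme.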
Keywords: object detection, head, feature fusion, detection and tracking algorithms, education and training, feature extraction, target detection