This article proposes a Facial Expression Hierarchical Detection Network (FHDN), a multi-branch convolutional neural network for facial expression detection. To further improve feature extraction, the method adds an ESSAM attention module, which adaptively adjusts the weight of each feature map and thereby improves the model's performance on feature extraction and facial expression recognition tasks. The method was evaluated on a self-made dataset, where the model reached a detection accuracy of 81.40%, an improvement of 5% and 0.7% over YOLOv5 and YOLOv8, respectively. Compared with conventional deep learning techniques, this approach extracts image features more quickly and accurately without labor-intensive manual work.
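The abstract does not spell out the internals of ESSAM, but the behaviour it describes (adaptively re-weighting each feature map) matches a standard channel-attention pattern. The sketch below assumes a squeeze-and-excitation style block; the class name ChannelAttention and the reduction ratio are illustrative, not the paper's actual ESSAM design.

```python
# Hypothetical sketch of a channel-attention block in the spirit of ESSAM:
# the exact module is not given in the abstract, so this follows the common
# squeeze-and-excitation pattern of re-weighting feature maps.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling collapses each feature map to a scalar.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a small bottleneck MLP produces one weight per channel.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)       # per-channel statistics
        w = self.fc(w).view(b, c, 1, 1)   # adaptive channel weights in (0, 1)
        return x * w                      # re-weight each feature map

# Usage: attach after a backbone stage of a multi-branch CNN.
features = torch.randn(2, 256, 14, 14)
attended = ChannelAttention(256)(features)
print(attended.shape)  # torch.Size([2, 256, 14, 14])
```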
Existing object detection methods primarily focus on large targets in images, with limited research on small targets, and they suffer from low detection accuracy and difficulty meeting real-time requirements. To address this, an improved RetinaNet is proposed. First, high-resolution feature maps are used to better capture the features of small targets. Then, a weighting factor is added to counter the imbalanced distribution of training samples caused by using high-resolution feature maps. Finally, a novel query subnet is added that works in two steps: it first estimates the coarse location of a small target on the high-level feature map, and then computes the exact location on the low-level feature map. Experiments show that this query subnet saves computation on the low-level feature maps and greatly improves detection speed. The model is evaluated on the TinyPerson benchmark, reaching 36.18 mAP at 13.98 FPS; compared with the unimproved RetinaNet, accuracy and speed are improved by 0.9 mAP and 1.28 FPS, respectively. A sketch of the coarse-to-fine query idea follows.
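The paper's exact query subnet is not reproduced here; the snippet below is a minimal sketch assuming a hypothetical QuerySubnet that scores cells on the coarse (high-level) feature map and evaluates the expensive detection head only where that score is high. In practice the computational saving comes from sparse evaluation on the fine map; this dense version only masks the output to show the data flow.

```python
# Illustrative coarse-to-fine query: a cheap head on the low-resolution map
# flags likely small-object cells; the detection head on the high-resolution
# map is then only kept at the corresponding locations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuerySubnet(nn.Module):
    def __init__(self, channels: int, num_outputs: int, threshold: float = 0.5):
        super().__init__()
        # Coarse "is a small object near here?" score per cell.
        self.query_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        # Expensive detection head, ideally run only at queried locations.
        self.detect_head = nn.Conv2d(channels, num_outputs, kernel_size=3, padding=1)
        self.threshold = threshold

    def forward(self, coarse_feat: torch.Tensor, fine_feat: torch.Tensor):
        # 1. Coarse stage: score every cell of the low-resolution feature map.
        query = torch.sigmoid(self.query_head(coarse_feat))            # (B, 1, Hc, Wc)
        # 2. Upsample the query scores to the fine map's resolution and threshold.
        mask = F.interpolate(query, size=fine_feat.shape[-2:], mode="nearest") > self.threshold
        # 3. Fine stage: run the detection head and keep only queried cells
        #    (a sparse implementation would skip the non-queried cells entirely).
        dense = self.detect_head(fine_feat)                             # (B, K, Hf, Wf)
        return dense * mask

# Usage with FPN-like levels: a coarse high-level map and a fine low-level map.
p5 = torch.randn(1, 256, 16, 16)   # high-level, low resolution
p3 = torch.randn(1, 256, 64, 64)   # low-level, high resolution
out = QuerySubnet(channels=256, num_outputs=4)(p5, p3)
print(out.shape)  # torch.Size([1, 4, 64, 64])
```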