The purpose of this paper is to present a dataset for facial expression analysis and facial animation. Nearly all existing Facial Action Coding System-based datasets that include facial action unit (AU) intensity information annotate the intensity values hierarchically using discrete A–E levels. However, facial expressions change continuously and shift smoothly from one state to another. Therefore, it is more effective to regress the intensity values of local facial AUs to represent whole facial expression changes, particularly in the fields of expression transfer and facial animation. We introduce an extension of FEAFA in combination with the relabeled DISFA database, which is now available at http://www.iiplab.net/feafa+/. Extended FEAFA (FEAFA+) includes 154 video sequences from FEAFA and DISFA, with a total of 230,184 frames manually annotated with floating-point intensity values for 24 redefined AUs using the Expression Quantitative Tool. We report baseline numerical results for the posed and spontaneous subsets and provide a baseline comparison for the AU intensity regression task.
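The contrast the abstract draws between discrete A–E levels and floating-point intensities can be illustrated with a minimal sketch. The ordinal-to-float mapping and the evaluation metric below are illustrative assumptions, not values specified by FEAFA+:

```python
import numpy as np

# Illustrative only: a conventional way to embed ordinal FACS A-E levels
# into [0, 1]; FEAFA+ instead annotates floating-point intensities directly,
# so no such quantization is needed.
ORDINAL_TO_FLOAT = {"0": 0.0, "A": 0.15, "B": 0.3, "C": 0.55, "D": 0.8, "E": 1.0}

def mae(pred, target):
    """Mean absolute error over per-frame, per-AU intensity values,
    a common metric for AU intensity regression."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.mean(np.abs(pred - target)))

# A regressor predicting continuous intensities is scored directly,
# without rounding to the nearest A-E level.
predicted = [0.20, 0.50, 0.95]
annotated = [0.10, 0.50, 0.90]
error = mae(predicted, annotated)  # average absolute deviation per AU
```

A discrete classifier would be forced to one of six levels per AU, while a regressor can represent the smooth transitions between states that the abstract emphasizes.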
Accurate segmentation of prostate magnetic resonance (MR) images provides sufficient information for the diagnosis and treatment of prostate cancer. However, automatic prostate segmentation from 3D MR images faces several challenges. First, the prostate differs from other anatomical structures in that its gland lacks clear boundaries. Moreover, there are large differences in the background texture, shape, and size across prostate MR image samples, which makes segmentation accuracy difficult to improve. To address these challenges, we propose an automatic segmentation model called the Shape Constraint U-Net (SCU-Net) for prostate MR images. The network focuses on the segmentation of the prostate boundary region by introducing a shape constraint stream based on a parallel encoder–decoder structure that can handle different shape and texture information. Specifically, the shape constraint stream is composed of a multi-level boundary attention module that processes boundary-related information using advanced activation features from the regular segmentation stream. Finally, the multi-scale context information extracted from the regular segmentation stream is fused with the multi-level boundary information obtained from the shape constraint stream to generate a segmentation prediction map. Owing to the additional shape constraints, the network substantially improves prostate boundary region segmentation. We evaluated our method on two different real clinical prostate MR datasets. The experimental results demonstrate that SCU-Net achieves state-of-the-art prostate segmentation accuracy, especially in boundary-region voxel prediction. Further analysis indicates that the proposed shape constraint stream can be used to improve the boundary voxel prediction performance of other segmentation networks.
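The two-stream design described above can be sketched at a high level: a boundary attention step gates shape-stream features with an attention map derived from the segmentation stream, and a fusion step combines the two before prediction. This is a minimal numpy sketch of the general gating-and-fusion pattern, not the authors' implementation; the function names, sigmoid gating, and additive fusion are all assumptions:

```python
import numpy as np

def sigmoid(x):
    """Numerically straightforward logistic function."""
    return 1.0 / (1.0 + np.exp(-x))

def boundary_attention(seg_features, shape_features):
    """Hypothetical boundary attention: gate shape-stream features with an
    attention map derived from segmentation-stream activations, so the
    shape stream concentrates on boundary-related responses."""
    attn = sigmoid(seg_features)        # per-voxel weights in (0, 1)
    return shape_features * attn        # boundary-focused features

def fuse_and_predict(seg_features, boundary_features):
    """Hypothetical fusion: combine multi-scale context from the regular
    stream with boundary information, then threshold into a binary map."""
    logits = seg_features + boundary_features  # simple additive fusion (assumption)
    return sigmoid(logits) > 0.5               # segmentation prediction map

# Toy 1x2 feature maps standing in for decoder activations.
seg = np.array([[2.0, -2.0]])
shape = np.array([[1.0, 1.0]])
boundary = boundary_attention(seg, shape)
mask = fuse_and_predict(seg, boundary)
```

In the actual network these operations would be learned convolutional modules applied at multiple decoder levels; the sketch only shows how an attention-gated auxiliary stream can sharpen predictions near boundaries.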