Existing appearance-based gaze estimation methods mostly regress gaze direction from eye images alone, neglecting facial information and head pose, which can be very helpful. In this paper, we propose a robust appearance-based gaze estimation method that regresses gaze direction jointly from the human face and eyes. The face and eye regions are located from detected landmark points, and representations of the two modalities are modeled with convolutional neural networks (CNNs), which are finally combined for gaze estimation by a fusion network. Furthermore, considering that different facial regions affect human gaze to varying degrees, spatial weights for the facial area are learned automatically with an attention mechanism and applied to refine the facial representation. Experimental results on the Eyediap benchmark dataset validate the benefits of fusing multiple modalities in gaze estimation, and the proposed method outperforms previous state-of-the-art methods.
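A minimal sketch of the kind of two-stream face/eye fusion with a learned spatial attention map that the abstract describes, written in PyTorch. The layer widths, input resolutions, attention formulation, and two-angle (yaw, pitch) output are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions followed by 2x2 max pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class FaceEyeGazeNet(nn.Module):
    """Hypothetical face+eye gaze regressor with facial spatial attention."""

    def __init__(self):
        super().__init__()
        self.face_cnn = nn.Sequential(conv_block(3, 32), conv_block(32, 64), conv_block(64, 128))
        self.eye_cnn = nn.Sequential(conv_block(3, 32), conv_block(32, 64), conv_block(64, 128))
        # 1x1 convolutions producing a single-channel spatial weight map for the face stream
        self.attention = nn.Sequential(
            nn.Conv2d(128, 64, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1), nn.Sigmoid(),
        )
        self.fuse = nn.Sequential(
            nn.Linear(128 * 2, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 2),                          # yaw and pitch of the gaze direction
        )

    def forward(self, face, eye):
        f = self.face_cnn(face)                         # (B, 128, Hf, Wf)
        f = f * self.attention(f)                       # re-weight facial regions
        e = self.eye_cnn(eye)                           # (B, 128, He, We)
        f = f.mean(dim=(2, 3))                          # global average pooling
        e = e.mean(dim=(2, 3))
        return self.fuse(torch.cat([f, e], dim=1))      # gaze angles


gaze = FaceEyeGazeNet()(torch.randn(1, 3, 96, 96), torch.randn(1, 3, 48, 96))
print(gaze.shape)  # torch.Size([1, 2])
```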
Most previous target detection methods rely on the physical properties of visible-light polarization images, which depend on the particular targets and backgrounds. This process is not only complicated but also vulnerable to environmental noise. In this research, a multimodal fusion detection network based on a multimodal deep neural network architecture is proposed. The network integrates the high-level semantic information of visible-light polarization images for crater detection and consists of base networks, a fusion network, and a detection network. Each base network outputs a feature map for its corresponding polarization image; these maps are later fused by the fusion network into a final fused feature map, which is fed to the detection network to detect the target in the image. To learn target characteristics effectively and improve detection accuracy, we select the base network by comparing VGG and ResNet networks and adopt a model-parameter pretraining strategy. The experimental results demonstrate that the simulated crater detection performance of the proposed method is superior to traditional and single-modal methods, because the extracted polarization characteristics are beneficial to target detection.
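A minimal sketch of the base/fusion/detection split described above, assuming two polarization-derived input modalities (here a total-intensity image and a degree-of-linear-polarization image; that choice is hypothetical). The paper compares VGG and ResNet base networks with pretrained parameters; small plain CNNs stand in for them here, and the dense objectness-plus-box detection head is likewise an assumption.

```python
import torch
import torch.nn as nn


def base_network(in_ch):
    """Stand-in base network producing a feature map downsampled by 8."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    )


class MultimodalFusionDetector(nn.Module):
    """Hypothetical base networks + fusion network + detection network."""

    def __init__(self):
        super().__init__()
        self.branch_s0 = base_network(1)     # total-intensity image (assumed modality)
        self.branch_dolp = base_network(1)   # degree-of-linear-polarization image (assumed)
        self.fusion = nn.Sequential(nn.Conv2d(256, 128, 1), nn.ReLU(inplace=True))
        # 5 outputs per spatial cell: objectness + (dx, dy, dw, dh) box offsets
        self.detection_head = nn.Conv2d(128, 5, 1)

    def forward(self, s0, dolp):
        fused = self.fusion(torch.cat([self.branch_s0(s0), self.branch_dolp(dolp)], dim=1))
        return self.detection_head(fused)    # (B, 5, H/8, W/8)


out = MultimodalFusionDetector()(torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256))
print(out.shape)  # torch.Size([1, 5, 32, 32])
```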
Fluorescence in situ hybridization (FISH) is a molecular cytogenetic technique that provides reliable imaging biomarkers for diagnosing cancer and genetic disorders at the cellular level. One prerequisite step for identifying carcinoma cells in FISH images is accurate cell segmentation, so that DNA/RNA signals can be quantified within each cell. Manual cell segmentation is a tedious and time-consuming task, which calls for automatic methods. However, automatic cell segmentation is hindered by low image contrast, weak cell boundaries, and touching cells in FISH images. In this paper, we develop a fast mini-U-Net method to address these challenges. The mini-U-Net incorporates several tailored components, including connections between input images and their feature maps to accurately localize cells, mlpcon (multilayer perceptron + convolution) blocks to segment cell regions, and morphology operators with the watershed algorithm to separate individual cells. Compared with the U-Net, the mini-U-Net has fewer training parameters and lower computational cost. Validation on 510 cells showed Dice coefficients of 80.20% for the mini-U-Net and 77.27% for the U-Net, with area overlap ratios of 69.17% and 68.04%, respectively. These promising results suggest that the mini-U-Net can generate accurate cell segmentation for fully automatic FISH image analysis.
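A minimal sketch of a mini-U-Net-style segmenter along the lines the abstract describes, with assumed layer widths: a shallow encoder/decoder with skip connections, the downsampled input image concatenated back to its feature maps (to help localize cells), and an mlpcon-style head of stacked 1x1 convolutions producing a per-pixel cell probability. This is an illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class MiniUNet(nn.Module):
    """Hypothetical shallow U-Net variant with input-image re-injection."""

    def __init__(self, in_ch=1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 16)
        self.enc2 = double_conv(16 + in_ch, 32)        # input image re-injected at 1/2 scale
        self.bottleneck = double_conv(32 + in_ch, 64)  # input image re-injected at 1/4 scale
        self.dec2 = double_conv(64 + 32, 32)
        self.dec1 = double_conv(32 + 16, 16)
        # mlpcon-style head: stacked 1x1 convolutions, then per-pixel cell probability
        self.head = nn.Sequential(
            nn.Conv2d(16, 16, 1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(torch.cat([F.max_pool2d(e1, 2), F.avg_pool2d(x, 2)], dim=1))
        b = self.bottleneck(torch.cat([F.max_pool2d(e2, 2), F.avg_pool2d(x, 4)], dim=1))
        d2 = self.dec2(torch.cat([F.interpolate(b, scale_factor=2), e2], dim=1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), e1], dim=1))
        return torch.sigmoid(self.head(d1))            # (B, 1, H, W) cell probability map


mask = MiniUNet()(torch.randn(1, 1, 128, 128))
print(mask.shape)  # torch.Size([1, 1, 128, 128])
```

In a pipeline like the one described, the resulting probability map would then be thresholded and post-processed with morphological operators and a watershed transform to split touching cells; that stage is omitted here to keep the sketch short.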