Thyroid nodules are extremely common lesions and are readily detectable by ultrasound (US). Several studies have shown that the overall incidence of papillary thyroid cancer in patients with nodules selected for biopsy is only about 10%; there is therefore a clinical need to substantially reduce the number of thyroid biopsies. In this study, we present a guided deep learning classification system that predicts nodule malignancy from B-mode US. We retrospectively collected transverse and longitudinal images of 150 benign and 150 malignant thyroid nodules with biopsy-proven results, and divided the resulting 600 images into training (n=460), validation (n=40), and test (n=100) sets. We manually segmented the nodules from the B-mode US images and provided each nodule mask as a second input channel to the convolutional neural network (CNN) to increase attention on the nodule region. We evaluated the classification performance of different CNN architectures, such as InceptionV3 and ResNet50, with different input images. The InceptionV3 model performed best on the test set, with 86% sensitivity, 90% specificity, and 90% precision at the threshold that maximized accuracy. When the threshold was instead set for maximum sensitivity (0 missed cancers), the ROC curve suggests the number of biopsies could be reduced by 52% without missing any patients with malignant thyroid nodules. We anticipate that this performance can be further improved by including more patients and information from other ultrasound modalities.
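As a rough illustration of the two-channel input scheme described above, the following Python sketch stacks a B-mode image and its manual nodule mask as the two input channels of an InceptionV3 backbone. It assumes TensorFlow/Keras; the image size, dropout, and classification head are our own illustrative choices, not the authors' configuration.

```python
# Hypothetical sketch (not the authors' code): B-mode image in channel 0,
# binary nodule mask in channel 1, fed to an InceptionV3 backbone.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_two_channel_inception(input_size=299):
    backbone = tf.keras.applications.InceptionV3(
        include_top=False,
        weights=None,  # ImageNet weights expect 3 RGB channels, so train from scratch
        input_shape=(input_size, input_size, 2),
        pooling="avg",
    )
    x = layers.Dropout(0.5)(backbone.output)
    out = layers.Dense(1, activation="sigmoid", name="malignancy")(x)
    return models.Model(backbone.input, out)

model = build_two_channel_inception()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Placeholder inputs: one US image and its nodule mask, both scaled to [0, 1].
us_image = np.random.rand(1, 299, 299, 1).astype("float32")
nodule_mask = (np.random.rand(1, 299, 299, 1) > 0.5).astype("float32")
x = np.concatenate([us_image, nodule_mask], axis=-1)
print(model.predict(x).shape)  # (1, 1): predicted malignancy probability
```

And a hypothetical snippet, using scikit-learn on placeholder data, of how an operating point with 100% sensitivity (the setting behind the reported 52% biopsy reduction) could be read off the ROC curve:

```python
# Illustrative only: pick the highest threshold whose sensitivity is 1.0.
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 0, 1, 1, 1])                      # placeholder biopsy labels
y_score = np.array([0.10, 0.35, 0.80, 0.55, 0.70, 0.90])   # placeholder CNN scores

fpr, tpr, thr = roc_curve(y_true, y_score)
i = np.argmax(tpr >= 1.0)                 # first ROC point with sensitivity 1.0
print("operating threshold:", thr[i])
print("benign biopsies avoided:", 1.0 - fpr[i])  # specificity at 0 missed cancers
```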
Removing non-brain tissues such as the skull, scalp, and face from head computed tomography (CT) images is an important step in brain image processing applications. It is a prerequisite for numerous quantitative analyses of neurological diseases, as it improves the computational speed and accuracy of quantitative analysis and image coregistration. In this study, we present an accurate method based on a fully convolutional neural network (fCNN) that removes non-brain tissues from head CT images in a time-efficient manner. The network consists of an encoding part, whose sequential convolutional filters produce activation maps of the input image in a low-dimensional space, and a decoding part, whose convolutional filters reconstruct the input image from this reduced representation. We trained the fCNN on 122 volumetric head CT images and tested it on 22 unseen volumetric head CT images, using an expert's manual brain segmentation masks as ground truth. On the test set, our method achieved a Dice coefficient of 0.998±0.001 (mean ± standard deviation), recall of 0.998±0.001, precision of 0.998±0.001, and accuracy of 0.9995±0.0001. It extracts the complete volumetric brain from a head CT image in 2 s, which is much faster than previous methods. To the best of our knowledge, this is the first study to use an fCNN for skull stripping from CT images. Our fCNN-based approach thus provides accurate, time-efficient extraction of brain tissue from head CT images.
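A minimal sketch of such an encoder-decoder fCNN, assuming TensorFlow/Keras and 2-D CT slices: the filter counts, depth, and Dice implementation below are illustrative assumptions rather than the authors' exact architecture.

```python
# Hypothetical encoder-decoder fCNN mapping a CT slice to a brain mask.
import tensorflow as tf
from tensorflow.keras import layers, models

def dice_coefficient(y_true, y_pred, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|), the overlap metric reported above.
    inter = tf.reduce_sum(y_true * y_pred)
    return (2.0 * inter + eps) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)

def build_fcnn(input_shape=(256, 256, 1)):
    inp = layers.Input(shape=input_shape)
    # Encoder: convolutions + downsampling produce activation maps of the
    # input slice in a low-dimensional space.
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    # Decoder: transposed convolutions reconstruct a full-resolution map
    # from the reduced representation.
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same",
                               activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same",
                               activation="relu")(x)
    # Per-pixel brain probability; threshold at 0.5 to obtain the mask.
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return models.Model(inp, out)

model = build_fcnn()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[dice_coefficient])
model.summary()
```

Running the sigmoid output through a 0.5 threshold slice by slice, then stacking the slices, yields the volumetric brain mask against which overlap metrics such as Dice are computed.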