Osteoporosis is a common age-related disease characterized by reduced bone density and increased fracture risk. The microstructural quality of trabecular bone (Tb), commonly found at axial skeletal sites and at the ends of long bones, is an important determinant of bone strength and fracture risk. Emerging high-resolution CT scanners enable in vivo measurement of Tb microstructure at peripheral sites. However, the resolution dependence of microstructural measures and wide resolution discrepancies among CT scanners, together with rapid technology upgrades, warrant data harmonization in CT-based cross-sectional and longitudinal bone studies. This paper presents a deep learning-based method for high-resolution reconstruction of Tb microstructure from low-resolution CT scans using GAN-CIRCLE. A network was developed and evaluated using post-registered ankle CT scans of nineteen volunteers acquired on both low- and high-resolution CT scanners. 9,000 matching pairs of low- and high-resolution patches of size 64×64 were randomly harvested from ten volunteers for training and validation. Another 5,000 matching pairs of patches from the remaining nine volunteers were used for evaluation. Quantitative comparison shows that predicted high-resolution scans have a significantly improved structural similarity index (p < 0.01) with true high-resolution scans as compared to the same metric for low-resolution data. Tb microstructural measures such as thickness, spacing, and network area density were also computed from low-resolution and predicted high-resolution images and compared with values derived from true high-resolution scans. Thickness and network area measures from predicted images showed higher agreement with true high-resolution CT-derived values (CCC = [0.95, 0.91]) than the same measures from low-resolution images (CCC = [0.72, 0.88]).
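The agreement statistic reported above, the concordance correlation coefficient (CCC), jointly measures correlation and absolute agreement between paired measurements. A minimal sketch of Lin's CCC in plain Python (the function name is illustrative, not from the paper):

```python
from statistics import mean

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements.

    Uses population (biased) variances and covariance, as in Lin's
    original definition; returns 1.0 only for perfect agreement.
    """
    mx, my = mean(x), mean(y)
    vx = sum((a - mx) ** 2 for a in x) / len(x)
    vy = sum((b - my) ** 2 for b in y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike plain Pearson correlation, a constant offset between the two measurement sets (e.g. a scanner that systematically overestimates thickness) lowers the CCC, which is why it is used here to compare derived microstructural measures against the true high-resolution values.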
A major challenge in computed tomography (CT) is how to minimize patient radiation exposure without compromising image quality and diagnostic performance. The use of deep convolutional neural networks (CNNs) for noise reduction in low-dose CT (LDCT) images has recently shown great potential in this important application. In this paper, we present a highly efficient and effective neural network model for LDCT image noise reduction. Specifically, to capture local anatomical features, we integrate deep CNNs and skip-connection layers for feature extraction. We also introduce parallelized 1×1 convolutional layers, known as Network in Network, to lower the dimensionality of the output from the previous layer, achieving faster computation with less feature loss. To optimize the performance of the network, we adopt the Wasserstein generative adversarial network (WGAN) framework. Quantitative and qualitative comparisons demonstrate that our proposed network model can produce images with lower noise and more structural detail than state-of-the-art noise-reduction methods.
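The 1×1 (pointwise) convolution mentioned above mixes channels independently at each spatial location, which is how Network-in-Network layers reduce feature dimensionality cheaply. A toy sketch in plain Python (names and shapes are illustrative; a real implementation would use a deep learning framework):

```python
def conv1x1(feat, weights):
    """Pointwise (1x1) convolution: at every pixel, project C_in input
    channels to C_out output channels with one shared weight matrix.

    feat:    H x W x C_in nested lists (a feature map)
    weights: C_out x C_in (no spatial extent, hence "1x1")
    """
    return [[[sum(w * px[c] for c, w in enumerate(w_row))
              for w_row in weights]
             for px in row]
            for row in feat]
```

Because the kernel has no spatial extent, the cost scales only with the channel counts, so shrinking C_out directly reduces the computation of every subsequent layer while preserving the spatial resolution of the feature map.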
Magnetic resonance imaging (MRI) and computed tomography (CT) are widely used for screening, diagnosis, and image-guided therapeutics. Due to physical, technical, and economic limitations, MRI and CT scanners cannot achieve ideal image resolution. Given a nominal imaging performance, how to improve image resolution has been a hot topic, referred to as super-resolution research. As a promising super-resolution method, deep learning has shown great potential over recent years, especially in deblurring natural images. In this paper, based on the neural network model termed GAN-CIRCLE (Constrained by the Identical, Residual, Cycle Learning Ensemble), we adapt this network to achieve super-resolution for both MRI and CT. In this study, we demonstrate two-fold resolution enhancement for MRI and CT with the same network architecture.
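The cycle-consistency constraint at the heart of GAN-CIRCLE requires that super-resolving an image and then degrading it back should reproduce the input. A schematic sketch of that loss term in plain Python (the stand-in generators below are toy lambdas, not the paper's networks):

```python
def cycle_loss(x, g_up, g_down):
    """L1 cycle-consistency: x -> super-resolve -> degrade should recover x.

    x:      a flat list of pixel values (toy 1-D "image")
    g_up:   low-to-high resolution generator
    g_down: high-to-low resolution generator
    """
    recon = g_down(g_up(x))
    return sum(abs(a - b) for a, b in zip(x, recon)) / len(x)

# Toy stand-in generators: exact inverses give zero cycle loss.
g_up = lambda v: [2.0 * a for a in v]
g_down = lambda v: [a / 2.0 for a in v]
```

In training, this term is summed with adversarial, identity, and residual losses; its value is that it supervises the generator pair even when low- and high-resolution scans are unpaired.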