Deep learning-based algorithms have been widely used in low-dose CT imaging and have achieved promising results. However, most of these algorithms consider only the information in the target CT image itself, ignoring external information that could help improve imaging performance. In this study, we therefore present a convolutional neural network for low-dose CT reconstruction with non-local texture learning (NTL-CNN). Unlike traditional networks for CT imaging, the presented NTL-CNN approach takes into account the non-local features within adjacent slices of 3D CT images. Both the low-dose target CT images and the non-local features are then fed into a residual network to produce the desired high-quality CT images. Real patient datasets are used to evaluate the performance of the presented NTL-CNN. The experimental results demonstrate that the presented NTL-CNN approach obtains better CT images than the competing approaches in terms of noise-induced artifact reduction and preservation of structural details.
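The non-local aggregation described above can be illustrated with a minimal numpy sketch: for every pixel of the target slice, similar pixels are gathered from a small search window in the adjacent slices, weighted by patch similarity. The function name, patch/search sizes, and Gaussian similarity kernel are illustrative assumptions; in the paper the non-local features are produced by a learned operator inside the network.

```python
import numpy as np

def nonlocal_features(target, neighbors, patch=3, search=5, h=0.5):
    """Average similar pixels drawn from adjacent slices, weighted by
    patch similarity. A hand-crafted sketch of the non-local idea only;
    NTL-CNN learns this operator rather than fixing it."""
    p, s = patch // 2, search // 2
    pad = p + s
    t = np.pad(target, p, mode="reflect")
    H, W = target.shape
    num = np.zeros((H, W))
    den = np.zeros((H, W))
    for nb in neighbors:                      # adjacent slices in the 3D volume
        n = np.pad(nb, pad, mode="reflect")
        for di in range(-s, s + 1):           # offsets inside the search window
            for dj in range(-s, s + 1):
                for i in range(H):
                    for j in range(W):
                        tp = t[i:i + patch, j:j + patch]
                        npch = n[i + s + di:i + s + di + patch,
                                 j + s + dj:j + s + dj + patch]
                        # Gaussian weight on the patch distance
                        w = np.exp(-np.sum((tp - npch) ** 2) / (h * h))
                        num[i, j] += w * n[i + pad + di, j + pad + dj]
                        den[i, j] += w
    return num / np.maximum(den, 1e-12)
```

In the full network these features would be concatenated with the low-dose slice and passed to the residual branch; the sketch only shows where the cross-slice information comes from.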
Low-dose computed tomography (LDCT) examinations are of essential use in clinical applications because of the lower radiation-associated cancer risks of CT imaging. However, reductions in radiation dose can produce severe noise and artifacts that affect the diagnostic accuracy of radiologists. Although many deep learning networks have been proposed, most rely on a large number of annotated CT image pairs (LDCT images/high-dose CT (HDCT) images). Moreover, it is challenging for these networks to cope with the growing amount of CT data, especially the large number of medium-dose CT (MDCT) images that are easy to collect and whose radiation dose is lower than that of HDCT images but higher than that of LDCT images. Therefore, in this work we propose a progressive transfer-learning network (PETNet) for low-dose CT image reconstruction with limited annotated CT data and abundant corrupted CT data. The presented PETNet consists of two phases. In the first phase, a network is trained on a large number of LDCT/MDCT image pairs, in the spirit of the Noise2Noise network, which has shown that corrupted data alone can yield promising results for network training. It should be noted that this network inevitably introduces undesired bias into the results because of the complex noise distribution in CT images. In the second phase, we combine the pre-trained network with another simple network to construct the presented PETNet. In particular, the parameters of the pre-trained network are frozen and transferred directly to the presented PETNet, which is then trained on a small number of LDCT/HDCT image pairs. Experimental results on Mayo Clinic data demonstrate the qualitative and quantitative superiority of the presented PETNet over both a network trained on LDCT/HDCT image pairs and a Noise2Noise method trained on LDCT/MDCT image pairs.
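The two-phase training scheme can be sketched with a toy one-parameter linear model in numpy. Everything here is an illustrative assumption, not the paper's architecture: the "networks" are single scalar weights, the "anatomy" is a Gaussian signal, and the noise is additive. Phase 1 fits a mapping on abundant noisy/noisy (LDCT/MDCT-style) pairs, as in Noise2Noise; phase 2 freezes it and fits a small trainable head on a few noisy/clean (LDCT/HDCT-style) pairs.

```python
import numpy as np

def train_linear(x, y, steps=500, lr=0.1, w_init=0.0):
    # Fit y ≈ w * x by gradient descent on the mean-squared error.
    w = w_init
    for _ in range(steps):
        grad = 2.0 * np.mean((w * x - y) * x)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)

# Phase 1: abundant LDCT/MDCT-style pairs -- two corrupted observations
# of the same underlying signal, as in Noise2Noise-style training.
signal = rng.normal(size=2000)                     # stand-in for anatomy
ldct = signal + rng.normal(scale=0.5, size=2000)   # heavier "low-dose" noise
mdct = signal + rng.normal(scale=0.2, size=2000)   # lighter "medium-dose" noise
w1 = train_linear(ldct, mdct)                      # pre-trained mapping

# Phase 2: the pre-trained mapping is frozen (w1 is never updated again)
# and only a small head w2 is fitted on a few LDCT/HDCT-style pairs.
signal_s = rng.normal(size=200)
ldct_s = signal_s + rng.normal(scale=0.5, size=200)
features = w1 * ldct_s                             # frozen phase-1 output
w2 = train_linear(features, signal_s, w_init=1.0)  # trainable head

denoise = lambda x: w2 * (w1 * x)                  # combined two-phase model
```

The design mirrors the abstract: the phase-1 weight absorbs what can be learned from corrupted pairs alone, and the small clean-supervised head corrects the residual bias that Noise2Noise-style training leaves behind.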