Sparse-view computed tomography (CT) image reconstruction aims to shorten scanning time, reduce radiation dose, and yield high-quality CT images simultaneously. Researchers have developed deep learning (DL) based models for sparse-view CT reconstruction on circular scanning trajectories. However, cone-beam CT (CBCT) reconstruction from a circular trajectory is theoretically ill-posed and cannot exactly reconstruct 3D CT images, whereas CBCT reconstruction from a helical trajectory admits exact reconstruction because the helical trajectory satisfies the Tuy condition. Therefore, we propose a dual-domain helical projection-fidelity network (DHPF-Net) for sparse-view helical CT (SHCT) reconstruction. The DHPF-Net consists of three modules: an artifact reduction network (ARN), a helical projection fidelity (HPF) module, and a union restoration network (URN). Specifically, the ARN reconstructs high-quality CT images by suppressing the noise artifacts of sparse-view images. The HPF module replaces the values at the measured positions in the projection of the ARN output with the measured sparse-view projection, which ensures data fidelity of the final predicted projection and preserves the sharpness of the reconstructed CT images. The URN further improves reconstruction performance by combining the sparse-view images, the ARN images, and the HPF images. In addition, to extract the structural information of adjacent images, leverage structural self-similarity, and avoid expensive computational cost, we arrange the 3D CT volume along the channel dimension. Experimental results on a public dataset demonstrate that the proposed method achieves superior performance for sparse-view helical CT image reconstruction.
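The core of the HPF module is a masked replacement in the sinogram domain: wherever a view was actually measured, the network's predicted projection is overwritten by the measurement. A minimal sketch of that idea, with illustrative function and variable names (not the authors' implementation):

```python
import numpy as np

def helical_projection_fidelity(pred_sino, measured_sino, view_mask):
    """Enforce data fidelity on a predicted helical sinogram.

    pred_sino:     (num_views, num_dets) network-predicted sinogram
    measured_sino: (num_views, num_dets) sparse-view measurements,
                   valid only at the measured views
    view_mask:     (num_views,) boolean, True where a view was measured
    """
    fused = pred_sino.copy()
    # Keep predicted values at unmeasured views; restore measurements
    # at measured views so the output exactly matches the acquired data.
    fused[view_mask] = measured_sino[view_mask]
    return fused
```

Because the replacement is exact at measured views, the final projection is guaranteed consistent with the acquired sparse-view data regardless of how well the network interpolates the missing views.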
The presence of metal often heavily degrades computed tomography (CT) image quality and inevitably affects subsequent clinical diagnosis and therapy. With the rapid development of deep learning (DL), many DL-based methods have been proposed for the metal artifact reduction (MAR) task in CT imaging, including image-domain, projection-domain, and dual-domain MAR methods. Recently, the view-by-view backprojection tensor (VVBP-Tensor) domain was developed as an intermediary between the image domain and the projection domain; the VVBP-Tensor also has useful mathematical properties, such as low rank and structural self-similarity. Therefore, we present a VVBP-Tensor based deep neural network (DNN) framework for better MAR performance in CT imaging. Specifically, the original projection is separately pre-processed by a linear-interpolation completion algorithm and a clipping algorithm to quickly remove most metal artifacts while preserving structural information. Then, the clipped projection is restored by a sinogram recovery network to smooth the projection values inside and outside the metal trajectory. In addition, the two pre-processed projections are separately transformed into two tensors by filtering, backprojecting, and sorting, and the two sorted tensors are fed together into the MAR reconstruction network to further improve reconstructed CT image quality. The proposed method has good interpretability, since the MAR reconstruction network can be viewed as a weighted CT image reconstruction process with learnable adaptive weights along the scan-view direction. The superior MAR performance of the presented method is demonstrated on a simulated dataset in terms of qualitative and quantitative measurements.
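The "filtering, backprojecting and sorting" step and the weighted-reconstruction interpretation can be sketched as follows. Each scan view contributes one filtered single-view backprojection; stacking these and sorting pixel-wise along the view axis yields the VVBP-Tensor, and a weighted sum along that axis with uniform weights recovers the ordinary FBP image. Function names here are illustrative, not the authors' API:

```python
import numpy as np

def vvbp_tensor(view_backprojections):
    """Sort per-view filtered backprojections pixel-wise along the view axis.

    view_backprojections: (num_views, H, W), where each slice is the
    filtered backprojection of a single view; summing along axis 0
    gives the conventional FBP reconstruction.
    """
    return np.sort(view_backprojections, axis=0)

def weighted_reconstruction(sorted_tensor, weights):
    """Weighted sum along the (sorted) view axis.

    weights: (num_views,) per-view weights; in the paper's framework
    these are learnable and adaptive, and uniform weights of 1
    reduce this to the plain FBP sum.
    """
    return np.tensordot(weights, sorted_tensor, axes=1)
```

Because sorting is a per-pixel permutation, the sum along the view axis is unchanged, so the unweighted reconstruction is preserved exactly while the sorted ordering exposes the low-rank, self-similar structure the network exploits.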
Fully supervised deep learning (DL) methods have been widely used in low-dose CT (LDCT) imaging and can usually achieve highly accurate results. These methods require a large labeled training set consisting of pairs of LDCT images and their corresponding high-dose CT (HDCT) counterparts. They successfully learn intermediate feature representations describing important components of CT images, such as the noise distribution and structural details, which is important for capturing the dependencies from LDCT images to HDCT ones. However, obtaining such a large set of labeled CT images is time-consuming and costly, especially since HDCT images are limited in clinics. In comparison, large numbers of unlabeled LDCT images are usually easily accessible, and the critical information latent in them can be leveraged to further boost restoration performance. Therefore, in this work, we present a semi-supervised noise distribution learning network to suppress noise-induced artifacts in LDCT images. For simplicity, the presented network is termed "SNDL-Net". SNDL-Net consists of two sub-networks, i.e., a supervised network and an unsupervised network. In the supervised network, LDCT/HDCT image pairs are used for training. The unsupervised network considers the complex noise distribution in LDCT images, models the noise with a Gaussian mixture framework, and learns the proper gradient of LDCT images in a purely unsupervised manner. As in the supervised training, the gradient information in a large set of unlabeled LDCT images can be used for unsupervised training. Moreover, to learn the noise distribution accurately, the discrepancy between the noise distributions learned by the supervised and unsupervised networks is modeled by a Kullback-Leibler (KL) divergence.
Experiments on the Mayo Clinic dataset verify that the method is effective for low-dose CT image restoration with only a small amount of labeled data, compared to previous supervised deep learning methods.
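The KL coupling term between the two learned noise distributions can be illustrated with the closed-form KL divergence between univariate Gaussians; the abstract describes a Gaussian mixture framework, so this single-component formula is only a simplified sketch with assumed names:

```python
import math

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """KL(N(mu_p, var_p) || N(mu_q, var_q)) in closed form.

    In a loss like the one described, p could be the noise distribution
    learned by the supervised branch and q the one learned by the
    unsupervised branch; minimizing this term pulls them together.
    """
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q
                  - 1.0)
```

The divergence is zero only when the two distributions coincide, which is what makes it suitable as a consistency penalty between the two sub-networks.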