Metal implants give rise to metal artifacts in computed tomography (CT) images, which may lead to diagnostic errors and erroneous CT number estimates when the CT is used for radiation therapy (RT) planning. Methods for reducing metal artifacts by exploiting the anatomical information provided by coregistered magnetic resonance (MR) images are of great potential value, but remain technically challenging due to the poor contrast between bone and air on the MR image. In this paper, we present a novel MR-based algorithm for automatic CT metal artifact reduction (MAR), referred to as kerMAR. It combines kernel regression on known CT value/MR patch pairs in the uncorrupted patient volume with a forward model of the artifact-corrupted values to estimate CT replacement values. In contrast to pseudo-CT generation that builds on multi-patient modelling, the algorithm requires no MR intensity normalisation or atlas registration. Image results for 7 head-and-neck RT patients, with T1-weighted images acquired in the same fixation as the RT planning CT, suggest a potential for more complete MAR close to the metal implants than the oMAR algorithm (Philips) used clinically. Our results further show improved performance in air and bone regions as compared to other MR-based MAR algorithms. In addition, we experimented with using kerMAR to define a prior for iterative reconstruction with the maximum likelihood transmission reconstruction algorithm; however, this yielded no apparent improvement.
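As a rough illustration of the regression component, the sketch below implements Nadaraya-Watson kernel regression from (CT value, MR patch) training pairs to a replacement CT value. The function name, patch size, bandwidth, and synthetic data are illustrative assumptions; the paper's forward model of the artifact-corrupted values is omitted.

```python
import numpy as np

def kernel_regression_ct(query_patch, train_patches, train_ct, bandwidth):
    """Nadaraya-Watson kernel regression: estimate a CT replacement value
    for an artifact-corrupted voxel from its MR patch, using (CT value,
    MR patch) pairs sampled from the uncorrupted part of the same patient."""
    # Squared Euclidean distances between the query patch and all training patches
    d2 = np.sum((train_patches - query_patch) ** 2, axis=1)
    # Gaussian kernel weights
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    if w.sum() < 1e-12:  # no similar patch found; fall back to the mean CT value
        return train_ct.mean()
    return np.dot(w, train_ct) / w.sum()

# Illustration with synthetic data: 5x5x5 MR patches flattened to 125-d vectors
rng = np.random.default_rng(0)
train_patches = rng.normal(size=(1000, 125))   # patches from uncorrupted voxels
train_ct = rng.normal(40.0, 300.0, size=1000)  # paired CT values (HU)
query_patch = rng.normal(size=125)             # patch at a corrupted voxel
print(kernel_regression_ct(query_patch, train_patches, train_ct, bandwidth=5.0))
```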
Transcranial brain stimulation (TBS) techniques such as transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), and others have seen strongly increasing use as tools in therapy and research within the last 20 years. In order to target the stimulation precisely, it is important to accurately model the individual head anatomy of a subject. Of particular importance is accurate reconstruction of the skull, as it has the strongest impact on the current pathways due to its low conductivity. Automated tools that can reliably reconstruct the anatomy of the human head from magnetic resonance (MR) scans would therefore be highly valuable for the application of transcranial stimulation methods. Such head models can also be used to inform source localization in electroencephalography (EEG) and magnetoencephalography (MEG). Automated segmentation of the skull from MR images is, however, challenging, as the skull produces very little signal in MR. In order to avoid topological defects, such as holes in the segmentations, a strong model of the skull shape is needed. In this paper we propose a new shape model for skull segmentation based on convolutional restricted Boltzmann machines (cRBMs). Compared to traditionally used lower-order shape models, such as pair-wise Markov random fields (MRFs), cRBMs model local shapes in larger spatial neighborhoods while still allowing for efficient inference. We compare the skull segmentation accuracy of our approach to that of two previously published methods and show significant improvement.
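For intuition, here is a minimal 2D sketch of the cRBM building blocks: the bottom-up hidden-unit activations and the top-down reconstruction that acts as the shape prior. The paper models 3D skull shapes; the 2D slice, filter count, and filter size here are illustrative assumptions, and the filters are random rather than learned.

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def crbm_hidden_probs(v, weights, hidden_bias):
    """Mean-field activation of the hidden feature maps of a convolutional
    RBM given a binary segmentation image v: P(h_k = 1 | v)."""
    return np.stack([sigmoid(correlate2d(v, W_k, mode='valid') + b_k)
                     for W_k, b_k in zip(weights, hidden_bias)])

def crbm_visible_probs(h, weights, visible_bias, shape):
    """Reconstruction P(v = 1 | h): the shape prior pulls the segmentation
    toward locally plausible skull configurations."""
    act = np.full(shape, visible_bias)
    for h_k, W_k in zip(h, weights):
        act += convolve2d(h_k, W_k, mode='full')
    return sigmoid(act)

# Illustration: 4 random 7x7 filters applied to a 64x64 binary slice
rng = np.random.default_rng(1)
weights = [rng.normal(0, 0.01, size=(7, 7)) for _ in range(4)]
hidden_bias = rng.normal(0, 0.01, size=4)
v = (rng.random((64, 64)) > 0.5).astype(float)
h = crbm_hidden_probs(v, weights, hidden_bias)
v_recon = crbm_visible_probs(h, weights, -0.1, v.shape)
print(h.shape, v_recon.shape)  # (4, 58, 58) (64, 64)
```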
Many pulmonary diseases can be characterized by visual abnormalities on lung CT scans. Some diseases manifest similar defects but require completely different treatments, as is the case for Pulmonary Hypertension (PH) and Pulmonary Embolism (PE): both present hypo- and hyper-perfused regions, but with different distributions across the lung, and require different treatment protocols. Finding these distributions by visual inspection is not trivial even for trained radiologists, who currently rely on invasive catheterization to diagnose PH. A Computer-Aided Diagnosis (CAD) tool that could facilitate the non-invasive diagnosis of these diseases would benefit both radiologists and patients. Most of the visual differences in the parenchyma can be characterized using texture descriptors. Current CAD systems often use texture information, but the texture is either computed in a patch-based fashion or based on an anatomical division of the lung. The difficulty of precisely locating these divisions in abnormal lungs calls for new tools for obtaining meaningful subdivisions of the lungs.
In this paper we present a method for unsupervised segmentation of lung CT scans into subregions that are similar in terms of texture and spatial proximity. To this end, we combine a previously validated Riesz-wavelet texture descriptor with a well-known superpixel segmentation approach that we extend to 3D. We demonstrate the feasibility and accuracy of our approach on a simulated texture dataset, and show preliminary results for CT scans of the lung comparing subjects suffering from either PH or PE. The resulting texture-based atlas of individual lungs can potentially help physicians in diagnosis or be used for studying common texture distributions related to other diseases.
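A minimal sketch of the clustering step, assuming scikit-image's SLIC implementation (which handles 3D grayscale volumes via channel_axis=None) as the superpixel method; the random volume and the random feature array standing in for the per-voxel Riesz-wavelet descriptors are synthetic placeholders.

```python
import numpy as np
from skimage.segmentation import slic

# Synthetic stand-in for a lung CT volume (z, y, x), intensities in [0, 1]
rng = np.random.default_rng(2)
volume = rng.random((40, 128, 128))

# SLIC extended to 3D: supervoxels group voxels by intensity similarity
# and spatial proximity; 'compactness' trades one off against the other.
labels = slic(volume, n_segments=200, compactness=0.1, channel_axis=None)

# Aggregate a per-voxel texture descriptor within each supervoxel.
# 'texture' stands in for the Riesz-wavelet features (one vector per voxel).
texture = rng.random(volume.shape + (6,))
ids = np.unique(labels)
descriptors = np.stack([texture[labels == l].mean(axis=0) for l in ids])
print(len(ids), descriptors.shape)
```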
In radiotherapy treatment planning based solely on magnetic resonance imaging (MRI), the electron density information usually obtained from computed tomography (CT) must be derived from the MRI by synthesizing a so-called pseudo-CT (pCT). This is a non-trivial task, since MRI intensities are neither uniquely nor quantitatively related to electron density. Typical approaches involve either a classification or regression model requiring specialized MRI sequences to resolve intensity ambiguities, or an atlas-based model necessitating multiple registrations between atlases and subject scans. In this work, we explore a machine learning approach for creating a pCT of the pelvic region from conventional MRI sequences without using atlases. We use a random forest provided with information about local texture, edges and spatial features derived from the MRI, which helps to resolve intensity ambiguities. Furthermore, we use the concept of auto-context, sequentially training a number of classification forests to create and improve context features, which are finally used to train a regression forest for pCT prediction. We evaluate the pCT quality in terms of the voxel-wise error and the radiologic accuracy as measured by water-equivalent path lengths. We compare the performance of our method against two baseline pCT strategies, which either set all MRI voxels in the subject equal to the CT value of water, or in addition transfer the bone volume from the real CT. We show an improved performance compared to both baseline pCTs, suggesting that our method may be useful for MRI-only radiotherapy.
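A toy sketch of the auto-context scheme using scikit-learn forests on synthetic data: each classification stage appends its class probabilities as context features for the next stage, and a final regression forest predicts the pCT values. Feature dimensions, stage count, and hyperparameters are illustrative assumptions; in practice the context features for training would come from cross-validated predictions rather than from the training data itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(3)
n_voxels = 5000
X = rng.normal(size=(n_voxels, 20))          # MRI texture/edge/spatial features
tissue = rng.integers(0, 3, size=n_voxels)   # e.g. air / soft tissue / bone
hu = rng.normal(0.0, 500.0, size=n_voxels)   # reference CT values (HU)

# Auto-context: each classification stage appends its class-probability
# outputs as extra context features for the next stage.
features = X
for stage in range(2):
    clf = RandomForestClassifier(n_estimators=50, random_state=stage)
    clf.fit(features, tissue)
    context = clf.predict_proba(features)    # cross-validated in practice
    features = np.hstack([X, context])

# Final regression forest maps MRI + context features to pseudo-CT values.
reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(features, hu)
pct = reg.predict(features)
print(np.mean(np.abs(pct - hu)))             # voxel-wise mean absolute error
```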
We present a fully automated generative method for simultaneous brain tumor and organs-at-risk segmentation in multi-modal magnetic resonance images. The method combines an existing whole-brain segmentation technique with a spatial tumor prior, which uses convolutional restricted Boltzmann machines to model tumor shape. The method is not tuned to any specific imaging protocol and can simultaneously segment the gross tumor volume, peritumoral edema and healthy tissue structures relevant for radiotherapy planning. We validate the method on a manually delineated clinical data set of glioblastoma patients by comparing segmentations of gross tumor volume, brainstem and hippocampus. The preliminary results demonstrate the feasibility of the method.
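As a conceptual sketch only (not the paper's actual model), the toy code below combines an atlas prior over healthy structures with a spatial tumor prior and Gaussian intensity likelihoods into per-voxel posteriors, which is the general structure of such generative segmentation models. All arrays, class counts, and parameters are synthetic placeholders.

```python
import numpy as np

def posterior_labels(intensities, atlas_prior, tumor_prior, means, stds):
    """Toy per-voxel posterior: p(label | intensity) proportional to
    likelihood(intensity | label) * prior(label). Tumor classes take their
    spatial prior from a shape model (a cRBM in the paper), healthy classes
    from a whole-brain atlas."""
    # Healthy-structure columns scaled by (1 - tumor prob.); tumor column
    # given by the spatial tumor prior.
    prior = np.hstack([atlas_prior * (1.0 - tumor_prior[:, None]),
                       tumor_prior[:, None]])
    # Gaussian intensity likelihood per class
    lik = np.exp(-0.5 * ((intensities[:, None] - means) / stds) ** 2) / stds
    post = prior * lik
    return post / post.sum(axis=1, keepdims=True)

rng = np.random.default_rng(4)
n = 1000
intens = rng.normal(size=n)
atlas = rng.dirichlet(np.ones(3), size=n)   # 3 healthy structures
tumor_p = rng.random(n) * 0.2               # spatial tumor prior
means = np.array([-1.0, 0.0, 1.0, 2.0])     # per-class intensity means
stds = np.array([0.5, 0.5, 0.5, 0.8])
labels = posterior_labels(intens, atlas, tumor_p, means, stds).argmax(axis=1)
print(np.bincount(labels, minlength=4))
```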
Conference Committee Involvement (12)
Image Processing
17 February 2025 | San Diego, California, United States
Image Processing
19 February 2024 | San Diego, California, United States
Image Processing
20 February 2023 | San Diego, California, United States
Image Processing
20 February 2022 | San Diego, California, United States
Image Processing
15 February 2021 | Online Only, California, United States
Image Processing
17 February 2020 | Houston, Texas, United States
Image Processing
19 February 2019 | San Diego, California, United States
Image Processing
11 February 2018 | Houston, Texas, United States
Image Processing Posters
12 February 2017 | Orlando, Florida, United States
Image Processing
12 February 2017 | Orlando, Florida, United States
Image Processing
1 March 2016 | San Diego, California, United States
Image Processing
24 February 2015 | Orlando, Florida, United States