Spleen volume estimation using automated image segmentation techniques may be used to detect splenomegaly (an abnormally enlarged spleen) on magnetic resonance imaging (MRI) scans. In recent years, deep convolutional neural network (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variations in both the size and shape of the spleen on MRI images may result in large false positive and false negative labeling when deploying DCNN-based methods. In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variations when segmenting extraordinarily large spleens. SSNet was designed based on the framework of image-to-image conditional generative adversarial networks (cGAN). Specifically, the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1 weighted and T2 weighted) from patients with splenomegaly was used to train and test the networks. The experimental results demonstrated a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 using SSNet on independently tested MRI volumes of patients with splenomegaly.
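The Dice coefficient reported above measures voxelwise overlap between an automatic and a manual segmentation. A minimal NumPy sketch on toy masks (not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: define DSC as perfect agreement
    return 2.0 * intersection / total

# Toy 2-D "spleen" masks: the prediction overlaps the truth partially.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True          # 16 voxels
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True           # 16 voxels, 9 of them overlapping
print(round(dice_coefficient(pred, truth), 4))  # 2*9/(16+16) = 0.5625
```

A DSC of 1.0 indicates perfect overlap and 0.0 no overlap; the 0.9260 reported above therefore reflects near-complete agreement with manual labels.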
Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has
been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR
images is challenging given the great inter-subject variance of human abdomens and wide variety of clinical
images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and
difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen
segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation
for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated
atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE). To further control outliers, a semi-automated craniocaudal length based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior that is used to
guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2
weighted) was used to evaluate different methods. Both automated and semi-automated methods achieved median DSC >
0.9. Outliers were alleviated by L-SIMPLE (≈1 min of manual effort per scan), which achieved a 0.9713 Pearson correlation with the manual segmentation. The results demonstrated that multi-atlas segmentation is able to achieve accurate spleen segmentation from multi-contrast splenomegaly MRI scans.
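The SIMPLE-style selection described above can be sketched as an iterative loop: fuse the surviving atlases, score each atlas against the fused estimate, and discard poor performers. This is an illustrative simplification (majority-vote fusion, a hypothetical DSC threshold), not the authors' implementation:

```python
import numpy as np

def dsc(a, b):
    s = a.sum() + b.sum()
    return 1.0 if s == 0 else 2.0 * np.logical_and(a, b).sum() / s

def simple_select(atlas_labels, thresh=0.7, max_iter=10):
    """SIMPLE-style atlas selection (sketch): iteratively fuse the
    surviving atlases by majority vote, score each atlas against the
    fused estimate, and drop atlases scoring below `thresh`."""
    keep = list(range(len(atlas_labels)))
    for _ in range(max_iter):
        fused = np.mean([atlas_labels[i] for i in keep], axis=0) >= 0.5
        scores = {i: dsc(atlas_labels[i], fused) for i in keep}
        survivors = [i for i in keep if scores[i] >= thresh]
        if survivors == keep or not survivors:
            break
        keep = survivors
    return keep, fused

# Three agreeing atlases plus one gross registration failure.
good = np.zeros((10, 10), dtype=bool); good[2:8, 2:8] = True
bad = np.zeros((10, 10), dtype=bool); bad[0:2, 0:2] = True
atlases = [good, good.copy(), good.copy(), bad]
kept, fused = simple_select(atlases)
print(kept)  # [0, 1, 2] -- the outlier atlas (index 3) is discarded
```

L-SIMPLE additionally biases this loop with a craniocaudal-length prior so that atlases of grossly mismatched spleen size are excluded even when intensity-driven fusion would keep them.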
Automatic spleen segmentation on CT is challenging due to the complexity of abdominal structures. Multi-atlas
segmentation (MAS) has been shown to be a promising approach for spleen segmentation. To deal with the
substantial registration errors between the heterogeneous abdominal CT images, the context learning method for
performance level estimation (CLSIMPLE) method was previously proposed. The context learning method
generates a probability map for a target image using a Gaussian mixture model (GMM) as the prior in a Bayesian
framework. However, CLSIMPLE typically trains a single GMM from the entire heterogeneous training atlas
set. Therefore, the estimated spatial prior maps might not represent specific target images accurately. Rather than
using all training atlases, we propose an adaptive GMM based context learning technique (AGMMCL) to train the
GMM adaptively using subsets of the training data with the subsets tailored for different target images. Training sets
are selected adaptively based on the similarity between atlases and the target images using cranio-caudal length,
which is derived manually from the target image. To validate the proposed method, a heterogeneous dataset with a
large variation of spleen sizes (100 cc to 9000 cc) is used. We designate a metric of size to differentiate each group
of spleens, with 0 to 100 cc as small, 200 to 500 cc as medium, 500 to 1000 cc as large, 1000 to 2000 cc as XL, and 2000 cc and above as XXL. From the results, AGMMCL leads to more accurate spleen segmentations by training
GMMs adaptively for different target images.
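At the core of AGMMCL is fitting a GMM to intensities drawn from a selected atlas subset. A self-contained 1-D expectation-maximization sketch on synthetic intensities (the actual method selects its training subset by cranio-caudal length and operates on real atlas data):

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=50, seed=0):
    """Tiny EM fit of a 1-D Gaussian mixture; an illustrative sketch of
    the intensity model only. AGMMCL trains the GMM on an adaptively
    selected atlas subset rather than the whole training set."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False).astype(float)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        d = x[:, None] - mu[None, :]
        pdf = np.exp(-0.5 * d * d / var) / np.sqrt(2 * np.pi * var)
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        var = (r * d * d).sum(axis=0) / nk
    return pi, mu, var

# Synthetic intensities: "background" near 20, "spleen" near 80.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(20, 3, 500), rng.normal(80, 5, 500)])
pi, mu, var = fit_gmm_1d(x)
print(np.round(np.sort(mu)))  # roughly [20., 80.]
```

The fitted component densities then serve as the class-conditional likelihoods inside the Bayesian spatial-prior framework described above.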
Active shape models (ASMs) have been widely used for extracting human anatomies in medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially around highly variable contexts in computed tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) by integrating the multi-atlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied on the probability map generated from MALF. This augmentation effectively extends the search range of correspondent landmarks while reducing sensitivity to the image contexts, and improves segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to abdominal CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables subcutaneous/visceral fat measurement, with high correlation to the measurement derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in the SC.
The abdominal wall is an important structure differentiating subcutaneous and visceral compartments and is intimately involved in maintaining abdominal structure. Segmentation of the whole abdominal wall on routinely acquired computed tomography (CT) scans remains challenging due to the variations and complexities of the wall and surrounding tissues. In this study, we propose a slice-wise augmented active shape model (AASM) approach to robustly segment both the outer and inner surfaces of the abdominal wall. Multi-atlas label fusion (MALF) and level set (LS) techniques are integrated into the traditional ASM framework. The AASM approach globally optimizes the landmark updates in the presence of complicated underlying local anatomical contexts. The proposed approach was validated on 184 axial slices of 20 CT scans. The Hausdorff distance against the manual segmentation was significantly reduced using the proposed approach compared to using ASM, MALF, and LS individually. Our segmentation of the whole abdominal wall enables subcutaneous and visceral fat measurement, with high correlation to the measurement derived from manual segmentation. This study presents the first generic algorithm that combines ASM, MALF, and LS, and demonstrates a practical application for automatically capturing visceral and subcutaneous fat volumes.
Identifying cross-sectional and longitudinal correspondence in the abdomen on computed tomography (CT) scans is necessary for quantitatively tracking change and understanding population characteristics, yet abdominal image registration is a challenging problem. The key difficulty is the large variation in organ dimensions and shapes across subjects. The current standard registration method uses a global, or body-wise, technique based on the global topology for alignment. This method, although producing decent results, is substantially influenced by outliers, thus leaving room for significant improvement. Here, we study a new image registration approach using local (organ-wise) registration by first creating organ-specific bounding boxes and then using these regions of interest (ROIs) to align references to targets. Based on the Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD), the organ-wise approach is demonstrated to have significantly better results by minimizing the distorting effects of organ variations. This paper compares exclusively the two registration methods by providing novel quantitative and qualitative comparison data, and is a subset of the more comprehensive problem of improving multi-atlas segmentation by using organ normalization.
Modern magnetic resonance imaging (MRI) brain atlases are high quality 3-D volumes with specific structures labeled in the volume. Atlases are essential in providing a common space for interpretation of results across studies, for anatomical education, and for providing quantitative image-based navigation. Extensive work has been devoted to atlas construction for humans, the macaque, and several non-primate species (e.g., the rat). One notable gap in the literature is the common squirrel monkey, for which the primary published atlases date from the 1960s. The common squirrel monkey has been used extensively as a surrogate for humans in biomedical studies, given its neuroanatomical similarities and practical considerations. This work describes the continued development of a multi-modal MRI atlas for the common squirrel monkey, for which a structural imaging space and gray matter parcels have been previously constructed. This study adds white matter tracts to the atlas. The new atlas includes 49 white matter (WM) tracts, defined using diffusion tensor imaging (DTI) in three animals, and combines these data to define the anatomical locations of these tracts in a standardized coordinate system compatible with previous development. An anatomist reviewed the resulting tracts, and the inter-animal reproducibility (i.e., the Dice index of each WM parcel across animals in common space) was assessed. The Dice indices range from 0.05 to 0.80 due to differences in local registration quality and the variation of WM tract position across individuals. However, the combined WM labels from the 3 animals represent the general locations of WM parcels, adding basic connectivity information to the atlas.
Abdominal segmentation on clinically acquired computed tomography (CT) has been a challenging problem given the inter-subject variance of human abdomens and complex 3-D relationships among organs. Multi-atlas segmentation (MAS) provides a potentially robust solution by leveraging label atlases via image registration and statistical fusion. We posit that the efficiency of atlas selection requires further exploration in the context of substantial registration errors. The selective and iterative method for performance level estimation (SIMPLE) method is a MAS technique integrating atlas selection and label fusion that has proven effective for prostate radiotherapy planning. Herein, we revisit atlas selection and fusion techniques for segmenting 12 abdominal structures using clinically acquired CT. Using a re-derived SIMPLE algorithm, we show that performance on multi-organ classification can be improved by accounting for exogenous information through Bayesian priors (so called context learning). These innovations are integrated with the joint label fusion (JLF) approach to reduce the impact of correlated errors among selected atlases for each organ, and a graph cut technique is used to regularize the combined segmentation. In a study of 100 subjects, the proposed method outperformed other comparable MAS approaches, including majority vote, SIMPLE, JLF, and the Wolz locally weighted vote technique. The proposed technique provides consistent improvement over state-of-the-art approaches (median improvement of 7.0% and 16.2% in DSC over JLF and Wolz, respectively) and moves toward efficient segmentation of large-scale clinically acquired CT data for biomarker screening, surgical navigation, and data mining.
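Majority vote, the simplest fusion baseline compared above, assigns each voxel the label most frequently chosen by the registered atlases. A toy multi-label sketch:

```python
import numpy as np

def majority_vote(label_maps, n_labels):
    """Majority-vote label fusion: each voxel takes the label most
    frequently assigned by the registered atlases (ties -> lowest label)."""
    stack = np.stack(label_maps)                       # (n_atlases, *shape)
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy atlases labeling a 4-voxel image with labels {0: bg, 1: spleen}.
a1 = np.array([0, 1, 1, 0])
a2 = np.array([0, 1, 1, 1])
a3 = np.array([1, 1, 0, 0])
print(majority_vote([a1, a2, a3], n_labels=2))  # [0 1 1 0]
```

SIMPLE, JLF, and the context-learning variants discussed above all improve on this baseline by weighting or excluding atlases rather than counting every vote equally.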
Image registration has become an essential image processing technique for comparing data across time and individuals. With the successes in volumetric brain registration, general-purpose software tools are beginning to be applied to abdominal computed tomography (CT) scans. Herein, we evaluate five current tools for registering clinically acquired abdominal CT scans. Twelve abdominal organs were labeled on a set of 20 atlases to enable assessment of correspondence. The 20 atlases were pairwise registered based only on intensity information with five registration tools (affine IRTK, FNIRT, Non-Rigid IRTK, NiftyReg, and ANTs). Following the brain literature, the Dice similarity coefficient (DSC), mean surface distance, and Hausdorff distance were calculated on the registered organs individually. However, interpretation was confounded by a significant proportion of outliers. Examining the retrospectively selected top 1 and top 5 atlases for each target revealed a substantive performance difference between methods. To further our understanding, we constructed majority vote segmentations with the top 5 DSC values for each organ and target. The results illustrated a median improvement of 85% in DSC between the raw results and the majority vote. These experiments show that some images may be well registered to some targets using the available software tools, but there is significant room for improvement, and they reveal the need for innovation and research in the field of abdominal CT registration. If image registration is to be used for local interpretation of abdominal CT, great care must be taken to account for outliers (e.g., atlas selection in statistical fusion).
Abdominal organ segmentation with clinically acquired computed tomography (CT) is drawing increasing interest in the medical imaging community. Gaussian mixture models (GMMs) have been used extensively in medical image segmentation, most notably in the brain for cerebrospinal fluid / gray matter / white matter differentiation. Because abdominal CT exhibits strong localized intensity characteristics, GMMs have recently been incorporated into multi-stage abdominal segmentation algorithms. In the context of variable abdominal anatomy and rich algorithmic pipelines, it is difficult to assess the marginal contribution of the GMM. Herein, we characterize the efficacy of an a posteriori framework that integrates GMMs of organ-wise intensity likelihood with spatial priors from multiple target-specific registered labels. In our study, we first manually labeled 100 CT images. Then, we assigned 40 images as training data for constructing target-specific spatial priors and intensity likelihoods. The remaining 60 images were evaluated as test targets for segmenting 12 abdominal organs. The overlap between the true and the automatic segmentations was measured by the Dice similarity coefficient (DSC). A median improvement of 145% was achieved by integrating the GMM intensity likelihood with the target-specific spatial prior. The proposed framework opens opportunities for abdominal organ segmentation by efficiently using both the spatial and appearance information from the atlases, and creates a benchmark for large-scale automatic abdominal segmentation.
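The a posteriori framework combines, per voxel, an organ-wise Gaussian intensity likelihood with a registered spatial prior. A minimal sketch with hypothetical means, standard deviations, and prior probabilities (the actual framework uses GMMs per organ and priors built from registered atlas labels):

```python
import numpy as np

def posterior_segmentation(intensity, means, stds, spatial_prior):
    """Voxelwise MAP labeling: posterior ∝ Gaussian intensity likelihood
    per class × registered spatial prior (illustrative parameters)."""
    d = intensity[None, :] - np.array(means)[:, None]
    s = np.array(stds)[:, None]
    lik = np.exp(-0.5 * (d / s) ** 2) / s   # likelihood[k, v]
    post = lik * spatial_prior              # unnormalized posterior
    return post.argmax(axis=0)              # MAP label per voxel

intensity = np.array([35.0, 90.0, 88.0])    # three voxels
means, stds = [30.0, 85.0], [10.0, 10.0]    # background vs. organ intensity
prior = np.array([[0.9, 0.5, 0.1],          # P(background) per voxel
                  [0.1, 0.5, 0.9]])         # P(organ) per voxel
print(posterior_segmentation(intensity, means, stds, prior))  # [0 1 1]
```

Note how the middle voxel, where the prior is uninformative (0.5/0.5), is decided entirely by the intensity likelihood; the marginal contribution of the GMM is largest in exactly such regions.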
Spleen segmentation on clinically acquired CT data is a challenging problem given the complexity and variability of abdominal anatomy. Multi-atlas segmentation is a potential method for robust estimation of spleen segmentations, but can be negatively impacted by registration errors. Although labeled atlases explicitly capture information related to feasible organ shapes, multi-atlas methods have largely used this information implicitly through registration. We propose to integrate a level set shape model into the traditional label fusion framework to create a shape-constrained multi-atlas segmentation framework. Briefly, we (1) adapt two alternative atlas-to-target registrations to obtain loose bounds on the inner and outer boundaries of the spleen shape, (2) project the fusion estimate onto registered shape models, and (3) convert the projected shape into shape priors. With the constraint of the shape prior, our proposed method offers a statistically significant improvement in spleen labeling accuracy, with an increase in DSC of 0.06, a decrease in symmetric mean surface distance of 4.01 mm, and a decrease in symmetric Hausdorff surface distance of 23.21 mm when compared to a locally weighted vote (LWV) method.
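The level set representation underlying such a shape prior is a signed distance map, and loose inner/outer bounds can be taken as level sets of a registered shape. A sketch using SciPy's Euclidean distance transform (toy shape, hypothetical thresholds; not the paper's registration pipeline):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map of a binary mask (negative inside, positive
    outside) -- the level-set representation used for shape constraints."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

# Loose inner/outer bounds on the spleen: the final contour is constrained
# to lie between an eroded and a dilated version of a registered shape.
shape = np.zeros((9, 9), dtype=bool)
shape[3:6, 3:6] = True
phi = signed_distance(shape)
outer_bound = phi <= 1.5    # dilated shape (shape plus a one-voxel ring)
inner_bound = phi <= -1.5   # eroded shape (just the center voxel here)
print(int(shape.sum()), int(outer_bound.sum()), int(inner_bound.sum()))  # 9 25 1
```

Thresholding `phi` at different levels is equivalent to morphological erosion/dilation with a Euclidean structuring element, which is what makes the signed distance map a convenient carrier for "loose bounds" on feasible shapes.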
Ventral hernias (VHs) are abnormal openings in the anterior abdominal wall that are common side effects of surgical intervention. Repair of VHs is the most commonly performed procedure by general surgeons worldwide, but VH repair outcomes are not particularly encouraging (with recurrence rates of up to 43%). A variety of open and laparoscopic techniques are available for hernia repair, and the specific technique used is ultimately driven by surgeon preference and experience. Despite routine acquisition of computed tomography (CT) for VH patients, little quantitative information is available on which to guide selection of a particular approach and/or optimize patient-specific treatment. From anecdotal interviews, the success of VH repair procedures correlates with hernia size, location, and involvement of secondary structures. Herein, we propose an image labeling protocol to segment the anterior abdominal area to provide a geometric basis with which to derive biomarkers and evaluate treatment efficacy. Based on routine clinical CT data, we are able to identify the inner and outer surfaces of the abdominal walls and the herniated volume. This is the first formal presentation of a protocol to quantify these structures on abdominal CT. The intra- and inter-rater reproducibilities of this protocol are evaluated on 4 patients with suspected VH (3 patients were ultimately diagnosed with VH while 1 was not). Mean surface distances of less than 2 mm were achieved for all structures.
KEYWORDS: Virtual reality, Computed tomography, Visualization, 3D modeling, Surgery, Data modeling, Medical imaging, Head-mounted displays, 3D displays, 3D image processing
Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure, in which images are viewed as stacks of two-dimensional slices or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical applications. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.
The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is
fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible
mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical
judgments; notably, quantitative metrics based on image-processing are not used. We propose that image segmentation
methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation
on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. To
date, automated segmentation algorithms have not been presented to quantify the abdominal wall and potential hernias.
In this pilot study with four clinically acquired CT scans on post-operative patients, we demonstrate a novel approach to
geometric classification of the abdominal wall and essential abdominal features (including bony landmarks and skin
surfaces). Our approach uses a hierarchical design in which the abdominal wall is isolated in the context of the skin and
bony structures using level set methods. All segmentation results were quantitatively validated with surface errors based
on manually labeled ground truth. Mean surface errors for the outer surface of the abdominal wall were less than 2 mm.
This approach establishes a baseline for characterizing the abdominal wall for improving VH care.
Malignant gliomas are the most common form of primary neoplasm in the central nervous system, and one of the
most rapidly fatal of all human malignancies. They are treated by maximal surgical resection followed by radiation
and chemotherapy. Herein, we seek to improve the methods available to quantify the extent of tumors using newly
presented, collaborative labeling techniques on magnetic resonance imaging. Traditionally, labeling medical images
has entailed that expert raters operate on one image at a time, which is resource intensive and not practical for very
large datasets. Using many, minimally trained raters to label images has the possibility of minimizing laboratory
requirements and allowing high degrees of parallelism. A successful effort also has the possibility of reducing
overall cost. This potentially transformative technology presents a new set of problems, because one must pose the
labeling challenge in a manner accessible to people with little or no background in labeling medical images and
raters cannot be expected to read detailed instructions. Hence, a different training method has to be employed. The
training must appeal to all types of learners and have the same concepts presented in multiple ways to ensure that all
the subjects understand the basics of labeling. Our overall objective is to demonstrate the feasibility of studying
malignant glioma morphometry through statistical analysis of the collaborative efforts of many, minimally-trained
raters. This study presents preliminary results on optimization of the WebMILL framework for neoplasm labeling
and investigates the initial contributions of 78 raters labeling 98 whole-brain datasets.
Segmentation plays a critical role in exposing connections between biological structure and function. The process of
label fusion collects and combines multiple observations into a single estimate. Statistically driven techniques provide
mechanisms to optimally combine segmentations; yet, optimality hinges upon accurate modeling of rater behavior.
Traditional approaches, e.g., Majority Vote and Simultaneous Truth and Performance Level Estimation (STAPLE), have
been shown to yield excellent performance in some cases, but do not account for spatial dependences of rater
performance (i.e., regional task difficulty). Recently, the COnsensus Level, Labeler Accuracy and Truth Estimation
(COLLATE) label fusion technique augmented the seminal STAPLE approach to simultaneously estimate regions of
relative consensus versus confusion along with rater performance. Herein, we extend the COLLATE framework to
account for multiple consensus levels. Toward this end, we posit a generalized model of rater behavior of which
Majority Vote, STAPLE, STAPLE Ignoring Consensus Voxels, and COLLATE are special cases. The new algorithm is
evaluated with simulations and shown to yield improved performance in cases with complex regional difficulty. Multi-COLLATE achieves these results by capturing the different consensus levels. The potential impacts and applications of the generative model to label fusion problems are discussed.
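The motivation for modeling regional task difficulty can be illustrated by simulating raters whose accuracy drops inside a "confusion" region: naive fusion degrades exactly there. A toy simulation with hypothetical accuracy values (not the COLLATE estimator itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rater(truth, p_consensus, p_confusion, confusion_mask):
    """Simulate a rater whose per-voxel accuracy depends on regional
    difficulty: labels flip with probability 1-p, and p is lower inside
    the confusion region (the spatially varying behavior COLLATE models)."""
    p = np.where(confusion_mask, p_confusion, p_consensus)
    flip = rng.random(truth.shape) >= p
    return np.where(flip, 1 - truth, truth)

truth = rng.integers(0, 2, size=10000)
confusion = np.zeros(10000, dtype=bool)
confusion[:3000] = True                      # 30% of voxels are "hard"
raters = [simulate_rater(truth, 0.95, 0.6, confusion) for _ in range(7)]
fused = (np.mean(raters, axis=0) >= 0.5).astype(int)
acc_easy = (fused == truth)[~confusion].mean()
acc_hard = (fused == truth)[confusion].mean()
print(acc_easy > acc_hard)  # True: fusion accuracy drops in the hard region
```

A fusion model with a single global performance parameter per rater cannot represent this split, which is precisely the gap the multi-level consensus model addresses.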
We report a method for improving the sensitivity of label-free optical biosensors based on in-situ synthesis of DNA probes within porous silicon structures. The stepwise attachment of up to 15-mer probes inside 30 nm mesopores was accomplished through a series of phosphoramidite reactions. In this work, a porous silicon waveguide was utilized as the sensor structure. Synthesis of the DNA probes, as well as sensing of target DNA, was verified by monitoring the change in effective refractive index of the porous silicon waveguide through angle-resolved attenuated total reflectance measurements. The average resonance shift per oligo of 0.091° during stepwise synthesis corresponds to surface coverage slightly less than 50%, according to theoretical models. When compared with the traditional method of direct attachment of pre-synthesized oligonucleotide probes, the sequential phosphoramidite method resulted in an approximately four-fold increase in DNA probe attachment. This increased surface coverage by DNA probes increases the likelihood of target molecule binding, leading to improved sensitivity for biomolecule detection. Exposure to a 50 μM solution of target 8-base DNA in deionized water produced a 0.4236° change in the waveguide resonance angle. Nanomolar detection limits for small molecule sensing are realizable with this sensor scheme.