Background. Prostate segmentation is a crucial step in computer-aided systems for prostate cancer detection. Multi-planar acquisitions are commonly used by clinicians to obtain a more accurate patient diagnosis, but their relevance for fully automated prostate segmentation algorithms has not been assessed. To date, this limited assessment stems from the fact that acquiring both axial and sagittal prostate imaging views, as opposed to a single view, doubles the acquisition time. In this work, we assess the relevance of multi-planar imaging for prostate segmentation within a deep learning segmentation framework.
Materials and Methods. We propose a deep learning prostate segmentation framework that operates either on axial T2-weighted magnetic resonance images (MRI) alone or on both axial and sagittal T2-weighted MRI. The system is based on an ensemble of convolutional neural networks, each independently trained on a single imaging view. We compare single-view (axial) segmentations with those obtained from two imaging views (axial and sagittal) to assess the relevance of using multi-planar acquisitions. Algorithm performance was assessed in two ways: 1) the global Dice score between the algorithm's predictions and the segmentations of an experienced reader, and 2) the number of lesions located within the algorithm's predicted segmentation. A subset of 80 patients from the public PROSTATEx-2 database containing both axial and sagittal T2-weighted MRI was used for this study.
Results. The multi-planar network outperformed the network trained only on axial views according to both proposed metrics: a statistically significant increase of 4% in Dice score was found, along with a 9% increase in the number of lesions within the predicted segmentation.
Conclusions. The proposed method allows for a fully automatic segmentation of the prostate from single- or multi-view MRI and assesses the relevance of multi-planar MRI acquisitions for fully automatic prostate segmentation algorithms.
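As an illustration of the two evaluation metrics above, the sketch below (Python; function and variable names are illustrative, and the point-in-mask rule for counting lesions is an assumption, since the paper's exact inclusion criterion is not given here) shows one common way to compute a Dice overlap and to count annotated lesions falling inside a predicted mask.

import numpy as np

def dice_score(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice overlap between a predicted and a reference binary mask."""
    pred, ref = pred_mask.astype(bool), ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

def lesions_within_prediction(pred_mask: np.ndarray, lesion_voxels: list) -> int:
    """Count lesions whose annotated voxel coordinate lies inside the
    predicted mask (assumed criterion for 'lesion within segmentation')."""
    return sum(int(pred_mask[tuple(v)]) for v in lesion_voxels)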
Background The extraction and analysis of image features (radiomics) is a promising field in the precision medicine era,
with applications to prognosis, prediction, and quantification of response to treatment. In this work, we present a mutual
information-based method for quantifying the reproducibility of features, a necessary qualification step before their
inclusion in big data systems.
Materials and Methods Ten patients with Non-Small Cell Lung Cancer (NSCLC) lesions were followed over time (7
time points on average) with Computed Tomography (CT). Five observers segmented the lesions using a semi-automatic
method, and 27 features describing shape and intensity distribution were extracted. Inter-observer reproducibility was
assessed by computing the multi-information (MI) of feature changes over time and the variability of global extrema.
Results The highest MI values were obtained for volume-based features (VBF). The lesion mass (M), surface-to-volume
ratio (SVR), and volume (V) presented statistically significantly higher MI values than the rest of the features. Within the
same VBF group, SVR also showed the lowest variability of extrema. The correlation coefficient (CC) of feature values
was unable to discriminate between features.
Conclusions MI discriminated three features (M, SVR, and V) from the rest in a statistically significant manner.
This result is consistent with the ordering obtained when sorting features by increasing extrema variability. MI is a
promising alternative for selecting features to be considered as surrogate biomarkers in a precision medicine context.
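As a rough illustration of the reproducibility measure used above, the sketch below (Python; the quantile-binning scheme and function names are assumptions, not the paper's exact estimator) computes a plug-in multi-information, i.e. the sum of the observers' marginal entropies minus their joint entropy, after discretizing each observer's feature-change series.

import numpy as np

def plug_in_entropy(labels: np.ndarray) -> float:
    """Plug-in Shannon entropy (nats) of discrete labels (rows = samples)."""
    _, counts = np.unique(labels, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def multi_information(series: np.ndarray, n_bins: int = 3) -> float:
    """Multi-information (total correlation) of K observers' feature-change
    series, shape (K, T): sum of marginal entropies minus the joint entropy,
    after quantile-binning each series into n_bins levels."""
    edges = [np.quantile(s, np.linspace(0, 1, n_bins + 1)[1:-1]) for s in series]
    binned = np.array([np.digitize(s, e) for s, e in zip(series, edges)])
    joint = plug_in_entropy(binned.T)            # entropy of the K-tuples over time
    marginals = sum(plug_in_entropy(b) for b in binned)
    return marginals - joint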
This paper combines different parallelization strategies for speeding up motion and deformation computation by
non-rigid registration of a sequence of images. The registration is accelerated at two levels:
(1) parallelization of each registration process using MPI and/or threads, and (2) distribution of the sequential
registrations over a cluster.
On a 24-node cluster of double quad-core Intel Xeon machines (2.66 GHz CPU, 16 GB RAM), the method is demonstrated
to efficiently compute the deformation of a cardiac sequence, reducing the computation time from more than 3
hours to a couple of minutes (for downsampled images). It is shown that the distribution of the sequential
registrations over the cluster, together with the parallelization of each pairwise registration by multithreading,
lowers the computation time towards values compatible with clinical requirements (a few minutes per patient).
The combination of MPI and multithreading is only advantageous for large input data sizes.
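A minimal sketch of the second level (distributing the pairwise registrations of a sequence over cluster nodes) is given below, using Python and mpi4py purely for illustration; the registration engine and the names register_pair and frame_paths are not taken from the paper, and each call to register_pair would internally use the first level of parallelism (MPI and/or threads).

# Sketch only: round-robin distribution of the pairwise registrations of an
# image sequence over MPI ranks (the "distribution over a cluster" level).
from mpi4py import MPI

def register_pair(fixed_path, moving_path, n_threads=4):
    # Placeholder: here a multithreaded non-rigid registration of the two
    # frames would run (first level of parallelism). This stub only reports
    # which rank handles which pair.
    print(f"rank {MPI.COMM_WORLD.Get_rank()}: registering {moving_path} -> {fixed_path}")

def register_sequence(frame_paths):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    pairs = list(zip(frame_paths[:-1], frame_paths[1:]))   # consecutive frames
    for i, (fixed, moving) in enumerate(pairs):
        if i % size == rank:                                # deal pairs round-robin
            register_pair(fixed, moving)
    comm.Barrier()                                          # synchronize all ranks

if __name__ == "__main__":
    register_sequence([f"frame_{t:02d}.mhd" for t in range(30)])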
Performance is assessed for the specific scenario of aligning cardiac sequences of tagged Magnetic Resonance
(tMR) images, with the aim of comparing strain in healthy subjects and hypertrophic cardiomyopathy (HCM)
patients. In particular, we compared the distribution of systolic strain in both populations. On average, HCM
patients showed lower strain values with a larger deviation, due to the coexistence of regions with
impaired deformation and regions with normal deformation.
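For reference, one standard way to quantify strain from a displacement field u estimated by registration (not necessarily the exact formulation used here) is via the deformation gradient and the Green-Lagrange strain tensor,

F = I + \nabla u, \qquad E = \tfrac{1}{2}\bigl(F^{\mathsf{T}} F - I\bigr),

where the strain along a unit direction d (e.g. circumferential) is d^{\mathsf{T}} E\, d.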
In the present paper we describe the automatic construction of a statistical shape model of the whole heart built
from a training set of 100 Multi-Slice Computed Tomography (MSCT) studies of pathologic and asymptomatic
patients, including 15 (temporal) cardiac phases each. With these data sets we were able to build a compact
and representative shape model of both inter-subject and temporal variability. A practical limitation in building
statistical shape models, and in particular point distribution models (PDM), is the manual delineation of the
training set. A key advantage of the proposed method is that it does not require manual delineations, thus
overcoming this limitation. Another advantage is the use of MSCT images, which, thanks to their excellent
anatomical depiction, allow for a realistic heart representation including the four chambers and connected
vasculature. The generalization
ability of the shape model permits its deformation to unseen anatomies with an acceptable accuracy. Moreover,
its compactness allows for having a reduced set of parameters to describe the modeled population. By varying
these parameters, the statistical model can generate a set of valid examples. This is especially useful for the
generation of synthetic populations of cardiac shapes, which may correspond, e.g., to healthy or diseased cases.
Finally, an illustrative example of the use of the constructed shape model for cardiac segmentation is provided.
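As a generic illustration of the point distribution model machinery described above (a sketch, not the paper's implementation; the landmarks are assumed to already be in correspondence and aligned, which in the paper is obtained automatically), the following shows how PCA yields a compact parameterization and how varying the parameters generates new valid shapes.

import numpy as np

def build_pdm(shapes: np.ndarray, variance_kept: float = 0.95):
    """Point distribution model from pre-aligned shapes.
    shapes: (n_samples, 3 * n_landmarks), one flattened landmark set per row."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    _, s, Vt = np.linalg.svd(X, full_matrices=False)        # PCA via SVD
    var = s ** 2 / (shapes.shape[0] - 1)                     # mode variances
    ratio = np.cumsum(var) / var.sum()
    n_modes = int(np.searchsorted(ratio, variance_kept)) + 1
    return mean, Vt[:n_modes].T, var[:n_modes]               # mean, modes P, variances

def sample_shape(mean, P, var, rng=None):
    """Generate a plausible shape x = mean + P b, constraining each b_i
    to +/- 3 standard deviations of its mode."""
    rng = np.random.default_rng() if rng is None else rng
    b = np.clip(rng.normal(scale=np.sqrt(var)), -3 * np.sqrt(var), 3 * np.sqrt(var))
    return mean + P @ b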
Hemodynamics, and in particular Wall Shear Stress (WSS), is thought to play a critical role in the progression
and rupture of intracranial aneurysms. Wall motion is related to local biomechanical properties of the aneurysm,
which in turn are associated with the amount of damage undergone by the tissue. The underlying hypothesis
in this work is that injured regions show differential motion with respect to normal ones, allowing a connection
between local wall biomechanics and a potential mechanism of wall injury such as elevated WSS. In a previous
work, a novel method was presented combining wall motion estimation using image registration techniques with
Computational Fluid Dynamics (CFD) simulations in order to provide realistic intra-aneurysmal flow patterns.
It was shown that, when compared to compliant vessels, rigid models tend to overestimate WSS and produce
smaller areas of elevated WSS and force concentration, with the observed differences being related to the magnitude
of the displacements. This work aims to further study the relationships between wall motion, flow patterns and
risk of rupture in aneurysms. To this end, four patient studies containing both 3DRA and DSA images were analyzed,
and an improved version of the method developed previously was applied to cases showing wall motion. A
quantification and analysis of the displacement fields and their relationships to flow patterns are presented. This
relationship may play an important role in understanding the interaction between hemodynamics and wall
biomechanics, and its effect on aneurysm evolution.
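For context, the wall shear stress extracted from CFD simulations is conventionally defined as the product of the dynamic viscosity and the wall-normal gradient of the tangential velocity at the wall (standard definition; solver-specific details of the cited work are not reproduced here):

\tau_w = \mu \left.\frac{\partial u_t}{\partial n}\right|_{\text{wall}},

with \mu the dynamic viscosity of blood, u_t the velocity component tangential to the wall, and n the wall-normal direction.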
Crouzon syndrome is characterised by premature fusion of cranial sutures and synchondroses leading to craniofacial
growth disturbances. The gene causing the syndrome was discovered approximately a decade ago, and
recently the first mouse model of the syndrome was generated. In this study, a set of Micro-CT scans of the heads
of wild-type (normal) mice and Crouzon mice was investigated. Statistical deformation models were built to
assess the anatomical differences between the groups, as well as the within-group anatomical variation. Following
the approach of Rueckert et al., we built an atlas using B-spline-based non-rigid registration; subsequently,
the atlas was non-rigidly registered to the cases being modelled. The parameters of these registrations were then
used as input to a principal component analysis (PCA). Using different sets of registration parameters, different
models were constructed to describe (i) the anatomical differences between the two groups and (ii) the within-group variation.
These models confirmed many known traits in the wild-type and Crouzon mouse craniofacial anatomy. However,
they also showed some new traits.
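A minimal sketch of the modelling step, under the assumption that each mouse is represented by the flattened vector of B-spline control-point displacements obtained from the atlas-to-subject registration (variable names here are illustrative, not from the study):

import numpy as np
from sklearn.decomposition import PCA

def statistical_deformation_model(params_wildtype, params_crouzon, n_modes=5):
    """PCA on stacked registration parameters (one row per mouse); the
    projections onto the leading modes can then be compared between the
    wild-type and Crouzon groups."""
    X = np.vstack([params_wildtype, params_crouzon])
    pca = PCA(n_components=n_modes).fit(X)
    return pca, pca.transform(params_wildtype), pca.transform(params_crouzon)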
Tagged Magnetic Resonance Imaging (MRI) is currently the reference modality for myocardial motion and strain analysis. Mutual Information (MI) based non-rigid registration has proven to be an accurate method to retrieve cardiac motion and to overcome many drawbacks present in previous approaches. In a previous work [1], we used Wavelet-based Attribute Vectors (WAVs) instead of pixel intensity to measure similarity between frames. Since the curse of dimensionality forbids the use of histograms to estimate the MI of high-dimensional features, k-Nearest Neighbor Graphs (kNNG) were applied to calculate α-MI. Results showed that cardiac motion estimation was feasible with that approach. In this paper, the K-Means clustering method is applied to compute MI from the same set of WAVs. The proposed method was applied to four tagged MRI sequences, and the resulting displacements were compared with manual measurements made by two observers. Results show that more accurate motion estimation is obtained compared with the use of pixel intensity.
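The sketch below illustrates one way such a clustering-based MI estimate can be formed (a generic plug-in estimator over K-Means labels, offered only as an illustration and not claimed to be the paper's estimator): each set of attribute vectors is quantized with K-Means, and MI is computed from the joint histogram of cluster labels.

import numpy as np
from sklearn.cluster import KMeans

def clustered_mi(feats_fixed, feats_moving, n_clusters=32, seed=0):
    """MI (nats) between two sets of per-pixel attribute vectors, each of
    shape (n_pixels, d), via K-Means quantization and a joint label histogram."""
    la = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(feats_fixed)
    lb = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(feats_moving)
    joint = np.zeros((n_clusters, n_clusters))
    np.add.at(joint, (la, lb), 1)                 # joint counts of label pairs
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())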
The use of affine image registration based on normalized mutual information (NMI) has recently been proposed by Frangi et al. as an automatic method for assessing brachial artery flow-mediated dilation (FMD) for the characterization of endothelial function. Even though this method solves many problems of previous approaches, there are still some situations that can lead to misregistration between frames, such as the presence of adjacent vessels due to probe movement, muscle fibres, or poor image quality. Despite its widespread use as a registration metric and its promising results, MI is not a panacea and can occasionally fail. Previous work has attempted to include spatial information in the image similarity metric. Among these methods, the direct estimation of α-MI through Minimum Euclidean Graphs allows the inclusion of spatial information and seems suitable for tackling the registration problem in vascular images, where well-oriented structures corresponding to vessel walls and muscle fibres are present. The purpose of this work is twofold. Firstly, we aim to evaluate the effect of including spatial information on the performance of the method suggested by Frangi et al. by using the α-MI of spatial features as similarity metric. Secondly, the application of image registration to long image sequences in which both rigid motion and deformation are present will be used as a benchmark to prove the value of α-MI as a similarity metric, and will also allow us to make a comparative study with respect to NMI.
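For reference, the two similarity metrics compared in this work are commonly defined as (standard definitions, given here for completeness):

\mathrm{NMI}(A,B) = \frac{H(A) + H(B)}{H(A,B)}, \qquad I_\alpha(X;Y) = \frac{1}{\alpha - 1}\,\log \iint p^{\alpha}(x,y)\,\bigl(p(x)\,p(y)\bigr)^{1-\alpha}\,dx\,dy,

where H denotes (joint) Shannon entropy; α-MI reduces to the usual mutual information as α → 1 and can be estimated directly from samples with entropic graphs such as minimum Euclidean or spanning graphs.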