Accurate and automated segmentation of the prostate whole gland and central gland on MR images is essential for any prostate cancer diagnosis system. Our work presents a 2-D orthogonal deep learning method to automatically segment the whole prostate and central gland from T2-weighted, axial-only MR images. The proposed method can generate high-density 3-D surfaces from MR images with low resolution along the z axis. Most previous methods have focused on axial images alone, segmenting the prostate slice by slice in 2-D; such methods tend to over- or under-segment the prostate at the apex and base, which is a major source of error. The proposed method leverages orthogonal context to effectively reduce apex and base segmentation ambiguities. It also avoids the jittering and stair-step surface artifacts that arise when a 3-D surface is constructed from 2-D segmentations or from direct 3-D segmentation approaches such as 3-D U-Net. The experimental results demonstrate that the proposed method achieves a Dice similarity coefficient (DSC) of 92.4% ± 3% for the whole prostate and 90.1% ± 4.6% for the central gland, without trimming any ending contours at the apex and base. The experiments illustrate the feasibility and robustness of the 2-D holistically nested networks with short connections for MR prostate and central gland segmentation. The proposed method achieves segmentation results on par with the current literature.
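As a rough illustration of the orthogonal idea (not the authors' exact pipeline), the sketch below fuses slice-wise probability maps predicted along the three orthogonal planes into a single 3-D mask; the per-plane predictors are hypothetical placeholders.

```python
# Minimal sketch of orthogonal 2-D fusion, assuming three slice-wise CNNs
# (predict_axial, predict_sagittal, predict_coronal are hypothetical functions
# returning per-slice probability maps). Illustrative only.
import numpy as np

def fuse_orthogonal(volume, predict_axial, predict_sagittal, predict_coronal):
    """volume: 3-D array (z, y, x); returns a fused binary segmentation."""
    prob = np.zeros((3,) + volume.shape, dtype=np.float32)

    # Axial: predict slice by slice along z.
    for z in range(volume.shape[0]):
        prob[0, z] = predict_axial(volume[z])

    # Sagittal: reslice along x, predict, and write the result back in place.
    for x in range(volume.shape[2]):
        prob[1, :, :, x] = predict_sagittal(volume[:, :, x])

    # Coronal: reslice along y.
    for y in range(volume.shape[1]):
        prob[2, :, y, :] = predict_coronal(volume[:, y, :])

    # Average the three orthogonal probability volumes and threshold.
    fused = prob.mean(axis=0)
    return fused > 0.5
```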
Accurate automatic segmentation of the prostate in magnetic resonance images (MRI) is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise and the similar signal intensity of tissues around the prostate boundary inhibit traditional segmentation methods from achieving high accuracy. We investigate both patch-based and holistic (image-to-image) deep-learning methods for segmentation of the prostate. First, we introduce a patch-based convolutional network that refines the prostate contour from an initial estimate. Second, we propose a method for end-to-end prostate segmentation by integrating holistically nested edge detection with fully convolutional networks. Holistically nested networks (HNN) automatically learn a hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 250 patients in fivefold cross-validation. The proposed enhanced HNN model achieves a mean ± standard deviation Dice similarity coefficient (DSC) of 89.77% ± 3.29% and a mean Jaccard similarity coefficient (IoU) of 81.59% ± 5.18%, computed without trimming any end slices. The proposed holistic model significantly (p < 0.001) outperforms a patch-based AlexNet model by 9% in DSC and 13% in IoU. Overall, the method achieves state-of-the-art performance compared with other MRI prostate segmentation methods in the literature.
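For reference, the two reported overlap metrics can be computed from binary masks as in the minimal sketch below (plain NumPy, assuming `pred` and `truth` are boolean volumes of equal shape).

```python
# Minimal sketch of the Dice and Jaccard (IoU) overlap metrics for binary masks.
import numpy as np

def dice(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def jaccard(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union
```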
Accurate automatic prostate magnetic resonance image (MRI) segmentation is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise and tissues with similar signal intensity around the prostate boundary inhibit traditional segmentation methods from achieving high accuracy. The proposed method performs end-to-end segmentation by integrating holistically nested edge detection with fully convolutional neural networks. Holistically nested networks (HNN) automatically learn a hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 247 patients in 5-fold cross-validation. We achieve a mean Dice similarity coefficient of 88.70% and a mean Jaccard similarity coefficient of 80.29% without trimming any erroneous contours at the apex and base.
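A minimal PyTorch sketch of the HED-style idea behind the HNN follows: deeply supervised side outputs taken at several scales, upsampled to the input resolution, and fused by a 1x1 convolution. Layer counts and channel widths are illustrative, not the configuration used in the paper.

```python
# Illustrative HED-style network: per-stage side outputs, upsampled and fused.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        # One 1x1 side-output head per stage, plus a fusion layer over all sides.
        self.side = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])
        self.fuse = nn.Conv2d(3, 1, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats, f = [], x
        for stage in (self.stage1, self.stage2, self.stage3):
            f = stage(f)
            feats.append(f)
        # Upsample each side output back to the input resolution.
        sides = [F.interpolate(head(f), size=(h, w), mode='bilinear', align_corners=False)
                 for head, f in zip(self.side, feats)]
        fused = self.fuse(torch.cat(sides, dim=1))
        # Deep supervision: a loss is applied to every side output and to the fused map.
        return [torch.sigmoid(s) for s in sides] + [torch.sigmoid(fused)]
```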
This paper presents an automatic segmentation methodology for the patellar bone, based on 3D gradient-recalled-echo and gradient-recalled-echo with fat-suppression magnetic resonance images. Constricted search-space outlines are incorporated into recursive ray-tracing to segment the outer cortical bone. A statistical analysis based on the dependence of information in adjacent slices is used to limit the search in each image to a region between an outer and an inner search boundary. A section-based recursive ray-tracing mechanism is used to skip inner noise regions and detect the edge boundary. The proposed method achieves higher segmentation accuracy (0.23 mm) than current state-of-the-art methods, with an average Dice similarity coefficient of 96.0% (SD 1.3%) between the automatic segmentation and ground-truth surfaces.
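The sketch below illustrates the constrained ray-search idea in simplified form: rays cast from a seed point look for the first strong edge only within an annulus between inner and outer radii, which stand in for the statistically derived search regions. It is not the recursive, section-based algorithm itself, and the threshold and step values are hypothetical.

```python
# Simplified ray-based boundary search restricted to an inner/outer search region.
import numpy as np

def trace_boundary(slice_img, seed, r_inner, r_outer, n_rays=180, grad_thresh=50.0):
    cy, cx = seed
    boundary = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dy, dx = np.sin(theta), np.cos(theta)
        prev = None
        for r in np.arange(r_inner, r_outer, 0.5):   # search only within the annulus
            y = int(round(cy + r * dy))
            x = int(round(cx + r * dx))
            if not (0 <= y < slice_img.shape[0] and 0 <= x < slice_img.shape[1]):
                break
            val = float(slice_img[y, x])
            if prev is not None and abs(val - prev) > grad_thresh:
                boundary.append((y, x))               # first strong edge along this ray
                break
            prev = val
    return boundary
```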
KEYWORDS: Image segmentation, 3D modeling, Prostate, Magnetic resonance imaging, Machine learning, 3D image processing, Pattern recognition, Image analysis, Data modeling, Principal component analysis, Statistical modeling, Cancer
Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient variability in prostate shape and texture, and the lack of a clear prostate boundary, specifically at the apex and base levels. We propose a supervised machine learning model that combines an atlas-based Active Appearance Model (AAM) with a deep learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and deep learning achieves a mean Dice similarity coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model leverages the adaptive atlas-based AAM and deep learning to achieve high segmentation accuracy.
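One simple way to combine an atlas/AAM-derived shape prior with a deep-learning probability map is a per-voxel weighted fusion, as in the hypothetical sketch below; the combination rule and weight are assumptions for illustration, not the paper's model.

```python
# Hypothetical fusion of an AAM/atlas shape prior with a CNN probability map.
import numpy as np

def combine_aam_and_cnn(aam_prior, cnn_prob, weight=0.5, threshold=0.5):
    """aam_prior: per-voxel prior in [0, 1] from the fitted AAM/atlas.
    cnn_prob:  per-voxel probability from the deep-learning model."""
    fused = weight * aam_prior + (1.0 - weight) * cnn_prob
    return fused > threshold
```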
This work extends the multi-histogram volume rendering framework proposed by Kniss et al. [1] to provide rendering results based on overlaid triangles on a graph of image intensity versus gradient magnitude. The developed volume rendering method allows greater emphasis on boundary visualization while avoiding issues common in medical image acquisition. For example, partial-volume effects in computed tomography and intensity inhomogeneity of similar tissue types in magnetic resonance imaging introduce pixel values that do not reflect differing tissue types when a standard transfer function is applied to an intensity histogram. The new framework improves upon the Kniss multi-histogram framework by using Java, the GPU, and MIPAV, an open-source medical image processing application, to allow multi-histogram techniques to be widely disseminated; the OpenGL view-aligned texture rendering approach suffered from performance setbacks, inaccessibility, and usability problems. Rendering results can now be interactively compared with other rendering frameworks, surfaces can be extracted for use in other programs, and file formats widely used in biomedical imaging can be visualized with this multi-histogram approach. OpenCL and GLSL are used to implement the new approach, leveraging texture memory on the graphics processing unit of desktop computers to provide an interactive method for visualizing biomedical images. Performance results for this method are generated and qualitative rendering results are compared. The resulting framework provides the opportunity for further applications in medical imaging, both in volume rendering and in generic image processing.
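The core multi-histogram idea can be sketched on the CPU as a 2-D classification over (intensity, gradient magnitude); the triangular opacity function below is a stand-in for the interactive Kniss-style widgets, which the actual framework evaluates on the GPU via OpenCL and GLSL. All parameter values are illustrative.

```python
# 2-D transfer function sketch: opacity from (intensity, gradient magnitude).
import numpy as np

def gradient_magnitude(vol):
    gz, gy, gx = np.gradient(vol.astype(np.float32))
    return np.sqrt(gx**2 + gy**2 + gz**2)

def triangle_opacity(intensity, grad_mag, center, width, max_grad):
    """Opacity is highest near boundaries: large gradient magnitude, intensity
    near `center`, with the accepted intensity range widening as gradient grows
    (the triangular footprint in the intensity/gradient-magnitude plot)."""
    half_width = width * np.clip(grad_mag / max_grad, 0.0, 1.0)
    inside = np.abs(intensity - center) <= half_width
    return np.where(inside, np.clip(grad_mag / max_grad, 0.0, 1.0), 0.0)

# Example usage on a CT volume `vol` (values are hypothetical):
# opacity = triangle_opacity(vol, gradient_magnitude(vol), center=300.0, width=80.0, max_grad=500.0)
```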
Accurate segmentation of prostate magnetic resonance images (MRI) is a challenging task due to the variable anatomical structure of the prostate. In this work, two semi-automatic techniques for segmentation of T2-weighted MRI images of the prostate are presented. Both models are based on 2D registration that deforms the contour to fit the prostate boundary between adjacent slices. The first model relies entirely on registration to segment the prostate. The second model applies Fuzzy C-means and morphology filters on top of the registration in order to refine the prostate boundary. Key to the success of the two models is careful initialization of the prostate contours, which requires specifying three Volume of Interest (VOI) contours, one on each of the axial, sagittal, and coronal images; a fully automatic segmentation algorithm then generates the final result from the three images. The algorithm's performance is evaluated on 45 MR image datasets. VOI volume, 3D surface volume, and VOI boundary masks are used to quantify the segmentation accuracy between the semi-automatic and expert manual segmentations. Both models achieve an average segmentation accuracy of 90%. The proposed registration-guided segmentation model generalizes to a wide range of T2-weighted MRI prostate images.
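A rough sketch of the slice-to-slice propagation idea follows, assuming a rigid 2-D registration (phase correlation) as a stand-in for the deformable registration and simple morphology in place of the Fuzzy C-means refinement used in the second model.

```python
# Propagate a prostate mask from one axial slice to the next, then clean it up.
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def propagate_mask(prev_slice, next_slice, prev_mask):
    # Estimate the in-plane shift between the neighbouring slices.
    shift, _, _ = phase_cross_correlation(prev_slice, next_slice)
    # Move the previous mask into the frame of the next slice.
    moved = ndimage.shift(prev_mask.astype(float), -shift, order=0) > 0.5
    # Morphological clean-up of the propagated boundary.
    moved = ndimage.binary_closing(moved, iterations=2)
    moved = ndimage.binary_opening(moved, iterations=1)
    return moved
```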
In recent years, the number and utility of 3-D rendering frameworks have grown substantially. A quantitative and qualitative evaluation of the capabilities of a subset of these systems is important to determine the applicability of these methods to typical medical visualization tasks. The libraries evaluated in this paper include the Java3D Application Programming Interface (API), the Java OpenGL (Jogl) API, a multi-histogram software-based rendering method, and the WildMagic API. Volume renderer implementations using each of these frameworks were developed using the platform-independent Java programming language. Quantitative performance measurements (frames per second, memory usage) were used to evaluate the strengths and weaknesses of each implementation.
In Radio Frequency Ablation (RFA) procedures, hepatic tumor tissue is heated to a temperature at which necrosis is ensured. Unfortunately, recent results suggest that heating tumor tissue to necrosis is complicated because nearby major blood vessels provide a cooling effect. It is therefore fundamentally important for physicians to perform a careful analysis of the spatial relationship of diseased tissue to the larger liver blood vessels. The liver contains many of these large vessels, which affect the RFA ablation shape and size. Many sophisticated vasculature detection and segmentation techniques reported in the literature identify continuous vessels as the diameter changes and the vessel passes through many bifurcation levels. However, only the larger blood vessels near the treatment area are required for proper RFA treatment plan formulation and analysis. With physician guidance and interaction, our system can segment those vessels most likely to affect the RFA ablations. We have found that our system provides the physician with the therapeutic, geometric, and spatial information necessary to accurately plan treatment of tumors near large blood vessels. The segmented liver vessels near the treatment region are also necessary for computing the isolevel heating profiles used to evaluate different proposed treatment configurations.
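As an illustration of physician-seeded vessel extraction near the planned ablation site (not the system's actual algorithm), the sketch below grows a region from a user-selected seed, restricted to an intensity window and to a radius around the treatment region; all thresholds and limits are hypothetical.

```python
# Seeded region growing limited to an intensity window and a treatment-region radius.
import numpy as np
from collections import deque

def grow_vessel(image, seed, lo, hi, center, max_radius_vox):
    mask = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    center = np.asarray(center, dtype=float)
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x]:
            continue
        if not (lo <= image[z, y, x] <= hi):
            continue
        if np.linalg.norm(np.array([z, y, x], dtype=float) - center) > max_radius_vox:
            continue  # only vessels near the treatment region matter for planning
        mask[z, y, x] = True
        for dz, dy, dx in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < image.shape[0] and 0 <= ny < image.shape[1]
                    and 0 <= nx < image.shape[2] and not mask[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return mask
```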
It is fundamentally important that all cancerous cells be adequately destroyed during Radio Frequency Ablation (RFA) procedures. To help achieve this goal, probe manufacturers advise physicians to increase the treatment region by one centimeter (1 cm) in all directions around the diseased tissue. This enlarged treatment region provides a buffer to ensure that cancer cells that have migrated into the surrounding tissue are adequately treated and necrose. Even though RFA is a minimally invasive, image-guided procedure, it is difficult for physicians to confidently follow the specified treatment protocol. In this paper we visually assess an RFA treatment by comparing a registered image set containing the untreated tumor, including the 1 cm safety boundary, to an image set containing the treated region acquired one month after surgery. For this study, we used computed tomography images, as both the tumor and the treated region are visible. To align the image sets of the abdomen, we investigate three different registration techniques: an affine transform that minimizes the correlation ratio, a point (or landmark) based 3D thin-plate spline approach, and a nonlinear B-spline elastic registration methodology. We found the affine registration technique simple and easy to use because it is fully automatic; unfortunately, this method resulted in the largest visible discrepancy between the liver in the fused images. The thin-plate spline technique required the physician to identify corresponding landmarks in both image sets, but resulted in better visual accuracy in the fused images. Finally, the nonlinear B-spline elastic registration technique used the registration results of the thin-plate spline method as a starting point and required a significant amount of computation to determine its transformation, but provided the most visually accurate fused image set.
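A hedged SimpleITK sketch of the two fully automatic stages follows, chained for illustration only: the paper initializes the B-spline stage from the thin-plate-spline result, and the correlation-ratio metric is not exposed by SimpleITK, so Mattes mutual information is used here as a stand-in; mesh size and optimizer settings are illustrative.

```python
# Illustrative two-stage registration (affine, then B-spline) with SimpleITK.
# `fixed` and `moving` are assumed to be float-valued sitk.Image volumes.
import SimpleITK as sitk

def register(fixed, moving):
    # Stage 1: affine alignment of the two abdominal CT volumes.
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(init, inPlace=False)
    affine = reg.Execute(fixed, moving)
    moving_affine = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0)

    # Stage 2: B-spline elastic refinement on the affinely aligned volume.
    bspline = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])
    reg2 = sitk.ImageRegistrationMethod()
    reg2.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg2.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                              numberOfIterations=100)
    reg2.SetInterpolator(sitk.sitkLinear)
    reg2.SetInitialTransform(bspline, inPlace=False)
    elastic = reg2.Execute(fixed, moving_affine)
    return sitk.Resample(moving_affine, fixed, elastic, sitk.sitkLinear, 0.0)
```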