The analysis of magnetic resonance (MR) images plays an important role in medical diagnosis. The localization of the anatomical structure of lesions or organs is a very important preprocessing step in clinical treatment planning. Furthermore, the accuracy of localization directly affects the diagnosis. We propose a multi-agent deep reinforcement learning-based method for prostate localization in MR images. We construct a collaborative communication environment for multi-agent interaction by sharing the parameters of the convolutional layers across all agents. Because each agent needs to make its action decisions independently, the fully connected layers are separate for each agent. In addition, we present a coarse-to-fine multi-scale image representation method to further improve the accuracy of prostate localization. The experimental results show that our method outperforms several state-of-the-art methods on the PROMISE12 test dataset.
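The parameter-sharing scheme described above can be sketched as follows. This is a minimal illustrative structure, not the authors' implementation: the class names are hypothetical and the "feature extraction" is a placeholder. The point is only that every agent references one shared encoder object (the convolutional layers) while keeping its own head (the fully connected layers).

```python
# Illustrative sketch of shared-encoder / separate-head agents.
# All names and the arithmetic are placeholders, not the paper's model.

class SharedEncoder:
    """Stands in for the shared convolutional layers."""
    def __init__(self, weight=1.0):
        self.weight = weight

    def extract(self, x):
        return x * self.weight  # placeholder "feature extraction"

class Agent:
    """Each agent owns its fully connected head but shares the encoder."""
    def __init__(self, encoder, head_bias):
        self.encoder = encoder      # shared object, not a copy
        self.head_bias = head_bias  # agent-specific head parameter

    def act(self, observation):
        features = self.encoder.extract(observation)
        return features + self.head_bias

encoder = SharedEncoder(weight=2.0)
agents = [Agent(encoder, head_bias=b) for b in (0.0, 10.0)]

# A single update to the shared encoder is seen by every agent.
encoder.weight = 3.0
actions = [a.act(1.0) for a in agents]  # [3.0, 13.0]
```

Because the agents hold references to the same encoder rather than copies, one update to the shared convolutional parameters is immediately reflected in every agent's policy, which is what makes the shared layers a communication channel.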
Interactive image segmentation can improve segmentation performance through manual intervention. Traditional interactive segmentation methods achieve unsatisfactory accuracy on images with complex backgrounds, while deep learning-based methods depend on large and accurately annotated datasets. In this paper, we propose an online interactive segmentation method based on a graph convolutional network (GCN), which combines the strengths of both types of methods. We present a pre-segmentation stage to obtain an initial segmentation of the image, then propose an interactive GCN (iGCN) module to further improve the accuracy of the initial segmentation. Moreover, the iGCN module is trained online without any pre-training burden. Experimental results show that our method outperforms several state-of-the-art methods on the GrabCut and Berkeley datasets.
The quality of aluminum profiles is the most important evaluation criterion in industrial production. To perform quality control of aluminum profiles, strict defect detection must be carried out. Traditional machine learning methods need hand-crafted features designed in advance, and deep learning methods need anchor parameters preset according to all defects, which is inefficient and inaccurate. In this paper, we propose an adaptive anchor network with an attention-based refinement mechanism for defect detection. The network has learnable parameters to generate anchors adaptively. Meanwhile, to better represent the different defects, we design a refinement module with channel and spatial attention mechanisms and deformable convolution at the feature extraction stage. Besides, we also use a cascade detection architecture to retain more defect information. The proposed method achieves an AP of 62.4 and an AP50 of 86.1 on an industrial dataset, an improvement of 12.8 AP and 17.8 AP50 over conventional methods, and outperforms several state-of-the-art methods.
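Conventional detectors preset anchor scales and aspect ratios by hand; the network above instead learns them. A minimal sketch of generating anchors at one location from such parameters (the function name and values are illustrative assumptions; in the proposed network the scales and ratios would be learnable rather than fixed):

```python
import math

def adaptive_anchors(center, base_size, scales, ratios):
    """Generate (x, y, w, h) anchor boxes at one feature-map location.

    `scales` and `ratios` stand in for the network's learnable anchor
    parameters; here they are plain numbers for illustration.
    """
    x, y = center
    boxes = []
    for s in scales:
        for r in ratios:
            w = base_size * s * math.sqrt(r)  # width grows with sqrt(ratio)
            h = base_size * s / math.sqrt(r)  # height shrinks accordingly
            boxes.append((x, y, w, h))
    return boxes

boxes = adaptive_anchors((0, 0), base_size=16, scales=[1.0], ratios=[1.0, 4.0])
# ratio 1.0 -> 16x16 square; ratio 4.0 -> 32x8 wide box
```

In an adaptive scheme, gradients from the detection loss would update `scales` and `ratios`, so elongated defects (common on aluminum profiles) get anchors shaped to match without manual tuning.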
Accurate segmentation of the prostate has many applications in the detection, diagnosis, and treatment of prostate cancer. Automatic segmentation can be a challenging task because of the inhomogeneous intensity distributions on MR images. In this paper, we propose an automatic, anatomy-based segmentation method for the prostate on MR images. We use a 3D U-Net guided by anatomical knowledge, including prior knowledge of the location and shape of the prostate on MR images, to constrain the segmentation of the gland. The proposed method has been evaluated on the public PROMISE12 dataset. Experimental results show that the proposed method achieves a mean Dice similarity coefficient of 91.6% as compared to the manual segmentation. These results indicate that the proposed method based on anatomical knowledge can achieve satisfactory segmentation performance for prostate MRI.
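The Dice similarity coefficient used for evaluation here (and in several of the other abstracts) measures the overlap between a predicted mask and the manual ground truth. A minimal sketch on flattened binary masks:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 values of equal length."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Both masks empty -> perfect agreement by convention.
    return 2.0 * intersection / total if total else 1.0

pred  = [1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 0]
score = dice_coefficient(pred, truth)  # 2*2 / (3+3) = 0.667
```

A DSC of 1.0 means the two masks coincide exactly; the 91.6% reported above means the automatic and manual masks overlap almost completely relative to their combined size.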
In computed tomography (CT), segmentation of organs-at-risk (OARs) is a key task in formulating the radiation therapy (RT) plan. However, delineating OARs slice by slice in CT scans takes a lot of time. The advent of deep convolutional neural networks has made it possible to segment medical images automatically and effectively. In this work, we propose an improved 2D U-Net to segment multiple OARs, aiming to increase accuracy while reducing complexity. Our method replaces vanilla convolutions with Octave Convolution (OctConv) units to reduce memory use and computation cost without sacrificing accuracy. We further plug a ‘Selective Kernel’ (SK) block after the encoder to capture multi-scale information and adaptively recalibrate the learned feature maps with an attention mechanism. An in-house dataset involving four chest organs, the left lung, right lung, heart, and spinal cord, is used to evaluate our method. Compared with the naive U-Net, the proposed method improves Dice by up to nearly 3% and requires fewer floating-point operations (FLOPs).
Medical image segmentation is a complex and critical step in medical image processing and analysis. Manual annotation of medical images requires substantial effort from professionals and is a subjective task. In recent years, researchers have proposed a number of models for automatic medical image segmentation. In this paper, we formulate the medical image segmentation problem as a Markov Decision Process (MDP) and optimize it with reinforcement learning. The proposed method mimics a professional delineating the foreground of medical images in a multi-step manner. The proposed model achieves notable accuracy compared to popular methods on prostate MR datasets. We adopt a deep reinforcement learning (DRL) algorithm called deep deterministic policy gradient (DDPG) to learn the segmentation model, which provides insight into the medical image segmentation problem.
Prostate segmentation on magnetic resonance images (MRI) is an important step in prostate cancer diagnosis and therapy. Since the advent of deep convolutional neural networks (DCNNs), supervised prostate segmentation has achieved great success. However, these works mostly rely on abundant, fully labeled pixel-level image data. In this work, we propose a weakly supervised prostate segmentation (WS-PS) method based on image-level labels. Although an image-level label is not sufficient for an exact prostate contour, it contains potential information that helps determine a coarse contour; this information is referred to as confident information in this paper. Our WS-PS method consists of two steps: mask generation and prostate segmentation. First, the mask generation (MG) step exploits a class activation map (CAM) technique to generate a coarse probability map for MRI slices based on the image-level label. Elements of the coarse map with higher probability are considered to contain more confident information. To make use of the confident information in the coarse probability map, a similarity model (S-Model) is introduced to refine the coarse map. Second, the prostate segmentation (PS) step uses a residual U-Net with a size-constraint loss to segment the prostate based on the refined mask obtained from MG. The proposed method achieves a mean Dice similarity coefficient (DSC) of 83.39% as compared to the manually delineated ground truth. The experimental results indicate that our weakly supervised method can achieve satisfactory segmentation on prostate MRI with only image-level labels.
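The "confident information" idea can be illustrated by simple thresholding of the CAM probability map: only high-probability elements are kept for the coarse mask. The function name and threshold value below are assumptions for illustration, not the paper's exact procedure:

```python
def coarse_mask(prob_map, threshold=0.7):
    """Keep only high-probability ('confident') elements of a 2-D
    probability map, producing a coarse binary mask.

    `threshold` is an assumed value; in practice it would be tuned
    or replaced by the paper's similarity-model refinement.
    """
    return [[1 if p >= threshold else 0 for p in row] for row in prob_map]

cam = [[0.1, 0.8],
       [0.9, 0.4]]
mask = coarse_mask(cam)  # [[0, 1], [1, 0]]
```

Everything below the threshold is treated as uncertain and left to later refinement, which is why the coarse mask alone under-covers the gland and a size-constraint loss is useful downstream.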
Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, using prostate MRI and the corresponding ground truths as inputs. The learned CNN model can then be used to make inferences for pixel-wise segmentation. Experiments were performed on three data sets containing prostate MRI of 140 patients. The proposed CNN model for prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of 85.0±3.8% as compared to the manually labeled ground truth. Experimental results show that the proposed model can yield satisfactory segmentation of the prostate on MRI.
Automatic segmentation of the prostate in magnetic resonance imaging (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage based on prostate MR images and the corresponding ground truths, and learns to make inferences for pixel-wise segmentation. Experiments were performed on our in-house data set, which contains prostate MR images of 20 patients. The proposed CNN model obtained a mean Dice similarity coefficient of 85.3%±3.2% as compared to the manual segmentation. Experimental results show that our deep CNN model can yield satisfactory segmentation of the prostate.
Hyperspectral imaging (HSI) is an emerging imaging modality for medical applications. HSI acquires two-dimensional images at various wavelengths, and the combination of spectral and spatial information provides quantitative information for cancer detection and diagnosis. This paper proposes using superpixels, principal component analysis (PCA), and a support vector machine (SVM) to distinguish tumor regions from healthy tissue. The classification method uses two principal components decomposed from the hyperspectral images and obtains an average sensitivity of 93% and an average specificity of 85% for 11 mice. The hyperspectral imaging technology and classification method can have various applications in cancer research and management.
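The reported sensitivity and specificity follow the standard definitions over the binary tumor/healthy classification: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP). A minimal sketch (variable names are illustrative):

```python
def sensitivity_specificity(pred, truth):
    """Compute (sensitivity, specificity) for binary labels,
    where 1 = tumor and 0 = healthy tissue."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    return tp / (tp + fn), tn / (tn + fp)

# One tumor pixel found, one healthy pixel falsely flagged:
sens, spec = sensitivity_specificity([1, 1, 0, 0], [1, 0, 0, 0])
# sens = 1/1 = 1.0, spec = 2/3
```

In this setting high sensitivity means few missed tumor regions, while the lower specificity (85% above) reflects some healthy tissue being flagged as tumor.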
Prostate segmentation on CT images is a challenging task. In this paper, we explore population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider inter-patient variations and patient-specific learning may not perform well across different patients, we combine population and patient-specific information to improve segmentation performance. Specifically, we train a population model on the population data and a patient-specific model on the manual segmentation of three slices of the new patient. We compute the similarity between the two models to gauge the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with this influence, we capture both population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density value of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients, with manual segmentation results from a radiologist serving as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% as compared to the manual segmentation gold standard, outperforming both the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.
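One way to picture the combination step is a similarity-weighted blend of the two models' per-pixel probabilities: the more the population model agrees with the patient-specific model, the more population knowledge is trusted. The linear blend below is an illustrative assumption, not necessarily the paper's exact rule:

```python
def combined_probability(p_population, p_patient, similarity):
    """Blend population and patient-specific prostate probabilities
    for one pixel. `similarity` in [0, 1] weights how much applicable
    population knowledge to trust (illustrative linear rule)."""
    return similarity * p_population + (1 - similarity) * p_patient

# Low similarity -> lean on the patient-specific model:
p = combined_probability(0.6, 0.9, similarity=0.25)  # 0.825
```

With `similarity = 0` the prediction reduces to the patient-specific model alone, and with `similarity = 1` to the population model alone, so the blend interpolates between the two failure modes discussed above.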
This paper proposes a new semi-automatic segmentation method for the prostate on 3D transrectal ultrasound (TRUS) images that combines region and classification information. We use a random walk algorithm to express the region information efficiently and flexibly because it can avoid segmentation leakage and shrinking bias. We further use a decision tree as the classifier to distinguish the prostate from non-prostate tissue because of its speed and superior performance, especially for a binary classification problem. Our segmentation algorithm is initialized by the user roughly marking prostate and non-prostate points on the mid-gland slice, which are fitted into an ellipse to obtain more points. Based on these fitted seed points, we run the random walk algorithm to segment the prostate on the mid-gland slice. The segmented contour and the information from the decision tree classification are combined to determine the initial seed points for the other slices, and the random walk algorithm is then used to segment the prostate on each adjacent slice. This process is propagated until all slices are segmented. The segmentation method was tested on 32 3D transrectal ultrasound images, with manual segmentation by a radiologist serving as the gold standard for validation. The experimental results show that the proposed method achieved a Dice similarity coefficient of 91.37±0.05%. The segmentation method can be applied to 3D ultrasound-guided prostate biopsy and other applications.
KEYWORDS: Prostate, Image segmentation, Magnetic resonance imaging, 3D image processing, Data modeling, Prostate cancer, Medical imaging, Process modeling, 3D modeling, Image processing algorithms and systems
Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a supervoxel-based method for prostate segmentation. The prostate segmentation problem is formulated as assigning a label to each supervoxel. An energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature, and the geometric relationship between two neighboring supervoxels is used to construct the smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function and thereby segment the prostate, and a 3D level set is then used to obtain a smooth surface based on the output of the graph cut. The performance of the proposed segmentation algorithm was evaluated against the manual segmentation ground truth. The experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9%±3.2%. The segmentation method can be used not only for the prostate but also for other organs.
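The energy function over supervoxel labels has the usual data-plus-smoothness form that graph cuts minimize. The sketch below uses a Potts-style smoothness penalty as an illustrative assumption (the paper's smoothness term is instead built from the geometric relationship between neighboring supervoxels):

```python
def labeling_energy(labels, data_cost, neighbors, smooth_weight=1.0):
    """Energy of one labeling: sum of per-supervoxel data costs plus a
    constant penalty for each pair of neighbors with differing labels
    (Potts model, used here only for illustration)."""
    data = sum(data_cost[i][labels[i]] for i in range(len(labels)))
    smooth = sum(smooth_weight for i, j in neighbors if labels[i] != labels[j])
    return data + smooth

# Two supervoxels, labels 0 = background, 1 = prostate:
data_cost = [{0: 0.2, 1: 0.8},   # supervoxel 0 looks like background
             {0: 0.7, 1: 0.3}]   # supervoxel 1 looks like prostate
neighbors = [(0, 1)]
energy = labeling_energy([0, 1], data_cost, neighbors)  # 0.2 + 0.3 + 1.0 = 1.5
```

Graph cut finds the labeling minimizing this energy exactly for such binary problems, trading low per-supervoxel data cost against label agreement between neighbors.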
Most multi-atlas segmentation methods focus on registration between the full-size volumes of the data set. Although the transformations obtained from these registrations may be accurate for the global field of view of the images, they may not be accurate for the local prostate region, because different magnetic resonance (MR) images have different fields of view and may have large anatomical variability around the prostate. To overcome this limitation, we propose a two-stage prostate segmentation method based on a fully automatic multi-atlas framework, comprising a detection stage (locating the prostate) and a segmentation stage (extracting the prostate). The purpose of the first stage is to find a cuboid that contains the whole prostate with as small a volume as possible. In this paper, the cuboid enclosing the prostate is detected by registering atlas edge volumes to the target volume, with an edge detection algorithm applied to every slice in the volumes. In the second stage, the proposed method focuses on registration in the vicinity of the prostate, which improves the accuracy of the prostate segmentation. We evaluated the proposed method on 12 patient MR volumes in a leave-one-out study. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) are used to quantify the difference between our results and the manual ground truth. The proposed method yielded a DSC of 83.4%±4.3% and an HD of 9.3 mm±2.6 mm. The fully automated segmentation method can provide a useful tool in many prostate imaging applications.
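The Hausdorff distance (HD) reported above is the symmetric worst-case distance between two contours: the largest distance from any point on one contour to its nearest point on the other. A minimal sketch on 2-D point sets:

```python
import math

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two non-empty point sets,
    each a list of (x, y) tuples."""
    def directed(src, dst):
        # Worst nearest-neighbor distance from src to dst.
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))

contour_a = [(0, 0), (1, 0)]
contour_b = [(0, 0), (0, 3)]
hd = hausdorff_distance(contour_a, contour_b)  # 3.0
```

Unlike the overlap-based DSC, the HD is sensitive to a single outlying boundary point, which is why the two metrics are reported together.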