The manual assessment of chest radiographs by radiologists is a time-consuming and error-prone process that relies on the availability of trained professionals. Deep learning methods have the potential to alleviate the workload of radiologists in pathology detection and diagnosis. However, one major drawback of deep learning methods is their lack of explainable decision-making, which is crucial in computer-aided diagnosis. To address this issue, activation maps of the underlying convolutional neural networks (CNN) are frequently used to indicate the regions of focus for the network during predictions. However, an evaluation of these activation maps with respect to the actual predicted pathology is often missing. In this study, we quantitatively evaluate the use of activation maps for segmenting pulmonary nodules in chest radiographs. We compare transformer-based, CNN-based, and hybrid architectures using different visualization methods. Our results show that although high performance can be achieved in the classification task across all models, the activation maps show little correlation with the actual position of the nodules.
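As a minimal sketch of the kind of quantitative evaluation described above, the overlap between a thresholded activation map and a ground-truth nodule mask can be scored with a Dice coefficient and a pointing-game-style hit criterion. The thresholding fraction and the upsampled activation map (e.g. a class activation or Grad-CAM map) are assumptions for illustration, not the exact evaluation protocol of the study.

```python
import numpy as np

def activation_overlap(activation_map, nodule_mask, keep_fraction=0.05):
    """Compare a network activation map with a binary nodule mask.

    activation_map : 2D float array (e.g. an upsampled class activation map)
    nodule_mask    : 2D bool array, True inside the annotated nodule
    keep_fraction  : fraction of highest-activation pixels kept (assumed 5%)
    """
    act = np.asarray(activation_map, dtype=float)
    mask = np.asarray(nodule_mask, dtype=bool)

    # Binarize the map by keeping the top `keep_fraction` of activations.
    threshold = np.quantile(act, 1.0 - keep_fraction)
    act_bin = act >= threshold

    # Dice overlap between the thresholded activation and the nodule mask.
    intersection = np.logical_and(act_bin, mask).sum()
    dice = 2.0 * intersection / (act_bin.sum() + mask.sum() + 1e-8)

    # Pointing game: does the single strongest activation fall inside the nodule?
    peak = np.unravel_index(np.argmax(act), act.shape)
    hit = bool(mask[peak])
    return dice, hit
```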
Population-based analysis of medical images plays an essential role in the identification and development of imaging biomarkers. Most commonly, the focus lies on a single structure or image region in order to identify variations that discriminate between patient groups. Such approaches require high segmentation accuracy in specific image regions, while the accuracy in the remaining image area is of less importance. We propose an efficient ROI-based approach for unsupervised learning of deformable atlas-to-image registration to facilitate structure-specific analysis. Our hierarchical model improves registration accuracy in relevant image regions while reducing computational cost in terms of memory consumption, computation time and, consequently, energy consumption. The proposed method was evaluated for predicting cognitive impairment from morphological changes of the hippocampal region in brain MRI, showing that, next to the efficient processing of 3D data, our method delivers accurate results comparable to state-of-the-art tools.
Lesion detection in brain Magnetic Resonance Images (MRIs) remains a challenging task. MRIs are typically read and interpreted by domain experts, which is a tedious and time-consuming process. Recently, unsupervised anomaly detection (UAD) in brain MRI with deep learning has shown promising results to provide a quick, initial assessment. So far, these methods only rely on the visual appearance of healthy brain anatomy for anomaly detection. Another biomarker for abnormal brain development is the deviation between the brain age and the chronological age, which is unexplored in combination with UAD. We propose deep learning for UAD in 3D brain MRI considering additional age information. We analyze the value of age information during training, as an additional anomaly score, and systematically study several architecture concepts. Based on our analysis, we propose a novel deep learning approach for UAD with multi-task age prediction. We use clinical T1-weighted MRIs of 1735 healthy subjects and the publicly available BraTS 2019 data set for our study. Our novel approach significantly improves UAD performance with an AUC of 92.60% compared to 84.37% for previous approaches without age information.
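One way to use the age information as an additional anomaly score is to fuse a normalized brain-age gap with the reconstruction-based anomaly score, as in the sketch below. The weighting and the normalization scale are illustrative assumptions, not the exact scheme of the study.

```python
import numpy as np

def combined_anomaly_score(recon_error, predicted_age, chronological_age,
                           age_scale=10.0, weight=0.5):
    """Fuse a reconstruction-based anomaly score with a brain-age gap.

    recon_error       : per-subject reconstruction error of the UAD model
    predicted_age     : age predicted by the (multi-task) network, in years
    chronological_age : true age, in years
    age_scale, weight : illustrative normalization and mixing parameters
    """
    recon_error = np.asarray(recon_error, dtype=float)
    age_gap = np.abs(np.asarray(predicted_age) - np.asarray(chronological_age))

    # Normalize the reconstruction error to [0, 1] across the cohort.
    recon_norm = (recon_error - recon_error.min()) / (np.ptp(recon_error) + 1e-8)
    # Normalize the age gap by an assumed scale of `age_scale` years.
    age_norm = np.clip(age_gap / age_scale, 0.0, 1.0)

    return (1.0 - weight) * recon_norm + weight * age_norm
```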
Quantification of potentially cancerous lesions from imaging modalities, most prominently from CT or PET
images, plays a crucial role both in diagnosing and staging of cancer as well as in the assessment of the response
of a cancer to a therapy, e.g. for lymphoma or lung cancer. For PET imaging, several quantifications which might
bear great discriminating potential (e.g. total tumor burden or total tumor glycolysis) involve the segmentation of all cancerous lesions. This task can be very tedious if it has to be done manually, in particular if the disease is scattered or metastasized and thus consists of numerous foci; this is one of the reasons why only a few clinical studies on these quantifications are available. In this work, we investigate a way to aid the determination of the entirety of cancerous lesions in a PET image. The approach is designed to detect all hot spots within a PET
image and rank their probability of being a cancerous lesion. The basis of this component is a modified watershed
algorithm; the ranking is performed on a combination of several, primarily morphological measures derived from
the individual basins. This component is embedded in a software suite to assess response to a therapy based on
PET images. As a preprocessing step, potential lesions are segmented and indicated to the user, who can select
the foci which constitute the tumor and discard the false positives. This procedure substantially simplifies the
segmentation of the entire tumor burden of a patient. This approach of semi-automatic hot spot detection is
evaluated on 17 clinical datasets.
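As an illustration of this kind of hot spot candidate generation, the sketch below builds watershed basins around local SUV maxima and derives a few simple, primarily morphological measures per basin. It uses the standard scikit-image watershed as a stand-in for the modified watershed of the paper; the SUV threshold, the marker distance and the ranking by peak uptake are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import regionprops

def pet_hotspot_candidates(suv, body_mask, min_suv=2.5):
    """Detect PET hot spots as watershed basins and derive simple
    morphological measures per basin (illustrative sketch).

    suv       : 3D array of standardized uptake values
    body_mask : 3D bool array restricting the search to the patient body
    min_suv   : assumed lower SUV bound for candidate hot spots
    """
    candidate_mask = body_mask & (suv >= min_suv)

    # Local SUV maxima serve as markers; the watershed is run on the
    # inverted image so that basins grow around the hot spots.
    regions, _ = ndi.label(candidate_mask)
    peaks = peak_local_max(suv, labels=regions, min_distance=3)
    markers = np.zeros(suv.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    basins = watershed(-suv, markers=markers, mask=candidate_mask)

    # Simple, primarily morphological measures per basin; a weighted
    # combination of such measures could be used to rank the candidates.
    candidates = []
    for region in regionprops(basins, intensity_image=suv):
        candidates.append({
            "label": region.label,
            "volume_voxels": region.area,
            "max_suv": region.max_intensity,
            "mean_suv": region.mean_intensity,
            "extent": region.extent,  # fill ratio of the bounding box
        })
    # Rank candidates, here simply by peak uptake (illustrative choice).
    return sorted(candidates, key=lambda c: c["max_suv"], reverse=True)
```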
Early response assessment of cancer therapy is a crucial component towards a more effective and patient-individualized
cancer therapy. Integrated PET/CT systems provide the opportunity to combine morphologic with
functional information. We have developed algorithms which allow the user to track both tumor volume and
standardized uptake value (SUV) measurements during the therapy from series of CT and PET images, respectively.
To prepare for tumor volume estimation we have developed a new technique for a fast, flexible, and
intuitive 3D definition of meshes. This initial surface is then automatically adapted by means of a model-based
segmentation algorithm and propagated to each follow-up scan. If necessary, manual corrections can be added by
the user. To determine SUV measurements a prioritized region growing algorithm is employed. For an improved
workflow all algorithms are embedded in a PET/CT therapy monitoring software suite giving the clinician a
unified and immediate access to all data sets. Whenever the user clicks on a tumor in a baseline scan, the
courses of segmented tumor volumes and SUV measurements are automatically identified and displayed to the
user as a graph plot. According to each course, the therapy progress can be classified as complete or partial
response or as progressive or stable disease. We have tested our methods with series of PET/CT data from 9
lung cancer patients acquired at Princess Margaret Hospital in Toronto. Each patient underwent three PET/CT
scans during a radiation therapy. Our results indicate that combining the mean metabolic activity in the tumor with the PET-based tumor volume can lead to earlier response detection than purely volume-based (CT diameter) or purely function-based (e.g. SUVmax or SUVmean) response measures. The new software appears suitable for easy, fast, and reproducible quantification for routine monitoring of tumor therapy.
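The classification of such a measurement course into response categories can be sketched as below; the thresholds (30% decrease for partial response, 20% increase for progression) are illustrative assumptions loosely following RECIST-style criteria, not necessarily the rules implemented in the software.

```python
def classify_course(baseline_value, follow_up_value,
                    partial_response_drop=0.30, progression_rise=0.20):
    """Classify a tumor measurement course (volume or SUV) into a response
    category. The thresholds are illustrative assumptions loosely following
    RECIST-style criteria."""
    if follow_up_value <= 0.0:
        return "complete response"
    change = (follow_up_value - baseline_value) / baseline_value
    if change <= -partial_response_drop:
        return "partial response"
    if change >= progression_rise:
        return "progressive disease"
    return "stable disease"

# Example: SUVmean course from a baseline and a follow-up PET scan.
print(classify_course(baseline_value=8.4, follow_up_value=4.1))  # partial response
```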
A real-time matching algorithm for follow-up chest CT scans can significantly reduce the workload of radiologists by automatically finding, for a location marked in one scan, the corresponding location in the respective other scan. The objective of this study was
to assess the accuracy of a fast and versatile single-point registration algorithm for thoracic CT scans.
The matching algorithm is based on automatic lung segmentations in both CT scans, individually for left and right lung.
Whenever the user clicks on an arbitrary structure in the lung, the coarse position of the corresponding point in the other
scan is identified by comparing the volume percentiles of the lungs. Then the position is refined by optimizing the gray
value cross-correlation of a local volume of interest. The algorithm is able to register any structure in or near the lungs,
but is of clinical interest in particular with respect to lung nodules and airways.
For validation, CT scan pairs were used in which the patients were scanned twice in one session, using low-dose non-contrast-enhanced chest CT scans (0.75 mm collimation). Between these scans, patients got off and on the table to
simulate a follow-up scan. 291 nodules were evaluated. Average nodule diameter was 9.5 mm (range 2.9 - 74.1 mm).
Automatic registration succeeded in 95.2% of all cases (277/291). For successfully registered nodules, the average registration
consistency was 1.1 mm. The real-time matching proved to be an accurate and useful tool for radiologists evaluating
follow-up chest CT scans to assess possible nodule growth.
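A minimal sketch of the two matching stages is given below: a coarse correspondence from per-axis lung volume percentiles, refined by maximizing the normalized gray value cross-correlation of a local volume of interest. The per-axis percentile interpretation, the VOI size and the search radius are assumptions, and the sketch assumes the clicked point lies well inside the image.

```python
import numpy as np

def coarse_match(point, lung_mask_a, lung_mask_b):
    """Coarse correspondence between two scans via lung volume percentiles:
    for each axis, the fraction of lung voxels of scan A lying below the
    clicked coordinate is computed and the coordinate with the same fraction
    is looked up in the lung of scan B (per-axis interpretation assumed)."""
    coords_a = np.argwhere(lung_mask_a)
    coords_b = np.argwhere(lung_mask_b)
    matched = []
    for axis in range(3):
        fraction = np.mean(coords_a[:, axis] <= point[axis])
        matched.append(int(np.quantile(coords_b[:, axis], fraction)))
    return tuple(matched)

def refine_match(img_a, img_b, point_a, coarse_b, voi=15, search=5):
    """Refine the coarse position by maximizing the normalized gray value
    cross-correlation of a local volume of interest (sizes are assumptions)."""
    def patch(img, center, radius):
        sl = tuple(slice(c - radius, c + radius + 1) for c in center)
        return img[sl].astype(float)

    ref = patch(img_a, point_a, voi)
    ref = (ref - ref.mean()) / (ref.std() + 1e-8)

    best_ncc, best_pos = -np.inf, coarse_b
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                center = (coarse_b[0] + dz, coarse_b[1] + dy, coarse_b[2] + dx)
                cand = patch(img_b, center, voi)
                if cand.shape != ref.shape:
                    continue  # candidate VOI leaves the image
                cand = (cand - cand.mean()) / (cand.std() + 1e-8)
                ncc = float(np.mean(ref * cand))
                if ncc > best_ncc:
                    best_ncc, best_pos = ncc, center
    return best_pos, best_ncc
```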
Response assessment of cancer therapy is a crucial component towards a more effective and patient-individualized cancer therapy. Integrated PET/CT systems provide the opportunity to combine morphologic with functional information. However, dealing simultaneously with several PET/CT scans poses a serious workflow problem. It can be a difficult and tedious task to extract response criteria based upon an integrated analysis of PET and CT images and to track these criteria over time. In order to improve the workflow for serial analysis of PET/CT scans, we introduce in this paper a fast lesion tracking algorithm. We combine a global multi-resolution rigid registration algorithm with a local block matching and a local region growing algorithm. Whenever the user clicks on a lesion in the baseline PET scan, the course of standardized uptake values (SUV) is automatically identified and shown to the user as a graph plot. We have validated our method on a data collection from 7 patients. Each patient underwent two or three PET/CT scans during the course of a cancer therapy. An experienced nuclear medicine physician manually measured the courses of the maximum SUVs for altogether 18 lesions. The automatic detection of the corresponding lesions yielded SUV measurements that are nearly identical to the manually measured SUVs. Across 38 maximum SUVs derived from manually and automatically detected lesions, we observed a correlation of 0.9994 and an average error of 0.4 SUV units.
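Once the lesion position has been transferred to a follow-up scan, the SUVmax course can be read out around the registered seed point. The sketch below uses a simple threshold-based region growing inside a local VOI as a stand-in for the local region growing of the paper; the VOI radius and the 50%-of-peak threshold are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def suv_max_at_lesion(suv, seed, voi_radius=10, grow_fraction=0.5):
    """Read out the lesion SUVmax around a registered seed point using a
    simple threshold-based region growing inside a local VOI (illustrative
    stand-in; radius and threshold are assumptions)."""
    sl = tuple(slice(max(c - voi_radius, 0), c + voi_radius + 1) for c in seed)
    voi = suv[sl]

    peak = float(voi.max())
    grown = voi >= grow_fraction * peak            # voxels above the growing threshold
    labels, _ = ndi.label(grown)                   # connected components inside the VOI
    seed_local = tuple(min(int(c), voi_radius) for c in seed)
    lesion_label = labels[seed_local]
    if lesion_label == 0:                          # seed below threshold: fall back to the VOI peak
        return peak
    return float(voi[labels == lesion_label].max())

def suv_course(suv_scans, seeds):
    """SUVmax course over serial scans, given the registered seed per time point."""
    return [suv_max_at_lesion(scan, seed) for scan, seed in zip(suv_scans, seeds)]
```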
One challenge facing radiologists is the characterization of whether a pulmonary nodule detected in a CT scan is likely to be benign or malignant. We have developed an image processing and machine learning based computer-aided diagnosis (CADx) method to support such decisions by estimating the likelihood of malignancy of pulmonary nodules. The system computes 192 image features which are combined with patient age to comprise the feature pool. We constructed an ensemble of 1000 linear discriminant classifiers using 1000 feature subsets selected from the feature pool with a random subspace method. The classifiers were trained on a dataset of 125 pulmonary nodules. The individual classifier results were combined by majority voting to form an ensemble estimate of the likelihood of malignancy. Validation was performed on nodules in the Lung Image Database Consortium (LIDC) dataset for which radiologist interpretations were available. We performed a calibration to reduce the differences in the internal operating points and spacing between the radiologist ratings and the CADx algorithm. Comparing radiologists with the CADx in assigning nodules into four malignancy categories, fair agreement was observed (κ=0.381), while binary rating yielded an agreement of κ=0.475, suggesting that CADx can be a promising second reader in a clinical setting.
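A minimal sketch of such a random subspace ensemble with majority voting, using scikit-learn's linear discriminant analysis, is shown below. The subspace size and the 0/1 label encoding (benign/malignant) are assumptions for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_random_subspace_ensemble(X, y, n_classifiers=1000,
                                   subspace_size=10, seed=0):
    """Train an ensemble of linear discriminant classifiers, each on a random
    feature subset (random subspace method). The subspace size is an
    illustrative assumption; y is assumed to be 0 (benign) / 1 (malignant)."""
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n_classifiers):
        features = rng.choice(X.shape[1], size=subspace_size, replace=False)
        clf = LinearDiscriminantAnalysis().fit(X[:, features], y)
        ensemble.append((features, clf))
    return ensemble

def predict_majority(ensemble, X):
    """Majority vote over the individual classifier decisions; the vote
    fraction serves as an estimate of the likelihood of malignancy."""
    votes = np.stack([clf.predict(X[:, features])
                      for features, clf in ensemble], axis=0)
    likelihood = votes.mean(axis=0)          # fraction of 'malignant' votes
    return (likelihood >= 0.5).astype(int), likelihood
```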
Computer aided characterization aims to support the differential diagnosis of indeterminate pulmonary nodules. A
number of published studies have correlated automatically computed features from image processing with clinical
diagnoses of malignancy vs. benignity. Often, however, classifiers with a high number of features were trained on a relatively small number of diagnosed nodules, raising a certain skepticism as to how salient and numerically robust the various features really are. On the way towards computer aided diagnosis that is trusted in clinical practice, the credibility of the
individual numerical features has to be carefully established.
Nodule volume is the most crucial parameter for nodule characterization, and a number of studies are testing its
repeatability. Apart from functional parameters (such as dynamic CT enhancement and PET uptake values), the next
most widely used parameter is the surface characteristic (vascularization, spicularity, lobulation, smoothness). In this
study, we test the repeatability of two simple surface smoothness features which can discriminate between smoothly
delineated nodules and those with a high degree of surface irregularity.
Robustness of the completely automatically computed features was tested with respect to the following aspects: (a)
repeated CT scan of the same patient with equal dose, (b) repeated CT scan with much lower dose and much higher
noise, (c) repeated automatic segmentation of the nodules using varying segmentation parameters, resulting in differing
nodule surfaces. The tested nodules (81) were all solid or partially solid and included a high number of sub- and juxtapleural
nodules. We found that both tested surface characterization features correlated reasonably well with each other
(80%), and that in particular the mean surface shape index showed excellent repeatability: 98% correlation between equal-dose CT scans, 93% between standard-dose and low-dose scans (without systematic shift), and 97% between segmentations with varying HU thresholds, which makes it a reliable feature for use in computer aided diagnosis.
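For reference, the shape index underlying such a mean surface shape index feature can be computed from principal curvatures as in the sketch below (Koenderink's shape index). That the per-vertex principal curvatures come from an upstream surface extraction of the segmented nodule is an assumption here; that step is not shown.

```python
import numpy as np

def shape_index(kappa1, kappa2):
    """Koenderink shape index S in [-1, 1] from principal curvatures;
    S is close to +/-1 for cap- or cup-like (smooth) patches and takes
    intermediate values for ridges and saddles."""
    kappa1 = np.asarray(kappa1, dtype=float)
    kappa2 = np.asarray(kappa2, dtype=float)
    k_max = np.maximum(kappa1, kappa2)
    k_min = np.minimum(kappa1, kappa2)
    # Planar/umbilic patches (k_max == k_min) have an undefined shape index.
    denom = np.where(k_max == k_min, np.finfo(float).eps, k_max - k_min)
    return (2.0 / np.pi) * np.arctan((k_max + k_min) / denom)

def mean_surface_shape_index(kappa1, kappa2):
    """Average shape index over all surface points of a segmented nodule,
    given per-vertex principal curvatures from a surface extraction step."""
    return float(np.mean(shape_index(kappa1, kappa2)))
```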
We present an effective and intuitive visualization of the macro-vasculature of a selected nodule or tumor in three-dimensional
image data (e.g. CT, MR, US). For the differential diagnosis of nodules the possible distortion of adjacent
vessels is one important clinical criterion.
Surface renderings of vessel- and tumor-segmentations depend critically on the chosen parameter- and threshold-values
for the underlying segmentation. Therefore we use rotating Maximum Intensity Projections (MIPs) of a volume of interest (VOI) around the selected tumor. The MIP does not require specific parameters, and allows much quicker
visual inspection in comparison to slicewise navigation, while the rotation gives depth cues to the viewer. Of the vessel
network within the VOI, however, not all vessels are connected to the selected tumor, and it is tedious to sort out which
adjacent vessels are in fact connected and which are overlaid only by projection. Therefore we suggest a simple
transformation of the original image values into connectedness values. In the derived connectedness image, each voxel value corresponds to the lowest image value encountered along the best possible path from the tumor to that voxel, i.e. the path that maximizes this minimum value.
The advantage of the visualization is that no implicit binary decision is made whether a certain vessel is connected to
the tumor or not, but rather the degree of connectedness is visualized as the brightness of the vessel. Non-connected
structures disappear, feebly connected structures appear faint, and strongly connected structures remain in their original
brightness. The visualization does not depend on delicate threshold values. Promising results have been achieved for
pulmonary nodules in CT.
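The connectedness image described above can be computed with a Dijkstra-like propagation using a max-heap, as sketched below; 6-connectivity and a binary tumor seed mask are assumptions of this sketch.

```python
import heapq
import numpy as np

def connectedness_image(image, tumor_mask):
    """Each voxel receives the lowest image value encountered along its best
    (max-min) path from the tumor, computed by Dijkstra-like propagation
    with a max-heap; 6-connectivity is assumed."""
    conn = np.full(image.shape, -np.inf)
    conn[tumor_mask] = image[tumor_mask]

    # Max-heap via negated values: start from all tumor voxels.
    heap = [(-image[idx], idx) for idx in map(tuple, np.argwhere(tumor_mask))]
    heapq.heapify(heap)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    while heap:
        neg_val, (z, y, x) = heapq.heappop(heap)
        val = -neg_val
        if val < conn[z, y, x]:
            continue  # stale heap entry
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if not (0 <= nz < image.shape[0] and 0 <= ny < image.shape[1]
                    and 0 <= nx < image.shape[2]):
                continue
            # Bottleneck value of the path extended to the neighbor voxel.
            cand = min(val, image[nz, ny, nx])
            if cand > conn[nz, ny, nx]:
                conn[nz, ny, nx] = cand
                heapq.heappush(heap, (-cand, (nz, ny, nx)))
    return conn
```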
A robust, fast and generally applicable algorithm is presented for the splitting of anatomical trees such as vessel and airway trees into meaningful subtrees, which relies on a straightforward mathematical objective function and produces subjectively very satisfactory results. The algorithm is applicable to unstructured 2D or 3D voxel sets or undirected graphs of centerlines with unknown anatomical root point as produced by unsupervised segmentation algorithms. The automated tree splitting improves clinical tree segmentation tasks by replacing tedious manual three-dimensional navigation and editing.
Computer aided quantification of emphysema in high resolution CT data is based on identifying low attenuation areas below clinically determined Hounsfield thresholds. However, the emphysema quantification is prone to error, since a gravity effect can influence the mean attenuation of healthy lung parenchyma by up to ±50 HU between ventral and dorsal lung areas. Comparing ultra-low-dose (7 mAs) and standard-dose (70 mAs) CT scans of each patient, we show that the measurement of the ventrodorsal gravity effect is patient-specific but reproducible. It can be measured and corrected in an unsupervised way using robust fitting of a linear function.
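A minimal sketch of such a correction is given below: the ventrodorsal attenuation trend of the parenchyma is fitted with a robust linear estimator and subtracted before thresholding. The choice of scikit-learn's HuberRegressor is an assumption; the abstract only states that a robust linear fit is used.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

def correct_ventrodorsal_gradient(hu_values, ventrodorsal_position):
    """Robustly fit a linear ventrodorsal attenuation trend of the lung
    parenchyma and remove it.

    hu_values             : 1D array of parenchyma attenuation values (HU)
    ventrodorsal_position : 1D array of the ventrodorsal coordinate per voxel
    """
    x = np.asarray(ventrodorsal_position, dtype=float).reshape(-1, 1)
    y = np.asarray(hu_values, dtype=float)

    model = HuberRegressor().fit(x, y)   # robust to emphysematous outlier voxels
    trend = model.predict(x)

    # Remove the gravity-induced gradient but keep the overall mean attenuation;
    # the clinical emphysema threshold is then applied to the corrected values.
    corrected = y - (trend - trend.mean())
    return corrected, float(model.coef_[0])  # corrected HU and slope (HU per unit distance)
```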
For more than one decade computer aided detection (CAD) for
pulmonary nodules has been an active research area. There are
numerous publications dedicated to this topic. Most authors have
created their own database with their own ground truth for
validation. This makes it hard to compare the performance of
different systems with each other. It is a known fact that the
performance of a CAD system can differ significantly depending on
which data it is tested on and on the underlying ground truth. The Lung Image Database Consortium (LIDC) has recently released 93 publicly available lung images with ground truth lists from 4 different radiologists. This database will make it possible to compare the performance of different CAD algorithms. In this paper we take a first step towards using the LIDC data as a benchmark.
We present a CAD algorithm with a validation study on these data
sets. The CAD performance was analyzed using multiple Free-Response Receiver Operating Characteristic (FROC) curves for
different lower thresholds of the nodule diameter. There are
different ways to merge the ground truth lists of the
4 radiologists and we discuss the performance of our CAD algorithm
for several of these possibilities. For nodules with a
volume-equivalent diameter ≥4mm which have been
simultaneously confirmed by all four radiologists our CAD system
shows a detection rate of 89% at a median false positive rate
of 2 findings per patient.
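For illustration, FROC operating points can be derived from the per-candidate CAD scores as sketched below, assuming that at most one candidate is counted as a hit per reference nodule and that hits have already been established against the merged ground truth list. Note that the sketch reports mean false positives per scan (the conventional FROC x-axis), whereas the study above reports a median rate.

```python
import numpy as np

def froc_curve(scores, is_true_nodule, n_scans, n_reference_nodules):
    """Compute FROC operating points from CAD candidate scores.

    scores              : 1D array, one confidence score per CAD finding
    is_true_nodule      : 1D bool array, True if the finding hits a reference nodule
    n_scans             : number of scans/patients in the test set
    n_reference_nodules : number of reference nodules for the given size threshold

    Returns arrays of (false positives per scan, sensitivity), one point per
    score threshold.
    """
    order = np.argsort(scores)[::-1]            # descending confidence
    hits = np.asarray(is_true_nodule)[order]

    tp = np.cumsum(hits)                        # detected nodules at each threshold
    fp = np.cumsum(~hits)                       # accumulated false positives
    sensitivity = tp / n_reference_nodules
    fps_per_scan = fp / n_scans
    return fps_per_scan, sensitivity
```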
In this paper we describe a new general tumor segmentation approach, which combines energy minimization
methods with radial basis function surface modelling techniques. A tumor is mathematically described by a
superposition of radial basis functions. In order to find the optimal segmentation we minimize a certain energy
functional. Similar to snake segmentation our energy functional is a weighted sum of an internal and an external
energy. The internal energy is the bending energy of the surface and can be computed from the radial basis
function coefficients directly. Unlike snake segmentation, we do not have to derive and solve Euler-Lagrange
equations. We can solve the minimization problem by standard optimization techniques. Our approach is not
restricted to one single imaging modality and it can be applied to 2D, 3D or even 4D data. In addition, our
segmentation method makes several simple and intuitive user interactions possible. For instance, we can enforce
interpolation of certain user defined points. We validate our new method with lung nodules on CT data. A
validation on clinical data is carried out with the 91 publicly available CT lung images provided by the Lung Image Database Consortium (LIDC). The LIDC also provides ground truth lists from 4 different radiologists. We
discuss the inter-observer variability of the 4 radiologists and compare their segmentations with the segmentation
results of the presented algorithm.
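The energy-minimization idea can be illustrated with the 2D sketch below, in which the lesion boundary is a radius function built from Gaussian radial basis functions, and a weighted sum of a bending (internal) energy and an image-gradient (external) energy is minimized with a standard optimizer. The basis count, kernel width, weights and the 2D radial parameterization are assumptions of this sketch rather than the exact formulation of the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_gradient_magnitude
from scipy.optimize import minimize

def segment_rbf_contour(image, center, n_basis=12, sigma=0.6,
                        r_init=8.0, weight_external=5.0):
    """Illustrative 2D sketch: the boundary is modelled as a radius function
    r(theta) built from Gaussian radial basis functions, and a weighted sum
    of bending (internal) and image-gradient (external) energy is minimized
    with a standard optimizer."""
    theta = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
    centers = np.linspace(0.0, 2.0 * np.pi, n_basis, endpoint=False)
    # Periodic Gaussian basis evaluated at the contour sample angles.
    diff = np.angle(np.exp(1j * (theta[:, None] - centers[None, :])))
    basis = np.exp(-0.5 * (diff / sigma) ** 2)

    grad = gaussian_gradient_magnitude(image.astype(float), sigma=1.0)

    def energy(coeffs):
        r = r_init + basis @ coeffs
        # Internal (bending) energy: penalize the second differences of r(theta).
        bending = np.sum(np.diff(r, n=2, append=r[:2]) ** 2)
        # External energy: the contour should lie on strong image gradients.
        ys = center[0] + r * np.sin(theta)
        xs = center[1] + r * np.cos(theta)
        edge = map_coordinates(grad, np.vstack([ys, xs]), order=1)
        return bending - weight_external * np.sum(edge)

    res = minimize(energy, x0=np.zeros(n_basis), method="Powell")
    return r_init + basis @ res.x, theta  # optimized radii per sampled angle
```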
The performance of computer aided lung nodule detection (CAD) and
computer aided nodule volumetry is compared between standard-dose
(70-100 mAs) and ultra-low-dose CT images (5-10 mAs). A direct
quantitative performance comparison was possible, since for each
patient both an ultra-low-dose and a standard-dose CT scan were
acquired within the same examination session. The data sets were
recorded with a multi-slice CT scanner at the Charité university
hospital Berlin with 1 mm slice thickness. Our computer aided
nodule detection and segmentation algorithms were deployed on both
ultra-low-dose and standard-dose CT data without any dose-specific
fine-tuning or preprocessing. As a reference standard 292 nodules
from 20 patients were visually identified, each nodule both in
ultra-low-dose and standard-dose data sets. The CAD performance was
analyzed using multiple FROC curves for different lower
thresholds of the nodule diameter. For nodules with a
volume-equivalent diameter equal to or larger than 4 mm (149 nodule pairs), we observed a detection rate of 88% at a median false positive rate of 2 per patient in standard-dose images, and a detection rate of 86% in ultra-low-dose images, also at 2 FPs per patient. Including even smaller nodules equal to or larger than 2 mm (272 nodule pairs), we observed detection rates of 86% in standard-dose images and 84% in ultra-low-dose images, both at a rate of 5 FPs per patient.
Moreover, we observed a
correlation of 94% between the volume-equivalent nodule diameter as
automatically measured on ultra-low-dose versus on standard-dose
images, indicating that ultra-low-dose CT is also feasible for
growth-rate assessment in follow-up examinations. The comparable
performance of lung nodule CAD in ultra-low-dose and standard-dose
images is of particular interest with respect to lung cancer
screening of asymptomatic patients.
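The volume-equivalent diameter used above and the correlation between its ultra-low-dose and standard-dose measurements can be computed as sketched below; nodule volumes in mm³ from the automatic segmentation are assumed as input.

```python
import numpy as np

def volume_equivalent_diameter(volume_mm3):
    """Diameter of a sphere with the same volume as the segmented nodule."""
    return (6.0 * np.asarray(volume_mm3, dtype=float) / np.pi) ** (1.0 / 3.0)

def paired_diameter_correlation(volumes_low_dose, volumes_standard_dose):
    """Pearson correlation between volume-equivalent diameters measured on
    ultra-low-dose and standard-dose scans of the same nodules."""
    d_low = volume_equivalent_diameter(volumes_low_dose)
    d_std = volume_equivalent_diameter(volumes_standard_dose)
    return float(np.corrcoef(d_low, d_std)[0, 1])
```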
Automatic extraction of the tracheobronchial tree from high resolution CT data serves visual inspection by virtual endoscopy as well as computer aided measurement of clinical parameters along the airways. The purpose of this study is to show the feasibility of automatic extraction (segmentation) of the airway tree even in ultra-low-dose CT data (5-10 mAs), and to compare the performance of the airway extraction between ultra-low-dose and standard-dose (70-100 mAs) CT data. A direct performance comparison (instead of a mere simulation) was possible since for each patient both an ultra-low-dose and a standard-dose CT scan were acquired within the same examination session. The data sets were recorded with a multi-slice CT scanner at the Charité university hospital Berlin with 1 mm slice thickness.
An automated tree extraction algorithm was applied to both the ultra-low-dose and the standard-dose CT data. No dose-specific parameter tuning or image pre-processing was used. For performance comparison, the total length of all visually verified centerlines of each tree was accumulated for all airways beyond the tracheal carina. Correlating the extracted total airway length between ultra-low-dose and standard-dose data for each patient showed that, on average, 84% of the airway length found in the standard-dose images was retrieved in the ultra-low-dose images.
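The accumulation of the total centerline length beyond the carina can be sketched as below, assuming the extracted centerline tree is available as an adjacency mapping with 3D node positions in millimetres; the node names in the example are hypothetical.

```python
import numpy as np

def total_airway_length(children, positions, carina):
    """Accumulate the total centerline length of all airways beyond the
    tracheal carina, given the centerline tree as an adjacency mapping
    (node -> list of child nodes) with 3D node positions in mm."""
    total = 0.0
    stack = [carina]
    while stack:
        node = stack.pop()
        for child in children.get(node, []):
            total += float(np.linalg.norm(
                np.asarray(positions[child]) - np.asarray(positions[node])))
            stack.append(child)
    return total

# Example with a tiny hypothetical tree (node ids and positions are made up).
children = {"carina": ["left_main", "right_main"], "left_main": [], "right_main": []}
positions = {"carina": (0, 0, 0), "left_main": (20, 10, -5), "right_main": (-18, 8, -4)}
print(total_airway_length(children, positions, "carina"))
```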