Esophageal wall thickness is an important predictor of esophageal cancer response to therapy. In this study, we developed a computerized pipeline for quantification of esophageal wall thickness using computed tomography (CT). We first segmented the esophagus using a multi-atlas-based segmentation scheme. The esophagus in each atlas CT was manually segmented to create a label map. Using image registration, all of the atlases were aligned to the imaging space of the target CT. The deformation field from the registration was applied to the label maps to warp them to the target space. A weighted majority-voting label fusion was employed to create the segmentation of the esophagus. Finally, we excluded the lumen from the esophagus using a threshold of -600 HU and measured the esophageal wall thickness. The developed method was tested on a dataset of 30 CT scans, including 15 esophageal cancer patients and 15 normal controls. The mean Dice similarity coefficient (DSC) and mean absolute distance (MAD) between the segmented esophagus and the reference standard were employed to evaluate the segmentation results. Our method achieved a mean DSC of 65.55 ± 10.48% and mean MAD of 1.40 ± 1.31 mm for all the cases. The mean esophageal wall thickness of cancer patients and normal controls was 6.35 ± 1.19 mm and 6.03 ± 0.51 mm, respectively. We conclude that the proposed method can perform quantitative analysis of esophageal wall thickness and would be useful for tumor detection and tumor response evaluation of esophageal cancer.
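The weighted majority-voting fusion step above can be sketched as follows. This is an illustrative toy only: the one-dimensional arrays stand in for warped atlas label maps, and the weights are hypothetical registration-similarity scores, not values from the paper.

```python
import numpy as np

def fuse_labels(warped_labels, weights):
    """Weighted majority voting: each warped atlas label map casts a vote
    scaled by its weight; voxels whose weighted vote exceeds half of the
    total weight are labeled esophagus. (Illustrative sketch only.)"""
    weights = np.asarray(weights, dtype=float)
    votes = np.zeros(warped_labels[0].shape, dtype=float)
    for lab, w in zip(warped_labels, weights):
        votes += w * (lab > 0)
    return votes > 0.5 * weights.sum()

# Toy example: three tiny "label maps" on a 4-voxel line.
a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 1, 0])
c = np.array([1, 1, 0, 0])
fused = fuse_labels([a, b, c], weights=[1.0, 0.5, 1.0])
# In the paper, lumen voxels would then be excluded with a -600 HU
# threshold on the CT, e.g. wall = fused & (ct > -600).
print(fused.astype(int))  # -> [1 1 0 0]
```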
Three-dimensional volumetric imaging correlated with respiration (4DCT) typically utilizes external breathing
surrogates and phase-based models to determine lung tissue motion. However, 4DCT requires time-consuming post-processing,
and the relationship between external breathing surrogates and lung tissue motion is not clearly defined. This
study compares algorithms using external respiratory motion surrogates as predictors of internal lung motion tracked in
real-time by electromagnetic transponders (Calypso® Medical Technologies) implanted in a canine model.
Simultaneous spirometry, bellows, and transponder position measurements were acquired during free breathing and
variable ventilation respiratory patterns. Functions of phase, amplitude, tidal volume, and airflow were examined by
least-squares regression analysis to determine which algorithm provided the best estimate of internal motion. The cosine
phase model performed the worst of all models analyzed (R2 = 31.6%, free breathing, and R2 = 14.9%, variable
ventilation). All algorithms performed better during free breathing than during variable ventilation measurements. The
5D model of tidal volume and airflow predicted transponder location better than amplitude or either of the two phase-based
models analyzed, with correlation coefficients of 66.1% and 64.4% for free breathing and variable ventilation,
respectively. Real-time implanted transponder based measurements provide a direct method for determining lung tissue
location. Current phase-based or amplitude-based respiratory motion algorithms cannot as accurately predict lung tissue
motion in an irregularly breathing subject as a model including tidal volume and airflow. Further work is necessary to
quantify the long-term stability of prediction capabilities using amplitude- and phase-based algorithms for multiple lung
tumor positions over time.
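A model that is linear in tidal volume and airflow, of the kind compared above, can be fit by ordinary least squares. The sketch below uses synthetic surrogate signals in place of the measured spirometry and transponder data; all amplitudes and coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic surrogate signals (stand-ins for the spirometry measurements).
t = np.linspace(0, 20, 400)
v = 0.5 * (1 - np.cos(2 * np.pi * t / 4))   # tidal volume (L), 4 s period
f = np.gradient(v, t)                       # airflow (L/s)

# Simulated transponder displacement: linear in v and f plus noise,
# i.e. the tidal-volume/airflow model form x = x0 + a*v + b*f.
x = 3.0 + 8.0 * v + 1.5 * f + rng.normal(0, 0.1, t.size)

# Least-squares fit of the model coefficients.
A = np.column_stack([np.ones_like(v), v, f])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)
pred = A @ coef

# Coefficient of determination (R^2) of the fit.
r2 = 1 - np.sum((x - pred) ** 2) / np.sum((x - x.mean()) ** 2)
print(round(r2, 3))
```

Because the synthetic displacement is generated from the same model, the recovered coefficients land close to (3.0, 8.0, 1.5) and R² is near 1; with real, irregular breathing the fit quality drops, which is what the reported correlation coefficients quantify.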
In many patients respiratory motion causes motion artifacts in CT images, thereby inhibiting precise treatment planning and lowering the ability to target radiation to tumors. The 4D Phantom, which includes a 3D stage and a 1D stage that each are capable of arbitrary motion and timing, was developed to serve as an end-to-end radiation therapy QA device that could be used throughout CT imaging, radiation therapy treatment planning, and radiation therapy delivery. The dynamic accuracy of the system was measured with a camera system. The positional error was found to be equally likely to occur in the positive and negative directions for each axis, and the stage was within 0.1 mm of the desired position 85% of the time. In an experiment designed to use the 4D Phantom's encoders to measure trial-to-trial precision of the system, the 4D Phantom reproduced the motion during variable bag ventilation of a transponder that had been bronchoscopically implanted in a canine lung. In this case, the encoder readout indicated that the stage was within 10 microns of the sent position 94% of the time and that the RMS error was 7 microns. Motion artifacts were clearly visible in 3D and respiratory-correlated (4D) CT scans of phantoms reproducing tissue motion. In 4D CT scans, apparent volume was found to be directly correlated to instantaneous velocity. The system is capable of reproducing individual patient-specific tissue trajectories with a high degree of accuracy and precision and will be useful for end-to-end radiation therapy QA.
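The accuracy statistics quoted above (RMS error, fraction of samples within a tolerance) can be computed directly from a log of sent versus achieved stage positions. The sketch below uses simulated encoder data with hypothetical jitter; the numbers are not the phantom's actual measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical encoder log: sent vs. achieved stage positions (mm).
sent = np.linspace(0, 25, 1000)
achieved = sent + rng.normal(0, 0.005, sent.size)   # ~5-micron jitter

err = achieved - sent
rms_um = np.sqrt(np.mean(err ** 2)) * 1000          # RMS error (microns)
frac_within = np.mean(np.abs(err) * 1000 <= 10)     # fraction within 10 um
print(round(rms_um, 1), round(frac_within, 3))
```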
The mobility of lung tumors during the respiratory cycle is a source of error in radiotherapy treatment planning.
Spatiotemporal CT data sets can be used for studying the motion of lung tumors and inner organs during the
breathing cycle.
We present methods for the analysis of respiratory motion using 4D CT data in high temporal resolution. An
optical flow based reconstruction method was used to generate artifact-reduced 4D CT data sets of lung cancer
patients. The reconstructed 4D CT data sets were segmented and the respiratory motion of tumors and inner
organs was analyzed.
A non-linear registration algorithm is used to calculate the velocity field between consecutive time frames of
the 4D data. The resulting velocity field is used to analyze trajectories of landmarks and surface points. By
this technique, the maximum displacement of any surface point is calculated, and regions with large respiratory
motion are marked. To describe the tumor mobility, the motion of the lung tumor center in three orthogonal
directions is displayed. Estimated 3D appearance probabilities visualize the movement of the tumor during the
respiratory cycle in one static image. Furthermore, correlations between trajectories of the skin surface and the
trajectory of the tumor center are determined and skin regions are identified which are suitable for prediction of
the internal tumor motion.
The results of the motion analysis indicate that the described methods are suitable to gain insight into the
spatiotemporal behavior of anatomical and pathological structures during the respiratory cycle.
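Two of the analyses described, maximum displacement of a tracked point and the correlation between a skin-surface trajectory and the tumor-center trajectory, reduce to simple computations once the trajectories are extracted. The sketch below uses synthetic one-dimensional trajectories; the amplitudes and noise level are hypothetical, not patient data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1D trajectories over one breathing cycle (10 phases).
phase = np.linspace(0, 2 * np.pi, 10, endpoint=False)
tumor = 12.0 * np.sin(phase)                                  # tumor center (mm)
skin = 4.0 * np.sin(phase) + rng.normal(0, 0.2, phase.size)   # skin marker (mm)

# Maximum displacement of the tumor center over the cycle.
max_disp = tumor.max() - tumor.min()

# Pearson correlation between the skin-surface and tumor trajectories,
# used to judge whether this skin region predicts internal tumor motion.
r = np.corrcoef(skin, tumor)[0, 1]
print(round(max_disp, 1), round(r, 3))
```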
Respiratory motion is a significant source of error in conformal radiation therapy for the thorax and upper abdomen. Four-dimensional computed tomography (4D CT) has been proposed to reduce the uncertainty caused by internal respiratory organ motion. A 4D CT dataset is retrospectively reconstructed at various stages of a respiratory cycle. An important tool for 4D treatment planning is deformable image registration. An inverse consistent image registration is used to model lung motion from one respiratory stage to another during a breathing cycle. This diffeomorphic registration jointly estimates the forward and reverse transformations, providing more accurate correspondence between two images. Registration results and modeled motions in the lung are shown for three example respiratory stages. The results demonstrate that the consistent image registration satisfactorily models the large motions in the lung, providing a useful tool for 4D planning and delivery.
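The inverse-consistency idea, that the forward and reverse transformations should compose to the identity, can be illustrated in one dimension. This is a toy sketch, not the paper's registration: the forward transform is an arbitrary smooth function, and its inverse is recovered by fixed-point iteration rather than by joint estimation.

```python
import numpy as np

# Toy 1D transforms: forward g maps image A's coordinates to B's, and
# reverse h maps B's coordinates back to A's.  A perfectly
# inverse-consistent pair satisfies h(g(x)) = x; inverse consistent
# registration penalizes deviations from this identity.
def g(x):              # hypothetical forward transform
    return x + 2.0 * np.sin(x / 10.0)

def h(y):              # approximate inverse, via fixed-point iteration
    x = y.copy()
    for _ in range(50):
        x = y - 2.0 * np.sin(x / 10.0)
    return x

x = np.linspace(0, 30, 301)
ice = np.abs(h(g(x)) - x)      # inverse-consistency error per point
print(ice.max() < 1e-6)
```

The iteration converges because the update is a contraction (its derivative has magnitude at most 0.2), so the residual inverse-consistency error shrinks to numerical precision.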
Respiratory motion is a significant source of error in radiotherapy treatment planning. 4D-CT data sets can be useful to measure the impact of organ motion caused by breathing. However, modern CT scanners can scan only a limited region of the body simultaneously, and patients have to be scanned in segments consisting of multiple slices. For studying free-breathing motion, multislice CT scans can be collected simultaneously with digital spirometry over several breathing cycles. The 4D data set is assembled by sorting the free breathing multislice CT scans according to the couch position and the tidal volume. However, artifacts can occur because there are no data segments for exactly the same tidal volume at all couch positions. We present an optical flow based method for the reconstruction of 4D-CT data sets from multislice CT scans, which are collected simultaneously with digital spirometry. The optical flow between the scans is estimated by a non-linear registration method. The calculated velocity field is used to reconstruct a 4D-CT data set by interpolating data at user-defined tidal volumes. By this technique, artifacts can be reduced significantly. The reconstructed 4D-CT data sets are used for studying inner organ motion during the respiratory cycle. The procedures described were applied to reconstruct 4D-CT data
sets for four tumour patients who have been scanned during free breathing. The reconstructed 4D data sets were used to quantify organ displacements and to visualize the abdominothoracic organ motion.
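Interpolating data at a user-defined tidal volume, as described above, can be sketched in one dimension: given two scans at tidal volumes v1 and v2 and the displacement between them, a scan at an intermediate volume is synthesized by warping with a linearly scaled displacement field. This is a simplified stand-in; in the paper the field comes from the non-linear optical-flow registration, and all values here are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Two 1D "scans" of the same tissue profile at tidal volumes v1 and v2;
# the structure shifts by u_true between them (toy data).
n, u_true = 100, 6.0
x = np.arange(n, dtype=float)
profile = lambda s: np.exp(-((x - 45.0 - s) ** 2) / 20.0)
scan1, v1 = profile(0.0), 0.2      # hypothetical tidal volumes (litres)
scan2, v2 = profile(u_true), 0.8

# Reconstruct at a user-defined tidal volume vt by linearly scaling
# the displacement field and warping the first scan.
vt = 0.5
alpha = (vt - v1) / (v2 - v1)
recon = map_coordinates(scan1, [x - alpha * u_true], order=3)

max_err = np.abs(recon - profile(alpha * u_true)).max()
print(max_err < 0.01)
```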
Object detection in ultrasound fetal images is a challenging task because of the relatively low resolution and low signal-to-noise ratio. A direct inverse randomized Hough transform (DIRHT) is developed for filtering and detecting incomplete curves in images with strong noise. The DIRHT combines the advantages of both the inverse and the randomized Hough transforms. In the inverse image, curves are highlighted while a large number of unrelated pixels are removed, demonstrating a “curve-pass filtering” effect. Curves are detected by iteratively applying the DIRHT to the filtered image. The DIRHT was applied to head detection and measurement of the biparietal diameter (BPD) and head circumference (HC). No user input or geometric properties of the head were required for the detection. The detection and measurement took 2 seconds for each image on a PC. The inter-run variations and the differences between the automatic measurements and sonographers’ manual measurements were small compared with published inter-observer variations. The results demonstrated that the automatic measurements were consistent and accurate. This method provides a valuable tool for fetal examinations.
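The randomized Hough transform underlying the method above can be sketched for circle detection: sample random point triplets, compute the circle through each, and accumulate votes. This toy version detects a half-arc amid clutter; it omits the paper's inverse-transform filtering and iteration, and all geometry is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def circle_from_3pts(p1, p2, p3):
    """Center and radius of the circle through three points
    (None if the points are nearly collinear)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, np.hypot(ax - ux, ay - uy)

# Edge points: an incomplete circle (half arc) plus uniform clutter.
theta = rng.uniform(0, np.pi, 120)
arc = np.column_stack([50 + 20 * np.cos(theta), 60 + 20 * np.sin(theta)])
clutter = rng.uniform(0, 100, (80, 2))
pts = np.vstack([arc, clutter])

# Randomized Hough transform: sample triplets, vote in a coarse
# (cx, cy, r) accumulator, keep the best-supported circle.
votes = {}
for _ in range(3000):
    i, j, k = rng.choice(len(pts), 3, replace=False)
    c = circle_from_3pts(pts[i], pts[j], pts[k])
    if c is None or not (5 < c[2] < 50):
        continue
    key = (round(c[0]), round(c[1]), round(c[2]))
    votes[key] = votes.get(key, 0) + 1

cx, cy, r = max(votes, key=votes.get)
print(cx, cy, r)
```

Triplets drawn entirely from the arc all vote for the true circle, so the accumulator peak recovers it even though half the circle is missing, which is the property that makes the approach suitable for incomplete skull outlines.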
This paper describes an automatic method for measuring the biparietal diameter (BPD) and head circumference (HC) in ultrasound fetal images. A total of 217 ultrasound images were segmented by using a K-Mean classifier, and the head skull was detected in 214 of the 217 cases by an iterative randomized Hough transform developed for detection of incomplete curves in images with strong noise without user intervention. The automatic measurements were compared with conventional manual measurements by sonographers and a trained panel. The inter-run variations and differences between the automatic and conventional measurements were small compared with published inter-observer variations. The results showed that the automated measurements were as reliable as the expert measurements and more consistent. This method has great potential in clinical applications.
Issam El Naqa, Daniel Low, Gary Christensen, Parag Parikh, Joo Hyun Song, Michelle Nystrom, Wei Lu, Joseph Deasy, James Hubenschmidt, Sasha Wahab, Sasa Mutic, Anurag Singh, Jeffrey Bradley
We are developing 4D-CT to provide breathing motion information (trajectories) for radiation therapy treatment planning of lung cancer. Potential applications include optimization of intensity-modulated beams in the presence of breathing motion and intra-fraction target volume margin determination for conformal therapy. The images are acquired using a multi-slice CT scanner while the patient undergoes simultaneous quantitative spirometry. At each couch position, the CT scanner is operated in ciné mode and acquires up to 15 scans of 12 slices each. Each CT scan is associated with the measured tidal volume for retrospective reconstruction of 3D CT scans at arbitrary tidal volumes. The specific tasks of this project involve the development of automated registration of internal organ motion (trajectories) during breathing. A modified least-squares based optical flow algorithm tracks specific features of interest by modifying the eigenvalues of the gradient matrix (gradient structural tensor). Good correlations between the measured motion and spirometry-based tidal volume are observed and evidence of internal hysteresis is also detected.
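Feature selection via the gradient structure tensor can be sketched as follows. This is a standard Shi-Tomasi-style criterion (points where the tensor's smaller eigenvalue is large are well-conditioned for least-squares tracking); the paper's specific eigenvalue modification is not reproduced here, and the test image is synthetic.

```python
import numpy as np

def min_eig_structure_tensor(img, win=2):
    """Smaller eigenvalue of the gradient structure tensor at each pixel,
    summed over a (2*win+1)^2 window.  Large values mark features that
    are well-conditioned for least-squares optical-flow tracking."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    h, w = img.shape
    lam = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win, w - win):
            sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
            a, b, c = Ixx[sl].sum(), Ixy[sl].sum(), Iyy[sl].sum()
            # Smaller eigenvalue of the 2x2 tensor [[a, b], [b, c]].
            lam[y, x] = 0.5 * ((a + c) - np.sqrt((a - c) ** 2 + 4 * b * b))
    return lam

# A bright square on a dark background: its corners, where gradients in
# both directions coexist, should score highest.
img = np.zeros((32, 32))
img[10:22, 10:22] = 1.0
lam = min_eig_structure_tensor(img)
best = np.unravel_index(np.argmax(lam), lam.shape)
print(best)
```

Along a straight edge one gradient component vanishes, so the smaller eigenvalue is near zero there; only corner-like structures score highly, which is why such points track reliably.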
Segmentation of ultrasound images is challenging because of the noisy nature and subtle boundaries of objects in ultrasound images. This paper discusses object segmentation and identification for ultrasound fetal images. The feature space for segmentation consists of information extracted from three sources: gray level, texture, and wavelet-based decomposition. Several texture features, including Laws' texture-energy measures and features based on local gray level run-length, were found useful for segmentation. An unsupervised clustering procedure was used to classify each pixel into its most probable class. Morphological operations were used to remove noisy structures from the original gray level images and to improve the boundaries of the segmented objects. An algorithm was developed to locate objects of interest based on a multiscale implementation of an image transform. Fetal heads were identified and their corresponding measurements were made automatically. The method was tested with a set of clinical images. The resulting images showed clearly the segmented objects. The measurements agreed closely with a sonographer's measurements. The proposed method holds promise for processing and analyzing ultrasound fetal images.
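Laws' texture-energy measures, one of the feature sources named above, are built by convolving with separable kernels and averaging the absolute response locally. The sketch below computes one such measure (E5E5) on a synthetic image; the kernels are the standard 5-tap Laws kernels, but the image and window size are hypothetical.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Laws' standard 5-tap kernels: Level, Edge, Spot.
L5 = np.array([1, 4, 6, 4, 1], float)
E5 = np.array([-1, -2, 0, 2, 1], float)
S5 = np.array([-1, 0, 2, 0, -1], float)

def laws_energy(img, k1, k2, win=7):
    """Texture energy: filter with the separable 2D kernel outer(k1, k2),
    then average the absolute response over a local window."""
    resp = convolve(img.astype(float), np.outer(k1, k2), mode='nearest')
    return uniform_filter(np.abs(resp), size=win)

rng = np.random.default_rng(4)

# Synthetic image: smooth ramp (left half) vs. rough random texture
# (right half); edge-edge (E5E5) energy should separate the two regions.
img = np.zeros((32, 64))
img[:, :32] = np.linspace(0, 1, 32)[None, :]
img[:, 32:] = rng.random((32, 32))
e = laws_energy(img, E5, E5)
left, right = e[:, 8:24].mean(), e[:, 40:56].mean()
print(left < right)
```

Per-pixel feature vectors built from several such kernel pairs (plus gray level and wavelet features) are what the unsupervised clustering step would then classify.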