Purpose: To develop an imaging-based 3D catheter navigation system for transbronchial procedures including biopsy and tumor ablation using a single-plane C-arm x-ray system. The proposed system provides time-resolved catheter shape and position as well as motion compensated 3D airway roadmaps.
Approach: Device tracking used a continuous-sweep limited-angle (CLA) imaging mode in which the C-arm continuously rotates back and forth within a limited angular range while acquiring x-ray images. The catheter reconstruction was performed using a sliding window of the most recent x-ray images, which captures information on device shape and position versus time. The catheter was reconstructed using a model-based approach and was displayed together with the 3D airway roadmap extracted from a pre-navigational cone-beam CT (CBCT). The roadmap was updated at regular intervals using deformable registration to tomosynthesis reconstructions based on the CLA images. The approach was evaluated in a porcine study (three animals) and compared to a gold-standard CBCT reconstruction of the device.
Results: The average 3D root mean squared distance between the CLA and CBCT reconstruction of the catheter centerline was 1.0 ± 0.5 mm for a stationary catheter and 2.9 ± 1.1 mm for a catheter moving at ~1 cm/s. The average tip localization error was 1.3 ± 0.7 mm and 2.7 ± 1.8 mm, respectively.
Conclusions: The results indicate that catheter navigation based on the proposed single-plane C-arm imaging technique is feasible, with reconstruction errors similar to the diameter of a typical ablation catheter.
Accurate and efficient 3D catheter navigation within the airways is crucial for transbronchial procedures, including biopsy and tumor ablation. While electromagnetic tracking for 3D tip localization exists, it requires specialized equipment and time-consuming registration steps and is prone to CT-to-body divergence. Recently developed techniques allow 3D reconstruction of catheters and curvilinear devices from two simultaneously acquired projection images, but they require a biplane C-arm x-ray system, which is not widely available in clinical practice. This study investigates a method of time-resolved 3D tracking of catheters using the more widely available single-plane C-arm system. Imaging was performed using an acquisition protocol in which the C-arm continuously rotates back and forth within a limited angular range while acquiring x-ray images. The catheter reconstruction was performed using a sliding window of the most recent x-ray images, which captures information on device shape and position versus time. A model-based approach was used to estimate the catheter shape and position at the time of the last x-ray image acquisition. To evaluate the approach, a pig study was performed in which the proposed reconstruction was compared to a gold standard extracted from cone-beam CT (CBCT). The average 3D root mean squared distance between single-plane and CBCT reconstructions was 0.8 ± 0.3 mm for a stationary catheter and 2.4 ± 1.4 mm for a catheter moving at ~1 cm/s. The tip localization error ranged from 1.0 ± 0.4 mm to 3.8 ± 2.2 mm. The results indicate that catheter navigation based on the proposed single-plane C-arm imaging technique is feasible with reconstruction errors on the order of the typical ablation catheter diameter (2.0-3.2 mm).
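As a rough illustration of the sliding-window, model-based idea described above (not the authors' implementation), the sketch below fits 3D catheter control points by minimizing the reprojection distance to 2D centerlines extracted from the most recent frames. The projection matrices `Ps`, observed centerlines `obs`, and the initializer `ctrl_init` are assumed inputs, and temporal motion modeling is omitted.

```python
# Minimal sketch, assuming per-frame 3x4 projection matrices Ps, per-frame 2D
# centerline point sets obs (each an (M, 2) array), and an initial guess of
# catheter control points ctrl_init with shape (N, 3). Illustrative only.
import numpy as np
from scipy.optimize import minimize

def project(P, X):
    """Project 3D points X (N, 3) with a 3x4 matrix P to 2D image points."""
    Xh = np.c_[X, np.ones(len(X))] @ P.T
    return Xh[:, :2] / Xh[:, 2:3]

def window_cost(ctrl_flat, Ps, obs):
    """Sum of model-to-centerline distances over the sliding window of frames."""
    X = ctrl_flat.reshape(-1, 3)
    cost = 0.0
    for P, pts in zip(Ps, obs):
        proj = project(P, X)
        d = np.linalg.norm(proj[:, None, :] - pts[None, :, :], axis=2)
        cost += d.min(axis=1).sum()
    return cost

def reconstruct_catheter(ctrl_init, Ps, obs):
    res = minimize(window_cost, ctrl_init.ravel(), args=(Ps, obs), method="Powell")
    return res.x.reshape(-1, 3)
```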
Endovascular procedures performed in the angio suite have gained considerable popularity for treatment of ischemic stroke as well as aneurysms. However, new intracranial hemorrhage (ICH) may develop during these procedures, and it is highly desirable to arm the angio suite with real-time and reliable ICH monitoring tools. Currently, angio suites are equipped with scintillator-based flat panel detector (FPD) imaging systems for both planar and cone beam CT (CBCT) imaging applications. However, the reliability of CBCT for ICH imaging is insufficient due to its poor low-contrast detectability compared with MDCT and its lack of spectral imaging capability for differentiating between ICH, calcifications, and iodine staining from periprocedural contrast-enhanced imaging sequences. To preserve the benefits of the FPD for 2D imaging and certain high-contrast 3D imaging tasks while adding a high quality, quantitative, and affordable CT imaging capability to the angio room for intraoperative ICH monitoring, a hybrid detector system was developed that includes the existing FPD on the C-arm gantry and a strip photon-counting detector (PCD) that can be translated into the field-of-view for high quality PCD-CT imaging at a given brain section-of-interest. The hybrid system maintains the openness and ease of use of the C-arm system without the need to remodel the angio room and without installing a sliding-gantry MDCT (aka Angio CT) with orders of magnitude higher costs. Additionally, the cost of the strip PCD is much less than the cost of a large-area PCD. To demonstrate the feasibility and potential benefits of the hybrid PCD-FPD system, a series of physical phantom studies and human cadaver studies was performed at a gantry rotation speed (7 s) and radiation dose level that closely match those of clinical CBCT acquisitions. The experimental C-arm PCD-CT images demonstrated MDCT-equivalent low-contrast detectability and significantly reduced artifacts compared with FPD-based CBCT.
Dual-energy subtraction angiography (DESA) using fast kV switching has received attention for its potential to reduce misregistration artifacts in thoracic and abdominal imaging where patient motion is difficult to control; however, commercial interventional solutions are not currently available. The purpose of this work was to adapt an x-ray angiography system for 2D and 3D DESA. The platform for the dual-energy prototype was a commercially available x-ray angiography system with a flat panel detector and an 80 kW x-ray tube. Fast kV switching was implemented using custom x-ray tube control software that follows a user-defined switching program during a rotational acquisition. Measurements made with a high temporal resolution kV meter were used to calibrate the relationship between the requested and achieved kV and pulse width. To enable practical 2D and 3D imaging experiments, an automatic exposure control algorithm was developed to estimate patient thickness and select a dual-energy switching technique (kV and ms switching) that delivers a user-specified task CNR at the minimum air kerma to the interventional reference point. An XCAT-based simulation study conducted to evaluate low and high energy image registration for the scenario of 30-60 frame/s pulmonary angiography with respiratory motion found normalized RMSE values ranging from 0.16% to 1.06% in tissue-subtracted DESA images, depending on respiratory phase and frame rate. Initial imaging in a porcine model with a 60 kV, 10 ms, 325 mA / 120 kV, 3.2 ms, 325 mA switching technique demonstrated an ability to form tissue-subtracted images from a single contrast-enhanced acquisition.
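For readers unfamiliar with dual-energy tissue subtraction, a minimal weighted log-subtraction sketch is shown below; the array names and the weight `w` are illustrative assumptions, not the prototype's actual processing chain.

```python
# Minimal sketch: weighted log subtraction of a low/high-kV frame pair.
# low_kv and high_kv are assumed to be log-normalized (line-integral) images
# from adjacent pulses of the switching sequence; w is a soft-tissue
# cancellation weight, e.g. chosen from calibration so that the soft-tissue
# signal cancels while iodine (and bone) contrast remains.
import numpy as np

def tissue_subtract(low_kv: np.ndarray, high_kv: np.ndarray, w: float) -> np.ndarray:
    """Return the tissue-subtracted image low - w * high."""
    return low_kv - w * high_kv
```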
KEYWORDS: Blood circulation, Data modeling, Precision measurement, In vivo imaging, Monte Carlo methods, Aneurysms, Angiography, Hemodynamics, Image-guided intervention
In-vivo blood flow measurement, either catheter-based or derived from medical images, is increasingly used for clinical decision making. Most methods focus on a single vascular segment, a single catheter measurement, or simulations, due to mechanical and computational complexity. The accuracy of blood flow measurements in vascular segments is improved by considering the constraint of blood flow conservation across the whole network. Image-derived blood flow measurements for individual vessels are made with a variety of techniques including ultrasound, MR, 2D DSA, and 4D-DSA. Time-resolved DSA (4D) volumes are derived from 3D-DSA acquisitions and offer one such environment to measure the blood flow and the respective measurement uncertainty in a vascular network automatically, without user intervention. Vessel segmentation in the static DSA volume allows a mathematical description of the vessel connectivity and flow propagation direction. By constraining the allowable values of flow afforded by the measurement uncertainty and enforcing flow conservation at each junction, a reduction in the effective number of degrees of freedom in the vascular network can be made. This refines the overall measurement uncertainty in each vessel segment and provides a more robust measure of flow. Evaluations are performed with a simulated vascular network and with arterial segments in canine subjects and human renal 4D-DSA datasets. Results show a 30% reduction in flow uncertainty for a renal arterial case and a 2.5-fold improvement in flow uncertainty in some canine vessels. This method of flow uncertainty reduction may provide a more quantitative approach to treatment planning and evaluation in interventional radiology.
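The constraint-driven refinement can be illustrated at a single bifurcation: given measured flows and uncertainties for a parent and two daughter vessels, a weighted least-squares projection onto the conservation constraint yields refined flows. The sketch below covers a single junction only and is illustrative, not the full network formulation.

```python
# Illustrative sketch: enforce flow conservation Q_parent = Q_child1 + Q_child2
# at one bifurcation by weighted least squares (weights from measurement
# uncertainty), solved in closed form with a Lagrange multiplier.
import numpy as np

def refine_bifurcation(q_meas, sigma):
    """q_meas = [parent, child1, child2]; sigma = per-vessel uncertainties.
    Returns refined flows satisfying q0 - q1 - q2 = 0."""
    q = np.asarray(q_meas, float)
    Winv = np.diag(np.asarray(sigma, float) ** 2)   # inverse of the weight matrix
    a = np.array([1.0, -1.0, -1.0])                 # conservation constraint a.q = 0
    lam = (a @ q) / (a @ Winv @ a)                  # Lagrange multiplier
    return q - Winv @ a * lam

# Example: a slightly inconsistent set of measurements is made consistent.
print(refine_bifurcation([5.0, 2.7, 2.0], [0.5, 0.3, 0.3]))
```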
Transcatheter aortic valve replacement is a minimally invasive technique for the treatment of valvular heart disease, where an artificial valve mounted on a balloon catheter is guided to the aortic valve annulus. The balloon catheter is then expanded and displaces the diseased valve. We recently proposed an algorithm to track the 3D position, orientation, and shape of a prosthetic transcatheter aortic valve using biplane fluoroscopic imaging. In this work, we present a real-time hardware and software implementation of this prosthetic valve tracking method. A prototype was implemented which gathers fluoroscopic images from the angiography system via a research interface. A dynamic point cloud model of the valve is then used to estimate the 3D position, orientation, and shape by minimizing a cost function. The cost function is implemented using parallel processing on graphics processing units to improve performance. The system includes 3D rendering of the valve model and additional anatomy for visualization. The timing performance of the system was evaluated using a plastic cylinder phantom and a prosthetic valve mounted on a balloon catheter. The total computation time per frame for tracking and visualization using two different valve models was 46.11 ms and 43.88 ms, respectively. This would allow frame rates of up to 21.69 frames per second. The target registration error of the estimated valve model was 1.22 ± 0.29 mm. Combined with 3D echocardiographic imaging, this technique would enable real-time image guidance in 3D, where both the prosthetic valve and the soft tissue of the heart are visible.
Time-resolved cone beam CT angiography (CBCTA) imaging in the interventional suite has the potential to identify occluded vessels and the collaterals of symptomatic ischemic stroke patients. However, traditional C-arm gantries offer limited rotational speed, and thus the temporal resolution is limited when the conventional filtered backprojection (FBP) reconstruction is used. Recently, a model-based iterative image reconstruction algorithm, Synchronized MultiArtifact Reduction with Tomographic reconstruction (SMART-RECON), was proposed to reconstruct multiple CBCT image volumes per short-scan CBCT acquisition to improve temporal resolution. However, it is not clear how much temporal resolution can be improved using the SMART-RECON algorithm or what the corresponding reconstruction accuracy is. In this paper, a novel fractal-tree-based numerical time-resolved angiography phantom with ground-truth temporal information was introduced to quantify temporal resolution using a temporal blurring model analysis, along with two other quantification metrics introduced to quantify reconstruction accuracy: the relative root mean square error (rRMSE) and the Kullback-Leibler divergence (DKL). The quantitative results show that the temporal resolution is 0.8 s for SMART-RECON and 3.6 s for the FBP reconstruction. The reconstruction fidelity with SMART-RECON was substantially improved, with the rRMSE improved by at least 70% and the DKL improved by at least 40%.
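The two accuracy metrics can be written compactly; the sketch below assumes reconstructed and ground-truth time-attenuation curves sampled on a common grid, and the exact normalizations used in the paper may differ.

```python
# Sketch of the two quantification metrics named above (assumed definitions).
import numpy as np

def rrmse(recon, truth):
    """Relative RMSE, here normalized by the dynamic range of the ground truth."""
    return np.sqrt(np.mean((recon - truth) ** 2)) / (truth.max() - truth.min())

def kl_divergence(recon, truth, eps=1e-12):
    """KL divergence after treating both curves as normalized distributions."""
    p = np.clip(truth, eps, None); p = p / p.sum()
    q = np.clip(recon, eps, None); q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```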
The recently proposed 4D DSA technique enables reconstruction of time-resolved 3D volumes from two C-arm CT acquisitions. This provides information on the blood flow in neurovascular applications and can be used for the diagnosis and treatment of vascular diseases. For applications in the thorax and abdomen, respiratory motion can prevent successful 4D DSA reconstruction and cause severe artifacts. The purpose of this work is to propose a novel technique for motion compensated 4D DSA reconstruction to enable applications in the thorax and abdomen. The approach uses deformable 2D registration to align the projection images of a non-contrast and a contrast-enhanced scan. A subset of projection images acquired in a similar respiratory state is then selected, and an iterative simultaneous multiplicative algebraic reconstruction is applied to determine a 3D constraint volume. A 2D-3D registration step then aligns the remaining projection images with the 3D constraint volume. Finally, a constrained backprojection is performed to create a 3D volume for each projection image. A pig study was performed in which 4D DSA acquisitions were obtained with and without respiratory motion to evaluate the feasibility of the approach. The Dice similarity coefficient between the reference 3D constraint volume and the motion compensated reconstruction was 51.12%, compared to 35.99% without motion compensation. This technique could improve the workflow for procedures in interventional radiology, e.g., liver embolizations, where changes in blood flow have to be monitored carefully.
Time-resolved 3D angiographic data from 4D DSA provide a unique environment to explore physical properties of blood flow. Utilizing the pulsatility of the contrast waveform, the Fourier components can be used to track the waveform motion through vessels. Areas of strong pulsatility are determined from the FFT power spectrum. Using this method, 4D-DSA flow measurements agree within 7.6% and 6.8% RMSE with ICA PC VIPR and phantom flow-probe validation measurements, respectively. The availability of velocity and flow information with fast acquisition could provide a more quantitative approach to treatment planning and evaluation in interventional radiology.
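One way to realize this kind of Fourier-based tracking is sketched below: the phase lag of the dominant pulsatile component between two centerline locations a known distance apart gives a transit time and hence a velocity. The names and the phase-delay formulation are illustrative assumptions, not the published algorithm.

```python
# Sketch: velocity from the phase lag of the dominant pulsatile Fourier
# component between two time-concentration curves c1(t), c2(t) sampled with
# time step dt at centerline points separated by distance_mm. Illustrative only.
import numpy as np

def velocity_from_phase(c1, c2, dt, distance_mm):
    C1 = np.fft.rfft(c1 - c1.mean())
    C2 = np.fft.rfft(c2 - c2.mean())
    freqs = np.fft.rfftfreq(len(c1), d=dt)
    k = np.argmax(np.abs(C1[1:])) + 1                 # dominant pulsatile component
    dphi = np.angle(C2[k]) - np.angle(C1[k])          # phase lag of downstream curve
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi       # wrap to [-pi, pi)
    delay_s = -dphi / (2 * np.pi * freqs[k])          # time lag between the curves
    return distance_mm / delay_s                      # mm/s
```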
In this work, a newly developed reconstruction algorithm, Synchronized MultiArtifact Reduction with Tomographic RECONstruction (SMART-RECON), was applied to C-arm cone beam CT perfusion (CBCTP) imaging. This algorithm contains a special rank regularizer designed to reduce limited-view artifacts associated with super-short-scan reconstructions. As a result, high temporal sampling and high temporal resolution image reconstructions were achieved using an interventional C-arm x-ray system. The algorithm was evaluated in terms of the fidelity of the dynamic contrast update curves and the accuracy of perfusion parameters through numerical simulation studies. Results show that not only were the dynamic curves accurately recovered (relative root mean square error ∈ [3%, 5%] compared with [13%, 22%] for FBP), but the noise in the final perfusion maps was also dramatically reduced. Compared with filtered backprojection, SMART-RECON generated CBCTP maps with much improved capability in differentiating lesions with perfusion deficits from the surrounding healthy brain tissue.
Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system as well as navigating based on the 2D projection images can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology-preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations such as virtual endoscopic views or glass pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase the usability and spatial orientation for the user. A combination of synchronized endoscopic and glass pipe views is proposed, where the virtual endoscopic camera position is determined from the device tip location as well as the previous camera position using a Kalman filter in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass pipe view to further improve spatial orientation. The proposed techniques could considerably improve the workflow of minimally invasive procedures for the treatment of cerebrovascular diseases.
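Once corresponding centerline points have been matched via the monotonic mapping, the 3D step can be realized with standard linear (DLT) triangulation; the sketch below assumes 3x4 projection matrices for the two views and is one possible implementation of the epipolar reconstruction, not necessarily the authors'.

```python
# Sketch: linear (DLT) triangulation of one matched device point from two views.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points of the same
    device point in the two views. Returns the 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # homogeneous least-squares solution
    X = Vt[-1]
    return X[:3] / X[3]
```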
KEYWORDS: Spatial resolution, 3D image processing, Modulation transfer functions, Angiography, Spatial frequencies, Temporal resolution, 3D image reconstruction, Point spread functions, 3D acquisition, Medical imaging
C-arm CT three-dimensional (3-D) digital subtraction angiography (DSA) reconstructions cannot provide temporal information to radiologists. Four-dimensional (4-D) DSA provides a time series of 3-D volumes by utilizing temporal dynamics in the two-dimensional (2-D) projections with a constraining image reconstruction approach. The volumetric limiting spatial resolution (VLSR) of 4-D DSA is quantified and compared to that of 3-D DSA. The effects of varying the 4-D DSA parameters of 2-D projection blurring kernel size and threshold of the 3-D DSA (constraining image) were investigated for an in silico phantom (ISPH) and a physical phantom (PPH). The PPH consisted of a 76-micron tungsten wire. An 8-s/248-frame/198-deg scan protocol acquired the projection data. VLSR was determined from MTF curves generated from each 2-D transverse slice of every (248) 4-D temporal frame. 4-D DSA results for PPH and ISPH were compared to the 3-D DSA. 3-D DSA analysis resulted in a VLSR of 2.28 and 1.69 lp/mm for ISPH and PPH, respectively. Kernel sizes of either 10×10 or 20×20 pixels with a 3-D DSA constraining image threshold of 10% provided 4-D DSA VLSR nearest to that of 3-D DSA. 4-D DSA yielded 2.21 and 1.67 lp/mm, with percent errors of 3.1% and 1.2% for ISPH and PPH, respectively, as compared to 3-D DSA. This research indicates 4-D DSA is capable of retaining the resolution of 3-D DSA.
Static C-arm CT 3D FDK baseline reconstructions (3D-DSA) are unable to provide temporal information to radiologists. 4D-DSA provides a time series of 3D volumes by implementing a constrained-image (thresholded 3D-DSA) reconstruction that utilizes the temporal dynamics in the 2D projections. The volumetric limiting spatial resolution (VLSR) of 4D-DSA is quantified and compared to a 3D-DSA reconstruction using the same 3D-DSA parameters. The effects of varying, over significant ranges, the 4D-DSA parameters of 2D blurring kernel size applied to the projections and threshold applied to the 3D-DSA when generating the constraining image were investigated for a scanned phantom (SPH) and an electronic phantom (EPH). The SPH consisted of a 76-micron tungsten wire encased in a 47 mm O.D. plastic, radially concentric, thin-walled support structure. An 8-second/248-frame/198° scan protocol acquired the raw projection data. VLSR was determined from averaged MTF curves generated from each 2D transverse slice of every (248) 4D temporal frame (3D). 4D results for SPH and EPH were compared to the 3D-DSA. Analysis of the 3D-DSA resulted in a VLSR of 2.28 and 1.69 lp/mm for the EPH and SPH, respectively. Kernel (2D) sizes of either 10x10 or 20x20 pixels with a threshold of 10% of the 3D-DSA as a constraining image provided 4D-DSA VLSR nearest to the 3D-DSA. The 4D-DSA algorithm yielded 2.21 and 1.67 lp/mm, with percent errors of 3.1% and 1.2% for the EPH and SPH, respectively, as compared to the 3D-DSA. This research indicates 4D-DSA is capable of retaining the resolution of the 3D-DSA.
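A rough sketch of deriving an MTF curve and a limiting-resolution estimate from a thin-wire slice profile is given below; the background handling, slice/frame averaging, and the exact cutoff convention used in these studies are not reproduced here.

```python
# Sketch: MTF from a 1-D cut through the reconstructed wire (line spread
# function), with the limiting resolution read off at a 10% modulation level.
# `profile` and `pixel_mm` (in-plane pixel size) are assumed inputs.
import numpy as np

def mtf_from_lsf(profile, pixel_mm, cutoff=0.10):
    lsf = profile - profile[:5].mean()            # crude background removal
    lsf = lsf / lsf.sum()                         # normalize area to 1
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_mm) # spatial frequencies in lp/mm
    above = np.where(mtf >= cutoff)[0]
    return freqs, mtf, freqs[above[-1]]           # highest frequency above cutoff
```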
KEYWORDS: Image processing, Denoising, Data acquisition, Angiography, Medical imaging, Arteries, Algorithm development, In vivo imaging, Optimization (mathematics), Image quality
In this work, we developed a novel denoising algorithm for DSA image series. This algorithm takes advantage of the low-rank nature of DSA image sequences to enable a dramatic reduction in radiation and/or contrast dose in DSA imaging. Both spatial and temporal regularizers were introduced in the optimization algorithm to further reduce noise. To validate the method, in vivo animal studies were conducted with a Siemens Artis Zee biplane system using different radiation dose levels and contrast concentrations. Conventionally processed DSA images and DSA images generated using the novel denoising method were compared using the absolute noise standard deviation and the contrast-to-noise ratio (CNR). With the application of the novel denoising algorithm, image quality can be maintained with a factor of 20 reduction in radiation dose and/or a factor of 2 reduction in contrast dose. Image processing is completed on a GPU within a second for a 10-s DSA data acquisition.
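The low-rank idea alone (without the spatial and temporal regularizers of the full algorithm) can be sketched as singular-value soft-thresholding of the Casorati matrix formed by the DSA frames:

```python
# Minimal sketch of low-rank denoising of a DSA series by singular-value
# soft-thresholding; tau is a user-chosen threshold. Not the full algorithm.
import numpy as np

def lowrank_denoise(frames, tau):
    """frames: (T, H, W) DSA series; returns a denoised series of the same shape."""
    T, H, W = frames.shape
    M = frames.reshape(T, H * W).T                 # (H*W, T) Casorati matrix
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)                   # shrink singular values
    return ((U * s) @ Vt).T.reshape(T, H, W)
```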
This paper provides a fast and patient-specific scatter artifact correction method for cone-beam computed tomography (CBCT) used in image-guided interventional procedures. Due to increased irradiated volume of interest in CBCT imaging, scatter radiation has increased dramatically compared to 2D imaging, leading to a degradation of image quality. In this study, we propose a scatter artifact correction strategy using an analytical convolution-based model whose free parameters are estimated using a rough estimation of scatter profiles from the acquired cone-beam projections. It was evaluated using Monte Carlo simulations with both monochromatic and polychromatic X-ray sources. The results demonstrated that the proposed method significantly reduced the scatter-induced shading artifacts and recovered CT numbers.
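A generic convolution-based scatter correction of the kind referred to above can be sketched as follows; the Gaussian kernel, its amplitude, and the fixed-point update are illustrative assumptions rather than the paper's calibrated model.

```python
# Sketch: estimate scatter as the (current) primary estimate convolved with a
# broad Gaussian kernel, then subtract it; iterate a few times. The kernel
# amplitude and width stand in for the model's free parameters.
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_scatter(projection, amplitude, sigma_px, n_iter=3):
    primary = projection.copy()
    for _ in range(n_iter):                        # simple fixed-point iteration
        scatter = amplitude * gaussian_filter(primary, sigma_px)
        primary = np.clip(projection - scatter, 0.0, None)
    return primary
```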
C-arm cone-beam CT is an emerging tool for intraoperative imaging, but currently exhibits modest soft-tissue imaging capability. This work adapts a spectrum of statistical iterative reconstruction approaches to C-arm CBCT and investigates performance in imaging of low-contrast tasks pertinent to soft-tissue surgical guidance. Experiments involved a mobile C-arm and phantoms and cadavers presenting soft-tissue structures imaged using 3D FBP and penalized likelihood reconstruction. Statistical reconstruction - especially non-quadratic PL - boosted soft-tissue image quality through reduction of noise and artifacts, therefore presenting promise for interventional imaging. Further investigation of task-specific performance may overcome conventional tradeoffs in noise, resolution, and dose.
Fast, accurate, deformable image registration is an important aspect of image-guided interventions. Among the factors that can confound registration is the presence of additional material in the intraoperative image - e.g., contrast bolus or a surgical implant - that was not present in the prior image. Existing deformable registration methods generally fail to account for tissue excised between image acquisitions and typically simply “move” voxels within the images with no ability to account for tissue that is removed or introduced between scans. We present a variant of the Demons algorithm to accommodate such content mismatch. The approach combines segmentation of mismatched content with deformable registration featuring an extra pseudo-spatial dimension representing a reservoir from which material can be drawn into the registered image. Previous work tested the registration method in the presence of tissue excision (“missing tissue”). The current paper tests the method in the presence of additional material in the target image and presents a general method by which either missing or additional material can be accommodated. The method was tested in phantom studies, simulations, and cadaver models in the context of intraoperative cone-beam CT with three examples of content mismatch: a variable-diameter bolus (contrast injection); surgical device (rod), and additional material (bone cement). Registration accuracy was assessed in terms of difference images and normalized cross correlation (NCC). We identify the difficulties that traditional registration algorithms encounter when faced with content mismatch and evaluate the ability of the proposed method to overcome these challenges.
Intraoperative imaging could improve patient safety and quality assurance (QA) via the detection of subtle complications that might otherwise only be found hours after surgery. Such capability could therefore reduce morbidity and the need for additional intervention. Among the severe adverse events that could be more quickly detected by high-quality intraoperative imaging is acute intracranial hemorrhage (ICH), conventionally assessed using post-operative CT. A mobile C-arm capable of high-quality cone-beam CT (CBCT) in combination with advanced image reconstruction techniques is reported as a means of detecting ICH in the operating room. The system employs an isocentric C-arm with a flat-panel detector in dual gain mode, correction of x-ray scatter and beam-hardening, and a penalized likelihood (PL) iterative reconstruction method. Performance in ICH detection was investigated using a quantitative phantom focusing on (non-contrast-enhanced) blood-brain contrast, an anthropomorphic head phantom, and a porcine model with injection of fresh blood bolus. The visibility of ICH was characterized in terms of contrast-to-noise ratio (CNR) and qualitative evaluation of images by a neurosurgeon. Across a range of size and contrast of the ICH as well as radiation dose from the CBCT scan, the CNR was found to increase from ~2.2-3.7 for conventional filtered backprojection (FBP) to ~3.9-5.4 for PL at equivalent spatial resolution. The porcine model demonstrated superior ICH detectability for PL. The results support the role of high-quality mobile C-arm CBCT employing advanced reconstruction algorithms for detecting subtle complications in the operating room at lower radiation dose and lower cost than intraoperative CT scanners and/or fixed-room C-arms. Such capability could present a potentially valuable aid to patient safety and QA.
Purpose: An increasingly popular minimally invasive approach to resection of oropharyngeal/base-of-tongue cancer is made possible by a transoral technique conducted with the assistance of a surgical robot (transoral robotic surgery, TORS). However, the highly deformed surgical setup (neck flexed, mouth open, and tongue retracted) compared to the typical patient orientation in preoperative images poses a challenge to guidance and localization of the tumor target and adjacent critical anatomy. Intraoperative cone-beam CT (CBCT) can account for such deformation, but due to the low contrast of soft tissue in CBCT images, direct localization of the target and critical tissues in CBCT images can be difficult. Such structures may be more readily delineated in preoperative CT or MR images, so a method to deformably register such information to intraoperative CBCT could offer significant value. This paper details the initial implementation of a deformable registration framework to align preoperative images with the deformed intraoperative scene and gives preliminary evaluation of the geometric accuracy of registration in CBCT-guided TORS. Method: The deformable registration aligns preoperative CT or MR to intraoperative CBCT by integrating two established approaches. The volume of interest is first segmented (specifically, the region of the tongue from the tip to the hyoid), and a Gaussian mixture (GM) model of surface point clouds is used for rigid initialization (GMRigid) as well as an initial deformation (GMNonRigid). Next, refinement of the registration is performed using the Demons algorithm applied to distance transformations of the GM-registered and CBCT volumes. The registration accuracy of the framework was quantified in preliminary studies using a cadaver emulating preoperative and intraoperative setups. Geometric accuracy of registration was quantified in terms of target registration error (TRE) and surface distance error. Results: With each step of the registration process, the framework demonstrated improved registration, achieving mean TRE of 3.0 mm following the GM rigid step, 1.9 mm following the GM nonrigid step, and 1.5 mm at the output of the registration process. Analysis of surface distance demonstrated a corresponding improvement of 2.2, 0.4, and 0.3 mm, respectively. The evaluation of registration error revealed accurate alignment in the region of interest for base-of-tongue robotic surgery, owing to point-set selection in the GM steps and refinement in the deep aspect of the tongue in the Demons step. Conclusions: A promising framework has been developed for CBCT-guided TORS in which intraoperative CBCT provides a basis for registration of preoperative images to the highly deformed intraoperative setup. The registration framework is invariant to imaging modality (accommodating preoperative CT or MR) and is robust against CBCT intensity variations and artifact, provided corresponding segmentation of the volume of interest. The approach could facilitate overlay of preoperative planning data directly in stereo-endoscopic video in support of CBCT-guided TORS.
Because tomographic reconstructions are ill-conditioned, algorithms that incorporate additional knowledge about the imaging volume generally have improved image quality. This is particularly true when measurements are noisy or have missing data. This paper presents a general framework for inclusion of the attenuation contributions of specific component objects known to be in the field-of-view as part of the reconstruction. Components such as surgical devices and tools may be modeled explicitly as being part of the attenuating volume but are inexactly known with respect to their locations, poses, and possible deformations. The proposed reconstruction framework, referred to as Known-Component Reconstruction (KCR), is based on this novel parameterization of the object, a likelihood-based objective function, and alternating optimizations between registration and image parameters to jointly estimate both the underlying attenuation and the unknown registrations. A deformable KCR (dKCR) approach is introduced that adopts a control-point-based warping operator to accommodate shape mismatches between the component model and the physical component, thereby allowing for a more general class of inexactly known components. The KCR and dKCR approaches are applied to low-dose cone-beam CT data with spine fixation hardware present in the imaging volume. Such data are particularly challenging due to photon starvation effects in projection data behind the metallic components. The proposed algorithms are compared with traditional filtered-backprojection and penalized-likelihood reconstructions and found to provide substantially improved image quality. Whereas traditional approaches exhibit significant artifacts that complicate detection of breaches or fractures near metal, the KCR framework tends to provide good visualization of anatomy right up to the boundary of surgical devices.
Localizing sub-palpable nodules in minimally invasive video-assisted thoracic surgery (VATS) presents a significant challenge. To overcome inherent problems of preoperative nodule tagging using CT fluoroscopic guidance, an intraoperative C-arm cone-beam CT (CBCT) image-guidance system has been developed for direct localization of subpalpable tumors in the OR, including real-time tracking of surgical tools (including the thoracoscope) and video-CBCT registration for augmentation of the thoracoscopic scene. Acquisition protocols for nodule visibility in the inflated and deflated lung were delineated in phantom and animal/cadaver studies. Motion-compensated reconstruction was implemented to account for motion induced by the ventilated contralateral lung. Experience in CBCT-guided targeting of simulated lung nodules included phantoms, porcine models, and cadavers. Phantom studies defined low-dose acquisition protocols providing contrast-to-noise ratio sufficient for lung nodule visualization, confirmed in porcine specimens with simulated nodules (3-6 mm diameter PE spheres, ~100-150 HU contrast, 2.1 mGy). Nodule visibility in CBCT of the collapsed lung, with reduced contrast according to air volume retention, was more challenging, but initial studies confirmed visibility using scan protocols at slightly increased dose (~4.6-11.1 mGy). Motion-compensated reconstruction employing a 4D deformation map in the backprojection process reduced artifacts associated with motion blur. Augmentation of thoracoscopic video with renderings of the target and critical structures (e.g., pulmonary artery) showed geometric accuracy consistent with the camera calibration and tracking system (2.4 mm registration error). Initial results suggest a potentially valuable role for CBCT guidance in VATS, improving precision in minimally invasive, lung-conserving surgeries, avoiding critical structures, obviating the burdens of preoperative localization, and improving patient safety.
Localization of target vertebrae is an essential step in minimally invasive spine surgery, with conventional methods relying on "level counting" - i.e., manual counting of vertebrae under fluoroscopy starting from readily identifiable anatomy (e.g., the sacrum). The approach requires an undesirable amount of radiation and time and is prone to counting errors due to the similar appearance of vertebrae in projection images; wrong-level surgery occurs in 1 of every ~3000 cases. This paper proposes a method to automatically localize target vertebrae in x-ray projections using 3D-2D registration between preoperative CT (in which vertebrae are preoperatively labeled) and intraoperative fluoroscopy. The registration uses an intensity-based approach with a gradient-based similarity metric and the CMA-ES algorithm for optimization. Digitally reconstructed radiographs (DRRs) and a robust similarity metric are computed on GPU to accelerate the process. Evaluation in clinical CT data included 5,000 PA and LAT projections randomly perturbed to simulate human variability in setup of a mobile intraoperative C-arm. The method demonstrated 100% success for the PA view (projection error: 0.42 mm) and 99.8% success for the LAT view (projection error: 0.37 mm). Initial implementation on GPU provided automatic target localization within about 3 s, with further improvement underway via multi-GPU implementation. The ability to automatically label vertebrae in fluoroscopy promises to streamline surgical workflow, improve patient safety, and reduce wrong-site surgeries, especially in large patients for whom manual methods are time consuming and error prone.
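The registration loop can be sketched with a toy parallel-beam DRR, a simplified gradient-correlation stand-in for the gradient-based similarity metric, and the `cma` package for CMA-ES. The real system uses GPU perspective DRRs and a robust metric, so everything below (the DRR, the pose parameterization, and the metric) is an illustrative assumption.

```python
# Sketch: 3D-2D registration of a volume to a fluoroscopic frame via CMA-ES.
# Toy pose = (in-plane rotation angle, shift x, shift y); toy DRR = rotated
# parallel-beam sum. Requires: numpy, scipy, and the `cma` package.
import numpy as np
import cma
from scipy.ndimage import rotate, sobel

def drr(vol, angle_deg, tx, ty):
    """Crude parallel-beam DRR: rotate the volume, sum along one axis, shift."""
    proj = rotate(vol, angle_deg, axes=(1, 2), reshape=False, order=1).sum(axis=1)
    return np.roll(np.roll(proj, int(tx), axis=0), int(ty), axis=1)

def gradient_similarity(a, b):
    """Simplified gradient-based similarity (stand-in for the paper's metric)."""
    ga = sobel(a, 0) + 1j * sobel(a, 1)
    gb = sobel(b, 0) + 1j * sobel(b, 1)
    return np.abs(np.vdot(ga, gb)) / (np.linalg.norm(ga) * np.linalg.norm(gb) + 1e-12)

def register(vol, fluoro, x0=(0.0, 0.0, 0.0)):
    cost = lambda x: -gradient_similarity(drr(vol, *x), fluoro)  # maximize similarity
    es = cma.CMAEvolutionStrategy(list(x0), 5.0, {'maxiter': 40})
    while not es.stop():
        xs = es.ask()
        es.tell(xs, [cost(x) for x in xs])
    return es.result.xbest
```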
Conventional surgical tracking configurations carry a variety of limitations in line-of-sight, geometric accuracy, and mismatch with the surgeon's perspective (for video augmentation). With increasing utilization of mobile C-arms, particularly those allowing cone-beam CT (CBCT), there is opportunity to better integrate surgical trackers at bedside to address such limitations. This paper describes a tracker configuration in which the tracker is mounted directly on the C-arm. To maintain registration within a dynamic coordinate system, a reference marker visible across the full C-arm rotation is implemented, and the "Tracker-on-C" configuration is shown to provide improved target registration error (TRE) over a conventional in-room setup - (0.9±0.4) mm vs (1.9±0.7) mm, respectively. The system also can generate digitally reconstructed radiographs (DRRs) from the perspective of a tracked tool ("x-ray flashlight"), the tracker, or the C-arm ("virtual fluoroscopy"), with geometric accuracy in virtual fluoroscopy of (0.4±0.2) mm. Using a video-based tracker, planning data and DRRs can be superimposed on the video scene from a natural perspective over the surgical field, with geometric accuracy of (0.8±0.3) pixels for planning data overlay and (0.6±0.4) pixels for DRR overlay across all C-arm angles. The field-of-view of fluoroscopy or CBCT can also be overlaid on real-time video ("Virtual Field Light") to assist C-arm positioning. The fixed transformation between the x-ray image and tracker facilitated quick, accurate intraoperative registration. The workflow and precision associated with a variety of realistic surgical tasks were significantly improved using the Tracker-on-C - for example, nearly a factor of 2 reduction in time required for C-arm positioning, reduction or elimination of dose in "hunting" for a specific fluoroscopic view, and confident placement of the x-ray FOV on the surgical target. The proposed configuration streamlines the integration of C-arm CBCT with real-time tracking and demonstrated utility in a spectrum of image-guided interventions (e.g., spine surgery) benefiting from improved accuracy, enhanced visualization, and reduced radiation exposure.
This paper proposes to utilize a patient-specific prior to augment intraoperative sparse-scan data to accurately reconstruct the aspects of the region that have been changed by a surgical procedure in image-guided surgeries. When anatomical changes are introduced by a surgical procedure, only a sparse set of x-ray images is acquired, and the prior volume is registered to these data. Since all the information about the patient anatomy except for the surgical change is already known from the prior volume, we highlight only the change by creating difference images between the new scan and digitally reconstructed radiographs (DRRs) computed from the registered prior volume. The region of change (RoC) is reconstructed from these sparse difference images by a penalized likelihood (PL) reconstruction method regularized by a compressed sensing penalty. When the surgical changes are local and relatively small, the RoC reconstruction involves only a small volume size and a small number of projections, allowing much faster computation and lower radiation dose than are needed to reconstruct the entire surgical volume. The reconstructed RoC is merged with the prior volume to visualize an updated surgical field. We apply this novel approach to sacroplasty phantom data obtained from a cone-beam CT (CBCT) test bench and vertebroplasty data from a fresh cadaver acquired with a C-arm CBCT system with a flat-panel detector (FPD).
Intraoperative cone-beam CT (CBCT) could offer an important advance to thoracic surgeons in directly localizing subpalpable nodules during surgery. An image-guidance system is under development using mobile C-arm CBCT to directly localize tumors in the OR, potentially reducing the cost and logistical burden of conventional preoperative localization and facilitating safer surgery by visualizing critical structures surrounding the surgical target (e.g., pulmonary artery, airways, etc.). To utilize the wealth of preoperative image/planning data and to guide targeting under conditions in which the tumor may not be directly visualized, a deformable registration approach has been developed that geometrically resolves images of the inflated (i.e., inhale or exhale) and deflated states of the lung. This novel technique employs a coarse model-driven approach using the lung surface and bronchial airways for fast registration, followed by an image-driven registration using a variant of the Demons algorithm to improve target localization to within ~1 mm. Two approaches to model-driven registration are presented and compared - the first involving point correspondences on the surface of the deflated and inflated lung and the second a mesh evolution approach. Intensity variations (i.e., higher image intensity in the deflated lung) due to expulsion of air from the lungs are accounted for using an a priori lung density modification, and its improvement on the performance of the intensity-driven Demons algorithm is demonstrated. Preliminary results of the combined model-driven and intensity-driven registration process demonstrate accuracy consistent with requirements in minimally invasive thoracic surgery in both target localization and critical structure avoidance.
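The a priori lung density modification can be illustrated by a simple mass-preservation rescaling inside the lung mask before the intensity-driven Demons step; the exact rule used in the study is not reproduced here, and the names and scaling law below are assumptions.

```python
# Sketch of an a priori lung-density modification (assumed mass-preservation
# rule): HU inside the inflated-lung mask are rescaled toward the expected
# deflated-lung values before intensity-driven registration.
import numpy as np

def modify_lung_density(hu_inflated, lung_mask, volume_ratio):
    """volume_ratio = V_inflated / V_deflated (> 1 after partial collapse)."""
    out = hu_inflated.astype(float)
    density = (out[lung_mask] + 1000.0) / 1000.0     # approx. density relative to water
    out[lung_mask] = density * volume_ratio * 1000.0 - 1000.0
    return out
```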
KEYWORDS: Lung, Data acquisition, Image registration, Compressed sensing, Tomography, Image analysis, Data modeling, Surveillance, Signal attenuation, Signal to noise ratio
This paper introduces a general reconstruction technique for using unregistered prior images within model-based penalized-likelihood reconstruction. The resulting estimator is implicitly defined as the maximizer of an objective composed of a likelihood term, which enforces a fit to the data measurements and incorporates the heteroscedastic statistics of the tomographic problem, and a penalty term that penalizes differences from the prior image. Compressed sensing (p-norm) penalties are used to allow for differences between the reconstruction and the prior. Moreover, the penalty is parameterized with registration terms that are jointly optimized as part of the reconstruction to allow for mismatched images. We apply this novel approach to synthetic data using a digital phantom as well as tomographic data derived from a cone-beam CT test bench. The test bench data include sparse data acquisitions of a custom modifiable anthropomorphic lung phantom that can simulate lung nodule surveillance. Sparse reconstructions using this approach demonstrate the simultaneous incorporation of prior imagery and the necessary registration to utilize those priors.
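In the notation assumed here (not taken verbatim from the paper), the implicitly defined estimator can be written as

$$(\hat{\mu}, \hat{\lambda}) \;=\; \arg\max_{\mu,\,\lambda}\; L(\mu; y) \;-\; \beta\,\big\| \Psi\big(\mu - W(\lambda)\,\mu_{\mathrm{prior}}\big) \big\|_p^p,$$

where $L(\mu; y)$ is the log-likelihood fit to the measurements $y$ with heteroscedastic statistics, $\mu_{\mathrm{prior}}$ is the (unregistered) prior image, $W(\lambda)$ is the registration operator whose parameters $\lambda$ are optimized jointly with the image $\mu$, $\Psi$ is an optional sparsifying transform, $\beta$ controls the penalty strength, and $p \le 1$ gives the compressed-sensing norm.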
The ability to perform fast, accurate, deformable registration with intraoperative images featuring surgical excisions was investigated for use in cone-beam CT (CBCT) guided head and neck surgery. Existing deformable registration methods generally fail to account for tissue excised between image acquisitions and typically simply "move" voxels within the images with no ability to account for tissue that is removed (or introduced) between scans. We have thus developed an approach in which an extra dimension is added during the registration process to act as a sink for voxels removed during the course of the procedure. A series of cadaveric images acquired using a prototype CBCT-capable C-arm was used to model tissue deformation and excision occurring during a surgical procedure, and the ability of deformable registration to correctly account for anatomical changes under these conditions was investigated. Using a previously developed version of the Demons deformable registration algorithm, we identify the difficulties that traditional registration algorithms encounter when faced with excised tissue and present a modified version of the algorithm better suited for use in intraoperative image-guided procedures. Studies were performed for different deformation and tissue excision tasks, and registration performance was quantified in terms of the ability to accurately account for tissue excision while avoiding spurious deformations arising around the excision.
Intraoperative imaging modalities are becoming more prevalent in recent years, and the need for integration of these modalities with surgical guidance is rising, creating new possibilities as well as challenges. In the context of such emerging technologies and new clinical applications, a software architecture for cone-beam CT (CBCT) guided surgery has been developed with emphasis on binding open-source surgical navigation libraries and integrating intraoperative CBCT with novel, application-specific registration and guidance technologies. The architecture design is focused on accelerating translation of task-specific technical development in a wide range of applications, including orthopaedic, head-and-neck, and thoracic surgeries. The surgical guidance system is interfaced with a prototype mobile C-arm for high-quality CBCT, and through a modular software architecture, integration of different tools and devices consistent with the surgical workflow in each of these applications is realized. Specific modules are developed according to the surgical task, such as: 3D-3D rigid or deformable registration of preoperative images, surgical planning data, and up-to-date CBCT images; 3D-2D registration of planning and image data in real-time fluoroscopy and/or digitally reconstructed radiographs (DRRs); compatibility with infrared, electromagnetic, and video-based trackers used individually or in hybrid arrangements; augmented overlay of image and planning data in endoscopic or in-room video; real-time "virtual fluoroscopy" computed from GPU-accelerated DRRs; and multi-modality image display. The platform aims to minimize offline data processing by exposing quantitative tools that analyze and communicate factors of geometric precision. The system was translated to preclinical phantom and cadaver studies for assessment of fiducial registration error (FRE) and target registration error (TRE), showing sub-mm accuracy in targeting and video overlay within intraoperative CBCT. The work culminates in the development of a CBCT guidance system (reported here for the first time) that leverages the technical developments in C-arm CBCT and associated technologies for realizing a high-performance system for translation to clinical studies.
Registration of endoscopic video to preoperative CT facilitates high-precision surgery of the head, neck, and skull base. Conventional video-CT registration is limited by the accuracy of the tracker and does not use the underlying video or CT image data. A new image-based video registration method has been developed to overcome the limitations of conventional tracker-based registration. This method adds to a navigation system based on intraoperative C-arm cone-beam CT (CBCT), in turn providing high-accuracy registration of video to the surgical scene. The resulting registration enables visualization of the CBCT and planning data within the endoscopic video. The system incorporates a mobile C-arm, integrated with an optical tracking system, video endoscopy, deformable registration of preoperative CT with intraoperative CBCT, and 3D visualization. As in the tracker-based approach, the endoscope is first localized with the optical tracking system; this is followed by a direct 3D image-based registration of the video to the CBCT. In this way, the system achieves video-CBCT registration that is both fast and accurate. Application in skull-base surgery demonstrates overlay of critical structures (e.g., carotid arteries) and surgical targets with sub-mm accuracy. Phantom and cadaver experiments show consistent improvement of target registration error (TRE) in video overlay over conventional tracker-based registration - e.g., 0.92 mm versus 1.82 mm for image-based and tracker-based registration, respectively. The proposed method represents a two-fold advance: first, through registration of video to up-to-date intraoperative CBCT, and second, through direct 3D image-based video-CBCT registration, which together provide more confident visualization of target and normal tissues within up-to-date images.
KEYWORDS: Image filtering, Angiography, X-rays, Image quality, Signal to noise ratio, Signal attenuation, X-ray imaging, Data centers, Data acquisition, Physics
Rotational angiography (RA) is widely used clinically to obtain 3D data. In many procedures, e.g., neurovascular interventions, the imaged field of view (FOV) is much larger than the region of interest (ROI), thereby subjecting the patient to unnecessary x-ray dose. To reduce the dose in these procedures, we have proposed placing an x-ray attenuating filter with an open aperture (ROI) in the x-ray beam (called filtered region-of-interest (FROI) RA). We have shown that this approach yields high quality data for centered objects of interest (OoIs). In this study, we investigate the noise behavior of the FROI approach for off-center OoIs. Using filter-specific attenuation and noise characteristics, simulated FROI projection images were generated. The intensities in the peripheral region were equalized, and the 3D data were reconstructed. For each reconstructed voxel, the intersections with the full intensity beam (ROI) were determined for each projection, and noise properties were evaluated. Off-center OoIs intersect the high intensity beam in more than 60% of the projections (for an ROI covering 40% of the FOV area), with intersection frequency increasing with increasing ROI area and OoI proximity to the central region. The noise increases with distance from the central region by up to a factor of two. Integral dose reductions range between 40% and 85%, depending on ROI area and filter thickness. Substantial dose reductions (40-85%) are thus achieved with less than a factor of two increase in noise for OoIs peripheral to the central region, indicating that the FROI approach might be an alternative for reducing dose during standard procedures.
With a steady increase of CT interventions, population dose is increasing. Thus, new approaches must be developed to reduce the dose. In this paper, we present a means for rapid identification and reconstruction of objects of interest in reconstructed data. Active shape models are first trained on sets of data obtained from similar subjects. A reconstruction is performed using a limited number of views. As each view is added, the reconstruction is evaluated using the active shape models. Once the object of interest is identified, the volume of interest alone is reconstructed, saving reconstruction time. Note that the data outside of the objects of interest can be reconstructed using fewer views or lower resolution, providing the context of the region-of-interest data. An additional feature of our algorithm is that a reliable segmentation of objects of interest is achieved from a limited set of projections. Evaluations were performed using simulations with Shepp-Logan phantoms and animal studies. In our evaluations, regions of interest are identified using about 33 projections on average. The overlap of the identified regions with the true regions of interest is approximately 91%. The identification of the region of interest requires about 1/5 of the time required for full reconstruction; the time for reconstruction of the region of interest is currently determined by the fraction of voxels in the region of interest (i.e., voxels in the region of interest / voxels in the full volume). The algorithm has several important clinical applications, e.g., rotational angiography, digital tomosynthesis mammography, and limited view computed tomography.
The use of cone beam computed tomography (CBCT) is growing in the clinical arena, due to its ability to provide 3-D information during interventions, its high diagnostic quality (sub-millimeter resolution), and its short scanning times (10 seconds). In many situations, the reconstructions suffer from artifacts from high contrast objects (due mainly to angular sampling by the projections or to beam hardening), which can reduce image quality. In this study, we propose a novel algorithm to reduce these artifacts. In our approach, these objects are identified and then removed in the sinogram space by using computational geometry techniques. In particular, the object is identified in a reconstruction from a few views. Then, the rays (projection lines) intersecting the high contrast objects are identified using the technique of topological walk in a dual space, which effectively models the problem as a visibility problem and provides a solution in optimal time and space complexity. As a result, the corrections can be performed in real time, independent of the projection image size. Subsequently, a full reconstruction is performed by leaving out the high contrast objects in the reconstructions. Evaluations were performed using simulations and animal studies. The artifacts are significantly reduced when using our approach. The optimal time and space complexity and relatively simple implementation make our approach attractive for artifact reduction.
Computed Tomography (CT) is widely used in modern clinical settings. In certain procedures, the region of interest (ROI) is often considerably smaller than the imaged field of view (FOV), thereby subjecting the patient to extra dose. For these procedures, we propose a method of filtered region-of-interest (FROI) CT. In this procedure, a predetermined ROI is imaged with standard x-ray intensity, while surrounding areas are imaged using a substantially lower x-ray intensity by interposing an x-ray attenuator in the beam. For the FROI-CT acquisitions in this study, a gadolinium filter with a circular central opening is placed in the x-ray beam of a standard clinical rotational angiography system. The resulting image contains a high intensity ROI, a low intensity region surrounding the ROI, and a transition region between these two. Three-dimensional reconstruction using these images directly would result in artifacts; therefore, the intensities in the images are equalized prior to reconstruction. To equalize the intensities, two images are first obtained, one unfiltered and one with the filter in place. The corresponding data in the two images are used in a linear least-squares fit to determine the equalization function. The transition region is equalized using a radial filter technique, based on a comparison of the data on either side of the transition region after intensity equalization. The technique was evaluated using rotational angiographic sequences of a head phantom obtained with and without the filter in place. Differences between conventional (unfiltered) and FROI-equalized images of the head phantom were approximately 5%. Differences in reconstructed images (conventional and FROI) were 7% on average inside the reconstructed ROI. These results are comparable to those obtained for two separate standard acquisitions. A 50% dose reduction was obtained for a filter ROI radius of 50% of the FOV radius. These results indicate that FROI-CT can provide the physician with image detail comparable to conventional image acquisition while reducing dose to the patient.
Three-dimensional datasets of complex objects are readily available from the tomographic modalities, and fusion of these datasets leads to new understanding of the data. Automatic alignment of the objects is difficult or time-consuming when substantial misalignments are present, when point correspondences cannot be established, or when the solution space is non-convex. These issues effectively exclude most optimization algorithms used in conventional data alignment. Here, we present a particle swarm optimization (PSO) approach that is not sensitive to initial conditions, local minima, or non-convexity of the solution space.
Intercommunicating particle swarms are randomly placed in the solution space (representing the parameters of the rigid transformations). Each member of each swarm traverses the solution space, constantly evaluating the objective function at its own position and communicating with other members of the swarm about theirs. In addition, the swarms communicate with each other. Through this information sharing between swarm members and between the swarms, the space is searched completely and efficiently, and as a result all swarms converge near the globally optimal rigid transformation. To evaluate the technique, high-resolution micro-CT datasets of single mouse heads were acquired with large initial misalignments.
Using two communicating particle swarms in the same solution space, six distinct mouse head objects were aligned, finding the approximate global minimum in about 25 iterations or 140 seconds on a standard PC, independent of initial conditions. Faster speeds (or better accuracy) can be obtained by relaxing (or restricting) the convergence criteria. These results indicate that the particle swarm approach may be a valuable tool for stand-alone or hybrid alignments.
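A single-swarm toy version of the search strategy is sketched below for a generic cost function; the actual alignment used multiple intercommunicating swarms and an image-based rigid-registration cost, so the setup here is only illustrative.

```python
# Toy particle-swarm minimizer over a dim-dimensional box.
import numpy as np

def pso(cost, dim, n_particles=30, iters=100, bounds=(-50.0, 50.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # per-particle best positions
    pbest_val = np.array([cost(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()            # swarm-wide best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g

# Example: recover a 6-parameter rigid offset from a simple quadratic surrogate.
target = np.array([10.0, -5.0, 3.0, 0.1, -0.2, 0.05])
print(pso(lambda p: np.sum((p - target) ** 2), dim=6))
```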
Three-dimensional (3D) vessel data from CTA or MRA are not always available prior to or during endovascular interventional procedures, whereas multiple 2D projection angiograms often are. Unfortunately, patient movement, table movement, and gantry sag during angiographic procedures can lead to large errors in gantry-based imaging geometries and thereby incorrect 3D reconstructions. Therefore, we are developing methods for combining vessel data from multiple 2D angiographic views obtained during interventional procedures to provide 3D vessel data during these procedures. Multiple 2D projection views of carotid vessels are obtained, and the vessel centerlines are indicated. For each pair of views, endpoints of the 3D centerlines are reconstructed using triangulation based on the provided gantry geometry. Previous investigations indicated that translation errors were the primary source of error in the reconstructed 3D data. Therefore, the errors in the translations relating the imaging systems are corrected by minimizing the L1 distance between the reconstructed endpoints, after which the 3D centerlines are reconstructed using epipolar constraints for every pair of views. Evaluations were performed using simulations, phantom data, and clinical cases. In simulation and phantom studies, the RMS error decreased from 6.0 mm obtained with biplane approaches to 0.5 mm with our technique. Centerlines in clinical cases are smoother and more consistent than those calculated from individual biplane pairs. The 3D centerlines are calculated in about 2 seconds. These results indicate that reliable 3D vessel data can be generated for treatment planning or revision during interventional procedures.
KEYWORDS: Computer simulations, Arteries, Angiography, 3D modeling, Surgery, X-rays, Data modeling, Instrument modeling, Data acquisition, Signal attenuation
Endovascular interventional procedures are being used more frequently in cardiovascular surgery. Unfortunately, procedural failure, e.g., vessel dissection, may occur and is often related to improper guidewire and/or device selection. To support the surgeon's decision process, and because of the importance of the guidewire in positioning devices, we propose a method to determine the guidewire path prior to insertion using a model of its elastic potential energy coupled with a representative graph construction.
The 3D vessel centerline and sizes are determined for a specified vessel. Points in planes perpendicular to the vessel centerline are generated. For each pair of consecutive planes, a vector set is generated which joins all points in these planes. We construct a graph representing these vector sets as nodes. The nodes representing adjacent vector sets are joined by edges with weights calculated as a function of the angle between the corresponding vectors (nodes). The optimal path through this weighted directed graph is then determined using shortest-path algorithms, such as a topological-sort-based shortest-path algorithm or Dijkstra's algorithm. Volumetric data of an internal carotid artery phantom (Ø 3.5 mm) were acquired. Several independent guidewire (Ø 0.4 mm) placements were performed, and the 3D paths were determined using rotational angiography.
The average RMS distance between the actual and the average simulated guidewire path was 0.7 mm; the computation time to determine the path was 3 seconds. The ability to predict the guidewire path inside vessels may facilitate calculation of vessel-branch access and force estimation on devices and the vessel wall.
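The graph construction and shortest-path step can be sketched with networkx; `planes` is assumed to be a list of candidate point arrays in consecutive cross-sectional planes, and the angle-squared edge weight is one simple choice of bending penalty (the paper's exact weighting is not reproduced).

```python
# Sketch: nodes are vectors joining points in adjacent planes; edges connect
# consecutive vectors and are weighted by the bending angle between them.
# Dijkstra (via networkx) then finds the minimum-bending path.
import numpy as np
import networkx as nx

def guidewire_path(planes):
    """planes: list of (N_i, 3) arrays of candidate points, one per plane."""
    G = nx.DiGraph()
    for k in range(len(planes) - 2):
        for i in range(len(planes[k])):
            for j in range(len(planes[k + 1])):
                v1 = planes[k + 1][j] - planes[k][i]
                for m in range(len(planes[k + 2])):
                    v2 = planes[k + 2][m] - planes[k + 1][j]
                    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
                    angle = np.arccos(np.clip(c, -1.0, 1.0))
                    G.add_edge((k, i, j), (k + 1, j, m), weight=angle ** 2)
    # virtual source/sink so any starting/ending vector may be chosen
    for i in range(len(planes[0])):
        for j in range(len(planes[1])):
            G.add_edge("src", (0, i, j), weight=0.0)
    K = len(planes) - 2
    for i in range(len(planes[K])):
        for j in range(len(planes[K + 1])):
            G.add_edge((K, i, j), "sink", weight=0.0)
    return nx.shortest_path(G, "src", "sink", weight="weight")
```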