Fast, accurate, deformable image registration is an important aspect of image-guided interventions. Among the factors that can confound registration is the presence of additional material in the intraoperative image - e.g., a contrast bolus or a surgical implant - that was not present in the prior image. Existing deformable registration methods generally fail to account for tissue excised between image acquisitions and typically simply “move” voxels within the images with no ability to account for tissue that is removed or introduced between scans. We present a variant of the Demons algorithm to accommodate such content mismatch. The approach combines segmentation of mismatched content with deformable registration featuring an extra pseudo-spatial dimension representing a reservoir from which material can be drawn into the registered image. Previous work tested the registration method in the presence of tissue excision (“missing tissue”). The current paper tests the method in the presence of additional material in the target image and presents a general method by which either missing or additional material can be accommodated. The method was tested in phantom studies, simulations, and cadaver models in the context of intraoperative cone-beam CT with three examples of content mismatch: a variable-diameter bolus (contrast injection), a surgical device (rod), and additional material (bone cement). Registration accuracy was assessed in terms of difference images and normalized cross correlation (NCC). We identify the difficulties that traditional registration algorithms encounter when faced with content mismatch and evaluate the ability of the proposed method to overcome these challenges.
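The NCC metric used to assess registration accuracy is standard; as a minimal sketch (an illustrative helper, not the study's implementation), it can be computed as:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two same-shaped images.

    Returns 1.0 for identical images, -1.0 for perfectly
    anti-correlated images, and ~0 for unrelated content.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0
```

In studies such as this, the metric would be evaluated between the registered image and the target image, often restricted to a region of interest around the mismatched content.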
Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or
anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve
navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to
critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the
engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a
clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to
other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is
underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data on real-time,
high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic
nerves, and the surgical target volume (e.g., tumor). An automated camera calibration process was developed that
demonstrated a mean re-projection error of (0.7 ± 0.3) pixels and a mean target registration error of (2.3 ± 1.5) mm. An
IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which
each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented)
video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to
assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and
targets by means of video overlay during surgical approach, resection, and reconstruction.
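Re-projection error of the kind reported above is conventionally the pixel distance between projected 3D points and their detected 2D locations. A simplified pinhole-camera sketch (a hypothetical helper; the abstract does not detail the calibration pipeline):

```python
import numpy as np

def mean_reprojection_error(K, pts3d_cam, pts2d):
    """Mean pixel distance between 3D points projected through a pinhole
    camera with intrinsic matrix K (points given in the camera frame)
    and their detected 2D image locations."""
    proj = (K @ np.asarray(pts3d_cam, dtype=float).T).T  # homogeneous image coords
    proj = proj[:, :2] / proj[:, 2:3]                    # perspective divide
    err = np.linalg.norm(proj - np.asarray(pts2d, dtype=float), axis=1)
    return float(err.mean())
```

A full calibration also estimates lens distortion and extrinsics; this sketch shows only the error metric itself.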
Localizing sub-palpable nodules in minimally invasive video-assisted thoracic surgery (VATS) presents a significant
challenge. To overcome inherent problems of preoperative nodule tagging using CT fluoroscopic guidance, an
intraoperative C-arm cone-beam CT (CBCT) image-guidance system has been developed for direct localization of
sub-palpable tumors in the OR, including real-time tracking of surgical tools (including the thoracoscope) and video-CBCT
registration for augmentation of the thoracoscopic scene. Acquisition protocols for nodule visibility in the inflated and
deflated lung were delineated in phantom and animal/cadaver studies. Motion compensated reconstruction was
implemented to account for motion induced by the ventilated contralateral lung. Experience in CBCT-guided targeting of
simulated lung nodules included phantoms, porcine models, and cadavers. Phantom studies defined low-dose acquisition
protocols providing contrast-to-noise ratio sufficient for lung nodule visualization, confirmed in porcine specimens with
simulated nodules (3–6 mm diameter PE spheres, ~100–150 HU contrast, 2.1 mGy). Nodule visibility in CBCT of the
collapsed lung, with reduced contrast according to air volume retention, was more challenging, but initial studies
confirmed visibility using scan protocols at slightly increased dose (~4.6–11.1 mGy). Motion compensated reconstruction
employing a 4D deformation map in the backprojection process reduced artifacts associated with motion blur.
Augmentation of thoracoscopic video with renderings of the target and critical structures (e.g., pulmonary artery) showed
geometric accuracy consistent with camera calibration and the tracking system (2.4 mm registration error). Initial results
suggest a potentially valuable role for CBCT guidance in VATS, improving precision in minimally invasive, lung-conserving
surgeries, avoiding critical structures, obviating the burdens of preoperative localization, and improving patient
safety.
Conventional surgical tracking configurations carry a variety of limitations in line-of-sight, geometric accuracy, and
mismatch with the surgeon's perspective (for video augmentation). With increasing utilization of mobile C-arms,
particularly those allowing cone-beam CT (CBCT), there is opportunity to better integrate surgical trackers at bedside to
address such limitations. This paper describes a tracker configuration in which the tracker is mounted directly on the C-arm.
To maintain registration within a dynamic coordinate system, a reference marker visible across the full C-arm
rotation is implemented, and the "Tracker-on-C" configuration is shown to provide improved target registration error
(TRE) over a conventional in-room setup: (0.9 ± 0.4) mm vs. (1.9 ± 0.7) mm, respectively. The system can also generate
digitally reconstructed radiographs (DRRs) from the perspective of a tracked tool ("x-ray flashlight"), the tracker, or the
C-arm ("virtual fluoroscopy"), with geometric accuracy in virtual fluoroscopy of (0.4±0.2) mm. Using a video-based
tracker, planning data and DRRs can be superimposed on the video scene from a natural perspective over the surgical
field, with geometric accuracy (0.8±0.3) pixels for planning data overlay and (0.6±0.4) pixels for DRR overlay across all
C-arm angles. The field-of-view of fluoroscopy or CBCT can also be overlaid on real-time video ("Virtual Field Light")
to assist C-arm positioning. The fixed transformation between the x-ray image and tracker facilitated quick, accurate
intraoperative registration. The workflow and precision associated with a variety of realistic surgical tasks were
significantly improved using the Tracker-on-C - for example, nearly a factor of 2 reduction in time required for C-arm
positioning, reduction or elimination of dose in "hunting" for a specific fluoroscopic view, and confident placement of
the x-ray FOV on the surgical target. The proposed configuration streamlines the integration of C-arm CBCT with real-time
tracking and demonstrates utility in a spectrum of image-guided interventions (e.g., spine surgery) benefiting from
improved accuracy, enhanced visualization, and reduced radiation exposure.
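The "virtual fluoroscopy" DRRs described above are, at their core, line integrals of attenuation through the CT volume. A minimal parallel-beam sketch (the actual system computes perspective DRRs with GPU acceleration):

```python
import numpy as np

def parallel_drr(volume, axis=0):
    """Parallel-beam DRR: sum attenuation values along one volume axis.

    A real C-arm DRR traces divergent rays from the x-ray focal spot
    through the volume; summing along an axis illustrates the same
    ray-integral principle for the simpler parallel-beam case.
    """
    return np.asarray(volume, dtype=float).sum(axis=axis)
```

Rendering from the perspective of a tracked tool or the C-arm amounts to choosing the ray geometry (source position and detector plane) before integrating.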
The ability to perform fast, accurate, deformable registration with intraoperative images featuring surgical excisions was
investigated for use in cone-beam CT (CBCT) guided head and neck surgery. Existing deformable registration methods
generally fail to account for tissue excised between image acquisitions and typically simply "move" voxels within the
images with no ability to account for tissue that is removed (or introduced) between scans. We have thus developed an
approach in which an extra dimension is added during the registration process to act as a sink for voxels removed during
the course of the procedure. A series of cadaveric images acquired using a prototype CBCT-capable C-arm were used to
model tissue deformation and excision occurring during a surgical procedure, and the ability of deformable registration
to correctly account for anatomical changes under these conditions was investigated. Using a previously developed
version of the Demons deformable registration algorithm, we identify the difficulties that traditional registration
algorithms encounter when faced with excised tissue and present a modified version of the algorithm better suited for
use in intraoperative image-guided procedures. Studies were performed for different deformation and tissue excision
tasks, and registration performance was quantified in terms of the ability to accurately account for tissue excision while
avoiding spurious deformations arising around the excision.
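The "extra dimension" acting as a sink can be pictured as padding the volume with a reservoir slab into which excised voxels are mapped. A toy illustration of the concept (not the registration algorithm itself; the slab fill value of air at roughly -1000 HU is an assumption):

```python
import numpy as np

def add_reservoir_slab(volume, fill=-1000.0):
    """Append one pseudo-spatial slab (filled with air, ~-1000 HU by
    default) to act as a sink for voxels excised between scans.

    Toy illustration of the extra-dimension idea; the actual method
    couples this with a modified Demons deformable registration.
    """
    vol = np.asarray(volume, dtype=float)
    slab = np.full((1,) + vol.shape[1:], fill)
    return np.concatenate([vol, slab], axis=0)
```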
Registration of endoscopic video to preoperative CT facilitates high-precision surgery of the head, neck, and skull base. Conventional video-CT registration is limited by the accuracy of the tracker and does not use the underlying video or CT image data. A new image-based video registration method has been developed to overcome the limitations of conventional tracker-based registration. This method adds to a navigation system based on intraoperative C-arm cone-beam CT (CBCT), in turn providing high-accuracy registration of video to the surgical scene. The resulting registration enables visualization of the CBCT and planning data within the endoscopic video. The system incorporates a mobile C-arm integrated with an optical tracking system, video endoscopy, deformable registration of preoperative CT with intraoperative CBCT, and 3D visualization. As in the tracker-based approach, image-based video-CBCT registration first localizes the endoscope with the optical tracking system, followed by a direct 3D image-based registration of the video to the CBCT. In this way, the system achieves video-CBCT registration that is both fast and accurate. Application in skull-base surgery demonstrates overlay of critical structures (e.g., carotid arteries) and surgical targets with sub-mm accuracy. Phantom and cadaver experiments show consistent improvement of target registration error (TRE) in video overlay over conventional tracker-based registration (e.g., 0.92 mm versus 1.82 mm for image-based and tracker-based registration, respectively). The proposed method represents a two-fold advance: first, registration of video to up-to-date intraoperative CBCT; and second, direct 3D image-based video-CBCT registration. Together, these provide more confident visualization of target and normal tissues within up-to-date images.
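TRE figures such as those above are computed from corresponding target points in the two coordinate frames. One common convention is the RMS distance over targets (whether mean or RMS is reported varies by study):

```python
import numpy as np

def target_registration_error(registered_pts, true_pts):
    """RMS Euclidean distance between registered target points and their
    ground-truth positions (one common TRE convention)."""
    r = np.asarray(registered_pts, dtype=float)
    t = np.asarray(true_pts, dtype=float)
    d = np.linalg.norm(r - t, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```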
Intraoperative imaging modalities have become increasingly prevalent in recent years, and the need to integrate these modalities
with surgical guidance is rising, creating new possibilities as well as challenges. In the context of such emerging
technologies and new clinical applications, a software architecture for cone-beam CT (CBCT) guided surgery has been
developed with emphasis on binding open-source surgical navigation libraries and integrating intraoperative CBCT with
novel, application-specific registration and guidance technologies. The architecture design is focused on accelerating
translation of task-specific technical development in a wide range of applications, including orthopaedic, head-and-neck,
and thoracic surgeries. The surgical guidance system is interfaced with a prototype mobile C-arm for high-quality CBCT
and through a modular software architecture, integration of different tools and devices consistent with surgical workflow
in each of these applications is realized. Specific modules are developed according to the surgical task, such as: 3D-3D
rigid or deformable registration of preoperative images, surgical planning data, and up-to-date CBCT images; 3D-2D
registration of planning and image data in real-time fluoroscopy and/or digitally reconstructed radiographs (DRRs);
compatibility with infrared, electromagnetic, and video-based trackers used individually or in hybrid arrangements;
augmented overlay of image and planning data in endoscopic or in-room video; real-time "virtual fluoroscopy" computed
from GPU-accelerated DRRs; and multi-modality image display. The platform aims to minimize offline data processing
by exposing quantitative tools that analyze and communicate factors of geometric precision. The system was
translated to preclinical phantom and cadaver studies for assessment of fiducial (FRE) and target registration error (TRE)
showing sub-mm accuracy in targeting and video overlay within intraoperative CBCT. The work culminates in the development
of a CBCT guidance system (reported here for the first time) that leverages technical developments in C-arm
CBCT and associated technologies to realize a high-performance system for translation to clinical studies.
Advances in computer vision have made possible robust 3D reconstruction of monocular endoscopic video. These
reconstructions accurately represent the visible anatomy and, once registered to pre-operative CT data, enable
a navigation system to track directly through video, eliminating the need for an external tracking system. Video
registration provides the means for a direct interface between an endoscope and a navigation system and allows
a shorter chain of rigid-body transformations to be used to solve the patient/navigation-system registration. To
solve this registration step we propose a new 3D-3D registration algorithm based on Trimmed Iterative Closest
Point (TrICP) [1] and the z-buffer algorithm [2]. The algorithm takes as input a 3D point cloud of relative scale
with the origin at the camera center, an isosurface from the CT, and an initial guess of the scale and location.
Our algorithm utilizes only the visible polygons of the isosurface from the current camera location during each
iteration to minimize the search area of the target region and robustly reject outliers of the reconstruction. We
present example registrations in the sinus passage applicable to both sinus surgery and transnasal surgery. To
evaluate our algorithm's performance we compare it to registration via the Optotrak and report closest point-to-surface
distance error. We show that our algorithm has a mean closest-distance error of 0.2268 mm.
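The closest-distance point-to-surface error reported above can be approximated by nearest-vertex distances when the isosurface is densely sampled. A brute-force sketch (a KD-tree would be used at realistic point counts):

```python
import numpy as np

def mean_closest_point_error(points, surface_samples):
    """Mean distance from each registered point to its nearest sample on
    the surface (point-to-vertex approximation of point-to-surface)."""
    p = np.asarray(points, dtype=float)[:, None, :]
    s = np.asarray(surface_samples, dtype=float)[None, :, :]
    nearest = np.linalg.norm(p - s, axis=2).min(axis=1)
    return float(nearest.mean())
```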