This paper presents an improved bronchoscope tracking method for bronchoscopic navigation using scale invariant
features and sequential Monte Carlo sampling. Although image-based methods are widely discussed in
the bronchoscope tracking community, they still depend on characteristic information such as bronchial
bifurcations or folds and cannot automatically resume the tracking procedure after failures, which usually
result from problematic bronchoscopic video frames or airway deformation. To overcome these problems, we propose
a new approach that integrates scale invariant feature-based camera motion estimation into sequential Monte
Carlo sampling to achieve an accurate and robust tracking. In our approach, sequential Monte Carlo sampling
is employed to recursively estimate the posterior probability densities of the bronchoscope camera motion parameters
according to the observation model based on scale invariant feature-based camera motion recovery. We
evaluate our proposed method on patient datasets. Experimental results illustrate that our proposed method
can track a bronchoscope more accurately and robustly than the current state-of-the-art method, increasing
the tracking performance by 38.7% without using an additional position sensor.
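The sequential Monte Carlo scheme described above can be illustrated with a minimal one-dimensional particle filter; this is a toy sketch of the resample-predict-reweight cycle, not the paper's implementation, and the Gaussian observation model merely stands in for the feature-based motion estimate:

```python
import math
import random

def smc_step(particles, weights, motion_noise, obs, obs_noise):
    """One sequential Monte Carlo update for a scalar pose parameter:
    resample, predict with a random-walk motion model, then reweight by
    a Gaussian observation likelihood."""
    particles = random.choices(particles, weights=weights, k=len(particles))
    particles = [p + random.gauss(0.0, motion_noise) for p in particles]
    weights = [math.exp(-0.5 * ((p - obs) / obs_noise) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return particles, weights

# Toy run: recover a constant true pose of 2.0 from noisy observations.
random.seed(0)
particles = [random.uniform(-5.0, 5.0) for _ in range(500)]
weights = [1.0 / 500.0] * 500
for _ in range(30):
    obs = 2.0 + random.gauss(0.0, 0.2)
    particles, weights = smc_step(particles, weights, 0.1, obs, 0.2)
estimate = sum(p * w for p, w in zip(particles, weights))
```

In the paper's setting, each particle would carry full six-degree-of-freedom camera motion parameters rather than a single scalar.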
Image-guided bronchoscopy usually requires tracking the bronchoscope camera position and orientation to align
the preinterventional 3-D computed tomography (CT) images to the intrainterventional 2-D bronchoscopic video
frames. Current state-of-the-art image-based algorithms often fail in bronchoscope tracking due to a lack
of information on depth and rotation around the viewing (running) direction of the bronchoscope camera. To
address these problems, this paper presents a novel bronchoscope tracking method for bronchoscopic navigation
based on a low-cost optical mouse sensor, bronchial structure information, and image registration. We first utilize
an optical mouse sensor to automatically measure the insertion depth of the bronchoscope and its rotation
around the viewing direction. We integrate the outputs of such a 2-D sensor by performing centerline matching
on the basis of bronchial structure information before optimizing the bronchoscope camera motion parameters
during image registration. We assess our new method on phantom data. Experimental results illustrate that,
compared to our previous image-based method, our proposed method is a promising means for bronchoscope
tracking, significantly improving the tracking performance.
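The centerline matching step can be illustrated by mapping a measured insertion depth onto a precomputed bronchial centerline; this is a sketch under our own assumptions (a polyline centerline, arc-length parameterization, and the function name `point_at_depth` are illustrative, not the paper's):

```python
import math

def point_at_depth(centerline, depth):
    """Return the 3-D point at arc length `depth` (e.g. insertion depth in mm)
    along a polyline centerline, clamping beyond the distal end."""
    travelled = 0.0
    for a, b in zip(centerline, centerline[1:]):
        seg = math.dist(a, b)
        if travelled + seg >= depth:
            t = (depth - travelled) / seg
            return tuple(pa + t * (pb - pa) for pa, pb in zip(a, b))
        travelled += seg
    return centerline[-1]

# A toy centerline: 10 mm straight, then a 90-degree turn.
path = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 10.0, 0.0)]
```

The point obtained this way would then serve as the initial camera position refined during image registration.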
The extraction and analysis of the pulmonary artery in computed tomography (CT) of the chest can be an
important, but time-consuming step for the diagnosis and treatment of lung disease, in particular in non-contrast
data, where the pulmonary artery has low contrast and frequently merges with adjacent tissue of similar intensity.
We here present a new method for the automatic segmentation of the pulmonary artery based on an adaptive
model, Hough and Euclidean distance transforms, and spline fitting, which works equally well on non-contrast
and contrast enhanced data. An evaluation on 40 patient data sets and a comparison to manual segmentations
in terms of Jaccard index, sensitivity, specificity, and minimum mean distance shows its overall robustness.
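The Jaccard index used in the evaluation has a compact definition; a minimal sketch over binary segmentations represented as sets of voxel indices:

```python
def jaccard(seg, ref):
    """Jaccard index |A ∩ B| / |A ∪ B| between a segmentation and a manual
    reference, each given as a set of voxel indices; 1.0 means identical."""
    union = seg | ref
    return len(seg & ref) / len(union) if union else 1.0
```

Sensitivity and specificity follow analogously from the voxel-level true/false positive and negative counts.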
This paper presents a hybrid camera tracking method that uses electromagnetic (EM) tracking and intensity-based
image registration, and its evaluation on a dynamic motion phantom. As respiratory motion can significantly
affect rigid registration of the EM tracking and CT coordinate systems, a standard tracking approach
that initializes intensity-based image registration with absolute pose data acquired by EM tracking will fail
when the initial camera pose is too far from the actual pose. We here propose two new schemes to address this
problem. Both of these schemes intelligently combine absolute pose data from EM tracking with relative motion
data derived from EM tracking and intensity-based image registration. These schemes significantly improve
the overall camera tracking performance. We constructed a dynamic phantom simulating the respiratory motion
of the airways to evaluate these schemes. Our experimental results demonstrate that these schemes can
track a bronchoscope more accurately and robustly than our previously proposed method, even when the maximum
simulated respiratory motion reaches 24 mm.
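One way such a combination scheme could work is sketched below; the gating threshold, the 3-D-position-only state, and all names are our illustrative choices, not the paper's actual schemes:

```python
def initialize_pose(prev_registered, em_prev, em_curr, gate_mm=10.0):
    """Hypothetical registration initializer in the spirit of the paper:
    propagate the last registered position by the *relative* EM motion, which
    cancels much of the respiratory misalignment, and fall back to the
    absolute EM position when registration and EM tracking disagree strongly
    (e.g. after a registration failure). Positions are 3-D tuples in mm."""
    relative = [c - p for c, p in zip(em_curr, em_prev)]
    propagated = tuple(r + d for r, d in zip(prev_registered, relative))
    # Disagreement between the propagated pose and the absolute EM pose.
    drift = sum((a - b) ** 2 for a, b in zip(propagated, em_curr)) ** 0.5
    return em_curr if drift > gate_mm else propagated
```

A full implementation would carry orientation as well, but the absolute/relative trade-off is the same.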
This paper presents an improved method for compensating ultra-tiny electromagnetic tracker (UEMT) outputs and its
application to a flexible neuroendoscopic surgery navigation system. Recently, UEMTs have been widely used in
surgical navigation systems with flexible endoscopes to obtain the position and orientation of the endoscopic
camera. However, due to distortion of the electromagnetic field, the accuracy of such UEMT systems is low. Several
research groups have presented methods for compensating UEMT outputs that are deteriorated by ferromagnetic
objects around the UEMT. These compensation methods first acquire positions and orientations (sample data) by
sweeping a special tool (hybrid tool), which carries both a UEMT and an optical tracker (OT), by hand. A polynomial
compensating the UEMT outputs is then computed from both outputs. However, these methods have the following
problems: 1) the compensation function is obtained as a function of position only, and orientation information is
not used in the compensation; 2) the hybrid tool must be swept slowly to obtain good compensation results, which
increases the acquisition time. To overcome these problems, this paper presents a UEMT-output compensation
function that depends not only on position but also on orientation. We also propose a new sweeping method for
the hybrid tool that reduces the time required to obtain the sample data. We evaluated the accuracy and feasibility
of the proposed method in experiments in an OpenMR operating room. According to the experimental results, the
accuracy of the compensation method is improved by about 20% compared with the previous method. We implemented
the proposed method in a navigation system for flexible neuroendoscopic surgery and performed a phantom test and
several clinical application tests. The results showed that the proposed method is effective for UEMT output
compensation and improves the accuracy of a flexible neuroendoscopic surgery navigation system.
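The core fitting step can be sketched as a least-squares polynomial in both position and orientation; the degree-1 model, the synthetic distortion, and all function names below are illustrative assumptions, not the paper's actual compensation function:

```python
import numpy as np

def fit_compensation(uemt_pos, uemt_dir, ot_pos):
    """Fit a degree-1 correction mapping distorted UEMT positions to OT
    ground-truth positions, using position AND viewing direction as inputs."""
    # Design matrix: position columns, direction columns, and a constant term.
    X = np.hstack([uemt_pos, uemt_dir, np.ones((len(uemt_pos), 1))])
    coeffs, *_ = np.linalg.lstsq(X, ot_pos, rcond=None)
    return coeffs

def compensate(coeffs, pos, direction):
    """Apply the fitted correction to one UEMT reading."""
    return np.concatenate([pos, direction, [1.0]]) @ coeffs

# Synthetic sample data: an affine distortion plus an orientation-dependent bias,
# which a position-only model could not capture.
rng = np.random.default_rng(1)
pos = rng.uniform(-100.0, 100.0, (200, 3))
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
truth = pos @ np.diag([1.02, 0.98, 1.01]) + 2.0 * dirs + 5.0
coeffs = fit_compensation(pos, dirs, truth)
err = np.linalg.norm(compensate(coeffs, pos[0], dirs[0]) - truth[0])
```

Because the synthetic distortion here depends on the viewing direction, including orientation in the model recovers it exactly, illustrating why point 1) above matters.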
This paper presents a method for accelerating bronchoscope tracking based on image registration by using the
GPU (Graphics Processing Unit). Parallel techniques for efficient utilization of CPU (Central Processing Unit)
and GPU in image registration are presented. Recently, a bronchoscope navigation system has been developed for
enabling bronchoscopists to perform safe and efficient examinations. In such a system, it is indispensable to track
the motion of the bronchoscope camera at the tip of the bronchoscope in real time. We have previously developed
a method for tracking a bronchoscope by computing image similarities between real and virtual bronchoscopic
images. However, since image registration is quite time consuming, it is difficult to track the bronchoscope in real
time. This paper presents a method for accelerating the process of image registration by utilizing the GPU of the
graphics card and CUDA (Compute Unified Device Architecture). In particular, we accelerate
two parts: (1) virtual bronchoscopic image generation by volume rendering and (2) image similarity calculation
between a real bronchoscopic image and virtual bronchoscopic images. Furthermore, to efficiently use the GPU,
we minimize (i) the amount of data transfer between CPU and GPU, and (ii) the number of GPU function calls
from the CPU. We applied the proposed method to bronchoscopic videos of 10 patients and their corresponding
CT data sets. The experimental results showed that the proposed method can track a bronchoscope at 15 frames
per second and 5.17 times faster than the same method only using the CPU.
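The image-similarity step that dominates the registration cost is a large per-pixel reduction, which is exactly what maps well onto the GPU. Below is a CPU reference sketch of one common measure, normalized cross-correlation; the paper's exact similarity measure is not specified here, so treat this as an illustration of the computation being parallelized:

```python
import numpy as np

def ncc(real_img, virtual_img):
    """Normalized cross-correlation between a real bronchoscopic frame and a
    volume-rendered virtual frame; returns a value in [-1, 1]."""
    a = real_img - real_img.mean()
    b = virtual_img - virtual_img.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

A CUDA version would compute the per-pixel products in parallel and reduce them on the device, keeping both images in GPU memory so that, as described above, CPU-GPU transfers and kernel launches are minimized.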
Computed tomography (CT) of the chest is a very common staging investigation for the assessment of mediastinal,
hilar, and intrapulmonary lymph nodes in the context of lung cancer. In the current clinical workflow, the detection
and assessment of lymph nodes is usually performed manually, which can be error-prone and time-consuming. We
therefore propose a method for the automatic detection of mediastinal, hilar, and intrapulmonary lymph node
candidates in contrast-enhanced chest CT. Based on the segmentation of important mediastinal anatomy (bronchial
tree, aortic arch) and making use of anatomical knowledge, we utilize Hessian eigenvalues to detect lymph node
candidates. As lymph nodes can be characterized as blob-like structures of varying size and shape within a specific
intensity interval, we can utilize these characteristics to reduce the number of false positive candidates
significantly. We applied our method to 5 cases with suspected lung cancer. The processing time of our algorithm
did not exceed 6 minutes, and we achieved an average sensitivity of 82.1% and an average precision of 13.3%.
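The Hessian-eigenvalue blob criterion combined with an intensity interval can be sketched in 2-D as follows; the synthetic Gaussian "node", the thresholds, and all function names are illustrative (the paper works on 3-D CT with scale-adapted smoothing):

```python
import numpy as np

def hessian_eigenvalues_2d(img):
    """Per-pixel Hessian eigenvalues of a 2-D slice; a bright blob-like
    structure has two strongly negative eigenvalues."""
    gy, gx = np.gradient(img.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    half_tr = 0.5 * (gxx + gyy)
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum(half_tr ** 2 - det, 0.0))
    return half_tr - disc, half_tr + disc

def blob_candidates(img, lo, hi):
    """Candidate mask: blob-like (both eigenvalues negative) AND inside the
    expected lymph-node intensity interval [lo, hi]."""
    l1, l2 = hessian_eigenvalues_2d(img)
    return (l1 < 0) & (l2 < 0) & (img >= lo) & (img <= hi)

# Synthetic slice: one Gaussian "node" on a dark background.
y, x = np.mgrid[0:41, 0:41]
img = 100.0 * np.exp(-((x - 20.0) ** 2 + (y - 20.0) ** 2) / 30.0)
mask = blob_candidates(img, 50.0, 150.0)
```

The intensity gate is what prunes bright vessels and dark air from the eigenvalue-based candidates, which is how the false positive count is reduced.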
In recent years, an increasing number of liver tumor indications have been treated by minimally invasive laparoscopic
resection. Besides the restricted view, a major issue in laparoscopic liver resection is the enhanced visualization
of (hidden) vessels, which supply the tumorous liver segment and thus need to be divided prior to the resection.
To navigate the surgeon to these vessels, pre-operative abdominal imaging data can hardly be used due to intraoperative
organ deformations, mainly caused by the application of the carbon dioxide pneumoperitoneum and respiratory
motion. While regular respiratory motion can be gated and synchronized intra-operatively, motion caused by
pneumoperitoneum is individual for every patient and difficult to estimate.
Therefore, we propose to use an optically tracked mobile C-arm providing cone-beam CT imaging capability intraoperatively.
The C-arm is able to visualize soft tissue by means of its new flat panel detector and is calibrated
offline to relate its current position and orientation to the coordinate system of a reconstructed volume. The
laparoscope is also optically tracked and calibrated offline, so both laparoscope and C-arm are registered in the
same tracking coordinate system.
Intra-operatively, after patient positioning, port placement, and carbon dioxide insufflation, the liver vessels are
contrasted and scanned during patient exhalation. Immediately, a three-dimensional volume is reconstructed.
Without any further need for patient registration, the volume can be directly augmented on the live laparoscope
video, visualizing the contrasted vessels. This augmentation provides the surgeon with advanced visual aid for
the localization of veins, arteries, and bile ducts to be divided or sealed.
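Because the C-arm volume and the laparoscope live in the same tracking coordinate system, the augmentation reduces to a transform chain followed by a pinhole projection. A minimal sketch, where the matrix names, the calibration values, and the intrinsics are our illustrative assumptions:

```python
import numpy as np

def augment_point(p_vol, T_track_from_vol, T_track_from_lap, K):
    """Project a contrasted-vessel voxel into the laparoscope image.
    T_track_from_vol: volume -> tracking coords (from the C-arm calibration).
    T_track_from_lap: laparoscope camera -> tracking coords (from its
    calibration). Both are 4x4 homogeneous matrices; K is the 3x3 camera
    intrinsic matrix."""
    p_track = T_track_from_vol @ np.append(p_vol, 1.0)
    p_cam = np.linalg.inv(T_track_from_lap) @ p_track
    uvw = K @ p_cam[:3]
    return uvw[:2] / uvw[2]

# Toy check: with identity calibrations, a point 2 units in front of the
# camera projects to the principal point.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
uv = augment_point(np.array([0.0, 0.0, 2.0]), np.eye(4), np.eye(4), K)
```

Since both calibrations are done offline, no intra-operative patient registration step is needed, matching the workflow described above.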