KEYWORDS: In vivo imaging, Heart, Histograms, Photoacoustic spectroscopy, Transducers, Computer simulations, Phased arrays, Data processing, Education and training, Deep learning
Significance: Interventional cardiac procedures often require ionizing radiation to guide cardiac catheters to the heart. To reduce the associated risks of ionizing radiation, photoacoustic imaging can potentially be combined with robotic visual servoing, with initial demonstrations requiring segmentation of catheter tips. However, typical segmentation algorithms applied to conventional image formation methods are susceptible to problematic reflection artifacts, which compromise the required detectability and localization of the catheter tip.
Aim: We describe a convolutional neural network and the associated customizations required to successfully detect and localize in vivo photoacoustic signals from a catheter tip received by a phased array transducer, which is a common transducer for transthoracic cardiac imaging applications.
Approach: We trained a network with simulated photoacoustic channel data to identify point sources, which appropriately model photoacoustic signals from the tip of an optical fiber inserted in a cardiac catheter. The network was validated with an independent simulated dataset, then tested on data from the tips of cardiac catheters housing optical fibers and inserted into ex vivo and in vivo swine hearts.
Results: When validated with simulated data, the network achieved an F1 score of 98.3% and Euclidean errors (mean ± one standard deviation) of 1.02 ± 0.84 mm for target depths of 20 to 100 mm. When tested on ex vivo and in vivo data, the network achieved F1 scores as high as 100.0%. In addition, for target depths of 40 to 90 mm in the ex vivo and in vivo data, up to 86.7% of axial and 100.0% of lateral position errors were lower than the axial and lateral resolution, respectively, of the phased array transducer.
Conclusions: These results demonstrate the promise of the proposed method to identify photoacoustic sources in future interventional cardiology and cardiac electrophysiology applications.
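To make the reported metrics concrete, the following is a minimal Python sketch of how an F1 score and Euclidean localization errors can be computed by matching detected source positions to ground-truth positions. The helper name, greedy matching strategy, and distance threshold are illustrative assumptions, not the authors' exact evaluation procedure.

import numpy as np

def match_detections(pred_xy, true_xy, max_dist_mm=10.0):
    """Greedily match predicted to true source positions (mm) and
    return (F1, mean error, std of error) for matched detections.
    The max_dist_mm threshold is an assumed value for illustration."""
    pred = [np.asarray(p, float) for p in pred_xy]
    tp, errors = 0, []
    for t in true_xy:
        if not pred:
            break
        d = [np.linalg.norm(p - np.asarray(t, float)) for p in pred]
        i = int(np.argmin(d))
        if d[i] <= max_dist_mm:  # count as a true positive
            tp += 1
            errors.append(d[i])
            pred.pop(i)  # each prediction matches at most one target
    fp = len(pred_xy) - tp
    fn = len(true_xy) - tp
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return f1, (np.mean(errors) if errors else np.nan), (np.std(errors) if errors else np.nan)

# Example: one detection near one true source at 32 mm depth.
f1, mu, sigma = match_detections([(31.8, 0.4)], [(32.0, 0.0)])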
Many cardiac interventional procedures (e.g., radiofrequency ablation) require fluoroscopy to navigate catheters in veins toward the heart. However, this image guidance method lacks depth information and increases the risks of radiation exposure for both patients and operators. To overcome these challenges, we developed a robotic visual servoing system that maintains visualization of segmented photoacoustic signals from a cardiac catheter tip. This system was tested in two in vivo cardiac catheterization procedures with ground truth position information provided by fluoroscopy and electromagnetic tracking. The 1D root mean square localization errors within the vein ranged from 1.63 to 2.28 mm for the first experiment and from 0.25 to 1.18 mm for the second experiment. The 3D root mean square localization error for the second experiment ranged from 1.24 to 1.54 mm. The mean contrast of photoacoustic signals from the catheter tip ranged from 29.8 to 48.8 dB when the catheter tip was visualized in the heart. These results indicate that robotic-photoacoustic imaging has promising potential as an alternative to fluoroscopic guidance. This alternative is advantageous because it provides depth information for cardiac interventions and enables enhanced visualization of catheter tips within the beating heart.
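The root mean square error and dB contrast quoted above follow standard definitions, sketched below in Python. The function and array names are hypothetical, and the amplitude-based contrast formula is one common convention rather than a definition taken verbatim from the paper.

import numpy as np

def rms_error(estimated, ground_truth):
    """Root mean square localization error over a trajectory.
    Accepts (N,) arrays for per-axis (1D) errors or (N, 3) arrays
    for 3D errors; positions are in mm."""
    est = np.asarray(estimated, float).reshape(len(estimated), -1)
    ref = np.asarray(ground_truth, float).reshape(len(ground_truth), -1)
    return np.sqrt(np.mean(np.sum((est - ref) ** 2, axis=1)))

def contrast_db(signal_roi, background_roi):
    """Contrast in dB between mean amplitudes of a signal region
    and a background region of the image."""
    return 20.0 * np.log10(np.mean(signal_roi) / np.mean(background_roi))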
Interventional cardiac procedures often require ionizing radiation to guide cardiac catheters to the heart. To reduce the associated risks of ionizing radiation, our group is exploring photoacoustic imaging in conjunction with robotic visual servoing, which requires segmentation of catheter tips. However, typical segmentation algorithms are susceptible to reflection artifacts. To address this challenge, signal sources can be identified in the presence of reflection artifacts using a deep neural network, as we previously demonstrated with a linear array ultrasound transducer. This paper extends our previous work to detect photoacoustic sources received by a phased array transducer, which is more common in cardiac applications. We trained a convolutional neural network (CNN) with simulated photoacoustic channel data to identify point sources. The network was tested with an independent simulated validation data set not used during training, as well as in vivo data acquired during a pig catheterization procedure. When tested on the independent simulated validation data set, the CNN correctly classified 84.2% of sources with a misclassification rate of 0.01%, and the mean absolute location errors of correctly classified sources were 0.095 and 0.462 mm in the axial and lateral dimensions, respectively. When applied to in vivo data, the network correctly classified 91.4% of sources with a 7.86% misclassification rate. These results indicate that a CNN is capable of identifying photoacoustic sources recorded by phased array transducers, which is promising for cardiac applications.
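For readers unfamiliar with this setup, the following PyTorch sketch shows one way a CNN can map a frame of photoacoustic channel data to a source-versus-artifact score and a 2D source location. This is an illustrative stand-in under assumed input dimensions and layer widths, not the authors' actual architecture or detection head.

import torch
import torch.nn as nn

class PointSourceNet(nn.Module):
    """Toy detector: channel data in, (source logit, axial/lateral
    position normalized to [0, 1] over the field of view) out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one feature vector per frame
        )
        self.head = nn.Linear(64, 3)  # [logit, axial, lateral]

    def forward(self, x):
        z = self.features(x).flatten(1)
        out = self.head(z)
        return out[:, 0], torch.sigmoid(out[:, 1:])

# Example: one acquisition with 64 transducer elements and 1024 time
# samples (assumed sizes), shaped as a single-channel 2D input.
net = PointSourceNet()
logit, pos = net(torch.randn(1, 1, 1024, 64))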