Data-driven approaches to the quantification problem in photoacoustic imaging have shown great potential in silico, but the inherent lack of labelled ground truth data in vivo currently restricts their application and translation into the clinic. In this study, we leverage Fourier Neural Operator networks as surrogate models to synthesize multispectral photoacoustic human forearm images, replacing state-of-the-art Monte Carlo and k-Wave simulations, which are time-consuming and not inherently differentiable. We investigate the accuracy and efficiency of these surrogate models for both the optical and the acoustic simulation step.
The generation of realistically simulated photoacoustic (PA) images with ground truth labels for optical and acoustic properties has become a critical method for training and validating neural networks for PA imaging. As state-of-the-art model-based simulations often suffer from various inaccuracies, unsupervised domain transfer methods have recently been proposed to enhance the quality of model-based simulations. Validating these methods, however, is challenging because there are no reliable labels for absorption or oxygen saturation in vivo. In this work, we examine various domain shifts between simulations and real images, such as an incorrect noise model, inaccuracies in the digital device twin, or erroneous assumptions about tissue composition. We show in silico how a CycleGAN, an unsupervised image-to-image translation network (UNIT), and a conditional invertible neural network handle these domain shifts and what the consequences are for blood oxygen saturation estimation.
This study delves into the largely uncharted domain of biases in photoacoustic imaging, spotlighting potential shortcut learning as a key obstacle to reliable machine learning. Our focus is on hardware variation biases. We identify device-specific traits that create detectable fingerprints in photoacoustic images, demonstrate machine learning's capability to use these discrepancies to determine the device that acquired an image, and highlight their potential impact on machine learning model predictions in downstream tasks, such as disease classification.
Optical and acoustic imaging techniques enable noninvasive visualization of structural and functional tissue properties. Data-driven approaches for quantification of these properties are promising, but they rely on highly accurate simulations due to the lack of ground truth knowledge. We recently introduced the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit, which has quickly been adopted by the community in the context of the IPASC consortium for standardized reconstruction. We present new developments in the toolkit, including improved tissue and device modeling, and provide an outlook on future directions aimed at improving the realism of simulations.
Peripheral artery disease (PAD) is widespread among the elderly population, with narrowing arteries in the lower limbs causing a lack of perfusion. This work explores the benefit of volumetric photoacoustic imaging (v-PAI) over conventional 2D PAI for PAD diagnosis and monitoring. To this end, we leverage the recently proposed approach of Tattoo tomography, which generates a v-PAI representation from a set of 2D PAI slices. Preliminary results of the ongoing study indicate that v-PAI can increase the sensitivity of early-stage PAD detection. In conclusion, our Tattoo approach has the potential to become a valuable tool in PAD diagnostics.
Significance: Optical and acoustic imaging techniques enable noninvasive visualisation of structural and functional properties of tissue. The quantification of measurements, however, remains challenging due to the inverse problems that must be solved. Emerging data-driven approaches are promising, but they rely heavily on the presence of high-quality simulations across a range of wavelengths due to the lack of ground truth knowledge of acoustic and optical tissue properties in realistic settings.
Aim: To facilitate this process, we present the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit. SIMPA is being developed according to modern software design standards.
Approach: SIMPA enables the use of computational forward models, data processing algorithms, and digital device twins to simulate realistic images within a single pipeline. SIMPA’s module implementations can be seamlessly exchanged as SIMPA abstracts from the concrete implementation of each forward model and builds the simulation pipeline in a modular fashion. Furthermore, SIMPA provides comprehensive libraries of biological structures, such as vessels, as well as optical and acoustic properties and other functionalities for the generation of realistic tissue models.
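The modular-pipeline design described above can be sketched in a few lines of Python. The class and method names below are illustrative placeholders, not SIMPA's actual API; the point is only the design pattern: each simulation step implements a common interface, so concrete forward models (e.g. a Monte Carlo or k-Wave adapter) can be exchanged without touching the pipeline code, and the intermediate simulation state is passed from module to module.

```python
from abc import ABC, abstractmethod

class PipelineModule(ABC):
    """Common interface that every simulation step implements."""

    @abstractmethod
    def run(self, data: dict) -> dict:
        """Consume the intermediate simulation state and return the updated state."""

class TissueModel(PipelineModule):
    def run(self, data):
        # Placeholder optical property map; a real tissue model would
        # assemble absorption/scattering maps from a structure library.
        data["absorption_map"] = [[0.1, 0.2], [0.3, 0.4]]
        return data

class OpticalForwardModel(PipelineModule):
    def run(self, data):
        # A Monte Carlo adapter (e.g. wrapping MCX) would go here.
        data["fluence"] = [[1.0, 0.5], [0.25, 0.25]]
        return data

class AcousticForwardModel(PipelineModule):
    def run(self, data):
        # A k-Wave adapter would go here; we just aggregate as a stand-in.
        data["time_series"] = [sum(row) for row in data["fluence"]]
        return data

def simulate(pipeline, data=None):
    """Run each module in order, threading the simulation state through."""
    data = data if data is not None else {}
    for module in pipeline:
        data = module.run(data)
    return data

# Modules are interchangeable: swapping OpticalForwardModel for another
# adapter leaves the rest of the pipeline untouched.
result = simulate([TissueModel(), OpticalForwardModel(), AcousticForwardModel()])
```

Because the pipeline only depends on the `run` interface, exchanging one forward model for another, or inserting an extra step such as noise modelling, is a one-line change to the module list.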
Results: To showcase the capabilities of SIMPA, we show examples in the context of photoacoustic imaging: the diversity of creatable tissue models, the customisability of a simulation pipeline, and the degree of realism of the simulations.
Conclusions: SIMPA is an open-source toolkit that can be used to simulate optical and acoustic imaging modalities. The code is available at: https://github.com/IMSY-DKFZ/simpa, and all of the examples and experiments in this paper can be reproduced using the code available at: https://github.com/IMSY-DKFZ/simpa_paper_experiments.
KEYWORDS: Data modeling, Monte Carlo methods, Scattering, Photoacoustic imaging, Optical simulations, Neural networks, Machine learning, Light scattering, In vivo imaging, Imaging spectroscopy
Photoacoustic imaging (PAI) is an emerging medical imaging modality that provides high contrast and spatial resolution. A core unsolved problem to effectively support interventional healthcare is the accurate quantification of the optical tissue properties, such as the absorption and scattering coefficients. The contribution of this work is two-fold. We demonstrate the strong dependence of deep learning-based approaches on the chosen training data and we present a novel approach to generating simulated training data. According to initial in silico results, our method could serve as an important first step related to generating adequate training data for PAI applications.
Previous work on 3D freehand photoacoustic (PA) imaging has focused on the development of specialized hardware or the use of tracking devices. In this work, we present a novel approach to 3D volume compounding using an optical pattern attached to the skin. By design, the pattern allows context-aware calculation of the PA image pose in a pattern reference frame, enabling 3D reconstruction while also making the method robust against patient motion. Owing to its easy handling, optical-pattern-enabled context-aware PA imaging could be a promising approach for 3D PA imaging in a clinical environment.
In this work, we present the open source “Simulation and Image Processing for Photoacoustic Imaging (SIMPA)” toolkit that facilitates simulation of multispectral photoacoustic images by streamlining the use of state-of-the-art frameworks that numerically approximate the respective forward models. SIMPA provides modules for all the relevant steps for photoacoustic forward simulation: tissue modelling, optical forward modelling, acoustic modelling, noise modelling, as well as image reconstruction. We demonstrate the capabilities of SIMPA by performing image simulation using MCX and k-Wave for the optical and acoustic forward modelling, as well as an experimentally determined noise model and a custom tissue model.
Photoacoustic imaging (PAI) has the potential to revolutionize healthcare due to the valuable information on tissue physiology that is contained in multispectral signals. Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information. In this work, we present a deep learning-based approach to semantic segmentation of multispectral PA images to facilitate interpretability of recorded images. Based on a validation study with experimentally acquired data of healthy human volunteers, we show that a combination of tissue segmentation, sO2 estimation, and uncertainty quantification can create powerful analyses and visualizations of multispectral photoacoustic images.