Open Access
12 July 2024

Challenges and advances in two-dimensional photoacoustic computed tomography: a review
Shunyao Zhang, Jingyi Miao, Lei S. Li
Abstract

Significance

Photoacoustic computed tomography (PACT), a hybrid imaging modality combining optical excitation with acoustic detection, has rapidly emerged as a prominent biomedical imaging technique.

Aim

We review the challenges and advances in PACT, including (1) limited view, (2) anisotropic resolution, (3) spatial aliasing, (4) acoustic heterogeneity (speed of sound mismatch), and (5) fluence correction for spectral unmixing.

Approach

We performed a comprehensive literature review to summarize the key challenges in PACT toward practical applications and discuss various solutions.

Results

There is a wide range of contributions from both industry and academia. Various approaches, including emerging deep learning methods, have been proposed to further improve the performance of PACT.

Conclusions

We outline contemporary technologies aimed at tackling the challenges in PACT applications.

1.

Introduction

Biomedical imaging plays a pivotal role in the diagnosis and management of various diseases, offering invaluable insights into the human body’s anatomy and intricate physiological processes.1–4 Traditional imaging modalities, such as X-ray [Fig. 1(a)(i)] and ultrasound (US) [Fig. 1(a)(ii)], have long been the cornerstones of medical diagnostics, each endowed with unique strengths and limitations.5,8,9 Photoacoustic tomography (PAT)10–14 is a medical imaging technique that employs both optical and acoustic energy, as shown in Fig. 1(a)(iii–iv).6 PAT, based on the photoacoustic (PA) effect [Fig. 1(b)], transforms absorbed light energy into sound waves.15 As shown in Fig. 1(a), PAT, which provides high-resolution imaging of breast cancer,7,16,17 has recently been approved by the Food and Drug Administration as a complementary tool to X-ray mammography and US for breast cancer diagnosis and screening.18

Fig. 1

Comparison between X-ray, US, and PA imaging modalities. Reprinted with permission from Refs. 5–7. (a)(i) The X-ray image of the left breast displays a suspicious mass, with the white box indicating the field of view for the PA image. (a)(ii) The US image of the palpable mass confirms a highly suspicious mass. (a)(iii) The MAP of the PA volume depicts vessel density maps with tumors identified by a green circle. (a)(iv) A 3D volume rendering of the PA image exhibits a distinctive ring-like appearance. (b) Evaluation of resolution and depth characteristics across various imaging modalities, including US, optical, and PA imaging.

JBO_29_7_070901_f001.png

Figure 2(a) illustrates the principle of PAT. Upon pulsed laser excitation, a transient temperature rise results from the absorption of laser light by tissues, followed by thermoelastic expansion and then the generation of acoustic waves, called PA waves. An ultrasonic transducer array (UTA) detects these waves for image reconstruction (IR).19 PA computed tomography (PACT),20–25 a major incarnation of PAT, has enjoyed remarkable progress and widespread adoption in medical imaging in the past 10 years.7,26–32 PACT utilizes the PA effect, enabling the detection of ultrasonic waves generated by both ballistic and scattered photons excited by a light source. As a result, PACT can penetrate much deeper into tissues compared with traditional optical microscopy, which primarily relies on ballistic photons.33,34 In addition, acoustic waves experience significantly less scattering within soft tissues, so PACT offers substantially superior spatial resolution compared with pure optical imaging methods in deep tissue.35 Moreover, thanks to light–matter interactions, PACT utilizes various molecular contrasts,36–44 including endogenous contrasts, such as hemoglobin, melanin, deoxyribonucleic acid/ribonucleic acid, water, protein, and lipid,27,37,45–52 and exogenous contrast agents, such as fluorescent proteins, organic dyes, and nanoparticles.36,38,40,53–56 Understanding the fundamental principles and applications of PACT is crucial for unlocking its full potential, which paves the way for exploring diverse PACT acoustic detection geometries that play a pivotal role in acquiring high-quality images.
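Quantitatively, the generated initial pressure follows p0 = Γ·ηth·μa·F, where Γ is the Grueneisen parameter, ηth is the heat conversion efficiency, μa is the optical absorption coefficient, and F is the local optical fluence. The following is a minimal numerical sketch of this relation; the parameter values are representative assumptions rather than measurements.

```python
# Minimal sketch of the PA initial-pressure relation p0 = Gamma * eta_th * mu_a * F.
# All parameter values below are illustrative assumptions.

def initial_pressure(grueneisen, eta_th, mu_a_per_cm, fluence_mj_per_cm2):
    """Return the initial PA pressure rise in Pa.

    grueneisen        : dimensionless Grueneisen parameter (~0.2 for soft tissue)
    eta_th            : fraction of absorbed energy converted to heat (~1)
    mu_a_per_cm       : optical absorption coefficient (1/cm)
    fluence_mj_per_cm2: local laser fluence (mJ/cm^2)
    """
    # Convert mu_a to 1/m and fluence to J/m^2 so the product is in Pa (J/m^3).
    mu_a = mu_a_per_cm * 100.0           # 1/m
    fluence = fluence_mj_per_cm2 * 10.0  # 1 mJ/cm^2 = 10 J/m^2
    return grueneisen * eta_th * mu_a * fluence

# Example: a blood-like absorber under a 20 mJ/cm^2 pulse.
p0 = initial_pressure(grueneisen=0.2, eta_th=1.0, mu_a_per_cm=2.0, fluence_mj_per_cm2=20.0)
print(f"p0 ~ {p0 / 1e3:.0f} kPa")  # ~8 kPa
```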

Fig. 2

Principle and applications of PACT. Reprinted with permission from Ref. 19. (a) Imaging principle of PACT. (b)(i) PACT system with a linear UTA. (b)(ii) PACT system with a ring-shaped UTA. (b)(iii) PACT system with a hemisphere-shaped UTA.

JBO_29_7_070901_f002.png

PACT employs diverse acoustic detection geometries, including linear, ring-shaped, and hemisphere-shaped arrays [Fig. 2(b)(i–iii)].19 While curved UTAs, such as ring-shaped and hemispherical arrays, can yield high-quality PACT images, they typically require customization and come at a significant expense. In addition, these arrays necessitate accessibility from multiple sides of the target.57,58 In contrast, linear UTAs can produce images from a single side of the sample, are easily accessible at a lower cost, and offer the convenience of a handheld approach.58,59 In short, the choice of acoustic detection geometry in PACT depends on the specific application and resource availability.

In this paper, we mainly discuss acoustical inverse problems and an additional optical inverse problem: fluence correction. The acoustical inverse problem involves reconstructing the distribution of initial pressure within the tissue based on the detected acoustic signals, whereas the optical inverse problem relates to reconstructing the optical properties within samples based on measurements of PA signals.

For the acoustic inverse problem, practical reconstruction algorithms have been developed. One widely employed approach is the universal back-projection (UBP) algorithm,60–62 in which a solid-angle weighting factor is introduced into the back-projection algorithm to compensate for the variations of detection views.60 Another algorithm based on wave physics principles is time reversal (TR).63 In TR, the recorded PA signals are mathematically time-reversed and re-emitted into the tissue. As these waves travel back through the tissue, they naturally converge to the location of the original PA source. By detecting and recording the converging waves, an image with optimized spatial resolution and enhanced signal-to-noise ratio (SNR) is generated.64,65 Model-based reconstruction methods have also been developed.66–68 This process involves optimization algorithms that iteratively refine the image by minimizing the least-squares errors between the measurements and the signals predicted by an exact PA propagation model.69–71
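As a minimal sketch of the shared delay-and-sum back-projection idea (the full UBP algorithm additionally applies the filtered term 2p(t) − 2t∂p/∂t and solid-angle weighting, both omitted here for brevity), consider the following, where the synthetic point-source data are purely illustrative:

```python
import numpy as np

def das_reconstruct(signals, det_xy, grid_x, grid_y, sos, fs):
    """Naive delay-and-sum back-projection for 2D PACT.

    signals : (n_det, n_t) array of recorded PA time traces
    det_xy  : (n_det, 2) detector coordinates (m)
    grid_x, grid_y : 1D arrays defining the reconstruction grid (m)
    sos     : assumed speed of sound (m/s)
    fs      : temporal sampling rate (Hz)
    """
    xx, yy = np.meshgrid(grid_x, grid_y, indexing="ij")
    image = np.zeros_like(xx)
    n_t = signals.shape[1]
    for trace, (dx, dy) in zip(signals, det_xy):
        dist = np.hypot(xx - dx, yy - dy)            # pixel-to-detector distance
        idx = np.round(dist / sos * fs).astype(int)  # time-of-flight sample index
        valid = idx < n_t
        image[valid] += trace[np.clip(idx, 0, n_t - 1)][valid]
    return image

# Example on synthetic data: one point source at the origin of a 128-element ring,
# so every element records the same delayed Gaussian pulse.
angles = np.linspace(0, 2 * np.pi, 128, endpoint=False)
det_xy = 0.05 * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 50-mm radius
fs, sos = 40e6, 1500.0
t = np.arange(2048) / fs
signals = np.array([np.exp(-(((t - 0.05 / sos) * fs) / 3) ** 2) for _ in angles])
grid = np.linspace(-0.02, 0.02, 201)
img = das_reconstruct(signals, det_xy, grid, grid, sos, fs)
```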

In this review paper, our primary objective is to conduct a thorough examination of some specific challenges inherent to the application of two-dimensional (2D) PACT, as shown in Fig. 3. These challenges include limited view, anisotropic spatial resolution, and acoustic heterogeneity (especially sound speed mismatch) for IR, as well as spectral unmixing with unknown fluence. Through a review of the existing literature, we seek to pinpoint specific hurdles that may impede the full realization of PACT in medical diagnostics. Furthermore, we identify and dissect research papers and studies that have pioneered innovative solutions to address these challenges. By summarizing and categorizing these solutions, we intend to provide a comprehensive resource for researchers, clinicians, and practitioners eager to harness the capabilities of PACT while effectively mitigating its inherent limitations.

Fig. 3

Diagram showing the challenges in 2D PACT and current methods dealing with those challenges. The structure of this review follows this diagram. (SS, spatial sampling; IR, image reconstruction; SOS, speed of sound).

JBO_29_7_070901_f003.png

2.

Hardware/Geometry-Induced Issues

PACT images are reconstructed from the signals recorded by all the elements of the UTA. Thus, different UTA geometries and detector designs of the transducer itself introduce issues into PACT, e.g., limited view, anisotropic resolution, and spatial aliasing.

2.1.

Limited View and Solutions

Due to their low cost, handheld convenience, wide selection of bandwidths, and US imaging capability, linear UTAs have been widely used in PACT to provide real-time cross-sectional images.58 However, linear-array-based/planar-array-based systems suffer from the limitation of their viewing angles, resulting in missing features; this is called the limited view problem.72–74 Linear array detectors exhibit high sensitivity to PA waves propagating perpendicular to the array’s surface. As illustrated in Fig. 4(a)(i), a linear ultrasonic array is placed orthogonally to a line-shaped numerical phantom. In Fig. 4(a)(ii), the initial pressure rise is visualized. The linear array receives PA signals exclusively from the two extremities, illustrating the limited view issue. A simple and direct approach to address this issue is to enlarge the detection viewing angles by rotating either the linear array or the object,59,79 but this sacrifices the imaging temporal resolution. In this section, we review other solutions, including ultrasonic heating encoding, deployment of acoustic reflectors, and advanced deep learning approaches.

Fig. 4

Limited view challenges and solutions. Reprinted with permission from Refs. 73 and 75–78. (a)(i) Enhanced initial pressure rise at the heating site. (a)(ii) Consistent initial pressure rise across the line phantom. (a)(iii) Reconstructed PA image of the line phantom from both ends only. (a)(iv) Ultrasonic heating boosts initial pressure rises at the heated location (center of the line phantom). (a)(v) Reconstructed PA image of the line phantom from both ends and the center as well. (b)(i) Imaging of a hair phantom with three straight human hairs (labeled “1” to “3”). (b)(ii) PA image acquired by conventional PACT. (b)(iii) Two acoustic reflectors are positioned at a relative angle of 120 deg. (b)(iv) PA image acquired by employing double 120-deg acoustic reflector PACT. (c)(i) Human finger joint image reconstructed by the non-iterative TR method. (c)(ii) Human finger joint image reconstructed by the iterative TV method. (d)(i) The global architecture of Y-Net. (d)(ii) Ground truth of initial pressure. (d)(iii) DAS beamformed image. (d)(iv) Reconstructed image from Y-Net. (e) 3D progressive U-Net architecture.

JBO_29_7_070901_f004.png

2.1.1.

Ultrasonic heating encoding

The PA amplitude is linearly proportional to the Grueneisen parameter, which is temperature-dependent in various biological tissues; thus, PA generation can be encoded via temperature. The heat generated by a focused UTA causes a local temperature rise, as depicted in Fig. 4(a)(i), and the Grueneisen parameter at the heated spot increases accordingly. Then, upon laser excitation,80–82 the amplitude of the PA signal originating from the heated voxel is higher than that of the neighboring voxels, which remain unchanged, as evidenced in Fig. 4(a)(v). This selective PA signal amplification creates a point PA source, leading to the propagation of PA waves with increased amplitude in all directions. Consequently, these amplitude-enhanced PA waves can be detected by the linear array, addressing the limited view issue. Given the ability to focus ultrasonic heating at considerable depths, this approach holds potential for deep tissue imaging.75,83 Although full-view PACT has been demonstrated using ultrasonic heating encoding, there remain concerns about tissue damage from heating and about heat dissipation to surrounding tissues, which lowers the encoding efficiency.75
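To put a number on the encoding gain, the sketch below uses a commonly quoted empirical linear fit for the Grueneisen parameter of water-based media, Γ(T) ≈ 0.0043 + 0.0053·T (T in °C); the coefficients and the assumed +5°C local heating are approximations for illustration only.

```python
def grueneisen_water(temp_c):
    """Approximate Grueneisen parameter of water-based media at temp_c (deg C).

    Empirical linear fit; treat the coefficients as approximate.
    """
    return 0.0043 + 0.0053 * temp_c

baseline = grueneisen_water(37.0)  # body temperature
heated = grueneisen_water(42.0)    # +5 deg C at the ultrasonically heated voxel
print(f"PA amplitude gain at the heated voxel: {heated / baseline - 1:.1%}")  # ~13%
```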

2.1.2.

Acoustic reflectors

To address the limited view issue, employing acoustic reflectors to enlarge the detection view has also been proposed, in turn augmenting the detection coverage angles and recovering the missing features.72,76,84 Huang et al.72 employed a 45-deg acoustic reflector, which acts as a virtual array perpendicular to the physical array. Ellwood et al.84 and Li et al.76 independently presented an alternative configuration in which two acoustic reflectors were used to increase the effective detection aperture. In experiments,76 a hair phantom containing three straight human hairs, denoted “1” to “3,” was imaged using a linear array detector [Fig. 4(b)(i)]. In Fig. 4(b)(ii), the reconstructed image from conventional linear-array PACT displays only the horizontal hair “1,” while hairs “2” and “3” were missed due to the limited view. Figure 4(b)(iii) illustrates the configuration of the acoustic reflectors arranged at an enclosed angle of 120 deg. When combined with the reflectors, the detection angle coverage was significantly enhanced, as shown in Fig. 4(b)(iv), and all three hairs were distinctly recovered. One drawback of the acoustic reflector approach is that it constrains the imaging space and forfeits handheld imaging convenience, making it less suitable for applications requiring larger imaging volumes, such as whole-body imaging of rodents and human imaging.

2.1.3.

Iterative optimization

Model-based iterative IR methods have also been explored recently to address limited view issues with planar detection geometry in PACT.73 In the image reconstructed by the TR method, the small vessel indicated by the white arrow is poorly visualized because of the limited view problem, as depicted in Fig. 4(c)(i). Least-squares minimization-based iterative approaches were evaluated using the same in vivo data. The small vessel can be clearly visualized in the image reconstructed with the total variation (TV) regularization method, where the missing features are well recovered, as shown in Fig. 4(c)(ii).

2.1.4.

Deep learning

Deep learning (DL) methods have become increasingly popular in various PA applications, including addressing the limited view issue.77,78,85–89 The delay-and-sum (DAS) beamformed image, as shown in Fig. 4(d)(iii), acquired from a linear array-based PACT system, misses many features (especially the vertical vessels) due to the limited view. To tackle this problem, a supervised learning model based on the Y-Net architecture [Fig. 4(d)(i)] was developed. The proposed Y-Net inputs the raw PA signals to encoder II and processes the raw data to obtain an imperfect beamformed image as the input of encoder I, where encoders I and II encode the texture and physical features, respectively, to realize hybrid reconstruction.89 The reconstructed vessel structure [Fig. 4(d)(iv)] closely matches the ground truth [Fig. 4(d)(ii)], and the shape is well preserved. These results demonstrate obvious improvements over DAS reconstruction; however, this work has not been generalized to in vivo applications.77 In addition, Choi et al.78 developed a three-dimensional (3D) progressive U-Net [Fig. 4(e)] to address limited view issues and produce volumetric PACT images, improving the solid angle range by 3.77 times so that the missing features were well recovered. The performance was successfully demonstrated in vivo.78 DL methods show promise in enhancing IR accuracy for limited view PACT, but the effectiveness of DL reconstruction is highly sensitive to the quality of the training data.86–88,90

2.2.

Anisotropy Resolution Solutions

In 2D PACT, focused transducer arrays are often used for cross-sectional imaging with high temporal resolution. However, the acoustic focus of the transducer induces anisotropic resolution, an intrinsic defect of this design that persists even when the UTA receives perfect PA signals (e.g., well sampled, with no limited view effect). The transducers in PACT are usually designed with an acoustic lens or a geometrical focus to enhance their in-plane sensitivity and provide acoustic sectioning for fast 2D imaging.59,91,92 This design leads to anisotropic resolution, especially in 2D PACT imaging systems (e.g., linear array PACT).93,94 As shown in Fig. 5(a), the 3D resolution of a linear array can be characterized in terms of axial, lateral, and elevational resolution. The axial resolution, denoting the spatial resolution along the normal direction (x axis) of the UTA, is limited by both the speed of sound (SOS) within the acoustic medium and the bandwidth of the transducer elements. The axial resolution is the best of the three and can typically reach half of the central acoustic wavelength. The lateral resolution, which pertains to the spatial resolution along the row of transducer elements within the array (y axis), is mainly determined by the element pitch; it usually equals one acoustic wavelength, slightly worse than the axial resolution. The elevational resolution, the spatial resolution along the direction perpendicular to the axial-lateral imaging plane (z axis), is determined by the central frequency of the transducer elements and the numerical aperture (NA) of the acoustic lens or geometrical focus; it is usually one order of magnitude worse than the axial resolution. Anisotropic resolution also exists in PA microscopy (PAM) employing a focused transducer element. One commonly used method in PACT and PAM to achieve isotropic resolution, or at least improve the elevational resolution, is rotational scanning of the object from multiple angles to incorporate the high-frequency information from the axial or lateral direction into the elevational direction.58,93,96–99 In addition to rotation, the anisotropic resolution problem can also be handled by adding a slit or by using data-driven (deep learning) methods.91,95
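These scalings can be turned into quick design estimates. The sketch below assumes a soft-tissue SOS of 1540 m/s and rule-of-thumb prefactors (half a wavelength axially, one wavelength laterally, and a diffraction-limited ~0.7λ/NA elevationally); the prefactors are approximations, not exact values.

```python
SOS = 1540.0  # m/s, assumed soft-tissue speed of sound

def resolutions_um(fc_mhz, na):
    """Rule-of-thumb axial/lateral/elevational resolutions (micrometers)."""
    wavelength_um = SOS / (fc_mhz * 1e6) * 1e6
    axial = 0.5 * wavelength_um             # ~ half the central acoustic wavelength
    lateral = 1.0 * wavelength_um           # ~ one wavelength (pitch-limited)
    elevational = 0.7 * wavelength_um / na  # diffraction-limited focus, ~0.7*lambda/NA
    return axial, lateral, elevational

ax, lat, ele = resolutions_um(fc_mhz=5.0, na=0.1)
print(f"axial ~{ax:.0f} um, lateral ~{lat:.0f} um, elevational ~{ele:.0f} um")
# -> ~154, ~308, and ~2156 um: the elevational value is an order of magnitude worse.
```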

Fig. 5

Challenges and solutions of anisotropic resolution. (a) Illustration of the poor elevational resolution due to the acoustic focal zone. Reproduced with permission from Ref. 91. (b)(i) Illustration of the system hardware setting of the IRT-PACT. The probe is fixed to a linear stage, and the object is placed on a rotation stage. (b)(ii) In vivo rat brain image acquired by PACT. (b)(iii) In vivo rat brain image acquired by IRT-PACT. Panel (b) is reproduced with permission from Ref. 58. (c)(i) Illustration of the rotate-translate scanning geometry in Ref. 93. (c)(ii) Reconstruction of a complex-shaped 3D leaf skeleton object; from left: ground truth image, elevational axis MAP in rotate-translate mode, and elevational axis MAP in translate-only mode. Panel (c) is reproduced with permission from Ref. 93. (d)(i) Illustration of 2D reconstruction, 3D direct reconstruction, and 3D-focal line reconstruction. A, point of reconstruction. A′, the reconstructed point of A in 2D reconstruction. B, projection point of A in the x-y plane. A′C, 2D reconstruction delay. AC, 3D direct reconstruction delay. AE, 3D-focal line reconstruction delay. x-y, 2D reconstruction plane. DC equals DE. (d)(ii) Illustration of the conventional linear PACT array and its receiving aperture along the elevation direction. (d)(iii) Illustration of the slit-PAT and its receiving aperture along the elevation direction. Panel (d)(i) is reproduced with permission from Ref. 94. Panels (d)(ii–v) are reproduced with permission from Ref. 95. (e)(i) Illustration of the Deep-E model data flow. (e)(ii) Illustration of the imaging results reconstructed by conventional methods (2D stack and 3D-focal line) and Deep-E. Panel (e) is reproduced with permission from Ref. 91. FD, fully dense.

JBO_29_7_070901_f005.png

2.2.1.

Rotate-translate scanning geometry

The rotation operation mixes the poor-resolution axis (elevational) with a high-resolution axis (axial or lateral), and the translation operation ensures sufficient overlap of the fields of view. Thus, rotate-translate-based scanning geometries can improve the elevational resolution.

PACT through inverse Radon transform (IRT-PACT) relies on relative rotation between the probe and the object about the axial axis, mixing the elevational axis with the lateral axis. IRT-PACT introduces the Radon transform to decode the high-resolution information from the multi-direction scanned data. In IRT-PACT, as shown in Fig. 5(b)(i), the linear array probe is affixed to a linear scanning stage, and the object is placed on a rotation stage, which rotates 2 deg after each linear scan (90 rotations in total). IRT-PACT employs UBP reconstruction to generate all the B-scan frames throughout the scanning and generates the projection along each scanning direction (elevational direction) by integrating all the tomography frames acquired within each scan.58 Finally, similar to X-ray CT, the 3D image is reconstructed through the inverse Radon transform.100

The elevational-axis projection and the inverse Radon transform make the elevational resolution almost equivalent to the in-plane lateral resolution. The results presented in Fig. 5(b)(ii–iii) depict the maximum amplitude projections (MAPs) along the depth (z) axis of the 3D images of a rat brain obtained through both conventional PACT and IRT-PACT. IRT-PACT significantly enhances the elevational (vertical) resolution, producing sharper and clearer images. Further quantitative results revealed that the elevational resolution in IRT-PACT improved almost 10 times (from 1237 to 140 μm). However, in IRT-PACT, the object is scanned 90 times to obtain one 3D image, leading to a much-prolonged imaging time.
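The decoding step is essentially X-ray CT reconstruction applied to the elevational projections. The sketch below illustrates it with scikit-image's inverse Radon transform; the random sinogram is only a placeholder for the per-angle projections described above.

```python
import numpy as np
from skimage.transform import iradon  # inverse Radon transform

# `sinogram` stands in for the per-angle elevational projections: one column per
# rotation angle, obtained by integrating the UBP-reconstructed B-scan frames
# along the elevational (scanning) direction.
n_angles = 90
angles = np.arange(n_angles) * 2.0        # 2-deg steps over 180 deg, as above
sinogram = np.random.rand(256, n_angles)  # placeholder projection data

# Each depth slice of the 3D volume is then recovered like an X-ray CT slice.
slice_img = iradon(sinogram, theta=angles, filter_name="ramp")
print(slice_img.shape)
```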

Gateau et al.93 rotated the probe about the lateral axis, mixing the elevational axis with the axial axis, as shown in Fig. 5(c)(i). The probe changes its pitch angle after each linear scan, and the final 3D image is rendered via 3D UBP using the data from all the scans. Quantitative results show that the elevational resolution can improve up to nine times. The complex 3D phantom results are shown in Fig. 5(c)(ii).

Fig. 6

Challenges and solutions of spatial aliasing of a full-ring array. (a)(i) Illustration of a full-ring UTA, a transducer element at r, and a source point at r′. (a)(ii–iv) Visualizations of the relative sizes of the three regions S0, S1, and S2. Solid lines mean no aliasing, while dotted lines mean aliasing for different location combinations of source points and reconstruction points: (ii) UBP only, (iii) UBP + spatial interpolation, and (iv) UBP + spatial interpolation + temporal lowpass filtering. (b)(i) Ground truth of a simple initial pressure distribution. (b)(ii) UBP reconstruction. (b)(iii) UBP with SI. (b)(iv) UBP with TF and SI. (b)(v) Comparison of the STDs in the ROIs A–E marked with the green boxes. (b)(vi–vii) Comparisons of the profiles of lines P and Q, respectively, based on the three methods. S0, the region within the ring array; S1, the one-way Nyquist zone; S2, the two-way Nyquist zone; SI, spatial interpolation; TF, temporal filtering. Panels (a) and (b) are reproduced with permission from Ref. 101.

JBO_29_7_070901_f006.png

The method proposed by Gateau et al.93 shows good performance in improving the elevational resolution. However, generating a 3D image from all the scanning data via UBP is mathematically equivalent to reconstructing the 3D image of each scan first and then summing them up. Considering that deconvolution-based methods have been developed in the PAM field to solve the anisotropic resolution problem, they could be applied in PACT as well to decode the high-resolution information more efficiently, further improving performance or reducing the number of scans.97,98

2.2.2.

3D-focal line

Xia et al.94 proposed 3D-focal line reconstruction to improve the elevational resolution of a focused transducer array. The 3D-focal line method introduces a new way to calculate the time delay, which generates fewer artifacts and improves the elevational resolution as well as the SNR. Figure 5(d)(i) illustrates the time delays in 2D reconstruction, direct 3D reconstruction, and 3D-focal line reconstruction. First, point A is projected onto the focal plane (x-y plane) as point B. Second, point B is connected to the center of the transducer (point C), crossing the focal line at point D. Finally, points A and D are connected, and the line is extended to reach the transducer at point E. The line AE is used to calculate the delay time between imaging point A and the transducer. The results in Fig. 5(d)(iv) show that, compared with the 2D stack, 3D-focal line reconstruction improves the resolution by up to twofold.
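The construction above reduces to a short geometric computation. The sketch below evaluates the 3D-focal line delay in element-local coordinates (an assumed convention, with the element at the origin); as noted in Sec. 2.2.3, the same delay is reused by slit-PAT. The example numbers are illustrative.

```python
import numpy as np

def focal_line_delay(point_a, focal_dist, sos):
    """Time delay (s) between voxel A and a transducer element via the focal line.

    Element-local coordinates: element center C at the origin, x axial,
    y lateral, z elevational; the elevational focal line lies at x = focal_dist
    in the imaging plane (z = 0). Requires the voxel's axial coordinate x > 0.
    """
    a = np.asarray(point_a, dtype=float)
    b = np.array([a[0], a[1], 0.0])  # A projected onto the imaging plane (point B)
    d = b * (focal_dist / a[0])      # where line B-C crosses the focal line (point D)
    # |AD| + |DC| equals |AE| in the construction above, because DC = DE.
    return (np.linalg.norm(a - d) + np.linalg.norm(d)) / sos

# Example: a voxel 30 mm deep, 2 mm lateral, 4 mm out of plane; 19-mm focal length.
print(focal_line_delay((0.030, 0.002, 0.004), focal_dist=0.019, sos=1500.0))
```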

2.2.3.

Slit-enabled PAT

The idea of the aforementioned 3D-focal line can be implemented in hardware by adding a slit to a linear PACT system at its focal line [as shown in Fig. 5(d)(ii–iii)], an approach named slit-PAT.95 The slit diffracts the incoming PA waves so that source points outside the transducer focal zone can still be detected, which enlarges the receiving aperture along the elevation direction. The thin slit is formed by two metal blades covered with foam to block the acoustic waves transmitting directly through the blades; thus, all the PA signals received at the transducer come through the slit. The time delay in slit-PAT is the sum of the source-point-to-slit and slit-to-transducer delays, which is exactly the time delay used in 3D-focal line reconstruction.

The table in Fig. 5(d)(iv) lists the elevational resolutions and SNRs of the 2D stack, 3D-focal line, and slit-PAT. The 2D stack provides the worst elevational resolution. With 3D-focal line reconstruction, the resolution is improved by two times, approaching the height of the transducer elevational focus (1.5 mm). Slit-PAT further improves the resolution by almost five times, to 0.33 mm, which is close to the 0.3-mm slit opening. In total, slit-PAT offers 10 times better elevational resolution than the 2D stack. Although the slit also blocks some of the incoming PA signals, the slit-PAT SNR is still four times better than that of the 2D stack because, in slit-PAT, the transducer receives signals from all 400 scanning positions (a large receiving aperture along the elevation direction). The in vivo experiment shown in Fig. 5(d)(v) shows that the intestine and several additional skin vessels can be identified with slit-PAT, which are hard to recognize in the 3D-focal line image.

Compared with the rotate-translate scanning methods, slit-PAT is efficient: it does not require changing the scanning geometry and can improve the elevational resolution with only a single scan. However, building a stable thin slit may be an issue when applying slit-PAT to high-frequency probes, because the slit needs to be much thinner.

2.2.4.

Deep learning

Deep-E is a fully dense U-Net102-based deep learning method designed to enhance the elevational resolution in PACT. Given that the axial and lateral resolutions typically surpass the elevational resolution by a significant margin, Deep-E reduces the 3D anisotropic resolution problem to 2D, focusing specifically on the axial-elevational plane during training. This approach enhances the efficiency of both simulation and model training. As shown in Fig. 5(e)(i), Deep-E takes as input an axial-elevational B-scan image formed by stacking all the A-lines in sequence. The output of Deep-E is a 2D image with improved elevational resolution. During model inference, all generated axial-elevational images are concatenated along the lateral direction to form the final 3D image. A pencil lead phantom shows that Deep-E can improve the elevational resolution by up to 50 times. Deep-E was also evaluated in vivo on humans, as shown in Fig. 5(e)(ii). Compared with conventional methods such as the 2D stack and 3D-focal line, Deep-E gives sharper vascular structures with a clean background and, more importantly, is able to extract vascular structures in deep tissue (colored in orange and red) that are difficult to recognize in the 2D stack and 3D-focal line images.94,103

Deep-E brings a new idea of utilizing axial-elevational 2D training data to solve a 3D problem, which simplifies and accelerates training data generation. Moreover, Deep-E makes the processing independent of the number of elements because the experimental data are processed element by element in the axial-elevational plane.

2.3.

Spatial Aliasing

Signal sampling in PACT includes both temporal and spatial sampling (SS). Temporal sampling refers to sampling a continuous-time signal into a discrete-time signal, and Nyquist sampling requires the sampling frequency to be at least twice the maximum frequency of the signal.21 Depending on the UTA geometry, the transducers around the object can be viewed as performing SS. Ideally, the UTA should provide dense SS to satisfy the Nyquist sampling theorem,21,27,104 where the SS interval on the tissue surface should be less than half of the lowest detectable acoustic wavelength. If the spatial Nyquist criterion is not met, aliasing in SS causes artifacts in reconstructed images, even when the temporal Nyquist criterion has been fulfilled. Due to the high cost of a UTA with a large number of elements or limited scanning time, SS is usually sparse in practice. In addition to SS, the back-projection during IR should satisfy the Nyquist sampling theorem as well.101,105 Hu et al.101 analyzed spatial aliasing in a ring-array-based PACT and discovered that the combination of spatial interpolation and temporal filtering can effectively mitigate artifacts caused by aliasing in either IR or SS.

2.3.1.

Spatial aliasing in SS

The spatial aliasing analysis of SS yields the following Nyquist sampling constraint, where R denotes the radius of the ring array, N denotes the total number of transducer elements, α denotes the angle formed by the line connecting the source point and the transducer element, and λc denotes the acoustic wavelength at the cutoff frequency [Fig. 6(a)(i)]:

2πR|cos α|/N < λc/2.

Transforming this inequality into a constraint on the source point location r′ via the law of sines, we obtain the smallest upper limit of |r′|:

|r′| < Nλc/(4π).

The region within this constraint is defined as the one-way Nyquist zone S1. For any source point inside S1, there is no spatial aliasing during SS because the sampling spacing is less than half of the cutoff wavelength [Fig. 6(a)(ii)].

2.3.2.

Spatial aliasing in IR

Similar to the spatial aliasing analysis of SS, IR is subject to a Nyquist sampling constraint as well, and the final result can be written as

|r′| + |r″| < Nλc/(4π),
where r′ is the source point location and r″ is the reconstruction point location.

The region S2 satisfying the following constraint is defined as the two-way Nyquist zone:

S2 = {r : |r| < Nλc/(8π)}.

Spatial aliasing in IR depends on the locations of both the source point and the reconstruction point. It does not appear when both the object and the reconstruction location are inside S2, but it appears for other combinations of objects and reconstruction locations [Fig. 6(a)(ii)].

2.3.3.

Spatial antialiasing in SS and IR

Spatial aliasing solely in IR, but not in SS, can be well addressed by spatial interpolation. To extend the region S2, we can numerically double the number of detection elements (N′ = 2N) through interpolation. The new two-way Nyquist zone S2′ then coincides with S1, indicating that spatial interpolation successfully removes spatial aliasing in IR [Fig. 6(a)(iii)]. Hakakzadeh et al.106 showed that reducing the number of transducers causes artifacts but that the structural similarity improved by 30% after interpolation. Wang et al.107 tested different interpolation methods and proposed an interpolation method named extremum-guided interpolation, which does not require complex calculations and can effectively improve the quality of PA reconstruction under sparse sampling. However, interpolation cannot recover the information lost to spatial aliasing outside S1, where SS itself is aliased.

Hu et al.101 introduced temporal lowpass filtering to eliminate the spatial aliasing in SS: given that S1 is defined by the cutoff wavelength λc, a temporal lowpass filter replaces λc with a longer wavelength λc′. Thus, the one-way Nyquist zone is extended [Fig. 6(a)(iv)] through temporal lowpass filtering, at the expense of spatial resolution, blurring the reconstructed images. To balance spatial antialiasing against high resolution, Hu et al.101 proposed radius-dependent temporal filtering: for the region within S1, the raw PA signals are spatially interpolated before reconstruction; for the region outside S1, a temporal lowpass filter is first applied to the raw signals, followed by spatial interpolation and reconstruction.
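A minimal sketch of this recipe follows, assuming ring-array data arranged as (elements × time samples); the filter order and the factor-of-two upsampling are illustrative choices, and the lowpass branch corresponds to reconstruction outside S1.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample

def antialias(raw, fs, fc_low):
    """Spatially interpolate ring-array data 2x and lowpass the time traces.

    raw    : (n_elements, n_samples) PA channel data
    fs     : temporal sampling rate (Hz)
    fc_low : reduced temporal cutoff frequency (Hz) used outside S1
    """
    n_det, _ = raw.shape
    # Spatial interpolation: numerically double the element count (N' = 2N).
    # Fourier resampling suits a ring array, which is periodic along the
    # element axis.
    dense = resample(raw, 2 * n_det, axis=0)
    # Temporal lowpass filtering replaces lambda_c with a longer wavelength,
    # extending the one-way Nyquist zone at the cost of spatial resolution.
    lp = butter(4, fc_low, btype="low", fs=fs, output="sos")
    return sosfiltfilt(lp, dense, axis=1)
```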

The spatial interpolation and radius-dependent temporal filtering are evaluated in Fig. 6(b). The reconstruction quality is improved by spatial interpolation, and the aliasing artifacts are further mitigated by temporal filtering.

It should be noted that, even when a PACT system has no limited view or anisotropic resolution issue, insufficient SS means that the best reconstruction quality can be guaranteed only within the two-way Nyquist zone S2. After spatial interpolation, the well-reconstructed area can be enlarged to the one-way Nyquist zone S1. The concepts of the one-way and two-way zones are useful guides for system design. For example, based on S1, a 10-MHz full-ring array should have at least 1024 elements to achieve a well-reconstructed area of 24 mm in diameter, which is enough for whole-body mouse imaging.
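With the S1 and S2 expressions above, this design check is a one-liner; the sketch below assumes an SOS of 1500 m/s in the coupling water and reproduces the 1024-element example.

```python
import numpy as np

def nyquist_zone_diameters_mm(n_elements, f_cutoff_hz, sos=1500.0):
    """Diameters (mm) of the one-way (S1) and two-way (S2) Nyquist zones."""
    lam_c = sos / f_cutoff_hz                    # cutoff wavelength (m)
    d_s1 = 2 * n_elements * lam_c / (4 * np.pi)  # S1 radius = N*lambda_c/(4*pi)
    d_s2 = 2 * n_elements * lam_c / (8 * np.pi)  # S2 radius = N*lambda_c/(8*pi)
    return d_s1 * 1e3, d_s2 * 1e3

s1_mm, s2_mm = nyquist_zone_diameters_mm(n_elements=1024, f_cutoff_hz=10e6)
print(f"S1 ~{s1_mm:.1f} mm, S2 ~{s2_mm:.1f} mm")  # ~24.4 mm and ~12.2 mm
```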

3.

Acoustic Heterogeneity (SOS Mismatch)

In PACT reconstruction, a crucial factor is the distribution of acoustic properties (e.g., SOS and acoustic impedance) along the acoustic propagation pathway.108–112 The SOS plays a particularly important role, as it directly determines the arrival times of the PA signals [Fig. 7(a)]. Although acoustic heterogeneity is a broad topic, in this review we focus mainly on SOS mismatch. Notably, the SOS distribution along the acoustic propagation path is inherently heterogeneous, especially for in vivo imaging, with variations between the coupling medium, typically water (~1480 m/s at 20°C), and tissue (~1580 m/s). Any misalignment in the SOS setting can lead to inaccuracies in the reconstructed initial pressure, causing artifacts in the reconstructed images.116

Fig. 7

Challenges and solutions of acoustic heterogeneity. (a) Heterogeneous SOS affects the time delay of the PA signal. (b)(i) MSFC divides the ring array into eight subgroups and reconstructs a region with different SOS values independently. (b)(ii) Illustrations of reconstructions with different SOS values. MSFC measures the correlation coefficients to evaluate SOS matching. (b)(iii) SOS matching results. The peak is taken as the mean SOS along the direction through the two opposite subgroups. (b)(iv) In vivo animal imaging result reconstructed by MSFC. (b)(v) In vivo animal imaging result reconstructed with a single (homogeneous) SOS. (b)(vi–viii) Cryotomy photos of the mouse’s stomach region. The spine and spleen are marked by yellow dashed boxes in the corresponding regions of the cryotomy photos. (b)(ix–xi) The estimated SOS distributions generated by MSFC, roughly at the three cryotomy layers shown in panels (b)(vi–viii). Panel (b) is reproduced with permission from Ref. 113. (c)(i) Visualization of the raw transducer data. (c)(ii) The identified object surface signal. (c)(iii) The reconstructed object shape based on the identified object surface signal in panel (c)(ii). (c)(iv) In vivo animal imaging result reconstructed with a single SOS. (c)(v) In vivo animal imaging result reconstructed by dual SOS reconstruction. The scale bar is 5 mm. Panel (c) is reproduced with permission from Ref. 114. (d)(i) Illustration of the system hardware setup of the ADS-USPACT. (d)(ii) Illustration of the US transmission and the US/PA data acquisition. The red dot represents the sequentially activated transmission element, and the green dots represent the receivers. (d)(iii–iv) Reconstruction results with different SOS values; a single-SOS reconstruction cannot achieve global focus. (d)(v) The estimated dual SOS map. (d)(vi) The dual SOS reconstruction image generated by ADS-USPACT. The scale bar is 4 mm. Panel (d) is reproduced with permission from Ref. 115.

JBO_29_7_070901_f007.png

The SOS mismatch issue is particularly pronounced in full-ring array-based PACT compared with linear array systems.27 Due to the symmetry of the full-ring geometry, the reconstruction receives contributions from transducers located at opposite sides of the ring. Consequently, the SOS setting must be very precise; otherwise, the source points reconstructed from transducers on opposite sides may fail to align properly, causing artifacts such as shadows, arcs, and double copies. Linear array-based PACT systems can also suffer from the SOS mismatch issue, but the reconstruction artifacts are not as severe as those in full-ring array-based PACT.

3.1.

Single SOS Searching

Reconstructing PACT images under the assumption of a single, universal SOS simplifies the process, even though this assumption is not entirely correct and leads to reconstruction artifacts. Researchers therefore often opt for this simplification and search for the optimal SOS value that yields the fewest artifacts.117
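A minimal sketch of such a search follows, using gradient energy as a hypothetical focus metric; the reconstruction routine, the metric, and the SOS range are stand-ins rather than the specific method of Ref. 117.

```python
import numpy as np

def sharpness(img):
    """Gradient-energy focus metric: larger when features are in focus."""
    gx, gy = np.gradient(img)
    return float(np.mean(gx**2 + gy**2))

def search_single_sos(reconstruct, candidates=np.arange(1460.0, 1601.0, 5.0)):
    """Return the candidate SOS (m/s) whose reconstruction maximizes the metric.

    `reconstruct` is any routine mapping an SOS value to a 2D image,
    e.g., the delay-and-sum sketch in Sec. 1.
    """
    scores = [sharpness(reconstruct(sos)) for sos in candidates]
    return float(candidates[int(np.argmax(scores))])
```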

3.2.

Joint Reconstruction

Joint reconstruction (JR) is an iterative model-based method that reconstructs the initial pressure and the SOS distribution simultaneously.110,118 The two subproblems are solved alternately until a convergence condition is satisfied. The reconstruction of the initial pressure is a convex optimization problem, since the objective function is convex for a fixed SOS; the SOS distribution reconstruction, however, is a non-convex problem. Huang et al.119 found that accurate JR images were not produced when the spatially variant absorbed optical energy density distribution (initial pressure) was deficient, although the jointly reconstructed initial pressure could still be more accurate than one reconstructed with a constant SOS. In addition, the jointly reconstructed initial pressure was more accurate than the jointly reconstructed SOS distribution, which indicates that the inverse problem of reconstructing the SOS distribution is more unstable than the reconstruction of the initial pressure.119

Another JR solver is adaptive PACT.120 Cui et al.,120 inspired by adaptive optics, introduced the indirect wavefront measurement idea to PACT to solve the JR problem. The image is reconstructed patch by patch. Within each patch, the wavefront distortion is almost identical (an “isoplanatic patch”) and can be extracted from the local point spread function (PSF). Similar to “phase diversity,” the local PSF, which has long been regarded as an unknown, can be computationally found from a stack of local images reconstructed with different delays.121 Thereby, the full image can be better focused via piecewise deconvolution. After the wavefronts of all the patches are determined, they can be used collectively to compute the global SOS map. This bypasses the cumbersome global search of the SOS map and improves the stability and reliability of the solution.

3.3.

Multi-segmented Feature Coupling

As shown in Fig. 7(b)(i), SOS mismatch leads to a misalignment of the source points reconstructed from opposite transducers. Thus, the reconstruction results of opposite transducers can serve as a good indicator of the accuracy of the SOS setting. The feature coupling method divides the transducers into two semicircles and reconstructs two images independently.122 The SOS distribution is iteratively adjusted to maximize the correlation between the two reconstructed images. Building upon the concept of feature coupling, multi-segmented feature coupling (MSFC) divides the ring array into eight subgroups, and each pair of subgroups located at opposite sides reconstructs a region independently with different SOS values.113 MSFC measures the correlation coefficients between the two images reconstructed from two opposite subgroups of transducers [Fig. 7(b)(ii)]. The peak determines the mean SOS along the direction through the two opposite subgroups [Fig. 7(b)(iii)].
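A minimal sketch of the coupling criterion follows; `recon_subgroup` is a hypothetical stand-in for a reconstruction restricted to a single transducer subgroup.

```python
import numpy as np

def best_coupled_sos(recon_subgroup, group_a, group_b, candidates):
    """Pick the SOS maximizing the correlation of two opposite-subgroup images.

    recon_subgroup(group, sos) -> 2D image reconstructed from one subgroup only;
    group_a and group_b index two subgroups on opposite sides of the ring.
    """
    def coupling(sos):
        img_a = recon_subgroup(group_a, sos).ravel()
        img_b = recon_subgroup(group_b, sos).ravel()
        return np.corrcoef(img_a, img_b)[0, 1]  # correlation coefficient

    scores = [coupling(sos) for sos in candidates]
    return candidates[int(np.argmax(scores))], scores
```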

The results are shown in Fig. 7(b)(iv–xi). If reconstructed properly, a vessel perpendicular to the imaging plane is reconstructed as a point [Fig. 7(b)(iv)]; if the SOS estimate is wrong, the vessel is distorted into a ring shape [Fig. 7(b)(v)]. The estimated SOS distributions [Fig. 7(b)(ix–xi)] show that the SOS of the stomach region (coconut oil) is significantly lower, with the profile and location roughly matching those in the cryotomy photos [Fig. 7(b)(vi–viii)].

MSFC optimizes the SOS distribution based on feature coupling, which avoids cumbersome matrix calculations and saves substantial computation time compared with JR. However, feature coupling relies on object features, which may limit its generalizability, as not all tissue areas are rich in features suitable for SOS estimation. In addition, the operator must manually select the features and draw boundaries; a fully automatic method would be much preferred for future practical applications.

3.4.

Dual SOS Reconstruction

To simplify the SOS map estimation and reconstruction while improving the image quality, the dual SOS assumption has been adopted in PACT reconstruction.27 In dual SOS reconstruction, a binary SOS map is created, consisting of two SOS values representing the water area and the tissue object area. This simplification rests on the premise that the SOS variation within soft tissue is relatively small compared with the difference between water and tissue. The effectiveness of dual SOS reconstruction hinges on two key components: (1) the estimated object boundary and (2) the estimated SOS values.

3.4.1.

Object surface PA signal detection

Reference 114 utilized a U-Net123 model to identify the object surface PA signal in the raw data and reconstruct the object shape [Fig. 7(c)(i–iii)]. In this method, however, the two SOS values assigned to the binary SOS map are predefined as 1480 and 1570 m/s, two commonly used preset SOS values for water and soft tissues.124 The results shown in Fig. 7(c)(iv–v) demonstrate the benefits of the dual SOS approach: it not only corrects the SOS distribution but also suppresses the artifacts. The idea of utilizing the object surface PA signal to reconstruct the object boundary is promising because the surface signal travels only in water, where the SOS is known. However, the SOS in the object is still preset by the operator, which may not be the best solution; the surface PA signal idea could be further developed to adaptively estimate the SOS in the object.

3.4.2.

US + PACT

Instead of estimating the SOS distribution, the object boundary and the optimal SOS can also be detected by US imaging. Jose et al.125 proposed passive element-enriched PACT, in which a passive point source was introduced to profile the SOS distribution. References 115 and 126 integrated an active US source with PA imaging to develop an adaptive dual-speed US and PACT (ADS-USPACT) system that automatically segments the object boundary and determines the optimized SOS values. In ADS-USPACT, the SOS in water is determined by the water temperature, and the object boundary is detected by US imaging. To find the optimal SOS in the object, ADS-USPACT searches for the maximum coherence factor among the US signals over various candidate sample SOS values.

Figure 7(d)(iii–vi) provides a visual comparison between ADS-USPACT and single SOS reconstruction. A single SOS cannot achieve a global improvement in imaging quality: e.g., 1508.9 m/s brings the boundary vessels into focus, and 1518 m/s brings the central vascular features into focus, but no single SOS brings the whole object into focus. ADS-USPACT, on the other hand, keeps both the boundary and the central vessels in focus.

ADS-USPACT achieves good dual SOS reconstruction quality at the expense of additional US imaging hardware and reconstruction overhead. Dual SOS reconstruction is a promising solution, as it simplifies the SOS distribution and can generate high-quality images, but it still needs further development to become computationally and hardware friendly.

3.5.

Deep Learning

Although linear array PACT systems are not as sensitive to SOS mismatch as ring array PACT systems, SOS mismatch also causes artifacts in linear array PACT images; e.g., a point source may be reconstructed as an arc if the SOS is not matched [Fig. 8(c)]. Reference 117 proposed a deep learning-based SOS calibration method, evaluated on U-Net, Segnet, and a proposed hybrid of the two named SegU-net [Fig. 8(a)]. As shown in Fig. 8(b), the input data are a group of reconstructed images based on eight different single-SOS reconstructions, ranging from 1460 to 1600 m/s, and the target is the corresponding ground truth image. Although all the training data were generated in a homogeneous medium by k-Wave simulation, SegU-net shows the ability to reconstruct and alleviate artifacts in a heterogeneous medium. Figure 8(c) shows in vivo human forearm PA imaging results reconstructed by single-SOS reconstruction and by SegU-net. The SOS aberration and streak artifacts are remarkably reduced in the SegU-net-corrected PA images. We expect that deep learning can be further extended to ring arrays or 3D geometry arrays, such as planar or spherical arrays.
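A sketch of the corresponding input construction is shown below: the network consumes a channel stack of single-SOS reconstructions. The eight-value sweep follows the setup above, while `reconstruct` is a hypothetical stand-in for the beamformer.

```python
import numpy as np

def build_multi_sos_input(reconstruct, sos_values=np.linspace(1460.0, 1600.0, 8)):
    """Stack eight single-SOS reconstructions as network input channels.

    reconstruct(sos) -> 2D image beamformed with the given SOS (m/s).
    """
    stack = np.stack([reconstruct(sos) for sos in sos_values], axis=0)  # (8, H, W)
    return stack[np.newaxis]  # add a batch dimension -> (1, 8, H, W)
```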

Fig. 8

(a) Model architectures of U-net, Segnet, and SegU-net. (b) Illustration of the data flow of SegU-net. The model takes reconstructions with different SOS values as input. Based on the training dataset, the deep neural network is trained to correct the SOS aberration and streak artifacts in the PA images. (c) In vivo human forearm PA images reconstructed via conventional beamforming (left) and SegU-net (right). BF, beamforming. All panels are reproduced with permission from Ref. 117.

JBO_29_7_070901_f008.png

4.

Fluence Correction

The amplitude of the PA signal depends on both the optical absorption and the laser fluence. However, tissue attenuation varies with wavelength, as illustrated in Fig. 9(a)(i). Consequently, estimates of optical absorption from a PA image can be compromised, resulting in significant spectral shape changes and a shift in the wavelength of maximum absorption, particularly in deeper tissue regions, as shown in Fig. 9(a)(ii). Such fluence-induced distortion of PA signals impairs the accuracy and quantitative interpretability of the resulting images.130 Implementing fluence correction techniques is therefore essential to mitigate these challenges and ensure precise quantification. In this section, we review spectroscopic PA imaging, iterative optimization methodologies, and deep learning models employed to achieve fluence compensation.
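To fix ideas, the sketch below applies a deliberately simplified one-dimensional correction in which the fluence decays as exp(−μeff·z) under the diffusion approximation; the μeff values are placeholders, and the methods reviewed in this section estimate the fluence far more carefully.

```python
import numpy as np

def correct_spectrum(pa_vs_depth, depths_cm, mu_eff_per_cm):
    """Divide out an exponential fluence model from depth-resolved PA spectra.

    pa_vs_depth   : (n_wavelengths, n_depths) PA amplitudes
    depths_cm     : (n_depths,) imaging depths (cm)
    mu_eff_per_cm : (n_wavelengths,) effective attenuation coefficients (1/cm)
    """
    fluence = np.exp(-np.outer(mu_eff_per_cm, depths_cm))  # F ~ exp(-mu_eff * z)
    return pa_vs_depth / fluence  # approximately proportional to mu_a(lambda, z)

# Synthetic check: a flat unit spectrum attenuated by the model is fully recovered.
depths = np.linspace(0.0, 2.0, 5)       # cm
mu_eff = np.array([1.0, 1.4])           # 1/cm, placeholder values for two wavelengths
pa = np.exp(-np.outer(mu_eff, depths))  # decayed "measurement"
print(correct_spectrum(pa, depths, mu_eff))  # all ones
```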

Fig. 9

Fluence correction challenges and solutions. Reprinted with permission from Refs. 127–129. (a)(i) Wavelength- and depth-dependent optical fluence in tissue can significantly influence optical absorption spectrum measurements. (a)(ii) The spectrum of gold nanorods shifts as the imaging depth increases. (a)(iii) The scanning system comprises a kHz-rate, wavelength-tunable diode-pumped laser, a fiber delivery system, and a US scanner, with the laser emitting variable-wavelength pulses triggered by the scanner while maintaining a high repetition rate. (a)(iv) Light from various fibers travels varying distances to reach a target. (a)(v) The amplitude of partial PA images, obtained by single-fiber irradiation, is influenced by light absorption and scattering in tissue and depends on the distance between each fiber and a typical absorber within the imaging field. (a)(vi) Real-time compensation to obtain wavelength-independent fluence. (b)(i) Optimization process for extracting the light fluence distribution and conducting fluence correction. (b)(ii) Initial image. Red arrows highlight a decrease in image intensity caused by optical attenuation. (b)(iii) Fluence-corrected image. The scale bar is 3 mm. (c)(i) Reference image. (c)(ii) TR reconstruction image. (c)(iii) Fluence correction result using the U-Net deep learning model.

JBO_29_7_070901_f009.png

4.1.

Spectroscopic PA Imaging

Spectroscopic imaging approaches have been proposed for the automated correction of wavelength-dependent fluence variations.127,131,132 Kim et al.131,132 corrected the wavelength-dependent fluence distribution and validated the performance in phantom studies using a conventional handheld US probe. Jeng et al.127 proposed a configuration in which 10 fibers are evenly distributed along each elevational edge of the US transducer array, as depicted in Fig. 9(a)(iii). Unlike previous systems that simultaneously delivered laser pulses into all fibers in a bundle, this setup sequentially couples light into individual fibers. A partial PA image is reconstructed for each laser pulse, contributing to the estimation of the laser fluence. Importantly, as shown in Fig. 9(a)(iv), light emerging from different fibers travels distinct distances to reach a target. Figure 9(a)(v) illustrates how the PA signal amplitude varies with fiber index, while the upper-right plot in Fig. 9(a)(v) shows the PA signal loss with distance due to light attenuation, a source of computational error. Notably, fluence losses with depth differ across wavelengths. Amplitude variations as a function of the distance between a pixel and the source are acquired for numerous points whose partial PA image amplitudes are above the noise floor. These measurements serve as input data for the fluence reconstruction process, which leverages the light diffusion model. With this procedure repeated for all wavelengths, the fluence can be disentangled from the PA image, leading to the retrieval of the true light absorption spectrum of molecular absorbers, as shown in Fig. 9(a)(vi). This method has demonstrated its superiority in phantom, ex vivo, and in vivo experiments.127

4.2.

Iterative Optimization

Iterative optimization methodologies can also be applied for fluence correction.133–135 The optimization process [Fig. 9(b)(i)] begins with a 2D reconstruction of an initial image [Fig. 9(b)(ii)] using model-based acoustic reconstruction; the red arrows in Fig. 9(b)(ii) mark the low image intensity caused by optical attenuation. To expedite the optimization, the initial image is segmented into regions based on prior knowledge of the object structure, with constant optical properties (including absorption and scattering coefficients) that can be tuned during optimization within each region. The optimization uses a δ-Eddington approximation of the radiative transfer equation as the light fluence model. Notably, the artifacts resulting from optical attenuation in Fig. 9(b)(ii) are effectively eliminated after fluence correction, as shown in Fig. 9(b)(iii). In this work, an invariant system response and fixed parameters were assumed for the phantom experiments, but it would be hard to use the same parameter settings in future in vivo experiments.128 In another work, Naser et al.134 combined finite-element-based local fluence correction with SNR regularization and validated its performance in both ex vivo and in vivo experiments.

4.3.

Deep Learning

Deep learning approaches can also be used to recover optical absorption maps by correcting for the fluence effect.90,136–138 Figure 9(c)(i) presents the reference ground truth image, while Fig. 9(c)(ii) shows the image reconstructed using TR, which is blurred and noisy. In Fig. 9(c)(ii), the yellow arrow points to noticeable reconstruction artifacts, the orange arrow highlights the impact of fluence on small vasculature in deep tissue regions, and the green arrow indicates undesirable vasculature in the reconstructed image. The impact of optical fluence on PA images can be removed by treating end-to-end mapping as a supervised learning problem: a neural network is trained to minimize a loss function and output fluence-corrected images. Figure 9(c)(iii) displays the corresponding reconstruction using the U-Net DL model, where the shape of the vasculature is successfully recovered in deep regions.129 The DL models proposed by Arumugaraj et al.138 were shown to be 17 times faster than solving the diffusion equation for fluence correction. Complex, non-homogeneous media and background tissue properties are all considered in these fluence compensation schemes, which is critical for future clinical usage.129,138 Chen et al.137 proposed a DL approach to recover the optical absorption coefficients of biological tissues and verified it in phantom experiments, while Arumugaraj et al.138 validated their DL models with both in silico and in vivo datasets.

5.

Conclusion

In summary, although 2D PACT has been widely used in pre-clinical studies and clinical translations,139–141 it still faces challenges for quantitative measurements. These challenges encompass the limited view,79,142 anisotropic resolution along varying spatial axes,126,143 spatial aliasing,101 reconstruction artifacts caused by acoustic heterogeneity,144,145 and quantitative spectral unmixing with fluence correction.146,147 Effectively mitigating these challenges necessitates innovative strategies spanning the domains of hardware engineering,20,148 signal processing methodologies,115,149 and deep learning paradigms.78,85,144

The challenge of limited view imaging, stemming from the inherent constraints of linear/planar transducer arrays, has been addressed through diverse methodologies, including the deployment of acoustic reflectors, ultrasonic heating encoding, iterative optimization, and the integration of advanced deep learning approaches. These interventions are purposefully devised to expand the scope of IR, even when confronted with linear detectors possessing limited viewing angles, thus markedly enhancing imaging fidelity. DL, in particular, allows precise high-resolution PACT reconstruction even with sparse viewing angles, and enhancing DL methods, for instance by incorporating transformers, offers a means to handle long-range dependencies effectively.

Anisotropic resolution issues have been methodically approached through techniques that include rotate-translate scanning, slit-PAT, and deep learning methodologies. These measures substantially augment the elevational resolution and achieve near-isotropic resolution in the resulting images, enabling a more lucid visualization of the intricate structures inherent in biological tissues. However, these methods still need further development: the rotate-translate scanning methods require time-consuming multi-angle scanning, and slit-PAT may face difficulties with high-frequency probes, for which the slit needs to be much thinner.

Spatial aliasing issues can be mitigated by spatial interpolation and temporal lowpass filtering. However, there is still a trade-off between spatial antialiasing and high-resolution reconstruction for regions outside the one-way Nyquist zone, which could be addressed via location-dependent antialiasing but with significantly increased computational cost.105

Acoustic heterogeneity (SOS mismatch), characterized by disparities in the SOS among distinct tissue types, has been systematically addressed via innovative techniques, such as JR, dual SOS reconstruction, and deep learning-driven SOS calibration. These strategies rectify artifacts associated with SOS variations and refine image quality, thereby facilitating a more precise and reliable interpretation of PA images. Although current solutions have shown that the reconstructed image quality can be improved substantially at the expense of additional hardware or large computational overheads (iterative methods), an efficient and adaptive solution to the SOS mismatch problem remains necessary in future research.

Incorporating fluence correction has emerged as an imperative facet of PACT to account for fluctuations in laser fluence and its interaction with tissue absorption properties. Sequential fiber-based data acquisition, iterative optimization methodologies, and deep learning models have been adeptly employed to disentangle fluence-induced effects from PA images. These endeavors culminate in more accurate representations of absorption characteristics and bolster the credibility of quantitative analyses. However, implementing real-time fluence correction remains challenging but crucial for dynamic imaging scenarios; methods that can adapt to changes in tissue geometry and optical properties in real time are desirable. Another concern is that fluence correction may need to be adapted for the different wavelengths used in multispectral PACT, where each wavelength experiences distinct absorption and scattering in tissue.

Overall, the field of PA imaging continues to evolve,150–153 with advances spanning both hardware and software tailored to overcome these intrinsic limitations. These strides promise to significantly enhance the accuracy, resolution, and reliability of PACT, positioning it as an invaluable tool in diverse biomedical applications, particularly for high-fidelity imaging of biological tissues and structures.

Disclosures

L.S.L. has a financial interest in BLOCH Quantum Imaging Solutions, which did not support this work. The other authors declare no competing financial interests.

Code and Data Availability

Data sharing is not applicable to this article as no new data were created or analyzed.

Acknowledgments

The authors would like to acknowledge funding support from Rice University (Grant Nos. F10000205, G10002659, G10003101, and G10003409), the National Institutes of Health (Grant No. U54 EB034652), and the Cancer Prevention and Research Institute of Texas (Grant No. RP240091).

References

1. R. Weissleder and M. Nahrendorf, "Advancing biomedical imaging," Proc. Natl. Acad. Sci. U. S. A., 112 (47), 14424–14428, https://doi.org/10.1073/pnas.1508524112 (2015).
2. C. M. Tempany and B. J. McNeil, "Advances in biomedical imaging," JAMA, 285 (5), 562–567, https://doi.org/10.1001/jama.285.5.562 (2001).
3. V. Ntziachristos et al., "Looking and listening to light: the evolution of whole-body photonic imaging," Nat. Biotechnol., 23 (3), 313–320, https://doi.org/10.1038/nbt1074 (2005).
4. V. Ntziachristos, "Going deeper than microscopy: the optical imaging frontier in biology," Nat. Methods, 7 (8), 603–614, https://doi.org/10.1038/nmeth.1483 (2010).
5. M. Heijblom et al., "Photoacoustic image patterns of breast carcinoma and comparisons with magnetic resonance imaging and vascular stained histopathology," Sci. Rep., 5 (1), 11778, https://doi.org/10.1038/srep11778 (2015).
6. E. Najafzadeh et al., "Evaluation of multi-wavelengths LED-based photoacoustic imaging for maximum safe resection of glioma: a proof of concept study," Int. J. Comput. Assist. Radiol. Surg., 15, 1053–1062, https://doi.org/10.1007/s11548-020-02191-2 (2020).
7. L. Lin et al., "Single-breath-hold photoacoustic computed tomography of the breast," Nat. Commun., 9 (1), 2352, https://doi.org/10.1038/s41467-018-04576-z (2018).
8. S. J. Schambach et al., "Application of micro-CT in small animal imaging," Methods, 50 (1), 2–13, https://doi.org/10.1016/j.ymeth.2009.08.007 (2010).
9. A. Greco et al., "Ultrasound biomicroscopy in small animal research: applications in molecular and preclinical imaging," BioMed Res. Int., 2012, 519238, https://doi.org/10.1155/2012/519238 (2012).
10. K. H. Song, G. Stoica and L. V. Wang, "In vivo three-dimensional photoacoustic tomography of a whole mouse head," Opt. Lett., 31 (16), 2453–2455, https://doi.org/10.1364/OL.31.002453 (2006).
11. L. V. Wang and S. Hu, "Photoacoustic tomography: in vivo imaging from organelles to organs," Science, 335 (6075), 1458–1462, https://doi.org/10.1126/science.1216210 (2012).
12. X. Wang et al., "Three-dimensional laser-induced photoacoustic tomography of mouse brain with the skin and skull intact," Opt. Lett., 28 (19), 1739–1741, https://doi.org/10.1364/OL.28.001739 (2003).
13. X. Wang et al., "Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain," Nat. Biotechnol., 21 (7), 803, https://doi.org/10.1038/nbt839 (2003).
14. H. F. Zhang et al., "Functional photoacoustic microscopy for high-resolution and noninvasive in vivo imaging," Nat. Biotechnol., 24 (7), 848–851, https://doi.org/10.1038/nbt1220 (2006).
15. P. Beard, "Biomedical photoacoustic imaging," Interface Focus, 1 (4), 602–631, https://doi.org/10.1098/rsfs.2011.0028 (2011).
16. N. Nyayapathi et al., "Dual scan mammoscope (DSM)—a new portable photoacoustic breast imaging system with scanning in craniocaudal plane," IEEE Trans. Biomed. Eng., 67 (5), 1321–1327, https://doi.org/10.1109/TBME.2019.2936088 (2020).
17. M. Toi et al., "Visualization of tumor-related blood vessels in human breast by photoacoustic imaging system with a hemispherical detector array," Sci. Rep., 7, 41970, https://doi.org/10.1038/srep41970 (2017).
18. "Seno Medical's Market-Ready Imagio® OA/US breast imaging system receives supplemental FDA PMA approval," https://link.gale.com/apps/doc/A708462653/HRCA?u=anon~f8b372d&sid=sitemap&xid=25f9a125 (2022).
19. L. V. Wang and J. Yao, "A practical guide to photoacoustic tomography in the life sciences," Nat. Methods, 13 (8), 627–638, https://doi.org/10.1038/nmeth.3925 (2016).
20. S. Na and L. V. Wang, "Photoacoustic computed tomography for functional human brain imaging," Biomed. Opt. Express, 12 (7), 4056–4083, https://doi.org/10.1364/BOE.423707 (2021).
21. C. Tian et al., "Spatial resolution in photoacoustic computed tomography," Rep. Progr. Phys., 84 (3), 036701, https://doi.org/10.1088/1361-6633/abdab9 (2021).
22. D. Wang et al., "Deep tissue photoacoustic computed tomography with a fast and compact laser system," Biomed. Opt. Express, 8 (1), 112–123, https://doi.org/10.1364/BOE.8.000112 (2017).
23. L. V. Wang, "Multiscale photoacoustic microscopy and computed tomography," Nat. Photonics, 3 (9), 503–509, https://doi.org/10.1038/nphoton.2009.157 (2009).
24. L. V. Wang, "Tutorial on photoacoustic microscopy and computed tomography," IEEE J. Sel. Top. Quantum Electron., 14 (1), 171–179, https://doi.org/10.1109/JSTQE.2007.913398 (2008).
25. J. Yao et al., "Noninvasive photoacoustic computed tomography of mouse brain metabolism in vivo," NeuroImage, 64, 257–266, https://doi.org/10.1016/j.neuroimage.2012.08.054 (2013).
26. Y. Li et al., "Snapshot photoacoustic topography through an ergodic relay for high-throughput imaging of optical absorption," Nat. Photonics, 14, 164–170, https://doi.org/10.1038/s41566-019-0576-2 (2020).
27. L. Li et al., "Single-impulse panoramic photoacoustic computed tomography of small-animal whole-body dynamics at high spatiotemporal resolution," Nat. Biomed. Eng., 1 (5), 0071, https://doi.org/10.1038/s41551-017-0071 (2017).
28. J. Yao et al., "High-speed label-free functional photoacoustic microscopy of mouse brain in action," Nat. Methods, 12 (5), 407, https://doi.org/10.1038/nmeth.3336 (2015).
29. J. Aguirre et al., "Precision assessment of label-free psoriasis biomarkers with ultra-broadband optoacoustic mesoscopy," Nat. Biomed. Eng., 1 (5), 0068, https://doi.org/10.1038/s41551-017-0068 (2017).
30. D. Razansky et al., "Multispectral opto-acoustic tomography of deep-seated fluorescent proteins in vivo," Nat. Photonics, 3 (7), 412–417, https://doi.org/10.1038/nphoton.2009.98 (2009).
31. L. Li et al., "Snapshot photoacoustic topography through an ergodic relay of optical absorption in vivo," Nat. Protoc., 16 (5), 2381–2394, https://doi.org/10.1038/s41596-020-00487-w (2021).
32. Y. Gu et al., "Application of photoacoustic computed tomography in biomedical imaging: a literature review," Bioeng. Transl. Med., 8 (2), e10419, https://doi.org/10.1002/btm2.10419 (2023).
33. J. Yang et al., "Focusing light inside live tissue using reversibly switchable bacterial phytochrome as a genetically encoded photochromic guide star," Sci. Adv., 5 (12), eaay1211, https://doi.org/10.1126/sciadv.aay1211 (2019).
34. L. Li, J. Yao and L. V. Wang, "Photoacoustic tomography of neural systems," in Neural Engineering, 349–378, Springer International Publishing, Cham (2020).
35. L. Li and L. V. Wang, "Recent advances in photoacoustic tomography," BME Front., 2021, 9823268, https://doi.org/10.34133/2021/9823268 (2021).
36. L. Li et al., "Integration of multitargeted polymer-based contrast agents with photoacoustic computed tomography: an imaging technique to visualize breast cancer intratumor heterogeneity," ACS Nano, 15 (2), 2413–2427, https://doi.org/10.1021/acsnano.0c05893 (2021).
37. J. Shi et al., "High-resolution, high-contrast mid-infrared imaging of fresh biological samples with ultraviolet-localized photoacoustic microscopy," Nat. Photonics, 13 (9), 609–615, https://doi.org/10.1038/s41566-019-0441-3 (2019).
38. Z. Wu et al., "A microrobotic system guided by photoacoustic computed tomography for targeted navigation in intestines in vivo," Sci. Robot., 4 (32), eaax0613, https://doi.org/10.1126/scirobotics.aax0613 (2019).
39. P. Zhang et al., "In vivo superresolution photoacoustic computed tomography by localization of single dyed droplets," Light Sci. Appl., 8 (1), 36, https://doi.org/10.1038/s41377-019-0147-9 (2019).
40. L. Li et al., "Multiscale photoacoustic tomography of a genetically encoded near-infrared FRET biosensor," Adv. Sci., 8 (21), 2102474, https://doi.org/10.1002/advs.202102474 (2021).
41. J. Weber, P. C. Beard and S. E. Bohndiek, "Contrast agents for molecular photoacoustic imaging," Nat. Methods, 13 (8), 639–650, https://doi.org/10.1038/nmeth.3929 (2016).
42. J. Kim et al., "Deep learning acceleration of multiscale superresolution localization photoacoustic imaging," Light Sci. Appl., 11 (1), 131, https://doi.org/10.1038/s41377-022-00820-w (2022).
43. Y. Qu et al., "Dichroism-sensitive photoacoustic computed tomography," Optica, 5 (4), 495–501, https://doi.org/10.1364/OPTICA.5.000495 (2018).
44. Y. S. Zhang et al., "Optical-resolution photoacoustic microscopy for volumetric and spectral analysis of histological and immunochemical samples," Angew. Chem. Int. Ed., 53 (31), 8099–8103, https://doi.org/10.1002/anie.201403812 (2014).
45. L. Li et al., "Label-free photoacoustic tomography of whole mouse brain structures ex vivo," Neurophotonics, 3 (3), 035001, https://doi.org/10.1117/1.NPh.3.3.035001 (2016).
46. T. Imai et al., "High-throughput ultraviolet photoacoustic microscopy with multifocal excitation," J. Biomed. Opt., 23 (3), 036007, https://doi.org/10.1117/1.JBO.23.3.036007 (2018).
47. Z. Xu, C. Li and L. V. Wang, "Photoacoustic tomography of water in phantoms and tissue," J. Biomed. Opt., 15 (3), 036019, https://doi.org/10.1117/1.3443793 (2010).
48. Z. Xu, Q. Zhu and L. V. Wang, "In vivo photoacoustic tomography of mouse cerebral edema induced by cold injury," J. Biomed. Opt., 16 (6), 066020, https://doi.org/10.1117/1.3584847 (2011).
49. Y. He et al., "In vivo label-free photoacoustic flow cytography and on-the-spot laser killing of single circulating melanoma cells," Sci. Rep., 6, 39616, https://doi.org/10.1038/srep39616 (2016).
50. L. Li, "Multi-contrast photoacoustic computed tomography," California Institute of Technology (2019).
51. R. Cao et al., "Optical-resolution photoacoustic microscopy with a needle-shaped beam," Nat. Photonics, 17 (1), 89–95, https://doi.org/10.1038/s41566-022-01112-w (2023).
52. Y. Zhang et al., "Ultrafast longitudinal imaging of haemodynamics via single-shot volumetric photoacoustic tomography with a single-element detector," Nat. Biomed. Eng., 7, 1–14, https://doi.org/10.1038/s41551-023-01149-4 (2023).
53. J. Yao et al., "Multiscale photoacoustic tomography using reversibly switchable bacterial phytochrome as a near-infrared photochromic probe," Nat. Methods, 13 (1), 67, https://doi.org/10.1038/nmeth.3656 (2016).
54. M. Zhou et al., "Nanoparticles for photoacoustic imaging of vasculature," in Design and Applications of Nanoparticles in Biomedical Imaging, 337–356, Springer, Cham (2017).
55. L. Li et al., "Small near-infrared photochromic protein for photoacoustic multi-contrast imaging and detection of protein interactions in vivo," Nat. Commun., 9 (1), 2734, https://doi.org/10.1038/s41467-018-05231-3 (2018).
56. R. Zhang et al., "Multiscale photoacoustic tomography of neural activities with GCaMP calcium indicators," J. Biomed. Opt., 27 (9), 096004, https://doi.org/10.1117/1.JBO.27.9.096004 (2022).
57. J. Yang, S. Choi and C. Kim, "Practical review on photoacoustic computed tomography using curved ultrasound array transducer," Biomed. Eng. Lett., 12, 19–35, https://doi.org/10.1007/s13534-021-00214-8 (2022).
58. G. Li et al., "Isotropic-resolution linear-array-based photoacoustic computed tomography through inverse Radon transform," Proc. SPIE, 9323, 93230I, https://doi.org/10.1117/12.2076660 (2015).
59. G. Li et al., "Multiview Hilbert transformation for full-view photoacoustic computed tomography using a linear array," J. Biomed. Opt., 20 (6), 066010, https://doi.org/10.1117/1.JBO.20.6.066010 (2015).
60. M. Xu and L. V. Wang, "Universal back-projection algorithm for photoacoustic computed tomography," Phys. Rev. E, 71 (1), 016706, https://doi.org/10.1103/PhysRevE.71.016706 (2005).
61. P. Burgholzer et al., "Temporal back-projection algorithms for photoacoustic tomography with integrating line detectors," Inverse Probl., 23 (6), S65, https://doi.org/10.1088/0266-5611/23/6/S06 (2007).
62. B. Wang et al., "Back-projection algorithm in generalized form for circular-scanning-based photoacoustic tomography with improved tangential resolution," Quant. Imaging Med. Surg., 9 (3), 491, https://doi.org/10.21037/qims.2019.03.12 (2019).
63. E. Bossy et al., "Time reversal of photoacoustic waves," Appl. Phys. Lett., 89 (18), 184108, https://doi.org/10.1063/1.2382732 (2006).
64. B. E. Treeby, E. Z. Zhang and B. T. Cox, "Photoacoustic tomography in absorbing acoustic media using time reversal," Inverse Probl., 26 (11), 115003, https://doi.org/10.1088/0266-5611/26/11/115003 (2010).
65. B. T. Cox and B. E. Treeby, "Artifact trapping during time reversal photoacoustic imaging for acoustically heterogeneous media," IEEE Trans. Med. Imaging, 29 (2), 387–396, https://doi.org/10.1109/TMI.2009.2032358 (2009).
66. A. Rosenthal, D. Razansky and V. Ntziachristos, "Fast semi-analytical model-based acoustic inversion for quantitative optoacoustic tomography," IEEE Trans. Med. Imaging, 29 (6), 1275–1285, https://doi.org/10.1109/TMI.2010.2044584 (2010).
67. S. Bu et al., "Model-based reconstruction integrated with fluence compensation for photoacoustic tomography," IEEE Trans. Biomed. Eng., 59 (5), 1354–1363, https://doi.org/10.1109/TBME.2012.2187649 (2012).
68. X. L. Dean-Ben et al., "Accurate model-based reconstruction algorithm for three-dimensional optoacoustic tomography," IEEE Trans. Med. Imaging, 31 (10), 1922–1928, https://doi.org/10.1109/TMI.2012.2208471 (2012).
69. A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Imaging Sci., 2 (1), 183–202, https://doi.org/10.1137/080716542 (2009).
70. A. Pattyn et al., "Model-based optical and acoustical compensation for photoacoustic tomography of heterogeneous mediums," Photoacoustics, 23, 100275, https://doi.org/10.1016/j.pacs.2021.100275 (2021).
71. M. Mozaffarzadeh et al., "Model-based photoacoustic image reconstruction using compressed sensing and smoothed L0 norm," Proc. SPIE, 10494, 104943Z, https://doi.org/10.1117/12.2291535 (2018).
72. B. Huang et al., "Improving limited-view photoacoustic tomography with an acoustic reflector," J. Biomed. Opt., 18 (11), 110505, https://doi.org/10.1117/1.JBO.18.11.110505 (2013).
73. J. Zhu et al., "Mitigating the limited view problem in photoacoustic tomography for a planar detection geometry by regularised iterative reconstruction," IEEE Trans. Med. Imaging, 42 (9), 2603–2615, https://doi.org/10.1109/TMI.2023.3271390 (2023).
74. W. Liu et al., "Correcting the limited view in optical-resolution photoacoustic microscopy," J. Biophotonics, 11 (2), e201700196, https://doi.org/10.1002/jbio.201700196 (2018).
75. L. Wang et al., "Ultrasonic-heating-encoded photoacoustic tomography with virtually augmented detection view," Optica, 2 (4), 307–312, https://doi.org/10.1364/OPTICA.2.000307 (2015).
76. G. Li et al., "Tripling the detection view of high-frequency linear-array-based photoacoustic computed tomography by using two planar acoustic reflectors," Quant. Imaging Med. Surg., 5 (1), 57, https://doi.org/10.3978/j.issn.2223-4292.2014.11.09 (2015).
77. D. Waibel et al., "Reconstruction of initial pressure from limited view photoacoustic images using deep learning," Proc. SPIE, 10494, 104942S, https://doi.org/10.1117/12.2288353 (2018).
78. S. Choi et al., "Deep learning enhances multiparametric dynamic volumetric photoacoustic computed tomography in vivo (DL-PACT)," Adv. Sci., 10 (1), 2202089, https://doi.org/10.1002/advs.202202089 (2023).
79. P. Zhang et al., "High-resolution deep functional imaging of the whole mouse brain by photoacoustic computed tomography in vivo," J. Biophotonics, 11 (1), e201700024, https://doi.org/10.1002/jbio.201700024 (2018).
80. L. Wang et al., "Ultrasound-heated photoacoustic flowmetry," J. Biomed. Opt., 18 (11), 117003, https://doi.org/10.1117/1.JBO.18.11.117003 (2013).
81. L. Wang et al., "Ultrasonically encoded photoacoustic flowgraphy in biological tissue," Phys. Rev. Lett., 111 (20), 204301, https://doi.org/10.1103/PhysRevLett.111.204301 (2013).
82. T. M. Bücking et al., "Processing methods for photoacoustic Doppler flowmetry with a clinical ultrasound scanner," J. Biomed. Opt., 23 (2), 026009, https://doi.org/10.1117/1.JBO.23.2.026009 (2018).
83. J. Xia, J. Yao and L. V. Wang, "Photoacoustic tomography: principles and advances," Electromagn. Waves, 147, 1–22, https://doi.org/10.2528/PIER14032303 (2014).
84. R. Ellwood et al., "Photoacoustic imaging using acoustic reflectors to enhance planar arrays," J. Biomed. Opt., 19 (12), 126012, https://doi.org/10.1117/1.JBO.19.12.126012 (2014).
85. H. Deng et al., "Machine-learning enhanced photoacoustic computed tomography in a limited view configuration," Proc. SPIE, 11186, 111860J, https://doi.org/10.1117/12.2539148 (2019).
86. S. Guan et al., "Limited-view and sparse photoacoustic tomography for neuroimaging with deep learning," Sci. Rep., 10 (1), 8510, https://doi.org/10.1038/s41598-020-65235-2 (2020).
87. J. Schwab et al., "Deep learning of truncated singular values for limited view photoacoustic tomography," Proc. SPIE, 10878, 1087836, https://doi.org/10.1117/12.2508418 (2019).
88. H. Zhang et al., "A new deep learning network for mitigating limited-view and under-sampling artifacts in ring-shaped photoacoustic tomography," Comput. Med. Imaging Graph., 84, 101720, https://doi.org/10.1016/j.compmedimag.2020.101720 (2020).
89. H. Lan et al., "Hybrid neural network for photoacoustic imaging reconstruction," in 41st Annu. Int. Conf. IEEE Eng. in Med. and Biol. Soc. (EMBC), 6367–6370, https://doi.org/10.1109/EMBC.2019.8857019 (2019).
90. C. Yang et al., "Review of deep learning for photoacoustic imaging," Photoacoustics, 21, 100215, https://doi.org/10.1016/j.pacs.2020.100215 (2021).
91. H. Zhang et al., "Deep-E: a fully-dense neural network for improving the elevation resolution in linear-array-based photoacoustic tomography," IEEE Trans. Med. Imaging, 41 (5), 1279–1288, https://doi.org/10.1109/TMI.2021.3137060 (2021).
92. L. Li et al., "Multiview Hilbert transformation in full-ring transducer array-based photoacoustic computed tomography," J. Biomed. Opt., 22 (7), 076017, https://doi.org/10.1117/1.JBO.22.7.076017 (2017).
93. J. Gateau et al., "Single-side access, isotropic resolution, and multispectral three-dimensional photoacoustic imaging with rotate-translate scanning of ultrasonic detector array," J. Biomed. Opt., 20 (5), 056004, https://doi.org/10.1117/1.JBO.20.5.056004 (2015).
94. J. Xia et al., "Three-dimensional photoacoustic tomography based on the focal-line concept," J. Biomed. Opt., 16 (9), 090505, https://doi.org/10.1117/1.3625576 (2011).
95. Y. Wang et al., "Slit-enabled linear-array photoacoustic tomography with near isotropic spatial resolution in three dimensions," Opt. Lett., 41 (1), 127–130, https://doi.org/10.1364/OL.41.000127 (2016).
96. Y. Garje et al., "Multiview compounding for linear array-based 3D photoacoustic imaging," Proc. SPIE, 12379, 123790B, https://doi.org/10.1117/12.2650782 (2023).
97. Y. Wang et al., "Review of methods to improve the performance of linear array-based photoacoustic tomography," J. Innov. Opt. Health Sci., 13 (02), 2030003, https://doi.org/10.1142/S1793545820300037 (2020).
98. J. Gateau et al., "Three-dimensional optoacoustic tomography using a conventional ultrasound linear detector array: whole-body tomographic system for small animals," Med. Phys., 40 (1), 013302, https://doi.org/10.1118/1.4770292 (2013).
99. M. Schwarz, A. Buehler and V. Ntziachristos, "Isotropic high resolution optoacoustic imaging with linear detector arrays in bi-directional scanning," J. Biophotonics, 8 (1–2), 60–70, https://doi.org/10.1002/jbio.201400021 (2015).
100. H. H. Barrett, "The Radon transform and its applications," in Progress in Optics, 217–286, Birkhäuser, Boston (1984).
101. P. Hu et al., "Spatiotemporal antialiasing in photoacoustic computed tomography," IEEE Trans. Med. Imaging, 39 (11), 3535–3547, https://doi.org/10.1109/TMI.2020.2998509 (2020).
102. S. Guan et al., "Fully dense UNet for 2-D sparse photoacoustic tomography artifact removal," IEEE J. Biomed. Health Inf., 24 (2), 568–576, https://doi.org/10.1109/JBHI.2019.2912935 (2019).
103. D. Wang et al., "Three-dimensional photoacoustic tomography through coherent-weighted focal-line-based image reconstruction," Proc. SPIE, 10064, 100643G, https://doi.org/10.1117/12.2253276 (2017).
104. Y. Xu, M. Xu and L. V. Wang, "Exact frequency-domain reconstruction for thermoacoustic tomography. II. Cylindrical geometry," IEEE Trans. Med. Imaging, 21 (7), 829–833, https://doi.org/10.1109/TMI.2002.801171 (2002).
105. P. Hu, L. Li and L. V. Wang, "Location-dependent spatiotemporal antialiasing in photoacoustic computed tomography," IEEE Trans. Med. Imaging, 42 (4), 1210–1224, https://doi.org/10.1109/TMI.2022.3225565 (2022).
106. S. Hakakzadeh, Z. Kavehvash and M. Pramanik, "Artifact removal factor for circular-view photoacoustic tomography," in IEEE Int. Ultrason. Symp. (IUS), 1–4, https://doi.org/10.1109/IUS54386.2022.9958228 (2022).
107. H. Wang et al., "An extremum-guided interpolation for sparsely sampled photoacoustic imaging," Photoacoustics, 32, 100535, https://doi.org/10.1016/j.pacs.2023.100535 (2023).
108. J. Poudel et al., "Mitigation of artifacts due to isolated acoustic heterogeneities in photoacoustic computed tomography using a variable data truncation-based reconstruction method," J. Biomed. Opt., 22 (4), 041018, https://doi.org/10.1117/1.JBO.22.4.041018 (2017).
109. T. P. Matthews et al., "Compensation for air voids in photoacoustic computed tomography image reconstruction," Proc. SPIE, 9708, 970841, https://doi.org/10.1117/12.2213307 (2016).
110. T. P. Matthews et al., "Parameterized joint reconstruction of the initial pressure and sound speed distributions for photoacoustic computed tomography," SIAM J. Imaging Sci., 11 (2), 1560–1588, https://doi.org/10.1137/17M1153649 (2018).
111. S. Manohar et al., "Concomitant speed-of-sound tomography in photoacoustic imaging," Appl. Phys. Lett., 91 (13), 131911, https://doi.org/10.1063/1.2789689 (2007).
112. J. Jose et al., "Speed-of-sound compensated photoacoustic tomography for accurate imaging," Med. Phys., 39 (12), 7262–7271, https://doi.org/10.1118/1.4764911 (2012).
113. K. Deng et al., "Multi-segmented feature coupling for jointly reconstructing initial pressure and speed of sound in photoacoustic computed tomography," J. Biomed. Opt., 27 (7), 076001, https://doi.org/10.1117/1.JBO.27.7.076001 (2022).
114. T. Yue et al., "Double speed-of-sound photoacoustic image reconstruction at 10 frames-per-second with automatic segmentation," Proc. SPIE, 12320, 123201D, https://doi.org/10.1117/12.2651263 (2022).
115. Y. Zhang and L. Wang, "Adaptive dual-speed ultrasound and photoacoustic computed tomography," Photoacoustics, 27, 100380, https://doi.org/10.1016/j.pacs.2022.100380 (2022).
116. R. G. Willemink et al., "Imaging of acoustic attenuation and speed of sound maps using photoacoustic measurements," Proc. SPIE, 6920, 692013, https://doi.org/10.1117/12.770061 (2008).
117. S. Jeon et al., "A deep learning-based model that reduces speed of sound aberrations for improved in vivo photoacoustic imaging," IEEE Trans. Image Process., 30, 8773–8784, https://doi.org/10.1109/TIP.2021.3120053 (2021).
118. J. Poudel, Y. Lou and M. A. Anastasio, "A survey of computational frameworks for solving the acoustic inverse problem in three-dimensional photoacoustic computed tomography," Phys. Med. Biol., 64 (14), 14TR01, https://doi.org/10.1088/1361-6560/ab2017 (2019).
119. C. Huang et al., "Joint reconstruction of absorbed optical energy density and sound speed distributions in photoacoustic computed tomography: a numerical investigation," IEEE Trans. Comput. Imaging, 2 (2), 136–149, https://doi.org/10.1109/TCI.2016.2523427 (2016).
120. M. Cui et al., "Adaptive photoacoustic computed tomography," Photoacoustics, 21, 100223, https://doi.org/10.1016/j.pacs.2020.100223 (2021).
121. R. A. Gonsalves, "Phase retrieval and diversity in adaptive optics," Opt. Eng., 21 (5), 829–832, https://doi.org/10.1117/12.7972989 (1982).
122. C. Cai et al., "Feature coupling photoacoustic computed tomography for joint reconstruction of initial pressure and sound speed in vivo," Biomed. Opt. Express, 10 (7), 3447–3462, https://doi.org/10.1364/BOE.10.003447 (2019).
123. O. Ronneberger, P. Fischer and T. Brox, "U-net: convolutional networks for biomedical image segmentation," Lect. Notes Comput. Sci., 9351, 234–241, https://doi.org/10.1007/978-3-319-24574-4_28 (2015).
124. E. E. Christensen, T. S. Curry and J. E. Dowdey, "An introduction to the physics of diagnostic radiology" (1978).
125. J. Jose et al., "Passive element enriched photoacoustic computed tomography (PER PACT) for simultaneous imaging of acoustic propagation properties and light absorption," Opt. Express, 19 (3), 2093–2104, https://doi.org/10.1364/OE.19.002093 (2011).
126. E. Merčep et al., "Transmission–reflection optoacoustic ultrasound (TROPUS) computed tomography of small animals," Light Sci. Appl., 8 (1), 18, https://doi.org/10.1038/s41377-019-0130-5 (2019).
127. G.-S. Jeng et al., "Real-time interleaved spectroscopic photoacoustic and ultrasound (PAUS) scanning with simultaneous fluence compensation and motion correction," Nat. Commun., 12 (1), 716, https://doi.org/10.1038/s41467-021-20947-5 (2021).
128. F. M. Brochu et al., "Towards quantitative evaluation of tissue absorption coefficients using light fluence correction in optoacoustic tomography," IEEE Trans. Med. Imaging, 36 (1), 322–331, https://doi.org/10.1109/TMI.2016.2607199 (2016).
129. A. Madasamy et al., "Deep learning methods hold promise for light fluence compensation in three-dimensional optoacoustic imaging," J. Biomed. Opt., 27 (10), 106004, https://doi.org/10.1117/1.JBO.27.10.106004 (2022).
130. B. Mc Larney et al., "Uniform light delivery in volumetric optoacoustic tomography," J. Biophotonics, 12 (6), e201800387, https://doi.org/10.1002/jbio.201800387 (2019).
131. M. Kim et al., "Correction of wavelength-dependent laser fluence in swept-beam spectroscopic photoacoustic imaging with a hand-held probe," Photoacoustics, 19, 100192, https://doi.org/10.1016/j.pacs.2020.100192 (2020).
132. M. Kim et al., "Fluence compensation for real-time spectroscopic photoacoustic imaging" (2020).
133. X. Zhou et al., "Evaluation of fluence correction algorithms in multispectral photoacoustic imaging," Photoacoustics, 19, 100181, https://doi.org/10.1016/j.pacs.2020.100181 (2020).
134. M. A. Naser et al., "Improved photoacoustic-based oxygen saturation estimation with SNR-regularized local fluence correction," IEEE Trans. Med. Imaging, 38 (2), 561–571, https://doi.org/10.1109/TMI.2018.2867602 (2018).
135. F. Guerra and D. S. Dumani, "An iterative method of light fluence distribution estimation for quantitative photoacoustic imaging," Proc. SPIE, 11642, 116423H, https://doi.org/10.1117/12.2582647 (2021).
136. H. Deng et al., "Deep learning in photoacoustic imaging: a review," J. Biomed. Opt., 26 (4), 040901, https://doi.org/10.1117/1.JBO.26.4.040901 (2021).
137. T. Chen et al., "A deep learning method based on U-Net for quantitative photoacoustic imaging," Proc. SPIE, 11240, 112403V, https://doi.org/10.1117/12.2543173 (2020).
138. M. Arumugaraj, "Deep learning methods for light fluence compensation in two-dimensional and three-dimensional photoacoustic imaging," J. Biomed. Opt., 27 (10), 106004, https://doi.org/10.1117/1.JBO.27.10.106004 (2022).
139. L. Lin et al., "High-speed three-dimensional photoacoustic computed tomography for preclinical research and clinical translation," Nat. Commun., 12 (1), 882, https://doi.org/10.1038/s41467-021-21232-1 (2021).
140. L. Lin et al., "Photoacoustic computed tomography of breast cancer in response to neoadjuvant chemotherapy," Adv. Sci., 8 (7), 2003396, https://doi.org/10.1002/advs.202003396 (2021).
141. Y. Bao et al., "Development of a digital breast phantom for photoacoustic computed tomography," Biomed. Opt. Express, 12 (3), 1391–1406, https://doi.org/10.1364/BOE.416406 (2021).
142. P. Omidi et al., "A novel dictionary-based image reconstruction for photoacoustic computed tomography," Appl. Sci., 8 (9), 1570, https://doi.org/10.3390/app8091570 (2018).
143. Y. Duan et al., "Spherical-matching hyperbolic-array photoacoustic computed tomography," J. Biophotonics, 14 (6), e202100023, https://doi.org/10.1002/jbio.202100023 (2021).
144. T. Vu et al., "A generative adversarial network for artifact removal in photoacoustic computed tomography with a linear-array transducer," Exp. Biol. Med., 245 (7), 597–605, https://doi.org/10.1177/1535370220914285 (2020).
145. T. Wang, W. Liu and C. Tian, "Combating acoustic heterogeneity in photoacoustic computed tomography: a review," J. Innov. Opt. Health Sci., 13 (03), 2030007, https://doi.org/10.1142/S1793545820300074 (2020).
146. M. R. Chatni et al., "Tumor glucose metabolism imaged in vivo in small animals with whole-body photoacoustic computed tomography," J. Biomed. Opt., 17 (7), 076012, https://doi.org/10.1117/1.JBO.17.7.076012 (2012).
147. H. Zuo et al., "Spectral crosstalk in photoacoustic computed tomography," Photoacoustics, 26, 100356, https://doi.org/10.1016/j.pacs.2022.100356 (2022).
148. S. Agrawal et al., "Light-emitting-diode-based multispectral photoacoustic computed tomography system," Sensors, 19 (22), 4861, https://doi.org/10.3390/s19224861 (2019).
149. P. Omidi et al., "PATLAB: a graphical computational software package for photoacoustic computed tomography research," Photoacoustics, 28, 100404, https://doi.org/10.1016/j.pacs.2022.100404 (2022).
150. T. Zhao et al., "Minimally invasive photoacoustic imaging: current status and future perspectives," Photoacoustics, 16, 100146, https://doi.org/10.1016/j.pacs.2019.100146 (2019).
151. S. Manohar and M. Dantuma, "Current and future trends in photoacoustic breast imaging," Photoacoustics, 16, 100134, https://doi.org/10.1016/j.pacs.2019.04.004 (2019).
152. K. Kratkiewicz et al., "Ultrasound and photoacoustic imaging of breast cancer: clinical systems, challenges, and future outlook," J. Clin. Med., 11 (5), 1165, https://doi.org/10.3390/jcm11051165 (2022).
153. A. Zare et al., "Clinical theranostics applications of photo-acoustic imaging as a future prospect for cancer," J. Control. Release, 351, 805–833, https://doi.org/10.1016/j.jconrel.2022.09.016 (2022).

Biography

Shunyao Zhang is a PhD student at Rice University, Houston, Texas, United States. He received his master’s degree in electrical and computer engineering from Carnegie Mellon University, Pittsburgh, Pennsylvania, United States. His research interests are photoacoustic imaging and deep learning.

Jingyi Miao is a PhD student at Rice University, Houston, Texas, United States. She received her bachelor's degree in electrical engineering from the University of Glasgow and the University of Electronic Science and Technology of China. Her research interest is photoacoustic imaging.

Lei S. Li is an assistant professor of electrical and computer engineering and bioengineering at Rice University. He obtained his PhD from the Department of Electrical Engineering at California Institute of Technology in 2019. He received his MS degree at Washington University in St. Louis in 2016. His research focuses on developing next-generation medical imaging technology for understanding the brain better, diagnosing early-stage cancer, and wearable monitoring of human vital signs.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Shunyao Zhang, Jingyi Miao, and Lei S. Li "Challenges and advances in two-dimensional photoacoustic computed tomography: a review," Journal of Biomedical Optics 29(7), 070901 (12 July 2024). https://doi.org/10.1117/1.JBO.29.7.070901
Received: 18 October 2023; Accepted: 19 June 2024; Published: 12 July 2024
KEYWORDS: Image restoration; Photoacoustic tomography; Acoustics; Tissues; Aliasing; Transducers; 3D image reconstruction
