Open Access
Photoacoustic-enabled automatic vascular navigation: accurate and naked-eye real-time visualization of deep-seated vessels
Shu Pan, Li Wang, Yuanzheng Ma, Guangyu Zhang, Rui Liu, Tao Zhang, Kedi Xiong, Siyu Chen, Jian Zhang, Wende Li, Sihua Yang
Abstract

Accurate localization of blood vessels with image navigation is a key element in vascular-related medical research and vascular surgery. However, current vascular navigation techniques cannot provide naked-eye visualization of deep vascular information noninvasively and with high resolution, resulting in inaccurate vascular anatomy and diminished surgical success rates. Here, we introduce a photoacoustic-enabled automatic vascular navigation method that combines photoacoustic computed tomography with augmented and mixed reality, for the first time to our knowledge enabling accurate and noninvasive visualization of the deep microvascular network within tissue in real time on the real surgical surface. This approach achieves precise vascular localization (<0.89 mm) and low vascular relocation latency (<1 s) through a zero-mean normalization-based visual tracking algorithm and a curved-surface-fitting algorithm. Further, subcutaneous vessels as small as ∼0.15 mm in diameter in the rabbit thigh and as deep as ∼7 mm in the human arm can be vividly projected onto the skin surface with a computer vision-based projection tracking system to simulate preoperative and intraoperative vascular localization. This strategy thus provides a way to visualize deep vessels on the surgical surface without damage and with precise image navigation, opening an avenue for the application of photoacoustic imaging in surgical operations.

1. Introduction

Accurate localization of the vascular trajectory with preoperative image navigation is essential for vascular-related surgery, especially in cases where the anatomical position of the blood vessels is accompanied by congenital abnormalities.1 Identifying these abnormalities through preoperative imaging and planning vascular surgery in advance can reduce damage to blood vessels and surrounding tissues and avoid complications. For example, in perforator flap transplantation, high variability in the vascular anatomy is a major challenge; preoperative planning is therefore critical for rapidly and accurately finding perforators, in an effort to minimize the sacrifice of muscle tissue around the perforators and enhance the efficiency of the surgery.2,3 As another example, in coronary intervention surgery, using the radial artery as the catheter entrance offers a high success rate and few complications. Because anatomical abnormalities of the radial artery affect the success rate and operation time,4,5 locating and identifying such abnormalities through preoperative images is vital. However, preoperative images are often presented on a 2D display screen, and mentally combining these digital images with the patient's surgical surface during the operation requires a high degree of physician experience, resulting in inefficient and unsafe image navigation.

With the development of augmented-reality and mixed-reality technology, a series of augmented-reality and mixed-reality devices have emerged to combine preoperative images with real surgical surfaces, greatly improving the efficiency of surgery and reducing surgical risks. For example, using the Microsoft HoloLens for perforator flap transplantation, computed tomography angiography (CTA)6,7 was used to image the complete vascular anatomy, and the HoloLens was used to combine the real surgical surface with CTA vascular images to accurately locate the perforators.8–10 In another case, researchers proposed directly projecting the preoperative CTA vascular image on the surgical surface to locate perforators.11–13 However, in these cases combined with augmented reality or mixed reality, the preoperative CTA imaging cannot detect blood vessels noninvasively: CTA involves ionizing radiation and requires intravenous contrast media, which may lead to serious complications, such as an allergy to the contrast media and impaired renal function.14,15 Some researchers have proposed using transmission-mode near-infrared (NIR) imaging, which is noninvasive, combined with augmented reality to rapidly locate blood vessels.16,17 However, NIR imaging has a shallow imaging depth of 5.5 mm in phantoms18 and 3 mm in tissue.19 Doppler ultrasound20 is also a common modality for locating perforators and radial arteries.21,22 However, Doppler ultrasound has low sensitivity for imaging small vessels.23

Photoacoustic imaging (PAI)24–28 utilizes the specific absorption properties of hemoglobin to achieve direct imaging of blood vessels with high sensitivity and deep penetration. Noninvasive imaging of blood vessels can be performed by PAI to provide high-resolution vascular images for preoperative planning. Photoacoustic computed tomography (PACT)29,30 is an embodiment of PAI. Compared with ultrasound imaging, PACT offers rich endogenous and exogenous optical contrast and has advantages in high-resolution imaging of subcutaneous microvessels.31,32 Additionally, in contrast to CTA, PACT does not require intravenous injection of contrast agents, and the method is free of ionizing radiation. Notably, the imaging depth of PACT is much greater than that of transmission-mode NIR imaging; Wang et al. demonstrated that the imaging depth of PACT in vivo is up to 4 cm.33–35 However, there is no reported use of PACT in combination with augmented reality and mixed reality for vascular localization in vascular surgery.

Based on the above, we propose a photoacoustic-enabled automatic vascular navigation method that combines PACT with augmented reality and mixed reality for noninvasive and accurate localization of blood vessels. In this navigation strategy, PACT was used to noninvasively reconstruct 2D and 3D vascular images, and 3D surface reconstruction technology was used to reconstruct a 3D surface model of the surgical surface. With the assistance of 3D point cloud registration, the 3D vascular image and 3D surface model were fused to augment the interactivity between the 3D vascular image and the surgical surface on the computer screen for vascular navigation. In addition, high-resolution 2D vascular images were modulated with a miniaturized projector based on a spatial light modulator (SLM). By means of visual localization and tracking technology based on the robot operating system, the 2D vascular images were precisely superimposed on the real surgical surface, enabling the deep vessels of the real surgical site to be visualized on the surgical surface in real time. Moreover, a curved-surface-fitting algorithm and a zero-mean normalization-based visual tracking algorithm were proposed to enhance the accuracy of vascular localization. This approach provides reliable assistance for locating blood vessels noninvasively and accurately by means of augmented reality and mixed reality, promising to improve the safety and success rate of vascular operations.

2. Structure and Method

2.1. Photoacoustic-Enabled Automatic Vascular Navigation

As shown in Fig. 1(a), the experimental facility for photoacoustic-enabled automatic vascular navigation consists of a PAI system, a computer vision-based projection tracking system (VPTS), and a computer. The PAI system is based on our previous work,36 a PACT system with a hyperbolic-array transducer that consists of 128 elements with a central frequency of 5.4 MHz and a nominal bandwidth of 65%. The PACT system is used to perform preoperative imaging of blood vessels; the VPTS is then used to accurately overlay the preoperative photoacoustic (PA) vascular images on the surgical surface in real time. The VPTS includes an RGBD (R: red, G: green, B: blue, D: depth) camera (Intel RealSense D435i, Intel, United States) and a projector, where the projector is designed and manufactured based on an SLM (PLUTO-NIR-011, HOLOEYE, Germany). The SLM has the advantages of stable phase delay, a multifocal plane, aberration calibration by software, a simple optical engine, and high optical efficiency,37 and it has been shown to enable miniaturized high-resolution projectors.38,39 During projection, the preoperative images are registered with the real surgical surface using visual localization technology, and the RGBD camera monitors the target in real time during the surgery. When the target moves involuntarily, the pose transformation can be estimated; the preoperative images are then transformed and reprojected onto the surgical surface so that the vascular images are still projected on the surgical surface in situ after the target moves. All vision-based image registration and tracking calculations are performed on a computer.

Fig. 1

(a) Schematic diagram of the experimental facility for photoacoustic-enabled automatic vascular navigation. The PA probe and the PA imaging system are used to image the target, the reconstructed image is accurately projected on the target surface in real time by the VPTS, and the RGBD camera locates and tracks the target in real time, so that the preoperative images can still be accurately reprojected on the target surface when the target moves. (b) Device diagram of the VPTS, including the light path of the projector and RGBD camera. L1, L2, and L3 are convex lenses. HWF is a half-wave plate. (c) System data flow diagram.


The specific optical path of the projector is shown in Fig. 1(b). We use a small, common four-color (red, green, white, and blue) light-emitting diode (LED; CL-P10, LIGITEK, China) as the light source to increase the integration of the system. The whole optical path is divided into four parts.

  • 1. LED lighting part. A four-color array LED with an external heat dissipation device is used to provide 10 W incoherent light to suppress the zero-order diffraction of SLM modulation.

  • 2. Beam control part. The beam sequentially passes through a 4F beam expansion system consisting of lens L1, a pinhole, and lens L2 (f1 = 25.4 mm, f2 = 30 mm), and then passes through a polarization filter and a half-wave plate (HWF) to adjust the relative phase delay of the beam on the SLM. Specifically, the incident light passes through the polarizer and the HWF in sequence in front of the SLM so that the light has a specified polarization state (such as 30 deg) to maximize the SLM modulation efficiency.40

  • 3. SLM modulation part. This part receives the LED light from the beam control part. By changing the voltage applied to the conductive elements beneath the aligned liquid-crystal-on-silicon layer, the reflected wavefront of the incident light can be modulated at the resolution of the SLM.41 To make the structure of the projector smaller, a reflector is used to fold the beam at 45 deg. Another polarizer, acting as a polarization analyzer, is placed after the SLM in the beam path to block stray light generated by reflections at the surfaces of the SLM.

  • 4. Beam focusing part. Due to the space limitation of the integrated design, only one focusing lens L3 (f3 = 150 mm) is used in this part to focus the light on the target.

The data flow diagram of the whole system is shown in Fig. 1(c). The whole process is divided into two parallel parts, with two sensors as inputs. The first input is the PAI system. The PAI system is used to perform preoperative imaging of the target and reconstruct 2D and 3D PA vascular images. The second input is the images of the RGBD camera, which are used to reconstruct the 3D surface model of the target. At the same time, the RGBD camera is also used to locate the target in real time and estimate the pose transform when the target moves involuntarily. Once the data obtained from the two inputs are ready, the augmented reality and mixed reality start working.
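
To make this data flow more concrete, the sketch below illustrates one simplified way the intraoperative branch could be implemented: the target's motion between a reference camera frame and the current frame is estimated and used to re-warp the projected image. It is a planar approximation written with OpenCV for illustration only; the function and parameter names are assumptions, not the authors' software, and the full method in Sec. 2.2 estimates a 3D pose [Eqs. (3)–(9)] rather than a 2D homography.

```python
# Hypothetical sketch of the intraoperative branch of the data flow in Fig. 1(c).
# A planar (homography) approximation is used for illustration only; the actual
# method (Sec. 2.2) estimates a 3D pose transformation. Names are placeholders.
import numpy as np
import cv2

def track_and_reproject(pa_image, ref_gray, cur_gray, H_cam_to_proj, proj_size):
    """Re-warp the preoperative PA image so it stays registered after the target moves.

    ref_gray / cur_gray: 8-bit grayscale camera frames before and after the movement.
    H_cam_to_proj: 3x3 homography from camera pixels to projector pixels (calibration).
    proj_size: (width, height) of the projector image.
    """
    # Track surface features from the reference frame to the current frame.
    pts_ref = cv2.goodFeaturesToTrack(ref_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, cur_gray, pts_ref, None)
    good_ref = pts_ref[status.ravel() == 1]
    good_cur = pts_cur[status.ravel() == 1]

    # Planar motion of the target expressed in camera coordinates.
    H_motion, _ = cv2.findHomography(good_ref, good_cur, cv2.RANSAC, 3.0)

    # Express the motion in projector coordinates and re-warp the projected image.
    H_proj = H_cam_to_proj @ H_motion @ np.linalg.inv(H_cam_to_proj)
    return cv2.warpPerspective(pa_image, H_proj, proj_size)
```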

2.2. Augmented Reality and Mixed Reality

The specific implementation of augmented reality and mixed reality is shown in Fig. 2. The algorithm flow chart of the whole system is shown in Fig. 2(a). The 2D and 3D PA vascular images and the RGBD camera images are the two inputs of the whole system. First, the coordinate systems must be unified; once the homography matrix H_CP between the projector and the camera is solved, the calibration is complete. The specific calibration principle is shown in Fig. 2(d). So that the calibration results carry the physical characteristics of the imaging system, and thus allow accurate vascular localization after the PA vascular image is projected, we directly use imaging data from the PA imaging system for calibration. After the calibration, the system enters the concrete implementation stage, which is divided into two parts: augmented reality and mixed reality.
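
As a rough illustration of this calibration step, the snippet below estimates a homography between matched camera and projector pixel coordinates of the same physical landmarks. The point sets, the mapping direction (camera to projector), and the function names are illustrative assumptions rather than the authors' calibration procedure.

```python
# Minimal sketch of the camera-projector calibration (not the authors' procedure).
# cam_pts / proj_pts are matched pixel coordinates of the same physical landmarks
# as seen by the camera and as addressed in the projector image.
import numpy as np
import cv2

def calibrate_camera_projector(cam_pts, proj_pts):
    """Estimate a homography mapping camera pixels to projector pixels."""
    cam_pts = np.asarray(cam_pts, dtype=np.float32).reshape(-1, 1, 2)
    proj_pts = np.asarray(proj_pts, dtype=np.float32).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(cam_pts, proj_pts, cv2.RANSAC, 3.0)
    return H

def camera_to_projector(H, pts):
    """Transform points detected in the camera image into the projector frame."""
    pts = np.asarray(pts, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```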

Fig. 2

(a) Flow chart of the system algorithm. (b) Detailed implementation steps of the preoperative image registration algorithm. (c) Schematic diagram of the implementation of the intraoperative image-tracking algorithm. (d) Schematic of the calibration of the camera and projector. (e)–(g) 3D surface models of the target reconstructed using computer vision. (h) 3D point cloud image of the 3D PA image. (i) Augmented-reality image after fusion of the 3D surface model and the 3D PA image.


In the augmented-reality part, the RGBD images are first used to reconstruct the 3D surface of the target,42 and then principal component analysis (PCA) is performed on the 3D surface model [Figs. 2(e)–2(g)] and the 3D PA vascular image [Fig. 2(h)] to align their orientations. After that, the iterative closest point (ICP) algorithm is used to fuse the 3D surface model of the target and the 3D PA vascular image, and the fused augmented-reality image is finally displayed on the screen, as shown in Fig. 2(i). The use of PCA reduces the data dimensionality and the amount of computation while improving the accuracy and success rate of the ICP registration. It can be seen that the 3D PA image does not perfectly overlap with the 3D surface model because there is an error in vision-based 3D surface reconstruction.
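
The following sketch shows one way to realize this PCA pre-alignment followed by ICP, assuming both the 3D PA vascular image and the reconstructed surface are available as N×3 point arrays. The authors report a PCL/C++ implementation; Open3D is used here purely for illustration, and the correspondence-distance threshold is an arbitrary placeholder.

```python
# Rough sketch of PCA pre-alignment followed by ICP fusion (illustration only).
import numpy as np
import open3d as o3d

def pca_axes(points):
    """Return the centroid and principal axes (columns) of an Nx3 point set."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return centroid, eigvecs[:, ::-1]          # axes sorted from largest to smallest variance

def pca_prealign(source_pts, target_pts):
    """Build a rigid transform that roughly aligns the principal axes of two clouds."""
    c_s, R_s = pca_axes(source_pts)
    c_t, R_t = pca_axes(target_pts)
    R = R_t @ R_s.T
    if np.linalg.det(R) < 0:                   # keep a proper rotation (det = +1)
        R_t[:, 2] *= -1
        R = R_t @ R_s.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = c_t - R @ c_s
    return T

def fuse_with_icp(pa_points, surface_points, max_dist=2.0):
    """Refine the PCA pre-alignment with point-to-point ICP."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pa_points))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(surface_points))
    init = pca_prealign(pa_points, surface_points)
    reg = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return reg.transformation
```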

In the mixed-reality part, the goal is to precisely project the preoperative PA vascular images on the surgical surface, which is divided into two parts: (1) preoperative image registration and (2) intraoperative image tracking.

  • 1. For preoperative image registration, we utilize the VPTS to identify any posture of the surgical surface after the end of the preoperative imaging procedure. First, the camera takes a picture of the surgical surface and transforms it into the projection coordinate system using the calibration result. Then, feature points are extracted from the PA image and the surgical surface image of the projection coordinate system, respectively, assuming that s1,i is a feature point of the PA image and s2,i is a feature point of the surgical surface. If the preoperative image is successfully registered with the surgical surface, then there is the following relationship:

    Eq. (1)

    s_{2,i} = R s_{1,i} + t,
    where R ∈ SO(3) is a rotation matrix and t is a translation vector. After defining the error, the extracted feature points are used to construct a least squares problem to solve for R and t. Supposing that there are n feature points, there is the following optimization relationship:

    Eq. (2)

    \arg\min_{R,t} \sum_{i=1}^{n} \left\| s_{2,i} - (R s_{1,i} + t) \right\|^{2}.

After solving the transformation relationship for R and t, the preoperative PA image s1 in the projection coordinate system can be transformed to the projected image (a numerical sketch of this least-squares solution, together with the ZNCC weighting used below, is given at the end of this subsection). However, when the target is a curved surface, a projection error will occur if the 2D images are directly projected onto the curved surface; therefore, it is necessary to design an algorithm for curved-surface fitting. The proposed curved-surface-fitting algorithm is shown in Fig. 3. After curved-surface fitting, the final projected images can be projected onto the surgical surface by the projector and registered with the real surgical surface. The specific implementation steps of preoperative image registration are shown in Fig. 2(b).

  • 2. Considering the involuntary movement of the patient during surgery, target pose estimation is used to transform the projected images to reregister them with the surgical surface after the patient moves. This process is called intraoperative image tracking. During the surgery, the camera captures the surgical surface images at a frame rate of 30 fps, and the pose transformation after the target moves is estimated. Then, the projected images can be transformed and reprojected in situ on the surgical surface to realize naked-eye real-time visualization, as shown in Fig. 2(c). This figure shows the detailed principle. Before explaining it, we need to clarify several symbol definitions:

Fig. 3

(a) PA image of the phantom. (b) Schematic diagram of 2D image projection on the 3D surface. A projection error is caused by directly projecting a 2D image on the curved surface: c1 and c2 are two points on the curved surface, c1c2 is the 2D projection of the 3D image on the Z = 0 plane, p0 is the center of the projector, and p1 and p2 are two points in the projected image. The points p1′ and p2′ at which p1 and p2 land when projected by the projector do not coincide with the points c1 and c2 on the real surface. (c) Schematic diagram of the proposed curved-surface-fitting method. c0 is the center of the camera, and the values of a1, a2, and b can be calculated using 3D points. The lengths of oc1 and oc2 can be solved by approximate ellipse fitting. The points c1′ and c2′ are the two points of the 2D PA image on the Z = 0 plane after curved-surface fitting, and p1″ and p2″ are the two points in the projected image after curved-surface fitting. (d) Result of projecting the PA image on the curved surface before surface fitting. (e) Result of projecting the PA image on the curved surface after surface fitting (Video 2, MP4, 6.51 MB [URL: https://doi.org/10.1117/1.APN.2.4.046001.s2]).


T_CT is the pose transformation of the target relative to the camera and includes rotation R and translation t, where T_CT ∈ SE(3); Π is the projection function from the 3D world to the camera plane; and Π⁻¹ is the back-projection function from the camera plane to the 3D world.

In the camera coordinate system, suppose that x1 is the pixel point of the 3D point P1 before the target moves and x2 is the pixel point of the 3D point P2 after the target moves. p1 and p2 are the pixel points of the 3D points P1 and P2 in the projection coordinate system, respectively, and in the 3D space P1 and P2 have the following relationship:

Eq. (3)

P_2 = T_{CT} P_1.

The pixel relationship between two images can be obtained from the camera projection function,

Eq. (4)

x_2 = \Pi(T_{CT} P_1).

Based on the photometric invariance principle, a point x2 in the image I2 after the target moves that is the most similar to x1 in I1 before the target moves can be found. The error is defined as:

Eq. (5)

e = x_1 - \Pi(T_{CT} P_1).

Assuming that the target has N points and a least squares problem is constructed, the following pose optimization equation is obtained:

Eq. (6)

\min_{T_{CT}} J(T_{CT}) = \sum_{i=1}^{N} \left\| I_1(x_{1,i}) - I_2(x_{2,i}) \right\|^{2}.

When the textures of the surgical surface are weak, or the environmental light changes during the surgery, the pose solution will not be accurate enough or may even be impossible to obtain.43,44 To cope with these problems, the idea of traditional template matching is added: the zero-mean normalized cross correlation (ZNCC) function is used to calculate the cross-correlation coefficient of the two corresponding points in the two images, and the coefficient is then used as a weight in the optimization function. The final optimization equation is

Eq. (7)

\min_{T_{CT}} J(T_{CT}) = \sum_{i=1}^{N} \alpha \, \mathrm{ZNCC}(x_{1,i}, x_{2,i}) \left[ I_1(x_{1,i}) - I_2(x_{2,i}) \right]^{2},
    where α is a weight that can be adjusted according to the actual situation and experience; in the experiments with this system, α was set to 1.2. In addition, we use numerical optimization in the program, taking the treatment of convex and nonconvex functions into account so that the problem converges to the global minimum. After the pose transformation T_CT is solved and combined with the calibration parameter H_CP, the vascular images can be accurately reprojected. In other words, the relationship between p1 and p2 is

Eq. (8)

p_2 = H_{CP}^{-1} \Pi \left[ T_{CT} \Pi^{-1} (H_{CP} p_1) \right].

If a projected PA vascular image Spre is transformed, then the transformed projected image Sproj is expressed as

Eq. (9)

S_{proj}(i) = H_{CP}^{-1} \Pi \left\{ T_{CT} \Pi^{-1} \left[ H_{CP} S_{pre}(i) \right] \right\},
where Sproj(i) and Spre(i) represent a pixel point in the projected image and the PA image before transformation, respectively.

The reason for dividing these two processes is that the weak texture of the surgical surface makes it difficult to extract enough feature points on it. Fortunately, in the preoperative image registration process, feature points can be found on the static surgical surface; even if extraction fails, the pose of the surgical surface can be changed until it succeeds. During the intraoperative image-tracking process, however, it is more difficult to extract feature points on the moving surgical surface, which would lead to image registration failure.
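
As referenced above, the following is a minimal numerical sketch of the two building blocks of this subsection: a closed-form SVD (Kabsch) solution of the least-squares rigid registration in Eq. (2), and a ZNCC weight of the kind used in Eq. (7). It is not the authors' implementation; the patch size, the cost-function form, and the function names are illustrative assumptions.

```python
# Sketch of Eq. (2) (rigid registration) and the ZNCC weighting of Eq. (7).
# Not the authors' code; function names and the patch size are illustrative assumptions.
import numpy as np

def solve_rigid_transform(s1, s2):
    """Closed-form (Kabsch/SVD) solution of argmin_{R,t} sum ||s2_i - (R s1_i + t)||^2.

    s1, s2: (n, d) arrays of matched feature points (d = 2 or 3).
    """
    c1, c2 = s1.mean(axis=0), s2.mean(axis=0)
    H = (s1 - c1).T @ (s2 - c2)            # cross-covariance of the centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid reflections; keep det(R) = +1
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c2 - R @ c1
    return R, t

def zncc(patch1, patch2, eps=1e-8):
    """Zero-mean normalized cross correlation between two equally sized image patches."""
    a = patch1.astype(np.float64) - patch1.mean()
    b = patch2.astype(np.float64) - patch2.mean()
    return float((a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps))

def weighted_photometric_cost(I1, I2, pts1, pts2, alpha=1.2, half=5):
    """ZNCC-weighted sum of squared intensity differences, in the spirit of Eq. (7)."""
    cost = 0.0
    for (u1, v1), (u2, v2) in zip(pts1.astype(int), pts2.astype(int)):
        w = zncc(I1[v1 - half:v1 + half + 1, u1 - half:u1 + half + 1],
                 I2[v2 - half:v2 + half + 1, u2 - half:u2 + half + 1])
        cost += alpha * w * (float(I1[v1, u1]) - float(I2[v2, u2])) ** 2
    return cost
```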

2.3. Curved-Surface-Fitting Algorithm

To solve the problem of projection error when projecting 2D PA images on a curved surface, a curved-surface-fitting algorithm was proposed. A hemispherical phantom composed of agar was used to verify the proposed algorithm. Six groups of tungsten wires with a certain radian were laid on the hemisphere surface to simulate blood vessels; the wires were 13 mm long and had three different diameters of 0.5, 0.4, and 0.3 mm. The 2D PA image reconstructed after imaging is shown in Fig. 3(a). This 2D PA image is the projection of the 3D image onto a plane, as shown in Fig. 3(b). Suppose that c1c2 is the 2D projection of the 3D PA image on the highest plane of the target, p1 and p2 are the points of the 2D PA image in the projection plane, p0 is the center of the projector, o is the highest point of the 3D surface, and p1′ and p2′ are the points at which p1 and p2 land on the surface when projected. In the physical world, oc1 = op1 and oc2 = op2. As can be seen, if a 2D image is projected on a 3D curved surface, the projection points p1′ and p2′ do not coincide with the real points c1 and c2 on the surface. To solve this problem, the camera c0 is introduced, as shown in Fig. 3(c). The camera is used to reconstruct a 3D surface model of the target; the real physical coordinates of each point on the 3D surface can then be obtained, and the error e_c of the 3D surface reconstruction can be calibrated by calculating the point coordinates of the surface markers. The coordinates of o, c1, and c2 in the 3D surface model can be used to calculate the constants a1, a2, and b. With these variables, the lengths of the curved edges oc1 and oc2 can be fitted by the approximate ellipse circumference equation, namely,

Eq. (10)

oc_1 = \frac{\pi}{2} \left[ \frac{a_1 - e_c}{2} + (b - e_c) \right],

Eq. (11)

oc_2 = \frac{\pi}{2} \left[ \frac{a_2 - e_c}{2} + (b - e_c) \right].

Then, c1 and c2 are transformed into c1′ and c2′, respectively, such that the planar distances satisfy oc1′ = oc1 and oc2′ = oc2. At the same time, p1 and p2 in the PA image on the projection plane are transformed into p1″ and p2″, which can be obtained from c1′ and c2′, respectively; the relationships are

Eq. (12)

p_1'' = H_{CP}^{-1} \Pi(c_1'),

Eq. (13)

p_2'' = H_{CP}^{-1} \Pi(c_2').

Finally, p1″ and p2″ are projected onto the surface, where their projections coincide with c1 and c2. At this point, curved-surface fitting is complete. The same method is used to fit the entire surface; for different skin surfaces, different regions can be divided according to their shapes, and the proposed curved-surface-fitting algorithm can then be applied to each region. Figures 3(d) and 3(e) show the results of curved-surface fitting. There was a projection error (1.91 mm) when the PA image was projected on the surface before fitting [Fig. 3(d)], and the projection error was significantly reduced after fitting, as shown by the white arrow in Fig. 3(e). However, due to the inevitable error in 3D surface reconstruction and the approximate surface-fitting method, a small error (0.16 mm) remained after surface fitting.
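
To illustrate the general idea behind this correction, the sketch below remaps a flat-plane coordinate so that its straight-line distance from the apex o equals the arc length measured along the reconstructed surface profile, which is what the condition oc1′ = oc1 expresses. The paper fits the arc with an approximate ellipse equation, whereas numerical integration is used here; the profile function and sampling step are illustrative assumptions.

```python
# Sketch of arc-length-based remapping of flat image coordinates onto a curved profile.
# Assumes z_profile(x) returns the reconstructed surface height at planar distance x
# from the apex o; this helper and the sampling step are illustrative assumptions.
import numpy as np

def arc_length_from_apex(z_profile, x, n_samples=200):
    """Numerically integrate the arc length of the surface from the apex (x = 0) to x."""
    xs = np.linspace(0.0, x, n_samples)
    zs = np.array([z_profile(v) for v in xs])
    return float(np.sum(np.sqrt(np.diff(xs) ** 2 + np.diff(zs) ** 2)))

def fit_flat_coordinate(z_profile, x_real):
    """Corrected flat-plane distance whose straight-line length from the apex equals
    the arc length from the apex to the real surface point at x_real."""
    return arc_length_from_apex(z_profile, x_real)

# Example: a hemispherical cap of radius 20 mm, surface point 10 mm from the apex.
if __name__ == "__main__":
    R = 20.0

    def profile(x):
        """Height of the 20-mm-radius cap relative to its apex."""
        return np.sqrt(max(R ** 2 - x ** 2, 0.0)) - R

    x_corrected = fit_flat_coordinate(profile, 10.0)
    print(f"flat 10.0 mm -> corrected {x_corrected:.2f} mm")   # arc length > 10 mm
```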

3. Experiments and Results

3.1. Localization Accuracy Verification with Phantom Experiments

To verify the localization accuracy of blood vessels during mixed reality, phantom verification experiments were conducted. In the first experiment, a blood vessel-like network was designed and fabricated using tungsten wires, which were randomly placed in agar at different heights. The tungsten wires had three different diameters of 0.5, 0.3, and 0.18 mm to simulate blood vessels of different sizes, as shown in Fig. 4(a). The size of the overall vascular-like network was 30 mm × 40 mm. The imaging time of the phantom with PACT was 60 s; the reconstructed 2D PA image is shown in Fig. 4(b). Surface fitting was first conducted, and the localization accuracy of blood vessels during preoperative image registration was then evaluated. The nonregistered and registered results are shown in Figs. 4(c1) and 4(c2), respectively. To keep the target within the projection area (40 mm × 45 mm) as much as possible, movements of 6 mm in the x direction and 3 mm in the y direction were chosen. The image reprojection results after target movement during intraoperative image tracking are shown in Figs. 4(d1) and 4(d2), respectively. The above steps were repeated 10 times, and the image projection errors were quantified. As shown in Fig. 4(e), the projection errors of both the preoperative image registration and the intraoperative image tracking did not exceed 0.8 mm in the projection area.

Fig. 4

(a) Picture of a vascular-like network phantom composed of tungsten wires. (b) Corresponding 2D maximum amplitude projection PA image. (c1) A result that is not registered during preoperative image registration. (c2) A result after registration in the process of preoperative image registration. (d1) The reprojection result after moving 6 mm in the x direction during intraoperative image tracking. (d2) Reprojection result of the intraoperative image tracking process after moving 6 mm in the x direction and 3 mm in the y direction. (e) Error statistics of preoperative image registration and intraoperative image tracking quantified by 10 repeated experiments. (f) Box plots combining the statistical errors of the first and second phantom experiments (Video 1, MP4, 8.70 MB [URL: https://doi.org/10.1117/1.APN.2.4.046001.s1]).


In the second experiment, we adopted the hemispherical phantom shown in Fig. 3 and verified the accuracy of vessel localization by repeating the same procedure 10 times and calculating the projection errors in the preoperative image registration and intraoperative image-tracking processes. The final results are shown in Fig. S1 in the Supplementary Material. To better evaluate the ability to locate blood vessels in mixed reality, the statistics of the two validation experiments were combined, and a box plot was drawn to represent the error distribution more intuitively. As shown in Fig. 4(f), the deviation of the mean projection errors between the two experiments was less than 0.1 mm, indicating stable localization performance. The maximum error of the two validation experiments did not exceed 0.75 mm.

The detailed error calculation method for the two phantom experiments is shown in Fig. S2 in the Supplementary Material. Detailed demonstration videos of phantom experiments 1 and 2 are available in Videos 1 and 2. In addition, light sources of different colors can be switched to adapt to different environments so that the best projection effect can be achieved. The projection results of the two phantom experiments after switching the light source to green are shown in Fig. S3 in the Supplementary Material.

3.2. Validation Experiments of Vascular Localization in Vivo

We further verified the vascular localization ability in vivo. An area of 30 mm × 38 mm on the thigh of a living rabbit was first selected, as shown in the white dashed box in Fig. 5(a). Two copper wire marks, P1 and P2, were randomly placed in the selected area for registration; they move together with the surgical surface, so the pose transformation of the entire surgical surface can be obtained by directly solving the pose transformation of the copper wire marks. In addition, the copper wire marks are useful for judging the accuracy of vascular localization. The marks were 7 mm in length and 0.5 mm in diameter. PACT was performed in the selected area. A 1064 nm laser (VIBRANT, OPOTEK Inc., United States) with a pulse width of 8 to 10 ns and a repetition rate of 10 Hz was used to excite PA signals. The 1064-nm wavelength was chosen because it penetrates deeper than the 532-nm wavelength, which enhances the imaging depth. The laser fluence (20 mJ/cm²) used in the in vivo experiments was well within the American National Standards Institute safety limit for laser exposure (100 mJ/cm² at 1064 nm and a 10-Hz pulse repetition rate).45 All in vivo experiments complied with the ethical review of South China Normal University (review number: SCNU-BIP-2022-044). To visualize smaller blood vessels, a small line spot was chosen; the size of the laser beam focus on the tissue surface was 35 mm × 2 mm. After 60 s of imaging, a 2D PA image was reconstructed, as shown in Fig. 5(b). During image reconstruction, 10 sets of acquired radio frequency (RF) data for each B-scan were averaged to reduce image artifacts due to respiratory jitter; the imaging speed can be increased by reducing the number of RF acquisitions per B-scan, but the imaging quality may then decrease. The skin and tissue signals were removed from this PA image to remove information that was not of interest. From the tomographic image along the white dashed line in Fig. 5(b), shown in Fig. 5(c), the maximum imaging depth was calculated to be 2.9 mm. According to the previous section, curved-surface fitting is required first; preoperative image registration can then be carried out using marks P1 and P2. This process takes only 10 s if the feature extraction goes well; otherwise, the pose of the surgical surface needs to be changed until the feature extraction is successful. According to the data from multiple experiments, the process time remains within 1 min. The nonregistered and registered results on the surface are shown in Figs. 5(d) and 5(e), in which the part indicated by the white arrow represents the projected PA vascular image accurately registered with the visible blood vessel on the rabbit thigh. When the position of the thigh moved, the PA vascular images were reprojected in real time and remained registered with the visible blood vessels and marks on the thigh, as shown in Figs. 5(f1)–5(f3). The white dashed lines show movement up, down, and left. The specific demonstration video is given in Video 3. The white dashed box in Fig. 5(f3) shows missing blood vessels; this is due to the image loss caused by the rotation and translation of the 2D image after the target moved. The missing area lies outside the projection area, but this did not affect the accurate visualization of blood vessels on the surgical surface when the target returned to the projection area.
The projection area of the current experiment is set to 40 mm × 45 mm by the software. This projection area can be changed; its size depends mainly on how large an area corresponding to the PA image is used for calibration, but changing it requires a new camera-projector calibration. Through this mixed-reality method, the subcutaneous blood vessels can be directly visualized on the surgical surface in real time by the naked eye, which is very convenient for noninvasive and accurate localization of blood vessels. As shown in Fig. 5(g), vessels approximately 2.9 mm deep and approximately 0.15 mm in diameter could be directly visualized on the surface of the rabbit thigh.
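
As a simple illustration of the frame averaging mentioned above, the snippet below averages repeated RF acquisitions for one B-scan position. The (repeats × channels × samples) array layout is an assumption made only for this example.

```python
# Minimal sketch of averaging repeated RF acquisitions for one B-scan position.
# The (n_repeats, n_channels, n_samples) layout is an assumption for illustration.
import numpy as np

def average_rf_frames(rf_stack):
    """Average repeated RF frames to suppress artifacts such as respiratory jitter."""
    rf_stack = np.asarray(rf_stack, dtype=np.float64)
    return rf_stack.mean(axis=0)               # one averaged frame per B-scan position

# Example: 10 repeated acquisitions, 128 transducer elements, 2048 time samples.
rf = np.random.randn(10, 128, 2048)
averaged = average_rf_frames(rf)               # shape (128, 2048)
```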

Fig. 5

(a) Photo of the rabbit thigh. The white dashed box area is selected for imaging, and P1 and P2 are randomly placed marks. (b) 2D PA vascular image corresponding to the dashed box in panel (a). (c) Tomography image corresponding to the white dashed line in panel (b). (d) and (e) Nonregistered and registered results during preoperative image registration. (f1)–(f3) Three results of vascular image reprojection after movement during intraoperative image tracking; the white dashed box in panel (f3) is outside the projection area. (g) Mixed-reality effect on the rabbit thigh; deep blood vessels and microvessels can be directly visualized on the surface. (h) Error statistics chart of the whole demonstration process based on the demonstration video of the rabbit thigh (Video 3, MP4, 7.79 MB [URL: https://doi.org/10.1117/1.APN.2.4.046001.s3]). (i) Box plot of vascular localization accuracy obtained in the rabbit thigh and the human arm under the two experimental conditions. The symbol *** indicates statistical significance of p<0.001.


To further quantify the localization accuracy of blood vessels in vivo, we calculated the vascular localization accuracy during the entire mixed-reality demonstration on the rabbit thigh. Since the deep blood vessels were projected on the surgical surface, we could not calculate the localization error of the deep blood vessels directly. Therefore, as the error metric, we calculated how well the marks randomly placed on the surgical surface coincided with their projected images. As shown in Fig. 5(g), four endpoints, B1, B2, B3, and B4, on the real marks and four endpoints, A1, A2, A3, and A4, on their projected images were selected to calculate the root-mean-square error (RMSE), which was then converted into an actual error. The demonstration video on the rabbit thigh (Video 3) is 85 s long; it was divided into 425 frames whose errors were calculated to form an error statistics graph, as shown in Fig. 5(h). As seen in the green dashed boxes of the statistical graph, there were large errors at the beginning of the experiment when the images were not yet registered; after registration, the errors dropped quickly. In the rabbit thigh demonstration, the movement occurred after 17 s; after the target moved, vascular image reprojection was performed. The pink dotted box indicates the relocalization errors after the movement. According to the statistical graph, the relocation time was within 1 s. The average RMSE in the demonstration video of the rabbit thigh was calculated to be 8.6 pixels, corresponding to an actual average error of 0.72 mm.
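
As a rough illustration of this error metric, the sketch below computes the RMSE between corresponding mark endpoints in pixels and converts it to millimeters with a pixel pitch. The endpoint coordinates and the pixel pitch are placeholders, the latter chosen only so that the reported 8.6-pixel RMSE would map to roughly 0.72 mm; they are not the paper's calibration values.

```python
# Sketch of the endpoint-based RMSE used as the localization error metric.
# The pixel-to-mm scale below is a placeholder, not the paper's calibration value.
import numpy as np

def endpoint_rmse(real_pts, projected_pts):
    """RMSE (in pixels) between matched endpoints, e.g., B1..B4 vs A1..A4."""
    real_pts = np.asarray(real_pts, dtype=np.float64)
    projected_pts = np.asarray(projected_pts, dtype=np.float64)
    d = np.linalg.norm(real_pts - projected_pts, axis=1)   # per-endpoint distance
    return float(np.sqrt(np.mean(d ** 2)))

def pixels_to_mm(rmse_px, mm_per_pixel):
    """Convert a pixel RMSE to a physical error using the camera's spatial sampling."""
    return rmse_px * mm_per_pixel

# Example with made-up endpoint coordinates and a placeholder scale of 0.084 mm/pixel
# (chosen only so that 8.6 px corresponds to roughly 0.72 mm, as reported above).
B = np.array([[100, 200], [150, 205], [300, 410], [352, 416]])
A = np.array([[104, 203], [148, 210], [305, 406], [356, 420]])
print(pixels_to_mm(endpoint_rmse(B, A), 0.084))
```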

In addition, we also demonstrated the use of mixed reality to locate blood vessels in real time on the skin surface of the human arm; the results are shown in Fig. S4 in the Supplementary Material and Video 4. After calculation, the average RMSE in the demonstration video of the human arm was 11.7 pixels, and the actual average error was 0.89 mm. To analyze the statistical significance of the results, we performed a mixed ANOVA on the localization accuracy of blood vessels in the rabbit thigh and the human arm, with a significance threshold of p = 0.005, as shown in Fig. 5(i). The average values of the two sets are close, indicating that the proposed method is applicable to both animals and human beings and works at different anatomical sites. The *** symbols indicate statistical significance (p<0.001) between the experimental conditions; after statistical analysis, the p-value was 5.75×10⁻⁷. Therefore, the calculated vascular localization accuracy is statistically significant. The results show very high vascular localization accuracy, better than the 3.47-mm vascular localization error previously reported for perforator flap surgery using the Microsoft HoloLens.9 It is also better than the minimum error of 1.35 mm in the previously reported experiment using the HoloLens to verify the vascular localization error in a phantom model,46 better than the 1.7-mm error reported for vascular localization using the VascuLens,13 and well within the clinically acceptable vascular localization accuracy range of 5 mm for perforator flap surgery.10 The specific vascular localization accuracy comparison is shown in Table 1, where cases 1 and 2 use the combination of CTA and augmented reality for vascular localization, and case 3 uses the combination of CTA and mixed reality. The table shows that our method has high performance in vascular localization.

Table 1. Error statistics for current cases.

Case                 Minimum error (mm)   Maximum error (mm)   Average error (mm)
Case 1 (Ref. 9)      —                    —                    3.47
Case 2 (Ref. 46)     1.35                 —                    3.18
Case 3 (Ref. 13)     —                    —                    1.7
Ours                 0.29                 1.32                 0.89
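
For completeness, the snippet below shows one way to compare the two error samples of Fig. 5(i). The paper reports a mixed ANOVA; for two independent groups, SciPy's one-way ANOVA (equivalent to a two-sample t-test in this case) is used here instead, and the error arrays are synthetic placeholders, not the measured data.

```python
# Illustrative two-group comparison of localization errors (rabbit thigh vs human arm).
# Synthetic placeholder data; the paper's analysis is a mixed ANOVA on measured errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
errors_rabbit = rng.normal(loc=0.72, scale=0.10, size=425)   # placeholder per-frame errors (mm)
errors_arm = rng.normal(loc=0.89, scale=0.12, size=425)      # placeholder per-frame errors (mm)

f_stat, p_value = stats.f_oneway(errors_rabbit, errors_arm)
print(f"F = {f_stat:.2f}, p = {p_value:.2e}")
if p_value < 0.005:                                           # threshold used in the paper
    print("difference between the two conditions is statistically significant")
```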

3.3. Ability of Augmented Reality to Assist Vascular Localization

In this work, we proposed combining augmented reality and mixed reality to provide reliable help for the rapid and accurate localization of blood vessels. The above experiments all verified the vascular localization performance in mixed reality, but the ability of augmented reality to assist vascular localization cannot be ignored. Therefore, a region of 30 mm × 38 mm on a human arm was selected, shown as region of interest (ROI) A in Fig. 6(b). Two copper wire marks, F1 and F2, were randomly placed in the area for registration. The length and diameter of the marks were 6 and 0.5 mm, respectively. PACT was first performed on the selected area, with an imaging time of 60 s. Unlike the previous experiment on the rabbit thigh, the size of the laser beam focus on the tissue surface was 35 mm × 5 mm to reach a greater depth. The reconstructed 3D vascular image is shown in Fig. 6(c); blood vessels 7 mm under the skin can be clearly visualized. Then the RGBD camera was used to reconstruct the 3D surface model of the arm, which was fused with the 3D PA vascular image into an augmented-reality model in a 3D point cloud space. The fused model was finally displayed on the computer screen in the form of a dense point cloud using the PCL library and C++ programming, as shown in Fig. 6(a). The final augmented-reality model can be rotated and scaled in 3D space; the detailed demonstration result is shown in Video 5. ROI B was selected to visualize the results of rotation and scaling, as shown in Figs. 6(d)–6(g). Moreover, the coordinates of each point of the augmented-reality model in the 3D point cloud space can easily be obtained, which means that the positional and structural relationships between the 3D vascular image and the 3D surface model can be calculated with the help of 3D coordinate information; as shown in Fig. 6(e), the coordinates of points in 3D space can be used to measure such relationships. Furthermore, the subcutaneous vessels can be directly visualized through the skin by enlarging the augmented-reality model to provide a see-through perspective, which conveniently provides reliable information for preoperative planning. Note that the accuracy of vascular localization is quantified in the mixed-reality part rather than in the augmented-reality part.
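
As a small illustration of such measurements, the sketch below computes, for each vessel point in the fused model, the distance to the nearest point of the reconstructed skin surface (an approximate vessel depth). The authors used the PCL library in C++; a SciPy KD-tree and synthetic point clouds are used here purely for illustration.

```python
# Sketch of measuring a position/structure relationship in the fused 3D point cloud,
# e.g., the depth of a vessel point below the reconstructed skin surface.
# Illustration only (the authors used PCL/C++); the point clouds are synthetic.
import numpy as np
from scipy.spatial import cKDTree

def vessel_depths(vessel_pts, surface_pts):
    """Distance from each vessel point to the nearest skin-surface point."""
    tree = cKDTree(np.asarray(surface_pts, dtype=np.float64))
    dists, _ = tree.query(np.asarray(vessel_pts, dtype=np.float64))
    return dists

# Placeholder clouds in millimeters: a flat 30 mm x 38 mm skin patch and two vessel points.
xs, ys = np.meshgrid(np.linspace(0, 30, 60), np.linspace(0, 38, 76))
surface = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
vessels = np.array([[12.0, 20.0, -3.1], [25.0, 10.0, -7.0]])
print(vessel_depths(vessels, surface))   # roughly [3.1, 7.0] mm below the surface
```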

Fig. 6

(a) Result of augmented reality after fusing the 3D surface model of the arm with the 3D PA vascular image. (b) Photo of the arm. ROI A was selected for PACT imaging; F1 and F2 were two randomly placed marks. (c) 3D PA vascular image corresponding to ROI A. (d)–(g) Visualization results of the augmented-reality model after rotation and scaling in ROI B marked by the red dashed box in panel (a) (Video 4, MP4, 7.40 MB [URL: https://doi.org/10.1117/1.APN.2.4.046001.s4]; Video 5, MP4, 12.2 MB [URL: https://doi.org/10.1117/1.APN.2.4.046001.s5]).


4. Discussion

This work verified the accuracy of vessel localization in phantoms and in vivo. The localization error was <0.75 mm in the phantoms, and the average localization error was <0.89 mm in vivo, which demonstrates the excellent performance of our vascular localization strategy. In this work, the imaging area for the in vivo experiments was 30 mm × 38 mm, but this area is not fixed: a larger imaging area can be selected to cover the clinically required imaging area. However, our PACT system was limited by the laser energy, so the maximum imaging depth was 7 mm in this work, whereas PACT has been demonstrated to reach a maximum imaging depth of 4 cm.33–35 If the hardware is improved so that deeper and richer vascular information can be visualized on the surgical surface, the proposed method will be even more helpful for rapidly and accurately locating blood vessels in clinical surgery.

The proposed method showed excellent vascular localization performance in the demonstrations. Even when the living tissue produced slight nonrigid deformation, the method could still stably and accurately visualize the vascular images on the body surface in real time. However, when the nonrigid deformation of the tissue on the surgical surface increased, the projection error also increased. As shown in the error statistics in Fig. 5(h), from 34 to 38 s, the error increased due to the nonrigid deformation of the tissue. Nevertheless, this did not affect the accuracy of vascular localization after the nonrigid deformation was restored. Introducing a mechanical model or using deep learning to predict nonrigid deformation and thus suppress the increased errors should be the next area of study. In addition, vision-based 3D reconstruction has inevitable errors due to depth measurement or depth estimation problems; after measurement and comparison with the real size, this error was found to be 1.3 mm. This error exists in the whole system, even though some engineering error corrections were made, such as error calibration of the reconstructed 3D surface model. The error was taken into account in the construction of the approximate elliptical model to improve the accuracy of curved-surface fitting of the 2D projection images. However, a projection error (0.16 mm) still remained after surface fitting. Therefore, more accurate and robust algorithms need to be designed in the future to further reduce this error and improve the accuracy of vascular localization. In this work, we adopted the scheme of imaging before real-time projection rather than real-time PA imaging, a decision aimed at clinical application: PA imaging requires acoustic coupling, real-time PA imaging would block the projection, and the clinical workflow must leave time for doctors to evaluate the images. Therefore, the current scheme of imaging before real-time projection fits the application scenario.

As the SLM is a wavelength-sensitive device, it cannot modulate different wavelengths of light simultaneously; therefore, we cannot project a depth-encoded image to carry the depth information of the blood vessels. By using time-division multiplexing, vascular images with depth information could be projected, which may be implemented in our future work.

5. Conclusion

We proposed and experimentally demonstrated a photoacoustic-enabled automatic vascular navigation method that can noninvasively and accurately locate blood vessels under the navigation of PA images on the surgical surface. This is the first study to utilize PACT in conjunction with augmented and mixed reality for accurate and naked-eye real-time visualization of deep-seated vessels. The PACT used in this system can specifically identify blood vessels with high resolution and sufficient depth. In this work, we used PACT to noninvasively image vessels in the rabbit thigh with a minimum diameter of 0.15 mm and a maximum depth of 2.9 mm, which is not possible with ultrasound, CTA, and NIR imaging. In addition, we proposed a curved-surface-fitting algorithm and a zero-mean normalization-based visual tracking algorithm to achieve high-precision and low-latency vessel localization. With these two algorithms, the average error of vessel localization was within 0.89 mm, and the vessel relocation latency was within 1 s. Moreover, the use of augmented reality gives doctors a see-through view in the constructed 3D space, providing reliable information, such as the positional and structural relationships between the blood vessels and the surgical surface, for preoperative planning. In addition, mixed reality allows doctors to see through the tissue in the real world by directly projecting deep vessels onto the surgical surface.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61822505 and 11774101), the Natural Science Foundation of Guangdong Province (Grant No. 2022A1515010548), the Science and Technology Program of Guangzhou (Grant Nos. 2019050001 and 202206010094), the National Key R&D Program of China (Grant No. 2022YFC2304205), and the Special Funds for the Cultivation of Guangdong College Students’ Scientific and Technological Innovation (Grant No. pdjh2023a0134).

References

1. J. Kiely et al., "The accuracy of different modalities of perforator mapping for unilateral DIEP flap breast reconstruction: a systematic review and meta-analysis," J. Plast. Reconstr. Aesthet. Surg. 74(5), 945–956 (2021). https://doi.org/10.1016/j.bjps.2020.12.005
2. A. D. Knox et al., "Comparison of outcomes following autologous breast reconstruction using the DIEP and pedicled TRAM flaps: a 12-year clinical retrospective study and literature review," Plast. Reconstr. Surg. 138(1), 16–28 (2016). https://doi.org/10.1097/PRS.0000000000001747
3. H. Marks et al., "A paintable phosphorescent bandage for postoperative tissue oxygen assessment in DIEP flap reconstruction," Sci. Adv. 6(51), eabd1061 (2020). https://doi.org/10.1126/sciadv.abd1061
4. G. Eid-Lidt et al., "Distal radial artery approach to prevent radial artery occlusion trial," JACC Cardiovasc. Interv. 14(4), 378–385 (2021). https://doi.org/10.1016/j.jcin.2020.10.013
5. M. Gaudino et al., "Radial-artery or saphenous-vein grafts in coronary-artery bypass surgery," N. Engl. J. Med. 378(22), 2069–2077 (2018). https://doi.org/10.1056/NEJMoa1716026
6. L. J. Sandberg, "Tracing: a simple interpretation method for the DIEP flap CT angiography to help operative decision-making," Plast. Reconstr. Surg. Glob. Open 8(11), e3218 (2020). https://doi.org/10.1097/GOX.0000000000003218
7. K. Frank et al., "Improving the safety of DIEP flap transplantation: detailed perforator anatomy study using preoperative CTA," J. Pers. Med. 12(5), 701 (2022). https://doi.org/10.3390/jpm12050701
8. T. S. Wesselius et al., "Holographic augmented reality for DIEP flap harvest," Plast. Reconstr. Surg. 147(1), 25e–29e (2021). https://doi.org/10.1097/PRS.0000000000007457
9. T. Jiang et al., "A novel augmented reality-based navigation system in perforator flap transplantation—a feasibility study," Ann. Plast. Surg. 79(2), 192–196 (2017). https://doi.org/10.1097/SAP.0000000000001078
10. P. Pratt et al., "Through the HoloLens™ looking glass: augmented reality for extremity reconstruction surgery using 3D vascular models with perforating vessels," Eur. Radiol. Exp. 2(1), 1–7 (2018). https://doi.org/10.1186/s41747-017-0033-2
11. S. Hummelink et al., "An innovative method of planning and displaying flap volume in DIEP flap breast reconstructions," J. Plast. Reconstr. Aesthet. Surg. 70(7), 871–875 (2017). https://doi.org/10.1016/j.bjps.2017.04.008
12. S. Hummelink et al., "A new and innovative method of preoperatively planning and projecting vascular anatomy in DIEP flap breast reconstruction: a randomized controlled trial," Plast. Reconstr. Surg. 143(6), 1151e–1158e (2019). https://doi.org/10.1097/PRS.0000000000005614
13. S. Gonzalez et al., "The vascuLens: a handsfree projector-based augmented reality system for surgical guidance during DIEP flap harvest," in CMBES Proc. (2021).
14. S. Josephson et al., "Evaluation of carotid stenosis using CT angiography in the initial evaluation of stroke and TIA," Neurology 63(3), 457–460 (2004). https://doi.org/10.1212/01.WNL.0000135154.53953.2C
15. M. C. Kock et al., "Multi-detector row computed tomography angiography of peripheral arterial disease," Eur. Radiol. 17(12), 3208–3222 (2007). https://doi.org/10.1007/s00330-007-0729-4
16. D. Ai et al., "Augmented reality based real-time subcutaneous vein imaging system," Biomed. Opt. Express 7(7), 2565–2585 (2016). https://doi.org/10.1364/BOE.7.002565
17. W. Xiang et al., "FPGA-based two-dimensional matched filter design for vein imaging systems," IEEE J. Transl. Eng. Health Med. 9, 1–10 (2021). https://doi.org/10.1109/JTEHM.2021.3119886
18. N. J. Cuper et al., "The use of near-infrared light for safe and effective visualization of subsurface blood vessels to facilitate blood withdrawal in children," Med. Eng. Phys. 35(4), 433–440 (2013). https://doi.org/10.1016/j.medengphy.2012.06.007
19. C. A. Mela et al., "Real-time dual-modal vein imaging system," Int. J. Comput. Assist. Radiol. Surg. 14(2), 203–213 (2019). https://doi.org/10.1007/s11548-018-1865-9
20. A. Debelmas et al., "Reliability of color Doppler ultrasound imaging for the assessment of anterolateral thigh flap perforators: a prospective study of 30 perforators," Plast. Reconstr. Surg. 141(3), 762–766 (2018). https://doi.org/10.1097/PRS.0000000000004117
21. O. F. Dogan et al., "Assessment of the radial artery and hand circulation by computed tomography angiography: a pilot study," Heart Surg. Forum 8(1), E28–E33 (2005). https://doi.org/10.1532/HSF98.20041042
22. J. González Martínez et al., "Preoperative vascular planning of free flaps: comparative study of computed tomographic angiography, color Doppler ultrasonography, and hand-held Doppler," Plast. Reconstr. Surg. 146(2), 227–237 (2020). https://doi.org/10.1097/PRS.0000000000006966
23. A. W. Pollak et al., "Multimodality imaging of lower extremity peripheral arterial disease: current role and future directions," Circ. Cardiovasc. Imaging 5(6), 797–807 (2012). https://doi.org/10.1161/CIRCIMAGING.111.970814
24. M. Erfanzadeh and Q. Zhu, "Photoacoustic imaging with low-cost sources; a review," Photoacoustics 14, 1–11 (2019). https://doi.org/10.1016/j.pacs.2019.01.004
25. T. Chen et al., "Dedicated photoacoustic imaging instrument for human periphery blood vessels: a new paradigm for understanding the vascular health," IEEE Trans. Biomed. Eng. 69(3), 1093–1100 (2021). https://doi.org/10.1109/TBME.2021.3113764
26. A. Khadria et al., "Long-duration and non-invasive photoacoustic imaging of multiple anatomical structures in a live mouse using a single contrast agent," Adv. Sci. 9(28), 2202907 (2022). https://doi.org/10.1002/advs.202202907
27. M. Li et al., "Three-dimensional deep-tissue functional and molecular imaging by integrated photoacoustic, ultrasound, and angiographic tomography (PAUSAT)," IEEE Trans. Med. Imaging 41(10), 2704–2714 (2022). https://doi.org/10.1109/TMI.2022.3168859
28. X. Zhu et al., "Real-time whole-brain imaging of hemodynamics and oxygenation at micro-vessel resolution with ultrafast wide-field photoacoustic microscopy," Light Sci. Appl. 11(1), 1–15 (2022). https://doi.org/10.1038/s41377-022-00836-2
29. X. Wang et al., "Integrated thermoacoustic and ultrasound imaging based on the combination of a hollow concave transducer array and a linear transducer array," Phys. Med. Biol. 66(11), 115011 (2021). https://doi.org/10.1088/1361-6560/abfc91
30. Y. Zhang and L. Wang, "Adaptive dual-speed ultrasound and photoacoustic computed tomography," Photoacoustics 27, 100380 (2022). https://doi.org/10.1016/j.pacs.2022.100380
31. I. Tsuge et al., "Photoacoustic tomography shows the branching pattern of anterolateral thigh perforators in vivo," Plast. Reconstr. Surg. 141(5), 1288–1292 (2018). https://doi.org/10.1097/PRS.0000000000004328
32. J. Xia et al., "Photoacoustic tomography: principles and advances," Prog. Electromagn. Res. 147, 1–22 (2014). https://doi.org/10.2528/PIER14032303
33. L. Lin et al., "High-speed three-dimensional photoacoustic computed tomography for preclinical research and clinical translation," Nat. Commun. 12(1), 1–10 (2021). https://doi.org/10.1038/s41467-021-21232-1
34. S. Na et al., "Massively parallel functional photoacoustic computed tomography of the human brain," Nat. Biomed. Eng. 6(5), 584–592 (2022). https://doi.org/10.1038/s41551-021-00735-8
35. L. Lin et al., "Photoacoustic computed tomography of breast cancer in response to neoadjuvant chemotherapy," Adv. Sci. 8(7), 2003396 (2021). https://doi.org/10.1002/advs.202003396
36. Y. Duan et al., "Spherical-matching hyperbolic-array photoacoustic computed tomography," J. Biophotonics 14(6), e202100023 (2021). https://doi.org/10.1002/jbio.202100023
37. K. H. Fan-Chiang et al., "Analog LCOS SLM devices for AR display applications," J. Soc. Inf. Disp. 28(7), 581–590 (2020). https://doi.org/10.1002/jsid.881
38. C. Chang et al., "Speckle reduced lensless holographic projection from phase-only computer-generated hologram," Opt. Express 25(6), 6568–6580 (2017). https://doi.org/10.1364/OE.25.006568
39. M. Chlipała et al., "Wide angle holographic video projection display," Opt. Lett. 46(19), 4956–4959 (2021). https://doi.org/10.1364/OL.430275
40. Y. Dai et al., "Calibration of a phase-only spatial light modulator for both phase and retardance modulation," Opt. Express 27(13), 17912–17926 (2019). https://doi.org/10.1364/OE.27.017912
41. K. M. Johnson et al., "Smart spatial light modulators using liquid crystals on silicon," IEEE J. Quantum Electron. 29(2), 699–714 (1993). https://doi.org/10.1109/3.199323
42. T. Whelan et al., "ElasticFusion: dense SLAM without a pose graph," in Robotics: Science and Systems (2015). https://doi.org/10.15607/RSS.2015.XI.001
43. N. Mahmoud et al., "Live tracking and dense reconstruction for handheld monocular endoscopy," IEEE Trans. Med. Imaging 38(1), 79–89 (2018). https://doi.org/10.1109/TMI.2018.2856109
44. W. Xia et al., "A robust edge-preserving stereo matching method for laparoscopic images," IEEE Trans. Med. Imaging 41(7), 1651–1664 (2022). https://doi.org/10.1109/TMI.2022.3147414
45. Laser Institute of America, "American National Standard for Safe Use of Lasers" (2014).
46. T. Jiang et al., "HoloLens-based vascular localization system: precision evaluation study with a three-dimensional printed model," J. Med. Internet Res. 22(4), e16852 (2020). https://doi.org/10.2196/16852
47. T. Tansatit et al., "Periorbital and intraorbital studies of the terminal branches of the ophthalmic artery for periorbital and glabellar filler placements," Aesthetic Plast. Surg. 41, 678–688 (2017). https://doi.org/10.1007/s00266-016-0762-2
48. K. T. D. Loh et al., "Successfully managing impending skin necrosis following hyaluronic acid filler injection, using high-dose pulsed hyaluronidase," Plast. Reconstr. Surg. Glob. Open 6(2), e1639 (2018). https://doi.org/10.1097/GOX.0000000000001639

Biographies of the authors are not available.

CC BY: © The Authors. Published by SPIE and CLP under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Shu Pan, Li Wang, Yuanzheng Ma, Guangyu Zhang, Rui Liu, Tao Zhang, Kedi Xiong, Siyu Chen, Jian Zhang, Wende Li, and Sihua Yang "Photoacoustic-enabled automatic vascular navigation: accurate and naked-eye real-time visualization of deep-seated vessels," Advanced Photonics Nexus 2(4), 046001 (13 May 2023). https://doi.org/10.1117/1.APN.2.4.046001
Received: 10 February 2023; Accepted: 12 April 2023; Published: 13 May 2023
Keywords: 3D modeling, 3D image processing, visualization, blood vessels, error analysis, 3D acquisition, 3D image reconstruction
