
Photoacoustic computed tomography for functional human brain imaging [Invited]

Open Access

Abstract

The successes of magnetic resonance imaging and modern optical imaging of human brain function have stimulated the development of complementary modalities that offer molecular specificity, fine spatiotemporal resolution, and sufficient penetration simultaneously. By virtue of its rich optical contrast, acoustic resolution, and imaging depth far beyond the optical transport mean free path (∼1 mm in biological tissues), photoacoustic computed tomography (PACT) offers a promising complementary modality. In this article, PACT for functional human brain imaging is reviewed in terms of its hardware, reconstruction algorithms, in vivo demonstration, and potential roadmap.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With approximately 100 billion neurons and 100 trillion connections, the brain remains one of the greatest mysteries and challenges in science and medicine [1,2]. Despite the tremendous advances in neuroscience, the underlying causes of many neurological and psychiatric disorders, such as Parkinson’s disease, Alzheimer’s disease, autism, and schizophrenia, remain largely unknown due to the limited information extractable from the brain and the limited knowledge to interpret the extracted information [2–4]. To catalyze neuroscience discovery, one primary goal of the community is to develop imaging tools that provide rich functional contrast, fine spatiotemporal resolution, and sufficient penetration.

Several neuroimaging technologies have been developed. Blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) at high or ultrahigh fields has made tremendous improvements in spatiotemporal resolution, allowing brain function to be investigated at the level of cortical layers and columns [5–8]. However, the BOLD signal shows a nonlinear relationship with the deoxyhemoglobin (HbR) concentration and suffers from substantial tissue background [9,10]. Moreover, the MRI system’s bulkiness, high cost, and requirement of confining the subject in a noisy and magnetic enclosure limit its use for certain subjects and activities [11]. Alternatively, positron emission tomography (PET) can image neurometabolism but requires radioactive tracers and lacks fine spatiotemporal resolution [12]. Diffuse optical tomography (also termed functional near-infrared spectroscopy (fNIRS)) is advantageous in molecular specificity, temporal resolution, cost, and portability but suffers from low spatial resolution due to light diffusion [13,14]. Electroencephalography (EEG) and magnetoencephalography (MEG) can detect electrical activities of neurons at a high speed but with poor spatial resolution [15]. Functional ultrasound (fUS) has been demonstrated as a valuable tool for monitoring newborns’ brain activities through the fontanelles, for revealing adult brain micro-vasculature and hemodynamics through the temporal bone with the aid of microbubbles, and for monitoring brain activities during brain surgery [16–20]. Nevertheless, fUS lacks molecular specificity, and the round-trip acoustic attenuation and aberration induced by the skull remain challenges for label-free imaging. fUS is generally less sensitive to blood vessels running perpendicularly to the acoustic axis than to those running in parallel with it. Moreover, because fUS uses an ultrasound probe in direct contact with the head, it may be difficult to image blood vessels running along the cortical surface in a large field of view (FOV). Among the above neuroimaging modalities, there is a gap between those with fine spatial resolution, represented by high/ultrahigh-field fMRI, and those with molecular specificity, represented by fNIRS. Bridging this gap provides a unique opportunity for photoacoustic (PA) computed tomography (PACT).

PACT is a hybrid modality that images the distribution of molecular-specific optical absorption by irradiating the tissues with a non-ionizing diffuse laser pulse and recording the thermo-elastically induced acoustic waves [21]. Since biological tissues scatter sound much less than light, PACT can image beyond the optical transport mean free path (∼1 mm), which limits high-resolution optical imaging [22]. The penetration depth allows for imaging the human cortex, the largest site of neural integration in the central nervous system [23]. Compared to fUS, PACT is less sensitive to blood vessels perpendicular to the array surface but more suitable for imaging those along the cortical surface in a large FOV due to the widely adopted panoramic detection scheme. At certain near-infrared wavelengths, PA signals originate almost exclusively from hemoglobin, which is orders of magnitude more absorptive than other tissue components, resulting in a low background and a high sensitivity [24–26]. Hence, PACT can quantify the concentrations of both oxyhemoglobin (HbO2) and HbR, to which the PA signals are linearly related, based on their distinct spectral signatures. The quantified hemoglobin concentrations are convertible to oxygen saturation (sO2) and cerebral blood volume [26]. The diverse functional contrast of PACT enables earlier detection of functional responses than BOLD fMRI [27,28]. With the help of functionalized contrast agents, PACT has also shown potential for cerebral disease diagnosis and therapy [29,30]. Additionally, PACT promises portable, open, magnet-free, and quiet-operation designs at a low cost and low maintenance [31].

Two major challenges remain to be addressed in translating PACT to the human brain. First, existing PACT systems are either insufficiently sensitive to detect functional signals or too slow to overcome motion artifacts [32–34]. Second, the skull-induced acoustic aberration has not been sufficiently corrected for [35]. Centered on these two challenges, Section 2 reviews the PACT systems for functional human brain imaging and the mainstream ultrasonic transducer technologies. Section 3 discusses the properties and numerical models of the human skull. The state-of-the-art image reconstruction algorithms that can correct for the skull-induced acoustic aberration are reviewed in Section 4. A recently reported work establishing in vivo PACT of human brain function is reviewed in Section 5. The summary and outlook are provided in Section 6. By describing the instrumentation advances and the challenges associated with skull-induced acoustic aberration, and by presenting an overview of the de-aberration algorithms, we hope that this review can accelerate the transition of functional brain PACT from the bench to the clinic.

2. PACT systems for human brain imaging

2.1 Two-dimensional (2D) PACT systems

Functional PACT was first demonstrated in rodents [36]. The system consisted of five main components—a laser source to excite PA signals from the brain, an ultrasonic transducer to record the emitted PA waves, a mechanism to scan the ultrasonic transducer, a pre-amplifier to amplify the received acoustic signals, and a data acquisition system to digitize the amplified signals (Fig. 1(a)). A cylindrically focused single-element ultrasonic transducer was scanned around the head to achieve full-view data acquisition for 2D cross-sectional imaging. The transducer was focused in the elevational direction (z axis) but unfocused in the scanning plane (x-y plane). While the elevational resolution originated from the cylindrical focusing, the in-plane resolution was derived from the tomographic reconstruction. A similar system was later reported to image through an ex vivo monkey skull (2–4 mm thick) [37–39]. The transducer was focused in the scanning plane but unfocused in the elevational direction. The focus in the scanning plane was used as a virtual point detector to improve the spatial resolution and mitigate reconstruction artifacts (Fig. 1(b)) [37]. The reconstructed image revealed the major cortical blood vessels. However, due to the lack of elevational resolution, the image was blurred by the PA signals outside the scanning plane. The same virtual detector concept was applied to a spherically focused transducer which offered both in-plane and elevational focusing [40]. The system was demonstrated to image through an ex vivo adult human skull of 4–9-mm thicknesses [40]. The researchers also used a photon recycler to improve the illumination efficiency and achieved a 2.4-times signal-to-noise ratio (SNR) improvement [40]. In this setup, the photon recycler also served as an acoustic barrier preventing superficial PA signals from propagating to the detector, resulting in further improved elevational resolution. The acquired image revealed a canine brain's major cortical vessels (Fig. 1(c)).

Fig. 1. 2D PACT. (a) PACT of the rat brain cortex through the skull and skin using a scanned cylindrically focused ultrasonic transducer [36]. (b) PACT of the monkey brain cortex through a monkey skull using a scanned cylindrically focused ultrasonic transducer [37]. (c) PACT of the canine brain cortex through an adult human skull using a scanned spherically focused ultrasonic transducer [40]. (d) The ring-shaped animal PACT system (1), which was used to image (2) the mouse brain cortex through the skull in vivo, (3) the coronal plane of the mouse brain through the skin and skull in vivo, (4) the whole mouse brain ex vivo, and (5) the axial plane of the trunk with USCT and PACT in vivo [41–46]. (e) A wearable PACT device for imaging the brain of a free-moving rat [47]. (f) PACT of the human breast using a full-ring ultrasonic transducer array [48]. (g) PACT of the deep mouse brain using a scanned linear ultrasonic probe [49]. (h) PACT for brain surgical guidance using a linear ultrasonic probe without scanning [50]. In (b) and (c), the scalp is shown in the schematics but was not present in the ex vivo experiments. Adapted with permission [36,37,40–50].

To increase the imaging speed, researchers replaced the scanned single-element transducer with a ring-shaped confocal transducer array, which has become the primary form of modern 2D PACT systems (Fig. 1(d1)) [41,42,46,51–53]. Since each transducer element was cylindrically focused in the elevational direction and had a small azimuthal dimension, no virtual point detector was needed for high-quality image reconstruction. With 512 elements and 1:8 electronic multiplexing, a full-ring PACT system with a 5-MHz central frequency was developed to image the brain cortex at a 1-Hz frame rate (Fig. 1(d2)) [42]. Other ring-shaped systems, including half-ring confocal transducer arrays consisting of 64 elements and a 3/4-ring confocal transducer array with 512 elements, were also reported to image brain cross-sections at a 10-Hz frame rate (Fig. 1(d3)) and whole-brain morphology (Fig. 1(d4)) [51,52]. Later, researchers developed full-ring systems with one-to-one-mapped acquisition channels to overcome the limited-view issues associated with the partial-ring systems and the limited imaging speed due to electronic multiplexing [41,43,46,53]. Some of these systems were also equipped with pulse-echo ultrasound imaging capability, enabling both ultrasound computed tomography (USCT) and PACT [43,46,54]. By incorporating the USCT-measured speed of sound (SOS) into adaptive PACT reconstruction algorithms, the image distortion induced by the acoustic heterogeneity of biological tissues was substantially suppressed (Fig. 1(d5)) [46,54]. To monitor brain function in awake animals, a miniaturized PACT device consisting of three ring-shaped 64-element transducer arrays was developed, allowing the brain to be simultaneously imaged at three elevational planes (Fig. 1(e)) [47]. A scaled-up version of the animal full-ring system was later employed for human breast imaging (Fig. 1(f)) [48]. However, the transducer elements were designed to be flat and unfocused in the elevational direction due to the large array diameter: the focal distance of an element with a limited elevational dimension would be far from the array center. Although the unfocused ring array lacked fine elevational resolution, it was scanned along the elevational direction to form a cylindrical detection aperture and thereby improve the elevational resolution. Therefore, the 2D breast ring-array system still holds potential for human brain imaging.

Off-the-shelf linear ultrasonic transducer arrays were also explored for 2D PACT. Functional imaging of the deep mouse brain was reported using a 15-MHz linear ultrasonic probe scanned around the animal’s head (Fig. 1(g)) [49]. Scanning a linear probe is analogous to using a full-ring array, except that the former can provide denser sampling along the azimuthal direction and requires fewer data acquisition channels at the cost of imaging speed. A linear probe was also demonstrated for surgical guidance in an ex vivo human skull with tooltip illumination (Fig. 1(h)) [50]. However, the spatial resolution of a linear probe (without scanning) is highly anisotropic due to the limited view, leading to considerable reconstruction artifacts [31,55–58]. Another major drawback of using off-the-shelf probes is that they usually do not have pre-amplifiers closely connected to the transducer elements, compromising the SNR [56].

Owing to its simplicity, relatively low cost, and acceptable image quality, a 2D PACT system, especially the full-ring array system, is promising for functional human brain imaging. Nevertheless, three potential challenges need to be considered for in vivo studies. First, without elevational scanning, a large-diameter unfocused full-ring system (e.g., the breast system in Fig. 1(f)) cannot differentiate the brain signals from the scalp signals due to the poor elevational resolution. Adding elevational scanning improves the elevational resolution but decreases the imaging speed. Second, the scalp signals can propagate into the skull and cause reverberation, which is fundamentally a three-dimensional (3D) problem. Third, a 2D system is less sensitive than an ideal 3D imaging system.

2.2 3D PACT systems

A hemispherical detection aperture is considered the optimal practical solution for 3D human brain PACT because it can provide 2π-solid-angle detection and accommodate the shape of the head. A system that employed a sparse hemispherical array was reported to scan around its central axis for mouse brain imaging and along a 3D spiral trace for trunk imaging (Fig. 2(a)) [34,59]. A system with a similar transducer element arrangement but a larger array diameter was reported for human breast and extremity imaging (Fig. 2(b)) [60,61]. The transducer array was scanned along a 2D spiral pattern. It provided dense sampling, allowing a high-quality image to be reconstructed. The key limitations of the system were its long acquisition time (120 s) and shallow penetration (<10 mm) [60]. To adapt the system to human brain imaging, a longer laser wavelength (e.g., 1064 nm) can be adopted to improve the imaging depth. Also, the array can be scanned over a smaller FOV to improve the imaging speed. Another system, composed of 16 arc ultrasonic transducer arrays, each consisting of 32 elements with a 1-MHz central frequency, was reported to image the human breast (Fig. 2(c)) [62]. The system combined fixed and dynamic light sources to improve illumination uniformity. A similar but more economical version, which scanned a single quarter-ring transducer array for ultrasound detection, was also reported for human breast imaging (Fig. 2(d)) [63]. The array was made of 96 wideband transducer elements, allowing for denser sampling in the elevational direction and higher resolution. Two fiber bundles were rotated together with the transducer array to illuminate the breast. For both systems in Figs. 2(c) and (d), the illumination strategies provided uniform illumination, but the inconstant light energy distribution at each scanning position violated the constant initial PA pressure assumption made by the 3D image reconstruction algorithm [64]. Thus, the illumination needs to be fixed or numerically compensated for if the systems are used for functional human brain imaging. Recently, another similar system using fixed illumination was reported for ex vivo transcranial imaging (Fig. 2(e)) [65]. A quarter-ring transducer array of 64 elements centered at 1 MHz was scanned to form a hemispherical detection aperture. The acquired data were also used to evaluate some of the advanced de-aberration reconstruction algorithms to be discussed in Section 4 [66]. More recently, a massively parallel system was developed to demonstrate in vivo functional human brain imaging for the first time [27,28]. The system was made of 1024 parallel detection channels evenly distributed on four quarter-ring arrays separated by 90 degrees. By mechanical scanning, it acquired a structural image in 10 s and a functional volumetric image in 2 s. More details regarding that study can be found in Section 5.

Fig. 2. 3D PACT. (a) A sparse hemispherical array for animal brain imaging [59]. (b) A sparse hemispherical array for human breast and extremity imaging [60]. (c) A system composed of 16 arc arrays for human breast imaging [62]. (d) A system with a quarter-ring array for human breast imaging [63]. (e) A system with a quarter-ring array for transcranial imaging [65]. (f) A 1024-element parallel system for functional human brain imaging [28]. Adapted with permission [59,60,62,63,65,28].

The transcranial PA spectrum has been measured to peak at ∼0.75 MHz using 1-MHz ultrasonic transducers [65]. Given that these transducers have maximum sensitivity at 1 MHz, which is higher than 0.75 MHz, one can infer that the limiting factor for detecting higher-frequency PA signals is the skull-induced frequency-dependent attenuation rather than the transducer's limited bandwidth. In other words, using higher-frequency transducers will not improve the detection of transcranial PA signals. On the other hand, although using lower-frequency transducers can improve the detection of transcranial PA signals, the upper bandwidth of such transducers is not high enough to resolve superficial features, such as scalp vessels and boundaries, for use in skull co-registration and modeling. Therefore, the 1-MHz center frequency, which optimizes the tradeoff between the acquisition of transcranial PA signals and the resolution for superficial features, is desirable for transcranial human-brain PACT.
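
To make this frequency tradeoff concrete, the short sketch below (Python) compares the one-way skull loss with the achievable acoustic resolution at several center frequencies. The 2.7 dB/MHz²/cm absorption and 5.5 dB/cm scattering values are taken from Section 3; the 4-mm skull path, the neglect of reflection loss and of the scattering's frequency dependence, and the half-wavelength resolution estimate are simplifying assumptions for illustration only.

    c_tissue = 1500.0      # m/s, assumed soft-tissue sound speed
    alpha0 = 2.7           # dB/MHz^y/cm cortical-bone absorption with y = 2 (see Section 3)
    scatter = 5.5          # dB/cm diploe scattering (frequency dependence neglected here)
    d_cm = 0.4             # 4-mm one-way skull path (assumption)

    for f_mhz in [0.5, 0.75, 1.0, 2.0, 5.0]:
        loss_db = alpha0 * f_mhz**2 * d_cm + scatter * d_cm      # one-way skull loss
        res_mm = c_tissue / (f_mhz * 1e6) / 2 * 1e3              # ~half-wavelength resolution
        print(f"{f_mhz:4.2f} MHz: skull loss ~{loss_db:5.1f} dB, resolution ~{res_mm:4.2f} mm")

The loss grows roughly quadratically with frequency while the resolution improves only linearly, which is the essence of the ∼1-MHz compromise described above.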

Other forms of 3D PA imaging systems include acoustic-resolution photoacoustic microscopy (AR-PAM), optical-resolution photoacoustic microscopy (OR-PAM), and the Fabry–Pérot etalon sensor-based PACT scanner, among others [67,68]. Typical AR- and OR-PAM systems form images by scanning a single-element transducer in a 2D plane and reconstructing depth information based on the signal arrival time. Because cross-sectional images are acquired without solving an inverse reconstruction problem, PAM systems can be referred to as photoacoustic tomography (PAT) but not computed tomography. Depending on the transducer frequency, AR-PAM can typically image ∼3 mm deep, whereas OR-PAM can image ∼1 mm deep [67]. The Fabry–Pérot etalon PACT uses an optical ultrasonic transducer and reconstructs images using the time reversal algorithm [68]. Overall, these systems are considered unsuitable for noninvasive human brain imaging due to their limited FOVs, low imaging speeds, and limited penetration.

2.3 Ultrasonic transducer technologies

Based on the sensing mechanism, ultrasonic transducers fall into two main categories: electric transducers and optical detectors. Electric transducers can further be classified as conventional piezoelectric transducers, piezoelectric micromachined ultrasonic transducers (PMUTs), and capacitive micromachined ultrasonic transducers (CMUTs) [69]. Based on the detection strategies, optical ultrasonic detectors can be classified as interferometric detectors and refractometric detectors [70]. Figure 3 shows the representative structures of electric transducers and optical detectors.

Fig. 3. Schematics of electric ultrasonic transducers and optical ultrasonic detectors, including (a) piezoelectric transducers, (b) CMUTs, (c) PMUTs, and (d) optical (planar Fabry-Pérot, interferometric type) detectors. (d) was adapted with permission [70].

Piezoelectric transducers are most often used [71]. In fact, the systems in Figs. 1 and 2, except Fig. 2(b), all adopted piezoelectric transducers (the transducer array in Fig. 2(b) was made of CMUTs). Compared with piezoelectric transducers, PMUTs and CMUTs are compatible with application-specific integrated circuits (ASICs), which may benefit future PACT systems [72]. Recently, transparent CMUTs have also been reported, potentially permitting more efficient illumination and detection for PACT [73–75]. Various types of optical detectors with excellent performance have been developed based on optical fibers [76], free-space optics [68], and polymer waveguides [77]. Benefitting from the modern semiconductor infrastructure, miniaturized optical detectors have recently been made into photonic chips, allowing for submicrometer element size [78] and fine pitch arrays with parallel readout [79]. Nevertheless, compared with electric transducers, optical detectors offer a relatively low sensitivity for an optimal half-wavelength detection area at frequencies below ∼1 MHz [80,81].

For functional human brain imaging, which aims to detect PA signal changes of a few percent, SNR is the most critical design criterion [27,28]. The SNR is mainly determined by the transducer sensitivity and the transducer array's filling factor. For a PACT system, although the transducer elements must be small enough to provide a sufficiently large coverage angle, the element size need not be smaller than half a wavelength, because an overly small element size does not improve the imaging performance. Therefore, practitioners should consider whether a specific technology allows an element size of half a wavelength to be manufactured and evaluate the sensitivity at the optimal half-wavelength active area. Piezoelectric transducers can be made to optimal sizes, and their sensitivities are scalable with the active areas. For CMUTs and PMUTs, although their basic unit is a cell of dozens or hundreds of micrometers, multiple cells can be densely packed into an element of arbitrary size or shape using the microfabrication process [82,83]. A typical cell filling factor of a CMUT or PMUT element is ∼0.4 [84]. Also, the sensitivities of CMUTs and PMUTs are scalable with their active areas. In contrast, an optical detector’s sensitivity is generally independent of its active area. For a 1-MHz (upper cut-off frequency) PACT system, an element size of ∼0.75 mm should be used. Under this condition, the sensitivities, defined as the noise-equivalent pressures per square root of unit bandwidth, are ∼1.5 $\textrm{mPa}/\sqrt{\textrm{Hz}}$ for piezoelectric transducers [81], ∼1.4 $\textrm{mPa}/\sqrt{\textrm{Hz}}$ for CMUTs [84], ∼0.3 $\textrm{mPa}/\sqrt{\textrm{Hz}}$ for PMUTs [85], and ∼2.0 $\textrm{mPa}/\sqrt{\textrm{Hz}}$ for optical detectors [68,80].
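
As a numerical illustration of the element-size and sensitivity arguments above, the sketch below (Python) computes the half-wavelength element size at a 1-MHz upper cut-off frequency and converts a noise-equivalent pressure density into a noise-equivalent pressure over the detection bandwidth. The 1480 m/s coupling-medium sound speed, the white-noise assumption, and the 1-MHz bandwidth are illustrative assumptions.

    import math

    c_water = 1480.0            # m/s, assumed sound speed of the coupling medium
    f_cutoff = 1.0e6            # Hz, upper cut-off frequency of the PACT system

    element_size = c_water / (2 * f_cutoff)    # half wavelength at the cut-off frequency
    print(f"Optimal element size: {element_size * 1e3:.2f} mm")        # ~0.74 mm

    # Assuming white noise, NEP over the bandwidth = NEP density * sqrt(bandwidth)
    nep_density = 1.5e-3        # Pa/sqrt(Hz), representative piezoelectric value quoted above
    bandwidth = 1.0e6           # Hz, assumed detection bandwidth
    nep = nep_density * math.sqrt(bandwidth)
    print(f"Noise-equivalent pressure over {bandwidth / 1e6:.0f} MHz: {nep:.2f} Pa")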

Overall, piezoelectric transducers, PMUTs, and CMUTs are all competitive in human brain PACT. However, PMUTs and CMUTs have not been widely commercialized, and the device performance may vary with wafer substrates, leading to potential challenges in fabricating large arrays with hundreds to thousands of device chips.

3. Properties and numerical models of the human skull

For transcranial PACT, the presence of the skull has both optical and acoustic effects. At a 956-nm optical wavelength, the effective attenuation coefficient of the human skull was measured to be ∼1.9 cm–1, corresponding to a transmittance of ∼0.5 for a bone thickness of 0.4 cm [86]. A higher transmittance is expected at 1064 nm due to the lower scattering coefficient. Indeed, 1064 nm is the preferred optical wavelength for transcranial PACT. First, the dominant absorber at 1064 nm, HbO2, can be directly measured with a single wavelength. Second, high-energy pulsed Nd:YAG lasers at 1064 nm are widely available. Third, the ANSI safety limits allow for a higher maximum permissible exposure at 1064 nm than at shorter wavelengths [87]. As in transcranial pulse-echo ultrasound imaging, the major obstacle for transcranial PACT is the skull-induced acoustic aberration. Unlike conventional ultrasound imaging, however, PACT does not suffer from speckle artifacts and experiences only one-way acoustic aberration, posing a more tractable problem [88,89]. Since the development of transcranial PACT reconstruction frameworks mainly centers on the skull-induced acoustic problems, we will discuss the skull acoustic properties and numerical models in this section before reviewing the image reconstruction algorithms in Section 4.
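
As a quick check of the optical numbers above, assuming simple exponential decay with the effective attenuation coefficient, the transmittance through a 0.4-cm bone thickness is
$$T = e^{-\mu_{\textrm{eff}} d} = e^{-1.9\;\textrm{cm}^{-1}\times 0.4\;\textrm{cm}} \approx 0.47,$$
consistent with the ∼0.5 value quoted above for 956 nm.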

3.1 Human skull properties

The adult human cranial bone is a sandwiched structure made of the inner table, the outer table, and the diploe layer in the middle (Fig. 4) [90]. The inner and outer tables are cortical bone, anatomically different from the diploe layer, which is trabecular (cancellous) bone. For infants under about four years old, the cranial vaults are typically unilaminar cortical bones [91]. The cortical bone is a solid and dense material with a density ranging from 1.8 to 2.2 g/cm3 [92]. The trabecular bone consists of lamellar packets of irregular plates and rods, among which are fluid bone marrow cells [92]. The trabecular bone is highly heterogeneous, anisotropic, and porous. The trabecular element size is between 50 and 150 µm, with a separation distance between 0.5 and 2 mm [93]. The trabecular bone’s density falls between 0.3 and 1.3 g/cm3, with an average porosity (void fraction) of 24% [90,93]. Since the cortical bone’s density is higher than that of the trabecular bone, acoustic energy is mostly absorbed by the cortical bone [94,95]. At 1 MHz, the cortical bone's acoustic attenuation coefficient is ∼2.7 dB/cm for longitudinal waves and ∼5.4 dB/cm for shear waves. For solid bones, the phase velocities of longitudinal and shear waves are ∼3000 m/s and ∼1500 m/s, respectively [90,96].

Fig. 4. Binarized x-ray microtomographic image of an adult cranial bone slice with 10-µm resolution. Adapted with permission [90].

The cranial bone’s thickness is dependent on the anatomical location and the subject’s gender and age. Figure 5 maps the average skull thicknesses based on the data acquired from 76 people, including 66 males and 10 females aged from 10 to 60 [97]. The temporal and parietal bones demonstrate relatively small thicknesses, making them a potential sally port for transcranial PACT.

Fig. 5. Thickness maps of the human skull. (a) Skull anatomies. Average skull thicknesses of (b) males and (c) females at various anatomical locations. L/R: left/right; FB: frontal bone; TB: temporal bone; OB: occipital bone; PB: parietal bone; FCP: frontal central point; OCP: occipital central point. Error bars: standard deviations across subjects of the same genders.

3.2 Skull-induced acoustic aberration

The human skull presents an impedance mismatch with the ambient soft tissues and coupling media and distorts acoustic amplitudes and phases. Figure 6 groups the skull's acoustic effects into three main categories: attenuation, waveform distortion, and signal contamination.

Fig. 6. Acoustic effects of the skull.

Wave reflection, absorption, and scattering are the main attenuation causes. Reflection results from the acoustic impedance mismatch between the cortical bone and ambient soft tissues at the inner and outer skull boundaries. Absorption is induced by the conversion of mechanical energy to heat in the cortical bone, whereas acoustic scattering mainly occurs in the diploe layer. Over the diagnostic ultrasound frequency range, the acoustic absorption can be described by the frequency power law [98,99]:

$$\alpha({\boldsymbol r},f) = \alpha_0({\boldsymbol r})f^y,$$
where ${\boldsymbol r} \in V$ represents the spatial location, $f$ denotes frequency in MHz, $\alpha_0({\boldsymbol r})$ is the frequency-independent absorption coefficient in $\textrm{Np}/\textrm{MHz}^y/\textrm{cm}$ (1 $\textrm{Np}/\textrm{MHz}^y/\textrm{cm}$ = 8.686 $\textrm{dB}/\textrm{MHz}^y/\textrm{cm}$), and $y$ stands for the power-law exponent (typically between 0.9 and 2.0) [98]. For a homogeneous attenuation coefficient of 2.7 $\textrm{dB}/\textrm{MHz}^2/\textrm{cm}$ and a scattering coefficient of 5.5 dB/cm [90], the transmittance of a normally incident acoustic wave was estimated at 0.75 MHz for different skull thicknesses in Fig. 7. Scattering and reflection are shown to dominate the attenuation, and the transmittance is ∼22% for a skull thickness of 4 mm. For a PA source inside the skull, the acoustic waves’ incident angles differ at different skull locations, resulting in different transmission coefficients at the skull boundaries (shown in Fig. 8). The average acoustic transmittance measured over a hemispherical detection aperture that partially enclosed the skull was ∼4%, corresponding to ∼80% pressure attenuation [65].
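
The ∼22% figure can be roughly reproduced with a back-of-the-envelope estimate. The sketch below (Python) combines normal-incidence intensity transmission at the two skull boundaries with the absorption and scattering losses quoted above; the soft-tissue and cortical-bone impedance values are assumptions for illustration, the quoted dB coefficients are treated as intensity attenuation, and this is not the exact calculation behind Fig. 7.

    f_mhz = 0.75                 # MHz
    d_cm = 0.4                   # 4-mm skull thickness

    # Assumed acoustic impedances (density x sound speed), for illustration only
    Z_tissue = 1000.0 * 1540.0   # soft tissue, ~1.5 MRayl
    Z_bone = 1900.0 * 3000.0     # cortical bone, ~5.7 MRayl

    # Normal-incidence intensity transmission at the outer and inner skull boundaries
    T_boundary = 4 * Z_tissue * Z_bone / (Z_tissue + Z_bone) ** 2
    T_reflection = T_boundary ** 2

    # Absorption (frequency-squared power law) and scattering, expressed in dB
    loss_db = 2.7 * f_mhz**2 * d_cm + 5.5 * d_cm
    T_loss = 10 ** (-loss_db / 10)

    print(f"Total transmittance: {T_reflection * T_loss:.2f}")   # roughly consistent with ~22%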

Fig. 7. Acoustic transmittance at 0.75 MHz in the presence of the human skull due to the three causes.

Fig. 8. Pressure transmission coefficients at the skull boundaries. (a) Pressure transmission coefficients versus incident angles at the soft-tissue–skull interface and (b) at the skull–soft-tissue interface. Adapted with permission [65].

Waveform distortion is another acoustic effect of the skull. Signal broadening is induced by the frequency-dependent attenuation. The relationship between acoustic dispersion and acoustic absorption is governed by [100]

$$\frac{1}{c_2({\boldsymbol r})} - \frac{1}{c_1({\boldsymbol r})} = \alpha_0({\boldsymbol r})\tan\left(\frac{\pi y}{2}\right)\left(f_2^{y-1} - f_1^{y-1}\right),$$
where ${c_1}$ and ${c_2}$ are the sound speeds at frequencies ${f_1}$ and ${f_2}$, and y ≠ 1. An alternate expression for y = 1 is available [101]. Equation (2) also indicates that when the absorption coefficient is frequency square-dependent (y = 2), acoustic dispersion is negligible. Wave refraction occurs at the skull boundaries due to the sound speed mismatch between the ambient soft tissues and the cortical bone. When acoustic waves propagate from the soft tissues to the bone or vice versa, wave conversion occurs at their interfaces due to the support of shear stress by the bone. The conversion introduces acoustic distortion due to the different phase velocities of longitudinal (compressive) and shear waves. The degree of wave mode conversion is dependent on the angles of incidence and refraction at the inner and outer skull boundaries (Fig. 8) [65,102]. At the soft-tissue–skull interface, the critical angle for an incident longitudinal wave is approximately 33°, below which the incident wave is partially converted to longitudinal and shear waves [65]. Above the critical angle, only shear waves are converted and transmitted into the skull. The pressure transmission coefficient can be greater than unity due to the higher acoustic impedance of the longitudinal wave in the skull than in the soft tissues, although the intensity transmittance can never surpass unity. There is no critical angle for the longitudinal wave at the skull–soft-tissue interface due to the higher phase velocity of longitudinal waves in the skull than in the soft tissues. The critical angle for the shear wave at the skull–soft-tissue interface is around 75°.
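
For reference, Eq. (2) can be turned into a small helper that returns the sound speed at one frequency given the speed at another; the $\alpha_0$ and $y$ values in the example call below are arbitrary placeholders rather than measured skull parameters, and $\alpha_0$ must be supplied in units consistent with the chosen frequency and speed units.

    import math

    def sound_speed_at_f2(c1, f1, f2, alpha0, y):
        """Sound speed at frequency f2 given the speed c1 at f1, from Eq. (2); y != 1.

        alpha0 must be in units consistent with f and c (e.g., Np*Hz^(-y)*m^(-1)
        when f is in Hz and c is in m/s).
        """
        slowness2 = 1.0 / c1 + alpha0 * math.tan(math.pi * y / 2) * (f2 ** (y - 1) - f1 ** (y - 1))
        return 1.0 / slowness2

    # With y = 2, tan(pi*y/2) = 0, so the dispersion vanishes (as noted above):
    print(sound_speed_at_f2(c1=3000.0, f1=0.5e6, f2=1.0e6, alpha0=1e-14, y=2.0))  # ~3000 m/s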

Acoustic scattering in the diploe layer scrambles the waves. The degree of scattering depends on the ratio between the scales of the trabecular units and the acoustic wavelength, which typically falls between ∼0.025 and ∼0.075 for longitudinal waves at 0.75 MHz [90,93,96]. Another type of waveform distortion is the reverberation of the brain signals inside the skull, which prolongs the detected signals. Due to the exponential decay of optical fluence with depth, a large amplitude difference exists between the PA signals from the scalp and those from the brain. Similar to the brain signals’ reverberation, these superficial signals propagate into the skull and reverberate, contaminating the signals from the brain in the temporal domain. One can visualize the impacts of the human skull on transcranial photoacoustic imaging in a recent simulation work [103].

3.3 Numerical models of the skull

Obtaining a numerical model of the skull is essential for de-aberrating the transcranial PA signals, similar to the situation in transcranial high-intensity focused ultrasound (HIFU) [104–106]. Theoretically, x-ray CT is most suitable for depicting cortical bone structures, but it has limitations for in vivo applications due to the ionizing radiation exposure [107]. In comparison, MRI is intrinsically less suitable for depicting cortical bone structures due to the low proton density (20% of water) and short signal lifetime (390 µs for T2 at 3T) [108]. Nevertheless, with the development of ultra-short echo time (UTE) and zero time-to-echo (ZT) sequences, MRI has been demonstrated to provide imaging characteristics ideally suited for depicting and segmenting the cranial bone structures [109–112]. Figure 9(a) displays the skull images acquired using MRI (ZT sequences; inverse logarithmic scaling; bias correction) and x-ray CT, demonstrating an excellent agreement [111]. An approximately linear correlation was also observed between the MRI and x-ray CT images for Hounsfield numbers between 300 HU and 1,500 HU (Fig. 9(b)) [111]. Since an MRI image is convertible to a Hounsfield map using such a linear relationship, the skull's acoustic parameters can be retrieved based on the relationship between Hounsfield numbers and acoustic properties [112,113]. Alignment between the lab coordinates and the numerical skull model can be performed using the superficial blood vessels imaged by both MRI angiography and PACT [27]. Even when MRI or x-ray CT is used to improve transcranial PACT, PACT still holds great value in potential applications, such as routine assessment of post-operative brain restoration, studying brain networks during social interaction, which is impractical using a closed-bore MRI or x-ray CT machine, and non-invasive brain-computer interfaces. In these potential applications, the MRI or x-ray CT scanning only needs to be carried out once. However, the ideal form of transcranial PACT should be independent of MRI or x-ray CT.
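
To give a concrete sense of how a Hounsfield map can be converted into acoustic parameters, the sketch below (Python) uses a porosity-based linear interpolation between water and compact bone, a heuristic popularized in the transcranial focused ultrasound literature; the interpolation formula and the endpoint density and sound speed values are illustrative assumptions and are not necessarily the mapping used in [112,113].

    import numpy as np

    def hu_to_acoustic(hu):
        """Map Hounsfield units to mass density (kg/m^3) and longitudinal SOS (m/s).

        Uses a linear interpolation between water and compact bone based on the
        apparent bone fraction; the endpoint values are illustrative assumptions.
        """
        hu = np.clip(np.asarray(hu, dtype=float), 0.0, 1000.0)
        bone_fraction = hu / 1000.0                      # 0 (water) ... 1 (compact bone)
        rho = 1000.0 + (2100.0 - 1000.0) * bone_fraction
        c = 1500.0 + (3000.0 - 1500.0) * bone_fraction
        return rho, c

    rho, c = hu_to_acoustic([0, 300, 900, 1500])
    print(rho)   # [1000. 1330. 1990. 2100.]
    print(c)     # [1500. 1950. 2850. 3000.]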

Fig. 9. MRI and x-ray CT of the human skull. (a) Coregistered MRI and x-ray CT images. (b) The 2D-histogram distribution indicates an approximately linear correlation between x-ray CT and MRI for the bone between 300 HU and 1,500 HU. (c) Schematic of USCT (left) and a wave propagating from a single-element transducer (right). (d) A simple homogeneous head model (1) was used as the initial guess for AWI (2), which was later used as the preconditioner for FWI (3). The ground truth is displayed in (4). Adapted with permission [111,114].

To obviate the reliance on MRI or x-ray CT, we envisage two alternative approaches to modeling the skull. First, a PACT system can be designed to enable simultaneous USCT. It has been demonstrated that the combination of transmitted transcranial USCT with adaptive waveform inversion (AWI) followed by full-waveform inversion (FWI) can produce a sub-millimeter-resolution skull model with a sufficient SNR using a sub-MHz ultrasound frequency (Fig. 9(c)) [114]. FWI is an iterative reconstruction technique originally developed by the petroleum industry to image hydrocarbon reservoirs, and it requires a relatively accurate initial guess of the model [115]. AWI is a modified form of FWI that is less sensitive to the preconditioner but typically cannot provide as well-resolved models as those produced by FWI. Therefore, using the AWI-generated model as a preconditioner for FWI can improve the skull model quality [114]. Figure 9(d) displays the acoustic properties of the skull recovered by FWI and AWI using USCT. In this approach, the concurrent measurements also allow for automatic co-registration between the USCT and PACT images. However, the major challenge associated with this approach is the considerable computational time. Another method of modeling the skull is to use a skull thickness atlas statistically built from established databases, such as the one underlying Fig. 5 [116]. The inner boundary of the scalp imaged by PACT can be treated as the skull’s outer boundary [65]. Combining the outer boundary and the thickness, one can estimate the skull’s inner boundary. However, this subject-nonspecific method does not measure the exact acoustic properties or geometry of each skull and may need parameter tuning and iterative reconstruction to achieve a sufficient reconstruction quality. Alternatively, this approach can potentially be combined with the first approach by using the atlas as a preconditioner for FWI, which may not only increase the accuracy of the initial guess but also reduce the computational time by skipping the AWI step.

4. Computational frameworks for transcranial PACT

Given the considerable progress in solving the inverse problem of transcranial PACT, a survey on this topic may facilitate comparison among different algorithms and catalyze the implementation of the developed algorithms in solving in vivo problems. This section is restricted to the problem of estimating the initial acoustic pressure distribution. Recovering the optical properties of the brain, which is not necessarily required in quantifying the PA signals’ fractional changes for functional imaging, will be touched on briefly in Section 5.

4.1 PACT forward model in a lossy and acoustically heterogeneous fluid medium

To describe the power-law absorption and dispersion effects, wave equations that employ time-domain fractional derivative operators were proposed [98,99,117,118]. However, the operators posed a significant memory burden for numerical implementation [119]. To overcome the issue, a wave equation that modeled the power law absorption using fractional Laplacian operators was proposed [120]. Later on, the dispersion term was introduced into the equation of state via another fractional Laplacian operator [121]:

$$p({\boldsymbol r},t) = c_0({\boldsymbol r})^2\left[1 - \tau({\boldsymbol r})\frac{\partial}{\partial t}\left(-\nabla^2\right)^{\frac{y}{2}-1} - \eta({\boldsymbol r})\left(-\nabla^2\right)^{\frac{y-1}{2}}\right]\rho_p({\boldsymbol r},t),$$
where $p({\boldsymbol r},t)$ is the acoustic pressure, $c_0({\boldsymbol r})$ denotes the SOS in an adiabatic system, and $\rho_p({\boldsymbol r},t)$ stands for the acoustic density (the sound-induced perturbation of the fluid density from its ambient value $\rho_0({\boldsymbol r})$). The second and third terms on the right-hand side account for the required power-law absorption and dispersion. $\tau({\boldsymbol r})$ and $\eta({\boldsymbol r})$ are the acoustic absorption and dispersion proportionality coefficients, defined as [121]
$$\tau({\boldsymbol r}) = -2\alpha_0({\boldsymbol r})c_0({\boldsymbol r})^{y-1}, \quad \eta({\boldsymbol r}) = 2\alpha_0({\boldsymbol r})c_0({\boldsymbol r})^{y}\tan\left(\frac{\pi y}{2}\right).$$
Additionally, the following two equations express Newton’s second law and the conservation of mass:
$$\frac{\partial {\boldsymbol u}({\boldsymbol r},t)}{\partial t} = -\frac{1}{\rho_0({\boldsymbol r})}\nabla p({\boldsymbol r},t),$$
$$\frac{\partial \rho_p({\boldsymbol r},t)}{\partial t} = -\rho_0({\boldsymbol r})\nabla\cdot{\boldsymbol u}({\boldsymbol r},t),$$
where ${\boldsymbol u}({\boldsymbol r},t)$ denotes the particle velocity. The initial conditions are
$$p_0({\boldsymbol r}) \equiv p({\boldsymbol r},t)|_{t=0}, \quad {\boldsymbol u}({\boldsymbol r},t)|_{t=0} = 0.$$
Equation (3) forms the forward wave model of PACT in a lossy and heterogeneous fluid medium. Defining the pressure recording domain as $R$ and the recording positions ${\boldsymbol r}' \in R$, the initial pressure distribution $p_0({\boldsymbol r})$ can be mapped to the measured time-varying pressure distribution $p({\boldsymbol r}',t)$ using a forward operator $\mathcal{F}$:
$$p({\boldsymbol r}',t) = \mathcal{F}p_0({\boldsymbol r}).$$

For a lossless and acoustically homogeneous infinite medium, Eq. (4) has the explicit form [64]:

$$p({\boldsymbol r}',t) = \frac{1}{4\pi c_0^2}\int_V d^3{\boldsymbol r}\, p_0({\boldsymbol r})\frac{d}{dt}\frac{\delta(c_0 t - |{\boldsymbol r}' - {\boldsymbol r}|)}{|{\boldsymbol r}' - {\boldsymbol r}|},$$
where $c_0$ is the constant SOS, and $\delta(\cdot)$ is the Dirac delta function. One can rewrite Eq. (5) as
$$p({\boldsymbol r}',t) = \frac{1}{4\pi c_0^2}\frac{d}{dt}\left(\frac{g({\boldsymbol r}',t)}{t}\right),$$
where $g({\boldsymbol r}',t) = \int_V d^3{\boldsymbol r}\, p_0({\boldsymbol r})\,\delta(c_0 t - |{\boldsymbol r}' - {\boldsymbol r}|)$ is the spherical Radon transform, which connects $g({\boldsymbol r}',t)$ with the integral of $p_0({\boldsymbol r})$ over a spherical surface of radius $c_0 t$ centered at ${\boldsymbol r}'$.
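
As a minimal numerical illustration of the homogeneous-medium forward model, the sketch below (Python) evaluates Eq. (5) for a single detector by binning the voxels of a discretized $p_0$ into spherical shells (a crude approximation of the spherical Radon transform $g$) and then differentiating $g/t$; the grid size, sound speed, detector position, and spherical source are all illustrative assumptions.

    import numpy as np

    c0 = 1500.0                               # m/s, assumed homogeneous sound speed
    dx = 1e-4                                 # 0.1-mm voxels (assumption)
    n = 101
    x = (np.arange(n) - n // 2) * dx
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

    # Illustrative initial pressure: a uniform sphere of 1-mm radius at the origin
    p0 = (X**2 + Y**2 + Z**2 <= (1e-3) ** 2).astype(float)

    r_det = np.array([0.02, 0.0, 0.0])        # detector 20 mm from the source center
    dist = np.sqrt((X - r_det[0])**2 + (Y - r_det[1])**2 + (Z - r_det[2])**2)

    dt = dx / c0                              # time step matched to the voxel size
    t = np.arange(1, 400) * dt

    # Spherical Radon transform: sum p0 over shells of radius c0*t and thickness c0*dt
    shell = np.round(dist / (c0 * dt)).astype(int)
    g = np.bincount(shell.ravel(), weights=p0.ravel(), minlength=t.size + 1)[1:t.size + 1]
    g *= dx**3 / (c0 * dt)                    # voxel volume / shell thickness approximates the delta

    p_det = np.gradient(g / t, dt) / (4 * np.pi * c0**2)   # the rewritten form of Eq. (5) in terms of g
    print(f"peak near t = {t[np.argmax(p_det)] * 1e6:.1f} us; |r'|/c0 = {r_det[0] / c0 * 1e6:.1f} us")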

4.2 PACT forward model based on elastic wave equations

The previous section assumed a lossy and heterogeneous fluid medium, in which shear stress and viscosity were ignored by the fluid wave equations. To describe the propagation of transverse shear waves in the bone, a more accurate reconstruction method can be developed from the elastic wave equations. In the elastic wave equations, instead of a scalar pressure, a 3×3 stress tensor is defined:

$${\boldsymbol \sigma}({\boldsymbol r},t) = \left[\begin{array}{ccc} \sigma^{11}({\boldsymbol r},t) & \sigma^{12}({\boldsymbol r},t) & \sigma^{13}({\boldsymbol r},t)\\ \sigma^{21}({\boldsymbol r},t) & \sigma^{22}({\boldsymbol r},t) & \sigma^{23}({\boldsymbol r},t)\\ \sigma^{31}({\boldsymbol r},t) & \sigma^{32}({\boldsymbol r},t) & \sigma^{33}({\boldsymbol r},t) \end{array}\right].$$
Here, acoustic absorption within the skull is assumed to be frequency invariant [90]. This assumption is reasonable because the ultrasonic transducer’s bandwidth limits the bandwidth of the detected PA signals. Defining ${\boldsymbol u} = [u_1({\boldsymbol r},t), u_2({\boldsymbol r},t), u_3({\boldsymbol r},t)]$ as the particle velocity vector and $\hat{\alpha}({\boldsymbol r})$, in $\textrm{s}^{-1}$, as the acoustic absorption coefficient, which represents the damping force per unit particle speed and unit mass, the wave propagation can be modeled by the following two equations [122–125]:
$$\frac{\partial {\boldsymbol u}({\boldsymbol r},t)}{\partial t} + \hat{\alpha}({\boldsymbol r}){\boldsymbol u}({\boldsymbol r},t) = \frac{1}{\rho_0({\boldsymbol r})}\left(\nabla\cdot{\boldsymbol \sigma}({\boldsymbol r},t)\right),$$
$$\frac{\partial {\boldsymbol \sigma}({\boldsymbol r},t)}{\partial t} = \lambda({\boldsymbol r})\,\textrm{tr}\left(\nabla{\boldsymbol u}({\boldsymbol r},t)\right){\boldsymbol I} + \mu({\boldsymbol r})\left(\nabla{\boldsymbol u}({\boldsymbol r},t) + \nabla{\boldsymbol u}({\boldsymbol r},t)^{T}\right),$$
subject to the initial conditions:
$${\boldsymbol \sigma}_0({\boldsymbol r}) \equiv {\boldsymbol \sigma}({\boldsymbol r},t)|_{t=0} = -\frac{1}{3}p_0({\boldsymbol r}){\boldsymbol I}, \quad {\boldsymbol u}({\boldsymbol r},t)|_{t=0} = 0.$$

In Eq. (7c), $\lambda({\boldsymbol r})$ and $\mu({\boldsymbol r})$ represent the Lamé parameters describing the full elastic tensor of the linear isotropic medium. The $\textrm{tr}(\cdot)$ operator calculates the trace of a matrix, and ${\boldsymbol I}$ is the identity matrix. It should be noted that the initial pressure $p_0({\boldsymbol r})$ is compactly supported in a fluid medium with shear modulus $\mu({\boldsymbol r}) = 0$ [126]. Similar to Eq. (4), the initial pressure distribution $p_0({\boldsymbol r})$ can be mapped to the measured pressure distribution $p({\boldsymbol r}',t)$ using a forward operator $H$:

$$p({\boldsymbol r}',t) = Hp_0({\boldsymbol r}).$$
$H$ depends implicitly on the discretized shear and longitudinal sound speed distributions, the density distribution, and the absorption distribution. These quantities can be specified by a skull model, which can potentially be obtained from other imaging modalities, such as MRI, x-ray CT, or ultrasound tomography [65]. The effects of the transducer's electrical and spatial impulse responses were ignored in Eqs. (3) and (7), but they can be incorporated readily [35,66,127].

4.3 Discretization of the forward models

To solve the wave equations using numerical methods, the detected pressure and object function need to be both temporally and spatially discretized, and the material parameters need to be spatially discretized. Numerical methods for solving the acoustic wave equations fall into three main categories: finite difference (FD) methods, finite element (FE) methods, and spectral methods [35,128,129]. FD and FE methods are local because the wave equations are solved at each point based on its neighboring points. Spectral methods are global as they employ the information from the entire wavefield, allowing computation to be performed on coarser grids without losing accuracy [129,130]. For transcranial PACT reconstruction, the skull boundaries need to be carefully handled because of the sharp transition of acoustic properties. For FD and spectral methods, a constant grid shape and size are commonly used across the entire finite domain [66,126,131]. To better capture the skull curvature, denser grids need to be defined, which poses a computational challenge. For the FD time-domain (FDTD) method, the widely adopted staggered grid finite difference scheme requires the skull boundaries to be numerically smoothed [35,66,90,126,127]. The effects of smoothing on reconstruction accuracies may require further evaluation. For spectral methods, when the entire finite domain is heterogeneous with a smooth transition or simply homogeneous, the solution can be of high accuracy [129]. However, the sharp acoustic property transition at the skull boundaries may violate the assumptions. FE methods can handle sharp boundaries well using adaptive meshing [128]. However, compared to FD and spectral methods, FE methods require a longer computation time given the same degrees of freedom [132]. Although the sharp skull boundaries can potentially restrict the implementation of a particular numerical method, no study has compared the reconstruction accuracies using the three methods side by side.

4.4 Image reconstruction

Comprehensive reviews of PACT reconstruction methods can be found in several review articles [35,133–135]. The universal back-projection algorithm has been the most popular method for PACT reconstruction [33,47,60,62,64,65,133]. Other methods include the inverse Radon transform and time-domain delay-and-sum (beamforming) techniques [136,137]. These canonical methods assume a lossless and acoustically homogeneous medium and that the PA signals are densely sampled on a surface enclosing the object. To compensate for the acoustic heterogeneity induced by the skull, ray-based methods were proposed to divide the wave propagation into layers and account for the skull-induced acoustic aberration in each layer individually (Fig. 10) [65,138]. These approximate methods were computationally efficient and improved the reconstruction quality to some extent but could not separate the longitudinal and shear waves or correct for reverberation. Additionally, to prevent a single back-projected ray from crossing two or more layers, a limited number of virtual detectors in one layer were used to estimate the pressure in the next layer, introducing partial-view problems. Frequency-domain reconstruction methods have also been studied. For certain detection apertures, they can be implemented efficiently using the fast Fourier transform [139–141]. Nevertheless, frequency-domain reconstruction methods also need to assume an acoustically homogeneous medium.
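
For reference, a minimal homogeneous-medium delay-and-sum reconstruction (one of the canonical methods mentioned above, with no skull correction) can be sketched as follows; the ring geometry, sampling rate, sound speed, and data layout are assumptions, and the example is fed with random placeholder data.

    import numpy as np

    def delay_and_sum_2d(signals, det_xy, fov, n_pix, c0, fs):
        """Naive 2D delay-and-sum PACT reconstruction in a homogeneous medium.

        signals: (n_det, n_t) detected PA signals; det_xy: (n_det, 2) detector
        positions in meters; fov: half-width of the square imaging region (m);
        n_pix: pixels per side; c0: assumed sound speed (m/s); fs: sampling rate (Hz).
        """
        grid = np.linspace(-fov, fov, n_pix)
        X, Y = np.meshgrid(grid, grid, indexing="ij")
        image = np.zeros((n_pix, n_pix))
        n_t = signals.shape[1]
        for sig, (xd, yd) in zip(signals, det_xy):
            dist = np.sqrt((X - xd) ** 2 + (Y - yd) ** 2)              # pixel-detector distance
            idx = np.clip(np.round(dist / c0 * fs).astype(int), 0, n_t - 1)
            image += sig[idx]                                          # back-project the delayed sample
        return image / len(det_xy)

    # Toy usage: a 512-element full ring of 50-mm radius, 40-MHz sampling (assumptions)
    n_det, n_t, fs, c0 = 512, 2000, 40e6, 1500.0
    angles = np.linspace(0, 2 * np.pi, n_det, endpoint=False)
    det_xy = 0.05 * np.column_stack([np.cos(angles), np.sin(angles)])
    signals = np.random.randn(n_det, n_t)                              # placeholder data
    img = delay_and_sum_2d(signals, det_xy, fov=0.02, n_pix=200, c0=c0, fs=fs)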

Fig. 10. Ex vivo PACT images reconstructed using (a) the universal back-projection algorithm without the skull and (b) with the skull, and using (c) the layered back-projection algorithm with the skull. Adapted with permission [65].

Time reversal reconstruction algorithms form a PACT image by running a numerical model of the forward wave equations backward in time [142–144]. When the detection aperture fully encloses the object and the sampling time is sufficiently long for the PA waves to completely escape the detection enclosure, time reversal yields a theoretically exact reconstruction of the object function $p_0({\boldsymbol r})$ [145]. As with the back-projection methods, a partial-view detection aperture leads to information incompleteness and an inexact solution. However, time reversal methods can compensate for the medium loss using gain that inverts the attenuation [144]. For example, to account for the acoustic absorption, the absorption terms in Eqs. (3a) and (7b) can be reversed in sign. However, care should be taken when performing the compensation because numerical instability may occur. Heterogeneity in sound speed can also be incorporated into time reversal methods [143,146]. The universal back-projection and time reversal algorithms were compared by imaging two line-shaped targets through an ex vivo monkey skull (Fig. 11) [113].

Fig. 11. Ex vivo PACT images reconstructed using (a) the universal back-projection algorithm without the skull and (b) with the skull, and using (c) the time reversal algorithm with the skull. Adapted with permission [113].

PACT reconstruction can also be implemented in a lossy elastic medium by directly applying the adjoint of the discretized $H$ operator to the discretized recorded pressure data. The exact forms of the discrete and continuous adjoint operators have been derived, making this approach readily available and computationally efficient [126,147]. Compared with the universal back-projection algorithm, the adjoint reconstruction can effectively mitigate skull-induced wavefront aberration (Fig. 12) [126]. However, the adjoint operator does not compensate for the acoustic attenuation. Instead, if the attenuation term is not turned off, it applies the attenuation to the measured pressure signals a second time.

Fig. 12. (a) Photograph of a skull-mimicking plastic globe with “blood vessels” painted on the inner surface. Ex vivo PACT images were reconstructed through the globe using (b) the universal back-projection algorithm and (c) the adjoint operator. Adapted with permission [126].

When the adjoint operator does not produce adequate image quality, the ability to compute the adjoint operator facilitates gradient-based iterative algorithms [66,126]. In fact, formulating the reconstruction problem as an optimization problem has been widely adopted in modern medical imaging modalities, such as x-ray CT and PET [148–150]. An iterative reconstruction algorithm that seeks to compute the penalized least-squares estimates has been reported [66]:

$${\hat{\boldsymbol p}}_0 = \mathop{\textrm{argmin}}\limits_{{\boldsymbol p}_0 \ge 0} \left\| {\boldsymbol p} - H{\boldsymbol p}_0 \right\|_W^2 + \gamma R({\boldsymbol p}_0),$$
where vectors ${\boldsymbol p}$ and ${\boldsymbol p}_0$ represent the discretized observation data (after being deconvolved with the system responses) and the discretized initial pressure, respectively. $\gamma$ is the regularization parameter, and $R(\cdot)$ is a regularizing penalty term, such as the total variation penalty [66]. The quantity $\| \cdot \|_W^2$ stands for a weighted $l^2$ norm, where the diagonal weight matrix $W$ contains elements inversely proportional to the variance of the measurement data. Modern iterative algorithms, such as the fast iterative shrinkage/thresholding algorithm (FISTA), can be implemented with parallel computing to improve computational speed [66,151–155]. The main advantage of the optimization-based approach over the time reversal approaches is that it offers the flexibility to mitigate the effects of data incompleteness via the regularization term. Also, numerical stability issues that exist in time reversal can be mitigated [66,144]. Figure 13 compares the reconstructed images using the adjoint and optimization-based methods [66].
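
As a simplified sketch of this optimization-based approach, the Python code below solves a nonnegativity-constrained least-squares problem by projected gradient descent using only a forward operator and its adjoint; the weighting matrix W, the regularizer R, and the FISTA acceleration of [66] are omitted, and the random matrix standing in for H is purely illustrative.

    import numpy as np

    def projected_gradient_recon(p, forward, adjoint, n_iter=500, step=None):
        """Solve min_{p0 >= 0} 0.5 * ||forward(p0) - p||^2 by projected gradient descent.

        forward(p0) applies a discretized forward operator H; adjoint(r) applies its adjoint.
        A simplified stand-in for Eq. (9): the weighting W and regularizer R are omitted.
        """
        p0 = np.zeros_like(adjoint(p))
        if step is None:
            # Crude step size via power iteration on H^T H (estimates the Lipschitz constant)
            v = np.random.default_rng(0).standard_normal(p0.shape)
            for _ in range(10):
                v = adjoint(forward(v))
                v /= np.linalg.norm(v)
            step = 1.0 / np.linalg.norm(adjoint(forward(v)))
        for _ in range(n_iter):
            grad = adjoint(forward(p0) - p)            # gradient of the data-fidelity term
            p0 = np.maximum(p0 - step * grad, 0.0)     # gradient step + nonnegativity projection
        return p0

    # Toy usage with a random matrix standing in for H (illustration only)
    H = np.random.default_rng(1).standard_normal((200, 100))
    x_true = np.maximum(np.random.default_rng(2).standard_normal(100), 0.0)
    x_hat = projected_gradient_recon(H @ x_true, lambda x: H @ x, lambda r: H.T @ r)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))     # small relative error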

Fig. 13. (a) Photograph of a skull-mimicking plastic globe with “blood vessels” painted on the inner surface and scalp vessel-mimicking phantoms placed on the outer surface. Ex vivo PACT images were reconstructed through the globe using (b) the adjoint method and (c) the iterative optimization-based method. Adapted with permission [66].

Given the potential estimation errors of the skull's acoustic properties, one can employ a joint reconstruction approach to further improve the image quality by concurrently optimizing the PACT initial pressure distribution and the skull acoustic parameters. Based on the elastic forward models, a PACT joint reconstruction problem can be described as [127,156]

$$({\hat{\boldsymbol p}}_0, \hat{c}_l, \hat{c}_s, \hat{\rho}_0, \hat{\alpha}) = \mathop{\textrm{argmin}}\limits_{{\boldsymbol p}_0, c_l, c_s, \rho_0, \alpha} \left\| {\boldsymbol p} - H{\boldsymbol p}_0 \right\|_W^2 + \gamma R({\boldsymbol p}_0),$$
where $\hat{c}_l$, $\hat{c}_s$, $\hat{\rho}_0$, and $\hat{\alpha}$ denote the estimates of the longitudinal wave velocity $c_l$, shear wave phase velocity $c_s$, density $\rho_0$, and acoustic attenuation $\alpha$, respectively. The algorithm iteratively optimizes the skull acoustic parameter estimates and the initial pressure distribution until convergence [127]. The gradients of the cost function can be computed using the adjoint operator [126]. Since this method allows the acoustic properties and the initial pressure distribution to be estimated simultaneously, a more accurate estimation of the initial pressure distribution can be achieved. Figure 14 compares the images reconstructed using the universal back-projection, conventional optimization-based (Eq. (9)), and joint reconstruction methods based on simulation.

Fig. 14. (a) X-ray CT image of an adult human skull with cortical bones spatially encoded by colors. Transcranial PACT was simulated using (b) the universal back-projection method, (c) the conventional iterative optimization-based method, and (d) the joint reconstruction method. Adapted with permission [127].

Recently, a Bayesian framework was proposed for PACT reconstruction [157–160]. It defines all parameters as random variables, and the measurements, the model, and the prior information about the parameters are combined through a maximum a posteriori estimate to solve the inverse problem. Figure 15 displays the mouse head images reconstructed using the Bayesian framework and time reversal. The Bayesian approach is promising for transcranial PACT as it accounts for the uncertainties in parameters, models, and geometries and demonstrates advantages over time reversal algorithms when the detection view is limited. However, further development is required before it can be applied to an acoustically absorptive and heterogeneous medium.

Fig. 15. The mouse head imaged by PACT and reconstructed using (a) the Bayesian framework and (b) time reversal. Adapted with permission [158].


In addition to purely model-based algorithms, data-driven techniques, especially deep learning (DL), have been increasingly investigated for PACT reconstruction [161–165]. Driven by data, DL learns end-to-end transformations without requiring an explicit analytical model. Training a DL network can also be understood as an optimization problem related to the aforementioned Bayesian framework [161]. So far, DL has been successfully applied to reconstruct PACT images from limited-band/limited-view/sparse measurements given forward operators [135,166–168] and to approximate inverse operators, which otherwise involve solving the forward and adjoint problems repetitively [169,170]. Although the use of DL for transcranial PACT has not been established, some reported works are relevant and potentially transferable to transcranial PACT. For example, an encoder–decoder network was developed to account for the acoustic and optical attenuation in deep tissue during PACT reconstruction [171], a U-net-based convolutional neural network (CNN) was proposed to correct for the SOS-heterogeneity-induced aberration in PACT images [172], several DL networks were designed to produce full-bandwidth output signals from limited-band raw signals to improve the reconstruction resolution [166,173], and various DL networks were employed to remove reconstruction artifacts or denoise the measured data [174–176]. In another relevant work, a vector space similarity (VSS) model was used in conjunction with a simulated training data set to correct for the skull-induced distortion in transcranial PAM [177]; however, this method is not strictly DL because it lacks a layered network.

For DL-based transcranial PACT reconstruction, two major issues remain to be addressed. The first is the lack of paired training data, which are essential for fully supervised training. Two approaches can potentially overcome this issue. Experimentally, an acoustic point source can be scanned over a volume inside ex vivo skulls to acquire the location-dependent point spread functions, and the acquired point source data can be synthesized to form arbitrary features that mimic realistic targets. The second approach uses pure simulation, where the ground truth can be extracted from clinical x-ray CT or MRI images; by simulating the forward problem with these features as the initial pressure sources, one can create training data for the network. Beyond fully supervised training, a semi-supervised approach can also be considered. For example, physics-informed neural networks can be developed by directly incorporating the physical models into the loss function such that the network learns much of the physics from the terms in the loss function [161]. The second issue is the lack of a data-consistency term, which complicates assessing the correctness of the reconstruction; future networks are therefore expected to account for uncertainty and provide an uncertainty estimate of the reconstruction. For instance, null-space projection can be used to ensure data consistency after postprocessing in DL-based PACT reconstruction [178]. Overall, given the many ways a network can be incorporated into the reconstruction pipeline, DL is expected to play a revolutionary role in transcranial PACT reconstruction.
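As a concrete, if simplified, example of incorporating the physics into the loss, the sketch below (a hypothetical toy network and a random forward matrix, not any published architecture) adds a data-fidelity term that passes the predicted initial pressure through a known forward operator, in the spirit of the physics-informed training discussed in [161].

import torch
import torch.nn as nn

class TinyReconNet(nn.Module):
    """Toy fully connected map from channel data to an image vector (illustration only)."""
    def __init__(self, n_meas, n_voxels):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_meas, 128), nn.ReLU(),
                                 nn.Linear(128, n_voxels))
    def forward(self, p):
        return self.net(p)

def physics_informed_loss(p0_pred, p0_true, p_meas, H, lam=0.1):
    supervised = nn.functional.mse_loss(p0_pred, p0_true)       # ||p0_pred - p0_true||^2
    data_term = nn.functional.mse_loss(p0_pred @ H.T, p_meas)   # ||H p0_pred - p||^2
    return supervised + lam * data_term

# One illustrative training step on random toy data.
n_meas, n_voxels = 64, 32
H = torch.randn(n_meas, n_voxels)
net = TinyReconNet(n_meas, n_voxels)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
p0_true = torch.randn(8, n_voxels)                     # a batch of "ground-truth" images
p_meas = p0_true @ H.T + 0.01 * torch.randn(8, n_meas) # simulated measurements
loss = physics_informed_loss(net(p_meas), p0_true, p_meas, H)
optimizer.zero_grad(); loss.backward(); optimizer.step()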

5. PACT of human brain function

Studies of functional brain PACT have mainly been reported in animal models, for which several comprehensive review articles can be consulted [22,45,179,180]. For human brain PACT, researchers had mainly focused on ex vivo structural imaging, and in vivo functional imaging was not demonstrated until recently [27,28]. In that study, a newly developed 2.12-MHz, 1024-element parallel system was reported. The system costs ∼US $0.44 million before tax, far less than a 7 T MRI system (∼US $6.5 million for the Siemens Terra) [181]. The PACT system was used to image the functional responses of hemicraniectomy human subjects. Although this unique subject population provided an acoustic window that allowed aberration-free image reconstruction, the functional results revealed PACT’s capability to map human brain function with performance comparable to that of 7 T fMRI, the current clinical gold standard. Some of the results and methodologies from that study may inspire future related research.

5.1 Imaging schemes

Figure 16 shows a subject being imaged by the PACT system [27]. Supine, lateral, and prone imaging positions were adopted to optimize the tradeoff between comfort and stability [27,28]. A custom-designed head stabilizer was used to reduce motion artifacts during scanning. Two laser wavelengths, 1064 nm from an Nd:YAG laser fired at 10 Hz and 694 nm from a ruby laser fired at 1 Hz, were employed to excite PA signals from endogenous hemoglobin (Hb). The measurement started in the baseline mode to acquire brain angiography, followed by the functional mode to image brain function. The scanning configuration provided a 10-cm-diameter FOV on the head, an isotropic spatial resolution of 350 µm, and a temporal resolution of 10 s (1064 nm) and 100 s (694 nm) for the baseline mode and 2 s for the functional mode. Configured to acquire data at a 20-MHz sampling frequency, 12-bit resolution, and a 2000-point sampling length, one functional volumetric scan produces a data size of around 60 MB. If the reconstruction is performed on a volume of 100×500×500 voxels (an FOV of 20×100×100 mm³ at a voxel size of 0.2×0.2×0.2 mm³) with two-times azimuthally interpolated channel data, the GPU-accelerated computation takes around 140 s (GeForce GTX 1080 Ti, Nvidia Corp.).
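A back-of-the-envelope check of these numbers (assuming 12-bit samples are packed without padding and that one functional frame comprises the 1064-nm shots fired within the 2-s functional temporal resolution) reproduces the quoted data size and reconstruction volume:

n_channels, n_samples, bits = 1024, 2000, 12
per_shot_MB = n_channels * n_samples * bits / 8 / 1e6   # ~3.1 MB of raw data per laser shot
shots_per_frame = 2 * 10                                # 2 s at the 10-Hz 1064-nm repetition rate
print(per_shot_MB * shots_per_frame)                    # ~61 MB, consistent with ~60 MB

n_voxels = 100 * 500 * 500                              # 20 x 100 x 100 mm^3 at 0.2-mm voxels
print(n_voxels, n_voxels * 4 / 1e6)                     # 25 million voxels, ~100 MB in single precision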


Fig. 16. A subject’s head being imaged by the PACT system. Adapted with permission [27].


Five block-designed benchmark motor and language tasks were investigated: finger tapping, lip puckering, tongue tapping, story listening, and word generation. N = 4 subjects were recruited for the study, and most tasks were repeated three times. BOLD fMRI results, non-concurrently obtained using a 7 T scanner, were used to validate the PACT functional results.

5.2 Functional results

For both PACT and BOLD fMRI, brain activities were extracted voxel by voxel using the general linear model and presented as t-scores overlaid on T1-weighted MRI images (Fig. 17(a)); for PACT, the analysis was based on the PA signals measured at the 1064-nm optical wavelength [27,28,182]. The maximum imaging depth was ∼19 mm below the skin surface, or ∼11 mm below the cortical surface. Across all motor tasks, an average Dice coefficient and spatial correlation coefficient of ∼0.4 indicated fair-to-moderate agreement, and an average center-of-mass error of ∼6 mm demonstrated acceptable localization variations. For story listening, an average Dice coefficient and spatial correlation coefficient of ∼0.5 indicated moderate agreement, and an average center-of-mass error of ∼6 mm for both language tasks denoted acceptable localization discrepancies. The quantified correspondence between the two modalities’ functional results demonstrated that PACT could provide performance comparable to that of 7 T fMRI for functional human brain imaging.
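The agreement metrics quoted above can be computed as sketched below; the activation maps here are hypothetical binary masks used only to show the definitions of the Dice coefficient and the center-of-mass error.

import numpy as np
from scipy import ndimage

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def center_of_mass_error_mm(a, b, voxel_size_mm):
    ca = np.array(ndimage.center_of_mass(a.astype(float)))
    cb = np.array(ndimage.center_of_mass(b.astype(float)))
    return float(np.linalg.norm((ca - cb) * np.asarray(voxel_size_mm)))

# Hypothetical thresholded activation maps on a 0.35-mm isotropic grid.
pact_map = np.zeros((64, 64, 64), bool); pact_map[20:30, 20:30, 20:30] = True
fmri_map = np.zeros((64, 64, 64), bool); fmri_map[22:32, 22:32, 22:32] = True
print(dice(pact_map, fmri_map), center_of_mass_error_mm(pact_map, fmri_map, (0.35,) * 3))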


Fig. 17. Functional responses. (a) Functional activation in response to different stimulations measured by PACT (top row) and BOLD fMRI (bottom row). (b) Fractional changes of the PA and BOLD signals. (c) Fractional changes of the Hb and BOLD signals. Adapted with permission [27].


The authors quantified the fractional changes of the directly measured PA signals (Fig. 17(b)) and the derived Hb concentration signals (Fig. 17(c)). When quantifying the fractional Hb concentration changes, the authors compensated for the optical fluence using prior knowledge of the wavelength-dependent optical properties of the scalp and brain layers, which had been measured up to 30 mm in depth using diffuse optical imaging based on layered optical models [183]. However, the quantification was not validated against ground truth, which is generally acquired invasively [184]. Other methods for estimating the optical fluence with higher accuracy include modeling the light propagation as a relationship between the medium’s optical properties and the light fluence, followed by an iterative calculation of the spatial distribution of the optical absorption coefficient [185,186]; however, such an approach suffers from computational instability, especially in vivo. Another approach, termed eigenspectra optoacoustic tomography, models the optical fluence as an affine function of a few reference base spectra and has demonstrated a 3–8-fold improvement in estimation accuracy for tissues at depths greater than 5 mm [187]. It was also shown to be largely independent of the specific distribution of optical properties. However, given the case-by-case differences in the optical properties of brain tissue, the preliminary results on simulated targets in the mouse brain require further validation with a larger pool of tissue physiology interrogations [188,189]. Overall, the problem of optical fluence quantification in deep tissues has not been conclusively solved, owing to the dependence of the light fluence on the medium’s wavelength-dependent and location-dependent scattering and absorption, as well as the variation of optical properties among subjects [45].
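To make explicit what the fluence compensation is for, the sketch below shows basic two-wavelength linear unmixing of oxy- and deoxyhemoglobin from fluence-compensated PA amplitudes. The extinction coefficients and fluence values are placeholders (assumptions for illustration only); real values should be taken from tabulated hemoglobin spectra and a validated fluence model.

import numpy as np

# Placeholder molar extinction coefficients [cm^-1/M]; rows: (1064 nm, 694 nm),
# columns: (HbO2, HbR).  These numbers are illustrative only.
E = np.array([[1.0e3, 0.5e3],
              [0.3e3, 2.0e3]])

def unmix(pa, fluence):
    """pa, fluence: length-2 sequences ordered (1064 nm, 694 nm); returns (C_HbO2, C_HbR)."""
    mua = np.asarray(pa, float) / np.asarray(fluence, float)   # fluence-compensated absorption
    return np.linalg.solve(E, mua)                             # invert mu_a = E @ concentrations

print(unmix(pa=[2.0, 1.5], fluence=[1.0, 0.4]))                # relative concentrations (arbitrary units)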

5.3 SNR analysis

The functional data acquired in the hemicraniectomy subjects can be used to estimate the SNR of transcranial PACT, which has not been demonstrated in vivo. As shown in Fig. 18, the SNR at ∼11 mm below the cortical surface was measured to be ∼50, corresponding to a detectability of ∼2% signal changes [27,28]. Based on the discussion in Section 3, the SNR of transcranial PACT is compromised by the skull-induced optical attenuation, acoustic attenuation, and acoustic waveform distortion (Fig. 6). To detect functional changes of several percent, the hardware needs to be further improved, and a system with a lower central frequency (e.g., 1 MHz) is preferred [65]. Given the ∼50% optical attenuation and ∼80% acoustic pressure attenuation for a 4-mm skull thickness, the total PA signal attenuation induced by the skull would be ∼90%. As a result, the SNR would decrease to ∼5, corresponding to ∼20% detectability for transcranial imaging. However, the linear dimension of a 1-MHz transducer element would be doubled compared with the element size in this study, improving the SNR by a factor of ∼2.1. In addition, if the radiant exposure can be increased to the maximum permissible ANSI safety limit, another factor of ∼4 SNR improvement is expected [87]. Finally, if the element count of the transducer array can be doubled to 2048, the SNR can be further improved by a factor of ∼1.4.


Fig. 18. Noise-equivalent molar concentrations (NECs) of Hb vs. depths measured at 1064-nm optical wavelength. Data are presented as mean ± SEM (n = 4 subjects). Adapted with permission [27].


Because the SNR depends on system parameters, such as the ultrasonic transducer element size, element count, central frequency, and scanning time, in the same way for different systems, the above SNR analysis is considered transferable to other PACT systems. Skull thickness varies among subjects and among locations on the same skull, but the SNR can be estimated from the skull properties summarized in Figs. 5 and 7. Currently, the functional signals originate from hemoglobin; however, the SNR of other targets can potentially be estimated as long as their absorption coefficients are known. Notably, the SNR cited in this review is mainly intended for evaluating the feasibility of transcranial PACT. With more in vivo studies carried out by various labs, the SNR values can be cross-validated.

Overall, given the above potential hardware improvements, the SNR of a future transcranial PACT system can potentially reach ∼59 at ∼10 mm below the cortical surface. Note that this analysis treated acoustic scattering as a loss; if next-generation reconstruction algorithms can incorporate the acoustic scattering effect, the SNR can be further increased by a factor of ∼1.3 (Fig. 7), reaching ∼77 at ∼10 mm below the cortical surface. SNRs of ∼59 and ∼77 correspond to detectabilities of ∼1.7% and ∼1.3% PA signal changes, respectively. For example, to detect 1% functional changes, the former requires approximately three averages (the SNR increases by a factor of $\sqrt 3$), whereas the latter requires approximately two averages (the SNR increases by a factor of $\sqrt 2$), allowing faster detection of brain function. In addition, because this prediction assumes successful correction of the skull-induced acoustic scattering, the reconstructed structural images are also expected to show higher contrast; however, such a correction algorithm is still under development.
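The SNR budget above can be written out as simple arithmetic (all factors are the approximate values quoted in the text, not new measurements):

snr_hemicraniectomy = 50                        # measured ~11 mm below the cortical surface [27]
skull_transmission = (1 - 0.5) * (1 - 0.8)      # ~50% optical and ~80% acoustic pressure attenuation
snr_transcranial = snr_hemicraniectomy * skull_transmission          # ~5 with the skull in place
snr_hardware = snr_transcranial * 2.1 * 4 * 1.4                      # larger elements, full ANSI exposure, 2048 elements
snr_scattering = snr_hardware * 1.3                                  # if acoustic scattering is also recovered
print(snr_hardware, snr_scattering)                                  # ~59 and ~77
print(100 / snr_hardware, 100 / snr_scattering)                      # detectable changes: ~1.7% and ~1.3%
# Averages needed to detect a 1% change scale as (required SNR / available SNR)^2.
print((100 / snr_hardware) ** 2, (100 / snr_scattering) ** 2)        # ~3 and ~2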

6. Summary and outlook

This article reviewed and discussed the technical state of the art and the challenges of translating PACT to functional human brain imaging. A human brain PACT system was suggested to employ a 3D hemispherical detection aperture with massively parallel, sensitive ultrasonic transducers with a ∼1-MHz central frequency. The skull’s acoustic properties and the mechanism behind skull-induced acoustic aberration were discussed, indicating that the skull remains the main obstacle for functional human-brain PACT. Several numerical skull modeling approaches were envisaged to correct for the skull-induced acoustic aberration, including subject-specific and subject-non-specific ones: the subject-specific approaches involve x-ray CT, MRI, or USCT, whereas the subject-non-specific method is based on a skull atlas. Their feasibility was discussed, suggesting that USCT-based modeling could provide the optimal tradeoff between modeling accuracy and system complexity. Conventional reconstruction algorithms and modern full-wave acoustic de-aberration algorithms were reviewed, suggesting that optimization-based de-aberration approaches can potentially provide the best reconstruction quality, albeit at considerable computational cost. Alternatively, although data-driven techniques have not yet been implemented for transcranial PACT reconstruction, they are expected to play a revolutionary role owing to their high computational speed and potentially high tolerance to skull modeling errors. To date, functional human-brain PACT has not been realized in healthy adults, and in the authors’ opinion, the primary goal should simply be to demonstrate the detectability of any functional change on the superficial cortex. Based on the preliminary functional results acquired from hemicraniectomy patients and ex vivo skulls, the feasibility of transcranial functional PACT was analyzed, suggesting that, with potential improvements in hardware and reconstruction methodologies, an SNR of ∼77 at ∼10 mm below the cortical surface is achievable, corresponding to a detectability of ∼1.3% functional changes. In conclusion, although functional human-brain PACT has been considered one of the most challenging directions in the photoacoustic field, the authors believe it is feasible and look forward to breakthroughs in the foreseeable future.

Funding

National Institutes of Health (R01 NS102213, U01 EB029823).

Acknowledgment

The authors would like to thank David C. Garrett, Yilin Luo, and Olick G. Joshua for discussing and proofreading the manuscript. This work was sponsored by the United States National Institutes of Health (NIH) grants R01 NS102213 and U01 EB029823.

Disclosures

L.V.W. has a financial interest in Microphotoacoustics, Inc., CalPACT, LLC, and Union Photoacoustic Technologies, Ltd., which, however, did not support this work. S.N. declares no competing interests.

References

1. T. R. Insel, S. C. Landis, and F. S. Collins, “The NIH brain initiative,” Science 340(6133), 687–688 (2013). [CrossRef]  

2. L. A. Jorgenson, W. T. Newsome, D. J. Anderson, C. I. Bargmann, E. N. Brown, K. Deisseroth, J. P. Donoghue, K. L. Hudson, G. S. Ling, and P. R. MacLeish, “The BRAIN Initiative: developing technology to catalyse neuroscience discovery,” Philos. Trans. R. Soc., B 370(1668), 20140164 (2015). [CrossRef]  

3. M. G. Murer, Q. Yan, and R. Raisman-Vozari, “Brain-derived neurotrophic factor in the control human brain, and in Alzheimer's disease and Parkinson's disease,” Prog. Neurobiol. 63(1), 71–124 (2001). [CrossRef]  

4. R. Cade, M. Privette, M. Fregly, N. Rowland, Z. Sun, V. Zele, H. Wagemaker, and C. Edelstein, “Autism and schizophrenia: intestinal disorders,” Nutr. Neurosci. 3(1), 57–72 (2000). [CrossRef]  

5. F. De Martino, J. Zimmermann, L. Muckli, K. Ugurbil, E. Yacoub, and R. Goebel, “Cortical depth dependent functional responses in humans at 7T: improved specificity with 3D GRASE,” PLoS One 8(3), e60514 (2013). [CrossRef]  

6. C. A. Olman, N. Harel, D. A. Feinberg, S. He, P. Zhang, K. Ugurbil, and E. Yacoub, “Layer-specific fMRI reflects different neuronal computations at different depths in human V1,” PLoS One 7(3), e32536 (2012). [CrossRef]  

7. S. Moeller, E. Yacoub, C. A. Olman, E. Auerbach, J. Strupp, N. Harel, and K. Uğurbil, “Multiband multislice GE-EPI at 7 tesla, with 16-fold acceleration using partial parallel imaging with application to high spatial and temporal whole-brain fMRI,” Magn. Reson. Med. 63(5), 1144–1153 (2010). [CrossRef]  

8. P. J. Koopmans, M. Barth, S. Orzada, and D. G. Norris, “Multi-echo fMRI of the cortical laminae in humans at 7 T,” NeuroImage 56(3), 1276–1285 (2011). [CrossRef]  

9. G. Salimi-Khorshidi, G. Douaud, C. F. Beckmann, M. F. Glasser, L. Griffanti, and S. M. Smith, “Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers,” NeuroImage 90, 449–468 (2014). [CrossRef]  

10. M. J. McKeown, L. K. Hansen, and T. J. Sejnowsk, “Independent component analysis of functional MRI: what is signal and what is noise?” Curr. Opin. Neurobiol. 13(5), 620–629 (2003). [CrossRef]  

11. A. Nowogrodzki, “The world's strongest MRI machines are pushing human imaging to new limits,” Nature 563(7729), 24–26 (2018). [CrossRef]  

12. C. Catana, D. Procissi, Y. Wu, M. S. Judenhofer, J. Qi, B. J. Pichler, R. E. Jacobs, and S. R. Cherry, “Simultaneous in vivo positron emission tomography and magnetic resonance imaging,” Proc. Natl. Acad. Sci. 105(10), 3705–3710 (2008). [CrossRef]  

13. G. Strangman, D. A. Boas, and J. P. Sutton, “Non-invasive neuroimaging using near-infrared light,” Biol. Psychiatry 52(7), 679–693 (2002). [CrossRef]  

14. A. T. Eggebrecht, S. L. Ferradal, A. Robichaux-Viehoever, M. S. Hassanpour, H. Dehghani, A. Z. Snyder, T. Hershey, and J. P. Culver, “Mapping distributed brain function and networks with diffuse optical tomography,” Nat. Photonics 8(6), 448–454 (2014). [CrossRef]  

15. F. Darvas, D. Pantazis, E. Kucukaltun-Yildirim, and R. M. Leahy, “Mapping human brain function with MEG and EEG: methods and validation,” NeuroImage 23, S289–S299 (2004). [CrossRef]  

16. C. Demene, J. Baranger, M. Bernal, C. Delanoe, S. Auvin, V. Biran, M. Alison, J. Mairesse, E. Harribaud, M. Pernot, M. Tanter, and O. Baud, “Functional ultrasound imaging of brain activity in human newborns,” Sci. Transl. Med. 9(411), eaah6756 (2017). [CrossRef]  

17. T. Deffieux, C. Demene, M. Pernot, and M. Tanter, “Functional ultrasound neuroimaging: a review of the preclinical and clinical state of the art,” Curr. Opin. Neurobiol. 50, 128–135 (2018). [CrossRef]  

18. M. Imbault, D. Chauvet, J.-L. Gennisson, L. Capelle, and M. Tanter, “Intraoperative functional ultrasound imaging of human brain activity,” Sci. Rep. 7(1), 7304–7307 (2017). [CrossRef]  

19. S. Soloukey, A. J. Vincent, D. D. Satoer, F. Mastik, M. Smits, C. M. Dirven, C. Strydis, J. G. Bosch, A. F. van der Steen, and C. I. De Zeeuw, “Functional ultrasound (fUS) during awake brain surgery: the clinical potential of intra-operative functional and vascular brain mapping,” Front. Neurosci. 13, 1384 (2020). [CrossRef]  

20. C. Demené, J. Robin, A. Dizeux, B. Heiles, M. Pernot, M. Tanter, and F. Perren, “Transcranial ultrafast ultrasound localization microscopy of brain vasculature in patients,” Nat. Biomed. Eng. 5(3), 219–228 (2021). [CrossRef]  

21. L. V. Wang, “Tutorial on photoacoustic microscopy and computed tomography,” IEEE J. Sel. Top. Quantum Electron. 14(1), 171–179 (2008). [CrossRef]  

22. J. Yao and L. V. Wang, “Photoacoustic brain imaging: from microscopic to macroscopic scales,” Neurophotonics 1(1), 011003 (2014). [CrossRef]  

23. I. P. Pavlov, An Investigation of the Physiological Activity of the Cerebral Cortex (Dover, 1960).

24. L. V. Wang and J. Yao, “A practical guide to photoacoustic tomography in the life sciences,” Nat. Methods 13(8), 627–638 (2016). [CrossRef]  

25. L. V. Wang, “Multiscale photoacoustic microscopy and computed tomography,” Nat. Photonics 3(9), 503–509 (2009). [CrossRef]  

26. L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science 335(6075), 1458–1462 (2012). [CrossRef]  

27. S. Na, J. J. Russin, L. Lin, X. Yuan, P. Hu, K. B. Jann, L. Yan, K. Maslov, J. Shi, D. J. Wang, C. Y. Liu, and L. V. Wang, “Massively parallel functional photoacoustic computed tomography of the human brain,” Nat. Biomed. Eng. 2021, 1–9 (2021). [CrossRef]

28. S. Na, J. Russin, L. Lin, X. Yuan, P. Hu, K. Jann, L. Yan, K. Maslov, J. Shi, and D. Wang, “Mapping human brain function with massively parallel high-speed three-dimensional photoacoustic computed tomography,” Proc. SPIE 11642, 116420F (2021). [CrossRef]  

29. Y. Liu, H. Liu, H. Yan, Y. Liu, J. Zhang, W. Shan, P. Lai, H. Li, L. Ren, and Z. Li, “Aggregation-Induced absorption enhancement for deep near-infrared II photoacoustic imaging of brain gliomas in vivo,” Adv. Sci. 6(8), 1801615 (2019). [CrossRef]  

30. Y. Liu, Y. Yang, M. Sun, M. Cui, Y. Fu, Y. Lin, Z. Li, and L. Nie, “Highly specific noninvasive photoacoustic and positron emission tomography of brain plaque with functionalized croconium dye labeled by a radiotracer,” Chem. Sci. 8(4), 2710–2716 (2017). [CrossRef]  

31. A. Fatima, K. Kratkiewicz, R. Manwar, M. Zafar, R. Zhang, B. Huang, N. Dadashzadeh, J. Xia, and K. Avanaki, “Review of cost reduction methods in photoacoustic computed tomography,” Photoacoustics 15, 100137 (2019). [CrossRef]  

32. L. Xiang, B. Wang, L. Ji, and H. Jiang, “4-D photoacoustic tomography,” Sci. Rep. 3(1), 1113 (2013). [CrossRef]  

33. X. L. Deán-Ben, G. Sela, A. Lauri, M. Kneipp, V. Ntziachristos, G. G. Westmeyer, S. Shoham, and D. Razansky, “Functional optoacoustic neuro-tomography for scalable whole-brain monitoring of calcium indicators,” Light: Sci. Appl. 5(12), e16201 (2016). [CrossRef]  

34. X. L. Deán-Ben, T. F. Fehm, S. J. Ford, S. Gottschalk, and D. Razansky, “Spiral volumetric optoacoustic tomography visualizes multi-scale dynamics in mice,” Light: Sci. Appl. 6(4), e16247 (2017). [CrossRef]  

35. J. Poudel, Y. Lou, and M. A. Anastasio, “A survey of computational frameworks for solving the acoustic inverse problem in three-dimensional photoacoustic computed tomography,” Phys. Med. Biol. 64(14), 14TR01 (2019). [CrossRef]  

36. X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnol. 21(7), 803–806 (2003). [CrossRef]  

37. L. Nie, Z. Guo, and L. V. Wang, “Photoacoustic tomography of monkey brain using virtual point ultrasonic transducers,” J. Biomed. Opt. 16(7), 076005 (2011). [CrossRef]  

38. X. Yang and L. V. Wang, “Monkey brain cortex imaging by photoacoustic tomography,” J. Biomed. Opt. 13(4), 044009 (2008). [CrossRef]  

39. Y. Xu and L. V. Wang, “Rhesus monkey brain imaging through intact skull with thermoacoustic tomography,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control 53(3), 542–548 (2006). [CrossRef]

40. L. Nie, X. Cai, K. I. Maslov, A. Garcia-Uribe, M. A. Anastasio, and L. V. Wang, “Photoacoustic tomography through a whole adult human skull with a photon recycler,” J. Biomed. Opt. 17(11), 110506 (2012). [CrossRef]  

41. L. Li, L. Zhu, C. Ma, L. Lin, J. Yao, L. Wang, K. Maslov, R. Zhang, W. Chen, J. Shi, and L. V. Wang, “Single-impulse panoramic photoacoustic computed tomography of small-animal whole-body dynamics at high spatiotemporal resolution,” Nat. Biomed. Eng. 1(5), 0071 (2017). [CrossRef]  

42. J. Gamelin, A. Maurudis, A. Aguirre, F. Huang, P. Guo, L. V. Wang, and Q. Zhu, “A real-time photoacoustic tomography system for small animals,” Opt. Express 17(13), 10489 (2009). [CrossRef]  

43. I. Olefir, E. Merčep, N. C. Burton, S. V. Ovsepian, and V. Ntziachristos, “Hybrid multispectral optoacoustic and ultrasound tomography for morphological and physiological brain imaging,” J. Biomed. Opt. 21(8), 086005 (2016). [CrossRef]  

44. I. Olefir, A. Ghazaryan, H. Yang, J. Malekzadeh-Najafabadi, S. Glasl, P. Symvoulidis, V. B. O’Leary, G. Sergiadis, V. Ntziachristos, and S. V. Ovsepian, “Spatial and Spectral Mapping and Decomposition of Neural Dynamics and Organization of the Mouse Brain with Multispectral Optoacoustic Tomography,” Cell Rep. 26(10), 2833–2846.e3 (2019). [CrossRef]  

45. S. V. Ovsepian, I. Olefir, G. Westmeyer, D. Razansky, and V. Ntziachristos, “Pushing the boundaries of neuroimaging with optoacoustics,” Neuron 96(5), 966–988 (2017). [CrossRef]  

46. E. Merčep, J. L. Herraiz, X. L. Deán-Ben, and D. Razansky, “Transmission–reflection optoacoustic ultrasound (TROPUS) computed tomography of small animals,” Light: Sci. Appl. 8(1), 18 (2019). [CrossRef]  

47. J. Tang, J. E. Coleman, X. Dai, and H. Jiang, “Wearable 3-D photoacoustic tomography for functional brain imaging in behaving rats,” Sci. Rep. 6(1), 1–10 (2016). [CrossRef]  

48. L. Lin, P. Hu, J. Shi, C. M. Appleton, K. Maslov, L. Li, R. Zhang, and L. V. Wang, “Single-breath-hold photoacoustic computed tomography of the breast,” Nat. Commun. 9(1), 1–9 (2018). [CrossRef]  

49. P. Zhang, L. Li, L. Lin, P. Hu, J. Shi, Y. He, L. Zhu, Y. Zhou, and L. V. Wang, “High-resolution deep functional imaging of the whole mouse brain by photoacoustic computed tomography in vivo,” J. Biophotonics 11(1), e201700024 (2018). [CrossRef]  

50. M. T. Graham, J. Huang, F. X. Creighton, and M. A. Lediju Bell, “Simulations and human cadaver head studies to identify optimal acoustic receiver locations for minimally invasive photoacoustic-guided neurosurgery,” Photoacoustics 19, 100183 (2020). [CrossRef]  

51. H.-P. Brecht, R. Su, M. Fronheiser, S. A. Ermilov, A. Conjusteau, and A. A. Oraevsky, “Whole-body three-dimensional optoacoustic tomography system for small animals,” J. Biomed. Opt. 14(6), 064007 (2009). [CrossRef]  

52. A. Buehler, E. Herzog, D. Razansky, and V. Ntziachristos, “Video rate optoacoustic tomography of mouse kidney perfusion,” Opt. Lett. 35(14), 2475 (2010). [CrossRef]  

53. C. Cai, X. Wang, K. Si, J. Qian, J. Luo, and C. Ma, “Feature coupling photoacoustic computed tomography for joint reconstruction of initial pressure and sound speed in vivo,” Biomed. Opt. Express 10(7), 3447–3462 (2019). [CrossRef]  

54. J. Xia, C. Huang, K. Maslov, M. A. Anastasio, and L. V. Wang, “Enhancement of photoacoustic tomography by ultrasonic computed tomography based on optical excitation of elements of a full-ring transducer array,” Opt. Lett. 38(16), 3140 (2013). [CrossRef]  

55. M. Li, C. Liu, X. Gong, R. Zheng, Y. Bai, M. Xing, X. Du, X. Liu, J. Zeng, and R. Lin, “Linear array-based real-time photoacoustic imaging system with a compact coaxial excitation handheld probe for noninvasive sentinel lymph node mapping,” Biomed. Opt. Express 9(4), 1408–1422 (2018). [CrossRef]  

56. X. Wang, J. B. Fowlkes, J. M. Cannata, C. Hu, and P. L. Carson, “Photoacoustic imaging with a commercial ultrasound system and a custom probe,” Ultrasound Med. Biol. 37(3), 484–492 (2011). [CrossRef]  

57. R. Zhang, L. Zhao, C. Zhao, M. Wang, S. Liu, J. Li, R. Zhao, R. Wang, F. Yang, and L. Zhu, “Exploring the diagnostic value of photoacoustic imaging for breast cancer: the identification of regional photoacoustic signal differences of breast tumors,” Biomed. Opt. Express 12(3), 1407–1421 (2021). [CrossRef]  

58. M. Wang, L. Zhao, Y. Wei, J. Li, Z. Qi, N. Su, C. Zhao, R. Zhang, T. Tang, and S. Liu, “Functional photoacoustic/ultrasound imaging for the assessment of breast intraductal lesions: preliminary clinical findings,” Biomed. Opt. Express 12(3), 1236–1246 (2021). [CrossRef]  

59. S. Gottschalk, O. Degtyaruk, B. Mc Larney, J. Rebling, M. A. Hutter, X. L. Deán-Ben, S. Shoham, and D. Razansky, “Rapid volumetric optoacoustic imaging of neural dynamics across the mouse brain,” Nat. Biomed. Eng. 3(5), 392–401 (2019). [CrossRef]  

60. Y. Matsumoto, Y. Asao, H. Sekiguchi, A. Yoshikawa, T. Ishii, K. Nagae, S. Kobayashi, I. Tsuge, S. Saito, and M. Takada, “Visualising peripheral arterioles and venules through high-resolution and large-area photoacoustic imaging,” Sci. Rep. 8(1), 14930 (2018). [CrossRef]  

61. M. Toi, Y. Asao, Y. Matsumoto, H. Sekiguchi, A. Yoshikawa, M. Takada, M. Kataoka, T. Endo, N. Kawaguchi-Sakita, M. Kawashima, E. Fakhrejahani, S. Kanao, I. Yamaga, Y. Nakayama, M. Tokiwa, M. Torii, T. Yagi, T. Sakurai, K. Togashi, and T. Shiina, “Visualization of tumor-related blood vessels in human breast by photoacoustic imaging system with a hemispherical detector array,” Sci. Rep. 7(1), 41970 (2017). [CrossRef]  

62. S. M. Schoustra, D. Piras, R. Huijink, T. J. P. M. op ’t Root, L. Alink, W. M. Kobold, W. Steenbergen, and S. Manohar, “Twente Photoacoustic Mammoscope 2: system overview and three-dimensional vascular network images in healthy breasts,” J. Biomed. Opt. 24(12), 1 (2019). [CrossRef]  

63. A. A. Oraevsky, R. Su, H. Nguyen, J. Moore, Y. Lou, S. Bhadra, M. Anastasio, L. Forte, and W. Yang, “Full-view 3D imaging system for functional and anatomical screening of the breast,” in Photons Plus Ultrasound: Imaging and Sensing 2018, A. A. Oraevsky and L. V. Wang, eds. (SPIE, 2018), p. 248.

64. M. Xu and L. V. Wang, “Universal back-projection algorithm for photoacoustic computed tomography,” Phys. Rev. E 71(1), 016706 (2005). [CrossRef]  

65. S. Na, X. Yuan, L. Lin, J. Isla, D. Garrett, and L. V. Wang, “Transcranial photoacoustic computed tomography based on a layered back-projection method,” Photoacoustics 20, 100213 (2020). [CrossRef]  

66. J. Poudel, S. Na, L. V. Wang, and M. A. Anastasio, “Iterative image reconstruction in transcranial photoacoustic tomography based on the elastic wave equation,” Phys. Med. Biol. 65(5), 055009 (2020). [CrossRef]  

67. J. Yao and L. V. Wang, “Photoacoustic microscopy,” Laser Photonics Rev. 7(5), 758–778 (2013). [CrossRef]  

68. A. P. Jathoul, J. Laufer, O. Ogunlade, B. Treeby, B. Cox, E. Zhang, P. Johnson, A. R. Pizzey, B. Philip, T. Marafioti, M. F. Lythgoe, R. B. Pedley, M. A. Pule, and P. Beard, “Deep in vivo photoacoustic imaging of mammalian tissues using a tyrosinase-based genetic reporter,” Nat. Photonics 9(4), 239–246 (2015). [CrossRef]  

69. R. Manwar, K. Kratkiewicz, and K. Avanaki, “Overview of ultrasound detection technologies for photoacoustic imaging,” Micromachines 11(7), 692 (2020). [CrossRef]  

70. G. Wissmeyer, M. A. Pleitez, A. Rosenthal, and V. Ntziachristos, “Looking at sound: optoacoustics with all-optical ultrasound detection,” Light: Sci. Appl. 7(1), 53 (2018). [CrossRef]  

71. B. Raj, V. Rajendran, and P. Palanichamy, Science and Technology of Ultrasonics (Alpha Science Int'l Ltd., 2004).

72. L. L. Wong, S. Na, A. I. Chen, Z. Li, M. Macecek, and J. T. Yeow, “A feasibility study of piezoelectric micromachined ultrasonic transducers fabrication using a multi-user MEMS process,” Sens. Actuators Phys. 247, 430–439 (2016). [CrossRef]  

73. J. Chen, M. Wang, J.-C. Cheng, Y.-H. Wang, P.-C. Li, and X. Cheng, “A photoacoustic imager with light illumination through an infrared-transparent silicon CMUT array,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control 59(4), 766–775 (2012). [CrossRef]  

74. Z. Li, A. K. Ilkhechi, and R. Zemp, “Transparent capacitive micromachined ultrasonic transducers (CMUTs) for photoacoustic applications,” Opt. Express 27(9), 13204–13218 (2019). [CrossRef]  

75. X. Zhang, O. Adelegan, F. Y. Yamaner, and Ö. Oralkan, “An optically transparent capacitive micromachined ultrasonic transducer (CMUT) fabricated using SU-8 or BCB adhesive wafer bonding,” in 2017 IEEE International Ultrasonics Symposium (IUS) (IEEE, 2017), pp. 1–4.

76. J. A. Guggenheim, J. Li, T. J. Allen, R. J. Colchester, S. Noimark, O. Ogunlade, I. P. Parkin, I. Papakonstantinou, A. E. Desjardins, and E. Z. Zhang, “Ultrasensitive plano-concave optical microresonators for ultrasound sensing,” Nat. Photonics 11(11), 714–719 (2017). [CrossRef]  

77. C. Zhang, S.-L. Chen, T. Ling, and L. J. Guo, “Review of imprinted polymer microrings as ultrasound detectors: design, fabrication, and characterization,” IEEE Sens. J. 15(6), 3241–3248 (2015). [CrossRef]  

78. R. Shnaiderman, G. Wissmeyer, O. Ülgen, Q. Mustafa, A. Chmyrov, and V. Ntziachristos, “A submicrometre silicon-on-insulator resonator for ultrasound detection,” Nature 585(7825), 372–378 (2020). [CrossRef]  

79. W. J. Westerveld, M. Mahmud-Ul-Hasan, R. Shnaiderman, V. Ntziachristos, X. Rottenberg, S. Severi, and V. Rochus, “Sensitive, small, broadband and scalable optomechanical ultrasound sensor in silicon photonics,” Nat. Photonics 15(5), 341–345 (2021). [CrossRef]  

80. D. C. Garrett and L. V. Wang, “Acoustic sensing with light,” Nat. Photonics 15(5), 324–326 (2021). [CrossRef]  

81. A. M. Winkler, K. I. Maslov, and L. V. Wang, “Noise-equivalent sensitivity of photoacoustics,” J. Biomed. Opt. 18(9), 097003 (2013). [CrossRef]  

82. S. Na, Z. Zheng, A. I.-H. Chen, L. L. P. Wong, Z. Li, and J. T. W. Yeow, “Design and fabrication of a high-power air-coupled capacitive micromachined ultrasonic transducer array with concentric annular cells,” IEEE Trans. Electron Devices 64(11), 4636–4643 (2017). [CrossRef]  

83. I. Albert, H. Chen, L. L. Wong, S. Na, Z. Li, M. Macecek, and J. T. Yeow, “Fabrication of a curved row–column addressed capacitive micromachined ultrasonic transducer array,” J. Microelectromechanical Syst. 25(4), 675–682 (2016). [CrossRef]  

84. S. Vaithilingam, T.-J. Ma, Y. Furukawa, I. O. Wygant, X. Zhuang, A. De La Zerda, O. Oralkan, A. Kamaya, R. B. Jeffrey, and B. T. Khuri-Yakub, “Three-dimensional photoacoustic imaging using a two-dimensional CMUT array,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control 56(11), 2411–2419 (2009). [CrossRef]  

85. I. Zamora, E. Ledesma, A. Uranga, and N. Barniol, “Fully integrated CMOS-PMUT transceiver,” in 2018 25th IEEE International Conference on Electronics, Circuits and Systems (ICECS) (IEEE, 2018), pp. 149–152.

86. F. Bevilacqua, D. Piguet, P. Marquet, J. D. Gross, B. J. Tromberg, and C. Depeursinge, “In vivo local determination of tissue optical properties: applications to human brain,” Appl. Opt. 38(22), 4939 (1999). [CrossRef]  

87. American National Standards Institute, American National Standard for the Safe Use of Lasers ANSI Z136.1–2007 (Laser Institute of America, 2007).

88. Z. Guo, L. Li, and L. V. Wang, “On the speckle-free nature of photoacoustic tomography,” Med. Phys. 36(9Part1), 4084–4088 (2009). [CrossRef]  

89. Z. Guo, Z. Xu, and L. V. Wang, “Dependence of photoacoustic speckles on boundary roughness,” J. Biomed. Opt. 17(4), 046009 (2012). [CrossRef]  

90. G. Pinton, J.-F. Aubry, E. Bossy, M. Muller, M. Pernot, and M. Tanter, “Attenuation, scattering, and absorption of ultrasound in the skull bone: absorption of ultrasound in the skull bone,” Med. Phys. 39(1), 299–307 (2011). [CrossRef]  

91. S.-W. Jin, K.-B. Sim, and S.-D. Kim, “Development and growth of the normal cranial vault: an embryologic review,” J. Korean Neurosurg. Soc. 59(3), 192 (2016). [CrossRef]  

92. R. Manwar, K. Kratkiewicz, and K. Avanaki, “Investigation of the effect of the skull in transcranial photoacoustic imaging: a preliminary ex vivo study,” Sensors 20(15), 4189 (2020). [CrossRef]  

93. S. Chaffaï, F. Peyrin, S. Nuzzo, R. Porcher, G. Berger, and P. Laugier, “Ultrasonic characterization of human cancellous bone using transmission and backscatter measurements: relationships to density and microstructure,” Bone 30(1), 229–237 (2002). [CrossRef]

94. C. F. Njeh, “The dependence of ultrasound velocity and attenuation on the material properties of cancellous bone,” Sheffield Hallam University (1995).

95. G. Osterhoff, E. F. Morgan, S. J. Shefelbine, L. Karim, L. M. McNamara, and P. Augat, “Bone mechanical properties and changes with osteoporosis,” Injury 47, S11–S20 (2016). [CrossRef]  

96. F. J. Fry and J. E. Barger, “Acoustical properties of the human skull,” J. Acoust. Soc. Am. 63, 1576 (1978). [CrossRef]

97. M. M. Elahi, M. L. Lessard, S. Hakim, K. Watkin, and J. Sampalis, “Ultrasound in the assessment of cranial bone thickness,” J. Craniofac. Surg. 8(3), 213–221 (1997). [CrossRef]  

98. T. L. Szabo, “Time domain wave equations for lossy media obeying a frequency power law,” J. Acoust. Soc. Am. 96(1), 491–500 (1994). [CrossRef]  

99. T. L. Szabo, Diagnostic Ultrasound Imaging: Inside Out, Academic Press Series in Biomedical Engineering (Elsevier Academic Press, 2004).

100. M. Caputo, “Linear models of dissipation whose Q is almost frequency independent—II,” Geophys. J. Int. 13(5), 529–539 (1967). [CrossRef]  

101. K. R. Waters, M. S. Hughes, J. Mobley, G. H. Brandenburger, and J. G. Miller, “On the applicability of Kramers–Krönig relations for ultrasonic attenuation obeying a frequency power law,” J. Acoust. Soc. Am. 108(2), 556–563 (2000). [CrossRef]  

102. P. J. White, G. T. Clement, and K. Hynynen, “Longitudinal and shear mode ultrasound propagation in human skull bone,” Ultrasound Med. Biol. 32(7), 1085–1096 (2006). [CrossRef]  

103. B. Liang, S. Wang, F. Shen, Q. H. Liu, Y. Gong, and J. Yao, “Acoustic impact of the human skull on transcranial photoacoustic imaging,” Biomed. Opt. Express 12(3), 1512–1528 (2021). [CrossRef]  

104. J.-F. Aubry, M. Tanter, M. Pernot, J.-L. Thomas, and M. Fink, “Experimental demonstration of noninvasive transskull adaptive focusing based on prior computed tomography scans,” J. Acoust. Soc. Am. 113(1), 84–93 (2003). [CrossRef]  

105. H. E. Cline, J. F. Schenek, K. Hynynen, and R. D. Watkins, “MR-guided focused ultrasound surgery,” J. Comput. Assist. Tomogr. 16(6), 956–965 (1992). [CrossRef]  

106. E. Martin, D. Jeanmonod, A. Morel, E. Zadicario, and B. Werner, “High-intensity focused ultrasound for noninvasive functional neurosurgery,” Ann Neurol. 66(6), 858–861 (2009). [CrossRef]  

107. H. D. Barth, M. E. Launey, A. A. MacDowell, J. W. Ager III, and R. O. Ritchie, “On the effect of X-ray irradiation on the deformation and fracture behavior of human cortical bone,” Bone 46(6), 1475–1485 (2010). [CrossRef]  

108. J. Du, M. Carl, M. Bydder, A. Takahashi, C. B. Chung, and G. M. Bydder, “Qualitative and quantitative ultrashort echo time (UTE) imaging of cortical bone,” J. Magn. Reson. 207(2), 304–311 (2010). [CrossRef]  

109. M. Weiger, K. P. Pruessmann, and F. Hennel, “MRI with zero echo time: hard versus sweep pulse excitation,” Magn. Reson. Med. 66(2), 379–389 (2011). [CrossRef]  

110. D. M. Grodzki, P. M. Jakob, and B. Heismann, “Ultrashort echo time imaging using pointwise encoding time reduction with radial acquisition (PETRA),” Magn. Reson. Med. 67(2), 510–518 (2012). [CrossRef]  

111. F. Wiesinger, L. I. Sacolick, A. Menini, S. S. Kaushik, S. Ahn, P. Veit-Haibach, G. Delso, and D. D. Shanbhag, “Zero TE MR bone imaging in the head,” Magn. Reson. Med. 75(1), 107–114 (2016). [CrossRef]

112. M. R. Juttukonda, B. G. Mersereau, Y. Chen, Y. Su, B. G. Rubin, T. L. Benzinger, D. S. Lalush, and H. An, “MR-based attenuation correction for PET/MRI neurological studies with continuous-valued attenuation coefficients for bone through a conversion from R2* to CT-Hounsfield units,” NeuroImage 112, 160–168 (2015). [CrossRef]  

113. C. Huang, “Aberration correction for transcranial photoacoustic tomography of primates employing adjunct image data,” J. Biomed. Opt. 17(6), 066016 (2012). [CrossRef]  

114. L. Guasch, O. C. Agudo, M.-X. Tang, P. Nachev, and M. Warner, “Full-waveform inversion imaging of the human brain,” NPJ Digit. Med. 3(1), 28 (2020). [CrossRef]  

115. A. Tarantola, “Inversion of seismic reflection data in the acoustic approximation,” Geophysics 49(8), 1259–1266 (1984). [CrossRef]  

116. G. Subsol, J.-P. Thirion, and N. Ayache, “A scheme for automatically building 3D morphometric anatomical atlases: application to a skull atlas,” (1998).

117. M. Moshrefi-Torbati and J. K. Hammond, “Physical and geometrical interpretation of fractional operators,” J. Frankl. Inst. 335(6), 1077–1086 (1998). [CrossRef]  

118. I. Podlubny, Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications (Elsevier, 1998).

119. X. Yuan, D. Borup, J. Wiskin, M. Berggren, and S. A. Johnson, “Simulation of acoustic wave propagation in dispersive media with relaxation losses by using FDTD method with PML absorbing boundary condition,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control 46(1), 14–23 (1999). [CrossRef]  

120. W. Chen and S. Holm, “Fractional Laplacian time-space models for linear and nonlinear lossy media exhibiting arbitrary frequency power-law dependency,” J. Acoust. Soc. Am. 115(4), 1424–1430 (2004). [CrossRef]  

121. B. E. Treeby and B. T. Cox, “Modeling power law absorption and dispersion for acoustic propagation using the fractional Laplacian,” J. Acoust. Soc. Am. 127(5), 2741–2748 (2010). [CrossRef]  

122. D. M. Boore, “Finite difference methods for seismic wave propagation in heterogeneous materials,” Methods Comput. Phys. 11, 1–37 (1972). [CrossRef]  

123. J. Virieux, “SH-wave propagation in heterogeneous media: velocity-stress finite-difference method,” Geophysics 49(11), 1933–1942 (1984). [CrossRef]  

124. R. Madariaga, K. Olsen, and R. Archuleta, “Modeling dynamic rupture in a 3D earthquake fault model,” Bull. Seismol. Soc. Am. 88(5), 1182–1197 (1998).

125. Z. Alterman and F. C. Karal Jr, “Propagation of elastic waves in layered media by finite difference methods,” Bull. Seismol. Soc. Am. 58(1), 367–398 (1968).

126. K. Mitsuhashi, J. Poudel, T. P. Matthews, A. Garcia-Uribe, L. V. Wang, and M. A. Anastasio, “A forward-adjoint operator pair based on the elastic wave equation for use in transcranial photoacoustic computed tomography,” SIAM J. Imaging Sci. 10(4), 2022–2048 (2017). [CrossRef]  

127. J. Poudel and M. A. Anastasio, “Joint reconstruction of initial pressure distribution and spatial distribution of acoustic properties of elastic media with application to transcranial photoacoustic tomography,” Inverse Probl. 36(12), 124007 (2020). [CrossRef]  

128. O. C. Zienkiewicz and R. L. Taylor, The Finite Element Method, Vol. 2 (Butterworth-Heinemann, 2000).

129. D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications (SIAM, 1977).

130. B. T. Cox, S. Kara, S. R. Arridge, and P. C. Beard, “k-space propagation models for acoustically heterogeneous media: Application to biomedical photoacoustics,” J. Acoust. Soc. Am. 121(6), 3453–3464 (2007). [CrossRef]  

131. B. E. Treeby and B. T. Cox, “k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt. 15(2), 021314 (2010). [CrossRef]  

132. L. C. Wilcox, G. Stadler, C. Burstedde, and O. Ghattas, “A high-order discontinuous Galerkin method for wave propagation through coupled elastic–acoustic media,” J. Comput. Phys. 229(24), 9373–9396 (2010). [CrossRef]  

133. M. Xu and L. V. Wang, “Photoacoustic imaging in biomedicine,” Rev. Sci. Instrum. 77, 041101 (2006). [CrossRef]  

134. C. Lutzweiler and D. Razansky, “Optoacoustic imaging and tomography: reconstruction approaches and outstanding challenges in image performance and quantification,” Sensors 13(6), 7345–7384 (2013). [CrossRef]  

135. N. Davoudi, X. L. Deán-Ben, and D. Razansky, “Deep learning optoacoustic tomography with sparse data,” Nat. Mach. Intell. 1(10), 453–460 (2019). [CrossRef]  

136. R. A. Kruger, P. Liu, Y. Richard Fang, and C. R. Appledorn, “Photoacoustic ultrasound (PAUS)-reconstruction tomography,” Med. Phys. 22(10), 1605–1609 (1995). [CrossRef]  

137. C. G. Hoelen and F. F. de Mul, “Image reconstruction for photoacoustic scanning of tissue structures,” Appl. Opt. 39(31), 5872–5883 (2000). [CrossRef]  

138. X. Jin, C. Li, and L. V. Wang, “Effects of acoustic heterogeneities on transcranial brain imaging with microwave-induced thermoacoustic tomography: effects of acoustic heterogeneities on thermoacoustic tomography,” Med. Phys. 35(7Part1), 3205–3214 (2008). [CrossRef]  

139. Y. Xu, D. Feng, and L. V. Wang, “Exact frequency-domain reconstruction for thermoacoustic tomography. I. Planar geometry,” IEEE Trans. Med. Imaging 21(7), 823–828 (2002). [CrossRef]  

140. Y. Xu, M. Xu, and L. V. Wang, “Exact frequency-domain reconstruction for thermoacoustic tomography. II. Cylindrical geometry,” IEEE Trans. Med. Imaging 21(7), 829–833 (2002). [CrossRef]  

141. R. J. Schulze, O. Scherzer, G. Zangerl, M. Holotta, D. Meyer, F. Handle, R. Nuster, and G. Paltauf, “On the use of frequency-domain reconstruction algorithms for photoacoustic imaging,” J. Biomed. Opt. 16(8), 086002 (2011). [CrossRef]  

142. Y. Xu and L. V. Wang, “Time reversal and its application to tomography with diffracting sources,” Phys. Rev. Lett. 92(3), 033902 (2004). [CrossRef]  

143. Y. Hristova, P. Kuchment, and L. Nguyen, “Reconstruction and time reversal in thermoacoustic tomography in acoustically homogeneous and inhomogeneous media,” Inverse Probl. 24(5), 055006 (2008). [CrossRef]  

144. B. E. Treeby, E. Z. Zhang, and B. T. Cox, “Photoacoustic tomography in absorbing acoustic media using time reversal,” Inverse Probl. 26(11), 115003 (2010). [CrossRef]  

145. M. Agranovsky and P. Kuchment, “Uniqueness of reconstruction and an inversion procedure for thermoacoustic and photoacoustic tomography with variable sound speed,” Inverse Probl. 23(5), 2089–2102 (2007). [CrossRef]  

146. P. Stefanov and G. Uhlmann, “Thermoacoustic tomography with variable sound speed,” Inverse Probl. 25(7), 075011 (2009). [CrossRef]  

147. A. Javaherian and S. Holman, “A continuous adjoint for photo-acoustic tomography of the brain,” Inverse Probl. 34(8), 085003 (2018). [CrossRef]  

148. J. A. Fessler, “Penalized weighted least-squares image reconstruction for positron emission tomography,” IEEE Trans. Med. Imaging 13(2), 290–300 (1994). [CrossRef]  

149. X. Pan, E. Y. Sidky, and M. Vannier, “Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?” Inverse Probl. 25(12), 123009 (2009). [CrossRef]  

150. M. N. Wernick and J. N. Aarsvold, Emission Tomography: The Fundamentals of PET and SPECT (Elsevier, 2004).

151. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2(1), 183–202 (2009). [CrossRef]  

152. A. Beck, Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with MATLAB (SIAM, 2014).

153. B. O’Donoghue and E. Candes, “Adaptive restart for accelerated gradient schemes,” Found. Comput. Math. 15(3), 715–732 (2015). [CrossRef]

154. A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Trans. Image Process. 18(11), 2419–2434 (2009). [CrossRef]  

155. D. R. Kaeli, Proceedings of 2nd Workshop on General Purpose Processing on Graphics Processing Units (ACM, 2009).

156. C. Huang, K. Wang, R. W. Schoonover, L. V. Wang, and M. A. Anastasio, “Joint reconstruction of absorbed optical energy density and sound speed distributions in photoacoustic computed tomography: a numerical investigation,” IEEE Trans. Comput. Imaging 2(2), 136–149 (2016). [CrossRef]  

157. J. Tick, A. Pulkkinen, and T. Tarvainen, “Image reconstruction with uncertainty quantification in photoacoustic tomography,” J. Acoust. Soc. Am. 139(4), 1951–1961 (2016). [CrossRef]  

158. J. Tick, A. Pulkkinen, F. Lucka, R. Ellwood, B. T. Cox, J. P. Kaipio, S. R. Arridge, and T. Tarvainen, “Three dimensional photoacoustic tomography in Bayesian framework,” J. Acoust. Soc. Am. 144(4), 2061–2071 (2018). [CrossRef]  

159. T. Tarvainen, A. Pulkkinen, B. T. Cox, J. P. Kaipio, and S. R. Arridge, “Bayesian image reconstruction in quantitative photoacoustic tomography,” IEEE Trans. Med. Imaging 32(12), 2287–2298 (2013). [CrossRef]  

160. A. Pulkkinen, B. T. Cox, S. R. Arridge, J. P. Kaipio, and T. Tarvainen, “A Bayesian approach to spectral quantitative photoacoustic tomography,” Inverse Probl. 30(6), 065012 (2014). [CrossRef]  

161. A. Hauptmann and B. T. Cox, “Deep learning in photoacoustic tomography: current approaches and future directions,” J. Biomed. Opt. 25(11), 112903 (2020). [CrossRef]  

162. C. Yang, H. Lan, F. Gao, and F. Gao, “Review of deep learning for photoacoustic imaging,” Photoacoustics 21, 100215 (2021). [CrossRef]  

163. H. Deng, H. Qiao, Q. Dai, and C. Ma, “Deep learning in photoacoustic imaging: a review,” J. Biomed. Opt. 26(4), 040901 (2021). [CrossRef]  

164. K.-T. Hsu, S. Guan, and P. V. Chitnis, “Comparing deep learning frameworks for photoacoustic tomography image reconstruction,” Photoacoustics 23, 100271 (2021). [CrossRef]  

165. G. Wang, J. C. Ye, K. Mueller, and J. A. Fessler, “Image reconstruction is a new frontier of machine learning,” IEEE Trans. Med. Imaging 37(6), 1289–1296 (2018). [CrossRef]  

166. S. Gutta, V. S. Kadimesetty, S. K. Kalva, M. Pramanik, S. Ganapathy, and P. K. Yalavarthy, “Deep neural network-based bandwidth enhancement of photoacoustic data,” J. Biomed. Opt. 22(11), 1 (2017). [CrossRef]  

167. A. Hauptmann, F. Lucka, M. Betcke, N. Huynh, J. Adler, B. Cox, P. Beard, S. Ourselin, and S. Arridge, “Model-based learning for accelerated, limited-view 3-D photoacoustic tomography,” IEEE Trans. Med. Imaging 37(6), 1382–1393 (2018). [CrossRef]  

168. S. Antholzer, M. Haltmeier, and J. Schwab, “Deep learning for photoacoustic tomography from sparse data,” Inverse Probl. Sci. Eng. 27(7), 987–1005 (2019). [CrossRef]  

169. S. Antholzer, M. Haltmeier, R. Nuster, and J. Schwab, “Photoacoustic image reconstruction via deep learning,” Proc. SPIE 10494, 104944U (2018). [CrossRef]  

170. M. Kim, G.-S. Jeng, I. Pelivanov, and M. O’Donnell, “Deep-learning image reconstruction for real-time photoacoustic system,” IEEE Trans. Med. Imaging 39(11), 3379–3390 (2020). [CrossRef]  

171. K. Johnstonbaugh, S. Agrawal, D. Abhishek, M. Homewood, S. P. K. Karri, and S.-R. Kothapalli, “Novel deep learning architecture for optical fluence dependent photoacoustic target localization,” Proc. SPIE 10878, 108781L (2019). [CrossRef]  

172. S. Jeon and C. Kim, “Deep learning-based speed of sound aberration correction in photoacoustic images,” Proc. SPIE 11240, 112400J (2020). [CrossRef]  

173. N. Awasthi, G. Jain, S. K. Kalva, M. Pramanik, and P. K. Yalavarthy, “Deep neural network-based sinogram super-resolution and bandwidth enhancement for limited-data photoacoustic tomography,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control 67(12), 2660–2673 (2020). [CrossRef]  

174. D. Allman, A. Reiter, and M. A. L. Bell, “Photoacoustic source detection and reflection artifact removal enabled by deep learning,” IEEE Trans. Med. Imaging 37(6), 1464–1477 (2018). [CrossRef]  

175. S. Guan, A. A. Khan, S. Sikdar, and P. V. Chitnis, “Fully dense UNet for 2-D sparse photoacoustic tomography artifact removal,” IEEE J. Biomed. Health Inform. 24(2), 568–576 (2020). [CrossRef]  

176. T. Vu, M. Li, H. Humayun, Y. Zhou, and J. Yao, “A generative adversarial network for artifact removal in photoacoustic computed tomography with a linear-array transducer,” Exp. Biol. Med. 245(7), 597–605 (2020). [CrossRef]  

177. L. Mohammadi, H. Behnam, J. Tavakkoli, and K. Avanaki, “Skull acoustic aberration correction in photoacoustic microscopy using a vector space similarity model: a proof-of-concept simulation study,” Biomed. Opt. Express 11(10), 5542 (2020). [CrossRef]  

178. S. Antholzer, J. Schwab, and M. Haltmeier, “Deep learning versus $\ell^{1}$-minimization for compressed sensing photoacoustic tomography,” in 2018 IEEE International Ultrasonics Symposium (IUS) (IEEE, 2018), pp. 206–212.

179. D. Wang, Y. Wu, and J. Xia, “Review on photoacoustic imaging of the brain using nanoprobes,” Neurophotonics 3(1), 010901 (2016). [CrossRef]  

180. D. Razansky, J. Klohs, and R. Ni, “Multi-scale optoacoustic molecular imaging of brain diseases,” Eur. J. Nucl. Med. Mol. Imaging 2021, 1–19 (2021). [CrossRef]  

181. R. McManus, “19-Ton Magnet Augments NIH MRI Facility,” https://nihrecord.nih.gov/2018/10/05/19-ton-magnet-augments-nih-mri-facility.

182. F. A. Graybill, Theory and Application of the Linear Model (Duxbury Press, 1976), 183.

183. L. Gagnon, C. Gauthier, R. D. Hoge, F. Lesage, J. J. Selb, and D. A. Boas, “Double-layer estimation of intra-and extracerebral hemoglobin concentration with a time-resolved system,” J. Biomed. Opt. 13(5), 054019 (2008). [CrossRef]  

184. R. D. Whitehead Jr, Z. Mei, C. Mapango, and M. E. D. Jefferds, “Methods and analyzers for hemoglobin measurement in clinical laboratories and field settings,” Ann. N. Y. Acad. Sci. 1450(1), 147 (2019). [CrossRef]  

185. B. T. Cox, S. R. Arridge, and P. C. Beard, “Estimating chromophore distributions from multiwavelength photoacoustic images,” J. Opt. Soc. Am. A 26(2), 443–455 (2009). [CrossRef]  

186. B. T. Cox, J. G. Laufer, P. C. Beard, and S. R. Arridge, “Quantitative spectroscopic photoacoustic imaging: a review,” J. Biomed. Opt. 17(6), 061202 (2012). [CrossRef]  

187. S. Tzoumas, A. Nunes, I. Olefir, S. Stangl, P. Symvoulidis, S. Glasl, C. Bayer, G. Multhoff, and V. Ntziachristos, “Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues,” Nat. Commun. 7(1), 12121 (2016). [CrossRef]  

188. S. L. Jacques, “Optical properties of biological tissues: a review,” Phys. Med. Biol. 58(11), R37–R61 (2013). [CrossRef]  

189. H. Obrig and A. Villringer, “Beyond the visible—imaging the human brain with light,” J. Cereb. Blood Flow Metab. 23(1), 1–18 (2003). [CrossRef]  


Figures (18)

Fig. 1. 2D PACT. (a) PACT of the rat brain cortex through the skull and skin using a scanned cylindrically focused ultrasonic transducer [36]. (b) PACT of the monkey brain cortex through a monkey skull using a scanned cylindrically focused ultrasonic transducer [37]. (c) PACT of the canine brain cortex through an adult human skull using a scanned spherically focused ultrasonic transducer [40]. (d) The ring-shaped animal PACT system was used to image (1) the mouse brain cortex through the skull in vivo, (2) the coronal plane of the mouse brain through the skin and skull in vivo, (3) the whole mouse brain ex vivo, and (4) the axial plane of the trunk with USCT and PACT in vivo [41–46]. (e) A wearable PACT device for imaging the brain of a free-moving rat [47]. (f) PACT of the human breast using a full-ring ultrasonic transducer array [48]. (g) PACT of the deep mouse brain using a scanned linear ultrasonic probe [49]. (h) PACT for brain surgical guidance using a linear ultrasonic probe without scanning [50]. In (b) and (c), the scalp was shown in the schematics but not present in the ex vivo experiments. Adapted with permission [36,37,40–50].
Fig. 2. 3D PACT. (a) A sparse hemispherical array for animal brain imaging [59]. (b) A sparse hemispherical array for human breast and extremity imaging [60]. (c) A system composed of 16 arc arrays for human breast imaging [62]. (d) A system with a quarter-ring array for human breast imaging [63]. (e) A system with a quarter-ring array for transcranial imaging [65]. (f) A 1024-element parallel system for functional human brain imaging [28]. Adapted with permission [59,60,62,63,65,28].
Fig. 3. Schematics of electric ultrasonic transducers and optical ultrasonic detectors, including (a) piezoelectric transducers, (b) CMUTs, (c) PMUTs, and (d) optical (planar Fabry-Pérot, interferometric type) detectors. (d) was adapted with permission [70].
Fig. 4. Binarized x-ray microtomographic image of an adult cranial bone slice with 10-µm resolution. Adapted with permission [90].
Fig. 5. Thickness maps of the human skull. (a) Skull anatomies. Average skull thicknesses of (b) males and (c) females at various anatomical locations. L/R: left/right; FB: frontal bone; TB: temporal bone; OB: occipital bone; PB: parietal bone; FCP: frontal central point; OCP: occipital central point. Error bars: standard deviations across subjects of the same gender.
Fig. 6. Acoustic effects of the skull.
Fig. 7. Acoustic transmittance at 0.75 MHz in the presence of the human skull due to the three causes.
Fig. 8. Pressure transmission coefficients at the skull boundaries. (a) Pressure transmission coefficients versus incident angles at the soft-tissue–skull interface and (b) at the skull–soft-tissue interface. Adapted with permission [65].
Fig. 9. MRI and x-ray CT of the human skull. (a) Coregistered MRI and x-ray CT images. (b) The 2D-histogram distribution indicates an approximately linear correlation between x-ray CT and MRI for the bone between 300 HU and 1,500 HU. (c) Schematic of USCT (left) and a wave propagated from a single-element transducer (right). (d) A simple homogeneous head model (1) was used as the initial guess for AWI (2), which was later used as the preconditioner for FWI (3). The ground truth is displayed in (4). Adapted with permission [111,114].
Fig. 10. Ex vivo PACT images reconstructed using (a) the universal back-projection algorithm without the skull and (b) with the skull, and using (c) the layered back-projection algorithm with the skull. Adapted with permission [65].
Fig. 11. Ex vivo PACT images reconstructed using (a) the universal back-projection algorithm without the skull and (b) with the skull, and using (c) the time reversal algorithm with the skull. Adapted with permission [113].
Fig. 12. (a) Photograph of a skull-mimicking plastic globe with “blood vessels” painted on the inner surface. Ex vivo PACT images were reconstructed through the globe using (b) the universal back-projection algorithm and (c) the adjoint operator. Adapted with permission [126].
Fig. 13. (a) Photograph of a skull-mimicking plastic globe with “blood vessels” painted on the inner surface and scalp vessel-mimicking phantoms placed on the outer surface. Ex vivo PACT images were reconstructed through the globe using (b) the adjoint method and (c) the iterative optimization-based method. Adapted with permission [66].
Fig. 14. (a) X-ray CT image of an adult human skull with cortical bones spatially encoded by colors. Transcranial PACT was simulated using (b) the universal back-projection method, (c) the conventional iterative optimization-based method, and (d) the joint reconstruction method. Adapted with permission [127].
Fig. 15. The mouse head imaged by PACT and reconstructed using (a) the Bayesian framework and (b) time reversal. Adapted with permission [158].
Fig. 16. A subject’s head being imaged by the PACT system. Adapted with permission [27].
Fig. 17. Functional responses. (a) Functional activation in response to different stimulations measured by PACT (top row) and BOLD fMRI (bottom row). (b) Fractional changes of the PA and BOLD signals. (c) Fractional changes of the Hb and BOLD signals. Adapted with permission [27].
Fig. 18. Noise-equivalent molar concentrations (NECs) of Hb vs. depths measured at 1064-nm optical wavelength. Data are presented as mean ± SEM (n = 4 subjects). Adapted with permission [27].

Equations (17)


\[ \alpha(\mathbf{r}, f) = \alpha_0(\mathbf{r})\, f^{y}, \]
\[ \frac{1}{c_2(\mathbf{r})} - \frac{1}{c_1(\mathbf{r})} = \alpha_0(\mathbf{r}) \tan\!\left(\frac{\pi y}{2}\right)\left( f_2^{\,y-1} - f_1^{\,y-1} \right), \]
\[ p(\mathbf{r}, t) = c_0^2(\mathbf{r})\left[ 1 - \tau(\mathbf{r}) \frac{\partial}{\partial t} \left(-\nabla^2\right)^{\frac{y}{2}-1} - \eta(\mathbf{r}) \left(-\nabla^2\right)^{\frac{y-1}{2}} \right] \rho_p(\mathbf{r}, t), \]
\[ \tau(\mathbf{r}) = -2\alpha_0(\mathbf{r})\, c_0(\mathbf{r})^{\,y-1}, \qquad \eta(\mathbf{r}) = 2\alpha_0(\mathbf{r})\, c_0(\mathbf{r})^{\,y} \tan\!\left(\frac{\pi y}{2}\right). \]
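The power-law attenuation and the accompanying dispersion relation above can be evaluated directly. The following minimal Python sketch implements the two relations with placeholder medium parameters (not values from this article) to show how the attenuation coefficient and phase velocity shift between two frequencies.

```python
import numpy as np

def attenuation(alpha0, y, f):
    """Power-law acoustic attenuation: alpha(f) = alpha0 * f**y."""
    return alpha0 * f**y

def phase_velocity(alpha0, y, c1, f1, f2):
    """Dispersion relation: 1/c2 - 1/c1 = alpha0 * tan(pi*y/2) * (f2**(y-1) - f1**(y-1)).
    Returns c2, the phase velocity at f2, given c1 at f1."""
    inv_c2 = 1.0 / c1 + alpha0 * np.tan(np.pi * y / 2.0) * (f2**(y - 1) - f1**(y - 1))
    return 1.0 / inv_c2

# Placeholder (assumed) medium parameters, for illustration only.
alpha0 = 6e-9          # attenuation prefactor, Np m^-1 Hz^-y (assumed)
y = 1.5                # power-law exponent (assumed)
c1 = 1500.0            # phase velocity at f1, m/s (assumed)
f1, f2 = 0.5e6, 1.0e6  # frequencies, Hz

print(attenuation(alpha0, y, f2))           # attenuation at f2, Np/m
print(phase_velocity(alpha0, y, c1, f1, f2))  # dispersed phase velocity at f2, m/s
```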
\[ \frac{\partial \mathbf{u}(\mathbf{r}, t)}{\partial t} = -\frac{1}{\rho_0(\mathbf{r})} \nabla p(\mathbf{r}, t), \]
\[ \frac{\partial \rho_p(\mathbf{r}, t)}{\partial t} = -\rho_0(\mathbf{r})\, \nabla \cdot \mathbf{u}(\mathbf{r}, t), \]
\[ p_0(\mathbf{r}) \equiv p(\mathbf{r}, t)\big|_{t=0}, \qquad \mathbf{u}(\mathbf{r}, t)\big|_{t=0} = 0. \]
\[ p(\mathbf{r}, t) = \mathbf{F}\, p_0(\mathbf{r}). \]
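The coupled first-order equations above lend themselves to explicit time stepping. The sketch below is an illustrative lossless 2D discretization: the velocity is updated from the current pressure, then the density perturbation and pressure are updated from the new velocity. The absorption terms involving τ and η are omitted, no absorbing boundary is used, and the grid size, medium properties, and detector location are assumed placeholder values; this is not the solver used in the cited works.

```python
import numpy as np

# Grid and medium (assumed, illustrative values).
nx = ny = 200
dx = 1e-4                            # grid spacing, m (assumed)
c0 = np.full((nx, ny), 1500.0)       # speed-of-sound map, m/s (assumed homogeneous)
rho0 = np.full((nx, ny), 1000.0)     # ambient density, kg/m^3 (assumed)
dt = 0.3 * dx / c0.max()             # CFL-limited time step

# Initial pressure p0 (a small disc) and zero initial particle velocity.
xx, yy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
p = np.where((xx - nx // 2) ** 2 + (yy - ny // 2) ** 2 < 10 ** 2, 1.0, 0.0)
ux = np.zeros((nx, ny))
uy = np.zeros((nx, ny))
rho_p = p / c0**2                    # density perturbation consistent with p = c0^2 * rho_p

trace = []                           # pressure recorded at one "detector" pixel
for _ in range(600):
    # du/dt = -(1/rho0) grad p
    dpx, dpy = np.gradient(p, dx)
    ux -= dt / rho0 * dpx
    uy -= dt / rho0 * dpy
    # drho_p/dt = -rho0 div u
    div_u = np.gradient(ux, dx, axis=0) + np.gradient(uy, dx, axis=1)
    rho_p -= dt * rho0 * div_u
    # Lossless equation of state: p = c0^2 * rho_p
    p = c0**2 * rho_p
    trace.append(p[10, ny // 2])

print(len(trace), max(trace))
```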
\[ p(\mathbf{r}, t) = \frac{1}{4\pi c_0^2} \int_V d^3 r'\, p_0(\mathbf{r}') \frac{d}{dt} \frac{\delta\!\left(c_0 t - |\mathbf{r} - \mathbf{r}'|\right)}{|\mathbf{r} - \mathbf{r}'|}, \]
\[ p(\mathbf{r}, t) = \frac{1}{4\pi c_0^2} \frac{d}{dt}\!\left( \frac{g(\mathbf{r}, t)}{t} \right), \]
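For a homogeneous lossless medium, the closed-form solution above can be approximated numerically by summing the initial pressure over a thin spherical shell of radius c₀t around the detector and then differentiating g(r, t)/t in time. The following sketch is one such rough discretization, with assumed voxel size, detector position, and time sampling; it is meant only to make the shell-integral structure of the formula concrete.

```python
import numpy as np

def forward_spherical_shell(p0, voxel_size, r_det, c0, t):
    """Discretized delta-shell solution:
    g(r, t) ~ sum of p0 over voxels with |r - r'| within half a voxel of c0*t, times voxel volume;
    p(r, t) ~ 1/(4*pi*c0^2) * d/dt [ g(r, t) / t ].
    p0: 3D array; r_det: detector position (m); t: increasing 1D array of times (s)."""
    nx, ny, nz = p0.shape
    grid = np.stack(np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                                indexing="ij"), axis=-1) * voxel_size
    dist = np.linalg.norm(grid - np.asarray(r_det), axis=-1)
    dV = voxel_size ** 3
    g = np.array([p0[np.abs(dist - c0 * ti) < voxel_size / 2].sum() * dV for ti in t])
    h = np.where(t > 0, g / t, 0.0)
    return np.gradient(h, t) / (4.0 * np.pi * c0 ** 2)

# Usage with placeholder values: a single point absorber and a detector at the origin.
p0 = np.zeros((64, 64, 64)); p0[32, 32, 32] = 1.0
t = np.linspace(1e-7, 5e-5, 500)
p = forward_spherical_shell(p0, voxel_size=1e-3, r_det=(0.0, 0.0, 0.0), c0=1500.0, t=t)
```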
\[ \boldsymbol{\sigma}(\mathbf{r}, t) = \begin{bmatrix} \sigma_{11}(\mathbf{r}, t) & \sigma_{12}(\mathbf{r}, t) & \sigma_{13}(\mathbf{r}, t) \\ \sigma_{21}(\mathbf{r}, t) & \sigma_{22}(\mathbf{r}, t) & \sigma_{23}(\mathbf{r}, t) \\ \sigma_{31}(\mathbf{r}, t) & \sigma_{32}(\mathbf{r}, t) & \sigma_{33}(\mathbf{r}, t) \end{bmatrix}. \]
\[ \frac{\partial \mathbf{u}(\mathbf{r}, t)}{\partial t} + \hat{\alpha}(\mathbf{r})\, \mathbf{u}(\mathbf{r}, t) = \frac{1}{\rho_0(\mathbf{r})} \left( \nabla \cdot \boldsymbol{\sigma}(\mathbf{r}, t) \right), \]
\[ \frac{\partial \boldsymbol{\sigma}(\mathbf{r}, t)}{\partial t} = \lambda(\mathbf{r})\, \mathrm{tr}\!\left( \nabla \mathbf{u}(\mathbf{r}, t) \right) \mathbf{I} + \mu(\mathbf{r}) \left( \nabla \mathbf{u}(\mathbf{r}, t) + \nabla \mathbf{u}(\mathbf{r}, t)^{\mathrm{T}} \right), \]
\[ \boldsymbol{\sigma}_0(\mathbf{r}) \equiv \boldsymbol{\sigma}(\mathbf{r}, t)\big|_{t=0} = -\tfrac{1}{3}\, p_0(\mathbf{r})\, \mathbf{I}, \qquad \mathbf{u}(\mathbf{r}, t)\big|_{t=0} = 0. \]
\[ p(\mathbf{r}, t) = \mathbf{H}\, p_0(\mathbf{r}). \]
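As an added clarification (not an equation from the original article), the isotropic initial stress above follows from the fact that the acoustic pressure in the elastic formulation is the negative mean normal stress:

\[ p(\mathbf{r}, t) = -\tfrac{1}{3}\, \mathrm{tr}\!\left( \boldsymbol{\sigma}(\mathbf{r}, t) \right) = -\tfrac{1}{3}\left( \sigma_{11} + \sigma_{22} + \sigma_{33} \right). \]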
\[ \hat{p}_0 = \operatorname*{argmin}_{p_0 \geq 0}\; \left\| p - \mathbf{H} p_0 \right\|_W^2 + \gamma\, R(p_0), \]
\[ \left( \hat{p}_0, \hat{c}_l, \hat{c}_s, \hat{\rho}_0, \hat{\alpha} \right) = \operatorname*{argmin}_{p_0,\, c_l,\, c_s,\, \rho_0,\, \alpha}\; \left\| p - \mathbf{H} p_0 \right\|_W^2 + \gamma\, R(p_0), \]
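The penalized least-squares formulations above are typically solved iteratively. The sketch below illustrates one generic projected-gradient loop for the first problem, assuming the forward operator H and its adjoint are available as function handles (for example, wrapping a numerical wave solver). The Tikhonov regularizer, step size, and the toy linear operator in the usage example are placeholders, the data-weighting W is taken as the identity for brevity, and this is not the implementation used in the cited studies.

```python
import numpy as np

def reconstruct_p0(p_meas, H, H_adj, n_iter=200, step=0.1, gamma=1e-3):
    """Projected gradient descent sketch for
       argmin_{p0 >= 0}  ||p - H p0||_2^2 + gamma * ||p0||_2^2.
    H(p0) returns simulated detector data; H_adj(residual) maps data back to image space.
    An L2 (Tikhonov) penalty stands in for the generic regularizer R(p0) above."""
    p0 = np.zeros_like(H_adj(p_meas))            # initial estimate
    for _ in range(n_iter):
        residual = H(p0) - p_meas                # data mismatch
        grad = 2.0 * H_adj(residual) + 2.0 * gamma * p0
        p0 = np.maximum(p0 - step * grad, 0.0)   # gradient step + non-negativity projection
    return p0

# Toy usage: a random linear operator stands in for the wave propagator (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100)) / np.sqrt(200)
H = lambda x: A @ x
H_adj = lambda y: A.T @ y
p0_true = np.maximum(rng.standard_normal(100), 0.0)
p0_hat = reconstruct_p0(H(p0_true), H, H_adj)
print(np.linalg.norm(p0_hat - p0_true) / np.linalg.norm(p0_true))   # relative error
```

The joint-reconstruction problem in the second equation follows the same pattern, except that the forward operator itself depends on the acoustic parameters being estimated, so each outer iteration alternates between updating the image and updating the medium model.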