Optica Publishing Group

Technical considerations in the Verasonics research ultrasound platform for developing a photoacoustic imaging system

Open Access

Abstract

Photoacoustic imaging (PAI) is an emerging functional and molecular imaging technology that has attracted much attention in the past decade. Recently, many researchers have used the Vantage system from Verasonics for simultaneous ultrasound (US) and photoacoustic (PA) imaging. This motivated us to detail the implementation and characterization of a US/PA imaging system using the Verasonics platform. We discuss the experimental considerations for linear-array-based PAI, chosen for its popularity, simple setup, and high potential for clinical translation. Specifically, we describe strategies for US/PA imaging system setup, signal generation, amplification, and data processing, and we study the system performance.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Photoacoustic imaging (PAI), also called optoacoustic imaging, is a three-dimensional (3-D) imaging modality based on the photoacoustic (PA) effect [1]. The sample (light absorber) to be imaged is optically excited, leading to a transient temperature rise that results in a thermoelastic expansion of the absorber followed by emission of acoustic waves. The absorber can be endogenous, such as hemoglobin (oxy- or deoxy-), myoglobin (oxy- or deoxy-), melanin, lipid, bilirubin, or water, or an exogenous contrast agent such as a dye [2]. The absorption spectra of some endogenous and exogenous absorbers are shown in Fig. 1; the absorption spectra of two imaging targets commonly used for calibration and performance evaluation in PAI experiments, i.e., vinyl black tape and carbon pencil lead, are given in Appendix A.


Fig. 1. Absorption spectra of (a) endogenous, and (b) exogenous contrast agents. Reproduced with permission from [3–5]. According to the Beer-Lambert law, absorbance is defined by $A = \varepsilon (\lambda )Cd$ where $\varepsilon (\lambda )$, C, and d are the molar absorptivity, concentration, and cross-section thickness of the contrast agent, respectively. The absorption coefficient is expressed as $\alpha = ({{A / d}} ){\log _{10}}e$ [6].
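As a quick numerical check of the caption's relations, the sketch below (a minimal Python illustration; the values for $\varepsilon$, C, and d are hypothetical) computes $A$ and $\alpha$ exactly as defined above:

```python
import math

def absorbance(molar_absorptivity, concentration, thickness_cm):
    """Beer-Lambert absorbance A = eps(lambda) * C * d."""
    return molar_absorptivity * concentration * thickness_cm

def absorption_coefficient(A, thickness_cm):
    """alpha = (A / d) * log10(e), as written in the Fig. 1 caption."""
    return (A / thickness_cm) * math.log10(math.e)

# Illustrative (hypothetical) values: eps in 1/(M*cm), C in M, d in cm
A = absorbance(2.0e3, 1.0e-3, 0.5)      # A = 1.0
alpha = absorption_coefficient(A, 0.5)  # ~0.869 per cm
```

For the values above, A = 1.0 and α ≈ 0.869 cm-1.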


The emitted acoustic waves from the absorber are detected by ultrasound (US) transducers. The transducer signals are then given to an image reconstruction algorithm to generate the absorption map of the tissue. The PAI process steps are illustrated in Fig. 2.


Fig. 2. Principle of PA signal generation, detection, and image reconstruction. Reprinted with permission from Ref. [7]. A short-pulsed (nanosecond) laser illuminates the absorber, leading to a transient temperature rise which results in a thermoelastic expansion of the absorber and acoustic (PA) wave generation. The signals generated from the waves received by an ultrasound probe are given to a reconstruction algorithm to form a PA image. Thermal and stress confinements must be met to produce PA waves.


Due to strong optical scattering in biological tissues, pure optical imaging modalities have a shallow imaging depth [1,8–10], even with contrast agents [11–14]. The transport mean free path (i.e., the mean distance after which a photon’s direction becomes random) in biological tissues is around 1 mm [3]. Acoustic waves experience far less tissue scattering and thus propagate a greater distance [15,16]. Although ultrasound imaging can image deep in biological tissues with a high spatial resolution, its acoustic contrast is incapable of providing certain physiological parameters [17]. In PAI, photons are not restricted to ballistic paths; optical excitation can travel far beyond the diffusion limit and still generate acoustic waves. The sensitivity of PAI in deep tissues is orders of magnitude higher than that of pure optical imaging modalities [17,18]; the highest penetration depth reported in PAI is ∼12 cm [19–21]. PAI is an ideal modality for measuring or monitoring tissue physiological parameters by imaging the concentration of tissue chromophores [22], which change during the course of a disease [23]. PAI has been evaluated in preclinical and, recently, clinical applications for disease detection and monitoring [24–46]. For instance, it has been used to study human skin abnormalities [41,47–50], brain disease detection [51,52], human breast tumor detection [23,53–56], retina disease diagnosis [57,58], and atherosclerosis evaluation of vessel walls [34,59,60]. The use of PAI in transcranial brain imaging is under investigation [11–14,46,50,61,62].

PAI can be implemented in tomography or microscopy configurations with variations in system design (more details are provided in [7]). Among all configurations and variations of PAI, linear-array-based PAI is one of the most commonly used due to its straightforward setup, ease of use, and simple clinical translatability [63,64]. In addition, since linear-array ultrasound imaging systems are well established as clinical tools, slightly modifying them into more capable tools, i.e., US/PA imaging systems, is not far from reality.

In recent years, many researchers have used the Verasonics Vantage system for simultaneous US and PA imaging. The Vantage system is a MATLAB-programmable platform that can produce co-registered US and PA images; the PA image represents the optical absorption map of the tissue while the US image represents the tissue acoustic impedance map. Here, we describe the subtle details of US/PA imaging system setup, study the performance parameters of the system, and explain the sequencing of US/PA signal generation and signal amplification, as well as the details required for efficient use of the system hardware and data processing protocols.

2. Vantage system architecture for US/PA sequencing

Vantage hardware is controlled by MATLAB code. A detailed explanation of the parameters used in the code for data acquisition and processing, image reconstruction, and display is given in Appendix B. The Vantage system architecture for US/PA sequencing is shown in Fig. 3(A). The structures for US and PA run-time defined in the MATLAB code communicate with RunAcq through the activation of the MATLAB loader function VSX. This opens a GUI including imaging windows of the images reconstructed by RunAcq (US and PA), time gain control, US transmit voltage, and many other parameters that allow real-time user modification of the imaging sequence. These modifications take effect on the hardware between cycles of hand-offs between VSX and RunAcq. After initial activation of VSX, RunAcq communicates all of the run-time transmit and receive structures to the Vantage hardware (silver box) through the PCIe cable. RunAcq receives the raw data, stores it in the RcvBuffer variable, performs image reconstruction on the US and PA data, and sends the result back to the VSX GUI. The raw US and PA data are accessible after completion of the imaging sequence [65].


Fig. 3. Vantage system architecture and data acquisition sequencing. (a) Vantage system architecture for US/PA sequencing and (b) timing diagram of US/PA sequencing in the Vantage system. US: ultrasound, PA: photoacoustic, Recon.: reconstruction, Acq: acquisition, TX: transmit, RX: receive, A/D: analog to digital, VSX: Verasonics script execution, GUI: graphical user interface, RcvBuffer: receive buffer (the predefined buffer where the US and PA data are stored), CPU: central processing unit.


The timing sequence in US and PA data acquisition, illustrated in Fig. 3(B), is as follows. US acquisition requires a user defined number of steered angles. Between each set of US acquisitions, the system waits for a laser trigger input to begin one PA event. After PA acquisition, US and PA data are transferred to the host controller and stored in the RcvBuffer variable. RunAcq then performs image reconstruction on the averaged data of all of the steered angles and also on the PA data. The reconstructed US and PA images are then displayed in the MATLAB VSX GUI.

3. US/PA experimental setup and system characterization

To perform a US/PA experiment, an Nd:YAG laser (PhocusMobil, Opotek Inc., CA, USA) with a repetition rate of 10 Hz and a pulse width of 8 ns was used. The laser uses an optical parametric oscillator (OPO) to tune the wavelength between 690 nm and 950 nm. For light delivery, we used a custom fiber bundle (Newport Corporation, Irvine, CA, USA). For data acquisition, the Verasonics Vantage 128 system was used. The specifications of the Vantage system are listed in Table 1.


Table 1. High frequency Vantage 128 system specifications (verified by Verasonics’ technical support)

The Vantage system is connected to a host computer using a PCIe cable. A transducer is connected to the 260-pin Cannon connector mounted on the Vantage system. On the sample end, the transducer was placed and held perpendicular to the sample, with a coupling medium between the imaging target and the transducer. Water and ultrasound gel are the most common couplants [61,66]. More details about acoustic couplants can be found in Appendix C.

The Verasonics software package provides a PA MATLAB script, SetUpL11_4vFlashAngPA.m, for the 128-element linear array L11-4v probe (see Appendix B). This script can be modified to be used with other probes. Two commonly used ultrasound transducers with Vantage system are ATL Philips L7-4 and Verasonics L22-14v. The specifications of these transducers are listed in Table 2.


Table 2. Specifications of linear array transducers ATL Philips L7-4, and Verasonics L22-14v.

Table 3 lists the modifications to the original script, written for the L11-4v, needed for the L7-4 and L22-14v. The remainder of the script is compatible as long as the probe is a 128-element linear array.


Table 3. Modifications to the original L11-4v script by Verasonics, for the L7-4 and L22-14v probes.

The Vantage system supports linear array transducers from different companies. The list of transducer arrays that can be used with the Vantage system is given in Table 4. If an unrecognized transducer needs to be used, the transducer attributes must be determined independently and input into the computeTrans.m script under a new custom transducer case. Key attributes include the central frequency, bandwidth, number of elements, element width, element spacing, element position, and connector pinout arrangement. In addition to unrecognized linear arrays not listed in Table 4, single-element-based arrays, ring arrays, and hemispherical arrays can also be connected to the Cannon connector mounted on the Vantage system via a custom-made converter.
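For illustration, the attributes such a custom case must supply can be collected in a small structure. The Python sketch below is a language-neutral stand-in (the actual registration happens in the MATLAB computeTrans.m script; the field names here are illustrative, not Verasonics struct fields, and the numeric values are assumed):

```python
from dataclasses import dataclass

@dataclass
class TransducerSpec:
    """Minimal attribute set needed to register a custom linear array
    (illustrative field names; values below are assumed, not from Table 2)."""
    center_freq_mhz: float
    bandwidth_pct: float
    num_elements: int
    element_width_mm: float
    pitch_mm: float  # element-to-element spacing

    def element_positions_mm(self):
        """x-positions of element centers, centered about the array midpoint."""
        half = (self.num_elements - 1) / 2.0
        return [(i - half) * self.pitch_mm for i in range(self.num_elements)]

# Example: values resembling a generic 128-element linear array
spec = TransducerSpec(5.2, 60.0, 128, 0.25, 0.298)
xs = spec.element_positions_mm()  # symmetric about x = 0
```

The element positions derived from the pitch are what a reconstruction algorithm ultimately consumes, which is why spacing and position must be specified consistently.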


Table 4. List of immediately recognizable transducers for the Vantage system.

To demonstrate the PA signal characteristics in both time and frequency domains, a 2 mm diameter carbon lead phantom in water was imaged at 690 nm. The fluence after the fiber bundle was measured as ∼15 mJ/cm2. The L22-14v transducer array was used in this experiment. The time domain signal obtained from the central transducer element (i.e., the 64th element) is shown in Fig. 4(A). In Fig. 4(B), the magnitude of the single-sided Fast Fourier Transform (FFT) of the time domain signal is shown. Figure 4(A) also shows that the strength of the US waves reflected from the phantom is greater than that of the PA signal [67,68]. Spectral parameters (e.g., slope, y-intercept, and mid-band fit) of photoacoustic signals can be utilized to characterize and quantify different tissue types based on their microstructural and mechanical properties [69–74].
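A minimal sketch of this analysis (Python/NumPy for illustration) computes the single-sided FFT magnitude and fits a line to the dB spectrum over a chosen band to obtain the slope, y-intercept, and mid-band fit; the band limits and the synthetic test tone below are assumptions:

```python
import numpy as np

def single_sided_spectrum(signal, fs):
    """Single-sided FFT magnitude of a real signal sampled at fs."""
    n = len(signal)
    mag = np.abs(np.fft.rfft(signal)) / n
    mag[1:] *= 2.0  # fold in the negative-frequency energy
    return np.fft.rfftfreq(n, d=1.0 / fs), mag

def spectral_params(freqs, mag_db, f_lo, f_hi):
    """Linear fit of the dB spectrum over [f_lo, f_hi]:
    returns (slope, y-intercept, mid-band fit)."""
    band = (freqs >= f_lo) & (freqs <= f_hi)
    slope, intercept = np.polyfit(freqs[band], mag_db[band], 1)
    return slope, intercept, slope * 0.5 * (f_lo + f_hi) + intercept

# Synthetic check: a 5 MHz tone sampled at 62.4 MHz peaks at 5 MHz
fs = 62.4e6
t = np.arange(624) / fs
freqs, mag = single_sided_spectrum(np.sin(2 * np.pi * 5e6 * t), fs)
```

The mid-band fit is simply the fitted line evaluated at the center of the analysis band.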


Fig. 4. Photoacoustic signal in (A) time-domain, and (B) frequency-domain, obtained from a 2 mm diameter carbon lead phantom, imaged with L22-14v transducer at 690 nm wavelength. The red signal in A shows the overlaid US signal acquired from the same sample.


We characterized the L7-4 and L22-14v for US/PA imaging. We created a resolution phantom from a 54 µm diameter hair in an open-top plastic cubic box filled with deionized water [see Fig. 5(A)]. The lateral and axial resolutions were quantified by measuring the full width at half maximum (FWHM) of the normalized 1D intensity profile across the hair image [75]. The axial and lateral resolutions of the L7-4 were measured at depths of 0.5 to 5.0 cm in steps of 0.5 cm [see Fig. 5(B)], and those of the L22-14v at depths of 0.2 to 2.0 cm in steps of 0.2 cm [see Fig. 5(C)]. For the L7-4, the axial resolution remained constant in both US and PA imaging while the lateral resolution worsened with depth, whereas for the L22-14v, the axial and lateral resolutions stayed almost constant in both US and PA images. This is potentially due to the focal depth specified by the Vantage system. High frequency transducers also have less beam divergence than low frequency transducers, which may have contributed to the worsening lateral resolution of the L7-4 and the constant lateral resolution of the L22-14v [76].
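The FWHM measurement used here can be sketched as follows (Python for illustration); the half-maximum crossings are linearly interpolated between samples:

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a 1-D profile (x ascending),
    with linear interpolation at the half-maximum crossings."""
    x = np.asarray(x, float)
    p = np.asarray(profile, float)
    p = p - p.min()
    half = 0.5 * p.max()
    above = np.where(p >= half)[0]
    i0, i1 = above[0], above[-1]
    if i0 > 0:  # rising-edge crossing between i0-1 and i0
        left = x[i0 - 1] + (half - p[i0 - 1]) * (x[i0] - x[i0 - 1]) / (p[i0] - p[i0 - 1])
    else:
        left = x[0]
    if i1 < len(p) - 1:  # falling-edge crossing between i1 and i1+1
        right = x[i1] + (p[i1] - half) * (x[i1 + 1] - x[i1]) / (p[i1] - p[i1 + 1])
    else:
        right = x[-1]
    return right - left

# Check against a Gaussian: FWHM = 2*sqrt(2*ln 2)*sigma ~ 2.3548 for sigma = 1
xs = np.linspace(-5.0, 5.0, 2001)
w = fwhm(xs, np.exp(-xs ** 2 / 2.0))
```

The same function applies to both axial and lateral profiles once the image row or column through the target is extracted and normalized.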


Fig. 5. US/PA resolution study. (A) Schematic of the experimental setup, including a hair phantom photograph captured by a 4× objective on a light microscope (SME-F8BH, Amscope, CA, USA). Resolution study when (B) L7-4 was used, (C) L22-14v was used. (i) US axial resolution versus depth, (ii) US lateral resolution versus depth, (iii) PA axial resolution versus depth, (iv) PA lateral resolution versus depth.


4. Technical considerations

4.1. Synchronization optimization for efficient averaging

Photoacoustic imaging relies on careful timing of data acquisition and laser firing to accurately reconstruct photoacoustic depth information. In Q-switched Nd:YAG lasers, the timing involves triggering the flash lamps to stimulate the emission medium (Nd:YAG), followed by time for optical build-up and subsequent opening of the Q-switch to release the laser beam. A straightforward triggering method [see Fig. 6(Ai)] uses the internal flash lamp and Q-switch triggers built into the laser, with the flash lamp out port triggering data acquisition on the Vantage trigger in-1 port. Upon receipt of the flash lamp trigger, the Vantage waits for a user-specified delay equal to the known optical build-up time of the laser and then begins recording approximately when the laser fires. The major drawback of this method is that the optical build-up time of the laser will “jitter” by tens of nanoseconds, causing the peak PA signal to fluctuate by a few samples. This causes issues when averaging data between subsequent frames. We have implemented an improved triggering method following [77] [see Fig. 6(Aii)], in which the flash lamp and Q-switch of the laser are both externally triggered. Using a function generator (ATF20B, Shenzhen Atten Electronics Co., Ltd., Nanshan, China), a trigger of 10 Hz, 5 Vpp, 2.5 V offset, 50% duty cycle was simultaneously sent to the flash lamp-in of the Nd:YAG laser (PhocusMobil, Opotek, CA, USA) and the trigger in-1 of the Vantage system. Within the Vantage MATLAB script, the flash2Qdelay parameter is set to the external Q-switch delay time specified by the laser manufacturer (290 µs). This allows the Vantage to trigger the Q-switch by connecting the Vantage’s trigger-out to the Q-switch-in on the laser.
We compared the performance of the two triggering methods by imaging a 2 mm diameter carbon lead, using the L7-4 US probe for PA signal detection and the fiber bundle described earlier for illumination at 690 nm. Five sequential photoacoustic frames were recorded and overlaid with both methods [see Figs. 6(Bi) and 6(Bii)]. The results show a greater “jitter” in the first triggering method compared to nearly no “jitter” in the second method. We then averaged the signals acquired with each triggering method. Averaging improved the signal-to-noise ratio (SNR) of the PA signal because the signal components being averaged were correlated across frames while the noise components, including pulse-to-pulse intensity and beam-profile fluctuations, were not. The 64th element of the L7-4 probe data was averaged using various numbers of stored frames: 1, 10, and 50, and plotted [see Fig. 6(Biii)].
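The effect of trigger jitter on averaging can be reproduced with a toy simulation (Python; the pulse shape, noise level, and jitter range below are assumed): aligned frames average toward the clean pulse and suppress noise, while randomly shifted frames smear and lower the averaged peak.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_frames = 400, 50
t = np.arange(n)
clean = np.exp(-((t - 200.0) ** 2) / 50.0)  # synthetic PA pulse (sigma = 5 samples)

def acquire(jitter):
    """One noisy frame; a nonzero jitter shifts the pulse by a random
    number of samples, mimicking Q-switch build-up jitter."""
    shift = int(rng.integers(-jitter, jitter + 1)) if jitter else 0
    return np.roll(clean, shift) + rng.normal(0.0, 0.5, n)

# Averaging 50 frames: well-aligned vs. jittered (up to +/-8 samples)
aligned = np.mean([acquire(0) for _ in range(n_frames)], axis=0)
jittered = np.mean([acquire(8) for _ in range(n_frames)], axis=0)
```

The aligned average retains the full peak height with noise reduced roughly by the square root of the number of frames, whereas the jittered average trades peak amplitude for temporal smearing.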


Fig. 6. Photoacoustic signal “jitter” comparison between two triggering methods when a 2 mm carbon lead was imaged. 64th element signals are plotted. (A) (i) Schematic of straightforward triggering method where laser triggers Vantage system, (ii) schematic of the function generator driven triggering method where function generator triggers laser flash lamp and Vantage system, which then triggers laser Q-switch, (B) (i) five sequential PA frames showing significant “jitter” between frames, (ii) five sequential PA frames showing nearly no “jitter” between frames, and (iii) averaged frames using methods described in A(i) and A(ii).


4.2. Illumination angle matters

Light illumination is one of the main components of a photoacoustic imaging system [78]. The orientation of the illumination can influence the amplitude of the PA signal received by the transducer, depending on how the light energy is deposited on the object [16]. To investigate this matter, we imaged a 2 mm diameter carbon lead in a scattering medium, i.e., 2% milk, at 690 nm. We acquired PA data at different illumination angles, measured as the angle between the transducer and the illumination plane [see the experimental setup in Fig. 7(A)]. In these experiments, the transducer plane was held constant (always perpendicular to the sample) and the illumination plane was changed while the illumination spot remained at the same location on the surface of the phantom. Figure 7(Bi) shows the data collected from the 64th element of the L7-4 probe at an angle of $\theta = $ 40° where the PA signal was weakest. Figure 7(Bii) shows the data collected at an angle of 48° where the PA signal was strongest. The difference between the amplitudes of the PA signals is probably related to: (i) the light illumination path at each angle, which depends on the scattering map of the phantom, (ii) the mechanical property of the tissue, i.e., in which directions the thermoelastically generated pressure waves have the maximum amplitude, and (iii) the illuminated area: at greater angles, a larger area of the target phantom is illuminated, hence more point sources are excited and collectively a larger pressure wave is generated. The shape of the two induced PA signals is consistent while the PA signal amplitude is higher at one angle than another. This demonstrates the importance of optimizing illumination in PA imaging.


Fig. 7. Impact of illumination angle. (A) Experimental setup of illumination angle investigation. (B) PA signal profile from 64th element of L7-4 probe taken from a 2 mm carbon lead phantom with different illumination angles. (i) 40 degrees, (ii) 48 degrees.


4.3. Improving reading accuracy by upsampling

The Vantage system automatically samples signals at a rate 4 times the central frequency of the transducer being used. This is sufficient to meet the Nyquist limit; however, it may be necessary to analyze PA signals at a higher sampling rate. This requires modifications to the script as well as the use of the built-in filterTool utility to adjust the receive bandpass filter so that the system does not mistake the higher-sampled signals for high frequency noise. First, the Resource.RcvBuffer(1).rowsPerFrame parameter is multiplied by the same factor as the sampling rate, to allow the data buffer to hold the increased number of samples. Similarly, the Receive.decimSampleRate parameter is multiplied by the same factor. Finally, the Receive.inputFilter parameter is modified by entering filterTool into the MATLAB command line. This brings up a GUI which outputs the Receive.inputFilter value based on the user input parameters. Inside the GUI, sampleMode is set to custom with decimSampleRate set to the rate of choice. The bandpass filter central frequency and relative bandwidth are then set to the standard values of the probe being used. The modifications to the script for using the L7-4 probe and changing the sampling rate from 20.8 MHz to 62.4 MHz are given in Table 5.


Table 5. Script modifications to change the sampling rate of L7-4 from 20.8 MHz to 62.4 MHz. The modifications are indicated in green.

We imaged a black tape of 18 mm width at 3 sampling rates: 20.8, 41.6, and 62.4 MHz. The tape was imaged in a water bath with the L7-4 probe at an illumination wavelength of 690 nm. The results for these sampling rates are given in Figs. 8(A), 8(B), and 8(C), respectively. To demonstrate the improvement in the reading of the signals with increasing sampling frequency, we calculated the difference between the top peaks and also between the bottom peaks in the signal. The values are indicated in each figure. One drawback of using a higher sampling frequency is a slower available frame rate; therefore, an alternative solution to increase the number of samples without increasing the sampling frequency is interpolation. In Fig. 8(D), we show the interpolated signal (3× upsampling) using the ‘spline’ algorithm [79]. We also provide the location (i.e., index) of the highest peak value normalized by the respective sample numbers (${x_{nor}} = {{x({{y_{\textrm{max}}}} )} / {\textrm{sample no}\textrm{.}}}$) and the ratio of peak values (both positive and negative).
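The 3× spline upsampling can be sketched as follows (Python, using SciPy's CubicSpline as an analogue of MATLAB's interp1(..., 'spline'); the test waveform is synthetic):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def upsample_spline(signal, factor):
    """Cubic-spline up-sampling of a uniformly sampled signal by an
    integer factor (analogous to MATLAB's interp1(..., 'spline'))."""
    n = len(signal)
    t_fine = np.linspace(0, n - 1, (n - 1) * factor + 1)
    return t_fine, CubicSpline(np.arange(n), signal)(t_fine)

# Coarse sinusoid whose true peak falls between samples (peak at t = 2.5)
t = np.arange(20)
coarse = np.sin(2 * np.pi * t / 10.0)
t3, fine = upsample_spline(coarse, 3)
```

Because the interpolated grid places samples between the original ones, the recovered peak amplitude is closer to the true peak than any single coarse sample, which is the benefit quantified in Fig. 8(D).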


Fig. 8. PA signal profile from the 64th element of the L7-4 probe taken from a black tape phantom of 18 mm width with varying sampling rates: (A) 20.8 MHz, (B) 41.6 MHz, (C) 62.4 MHz, and (D) interpolated signal using the ‘spline’ algorithm. Here, ${x_{nor}} = {{x({{y_{\textrm{max}}}} )} / {\textrm{sample no}\textrm{.}}}$ is the peak index normalized by the number of samples, and ${y^r}$ denotes the ratio of the corresponding peak values.


4.4. Simultaneous real-time visualization of US and PA Images

The default MATLAB script provided by Verasonics utilizes a single imaging window for both US and PA image display, with the user able to modify parameters to view one or the other. For real-time imaging, it is often preferable to view US and PA images side by side simultaneously. The following describes how to modify the original example script to simultaneously display US and PA images in real-time. With a single imaging window, all parameters for the Resource structure have arguments of 1. Adding a second window requires duplicating all of the Resource.DisplayWindow parameters with the argument 2. Specifically, the variables Title, pdelta, Position, ReferencePt, numFrames, AxesUnits, Colormap, and splitPalette are copied from the existing Resource.DisplayWindow(1) structure with the argument changed to 2. To differentiate between US in window 1 and PA in window 2, we set Resource.DisplayWindow(1).Colormap to gray(256) and Resource.DisplayWindow(2).Colormap to hot. The other change necessary to keep both windows real-time is modifying the Process(n).Parameters structure, where n is 1 for US and 2 for PA. In the original structure, mappingMethod is set to lowerHalf for US and upperHalf for PA, but both should now be set to full. Also, displayWindow was originally set to 1 in both processes and should now be 1 for US and 2 for PA. Modifications to the original script to simultaneously display US and PA images in real-time are given in Appendix D (the modifications are indicated in red).

4.5. US image quality improvement

Vantage offers beam steering as a method of improving US image quality. This technique modifies transmit delay times in the beam apodization to steer the beam in different directions into the imaging field, providing received echoes from different angles of reflection off the imaging targets. The received echoes from multiple steered angles are then averaged to form an improved image. The variable na in the MATLAB script can be set to any positive integer to determine the number of steered angles. The transmit waveforms are emitted at angles $\frac{{n\pi }}{{N + 1}}$, where N is equal to the variable na and n iterates from 1 to N; $\frac{\pi }{2}$ is normal to the transducer elements. Upon definition of na, these angles and the corresponding time delays in the apodization for each element are calculated, and the time between successive steered-angle transmissions is determined. Taking this time into account together with the pulse repetition rate of the laser, the number of steered angles that occur between laser pulses is determined. Furthermore, the number of steered angles and the number of stored frames in each buffer must be balanced to avoid slowdown of the system due to memory build-up, which can potentially freeze MATLAB if the system RAM overflows. To investigate the effect of the number of steered US transmit angles on the image quality, we imaged a phantom with 2 mm, 0.9 mm, and 0.2 mm diameter carbon leads in water with varying numbers of steered angles (i.e., 8, 16, 32, 64, 128), and quantified the contrast-to-noise ratio (CNR) of the resultant images following the method explained in [16]. Figure 9(A) demonstrates the experimental setup. Figures 9(B)–9(F) show the US images for 8, 16, 32, 64, and 128 steered angles, respectively. Figure 9(G) shows the improvement of CNR with a greater number of steered angles.
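The steering-angle formula and the CNR metric can be sketched numerically (Python for illustration; the CNR definition below, (mean of object pixels − mean of background pixels) / standard deviation of background pixels, is one common form and may differ in detail from the method in [16]):

```python
import math

def steered_angles(na):
    """Transmit angles n*pi/(N+1), n = 1..N, where N = na;
    pi/2 is normal to the array, so (angle - pi/2) is the steering offset."""
    return [n * math.pi / (na + 1) for n in range(1, na + 1)]

def cnr(obj_pixels, bg_pixels):
    """Contrast-to-noise ratio: (mean_obj - mean_bg) / std_bg."""
    mo = sum(obj_pixels) / len(obj_pixels)
    mb = sum(bg_pixels) / len(bg_pixels)
    var = sum((v - mb) ** 2 for v in bg_pixels) / len(bg_pixels)
    return (mo - mb) / math.sqrt(var)

angles = steered_angles(7)                   # 7 angles, symmetric about pi/2
offsets = [a - math.pi / 2 for a in angles]  # middle angle is normal incidence
```

Note that for odd na the middle transmit is exactly normal to the array, and the remaining angles are symmetric about it.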


Fig. 9. US image quality improvement. (A) Experimental setup showing an ultrasound probe and a 3-carbon lead phantom. (L1) 2 mm diameter carbon lead, (L2) 0.9 mm diameter carbon lead, (L3) 0.2 mm diameter carbon lead, (B-F) US images of the 3 carbon lead phantom for 8, 16, 32, 64, and 128 steered angles, respectively. Green dashed circle encloses object pixels and red dashed box encloses both object and background pixels, (G) CNR versus number of steered angles


4.6. Photoacoustic signal amplification

The PA pressure waves received by the linear array transducer are weak (i.e., their SNR is low) [80,81]. In cases where the imaging medium is highly scattering, only weak pressure waves reach the transducer and produce low-level voltage signals. These low-level signals may not be accurately digitized using the default Analog Front End (AFE) settings of the Vantage data acquisition (DAQ) system. In addition, attenuation in both the light illumination path and the acoustic propagation path further weakens the PA signal. The AFE gain can be increased by up to 12 dB by modifying the settings in the RcvProfile structure; however, in some cases additional gain may be required. To increase the amplitude of the received signal, we used a 128-channel amplifier (AMP128-18, Photosound, Houston, TX, USA). The amplifier has 40 dB gain with -6 dB cut-off frequencies of 25 kHz and 35 MHz. The amplifier was attached to the 260-pin Cannon connector on the Vantage system (see Fig. 10), and the US probe was then plugged into the front end of the amplifier. In the MATLAB script computeTrans.m, a custom transducer ID was added, and the details of the ultrasound transducer were copied to the custom ID so that the amplifier could be recognized. This is similar to the addition of any unrecognized transducer to the system: the transducer attributes (e.g., central frequency, bandwidth, number of elements, element width, element spacing, and element position) are determined independently and input into the computeTrans.m script under a new custom transducer case. To allow for identical wiring path lengths through the amplifier from the transducer end to the Vantage system end, the pins were not connected one to one, and their organization was corrected in the MATLAB script in the Trans.Connector variable.
Since the amplifier was programmed to multiply any signal going in or out, the transmit beam was disabled by setting TX.Apodization to zeros to avoid damaging the transducer elements during US transmit. This allowed the Vantage system to work only in receive mode with amplification. To demonstrate the utility of the amplifier, a two-wire phantom (each wire 600 µm in diameter) was imaged with and without the amplifier at varying concentrations of intralipid, i.e., 0%, 25%, 50% (Sigma Aldrich, USA). The experimental setup is shown in Figs. 10(A) and 10(B). The setup consists of a 50 Hz, 532 nm Nd:YAG laser (NL231-50-SH, EKSPLA, Vilnius, Lithuania) with 7 ns pulses, and an L7-4 US transducer. The laser illuminated the 2-wire phantom through a large fiber optic bundle (1 cm diameter, FO Lightguide, Edmund Optics, NJ, USA). We observed an SNR improvement in addition to the signal peak increase at all concentrations of intralipid [see Figs. 10(B), 10(C), and 10(D)]. The SNR improvement was probably due to the high pre-amplifier input impedance, which shifted the transducer noise spectrum to low frequencies that were then removed with a high-pass filter [82].
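When reasoning about the 40 dB amplifier and the reported SNR values, the dB/amplitude conversions involved can be sketched as (Python, illustrative only):

```python
import math

def apply_gain_db(amplitude, gain_db):
    """Output amplitude of an ideal amplifier with the given voltage gain in dB."""
    return amplitude * 10.0 ** (gain_db / 20.0)

def snr_db(signal_peak, noise_rms):
    """SNR in dB from peak signal amplitude and RMS noise (amplitude ratio)."""
    return 20.0 * math.log10(signal_peak / noise_rms)

gained = apply_gain_db(1.0, 40.0)  # 40 dB corresponds to a x100 voltage gain
```

An ideal amplifier raises signal and input-referred noise together, so the SNR gain comes from the impedance and filtering effects described above rather than from the dB gain itself.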


Fig. 10. PA signal amplification. (A) System setup to test the Photosound AMP128-18 amplifier: (i) display monitor, (ii) Nd:YAG laser head, (iii) fiber optic bundle, (iv) laser power supply, (v) laser chiller. (B) Wire imaging results: wire phantom suspended in 0% intralipid imaged (i) without and (iv) with amplifier, wire phantom suspended in 25% intralipid imaged (ii) without and (v) with amplifier, and wire phantom suspended in 50% intralipid imaged (iii) without and (vi) with amplifier. (C) Raw data from the 64th channel of the linear array when imaging a single wire in 50% intralipid concentration to demonstrate SNR improvement from (i) without amplifier (SNR: 3.25 dB) to (ii) with amplifier (SNR: 5.5 dB). (D) Bar chart demonstrating the signal amplitude increase corresponding to the phantom imaging in B with and without amplifier, at different concentrations of intralipid.


5. Fluence compensation

The initial PA pressure is proportional to the product of the local optical absorption coefficient and the optical fluence deposited at the same location. However, the optical fluence can vary significantly in living tissue, which necessitates optical fluence compensation for quantitative PA imaging [83]. If the optical parameters of the tissue, e.g., absorption coefficient, scattering coefficient, and anisotropy factor, are known, the optical fluence can be calculated by solving the photon diffusion equation using Monte Carlo (MC) or finite element methods [84,85]. With the known optical fluence distribution, a fluence-compensated absorption map can be calculated by dividing the reconstructed PA image by its corresponding fluence map.
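The compensation step itself reduces to an element-wise division. A minimal sketch (Python/NumPy, with an assumed epsilon floor to guard near-zero fluence) shows that, on a synthetic exponentially decaying fluence, dividing recovers a uniform absorber:

```python
import numpy as np

def compensate_fluence(pa_image, fluence_map, eps=1e-6):
    """Divide the reconstructed PA image by the (e.g., Monte Carlo simulated)
    fluence map; eps floors the fluence to avoid division by near-zero values."""
    f = np.maximum(np.asarray(fluence_map, float), eps)
    out = np.asarray(pa_image, float) / f
    return out / out.max()  # normalize for display

# Synthetic check: uniform absorber under exponentially decaying fluence
depth = np.linspace(0.0, 3.0, 100)[:, None] * np.ones((1, 50))
fluence = np.exp(-depth)   # fluence decays with depth
pa = 1.0 * fluence         # PA image = uniform absorption x fluence
rec = compensate_fluence(pa, fluence)
```

In practice, the fluence map comes from the MC simulation of the segmented optical model described below, not from a closed-form decay.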

We evaluated the fluence compensation algorithm on a phantom made of a gelatin, intralipid, and ink mixture [see Figs. 11(A) and 11(B)]. We used 0.5 mm diameter carbon leads in three layers of gelatin phantom with thicknesses of 5, 8, and 7 mm, respectively. We added intralipid (Sigma Aldrich, USA) as the scatterer and ink as the absorber. The concentrations of the intralipid solutions were 4%, 1%, and 6%, and those of ink were 0.1%, 0.4%, and 0.2%, respectively; these values were chosen to represent various biological tissues. Using a Mie calculator [86], the scattering coefficient, absorption coefficient, and anisotropy factor of the layers were calculated at 532 nm. The scattering coefficients were 15.4 cm-1, 3.1 cm-1, and 22.3 cm-1, respectively. The absorption coefficients were 0.3 cm-1, 0.9 cm-1, and 0.4 cm-1, respectively. The absorption coefficient of water, i.e., 0.11 cm-1, was added to these numbers. We used another layer of pure gelatin with a thickness of 5 mm. The scattering coefficients of both pure gelatin and US gel were taken as 0.05 cm-1, and the absorption coefficient of the US gel (3 mm layer) as 0.11 cm-1. The experimental protocol was as follows: (1) we imaged the phantom using the L7-4 probe and generated a US image [see Fig. 11(C)] and a PA image [see Fig. 11(D)]; (2) we then segmented the US image using the segmentation method given in [87]; (3) the abovementioned optical properties were assigned to each segment, creating the phantom optical model; (4) the fluence map [see Fig. 11(E)] of the phantom optical model was generated by an MC simulation (using the MCX software [88]); (5) finally, the PA image was divided by the MC simulated fluence map. The result was a fluence-compensated PA image [see Fig. 11(F)]. The PA intensity variation between the peaks (at the locations of the imaging targets) in the fluence-compensated PA image was less than 5%.
In this experiment we assumed that the field of view of each transducer element is cylindrical. An alternative way to compensate for the fluence decay is the finite element (FE) method, which is especially effective for complex heterogeneous media; the downside of FE methods is their high computational cost [89–92]. Another technique, which we previously proposed, is to utilize deep learning and machine learning algorithms to compensate for the decay [62,93].


Fig. 11. Fluence compensation algorithm validation. (A) Schematic of a phantom with 3 layers of gelatin mixed with intralipid/ink solution and 3 carbon lead imaging targets, (B) photograph of the phantom made in a cube box and magnified to show the layers, (C) US image of the phantom, (D) PA image before fluence compensation, (E) Monte Carlo simulation of light propagation in the phantom, (F) PA image after fluence compensation.


6. PA image reconstruction

After the PA signals are detected by the linear array, the channel data are passed to an image reconstruction algorithm, which reconstructs a map of the optical absorption distribution, i.e., the PA image [94–98]. Several image reconstruction algorithms have been studied for linear-array PA imaging [99–102]. A block diagram of the reconstruction methods studied here is depicted in Fig. 12.


Fig. 12. Mathematical description and flow diagram of delay and sum (DAS), delay multiply and sum (DMAS), double stage DMAS (DS-DMAS), and minimum variance (MV) based reconstruction algorithms.


The most basic and simple-to-implement algorithm is delay-and-sum (DAS). DAS follows a dynamic focusing protocol, in which the focus is adjusted for each pixel of the image; the detected signals are delayed proportionally to the distance between the focal point and the position of each element in the imaging array. Finally, the delayed signals are summed, and an image is formed. As a result of this summation, on-axis signals add constructively while off-axis signals are suppressed. Since DAS is a blind algorithm that simply adds up the samples, the final image suffers from sidelobes and artifacts [103]. In the remainder of this section, we use two signal quality metrics to compare the performance of different image reconstruction algorithms: (i) resolution, defined as the full width at half maximum (FWHM) of the lateral profile of the image of a point source, and (ii) sidelobe level, defined as the power of the first lobe next to the main lobe. Although DAS is simple to implement and fast, it treats all the detected signals the same way, which results in low-resolution images with high sidelobes, mainly due to the residual constructive overlap of off-axis signals [103]. Delay-multiply-and-sum (DMAS) can generate images with finer resolution and lower sidelobes than DAS [104], as it uses the correlation of the extracted samples, doubles the central frequency, and synthetically increases the detection aperture [104]. The double-stage DMAS (DS-DMAS) algorithm offers finer resolution and lower sidelobes than DMAS [101,103,105]; DS-DMAS uses two stages of the correlation process to suppress off-axis signals. Minimum variance (MV) significantly improves the image resolution compared to DAS, DMAS, and DS-DMAS; however, the produced images have high sidelobes. This problem has been addressed in MV-DMAS [99,106].
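To make the delay computation concrete, here is a minimal Python sketch of a one-way (photoacoustic) DAS beamformer for a linear array; the array geometry, sampling rate, and ideal point-absorber data below are illustrative assumptions, not the Vantage implementation:

```python
import numpy as np

def das_pa(channel_data, elem_x, fs, c, pix_x, pix_z):
    """Delay-and-sum PA beamformer for a linear array.

    PA delays are one-way (pixel -> element), unlike pulse-echo US.
    channel_data : (n_samples, n_elements) RF data, t = 0 at the laser pulse
    elem_x       : element x positions in meters (array lies along z = 0)
    pix_x, pix_z : 1-D image grids in meters
    """
    n_samples, n_elem = channel_data.shape
    elem_idx = np.arange(n_elem)
    img = np.zeros((pix_z.size, pix_x.size))
    for iz, z in enumerate(pix_z):
        for ix, x in enumerate(pix_x):
            dist = np.hypot(elem_x - x, z)            # one-way path lengths
            idx = np.rint(dist / c * fs).astype(int)  # delays in samples
            valid = idx < n_samples
            img[iz, ix] = channel_data[idx[valid], elem_idx[valid]].sum()
    return img

# Toy data: ideal point absorber at (x, z) = (0, 10 mm)
c, fs = 1500.0, 20e6
elem_x = np.linspace(-5e-3, 5e-3, 16)
n_samples = 256
channel_data = np.zeros((n_samples, elem_x.size))
arrival = np.rint(np.hypot(elem_x, 10e-3) / c * fs).astype(int)
channel_data[arrival, np.arange(elem_x.size)] = 1.0

pix_x = np.linspace(-2e-3, 2e-3, 5)
pix_z = np.linspace(8e-3, 12e-3, 9)
img = das_pa(channel_data, elem_x, fs, c, pix_x, pix_z)
```

At the true source pixel all 16 delayed samples coincide and the sum is maximal; elsewhere the coincidences fall off, which is the on-axis/off-axis behavior described above.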
The eigenspace version of MV-DMAS, i.e., EIMV-DMAS, provides a resolution similar to that of MV-DMAS with lower sidelobes; these methods reduce the contribution of sidelobes to the estimation of the weighting factors by retaining only the eigenvectors corresponding to the largest eigenvalues of the estimated covariance matrix [100]. Methods based on the MV algorithm are sensitive to the quality of the received data; this limits the application of MV-based algorithms for PA image reconstruction, because PA signals are usually weak. Among the image reconstruction algorithms described above (see Table 6), DS-DMAS is the most suitable for linear-array based PA image reconstruction, because it reconstructs images with fine resolution and low sidelobes and has low computational complexity. Although some numerical/experimental comparisons between the abovementioned methods have been reported in the literature, a more rigorous study is required for a fair comparison; see the issues that arose in the comparison of these methods in [107].
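The pairwise-correlation step that distinguishes DMAS from DAS can be sketched per pixel as follows (Python, illustrative; the signed square root keeps the product terms on the same dimensional scale as the input samples, per the commonly used DMAS formulation, and `dmas_pixel` is a hypothetical helper, not Vantage code):

```python
import numpy as np

def dmas_pixel(delayed):
    """DMAS output for one pixel from already-delayed element samples.

    Computes sum over i < j of sign(s_i * s_j) * sqrt(|s_i * s_j|) using
    the identity sum_{i<j} s_i s_j = ((sum s)^2 - sum s^2) / 2 applied to
    the signed square roots of the samples.
    """
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    return 0.5 * (s.sum() ** 2 - (s ** 2).sum())

coherent = dmas_pixel(np.ones(8))                    # on-axis: samples agree
incoherent = dmas_pixel(np.array([1.0, -1.0] * 4))   # off-axis: signs alternate
```

For eight coherent unit samples the output is 28, while the alternating-sign case collapses to -4; after band-pass filtering and envelope detection, this strong coherent gain relative to incoherent sums is what lowers the sidelobes compared with DAS.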


Table 6. Performance comparison between image reconstruction algorithms.

Four wire phantoms were prepared for this study [Fig. 13(A)]. The transducers were held securely by a clamp attached to an x-y translation stage. The phantom container was fixed to the optical table and filled with distilled water. Each phantom was imaged 50 times. Figures 13(B)–13(E) show images of the four wire phantoms taken with the L7-4 probe, where (i) is the US image, (ii) is the PA image reconstructed in the Vantage system, and (iii) is the PA image reconstructed with DS-DMAS. Figures 13(F)–13(I) show the corresponding images taken with the L22-14v probe.


Fig. 13. Comparison of the performance of DS-DMAS to Vantage PA reconstruction method. (A) Wire phantoms for resolution study, (i) one-wire, (ii) two-wire, (iii) two-wire cross, and (iv) three-wire. (B-I) US and PA images produced by L7-4 and L22-14v probes where the image reconstruction is: (i) Vantage default US, (ii) Vantage default PA, and (iii) photoacoustic DS-DMAS. L7-4 probe: (B) single wire cross section, (C) 3-wire cross section, (D) 2-wire cross section, and (E) 2-wire cross section. L22-14v probe: (F) single wire cross section, (G) 2-wire cross section, (H) 2-wire cross section, and (I) 3-wire cross section. Images of the targets are shown in white dotted circles or straight lines.


7. Conclusion

Photoacoustic imaging complements the already established US imaging technique and may significantly increase its scope of application in diagnostic imaging and therapeutic monitoring. By combining PAI with commercial medical US systems, its development can be accelerated by taking advantage of existing US image reconstruction and processing. Among the different configurations, linear-array PAI is becoming popular, mainly because linear-array US transducers are easy to manufacture, and hence their production cost is lower than that of custom-made curved or ring arrays [110,111]. Moreover, these transducers are commonly used in clinical applications, implying that a PAI system built around them will have high clinical translatability.

We discussed technical considerations for US/PA imaging implementation using the Vantage research system. With the information presented in the body of this technical note and the four appendices, we described most of the experimental considerations one should know when working with the Vantage system for PAI experiments. Although the Verasonics Vantage system has many capabilities and advantages, it has some limitations. The transducer connector limits which transducers can be connected directly to the system; Verasonics offers connector adapters to alleviate this problem, but they can make switching between transducers cumbersome. Another limitation is that the signal pre-amplification occurs far (∼1-2 m) from the probe, which increases the noise level for weak PA signals. Further, the maximum sampling rate of the system is 62.5 MHz, which may limit the use of high-frequency transducers.

Appendix A. Absorption spectrum of black tape and carbon pencil lead

Measuring the absorption spectrum of the imaging target, i.e., the absorbed optical energy as a function of wavelength, is essential for the analysis of results in PA experiments. Here we explain how to extract the absorption spectra of two example imaging targets, vinyl black tape and carbon pencil lead. The internal energy readings are not proportional to the energy deposited on the sample because of the wavelength-dependent attenuation of the optical fiber. Using the PhocusMobil (Opotek, CA, USA) laser, which has a built-in energy meter, we first obtained the energy ratio by using a separate energy meter (QE25LP-S-MB-QED-D0, Gentec-EO, Quebec City, Canada) to record the energy after the illumination fiber and dividing it by the internal energy reading of the PhocusMobil. This ratio relates the internal Opotek energy reading to the energy deposited on the sample. We imaged a 2 mm carbon lead phantom and a black vinyl tape phantom, and simultaneously recorded the internal laser energies. The PA data were compensated by dividing the raw PA signals by the product of the energies recorded with each phantom and the energy ratio. The absorption spectra of the black tape and carbon pencil lead after a fourth-order polynomial fit are shown in Figs. 14(A) and 14(B), respectively.
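The compensation arithmetic can be sketched in a few lines of Python (all numbers are illustrative, not our measured data; a flat-spectrum absorber and a constant fiber transmission are assumed so that the compensated spectrum should come out constant):

```python
import numpy as np

# Hypothetical per-wavelength readings (arbitrary but plausible units)
wavelengths = np.linspace(690.0, 950.0, 14)        # nm
internal_energy = np.linspace(20.0, 12.0, 14)      # internal meter reading, mJ
delivered_energy = 0.6 * internal_energy           # external meter after fiber
energy_ratio = delivered_energy / internal_energy  # fiber transmission factor

# Raw PA peak amplitudes; a flat absorber tracks the delivered energy
pa_amplitude = 5.0 * delivered_energy

# Compensation as described: divide by (internal reading x energy ratio)
compensated = pa_amplitude / (internal_energy * energy_ratio)

# Fourth-order polynomial fit, as in Fig. 14 (centered for conditioning)
wl0 = wavelengths - wavelengths.mean()
spectrum_fit = np.polyval(np.polyfit(wl0, compensated, 4), wl0)
```

With real data `energy_ratio` varies with wavelength, which is exactly why it must be measured per wavelength before the division.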


Fig. 14. Measurement of absorption spectrum of two commonly used imaging targets: (A) black tape, and (B) carbon pencil lead.


Appendix B. MATLAB codes for data acquisition, processing, image reconstruction

MATLAB code for the Vantage system for real-time data acquisition, data processing, image reconstruction, and display in order of variable appearance in the script is provided as supplementary material (S1) as follows:

boe-12-2-1050-i004

boe-12-2-1050-i005

boe-12-2-1050-i006

boe-12-2-1050-i007

boe-12-2-1050-i008

boe-12-2-1050-i009

boe-12-2-1050-i010

boe-12-2-1050-i011

Explanation of the code is below:

Start and end depths of imaging are set for US and PA imaging with P(1).startDepth and P(1).endDepth, and P(2).startDepth and P(2).endDepth, respectively. na sets the number of steered US angles transmitted for a single US frame; the default number is 7. oneway sets whether the system runs in transmission mode: 0 for receive-only mode, 1 for transmission on. RunAcq communicates receive and transmit commands to the ultrasound hardware. flash2Qdelay is the time between the trigger input and the start of acquisition in microseconds, and must equal the time delay between the flash lamp output and the Q-switch output of the laser. We used an oscilloscope to find the exact delay time and assigned it to the flash2Qdelay variable (see Table 3 for the values we used for our system). PA_PRF is set to the pulse repetition rate of the laser. The next section of the code sets up system parameters, such as data buffers and the transducer array specifications. For a Vantage 128 system and a 128-element linear array transducer, Resource.Parameters.numTransmit and Resource.Parameters.numRcvChannels are set to 128. For the two linear array transducers we used, Trans.name is set to either ‘L7-4’ or ‘L22-14v’. ComputeTrans(Trans) populates all the attributes of the specified transducer. PData defines the pixel grid to be manipulated by the image reconstruction software: PData(n).PDelta defines the spacing between pixels in all dimensions, PData(n).Size defines the rows, columns, and sections of the data, and PData(n).Origin defines the x,y,z coordinates of the left corner of the reconstructed image, where n is 1 for the US image and 2 for the PA image. RcvBuffer, InterBuffer, and ImageBuffer store the US and PA raw data and reconstructed images. Transmit waveform (TW), transmit (TX), and transmit power controller (TPC) are transmit objects.
The TW structure array defines the specification of a transmit waveform (type, frequency, duty cycle, duration, and polarity), where TW(1) is for US and TW(2) is for PA. The TX structure array defines the beam characteristics of each transmit action, including which transmitters are active in the aperture (apodization) and the delay time before transmit for each active transmitter. With 7 US transmits, TX(1:7) is defined for the US transmit events. For PA, only one TX structure is needed, so we define TX(8) for this event; in TX(8), all transmitters are turned off for the receive-only beamformer. TPC(1) for US and TPC(2) for PA are defined, where TPC sets the transmit power level for each specific transmit event. Receive objects are defined next and populate all the characteristics of the receive phase of an acquisition event. The transmit and receive periods both start with the data acquisition. Next, the time-gain-control (TGC) object defines the time-gain-compensation curve for the receive portion of the acquisition event. To define a TGC waveform, the user specifies the TGC.CntrlPts array and TGC.rangeMax; TGC.Waveform is then synthesized and applied to the received data. The next section describes the reconstruction protocol. The Recon structure provides the general attributes of the reconstruction, including the source and destination buffers to use. Recon.senscutoff is a value from 0.0 to 1.0 that sets the sensitivity threshold below which an element is excluded from the reconstruction summation for a given pixel. Recon.pdatanum(n) specifies the number of the PData structure that defines the pixel locations of the reconstructed data, where n is 1 for the US image and 2 for the PA image. Recon.RcvBufFrame is an optional attribute that, when provided, overrides the frame number specified by the ReconInfo structures.
Setting Recon.RcvBufFrame to -1 allows the last acquisition frame transferred into the RcvBuffer to be used in the reconstruction and displayed for real-time imaging. Recon.IntBufDest and Recon.ImgBufDest specify the destination buffer and frame that receive the reconstructed output, respectively. Recon.RINums is a row vector that specifies the ReconInfo structure indices associated with the most recent reconstruction. For each Recon, there is an associated set of ReconInfo objects that contain information on how to perform the reconstruction, including which data in the data buffer to reconstruct and where within PData to place the result. Further, ReconInfo selects the type of reconstruction — replace, add, or multiply intensity — where each successive reconstructed frame replaces the previous one, is added to it, or is multiplied by it, respectively. Replace intensity is chosen for our purpose. Process objects describe the type of processing to apply to the acquired data. After defining the sequence control objects, event objects, and some graphical user interface (GUI) controls, the script is complete for acquiring and displaying US and PA data. Once the GUI is closed, the acquired RF data can be accessed from the variable RcvData. This variable is in cell format and can be converted to matrix format using the function ‘cell2mat’. The data are stored as a three-dimensional int16 array (x, y, z), where x is the total number of samples, y is the number of elements, and z is the number of frames.
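As a rough Python analogue of this final export step (the Vantage scripts themselves are MATLAB; the buffer sizes below are hypothetical), the cell-to-matrix conversion amounts to stacking the per-frame channel-data buffers along a third axis:

```python
import numpy as np

# Stand-in for the RcvData cell array: a list of per-frame
# (samples x elements) int16 buffers; sizes are illustrative only.
n_samples, n_elements, n_frames = 1024, 128, 3
frames = [np.zeros((n_samples, n_elements), dtype=np.int16)
          for _ in range(n_frames)]

# Analogue of MATLAB's cell2mat step: stack frames into one 3-D array
# with shape (samples, elements, frames), matching the layout above.
rf_data = np.stack(frames, axis=2)

last_frame = rf_data[:, :, -1]   # (samples x elements) channel data
single_trace = last_frame[:, 0]  # RF trace of the first element
```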

Appendix C. Optical absorption of acoustic coupling medium (water)

In a PA imaging experiment, there are ultrasonic transducers and an acoustic coupling layer between the transducers and the imaging target [112]. The ultrasound waves generated at the imaging target are least attenuated when received by the ultrasound sensor through a coupling agent that has minimal acoustic impedance mismatch with the imaging target; the coupling agent helps minimize degradation and signal loss [113]. Acoustic couplants can be characterized as liquid, gel, semi-dry, and dry; liquids and gels generally have lower acoustic impedance than dry couplant materials. More details about different acoustic couplants can be found in the literature. Here we describe the most widely used acoustic couplant, water. Water has low acoustic attenuation and impedance (1.5 MPa·s/m), which makes it suitable as an acoustic coupling material. In terms of optical properties, water absorbs over a wide range of the electromagnetic spectrum, with rotational transitions and intermolecular vibrations responsible for absorption in the microwave (≈ 1 mm - 10 cm wavelength) and far-infrared (≈ 10 µm - 1 mm) regions, intramolecular vibrational transitions responsible for absorption in the infrared (≈ 200 nm - 10 µm), and electronic transitions responsible for absorption in the ultraviolet region (< 200 nm) [114–116]. In regular water (H2O), the first large absorption band occurs at around 980 nm [114,117,118], followed by another band at ∼1450 nm. The absorption spectrum of heavy water (D2O) differs from that of H2O, mainly because of the heavier deuterium nucleus; absorption peaks occur at around 1000 nm, 1330 nm, and 1600 nm [119,120]. The spectral features of water absorption also depend on temperature [114,119,121]; as the temperature decreases, the fraction of hydrogen-bonded water molecules increases, causing the absorption peaks to reduce in intensity, broaden in bandwidth, and shift to lower energy [117].
The optical absorption spectrum of regular water and heavy water at various temperatures are shown in Fig. 15.


Fig. 15. Optical absorption spectrum of regular water (H2O) and heavy water (D2O) at various temperatures in NIR region. Data for this figure has been extracted from [114,119] and reproduced.


Appendix D. MATLAB code for simultaneous display of US and PA image

The modifications to the original script to simultaneously display US and PA images in real time are described in Section 4.4. The corresponding MATLAB code is available as supplementary material (S2) as follows:

boe-12-2-1050-i012

Funding

National Institute of Biomedical Imaging and Bioengineering (R01EB027769‐01, R01EB028661‐01).

Disclosures

The authors declare no conflicts of interest related to this paper.

References

1. J. Zhou, Y. Yao, and L. V. Wang, “Tutorial on photoacoustic tomography,” J. Biomed. Opt. 21(6), 061007 (2016). [CrossRef]  

2. B. Pan, D. Kim, L. V. Wang, and G. M. Lanza, “A brief account of nanoparticle contrast agents for photoacoustic imaging,” Wiley Interdiscip. Rev.: Nanomed. Nanobiotechnol. 5(6), 517–543 (2013). [CrossRef]  

3. L. J. Yao and V. Wang, “Sensitivity of photoacoustic microscopy,” Photoacoustics 2(2), 87–101 (2014). [CrossRef]  

4. B. Herzog and F. Sengün, “Scattering particles increase absorbance of dyes–a model study with relevance for sunscreens,” Photochem. Photobiol. Sci. 14(11), 2054–2063 (2015). [CrossRef]  

5. V. Dogra, B. Chinni, S. Singh, H. Schmitthenner, N. Rao, J. J. Krolewski, and K. L. Nastiuk, “Photoacoustic imaging with an acoustic lens detects prostate cancer cells labeled with PSMA-targeting near-infrared dye-conjugates,” J. Biomed. Opt. 21(6), 066019 (2016). [CrossRef]  

6. J. Gong and S. Krishnan, “Mathematical modeling of dye-sensitized solar cells,” in Dye-Sensitized Solar Cells (Elsevier, 2019), pp. 51–81.

7. A. Fatima, K. Kratkiewicz, R. Manwar, M. Zafar, R. Zhang, B. Huang, N. Dadashzadesh, J. Xia, and M. Avanaki, “Review of Cost Reduction Methods in Photoacoustic Computed Tomography,” Photoacoustics 15, 100137 (2019). [CrossRef]  

8. C. Kim, C. Favazza, and L. V. Wang, “In vivo photoacoustic tomography of chemicals: high-resolution functional and molecular optical imaging at new depths,” Chem. Rev. 110(5), 2756–2782 (2010). [CrossRef]  

9. M. Zafar, K. Kratkiewicz, R. Manwar, and M. Avanaki, “Low-cost fast photoacoustic computed tomography: phantom study,” in Proceedings of Photons Plus Ultrasound: Imaging and Sensing (2019), p. 108785V.

10. Z. Turani, E. Fatemizadeh, T. Blumetti, S. Daveluy, A. F. Moraes, W. Chen, D. Mehregan, P. E. Andersen, and M. Nasiriavanaki, “Optical radiomic signatures derived from optical coherence tomography images improve identification of melanoma,” Cancer Res. 79(8), 2021–2030 (2019). [CrossRef]  

11. K. Avanaki and P. Andersen, “OCT radiomic features for differentiation of early malignant melanoma from benign nevus,” Google Patents (2020).

12. E. Jalilian, Q. Xu, L. Horton, A. Fotouhi, S. Reddy, R. Manwar, S. Daveluy, D. Mehregan, J. Gelovani, and K. Avanaki, “Contrast-enhanced optical coherence tomography for melanoma detection: An in vitro study,” J. Biophotonics 13(5), e201960097 (2020). [CrossRef]  

13. Q. Xu, E. Jalilian, J. W. Fakhoury, R. Manwar, B. Michniak-Kohn, K. B. Elkin, and K. Avanaki, “Monitoring the topical delivery of ultrasmall gold nanoparticles using optical coherence tomography,” Skin Res Technol 26(2), 263–268 (2020). [CrossRef]  

14. M. Hessler, E. Jalilian, Q. Xu, S. Reddy, L. Horton, K. Elkin, R. Manwar, M. Tsoukas, D. Mehregan, and K. Avanaki, “Melanoma biomarkers and their potential application for in vivo diagnostic imaging modalities,” Int. J. Mol. Sci. 21(24), 9583 (2020). [CrossRef]  

15. L. C. Li and V. Wang, “Photoacoustic tomography and sensing in biomedicine,” Phys. Med. Biol. 54(19), R59–R97 (2009). [CrossRef]  

16. K. Kratkiewicz, R. Manwar, M. Zafar, S. Mohsen Ranjbaran, M. Mozaffarzadeh, N. de Jong, K. Ji, and K. Avanaki, “Development of a Stationary 3D photoacoustic imaging system using sparse single-element transducers: phantom study,” Appl. Sci. 9(21), 4505 (2019). [CrossRef]  

17. L. V. Wang, “Tutorial on photoacoustic microscopy and computed tomography,” IEEE J. Sel. Top. Quantum Electron. 14(1), 171–179 (2008). [CrossRef]  

18. A. Hariri, A. Fatima, N. Mohammadian, S. Mahmoodkalayeh, M. A. Ansari, N. Bely, and M. R. Avanaki, “Development of low-cost photoacoustic imaging systems using very low-energy pulsed laser diodes,” J. Biomed. Opt. 22(7), 075001 (2017). [CrossRef]  

19. R. A. Kruger, C. M. Kuzmiak, R. B. Lam, D. R. Reinecke, S. P. Del Rio, and D. Steed, “Dedicated 3D photoacoustic breast imaging,” Med. Phys. 40(11), 113301 (2013). [CrossRef]  

20. L. G. Ku and V. Wang, “Deeply penetrating photoacoustic tomography in biological tissues enhanced with an optical contrast agent,” Opt. Lett. 30(5), 507–509 (2005). [CrossRef]  

21. U. Chitgupi, N. Nyayapathi, J. Kim, D. Wang, B. Sun, C. Li, K. Carter, W. C. Huang, C. Kim, and J. Xia, “Surfactant-stripped micelles for NIR-II photoacoustic imaging through 12 cm of breast tissue and whole human breasts,” Adv. Mater. 31(40), 1902279 (2019). [CrossRef]  

22. H. F. Zhang, K. Maslov, G. Stoica, and L. V. Wang, “Functional photoacoustic microscopy for high-resolution and noninvasive in vivo imaging,” Nat. Biotechnol. 24(7), 848–851 (2006). [CrossRef]  

23. S. Mallidi, G. P. Luke, and S. Emelianov, “Photoacoustic imaging in cancer detection, diagnosis, and treatment guidance,” Trends Biotechnol. 29(5), 213–221 (2011). [CrossRef]  

24. S. Yang, D. Xing, Q. Zhou, L. Xiang, and Y. Lao, “Functional imaging of cerebrovascular activities in small animals using high-resolution photoacoustic tomography,” Med. Phys. 34(8), 3294–3301 (2007). [CrossRef]  

25. J. Gamelin, A. Maurudis, A. Aguirre, F. Huang, P. Guo, L. V. Wang, and Q. Zhu, “A real-time photoacoustic tomography system for small animals,” Opt. Express 17(13), 10489–10498 (2009). [CrossRef]  

26. E. Zhang, J. Laufer, R. Pedley, and P. Beard, “In vivo high-resolution 3D photoacoustic imaging of superficial vascular anatomy,” Phys. Med. Biol. 54(4), 1035–1046 (2009). [CrossRef]  

27. H.-P. Brecht, R. Su, M. Fronheiser, S. A. Ermilov, A. Conjusteau, and A. A. Oraevsky, “Whole-body three-dimensional optoacoustic tomography system for small animals,” J. Biomed. Opt. 14(6), 064007 (2009). [CrossRef]  

28. C. Zhang, K. I. Maslov, J. Yao, and L. V. Wang, “In vivo photoacoustic microscopy with 7.6-µm axial resolution using a commercial 125-MHz ultrasonic transducer,” J. Biomed. Opt. 17(11), 1 (2012). [CrossRef]  

29. E. Z. Zhang, J. Laufer, and P. Beard, “Three-dimensional photoacoustic imaging of vascular anatomy in small animals using an optical detection system,” in Proceedings of Biomedical Optics (BiOS) (2007), pp. 64370S–64378.

30. X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnol. 21(7), 803–806 (2003). [CrossRef]  

31. C. Li, A. Aguirre, J. Gamelin, A. Maurudis, Q. Zhu, and L. V. Wang, “Real-time photoacoustic tomography of cortical hemodynamics in small animals,” J. Biomed. Opt. 15(1), 010509 (2010). [CrossRef]  

32. W. Lu, Q. Huang, G. Ku, X. Wen, M. Zhou, D. Guzatov, P. Brecht, R. Su, A. Oraevsky, and L. V. Wang, “Photoacoustic imaging of living mouse brain vasculature using hollow gold nanospheres,” Biomaterials 31(9), 2617–2626 (2010). [CrossRef]  

33. J. Laufer, P. Johnson, E. Zhang, B. Treeby, B. Cox, B. Pedley, and P. Beard, “In vivo preclinical photoacoustic imaging of tumor vasculature development and therapy,” J. Biomed. Opt. 17(5), 1 (2012). [CrossRef]  

34. J. S. Sethuraman, S. H. Amirian, R. H. Litovsky, S. W. Smalling, and Y. Emelianov, “Spectroscopic intravascular photoacoustic imaging to differentiate atherosclerotic plaques,” Opt. Express 16(5), 3362–3367 (2008). [CrossRef]  

35. J. B. Wang, A. L. Su, K. B. Karpiouk, R. V. Sokolov, S. W. Smalling, and Y. Emelianov, “Intravascular photoacoustic imaging,” IEEE J. Sel. Top. Quantum Electron. 16(3), 588–599 (2010). [CrossRef]  

36. S. S. Sethuraman, J. R. Aglyamov, R. H. Amirian, S. W. Smalling, and Y. Emelianov, “Intravascular photoacoustic imaging using an IVUS imaging catheter,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 54(5), 978–986 (2007). [CrossRef]  

37. S. Hu, K. Maslov, V. Tsytsarev, and L. V. Wang, “Functional transcranial brain imaging by optical-resolution photoacoustic microscopy,” J. Biomed. Opt. 14(4), 040503 (2009). [CrossRef]  

38. X. Wang, X. Xie, G. Ku, L. V. Wang, and G. Stoica, “Noninvasive imaging of hemoglobin concentration and oxygenation in the rat brain using high-resolution photoacoustic tomography,” J. Biomed. Opt. 11(2), 024015 (2006). [CrossRef]  

39. G. Ku, X. Wang, X. Xie, G. Stoica, and L. V. Wang, “Imaging of tumor angiogenesis in rat brains in vivo by photoacoustic tomography,” Appl. Opt. 44(5), 770–775 (2005). [CrossRef]  

40. M. Zafar, K. Kratkiewicz, R. Manwar, and M. Avanaki, “Development of low-cost fast photoacoustic computed tomography: system characterization and phantom study,” Appl. Sci. 9(3), 374 (2019). [CrossRef]  

41. K. Kratkiewicz, R. Manwar, A. Rajabi-Estarabadi, J. Fakhoury, J. Meiliute, S. Daveluy, D. Mehregan, and K. M. Avanaki, “Photoacoustic/ultrasound/optical coherence tomography evaluation of melanoma lesion and healthy skin in a Swine model,” Sensors 19(12), 2815 (2019). [CrossRef]  

42. M. Nasiriavanaki, J. Xia, H. Wan, A. Q. Bauer, J. P. Culver, and L. V. Wang, “High-resolution photoacoustic tomography of resting-state functional connectivity in the mouse brain,” Proc. Natl. Acad. Sci. 111(1), 21–26 (2014). [CrossRef]  

43. J. Yao, J. Xia, K. I. Maslov, M. Nasiriavanaki, V. Tsytsarev, A. V. Demchenko, and L. V. Wang, “Noninvasive photoacoustic computed tomography of mouse brain metabolism in vivo,” NeuroImage 64, 257–266 (2013). [CrossRef]  

44. M. Nasiriavanaki, J. Xia, H. Wan, A. Q. Bauer, J. P. Culver, and L. V. Wang, “Resting-state functional connectivity imaging of the mouse brain using photoacoustic tomography,” in Proceedings of Photons Plus Ultrasound: Imaging and Sensing (2014), p. 89432O.

45. A.-R. Mohammadi-Nejad, M. Mahmoudzadeh, M. S. Hassanpour, F. Wallois, O. Muzik, C. Papadelis, A. Hansen, H. Soltanian-Zadeh, J. Gelovani, and M. Nasiriavanaki, “Neonatal brain resting-state functional connectivity imaging modalities,” Photoacoustics 10, 1–19 (2018). [CrossRef]  

46. S. Mahmoodkalayeh, M. Zarei, M. A. Ansari, K. Kratkiewicz, M. Ranjbaran, R. Manwar, and K. Avanaki, “Improving vascular imaging with co-planar mutually guided photoacoustic and diffuse optical tomography: a simulation study,” Biomed. Opt. Express 11(8), 4333–4347 (2020). [CrossRef]  

47. E. Z. Zhang, B. Povazay, J. Laufer, A. Alex, B. Hofer, B. Pedley, C. Glittenberg, B. Treeby, B. Cox, and P. Beard, “Multimodal photoacoustic and optical coherence tomography scanner using an all optical detection scheme for 3D morphological skin imaging,” Biomed. Opt. Express 2(8), 2202–2215 (2011). [CrossRef]  

48. Z. Chen, E. Rank, K. M. Meiburger, C. Sinz, A. Hodul, E. Zhang, E. Hoover, M. Minneman, J. Ensher, P. C. Beard, H. Kittler, R. A. Leitgeb, W. Drexler, and M. Liu, “Non-invasive multimodal optical coherence and photoacoustic tomography for human skin imaging,” Sci. Rep. 7, 17975 (2017).

49. D. Xu, S. Yang, Y. Wang, Y. Gu, and D. Xing, “Noninvasive and high-resolving photoacoustic dermoscopy of human skin,” Biomed. Opt. Express 7(6), 2095–2102 (2016). [CrossRef]  

50. H. Olds, D. Mehregan, K. Kratiewicz, R. Manwar, Q. Xu, S. Reddy, S. Mahmoodkayayeh, and K. Avanaki, “Is photoacoustic imaging clinically safe: evaluation of possible thermal damage due to laser-tissue interaction,” (2020).

51. M. Li, J. Oh, X. Xie, G. Ku, W. Wang, C. Li, G. Lungu, G. Stoica, and L. V. Wang, “Simultaneous molecular and hypoxia imaging of brain tumors in vivo using spectroscopic photoacoustic tomography,” Proc. IEEE 96(3), 481–489 (2008). [CrossRef]  

52. L. Nie, X. Cai, K. Maslov, A. Garcia-Uribe, M. A. Anastasio, and L. V. Wang, “Photoacoustic tomography through a whole adult human skull with a photon recycler,” J. Biomed. Opt. 17(11), 110506 (2012). [CrossRef]  

53. S. A. Ermilov, T. Khamapirad, A. Conjusteau, M. H. Leonard, R. Lacewell, K. Mehta, T. Miller, and A. A. Oraevsky, “Laser optoacoustic imaging system for detection of breast cancer,” J. Biomed. Opt. 14(2), 024007 (2009). [CrossRef]  

54. J. A. Copland, M. Eghtedari, V. L. Popov, N. Kotov, N. Mamedova, M. Motamedi, and A. A. Oraevsky, “Bioconjugated gold nanoparticles as a molecular based contrast agent: implications for imaging of deep tumors using optoacoustic tomography,” Mol. Imaging Biol. 6(5), 341–349 (2004). [CrossRef]  

55. S. S. Manohar, J. E. Vaartjes, J. C. van Hespen, F. M. Klaase, M. van den Engh, T. W. Steenbergen, and G. Van Leeuwen, “Initial results of in vivo non-invasive cancer imaging in the human breast using near-infrared photoacoustics,” Opt. Express 15(19), 12277–12285 (2007). [CrossRef]  

56. A. A. Oraevsky, A. A. Karabutov, S. V. Solomatin, E. V. Savateeva, V. A. Andreev, Z. Gatalica, H. Singh, and R. D. Fleming, “Laser optoacoustic imaging of breast cancer in vivo,” in Proceedings of BiOS 2001, The International Symposium on Biomedical Optics (2001), pp. 6–15.

57. S. Hu, B. Rao, K. Maslov, and L. V. Wang, “Label-free photoacoustic ophthalmic angiography,” Opt. Lett. 35(1), 1–3 (2010). [CrossRef]  

58. S. Jiao, M. Jiang, J. Hu, A. Fawzi, Q. Zhou, K. K. Shung, C. A. Puliafito, and H. F. Zhang, “Photoacoustic ophthalmoscopy for in vivo retinal imaging,” Opt. Express 18(4), 3967–3972 (2010). [CrossRef]  

59. B. Wang, J. L. Su, J. Amirian, S. H. Litovsky, R. Smalling, and S. Emelianov, “Detection of lipid in atherosclerotic vessels using ultrasound-guided spectroscopic intravascular photoacoustic imaging,” Opt. Express 18(5), 4889–4897 (2010). [CrossRef]  

60. K. Jansen, M. Wu, A. F. W. van der Steen, and G. van Soest, “Lipid detection in atherosclerotic human coronaries by spectroscopic intravascular photoacoustic imaging,” Opt. Express 21(18), 21472–21484 (2013). [CrossRef]  

61. R. Manwar, K. Kratkiewicz, and K. Avanaki, “Overview of ultrasound detection technologies for photoacoustic imaging,” Micromachines 11(7), 692 (2020). [CrossRef]  

62. R. Manwar, X. Li, S. Mahmoodkalayeh, E. Asano, D. Zhu, and K. Avanaki, “Deep learning protocol for improved photoacoustic brain imaging,” J. Biophotonics 13(10), e202000212 (2020). [CrossRef]  

63. M. Li, C. Liu, X. Gong, R. Zheng, Y. Bai, M. Xing, X. Du, X. Liu, J. Zeng, and R. Lin, “Linear array-based real-time photoacoustic imaging system with a compact coaxial excitation handheld probe for noninvasive sentinel lymph node mapping,” Biomed. Opt. Express 9(4), 1408–1422 (2018). [CrossRef]  

64. K. Avanaki and J. G. Gelovani, “Ultrasound and multispectral photoacoustic systems and methods for brain and spinal cord imaging through acoustic windows,” Google Patents (2020).

65. Y. Deng, N. C. Rouze, M. L. Palmeri, and K. R. Nightingale, “Ultrasonic shear wave elasticity imaging sequencing and data processing using a Verasonics research scanner,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 64(1), 164–176 (2017). [CrossRef]  

66. V. Bindal, “Water-based couplants for general purpose use for ultrasonic NDT applications,” 2000.

67. H. Pennes, “Analysis of tissue and arterial blood temperature in the resting human forearm,” J. Appl. Physiol. 1(2), 93–122 (1948). [CrossRef]  

68. M. Breazeale and O. Leroy, Physical Acoustics: Fundamentals and Applications (Springer Science & Business Media, 2012).

69. R. Kumon, C. Deng, and X. Wang, “Spectrum analysis of photoacoustic imaging data from prostate adenocarcinoma tumors in a murine model,” In Proceedings of Photons Plus Ultrasound: Imaging and Sensing 2011; p. 78993Q.

70. S. Wang, C. Tao, Y. Yang, X. Wang, and X. Liu, “Theoretical and experimental study of spectral characteristics of the photoacoustic signal from stochastically distributed particles,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 62(7), 1245–1255 (2015). [CrossRef]  

71. F. L. Lizzi, M. Ostromogilsky, E. J. Feleppa, M. C. Rorke, and M. M. Yaremko, “Relationship of ultrasonic spectral parameters to features of tissue microstructure,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 34(3), 319–329 (1987). [CrossRef]  

72. Y. Cao, A. Kole, L. Lan, P. Wang, J. Hui, M. Sturek, and J.-X. Cheng, “Spectral analysis assisted photoacoustic imaging for lipid composition differentiation,” Photoacoustics 7, 12–19 (2017). [CrossRef]  

73. S. Granchi, E. Vannacci, L. Miris, L. Onofri, D. Zingoni, and E. Biagi, “Spectral analysis of ultrasonic and photo acoustic signals generated by a prototypal fiber microprobe for media characterization,” Sens. Imaging 21(1), 34 (2020). [CrossRef]  

74. T. Feng, J. E. Perosky, K. M. Kozloff, G. Xu, Q. Cheng, S. Du, J. Yuan, C. X. Deng, and X. Wang, “Characterization of bone microstructure using photoacoustic spectrum analysis,” Opt. Express 23(19), 25217–25224 (2015). [CrossRef]  

75. R. W. Cole, T. Jinadasa, and C. M. Brown, “Measuring and interpreting point spread functions to determine confocal microscope resolution and ensure quality control,” Nat. Protoc. 6(12), 1929–1941 (2011). [CrossRef]  

76. W. H. Blanton, Ultrasound Beams (East Tennessee State University, 2009).

77. J. Xia, Case Study: Photoacoustic Imaging of Hand Vasculature Using the Verasonics Vantage System (Verasonics Inc., 2019).

78. S. Mahmoodkalayeh, M. A. Ansari, and M. Nasiriavanaki, “A new illumination scheme for photoacoustic computed tomography,” In Proceedings of Photons Plus Ultrasound: Imaging and Sensing 2018; p. 104946T.

79. H. Akima, “A new method of interpolation and smooth curve fitting based on local procedures,” J. ACM 17(4), 589–602 (1970). [CrossRef]  

80. R. Manwar, A. Hariri, K. Kratkiewicz, S. Noei, and M. R. N. Avanaki, “Photoacoustic signal enhancement: towards utilization of low energy laser diodes in real-time photoacoustic imaging,” Sensors (2018).

81. M. Zafar, R. Manwar, K. Kratkiewicz, M. Hosseinzadeh, A. Hariri, S. Noei, and M. Avanaki, “Photoacoustic signal enhancement using a novel adaptive filtering algorithm,” In Proceedings of Photons Plus Ultrasound: Imaging and Sensing 2019; p. 108785S.

82. V. Ivanov, “Analog white paper,” K. Kratkiewicz, ed. (PhotoSound Technologies Inc., 2018); p. 20.

83. Z. Yuan and H. Jiang, “Quantitative photoacoustic tomography: Recovery of optical absorption coefficient maps of heterogeneous media,” Appl. Phys. Lett. 88(23), 231101 (2006). [CrossRef]  

84. S. R. Arridge, “Optical tomography in medical imaging,” Inverse Probl. 15(2), R41–R93 (1999). [CrossRef]  

85. K. D. Paulsen and H. Jiang, “Spatially varying optical property reconstruction using a finite element diffusion equation approximation,” Med. Phys. 22(6), 691–701 (1995). [CrossRef]  

86. S. Prahl, “Mie Scattering Calculator,” OMLC, 2018.

87. L. Zhao, M. Yang, Y. Jiang, and C. Li, “Optical fluence compensation for handheld photoacoustic probe: An in vivo human study case,” J. Innovative Opt. Health Sci. 10(04), 1740002 (2017). [CrossRef]  

88. Q. Fang and D. A. Boas, “Monte Carlo simulation of photon migration in 3D turbid media accelerated by graphics processing units,” Opt. Express 17(22), 20178–20190 (2009). [CrossRef]  

89. X. Zhou, N. Akhlaghi, K. A. Wear, B. S. Garra, T. J. Pfefer, and W. C. Vogt, “Evaluation of fluence correction algorithms in multispectral photoacoustic imaging,” Photoacoustics 19, 100181 (2020). [CrossRef]  

90. B. T. Cox, S. R. Arridge, K. P. Köstli, and P. C. Beard, “Two-dimensional quantitative photoacoustic image reconstruction of absorption distributions in scattering media by use of a simple iterative method,” Appl. Opt. 45(8), 1866–1875 (2006). [CrossRef]  

91. F. M. Brochu, J. Brunker, J. Joseph, M. R. Tomaszewski, S. Morscher, and S. E. Bohndiek, “Towards quantitative evaluation of tissue absorption coefficients using light fluence correction in optoacoustic tomography,” IEEE Trans. Med. Imaging 36(1), 322–331 (2017). [CrossRef]  

92. T. Tarvainen, A. Pulkkinen, B. T. Cox, J. P. Kaipio, and S. R. Arridge, “Image reconstruction in quantitative photoacoustic tomography using the radiative transfer equation and the diffusion approximation,” In Proceedings of Opto-Acoustic Methods and Applications, Munich, 2013; p. 880006.

93. “Deep learning improves contrast in low-fluence photoacoustic imaging,” Biomed. Opt. Express 11(6), 3360–3373 (2020).

94. P. Omidi, M. Diop, J. Carson, and M. Nasiriavanaki, “Improvement of resolution in full-view linear-array photoacoustic computed tomography using a novel adaptive weighting method,” In Proceedings of Photons Plus Ultrasound: Imaging and Sensing 2017; p. 100643H.

95. M. Mozaffarzadeh, A. Mahloojifar, M. Nasiriavanaki, and M. Orooji, “Eigenspace-based minimum variance adaptive beamformer combined with delay multiply and sum: experimental study,” In Proceedings of Photonics in Dermatology and Plastic Surgery 2018; p. 1046717.

96. P. Omidi, M. Zafar, M. Mozaffarzadeh, A. Hariri, X. Huang, M. Orooji, and M. Nasiriavanaki, “A novel dictionary-based image reconstruction for photoacoustic computed tomography,” Appl. Sci. 8(9), 1570 (2018). [CrossRef]  

97. M. H. Heidari, M. Mozaffarzadeh, R. Manwar, and M. Nasiriavanaki, “Effects of important parameters variations on computing eigenspace-based minimum variance weights for ultrasound tissue harmonic imaging,” In Proceedings of Photons Plus Ultrasound: Imaging and Sensing 2018; p. 104946R.

98. M. Mozaffarzadeh, A. Mahloojifar, M. Nasiriavanaki, and M. Orooji, “Model-based photoacoustic image reconstruction using compressed sensing and smoothed L0 norm,” In Proceedings of Photons Plus Ultrasound: Imaging and Sensing 2018; p. 104943Z.

99. M. Mozaffarzadeh, A. Mahloojifar, M. Orooji, K. Kratkiewicz, S. Adabi, and M. Nasiriavanaki, “Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm,” J. Biomed. Opt. 23(02), 1 (2018). [CrossRef]  

100. M. Mozaffarzadeh, A. Mahloojifar, V. Periyasamy, M. Pramanik, and M. Orooji, “Eigenspace-based minimum variance combined with delay multiply and sum beamformer: Application to linear-array photoacoustic imaging,” IEEE J. Sel. Top. Quantum Electron. 25(1), 1–8 (2019). [CrossRef]  

101. M. Mozaffarzadeh, A. Hariri, C. Moore, and J. V. Jokerst, “The double-stage delay-multiply-and-sum image reconstruction method improves imaging quality in a LED-based photoacoustic array scanner,” Photoacoustics 12, 22–29 (2018). [CrossRef]  

102. M. Mozaffarzadeh, V. Periyasamy, M. Pramanik, and B. Makkiabadi, “Efficient nonlinear beamformer based on P’th root of detected signals for linear-array photoacoustic tomography: application to sentinel lymph node imaging,” J. Biomed. Opt. 23(12), 1 (2018). [CrossRef]  

103. M. Mozaffarzadeh, A. Mahloojifar, M. Orooji, S. Adabi, and M. Nasiriavanaki, “Double-stage delay multiply and sum beamforming algorithm: Application to linear-array photoacoustic imaging,” IEEE Trans. Biomed. Eng. 65(1), 31–42 (2018). [CrossRef]  

104. G. Matrone, A. S. Savoia, G. Caliano, and G. Magenes, “The delay multiply and sum beamforming algorithm in ultrasound B-mode medical imaging,” IEEE Trans. Med. Imaging 34(4), 940–949 (2015). [CrossRef]  

105. M. Mozaffarzadeh, M. Sadeghi, A. Mahloojifar, and M. Orooji, “Double-stage delay multiply and sum beamforming algorithm applied to ultrasound medical imaging,” Ultrasound Med. Biol. 44(3), 677–686 (2018). [CrossRef]  

106. M. Mozaffarzadeh, A. Mahloojifar, and M. Orooji, “Medical photoacoustic beamforming using minimum variance-based delay multiply and sum,” In Proceedings of Digital Optical Technologies 2017; p. 1033522.

107. O. M. H. Rindal, A. Austeng, A. Fatemi, and A. Rodriguez-Molares, “The effect of dynamic range alterations in the estimation of contrast,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 66(7), 1198–1208 (2019). [CrossRef]  

108. S. Park, A. B. Karpiouk, S. R. Aglyamov, and S. Y. Emelianov, “Adaptive beamforming for photoacoustic imaging,” Opt. Lett. 33(12), 1291–1293 (2008). [CrossRef]  

109. J. Park, S. Jeon, J. Meng, L. Song, J. S. Lee, and C. Kim, “Delay-multiply-and-sum-based synthetic aperture focusing in photoacoustic microscopy,” J. Biomed. Opt. 21(3), 036010 (2016). [CrossRef]  

110. T. L. Szabo and P. A. Lewin, “Ultrasound transducer selection in clinical imaging practice,” J. Ultrasound Med. 32(4), 573–582 (2013). [CrossRef]  

111. B. A. Angelsen, H. Torp, S. Holm, K. Kristoffersen, and T. Whittingham, “Which transducer array is best?” Eur. J. Ultrasound 2(2), 151–164 (1995). [CrossRef]  

112. N. Meimani, N. Abani, J. Gelovani, and M. R. Avanaki, “A numerical analysis of a semi-dry coupling configuration in photoacoustic computed tomography for infant brain imaging,” Photoacoustics 7, 27–35 (2017). [CrossRef]  

113. J. Nyholt and G. N. Langlois, “Dry-coupled permanently installed ultrasonic sensor linear array,” Google Patents (2013).

114. J. A. Curcio and C. C. Petty, “The near infrared absorption spectrum of liquid water,” J. Opt. Soc. Am. 41(5), 302–304 (1951). [CrossRef]  

115. S. Matcher, M. Cope, and D. Delpy, “Use of the water absorption spectrum to quantify tissue chromophore concentration changes in near-infrared spectroscopy,” Phys. Med. Biol. 39(1), 177–196 (1994). [CrossRef]  

116. F. M. Sogandares and E. S. Fry, “Absorption spectrum (340-640 nm) of pure water. I. Photothermal measurements,” Appl. Opt. 36(33), 8699–8709 (1997). [CrossRef]  

117. S. Chung, A. Cerussi, S. Merritt, J. Ruth, and B. Tromberg, “Non-invasive tissue temperature measurements based on quantitative diffuse optical spectroscopy (DOS) of water,” Phys. Med. Biol. 55(13), 3753–3765 (2010). [CrossRef]  

118. M. F. Bakhsheshi and T.-Y. Lee, Non-invasive Monitoring of Brain Temperature by Near-infrared Spectroscopy (Taylor & Francis, 2015).

119. A. Tam and C. Patel, “Optical absorptions of light and heavy water by laser optoacoustic spectroscopy,” Appl. Opt. 18(19), 3348–3358 (1979). [CrossRef]  

120. S. A. Sullivan, “Experimental study of the absorption in distilled water, artificial sea water, and heavy water in the visible region of the spectrum,” J. Opt. Soc. Am. 53(8), 962–968 (1963). [CrossRef]  

121. S. Mahmoodkalayeh, H. Z. Jooya, A. Hariri, Y. Zhou, Q. Xu, M. A. Ansari, and M. R. N. Avanaki, “Low temperature-mediated enhancement of photoacoustic imaging depth,” Sci. Rep. 8, 4873 (2018).



Figures (15)

Fig. 2.
Fig. 2. Principle of PA signal generation, detection, and image reconstruction. Reprinted with permission from Ref. [7]. A short pulsed (in nanoseconds) laser light illuminates the absorber, leading to a transient temperature rise which results in a thermoelastic expansion of the absorber, and acoustic (or PA) wave generation. The signals generated from the waves received by an ultrasound probe are given to a reconstruction algorithm to form a PA image. Thermal and stress confinements must be met to produce PA waves.
Fig. 3.
Fig. 3. Vantage system architecture and data acquisition sequencing. (a) Vantage system architecture for US/PA sequencing and (b) timing diagram of US/PA sequencing in Vantage system. US: Ultrasound, PA: Photoacoustic, Recon.: Reconstruction. RcvBuffer is the predefined buffer where the US and PA data are stored, Acq: acquisition, TX: transmit, RX: receive, A/D: analog to digital, VSX: Verasonics script execution, GUI: graphical user interface, RcvBuffer: receive buffer, CPU: central processing unit.
Fig. 4.
Fig. 4. Photoacoustic signal in (A) time-domain, and (B) frequency-domain, obtained from a 2 mm diameter carbon lead phantom, imaged with L22-14v transducer at 690 nm wavelength. The red signal in A shows the overlaid US signal acquired from the same sample.
Fig. 5.
Fig. 5. US/PA resolution study. (A) Schematic of the experimental setup, including a hair phantom photograph captured by a 4× objective on a light microscope (SME-F8BH, Amscope, CA, USA). Resolution study when (B) L7-4 was used, (C) L22-14v was used. (i) US axial resolution versus depth, (ii) US lateral resolution versus depth, (iii) PA axial resolution versus depth, (iv) PA lateral resolution versus depth.
Fig. 6.
Fig. 6. Photoacoustic signal “jitter” comparison between two triggering methods when a 2 mm carbon lead was imaged. 64th element signals are plotted. (A) (i) Schematic of straightforward triggering method where laser triggers Vantage system, (ii) schematic of the function generator driven triggering method where function generator triggers laser flash lamp and Vantage system, which then triggers laser Q-switch, (B) (i) five sequential PA frames showing significant “jitter” between frames, (ii) five sequential PA frames showing nearly no “jitter” between frames, and (iii) averaged frames using methods described in A(i) and A(ii).
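The low-jitter triggering in Fig. 6 matters because frames are averaged to suppress noise; with well-aligned frames, averaging N acquisitions improves SNR by roughly √N. A minimal sketch of that averaging step is given below (the helper name `average_frames` is ours for illustration, not part of the Vantage software; frames are assumed already aligned):

```python
import numpy as np

def average_frames(frames):
    """Average a stack of per-frame RF traces to suppress shot-to-shot noise.

    frames: array-like of shape (n_frames, n_samples). Assuming negligible
    trigger jitter between frames, averaging N frames improves SNR by ~sqrt(N).
    """
    frames = np.asarray(frames, dtype=float)
    return frames.mean(axis=0)
```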
Fig. 7.
Fig. 7. Impact of illumination angle. (A) Experimental setup of illumination angle investigation. (B) PA signal profile from 64th element of L7-4 probe taken from a 2 mm carbon lead phantom with different illumination angles. (i) 40 degrees, (ii) 48 degrees.
Fig. 8.
Fig. 8. PA signal profile from 64th element of L7-4 probe taken from a black tape phantom of 18 mm width with varying sampling rates: (A) 20.8 MHz, (B) 41.6 MHz, (C) 62.4 MHz, and (D) interpolated signal using the ‘spline’ algorithm. Here, ${x_{nor}} = {{x({{y_{\textrm{max}}}} )} / {\textrm{sample no}\textrm{.}}}$ is the normalized location of the signal peak and ${y^r}_x$ is the ratio of $y$ values.
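Panel (D) of Fig. 8 upsamples the acquired trace with spline interpolation. A minimal sketch of this idea, using SciPy's cubic spline as a stand-in for MATLAB's `'spline'` option (the function name and rates are illustrative, not taken from the Verasonics scripts):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def upsample_signal(signal, fs_in, fs_out):
    """Resample a 1-D RF trace from fs_in to fs_out [Hz] via cubic spline.

    Builds a spline through the original samples and evaluates it on a
    denser time grid, as in Fig. 8(D).
    """
    signal = np.asarray(signal, dtype=float)
    t_in = np.arange(len(signal)) / fs_in          # original sample times
    t_out = np.arange(0.0, t_in[-1], 1.0 / fs_out)  # denser time grid
    return CubicSpline(t_in, signal)(t_out)
```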
Fig. 9.
Fig. 9. US image quality improvement. (A) Experimental setup showing an ultrasound probe and a 3-carbon lead phantom. (L1) 2 mm diameter carbon lead, (L2) 0.9 mm diameter carbon lead, (L3) 0.2 mm diameter carbon lead. (B-F) US images of the 3-carbon lead phantom for 8, 16, 32, 64, and 128 steered angles, respectively. The green dashed circle encloses object pixels and the red dashed box encloses both object and background pixels. (G) CNR versus number of steered angles.
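The CNR in Fig. 9(G) is computed from the object and background pixel sets marked in the panels. One common definition (which we assume here; variants exist in the literature) is the absolute difference of the region means divided by the background standard deviation:

```python
import numpy as np

def cnr(object_pixels, background_pixels):
    """Contrast-to-noise ratio: |mean(object) - mean(background)| / std(background).

    object_pixels: pixels inside the object ROI (green circle in Fig. 9).
    background_pixels: pixels in the background ROI (red box minus object).
    """
    obj = np.asarray(object_pixels, dtype=float)
    bg = np.asarray(background_pixels, dtype=float)
    return abs(obj.mean() - bg.mean()) / bg.std()
```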
Fig. 10.
Fig. 10. PA signal amplification. (A) System setup to test the Photosound AMP128-18 amplifier: (i) display monitor, (ii) Nd:YAG laser head, (iii) fiber optic bundle, (iv) laser power supply, (v) laser chiller. (B) Wire imaging results: wire phantom suspended in 0% intralipid imaged (i) without and (iv) with amplifier, wire phantom suspended in 25% intralipid imaged (ii) without and (v) with amplifier, and wire phantom suspended in 50% intralipid imaged (iii) without and (vi) with amplifier. (C) Raw data from the 64th channel of the linear array when imaging a single wire in 50% intralipid concentration, demonstrating SNR improvement from (i) 3.25 dB without amplifier to (ii) 5.5 dB with amplifier. (D) Bar chart demonstrating the signal amplitude increase corresponding to the phantom imaging in B, with and without amplifier, at different intralipid concentrations.
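The dB figures quoted in Fig. 10(C) follow the usual amplitude-ratio convention. A minimal sketch (assuming SNR is taken as peak signal amplitude over the standard deviation of a signal-free noise window; the helper name is ours):

```python
import numpy as np

def snr_db(rf_trace, noise_window):
    """SNR in dB: 20*log10(peak |signal| / std of a signal-free noise window)."""
    signal_peak = np.max(np.abs(np.asarray(rf_trace, dtype=float)))
    noise_std = np.std(np.asarray(noise_window, dtype=float))
    return 20.0 * np.log10(signal_peak / noise_std)
```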
Fig. 11.
Fig. 11. Fluence Compensation algorithm validation. (A) Schematic of a phantom with 3 layers of gelatin phantom mixed with intralipid/ink solution and 3 carbon lead imaging targets, (B) photograph of the phantom made in a cube box and magnified to show the layers, (C) US image of the phantom, (D) PA image before fluence compensation, (E) Monte Carlo simulation of light propagation in the phantom, (F) PA image after fluence compensation.
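The compensation step between Fig. 11(D) and 11(F) amounts to dividing the reconstructed PA image by the simulated fluence map from the Monte Carlo run in 11(E). A minimal sketch under that assumption (normalization and the `eps` regularizer are our illustrative choices, not the authors' exact implementation):

```python
import numpy as np

def compensate_fluence(pa_image, fluence_map, eps=1e-6):
    """Divide a PA image by a normalized fluence map so that deep absorbers
    are not penalized by optical attenuation. eps guards against division
    by near-zero fluence at depth."""
    fluence = np.asarray(fluence_map, dtype=float)
    fluence = fluence / fluence.max()  # normalize to [0, 1]
    return np.asarray(pa_image, dtype=float) / (fluence + eps)
```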
Fig. 12.
Fig. 12. Mathematical description and flow diagram of delay and sum (DAS), delay multiply and sum (DMAS), double stage DMAS (DS-DMAS), and minimum variance (MV) based reconstruction algorithms.
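As a baseline for the beamformers compared in Fig. 12, a minimal delay-and-sum (DAS) sketch for photoacoustic data on a linear array is given below. PA reception uses one-way delays (source to element); nearest-sample delays and the parameter names are our simplifications, not the Verasonics implementation:

```python
import numpy as np

def das_beamform(rf, fs, c, pitch, z_grid, x_grid):
    """Delay-and-sum PA reconstruction for a linear array (one-way delays).

    rf: (n_elements, n_samples) RF data; fs: sampling rate [Hz];
    c: speed of sound [m/s]; pitch: element spacing [m];
    z_grid, x_grid: pixel depths and lateral positions [m].
    Returns an image of shape (len(z_grid), len(x_grid)).
    """
    rf = np.asarray(rf, dtype=float)
    n_el, n_samp = rf.shape
    elem_x = (np.arange(n_el) - (n_el - 1) / 2) * pitch  # element positions
    image = np.zeros((len(z_grid), len(x_grid)))
    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            # one-way time of flight from pixel (x, z) to each element
            dist = np.sqrt((elem_x - x) ** 2 + z ** 2)
            idx = np.round(dist / c * fs).astype(int)  # nearest-sample delay
            valid = idx < n_samp
            image[iz, ix] = rf[np.arange(n_el)[valid], idx[valid]].sum()
    return image
```

DMAS and its variants replace the plain sum with pairwise multiplications of the delayed channels before summation, which is what drives the sidelobe reduction reported in Table 6.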
Fig. 13.
Fig. 13. Comparison of the performance of DS-DMAS to Vantage PA reconstruction method. (A) Wire phantoms for resolution study, (i) one-wire, (ii) two-wire, (iii) two-wire cross, and (iv) three-wire. (B-I) US and PA images produced by L7-4 and L22-14v probes where the image reconstruction is: (i) Vantage default US, (ii) Vantage default PA, and (iii) photoacoustic DS-DMAS. L7-4 probe: (B) single wire cross section, (C) 3-wire cross section, (D) 2-wire cross section, and (E) 2-wire cross section. L22-14v probe: (F) single wire cross section, (G) 2-wire cross section, (H) 2-wire cross section, and (I) 3-wire cross section. Images of the targets are shown in white dotted circles or straight lines.
Fig. 14.
Fig. 14. Measurement of absorption spectrum of two commonly used imaging targets: (A) black tape, and (B) carbon pencil lead.
Fig. 15.
Fig. 15. Optical absorption spectrum of regular water (H2O) and heavy water (D2O) at various temperatures in NIR region. Data for this figure has been extracted from [114,119] and reproduced.

Tables (6)

Table 1. High frequency Vantage 128 system specifications (verified by Verasonics’ technical support)

Table 2. Specifications of linear array transducers ATL Philips L7-4, and Verasonics L22-14v.

Table 3. Modifications to original L11-4 script by Verasonics, for L7-4 and L22-14v probes.

Table 4. List of immediately recognizable transducers for the Vantage system.

Table 5. Script modifications to change the sampling rate of L7-4 from 20.8 MHz to 62.4 MHz. The modifications are indicated in green.

Table 6. Performance comparison between image reconstruction algorithms.
