Optica Publishing Group

Skull acoustic aberration correction in photoacoustic microscopy using a vector space similarity model: a proof-of-concept simulation study

Open Access

Abstract

Skull bone presents a high acoustic impedance mismatch and a dispersive barrier to propagating acoustic waves. In transcranial brain imaging, the skull distorts the amplitude and phase information of the received waves at different frequencies. We study a novel algorithm based on a vector space similarity model for compensating the skull-induced distortions in transcranial photoacoustic microscopy. Tested on a simplified numerical skull phantom, the algorithm demonstrates a fully recovered vasculature with a recovery rate of 91.9%.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Corrections

15 September 2020: A typographical correction was made to the acknowledgments section.

1. Introduction

Photoacoustic imaging (PAI) is a promising recent technology that works based on acoustic detection of optical absorption from tissue chromophores, such as oxy-hemoglobin (HbO2) and deoxy-hemoglobin (Hb) [1–8]. Photoacoustic microscopy (PAM) is one of the implementations of PAI, with micrometer spatial resolution. One of the fast-emerging applications of PAM has been the study of the brain in rodents [4–6,9–16]. Transcranial photoacoustic microscopy (TsPAM), in particular, is of great importance for longitudinal studies. Despite the thin skull in rodents (i.e., 0.23 mm to 0.53 mm in mice [16,17] and 0.5 mm to 1 mm in rats [18]), TsPAM is challenging due to the use of high-frequency transducers: the photoacoustic (PA) pressure waves are usually distorted, experiencing attenuation, dispersion, and longitudinal-to-shear mode conversion [16,19–22]. Here, we use the terms distortion and aberration interchangeably.

The findings about the effect of the skull on acoustic pressure waves are as follows. Skull bone presents a high acoustic impedance mismatch and a dispersive barrier to propagating acoustic waves [23–25], distorting the amplitude and phase of the received acoustic waves [20,21]. Acoustic attenuation occurs due to absorption and scattering in the skull tissue and reduces the magnitude of the acoustic waves [19,26,27]. Acoustic dispersion, the frequency dependence of the speed of sound in the skull, distorts the phase of the acoustic wave [26]; the degree of attenuation and dispersion is set by the density, porosity, and thickness of the skull [28]. The frequency-dependent reduction of acoustic wave amplitude contributes to the broadening of the received acoustic signal at the transducer [29]. The significantly higher speed of sound in bone (~2900 m/s [16]) compared to brain soft tissue (~1500 m/s [30]) makes the acoustic waves travel faster through the skull and be detected earlier, leading to a different time shift for each frequency component and further broadening of the received signal [9,19–21]. Skull-originated reverberations occur due to the reflection of acoustic waves at the skull-tissue interface [9,20,21]. Finally, longitudinal-to-shear wave mode conversion occurs when a wave encounters an interface between materials of different acoustic impedances at a non-normal incidence angle [22].
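To make the earlier-arrival effect concrete, the following short Python sketch computes the arrival-time advance for a wave crossing a 1 mm layer at the approximate sound speeds quoted above; the 1 mm thickness is an illustrative choice, not a value from the paper.

```python
# Illustration of the skull-induced arrival-time advance described above.
# Sound speeds are the approximate values quoted in the text [16,30];
# the 1 mm thickness is an example, not a value from the paper.
c_skull = 2900.0   # m/s, approximate longitudinal speed in bone
c_soft = 1500.0    # m/s, approximate speed in brain soft tissue
h = 1.0e-3         # m, example skull thickness of 1 mm

# A wave covering thickness h arrives earlier through bone than through
# soft tissue by the difference of the two travel times.
dt = h / c_soft - h / c_skull
print(f"arrival advance: {dt * 1e9:.0f} ns")
```

Because the advance differs for each frequency component once dispersion is included, this simple constant-speed estimate already hints at why the received signal broadens.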

Several analytical and numerical compensation algorithms have been designed to correct skull-induced aberrations in transcranial ultrasound wave propagation. These methods can be broadly classified into ray-based [19,31] and wave-based [31–37]. Ray-based methods calculate the correcting phases through ray tracing and numerical simulation. In wave-based methods such as time reversal, the pressure waveform over time is recorded, reversed, and re-emitted to focus on the location of the desired target, providing corrections for both phase and amplitude. A review of phase correction methods can be found in [38]. In addition to the above methods, a deconvolution-based algorithm was recently proposed by Estrada et al. for correcting skull-induced distortions in transcranial optoacoustic microscopy (OAM) [20].

The skull aberration correction methods implemented so far are either fast but inaccurate, accurate but time consuming, or require an auxiliary imaging modality to acquire the structural information of the imaging target. On the other hand, although modern computational methods have been used for improved image reconstruction [39–47], there has not yet been any study of machine learning (ML) for skull aberration correction. Here, we introduce, for the first time, a novel algorithm based on the vector space similarity (VSS) model, in conjunction with a ray-tracing simulation, to correct the skull-induced distortions in images generated by TsPAM. VSS was first used in Cornell's System for the Mechanical Analysis and Retrieval of Text (SMART) in the 1960s [48]; it has since been widely used in intelligent information retrieval, from search engines [49] to big-data platforms such as biomedical document collections [50]. We employ a modern take on the VSS model to match the extremum information in the distorted, skull-affected PA time-frequency domain signal against reference signals generated by our recently developed ray-tracing-based simulation [21,51–53]. Additional justification for applying the VSS model comes from its successful use in PAM post-processing for tissue vasculature classification [54] and for quantification of tissue response to cancer treatment [55].

2. Methods

2.1 PA wave propagation model

To simulate the propagation of the PA initial pressure wave from a point source, we use a semi-analytical numerical acoustic solver that we recently developed [21]. The solver is based on a deterministic ray-tracing approach in the time-frequency domain and considers a homogenized single-layer model of the skull, taking into account dispersion, reflection, refraction, and mode conversion between the skull surfaces. Attenuation of light due to the skull or depth is ignored, and it is assumed that sufficient initial pressure is generated at the imaging target location; the acoustic attenuation of the initial pressure traveling towards the skull surface is, however, taken into account.

The imaging target is assumed to be a combination of many point sources. The impulse rays are propagated to the first fluid-solid interface, multiplied by the skull's transmission coefficient, and propagated further. This procedure is repeated until all the rays reach the transducer's surface. The PA signal is produced by convolving each individual frequency component of the initial pressure with its corresponding impulse response. When the skull is not present, the ray simply propagates through free space directly towards the transducer's surface.
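The transmission coefficient applied at each interface follows standard acoustics. A minimal sketch for the normal-incidence case is shown below; the material values are illustrative placeholders (the actual properties used by the solver are listed in Table 1), and the solver itself additionally handles oblique incidence and mode conversion, which this sketch does not.

```python
def pressure_transmission(rho1, c1, rho2, c2):
    """Normal-incidence pressure transmission coefficient at an interface,
    T = 2*Z2 / (Z1 + Z2), with acoustic impedance Z = rho * c."""
    z1, z2 = rho1 * c1, rho2 * c2
    return 2.0 * z2 / (z1 + z2)

# Illustrative values (the properties actually used are in Table 1):
# soft tissue (rho ~ 1040 kg/m^3, c ~ 1500 m/s) into bone (~1900, ~2900).
t_in = pressure_transmission(1040, 1500, 1900, 2900)
print(f"tissue->skull pressure transmission: {t_in:.2f}")
```

Note that the pressure transmission coefficient exceeds 1 when going into the stiffer medium; energy conservation is expressed by the intensity coefficients, not the pressure ones.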

Figure 1 shows a 2-D illustration of the simplified numerical skull phantom used in our simulations. The imaging target is a vessel represented by 27007 point sources. The diameter of the vessel varies between 30 $\mu$m and 150 $\mu$m, and the entire vessel is located at an average depth of 5 mm. In this simulation, the axial and lateral resolutions are taken as 6 $\mu$m and 10 $\mu$m, respectively. We modeled every initial pressure point source as a sphere with radius $r_0$; the spherical point sources generate broadband spherical acoustic waves. A single-layer homogenized skull tissue with a thickness varying between 0.5 mm and 2.5 mm is considered. The outer-skull surface is considered flat, while the inner-skull surface is quantized into locally flat segments; this approximates the effect of varying skull thickness on the resulting image. The space between the skull and the target is modeled with the same acoustic properties as brain soft tissue [22,30]. A flat ultrasound transducer with an element diameter of 2$r_d$, a 30 MHz center frequency, and 100% bandwidth is used. The transducer is in contact with the outer-skull surface through ultrasound gel. The acoustic properties of the skull, brain soft tissue, and ultrasound gel used in our simulations are listed in Table 1.


Fig. 1. 2-D illustration of the simple numerical skull phantom model used in our simulations. The initial pressure point source is modeled as a sphere with radius $r_0$ at a depth of $d$ from the outer-skull surface. A single-layer homogenized skull tissue with a thickness varying between 0.5 mm and 2.5 mm is considered. The solid acceptance angle of the transducer is indicated by the dashed lines.


Table 1. Acoustic properties of skull, brain soft tissue, and the ultrasound gel used in the simulations.$^a$

2.2 Vector space similarity model

Our proposed method is based on the VSS model, in which no direct signal amplification or shift is performed; instead, the compensated signal is reconstructed according to the similarity between the skull-affected signal and the signals in the training dataset [49,57]. To describe the VSS model, suppose we have a query ${q=({\boldsymbol {f}}_{\textbf {q1}},{\boldsymbol {f}}_{\textbf {q2}},\ldots ,{\boldsymbol {f}}_{\textbf {qn}})}$, and the goal is to find the entry from the training dataset, ${d_i=({\boldsymbol {f}}_{\textbf {i1}},{\boldsymbol {f}}_{\textbf {i2}},\ldots ,{\boldsymbol {f}}_{\textbf {in}})}$, most similar to the query with respect to the defined similarity features. Here, ${d_i}$ is the $i^{th}$ entry in the training dataset, $q$ is the query, and ${{\boldsymbol {f}}_{\textbf {ij}}}$ and ${{\boldsymbol {f}}_{\textbf {qj}}}$ are the $j^{th}$ feature vectors of ${d_i}$ and $q$, respectively. The dot product of each query feature vector with every corresponding feature vector of the training dataset is calculated; then, the cosine similarity measure is used to find the minimum angle between the query and the training dataset, as indicated in Eqs. (1) and (2), where “$.$” denotes the dot product and “$\parallel ~ \parallel$” denotes the vector norm.

$$cos({\boldsymbol{f}_{\boldsymbol{ij}}},{\boldsymbol{f}_{\boldsymbol{qj}}})=\frac{{\boldsymbol{f}_{\boldsymbol{ij}}}.{\boldsymbol{f}_{\boldsymbol{qj}}}}{\parallel {\boldsymbol{f}_{\boldsymbol{ij}}} \parallel.\parallel {\boldsymbol{f}_{\boldsymbol{qj}}} \parallel}=\frac{\Sigma_{k=1}^{n} {f}_{ij,k}.{f}_{qj,k}}{\sqrt{\Sigma_{k=1}^{n} {f}_{ij,k}^2}.\sqrt{\Sigma_{k=1}^{n} {f}_{qj,k}^2}}$$
$$ Similarity(\boldsymbol{f}_{\boldsymbol{ij}},\boldsymbol{f}_{\boldsymbol{qj}})=arccos(\frac{\Sigma_{k=1}^{n} {f}_{ij,k}.{f}_{qj,k}}{\sqrt{\Sigma_{k=1}^{n} {f}_{ij,k}^2}.\sqrt{\Sigma_{k=1}^{n} {f}_{qj,k}^2}})$$
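Equations (1) and (2) can be sketched in a few lines of Python; the function name and the round-off guard are our additions, not part of the paper's (Java) implementation.

```python
import numpy as np

def similarity(f_i, f_q):
    """Angle (radians) between two feature vectors, per Eqs. (1)-(2):
    arccos of the cosine similarity; a smaller angle means more similar."""
    f_i, f_q = np.asarray(f_i, float), np.asarray(f_q, float)
    cos = np.dot(f_i, f_q) / (np.linalg.norm(f_i) * np.linalg.norm(f_q))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards against round-off

print(similarity([1, 0], [0, 1]))  # orthogonal vectors -> pi/2
print(similarity([1, 2], [2, 4]))  # parallel vectors   -> 0
```

The arccos makes the measure an angle, so the best match is found by minimizing it, consistent with the selection step described in Section 2.3.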

2.3 Aberration correction algorithm

The aberration correction algorithm uses the PA signal extremum information in the time-frequency domain as feature vectors. The algorithm is as follows. The input to the algorithm is the “with skull” signals. The signals are first decomposed into their time-frequency components using the short-time Fourier transform (STFT) [58]. For the implementation of the STFT in this study, the signals are discretized at a rate of 250 megasamples per second and 32 frequencies are modeled. Then, for each frequency, feature vectors, namely $TimeVector$ and $AmpVector$, are extracted as follows.

PA signal vector space (For real and imaginary parts individually)

 For each frequency ${f}_{i}$ in the signal:

 TimeVector:

 Temporal delay

 Time points at which minimum occurs (${mnpt}_{1}$,${mnpt}_{2}$,…,${mnpt}_{k}$)

 Time points at which maximum occurs (${mxpt}_{1}$,${mxpt}_{2}$,…,${mxpt}_{k}$)

 AmpVector:

 Minimum peak amplitudes (${mnpa}_{1}$,${mnpa}_{2}$,…,${mnpa}_{k}$)

 Maximum peak amplitudes (${mxpa}_{1}$,${mxpa}_{2}$,…,${mxpa}_{k}$)
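The per-frequency feature extraction listed above can be sketched as follows for one STFT row (one frequency); the function name and the crude temporal-delay proxy (time of the largest-magnitude sample) are our assumptions, not the paper's Java implementation.

```python
import numpy as np

def extract_features(row, fs):
    """Sketch of the per-frequency feature extraction listed above:
    local-extremum times (TimeVector) and amplitudes (AmpVector) of one
    STFT row, real or imaginary part, sampled at rate fs."""
    interior = row[1:-1]
    is_max = (interior > row[:-2]) & (interior > row[2:])   # local maxima
    is_min = (interior < row[:-2]) & (interior < row[2:])   # local minima
    mx = np.where(is_max)[0] + 1
    mn = np.where(is_min)[0] + 1
    delay = np.argmax(np.abs(row)) / fs   # crude temporal-delay proxy
    time_vector = np.concatenate(([delay], mn / fs, mx / fs))
    amp_vector = np.concatenate((row[mn], row[mx]))
    return time_vector, amp_vector
```

In the paper's setting this would be applied to each of the 32 frequency rows, for the real and imaginary parts separately.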

Then, the dot product of each obtained feature vector with the corresponding vector of each training dataset entry (${d_i}$) is calculated and divided by the norms of the vectors to yield the cosine of the angle between them; the similarity is then calculated via the arccos of the obtained value. For each frequency, the overall similarity of each reference signal from the training dataset to the query data $q$ (the skull-affected signal) is calculated as the mean of its feature-vector similarities (see Eq. (3)). The mean similarity across all frequencies is then calculated for each pair, forming a similarity metric vector.

$$ Similarity(q,{d}_{i})=\frac{1}{2}(Similarity(\boldsymbol{T}_{\boldsymbol{i}},\boldsymbol{T}_{\boldsymbol{q}})+Similarity(\boldsymbol{A}_{\boldsymbol{i}},\boldsymbol{A}_{\boldsymbol{q}}))$$
where ${\textbf {T}_{\textbf {i}}}$ and ${\textbf {T}_{\textbf {q}}}$ are the ${\textit {TimeVectors}}$ of ${d_i}$ and $q$, respectively, and ${\textbf {A}_{\textbf {i}}}$ and ${\textbf {A}_{\textbf {q}}}$ are the ${\textit {AmpVectors}}$ of ${d_i}$ and $q$, respectively.

Finally, the minimum of the similarity metric vector is used to select the best-matched reference signal. For each frequency, according to the temporal delay and maximum amplitude difference between the input signal and the corresponding matched reference signal, the appropriate amplification coefficients and time shifts are applied to reconstruct the “without skull” signal (see Algorithm (1)).
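The matching step (Eq. (3) followed by the minimum selection) can be sketched as below; the dictionary field names are illustrative assumptions, not the paper's data structures.

```python
import numpy as np

def angle(u, v):
    """Feature-vector angle per Eq. (2); a smaller angle means more similar."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def best_match(query, dataset):
    """Sketch of the matching step: pick the training entry whose mean
    TimeVector/AmpVector angle to the query (Eq. (3)) is smallest.
    Field names "T" and "A" are illustrative, not the paper's code."""
    scores = [0.5 * (angle(d["T"], query["T"]) + angle(d["A"], query["A"]))
              for d in dataset]
    return dataset[int(np.argmin(scores))]
```

The selected entry then supplies the per-frequency time shifts and amplification coefficients used for the actual compensation.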

Algorithm 1. Pseudocode of the skull-induced aberration correction algorithm (published as inline figure boe-11-10-5542-i001).

2.4 Training dataset

We created a large training dataset of “without skull” and “with skull” signal pairs with different skull thicknesses (from 0.3 mm to 2.5 mm with a 0.1 mm step size) and different imaging depths (from 0.1 mm to 25 mm below the skull surface with a 0.5 mm step size), and then extracted the abovementioned features from the “with skull” signals, creating a training dataset of 1150 entries.
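The grid described above works out to 23 thicknesses times 50 depths, which matches the 1150 entries; a quick enumeration (our reading of the stated endpoints and step sizes):

```python
import numpy as np
from itertools import product

# Training grid as described in Section 2.4: 23 thicknesses x 50 depths.
thicknesses = np.round(np.arange(0.3, 2.5 + 1e-9, 0.1), 1)   # mm
depths = np.round(np.arange(0.1, 25.0, 0.5), 1)              # mm below skull
cases = list(product(thicknesses, depths))
print(len(cases))  # 1150
```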

3. Results

3.1 Skull-induced aberration compensation: investigation on time traces

Initially, we evaluated the compensation algorithm in correcting the aberration induced in the PA signal produced by a 0.1 mm absorbing sphere and passing through skull tissue. The tested scenarios are as follows: (i) skull thicknesses of 0.5, 1, and 2 mm, with the absorbing sphere located at a depth of 5 mm; (ii) a skull thickness of 1 mm, with the absorbing sphere located at depths of 2, 8, and 20 mm. The unaberrated (i.e., without skull), aberrated (i.e., with skull), and compensated signals are shown in Fig. 2 and Fig. 3.


Fig. 2. Skull-induced aberration compensation of PA signals produced by a 0.1 mm absorbing sphere located at depths of (a) 2 mm, (b) 8 mm, and (c) 20 mm, passing through a skull tissue with a thickness of 1 mm. (i) Signal amplitudes and (ii) signal gradients. In this simulation, there is a 5 mm layer of ultrasound gel between the ultrasound transducer and the skull.


Fig. 3. Skull-induced aberration compensation of PA signals produced from a 0.1 mm absorbing sphere located at the depth of 5 mm passing through a skull tissue with the thicknesses of (a) 0.5 mm, (b) 1 mm, and (c) 2 mm. (i) Signal amplitudes and (ii) signal gradients. In this simulation, there is a 5 mm layer of ultrasound gel between the ultrasound transducer and the skull.


We extracted the PA background noise, comprising a combination of electronic noise and system thermal noise, from an experimental setup and added it to our simulated signals to evaluate the tolerance of our compensation algorithm to noise. The experimental setup was as follows. An Nd:YAG laser (PhocusMobil, Opotek Inc., CA, USA) with a repetition rate of 10 Hz and a pulse width of 8 ns was used. For light delivery, we used a custom fiber bundle (Newport Corporation, Irvine, CA, USA). For data acquisition, a Verasonics Vantage 128 system was used. For PA signal detection, an L22-14v linear-array ultrasound probe (Verasonics Inc., USA) with 128 elements, an 18.5 MHz central frequency, and 65% bandwidth was used. On the imaging end, the transducer was placed and held perpendicular to the sample. A 2 mm diameter carbon lead phantom in water was imaged at 690 nm. The noise was extracted following the deconvolution algorithm explained in [59]. We used 100 frames of data and modeled the noise distribution. The noise signal was normalized, and two noise levels were formed: 10% and 20%. The noise signals were then added to some of the distorted signals in Fig. 2 and Fig. 3, and the compensation algorithm was applied to the noisy PA signals. The results of this experiment are shown in Fig. 4.
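A minimal sketch of the noise-injection step described above follows; the exact scaling convention used by the authors is an assumption here (noise normalized to unit peak, then scaled to a fraction of the signal peak), and the random noise stands in for the experimentally extracted background noise.

```python
import numpy as np

def add_noise(signal, noise, level):
    """Sketch of the noise test described above: normalize the extracted
    background noise and add it at a fraction `level` (e.g., 0.10 or 0.20)
    of the signal peak. The scaling convention is our assumption."""
    noise = noise / np.max(np.abs(noise))                  # unit-peak noise
    return signal + level * np.max(np.abs(signal)) * noise

rng = np.random.default_rng(0)                  # stand-in for measured noise
sig = np.sin(np.linspace(0, 2 * np.pi, 100))
noisy = add_noise(sig, rng.standard_normal(100), 0.10)
```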


Fig. 4. Skull-induced aberration compensation of PA signals produced from a 0.1 mm absorbing sphere passing through a skull tissue. (a) Skull thickness is 2 mm and the target is located at 5 mm depth (PA signal is contaminated with 10% background noise), (b) skull thickness is 2 mm and the target is located at 5 mm depth (PA signal is contaminated with 20% background noise), (c) skull thickness is 1 mm and the target is located at 20 mm depth (PA signal is contaminated with 10% background noise), and (d) skull thickness is 1 mm and the target is located at 20 mm depth (PA signal is contaminated with 20% background noise). (i) Signal amplitudes and (ii) signal gradients. In this simulation, there is a 5 mm layer of ultrasound gel between the ultrasound transducer and the skull.


To improve the accuracy of the compensation algorithm, we added a pre-processing step before compensation in which low-amplitude samples (< 5% of the signal peak) are thresholded to zero.
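This pre-processing step amounts to a one-line mask; the function name is ours.

```python
import numpy as np

def threshold_low_amplitude(signal, frac=0.05):
    """Pre-processing described above: zero out samples whose magnitude is
    below `frac` of the signal peak (5% in the paper)."""
    out = signal.copy()
    out[np.abs(out) < frac * np.max(np.abs(out))] = 0.0
    return out

print(threshold_low_amplitude(np.array([0.01, 0.5, -1.0, 0.04])))
```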

The results in Figs. 2 and 3 show that both the time shift and the amplitude distortion of the signal have been well recovered. The gradients of the recovered and undistorted signals are almost identical, which suggests that neither the depth of the target nor the skull thickness introduces phase distortions into the recovered signal. Figure 4 shows that, even with a noisy distorted signal, the compensation algorithm still recovers the undistorted signal. The phase information is not affected unless there is a steep rise or fall in the signal; such distortion translates into a displacement of the components of the imaging target in the axial direction and a slight speckle-like artifact in the image. Comparing the results in Figs. 4(a)-(d), one can conclude that the amplitude recovery of our proposed compensation algorithm is more sensitive to noise than its time shift recovery.

To quantify the performance of the compensation algorithm, we defined a quantitative measure, which we call the “recovery percentage”, calculated as:

$$recovery~percentage~(\%)=\left(1 - \frac{l^2(\mid{signal_{comp}}\mid-\mid{signal_{without}}\mid)}{l^2(\mid{signal_{without}}\mid)}\right) \times 100$$
where the $l^2$ norm is calculated as the square root of the sum of the squared signal sample values. Table 2 shows the recovery percentage for the results shown in Figs. 2–4.
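Equation (4) translates directly into code:

```python
import numpy as np

def recovery_percentage(comp, without):
    """Eq. (4): recovery percentage from the l2 norm of the error between
    the magnitudes of the compensated and 'without skull' signals,
    relative to the l2 norm of the 'without skull' signal."""
    err = np.linalg.norm(np.abs(comp) - np.abs(without))
    return (1.0 - err / np.linalg.norm(np.abs(without))) * 100.0

# A perfectly compensated signal scores 100%:
print(recovery_percentage(np.array([1.0, -2.0]), np.array([1.0, -2.0])))
```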


Table 2. Recovery percentage calculated for the compensated original signal, and for noisy signals with 10% and 20% noise.

3.2 Skull-induced aberration compensation: investigation on synthetic TsPAM images

The compensation algorithm was then applied to aberrated signals collected from a synthetic PAM experiment (see the setup and the imaging target in Fig. 1). The results are shown in Fig. 5. A representative depth profile of the unaberrated, aberrated, and compensated images (indicated with green dotted lines in the images in Fig. 5(a)) is plotted in Fig. 5(b). As can be seen in Fig. 5(b), the axial profile has been almost perfectly recovered in terms of both amplitude and phase. The recovery percentage calculated for the compensated image was 91.9%.


Fig. 5. Aberration correction of TsPAM images. (a) Synthetic TsPAM image acquired from the experimental setup depicted in Fig. 1, (i) unaberrated image, (ii) aberrated image, (iii) compensated image. (b) A representative depth profile, indicated with green dotted lines in images in (a), of the unaberrated, aberrated and compensated images.


Using the same method explained in Section 3.1, we created two noisy aberrated TsPAM images, one with 10% and one with 20% noise (see Fig. 6(ii)). We then compensated the images; the results are shown in Fig. 6(iii). The recovery percentages calculated for these compensated images are 90.60% and 87.95%, respectively.


Fig. 6. Aberration correction of noisy TsPAM images, reproduced from the TsPAM in Fig. 5, (a) with 10% noise, and (b) with 20% noise. (i) Unaberrated image, (ii) aberrated image, and (iii) compensated image.


3.3 Execution time analysis

Both the signal simulator [21] and the compensation algorithm were implemented in Java (OpenJDK 13). All the signal processing was conducted in MATLAB R2016a. On a hexa-core Intel Core i7 CPU at 3.60 GHz with 32 GB of RAM, the compensation algorithm took 24 $\pm$ 2 ms for one compensation task; with the current code, the compensation is done offline. By implementing the compensation algorithm on a graphics processing unit (GPU) or a field-programmable gate array (FPGA), a hundreds- or thousands-fold speed-up is achievable, which could make real-time compensation of the signal in the data acquisition line feasible.

3.4 Non-flat skull aberration compensation: preliminary results

We evaluated our compensation algorithm in correcting the aberrated PA signal produced by a 0.1 mm spherical absorber and passed through an angled skull tissue, with the absorbing sphere located at a depth of 5 mm from the outer-skull surface. The pressure wave travels a longer path when it passes through the angled skull; the larger the angle, the longer the travel time. Table 3 shows the preliminary results for the skull thickness variation (i.e., $\Delta$h), the percentage of the signal transmitted, and the recovery percentage of the distorted signal after compensation at angles $\theta$ = 5$^{\circ }$ to 30$^{\circ }$ relative to the transducer axis. Since the critical angle for the fluid-solid interface is about 31$^{\circ }$ [22], we report simulation results only for angles up to 30$^{\circ }$.
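The geometric origin of the longer path can be illustrated with a straight-ray sketch: a flat layer of thickness $h$ tilted by $\theta$ presents a path length of $h/\cos\theta$ along the transducer axis. This ignores refraction at the interfaces, so it is only a first-order illustration of the $\Delta$h trend in Table 3; the 1 mm thickness is an example value.

```python
import numpy as np

# Extra path seen by an axial ray crossing a flat skull layer of thickness h
# tilted by theta relative to the transducer axis (straight-ray approximation,
# refraction ignored; h = 1 mm is an example value).
h = 1.0  # mm
for theta_deg in (5, 15, 30):
    dh = h / np.cos(np.radians(theta_deg)) - h
    print(f"theta = {theta_deg:2d} deg -> extra path = {dh * 1000:.0f} um")
```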


Table 3. Non-flat skull aberration compensation. Skull thickness variation ($\Delta$h), transmitted signal percentage, and recovery percentage of the distorted signals generated with non-flat skulls at angles of 5$^{\circ }$ to 30$^{\circ }$ relative to the transducer axis.

4. Discussion

We studied a novel algorithm based on the VSS model for quasi-real-time compensation of skull-induced distortions in transcranial photoacoustic microscopy. Although VSS is an effective and efficient similarity measurement algorithm, it is not the only similarity algorithm explored in the literature. Other similarity metrics exist, such as the Pearson correlation coefficient (PCC) [60], which works on a similar principle but with a heavier computational complexity. Both VSS and PCC have trouble distinguishing the differing importance of features. To address this problem, many variations of similarity measurement have been developed, including weighting approaches, combination measures, and rating normalization methods [61].

The skull-induced aberration compensation algorithm described here is designed for TsPAM imaging. The proposed compensation method is based on the longitudinal PA waves generated from a single point source placed on the axis of the point detector; the effect of other adjacent sources on the PA signal is neglected. Therefore, the transducer receives waves with only small incident angles and almost no shear wave component. The point-source assumption is accurate if the lateral resolution of the image is governed by the diffraction-limited size of the focused light beam. The simulated numerical phantom results (Figs. 5 and 6) confirmed the ability of the proposed algorithm to accurately compensate for the four previously explored skull-induced acoustic distortions [20], namely signal amplitude attenuation, time shift, signal broadening, and multiple reflections (Figs. 2 and 3); due to the highly attenuating effect of the skull, the multiple reflections cannot be visualized in the simulated aberrated signals. In Fig. 3(a), only one peak can be visualized after the main peak, caused by the strength of the first reflection at the skull surface when the skull thickness was 0.5 mm. With thicker skulls, the skull-induced attenuation makes the reflected signals too weak to be visualized. The noise tolerance of the compensation algorithm was also tested. It was shown that the recovery rate is only slightly affected by the presence of 10% and 20% noise; the error appears as a mild speckle-like artifact in the image that can easily be removed by median and mean filters. We also observed that the amplitude recovery of our proposed compensation algorithm is more sensitive to noise than the time shift recovery.

The main features of the algorithm are its simplicity and fast execution time, which stem from its light computational load. Introducing more parallelism to the implementation of the algorithm is possible in various ways. One possible way is to convert the code from the current CPU-based multi-threaded execution to GPU-accelerated execution. There are two independent anchor points within the compensator code that can be utilized for this purpose: (i) using 32 cores in parallel to calculate the similarity of a given signal against the training dataset through evaluation of the feature-vector angles, where 32 is the number of frequency channels used, and (ii) employing a GPU/many-core system with each core responsible for checking similarity against one entry or a predefined subset of the dataset. The first method would decrease the computation time for a single A-line by a factor of 32, while the latter can decrease the entire end-to-end compensation time by a factor equal to the number of cores; this can be up to 7000 for modern GPUs. The fact that our method can easily be segregated among parallel threads also makes an FPGA-based hardware implementation plausible, which could be integrated into the transducer, outputting an aberration-compensated image in real time.

The proposed method is independent of the skull anatomy. This is a valuable feature of our algorithm because, in a TsPAM experiment, the anatomy and spatial characteristics of the skull are not available. In this preliminary work, we assumed that the skull is flat and perpendicular to the transducer axis.

Although the compensation algorithm is based on preparing a training set, it is not considered a machine learning algorithm, mainly because it does not have a layered kernel that yields the compensated signal from the input aberrated signal.

The proposed aberration compensation algorithm has several limitations: (i) the skull is considered a homogenized single-layer bone with smooth surfaces and no curvature; (ii) the brain is considered a homogenized soft tissue with constant acoustic properties; (iii) attenuation of light due to the skull or depth is ignored, and it is assumed that sufficient initial pressure is generated at the imaging target location; (iv) the PA signal is assumed to be generated from a single spherical point source, and the effect of other adjacent sources on the PA signal is neglected; (v) the simulation framework and wave propagation model could be extended to simulate a line-shaped absorbing source to account for the realistic shape of the optical absorption source in the tissue.

In a real-world application, after acquiring raw data from a TsPAM system, we will use our proposed algorithm as explained in Section 2.3, with only one change: adding more data to the VSS training dataset. So far, we have trained the VSS algorithm only with flat skulls. In the future, using the finite-element method, the skull surface will be segmented into very small regions (comparable in size to the acoustic wavelength). Each of these small regions can be approximated as a layer with a flat surface angled relative to the axis connecting the transducer (defined as a point detector) and the absorbing target (defined as a point source). Our simulator [21] will then generate data for different angles of the flat skull (see Table 3 for preliminary data on angled-skull transmission) to determine how much of the incident signal is diffracted and how much of it is received by the transducer; such data will be added to the VSS training dataset.

5. Conclusion

We developed a skull-induced aberration compensation algorithm based on the vector space similarity model and ray-tracing-based simulations. The main features of the algorithm are its simplicity and fast execution time. We demonstrated the effectiveness of the algorithm on numerical phantoms with a recovery percentage of 91.9%; i.e., 91.9% of the signal distortion due to amplitude attenuation, time shift, and signal broadening was retrieved. By adding noise to the aberrated signals, the noise tolerance of the algorithm was evaluated; the recovery percentage decreased to 90.60% (with 10% noise) and 87.95% (with 20% noise). Parallel implementation of the code on GPUs and FPGAs, more sophisticated skull and tissue models, and taking into account the effect of fluence are the future plans of this study.

Funding

National Institutes of Health (R01EB027769-01, R01EB028661-01).

Acknowledgments

We thank Dr. Rayyan Manwar from Wayne State University for editing the manuscript. The portion of the research performed at the University of Illinois Chicago was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under Award Numbers R01EB027769-01 and R01EB028661-01. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Disclosures

The authors declare no conflicts of interest.

References

1. L. V. Wang, “Tutorial on photoacoustic microscopy and computed tomography,” IEEE J. Sel. Top. Quantum Electron. 14(1), 171–179 (2008). [CrossRef]  

2. M. Li, Y. Tang, and J. Yao, “Photoacoustic tomography of blood oxygenation: A mini review,” Photoacoustics (2018).

3. A.-R. Mohammadi-Nejad, M. Mahmoudzadeh, M. S. Hassanpour, F. Wallois, O. Muzik, C. Papadelis, A. Hansen, H. Soltanian-Zadeh, J. Gelovani, and M. Nasiriavanaki, “Neonatal brain resting-state functional connectivity imaging modalities,” Photoacoustics 10, 1 (2018). [CrossRef]  

4. J. Kang, E. M. Boctor, S. Adams, E. Kulikowicz, H. K. Zhang, R. C. Koehler, and E. M. Graham, “Validation of noninvasive photoacoustic measurements of sagittal sinus oxyhemoglobin saturation in hypoxic neonatal piglets,” J. Appl. Physiol. 125(4), 983–989 (2018). [CrossRef]  

5. M. Nasiriavanaki, J. Xia, H. Wan, A. Q. Bauer, J. P. Culver, and L. V. Wang, “High-resolution photoacoustic tomography of resting-state functional connectivity in the mouse brain,” Proc. Natl. Acad. Sci. 111(1), 21–26 (2014). [CrossRef]  

6. J.-M. Yang, C. Favazza, R. Chen, J. Yao, X. Cai, K. Maslov, Q. Zhou, K. K. Shung, and L. V. Wang, “Simultaneous functional photoacoustic and ultrasonic endoscopy of internal organs in vivo,” Nat. Med. 18(8), 1297–1302 (2012). [CrossRef]  

7. S. Mahmoodkalayeh, H. Z. Jooya, A. Hariri, Y. Zhou, Q. Xu, M. A. Ansari, and M. R. Avanaki, “Low temperature-mediated enhancement of photoacoustic imaging depth,” Sci. Rep. 8(1), 4873 (2018). [CrossRef]  

8. K. Kratkiewicz, R. Manwara, Y. Zhou, M. Mozaffarzadeh, and K. Avanaki, “Technical considerations when using verasonics research ultrasound platform for developing a photoacoustic imaging system,” arXiv preprint arXiv:2008.06086 (2020).

9. X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnol. 21(7), 803–806 (2003). [CrossRef]  

10. J. Yao, J. Xia, K. I. Maslov, M. Nasiriavanaki, V. Tsytsarev, A. V. Demchenko, and L. V. Wang, “Noninvasive photoacoustic computed tomography of mouse brain metabolism in vivo,” NeuroImage 64, 257–266 (2013). [CrossRef]  

11. L. Li, J. Xia, G. Li, A. Garcia-Uribe, Q. Sheng, M. A. Anastasio, and L. V. Wang, “Label-free photoacoustic tomography of whole mouse brain structures ex vivo,” Neurophotonics 3(3), 1 (2016). [CrossRef]  

12. J. Yao, L. Wang, J.-M. Yang, K. I. Maslov, T. T. Wong, L. Li, C.-H. Huang, J. Zou, and L. V. Wang, “High-speed label-free functional photoacoustic microscopy of mouse brain in action,” Nat. Methods 12(5), 407–410 (2015). [CrossRef]  

13. L. Lin, J. Xia, T. T. Wong, L. Li, and L. V. Wang, “In vivo deep brain imaging of rats using oral-cavity illuminated photoacoustic computed tomography,” J. Biomed. Opt. 20(1), 016019 (2015). [CrossRef]  

14. M.-L. Li, J.-T. Oh, X. Xie, G. Ku, W. Wang, C. Li, G. Lungu, G. Stoica, and L. V. Wang, “Simultaneous molecular and hypoxia imaging of brain tumors in vivo using spectroscopic photoacoustic tomography,” Proc. IEEE 96(3), 481–489 (2008). [CrossRef]  

15. S. Hu, K. Maslov, V. Tsytsarev, and L. V. Wang, “Functional transcranial brain imaging by optical-resolution photoacoustic microscopy,” J. Biomed. Opt. 14(4), 040503 (2009). [CrossRef]  

16. M. Kneipp, J. Turner, H. Estrada, J. Rebling, S. Shoham, and D. Razansky, “Effects of the murine skull in optoacoustic brain microscopy,” J. Biophotonics 9(1-2), 117–123 (2016). [CrossRef]  

17. H. Estrada, J. Rebling, J. Turner, and D. Razansky, “Broadband acoustic properties of a murine skull,” Phys. Med. Biol. 61(5), 1932–1946 (2016). [CrossRef]  

18. M. A. O’Reilly, A. Muller, and K. Hynynen, “Ultrasound insertion loss of rat parietal bone appears to be proportional to animal mass at submegahertz frequencies,” Ultrasound Med. Biol. 37(11), 1930–1937 (2011). [CrossRef]  

19. X. Jin, C. Li, and L. V. Wang, “Effects of acoustic heterogeneities on transcranial brain imaging with microwave-induced thermoacoustic tomography,” Med. Phys. 35(7Part1), 3205–3214 (2008). [CrossRef]  

20. H. Estrada, X. Huang, J. Rebling, M. Zwack, S. Gottschalk, and D. Razansky, “Virtual craniotomy for high-resolution optoacoustic brain microscopy,” Sci. Rep. 8(1), 1459 (2018). [CrossRef]  

21. L. Mohammadi, H. Behnam, J. Tavakkoli, and M. Avanaki, “Skull’s photoacoustic attenuation and dispersion modeling with deterministic ray-tracing: Towards real-time aberration correction,” Sensors 19(2), 345 (2019). [CrossRef]  

22. R. W. Schoonover, L. V. Wang, and M. A. Anastasio, “Numerical investigation of the effects of shear waves in transcranial photoacoustic tomography with a planar geometry,” J. Biomed. Opt. 17(6), 061215 (2012). [CrossRef]  

23. F. Fry and J. Barger, “Acoustical properties of the human skull,” J. Acoust. Soc. Am. 63(5), 1576–1590 (1978). [CrossRef]  

24. Q. Xu, B. Volinski, A. Hariri, A. Fatima, and M. Nasiriavanaki, “Effect of small and large animal skull bone on photoacoustic signal,” in Photons Plus Ultrasound: Imaging and Sensing 2017, vol. 10064 (International Society for Optics and Photonics, 2017), p. 100643S.

25. B. Volinski, A. Hariri, A. Fatima, Q. Xu, and M. Nasiriavanaki, “Photoacoustic investigation of a neonatal skull phantom,” in Photons Plus Ultrasound: Imaging and Sensing 2017, vol. 10064 (International Society for Optics and Photonics, 2017), p. 100643T.

26. B. E. Treeby and B. Cox, “Modeling power law absorption and dispersion for acoustic propagation using the fractional Laplacian,” J. Acoust. Soc. Am. 127(5), 2741–2748 (2010). [CrossRef]  

27. B. E. Treeby, “Acoustic attenuation compensation in photoacoustic tomography using time-variant filtering,” J. Biomed. Opt. 18(3), 036008 (2013). [CrossRef]  

28. A. Wydra, “Development of a new forming process to fabricate a wide range of phantoms that highly match the acoustical properties of human bone,” M.Sc. Thesis (2013).

29. X. L. Deán-Ben, D. Razansky, and V. Ntziachristos, “The effects of acoustic attenuation in optoacoustic signals,” Phys. Med. Biol. 56(18), 6129–6148 (2011). [CrossRef]  

30. K. Nam, I. M. Rosado-Mendez, N. C. Rubert, E. L. Madsen, J. A. Zagzebski, and T. J. Hall, “Ultrasound attenuation measurements using a reference phantom with sound speed mismatch,” Ultrasonic Imaging 33(4), 251–263 (2011). [CrossRef]  

31. K. Yoon, W. Lee, P. Croce, A. Cammalleri, and S.-S. Yoo, “Multi-resolution simulation of focused ultrasound propagation through ovine skull from a single-element transducer,” Phys. Med. Biol. 63(10), 105001 (2018). [CrossRef]  

32. A. Kyriakou, E. Neufeld, B. Werner, G. Székely, and N. Kuster, “Full-wave acoustic and thermal modeling of transcranial ultrasound propagation and investigation of skull-induced aberration correction techniques: a feasibility study,” J. Ther. Ultrasound 3(1), 11 (2015). [CrossRef]  

33. C. Huang, L. Nie, R. W. Schoonover, Z. Guo, C. O. Schirra, M. A. Anastasio, and L. V. Wang, “Aberration correction for transcranial photoacoustic tomography of primates employing adjunct image data,” J. Biomed. Opt. 17(6), 066016 (2012). [CrossRef]  

34. J.-F. Aubry, M. Tanter, M. Pernot, J.-L. Thomas, and M. Fink, “Experimental demonstration of noninvasive transskull adaptive focusing based on prior computed tomography scans,” J. Acoust. Soc. Am. 113(1), 84–93 (2003). [CrossRef]  

35. F. Marquet, M. Pernot, J. Aubry, G. Montaldo, L. Marsac, M. Tanter, and M. Fink, “Non-invasive transcranial ultrasound therapy based on a 3D CT scan: protocol validation and in vitro results,” Phys. Med. Biol. 54(9), 2597–2613 (2009). [CrossRef]  

36. G. Pinton, J.-F. Aubry, E. Bossy, M. Muller, M. Pernot, and M. Tanter, “Attenuation, scattering, and absorption of ultrasound in the skull bone,” Med. Phys. 39(1), 299–307 (2012). [CrossRef]  

37. S. Almquist, D. L. Parker, and D. A. Christensen, “Rapid full-wave phase aberration correction method for transcranial high-intensity focused ultrasound therapies,” Journal of therapeutic ultrasound 4(1), 30 (2016). [CrossRef]  

38. A. Kyriakou, E. Neufeld, B. Werner, M. M. Paulides, G. Szekely, and N. Kuster, “A review of numerical and experimental compensation techniques for skull-induced phase aberrations in transcranial focused ultrasound,” Int. J. Hyperthermia 30(1), 36–46 (2014). [CrossRef]  

39. S. Antholzer, M. Haltmeier, R. Nuster, and J. Schwab, “Photoacoustic image reconstruction via deep learning,” in Photons Plus Ultrasound: Imaging and Sensing 2018, Vol. 10494 (International Society for Optics and Photonics, 2018), p. 104944U.

40. D. Allman, A. Reiter, and M. A. L. Bell, “A machine learning method to identify and remove reflection artifacts in photoacoustic channel data,” in 2017 IEEE International Ultrasonics Symposium (IUS) (IEEE, 2017), pp. 1–4.

41. D. Allman, A. Reiter, and M. A. L. Bell, “Photoacoustic source detection and reflection artifact removal enabled by deep learning,” IEEE Trans. Med. Imaging 37(6), 1464–1477 (2018). [CrossRef]  

42. A. Reiter and M. A. L. Bell, “A machine learning approach to identifying point source locations in photoacoustic data,” in Photons Plus Ultrasound: Imaging and Sensing 2017, Vol. 10064 (International Society for Optics and Photonics, 2017), p. 100643J.

43. D. Waibel, J. Gröhl, F. Isensee, T. Kirchner, K. Maier-Hein, and L. Maier-Hein, “Reconstruction of initial pressure from limited view photoacoustic images using deep learning,” in Photons Plus Ultrasound: Imaging and Sensing 2018, Vol. 10494 (International Society for Optics and Photonics, 2018), p. 104942S.

44. S. Antholzer, M. Haltmeier, and J. Schwab, “Deep learning for photoacoustic tomography from sparse data,” Inverse Probl. Sci. Eng. 27(7), 987–1005 (2019). [CrossRef]  

45. M. Mozaffarzadeh, A. Mahloojifar, M. Orooji, S. Adabi, and M. Nasiriavanaki, “Double-stage delay multiply and sum beamforming algorithm: Application to linear-array photoacoustic imaging,” IEEE Trans. Biomed. Eng. 65(1), 31–42 (2018). [CrossRef]  

46. M. Mozaffarzadeh, A. Mahloojifar, M. Orooji, K. Kratkiewicz, S. Adabi, and M. Nasiriavanaki, “Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm,” J. Biomed. Opt. 23(02), 1 (2018). [CrossRef]  

47. P. Omidi, M. Zafar, M. Mozaffarzadeh, A. Hariri, X. Haung, M. Orooji, and M. Nasiriavanaki, “A novel dictionary-based image reconstruction for photoacoustic computed tomography,” Appl. Sci. 8(9), 1570 (2018). [CrossRef]  

48. G. Salton and C. Buckley, “Term-weighting approaches in automatic text retrieval,” Inf. Process. Manage. 24(5), 513–523 (1988). [CrossRef]  

49. M. Kwak, G. Leroy, J. D. Martinez, and J. Harwell, “Development and evaluation of a biomedical search engine using a predicate-based vector space model,” J. Biomed. Inf. 46(5), 929–939 (2013). [CrossRef]  

50. H.-C. Cho, L. Hadjiiski, B. Sahiner, H.-P. Chan, M. Helvie, C. Paramagul, and A. V. Nees, “Similarity evaluation in a content-based image retrieval (cbir) cadx system for characterization of breast masses on ultrasound images,” Med. Phys. 38(4), 1820–1831 (2011). [CrossRef]  

51. L. Mohammadi, R. Manwar, H. Behnam, J. Tavakkoli, and M. R. N. Avanaki, “Skull’s aberration modeling: towards photoacoustic human brain imaging,” in Photons Plus Ultrasound: Imaging and Sensing 2019, Vol. 10878 (International Society for Optics and Photonics, 2019), p. 108785W.

52. L. Mohammadi, H. Behnam, J. Tavakkoli, and M. Nasiriavanaki, “Skull’s acoustic attenuation and dispersion modeling on photoacoustic signal,” in Photons Plus Ultrasound: Imaging and Sensing 2018, Vol. 10494 (International Society for Optics and Photonics, 2018), p. 104946K.

53. L. Mohammadi, H. Behnam, and M. Nasiriavanaki, “Modeling skull’s acoustic attenuation and dispersion on photoacoustic signal,” in Photons Plus Ultrasound: Imaging and Sensing 2017, Vol. 10064 (International Society for Optics and Photonics, 2017), p. 100643U.

54. B. Pourebrahimi, A. Al-Mahrouki, J. Zalev, J. Nofiele, G. J. Czarnota, and M. C. Kolios, “Classifying normal and abnormal vascular tissues using photoacoustic signals,” in Photons Plus Ultrasound: Imaging and Sensing 2013, Vol. 8581 (International Society for Optics and Photonics, 2013), p. 858141.

55. B. Pourebrahimi, M. C. Kolios, and G. J. Czarnota, “Method for classifying tissue response to cancer treatment using photoacoustics signal analysis,” (2014). US Patent App. 14/169,421.

56. G. Clement, P. White, and K. Hynynen, “Enhanced ultrasound transmission through the human skull using shear mode conversion,” J. Acoust. Soc. Am. 115(3), 1356–1364 (2004). [CrossRef]  

57. D. L. Lee, H. Chuang, and K. Seamons, “Document ranking and the vector-space model,” IEEE Softw. 14(2), 67–75 (1997). [CrossRef]  

58. B. Boashash, Time-frequency Signal Analysis and Processing: A Comprehensive Reference (Academic Press, 2015).

59. B. Petrovic and S. Parolai, “Joint deconvolution of building and downhole strong-motion recordings: Evidence for the seismic wavefield being radiated back into the shallow geological layers,” Bull. Seismol. Soc. Am. 106(4), 1720–1732 (2016). [CrossRef]  

60. P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl, “Grouplens: an open architecture for collaborative filtering of netnews,” in Proceedings of the 1994 ACM conference on Computer supported cooperative work, (1994), pp. 175–186.

61. J. Herlocker, J. A. Konstan, and J. Riedl, “An empirical analysis of design choices in neighborhood-based collaborative filtering algorithms,” Inf. Retrieval 5(4), 287–310 (2002). [CrossRef]  



Figures (6)

Fig. 1. 2-D illustration of a simple model of the numerical skull phantom used in our simulations. The initial pressure point source is modeled as a sphere with radius $r_0$ at a depth of $d$ from the outer-skull surface. A single-layer homogenized skull tissue with a thickness varying between 0.5 mm and 2.5 mm is considered. The solid acceptance angle of the transducer is indicated by the dashed lines.
Fig. 2. Skull-induced aberration compensation of PA signals produced from a 0.1 mm absorbing sphere, passing through a 1 mm-thick skull tissue, located at depths of (a) 2 mm, (b) 8 mm, and (c) 20 mm. (i) Signal amplitudes and (ii) signal gradients. In this simulation, there is a 5 mm layer of ultrasound gel between the ultrasound transducer and the skull.
Fig. 3. Skull-induced aberration compensation of PA signals produced from a 0.1 mm absorbing sphere located at a depth of 5 mm, passing through a skull tissue with thicknesses of (a) 0.5 mm, (b) 1 mm, and (c) 2 mm. (i) Signal amplitudes and (ii) signal gradients. In this simulation, there is a 5 mm layer of ultrasound gel between the ultrasound transducer and the skull.
Fig. 4. Skull-induced aberration compensation of PA signals produced from a 0.1 mm absorbing sphere passing through a skull tissue. (a) Skull thickness of 2 mm, target at 5 mm depth, PA signal contaminated with 10% background noise; (b) skull thickness of 2 mm, target at 5 mm depth, 20% background noise; (c) skull thickness of 1 mm, target at 20 mm depth, 10% background noise; (d) skull thickness of 1 mm, target at 20 mm depth, 20% background noise. (i) Signal amplitudes and (ii) signal gradients. In this simulation, there is a 5 mm layer of ultrasound gel between the ultrasound transducer and the skull.
Fig. 5. Aberration correction of TsPAM images. (a) Synthetic TsPAM image acquired from the setup depicted in Fig. 1: (i) unaberrated image, (ii) aberrated image, (iii) compensated image. (b) A representative depth profile, indicated with green dotted lines in the images in (a), of the unaberrated, aberrated, and compensated images.
Fig. 6. Aberration correction of noisy TsPAM images, reproduced from the TsPAM image in Fig. 5, (a) with 10% noise and (b) with 20% noise. (i) Unaberrated image, (ii) aberrated image, and (iii) compensated image.

Tables (3)

Table 1. Acoustic properties of skull, brain soft tissue, and the ultrasound gel used in the simulations.^a

Table 2. Recovery percentage calculated for the compensated original signal, and for noisy signals with 10% and 20% noise.

Table 3. Non-flat skull aberration compensation: skull thickness variation (Δh), transmitted signal percentage, and recovery percentage of the distorted signals generated with non-flat skulls angled from 5° to 30° relative to the transducer axis.

Equations (4)


cos(fij,fqj)=fij.fqjfij.fqj=Σk=1nfij,k.fqj,kΣk=1nfij,k2.Σk=1nfqj,k2
Similarity(fij,fqj)=arccos(Σk=1nfij,k.fqj,kΣk=1nfij,k2.Σk=1nfqj,k2)
Similarity(q,di)=12(Similarity(Ti,Tq)+Similarity(Ai,Aq)
recovery_ percentage (%)=(1l2(signalcompsignalwithout)l2(signalwithout))×100
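The similarity and recovery measures above can be sketched numerically. The following is a minimal Python illustration, assuming NumPy arrays for the feature vectors and signals; the function names are ours, introduced only for this sketch, not taken from the paper:

```python
import numpy as np

def vector_similarity(f_i, f_q):
    """Angular distance between two feature vectors: the arccosine of
    their cosine similarity. Smaller angles indicate greater similarity."""
    f_i = np.asarray(f_i, dtype=float)
    f_q = np.asarray(f_q, dtype=float)
    cos_theta = np.dot(f_i, f_q) / (np.linalg.norm(f_i) * np.linalg.norm(f_q))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def combined_similarity(T_i, T_q, A_i, A_q):
    """Average of the angular similarities computed on two feature sets
    (e.g., time-domain and amplitude-spectrum vectors)."""
    return 0.5 * (vector_similarity(T_i, T_q) + vector_similarity(A_i, A_q))

def recovery_percentage(signal_comp, signal_without):
    """Recovery of the compensated signal relative to the skull-free
    reference, based on the l2 norm of the residual."""
    signal_comp = np.asarray(signal_comp, dtype=float)
    signal_without = np.asarray(signal_without, dtype=float)
    residual = np.linalg.norm(signal_comp - signal_without)
    return (1.0 - residual / np.linalg.norm(signal_without)) * 100.0
```

A perfectly compensated signal yields a recovery of 100%, and identical feature vectors yield an angular similarity of zero.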