Optica Publishing Group

Sparse sampling and reconstruction for an optoacoustic ultrasound volumetric hand-held probe

Open Access

Abstract

Accurate anatomical localization of functional information is the main goal of hybridizing optoacoustic and ultrasound imaging, with the promise of early-stage diagnosis and insight into disease pathophysiology. The integration of optoacoustics into ultrasound is relatively mature for clinical two-dimensional imaging; however, the complexity of biological samples places particular demands on volumetric measurement and reconstruction. This integration is a multi-fold challenge that is mainly associated with the system geometry, the sampling, and the beam quality. In this study, we evaluated design geometries for a sparse ultrasonic hand-held probe, of the kind commonly used for three-dimensional imaging of anatomical deformation, to incorporate three-dimensional optoacoustic physiological information. We explored the imaging performance of three unconventional annular geometries; namely, segmented, spiral, and circular. To avoid biased evaluation, two classes of algorithms, analytical and model-based, were used. The superior performance of the segmented annular array for recovery of the true object is demonstrated. Combined with the model-based approach, this geometry offers spatially invariant resolution for the optoacoustic mode over the given field of view. The analytical approach, on the other hand, is computationally less expensive and is the method of choice for ultrasound imaging. Our design can potentially evolve into a valuable diagnostic tool, particularly for vascular-related disease.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

It has become increasingly clear that a comprehensive understanding of morphogenesis and disease requires three-dimensional (3D) measurements of tissue structure. An effective 3D imaging technique that can report on multi-scale targets in a time-resolved manner is crucial for achieving this goal. Although ultrasound imaging is a main tool for clinical examination, the current requirement for 3D imaging poses difficult new challenges with respect to the design pattern [1, 2]. Additionally, due to the insignificant echogenicity of microstructures, physiological changes are almost invisible under ultrasound B-mode imaging [3]. By taking advantage of optical contrast, optoacoustic imaging can detect this type of information and translate it into ultrasound waves [4–7]. More recently, the combination of the two has provided the potential for functional imaging in a range of clinical applications [8–10], such as angiogenesis [11], hemodynamics [12], atheroma [13], oncology [14], thyroidology [15] and hepatology [16], to name but a few. However, for volumetric imaging, this combination has several limitations [17, 18]. The low signal-to-noise ratio (SNR) of optoacoustic signals might require relatively large (>10λ) transducers [19–21], whereas beam-forming in ultrasound imaging demands transducers on the order of the central wavelength (λ) [22]. Both modalities require large apertures, obtained through a sparse distribution of a limited number of elements [23, 24]. Until recently, the Shannon-Nyquist theorem was the dominant choice for digital sampling, dictating a regularly spaced sensing pattern with a minimum rate of twice the central frequency. Following the theory of compressed sensing, it was shown that this line of thought can be obviated at the acquisition site [25–27]. The geometrical distribution of the transducers (elements) plays a role similar to that of the sampling pattern in compressed sensing.
To take advantage of this framework, careful engineering must be implemented, in terms of a sensing mechanism that emulates a randomized incoherent sampling pattern. Hence, by applying the three principles of incoherence, sparsity, and random subsampling, a drastic reduction in the number of transducers can be expected with confidence in the reconstruction fidelity. Indeed, sparsity is exploited through a proper reconstruction algorithm, where the recovery of the important coefficients is guaranteed.
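The reduction promised by these principles can be illustrated with a minimal sketch, assuming a generic random Gaussian sensing matrix as a stand-in for an incoherent aperture (all sizes are hypothetical and unrelated to the probe): an 8-sparse vector of length 200 is recovered from only 80 measurements by iterative soft-thresholding (ISTA).

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "object": 200 coefficients, only 8 nonzero.
K, n_meas, n_nonzero = 200, 80, 8
x_true = np.zeros(K)
support = rng.choice(K, n_nonzero, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, n_nonzero)

# Random Gaussian matrix emulating an incoherent, randomly subsampled
# acquisition: n_meas << K measurements, far below Shannon-Nyquist.
A = rng.standard_normal((n_meas, K)) / np.sqrt(n_meas)
b = A @ x_true

# ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1: gradient step + shrinkage.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(K)
for _ in range(3000):
    x -= step * (A.T @ (A @ x - b))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

With an incoherent pattern the support is recovered almost exactly; a highly coherent matrix would degrade the recovery, which is the intuition behind the element-placement question studied here.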

In our companion paper [28], we presented the conceptual framework for designing a bimodal array, and discussed the performance of three sampling patterns (Fig. 1). Here, we complement our original study by providing reconstruction-based analysis and further validation on simulated data. We investigate the fidelity of the reconstructed features to the object, which is associated with the merit of the reconstruction algorithm. In practical terms, the use of inverse problems in array imaging forms a deconvolution problem, which faces the challenges of ill-posedness [29, 30] and large scale [31–33]. Given the partial view angle of the hand-held probe, the forward model is rank-deficient [28], and thus effective regularization must be incorporated. In this paper, we seek the answer in a Krylov subspace, in which a hybrid regularization method combines a few steps of conjugate gradient least squares (CGLS) with total-variation (TV) penalization. In general, the success of sparsity-based reconstruction appears to be tied to incoherence and to the restricted isometry property [26]. However, to the best of our knowledge, there are no useful theoretical guarantees for compressed sensing in acoustic-array imaging, specifically with respect to the sampling pattern. Here, we pursued an empirical method to establish an intuitive link between the sampling pattern and the uniqueness of the solution. To avoid biased evaluation, we compared our results with a modified version of back-projection developed in our earlier study [34], here referred to as virtual element weighted delay and sum back-projection (VE-WDSBP). As the hand-held probe is far from an ideal imaging system, mainly due to the associated limited angle of view, spatial under-sampling, and finite element size, the potential of both reconstruction algorithms to deal with nonideal imaging scenarios is investigated in detail. The results are evaluated in terms of reconstruction quality, achievable resolution, and quantitative metrics.


Fig. 1 The geometrical distribution of the 128 elements (convex transducers). The recorded signal for the large elements equates to the ensemble signals recorded by λ/2 sub-elements of the same size, which constitutes the sampling pattern.


2. Image reconstruction problem

In general, acoustic image reconstruction involves estimating the inner structure of an object from emitted or reflected waves. These waves are recorded by transducers at several positions, in the form of time-gated radiofrequency signals. Basically, each recorded signal equates to a projection in the tomographic set-up; the problem can thus be formulated in least-squares form (Ax = b), and the inverse problem solved to recover the acoustic (or optical) properties of the tissue.

2.1. Problem formulation

A linear discrete forward model for an idealized discrete-time linear shift-variant system can be derived by modeling a spherical acoustic wave for each voxel-transducer pair [35]. The sources can be assumed to be uncorrelated bipolar (optoacoustic) or monopolar (ultrasound), and are arranged in a lattice pattern with spacings at the diffraction limit. In general, the recorded pressure in a linear system can be approximated by the convolution of three terms: the acoustic impulse response (AIR), the spatial impulse response (SIR), and the source signal itself [28]. Together, the AIR and the SIR represent a spatially variant low-pass filter, otherwise known as the transfer function. Basically, it relates the acoustic wave of the kth source S_k to the signal U_rec recorded through the nth element of the array:

\[ U_{rec}(r,t) = \sum_{r \in FoV} h_{AIR}(t) * h_{SIR}(r,t) * S(r_s - r_n), \]
\[ h_{IR} = h_{AIR}(t) * h_{SIR}(r_s - r_n, t), \]
where the impulse response h_IR is sampled temporally and discretized to consist of L temporal samples. Expanding the formula for every discretized point within the aperture field of view (FoV) yields a time-discrete matrix representation of the above formula:
\[ U_{rec}^{n} = M \, S_k^{x,y,z}, \]
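As a concrete illustration of this matrix form, the sketch below assembles a toy operator M by tabulating, for each voxel-transducer pair, an ideal delta impulse at the time of flight with 1/r spherical decay. The 2-D geometry, sampling rate, and idealized delta response are illustrative assumptions only, not the probe's specification.

```python
import numpy as np

c = 1500.0   # speed of sound [m/s] (assumed)
fs = 4e6     # temporal sampling rate [Hz] (assumed)
L = 128      # temporal samples per channel
N = 16       # transducer elements on a line at z = 0

elems = np.stack([np.linspace(-8e-3, 8e-3, N), np.zeros(N)], axis=1)
# K source voxels on a small lattice at 20 mm depth.
xs = np.linspace(-2e-3, 2e-3, 5)
srcs = np.stack([xs, np.full(5, 20e-3)], axis=1)
K = len(srcs)

# Column k of M: stacked time responses of all N elements to source k.
M = np.zeros((N * L, K))
for k, rs in enumerate(srcs):
    for n, rn in enumerate(elems):
        d = np.linalg.norm(rs - rn)
        idx = int(round(d / c * fs))       # time-of-flight sample index
        if idx < L:
            M[n * L + idx, k] = 1.0 / d    # delta arrival with 1/r decay

s = np.ones(K)       # unit-amplitude sources S
u_rec = M @ s        # stacked recorded signals U_rec = M S
print(M.shape, u_rec.shape)
```

A realistic model would replace the delta entries with the sampled AIR and SIR of each element, but the stacking of N × L temporal samples against K voxels is the same.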

For the array of N transducers, the received signals represent the response of the transducers to an ensemble of incident waves that emanate from K sources within the FoV, with a vector representation of Skx,y,z=[S1,S2,...,SK]T. Similarly, the recorded signals of length L can be represented in the form of Urecn=[Urec1,Urec2,...,UrecN]T. The forward model M(N×L)×K describes the spatiotemporal response of the transducers to a set of sources. Once the relationship between the sources and the recorded signals has been modeled via a linear and discrete imaging operator M, diffraction-limited source localization is possible by least-squares inversion:

\[ \hat{S} = \arg\min_{S} \| M S - U_{rec} \|_2^2, \]
\[ \hat{S} = M^{\dagger} U_{rec}, \]
where Ŝ is the discretized estimate of the source; viz. the final image. The quality of Ŝ depends profoundly on the properties of M, and thence on the aperture. For instance, it was shown that the number of projections is strictly bound to the number of elements, and that the frequency response of the transducers influences the spatial and temporal resolution [36]. However, assessing the quality of an imaging system solely on the basis of the final image is not a flawless approach, as the inversion step can be deceptive. Due to the limited number of elements, the forward model or sensing matrix can be 'fat' (i.e., more columns than rows). Therefore, M is a nonsquare matrix with a large condition number, which further increases the ill-posedness of the inversion, such that M^{-1}e ≫ S_exact, thus hiding S_exact among the inverted errors (e). To cope with this issue, regularization can be imposed to reduce the sensitivity of the solution to error and to compute a stable solution. However, when the variance of the error is not known, the choice of method (e.g., generalized cross validation, normalized cumulative periodogram, L-curve, etc.) for finding the optimal regularization parameter would bias the results [37]. Regularization by projection is yet another approach, in which the solution to the least-squares problem is restricted to lie in a low-dimensional subspace W_κ (i.e., minimizing ||Ax − b||_2 subject to x ∈ W_κ). Theoretically, the subspace W_κ is spanned by vectors that represent the desirable features of the regularized solution. A particular example of this type is the truncated SVD (TSVD), where the κ-dimensional subspace is the space spanned by the first κ right singular vectors v:
\[ \mathrm{TSVD}: \quad W_\kappa = \mathrm{span}\{ v_1, v_2, \ldots, v_\kappa \} \]

Knowing that the solution exists in this subspace, with the requirement that S = W_κ z, the regularized solution can be expressed as a projection problem:

\[ S^{(\kappa)} = W_\kappa z^{(\kappa)}, \]
\[ z^{(\kappa)} = \arg\min_{z} \| (M W_\kappa) z - U_{rec} \|_2 \]

For small κ, MW_κ can be calculated explicitly to solve the projected least-squares problem, and hence to achieve a low-rank approximation [38]. The advantage of the SVD basis is that it adjusts itself to the problem by adapting to the matrix M, although the SVD has limitations, including its computational cost. Another equally comprehensive yet computationally attractive subspace is the Krylov subspace, which is defined as:

\[ \mathrm{Krylov}: \quad \mathcal{K}_\kappa = \mathrm{span}\{ M^T U_{rec},\; (M^T M) M^T U_{rec},\; \ldots,\; (M^T M)^{\kappa-1} M^T U_{rec} \}, \]

A Krylov subspace of maximum dimension κ can adapt itself fully to the case at hand by incorporating the information of both quantities M and U_rec. Let us acknowledge that the linear least-squares functional is associated with the so-called normal equations of the form M^T M S = M^T U_rec, with M^T M a Hermitian positive semi-definite matrix. This property allows us to use conjugate gradients to solve the least-squares problem. By applying κ steps of conjugate-gradient iteration to the normal equations, S^{(κ)} is obtained. The use of CGLS to compute regularized solutions in the Krylov subspace K_κ is referred to as regularizing iterations [37]. Intuitively, CGLS constructs a polynomial approximation to the regularized pseudo-inverse of M. This intrinsic polynomial explains why CGLS converges faster than SVD-based regularization. When an SVD component (u_i^T U_rec) is large, with u_i the ith left singular vector, CGLS automatically constructs a polynomial whose roots lie near the corresponding eigenvalues (σ) of the projection of M^T M onto K_κ. This acts as a filter factor: by enforcing 'near-σ_i roots', the large SVD components (u_i^T U_rec)^2 are knocked out [39]. Therefore, the solution space is reduced to a subspace of small dimensionality. The downside is that CGLS also accommodates the error correlated with the recorded data U_rec, which can lead to false estimates (e.g., artifacts).
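These regularizing iterations can be sketched with a bare-bones CGLS on the normal equations, stopped after κ steps; the toy matrix below stands in for the imaging operator M and is purely illustrative.

```python
import numpy as np

def cgls(M, b, n_iter):
    """CGLS on the normal equations M^T M s = M^T b; stopping after a few
    iterations restricts the solution to the Krylov subspace K_kappa."""
    s = np.zeros(M.shape[1])
    r = b.copy()                 # residual b - M s (s starts at zero)
    g = M.T @ r
    p = g.copy()
    gnorm = g @ g
    for _ in range(n_iter):
        q = M @ p
        alpha = gnorm / (q @ q)
        s += alpha * p
        r -= alpha * q
        g = M.T @ r
        gnorm_new = g @ g
        p = g + (gnorm_new / gnorm) * p
        gnorm = gnorm_new
    return s

rng = np.random.default_rng(2)
M = rng.standard_normal((60, 120))          # fat, ill-posed toy model
s_true = np.zeros(120)
s_true[[20, 60, 100]] = 1.0
b = M @ s_true + 0.05 * rng.standard_normal(60)

s_k = cgls(M, b, 10)                        # kappa = 10 regularizing steps
print(np.linalg.norm(M @ s_k - b) / np.linalg.norm(b))
```

The residual drops quickly in the first iterations, as the large SVD components are captured; iterating far beyond κ would start fitting the noise in U_rec, which is the semi-convergence behavior noted above.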

2.2. Total-variation minimization

The use of an accurate forward model is crucial for iterative reconstruction algorithms, although the choice of the regularization method is of critical importance as well. The aim of regularization is to make the problem well-posed by constraining the estimate with a prior, based on statistical assumptions about the error. The Hessian matrix M^T M is poorly conditioned, which in turn results in very slow convergence and non-unique solutions [40]. As the sampling transducers are limited in number, M is an underdetermined matrix that violates Hadamard's definition of well-posedness. Thus, an additional constraint is crucial to estimate the 'best' candidate. A well-known technique is to impose a linear transformation to precondition the linear least-squares problem. The conjugate gradient can be preconditioned by classical Tikhonov or other forms of l2 regularization in order to handle the error. These quadratic techniques are more suitable when the object tends to be smooth, as high-frequency variations are penalized. To tackle the undersampled measurements in a much more efficient way, nonstandard data-fidelity terms need to be incorporated [41]. Moreover, in practical computations, the estimate Ŝ is indistinguishable from any Ŝ + e whenever the norm of the perturbation e falls below the round-off error. These challenges can be addressed by external regularization with nonquadratic penalties that incorporate different types of a priori knowledge, such as sparsity and nonnegativity. The remarkable gradient-based sparsifying characteristics of TV allow a piece-wise constant, l1-type recovery [42]. TV overcomes the limitation of Tikhonov regularization by preserving sharp edges.

\[ \mathrm{TV}(S) = \| \, |\nabla S| \, \|_{\epsilon,1} = \sum_i \sqrt{ (\Delta_i^x S)^2 + (\Delta_i^y S)^2 + \epsilon^2 } \]
where Δ_i^x and Δ_i^y correspond to the horizontal and vertical first-order differences at the ith pixel. The superior performance of TV over the l1 norm (l1 = Σ_i |Δ_i^x S| + |Δ_i^y S|) has been demonstrated in many applications of compressed sensing [43–45]. The advantage lies in the penalization: discontinuities, such as sharp edges, are preserved but are not necessarily preferred over smooth regions. Instead, their presence hinges upon the detected signals. By applying TV to the Krylov subspace, an inexact (although reliable) solution can be found in the restricted Krylov subspace [46]:
\[ z^{(\kappa)} = \arg\min_{z} \| (M W_\kappa) z - U_{rec} \|_2 + \lambda_{TV} \, \mathrm{TV}(z) \]
that satisfies:
\[ M^T M S + \lambda_{TV} \, \nabla \mathrm{TV}(S) = M^T U_{rec} \]
where the regularization parameter (λ_TV) balances the trade-off between data fidelity (the first term) and regularization (the second term), and is chosen heuristically. To the best of our knowledge, there are no general criteria, such as the L-curve, for estimating the TV regularization parameter. Moreover, the nonquadratic properties of the TV norm hinder its integration with the linear conjugate gradient [47]. To solve the above equation, we followed the so-called superiorization approach [48, 49], in which the penalty term is replaced by perturbation steps between iterations that shift the path toward an intermediate solution [50]. This shift is subjected to a second criterion, defined by the TV. Here we implemented the S-CG-K pseudo-code of Ref. [49] in Matlab.
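A minimal one-dimensional sketch of the superiorization idea (not the S-CG-K implementation of Ref. [49]): between data-consistency gradient steps, the iterate is perturbed along the descent direction of a smoothed TV penalty, with perturbation sizes shrinking over the iterations. All sizes and step rules below are illustrative assumptions.

```python
import numpy as np

def tv_grad(s, eps=1e-3):
    """Gradient of the smoothed 1-D TV penalty sum_i sqrt((Ds)_i^2 + eps^2)."""
    d = np.diff(s)
    w = d / np.sqrt(d ** 2 + eps ** 2)
    g = np.zeros_like(s)
    g[:-1] -= w
    g[1:] += w
    return g

rng = np.random.default_rng(3)
# Piecewise-constant object measured by an underdetermined noisy model.
s_true = np.zeros(100)
s_true[30:60] = 1.0
M = rng.standard_normal((50, 100))
b = M @ s_true + 0.02 * rng.standard_normal(50)

s = np.zeros(100)
beta = 0.2                                   # initial perturbation size
step = 1.0 / np.linalg.norm(M, 2) ** 2       # stable gradient step
for _ in range(300):
    s -= beta * tv_grad(s)                   # TV-descent perturbation
    beta *= 0.99                             # shrinking perturbations
    s -= step * (M.T @ (M @ s - b))          # data-consistency step

print(np.linalg.norm(M @ s - b) / np.linalg.norm(b))
```

Because the perturbation sizes are summable and shrink, the data-consistency steps dominate asymptotically, while the TV steps steer the iterates toward piecewise-constant candidates.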

There are two questions that need to be addressed here: (a) can S be represented sparsely in the formed basis; and (b) does the enforced sparsity suit the sampling patterns? The answer might lie in the principle of incoherence, which demands a sampling pattern in which the induced artifacts spread through the sparse domain such that their corresponding coefficients can be easily penalized by λ_TV [51]. For the given sampling patterns, we compare the performance of this approach with a variation of the gold-standard analytical back-projection.

2.3. Virtual element weighted back-projections

Alternatively, one can back-project the recorded pressure signals with respect to the time of flight over a spherical polygon centered at the detector element. For ultrasound imaging, the time of flight from the center of the transmitting element at r_t is added accordingly. When large defocused elements (i.e., convex surfaces, negative lenses) are used, the concept of the virtual element must be applied as well. Briefly, an extra distance corresponding to the radius of curvature of the defocused element is added, and a weighting with respect to the calculated directivity is applied on the polygon [52]. To form the image, the previous step is repeated for every receiving element or sub-aperture associated with a projection. Finally, all of the projections are compounded coherently on a voxel-by-voxel basis to perform synthetic focusing. The quality of this localization is conditional on the λ/2 inter-element spacing, the active area of the transducer, and the number of angular projections. However, not all of these conditions can be met flawlessly for a sparse aperture. Consequently, the back-projection estimate suffers from reconstruction artifacts. In our previous study [34], we suggested a modified weighting factor W that penalizes these artifacts in an adaptive manner. The performance of this approach is highly dependent on the number of projections, and it might suit ultrasound techniques better than optoacoustics. For the sake of completeness, we summarize the algorithm as follows:

  1. Back-project the recorded signals for each element (or sub-aperture);
  2. Repeat step 1 for every virtual sub-element of size λ/2;
  3. Apply the inverted SIR map to compensate for the imposed directivity;
  4. Coherently compound the projections associated with every sub-aperture;
  5. Calculate and apply the adaptive weighting factor to improve the quality of the image in terms of the SNR, contrast, and resolution.
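The core delay-and-sum steps (items 1, 2, and 4 above) can be sketched for idealized point sub-elements; the SIR compensation and adaptive weighting of items 3 and 5 are omitted, and the geometry and sampling values are hypothetical.

```python
import numpy as np

c, fs, L = 1500.0, 40e6, 1024    # speed of sound, sampling rate, samples

# One point source at 10 mm depth, recorded by 8 idealized point
# receivers (stand-ins for the virtual lambda/2 sub-elements).
elems = np.stack([np.linspace(-4e-3, 4e-3, 8), np.zeros(8)], axis=1)
src = np.array([0.0, 10e-3])
sig = np.zeros((8, L))
for n, rn in enumerate(elems):
    tof = np.linalg.norm(src - rn) / c
    sig[n, int(round(tof * fs))] = 1.0       # ideal delta arrival

# Back-project each channel onto the depth line x = 0 and compound
# coherently, voxel by voxel (synthetic focusing).
zs = np.linspace(5e-3, 15e-3, 201)
img = np.zeros_like(zs)
for n, rn in enumerate(elems):
    d = np.sqrt(rn[0] ** 2 + zs ** 2)        # element-to-voxel distance
    idx = np.clip(np.round(d / c * fs).astype(int), 0, L - 1)
    img += sig[n, idx]

print(zs[np.argmax(img)])                    # peak at the true 10 mm depth
```

For a defocused physical element, the virtual-element correction would add the radius-of-curvature offset to each time of flight before this compounding step.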

2.4. Figures of merit

Here we take advantage of commonly used image metrics to quantify the quality of the visual information. Meaningful visual information is conveyed by contrast. This is particularly important in ultrasound imaging, specifically for anechoic regions such as cysts and blood vessels, where off-axis clutter noise and phase aberrations complicate the anatomical measurements [53, 54]. Contrast resolution is commonly quantified by the contrast-to-noise ratio (CNR), which indicates the relation between lesion detectability, object contrast, and acoustic noise (e.g., speckle variance, acoustic clutter):

\[ \mathrm{CNR} = \frac{ S_i - S_o }{ \sqrt{ \sigma_i^2 + \sigma_o^2 } } \]
where S_i and S_o are the spatial means, and σ_i and σ_o the standard deviations, of the log image inside and outside the lesion, respectively. Additionally, we considered other metrics, such as the range of the full width at half maximum (FWHM) for each measured point spread function, the peak SNR (PSNR), and the structural similarity (SSIM) index. We used the point source/reflector images to compute the local point spread function. As the point spread function is a measure of the intensity distribution, the −6 dB level is often used in logarithmic-scale images. We studied the axial plane (C-scan) for optoacoustic imaging and the lateral plane (B-mode) for ultrasound imaging. Multiple point reflectors (for ultrasound) and point sources (for optoacoustics) were positioned within the FoV as a measure of the point spread function of the system. Certain structural artifacts occur due to the compressed sensing and reconstruction processes. Along with the FWHM, the PSNR and the SSIM provide additional information for system performance evaluation [55–57]. The PSNR is a common image quality assessment metric, and is defined as:
\[ \mathrm{PSNR}(f,g) = 20 \log_{10} \frac{ \mathrm{MAX}_I }{ \sqrt{ \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} (f_{ij} - g_{ij})^2 } } \]
where MAX_I is the maximum possible value of the image, and f and g are the reference and test images, respectively. A higher PSNR indicates less distortion and higher reconstruction quality. However, the PSNR is not sensitive to the spatial relationships between pixels. Another widely used metric, the SSIM, assesses the image quality based on the distortion of the structural information:
\[ \mathrm{SSIM}(f,g) = \frac{ (2\mu_f\mu_g + c_1)(2\sigma_{fg} + c_2) }{ (\mu_f^2 + \mu_g^2 + c_1)(\sigma_f^2 + \sigma_g^2 + c_2) } \]
where μ_f and μ_g are the average intensities, σ_f^2 and σ_g^2 the variances, and σ_{fg} the covariance of the given images, and c_1 and c_2 are small constants that stabilize the division.
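The three metrics can be computed directly from their definitions; the sketch below uses a synthetic lesion image, and the SSIM follows the single-window formula above rather than the usual locally windowed variant (the constants c1 and c2 are assumed values).

```python
import numpy as np

def cnr(img, mask_in, mask_out):
    """Contrast-to-noise ratio from inside/outside region statistics."""
    si, so = img[mask_in].mean(), img[mask_out].mean()
    return abs(si - so) / np.sqrt(img[mask_in].var() + img[mask_out].var())

def psnr(f, g, max_i=1.0):
    """Peak SNR in dB between reference f and test g."""
    mse = np.mean((f - g) ** 2)
    return 20.0 * np.log10(max_i / np.sqrt(mse))

def ssim_global(f, g, c1=1e-4, c2=9e-4):
    """Single-window SSIM following the formula above."""
    mf, mg = f.mean(), g.mean()
    vf, vg = f.var(), g.var()
    cov = np.mean((f - mf) * (g - mg))
    return ((2 * mf * mg + c1) * (2 * cov + c2)) / (
        (mf ** 2 + mg ** 2 + c1) * (vf + vg + c2))

rng = np.random.default_rng(4)
ref = np.zeros((64, 64))
ref[24:40, 24:40] = 1.0                       # bright square "lesion"
test = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)

inside = ref > 0.5
print(cnr(test, inside, ~inside), psnr(ref, test), ssim_global(ref, test))
```

An identical pair of images yields SSIM = 1 exactly, while additive noise lowers all three metrics in the expected directions.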


Fig. 2 Evaluation of the B-mode performance of the three virtual arrays based on contrast and resolution. (a) The images of the anechoic cysts. (b) The CNR values based on the white (cysts) and red (background) frames. (c) The VE-WDSBP reconstructed B-mode images of the point reflectors. (d) The lateral profiles of the reconstructed images for each of the arrays at a range of 20 mm.


3. Results

A natural question is whether the designed arrays, represented in Fig. 1, can deliver high-quality images. The two introduced reconstruction algorithms were used to evaluate the performance of the arrays in terms of the quality of the detected signals and artifacts, and to characterize the data fidelity.

3.1. Ultrasound

To explore the achievable ultrasound resolution with the proposed sampling patterns, we simulated the total focusing method, in which the signals from every transmit-receive pair are processed [58]. Here, we investigate only the performance of the VE-WDSBP algorithm, as the forward model is relatively large [33] and requires a powerful computer [31]. Model-based reconstruction is thus deferred to later studies.

Figure 2(a) illustrates the B-mode reconstruction of the numerical phantom, which contains three anechoic cysts of 3 mm diameter, using the VE-WDSBP method. The CNR of each image was calculated, considering the speckle background as signal-dependent noise. Even though the annular circular array shows better contrast (Fig. 2(b)), the segmented annular array produces a uniform energy distribution and speckle shape across the entire FoV. Figure 2(c) depicts the achievable resolution and the dynamic range (main-lobe to side-lobe ratio) for each array. The simulated phantom contains a set of point reflectors with unity amplitude distributed over a region of 16 mm by 32 mm in the y = 0 plane. The superior performance of the segmented annular array in obtaining an isotropic spatial response, precise localization, and a slightly better dynamic range (1-5 dB) is evident in Fig. 2(d). The lateral resolution is the same for all of the apertures, in the range of 223-229 μm.


Fig. 3 Optoacoustic imaging performance of the three virtual arrays for retrieving the point sources (point spread function) situated in the axial plane at a depth of z = 20 mm, in an xy plane of 16 × 16 mm2 size. The images were reconstructed using VE-WDSBP (upper row) and model-based CGLS (lower row) with only 10 iterations (10 basis vectors). The color bar is in dB.


3.2. Optoacoustics

Figure 3 shows the optoacoustic imaging performance of the three virtual arrays in C-scan. A set of point sources situated at a depth of 20 mm are distributed equidistantly (1.5 mm apart) within an axial plane of 16 × 16 mm2, parallel to the surface of the arrays. The C-scan reconstruction was carried out using the two methods discussed, VE-WDSBP and CGLS. As these results suggest, only the segmented annular array can retrieve all of the points in the medium, using the CGLS algorithm. The peculiar geometry of the segmented annular array allows every voxel within the volume of interest to be sensed by all of the elements. Figure 3 also suggests that VE-WDSBP is a reliable method only when the FoV is equal to the aperture size, and then with lower resolution than CGLS. Regardless of the reconstruction algorithm, the other two apertures are not capable of isotropic focusing. Indeed, the estimated values for many of the points are below the artifact level.


Fig. 4 Model-based reconstruction using CGLS (left) and CGLS TV (right) for the segmented annular array. Ten and fifty iterations were used for the CGLS and CGLS TV algorithms, respectively.


Figure 4 compares the result of CGLS TV for the segmented annular array with that of CGLS. These results are indicative, and perhaps conclusive, for subjective qualitative assessment. CGLS TV shows a more robust estimation, with no sign of artifacts up to 30 dB.

Table 1 quantifies these observations through a set of metrics as measures of data fidelity, and outlines the range of achievable resolution. Based on these metrics, CGLS TV with the segmented annular array provides the least distortion and the best match with respect to the ground truth. Similarly, the best resolution, with the smallest FWHM, is achieved by CGLS TV for the segmented annular array.


Table 1. Quantitative evaluation of sampling patterns.

4. Discussion and Conclusion

In this paper, we evaluated the effects of the sensing-element patterns suggested by our previous pre-reconstruction analysis on the imaging performance of a sparse bimodal hand-held probe. A major emphasis was put on modifying the image reconstruction algorithms to increase the fidelity of the object estimation, and hence the accuracy of the reconstruction. The two befitting classes of reconstruction algorithm were modified to incorporate the geometric properties of the aperture. An adaptive weighting factor, along with the virtual element concept, was used in the VE-WDSBP algorithm, which allowed us to relax the λ/2 inter-element spacing required in classical sampling. Despite the improved accuracy of the estimation, there are shortcomings in the optoacoustic images (Fig. 3) due to the limited number of available projections. Specifically, the marginal areas suffer the most, due to the limited view angle associated with the individual elements in the aperture. The diffraction imposed by the finite size of the transducer calls for an alternative approach in which the effects of the spatiotemporal filter are nullified. By using the forward model as the imaging operator, the model-based approach offers a more accurate solution, with the results in favor of the segmented annular array. In the reconstruction of the C-scan optoacoustics, CGLS clearly outperforms VE-WDSBP. To increase robustness to outliers and obtain sharper profiles, a form of TV minimization using the superiorization technique was incorporated. The important feature is that the estimation errors can be minimized by perturbing the solution within each iteration. This algorithm remarkably mitigated the artifacts that arise from the limited angle of view of the hand-held probe, as well as from the sparse sampling.

Additionally, VE-WDSBP was used for the ultrasound B-mode images for the sake of computational simplicity, and to avoid the cost of inverting the large total-focusing-method matrix. Considering the valuable diagnostic information carried by speckle in B-mode ultrasound, we noted distinguishably different speckle patterns between the arrays. Initially, we postulated a granular structure as the resolution cells. Nonetheless, the shapes of these coherent interference artifacts are highly associated with the phase of the back-scattered echoes and their averaging over the surface of the aperture. This leaves open the question of whether the structure offered by the annular spiral is better or worse than that of the segmented annular array. While we successfully demonstrated the superior performance of the segmented annular array in terms of resolution, uniform sensitivity, and CNR, the blurring artifact is clearly visible for the marginal points in all of the images. We interpret this as a limitation of the algorithm for FoVs larger than the aperture size, and not as a physical limitation of the array itself.

Based on the reconstructed images, a qualitative and quantitative comparison was made between the three proposed geometries. Overall, the imaging performance of the segmented annular array outweighs the others, considering the resolution, detectability, and uniformity of response for both modalities. In conclusion, this study presents the development of a hand-held probe and the adapted algorithms for volumetric optoacoustic ultrasound imaging. The dynamics of a new geometry for volumetric optoacoustic ultrasound imaging have been assessed. Last but not least, we restricted our analysis to the geometric properties of the arrays. For combined optoacoustic/ultrasound imaging, the segmented annular array provided acceptable performance, and it is expected to have an impact on bimodal portable measurement systems. Further optimization with respect to the transducing properties, and experimental evaluation, are deferred to future studies.

Funding

This work was supported by the People Programme (Marie Curie Actions) of the European Union Seventh Framework Programme FP7/2007-2013 under REA grant agreement no. 317526, within the framework of the OILTEBIA project.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. E. Roux, F. Varray, L. Petrusca, P. Mattesini, C. Cachard, P. Tortoli, and H. Liebgott, “3d diverging waves with 2d sparse arrays: A feasibility study,” 2017 IEEE Int. Ultrason. Symp. (IUS)1–4 (2017).

2. J. Sauvage, M. Flesch, G. Ferin, A. Nguyen-Dinh, J. Poree, M. Tanter, M. Pernot, and T. Deffieux, “A large aperture row column addressed probe for in vivo 4d ultrafast doppler ultrasound imaging,” Phys. Medicine & Biol. 63, 215012 (2018). [CrossRef]  

3. M. Gesnik, K. Blaize, T. Deffieux, J.-L. Gennisson, J.-A. Sahel, M. Fink, S. Picaud, and M. Tanter, “3d functional ultrasound imaging of the cerebral visual system in rodents,” NeuroImage 149, 267–274 (2017). [CrossRef]   [PubMed]  

4. L. R. McNally, M. Mezera, D. E. Morgan, P. J. Frederick, E. S. Yang, I.-E. Eltoum, and W. E. Grizzle, “Current and emerging clinical applications of multispectral optoacoustic tomography (msot) in oncology,” Clin. Cancer Res. 22, 3432–3439 (2016). [CrossRef]   [PubMed]  

5. A. Taruttis, A. C. Timmermans, P. C. Wouters, M. Kacprowicz, G. M. van Dam, and V. Ntziachristos, “Optoacoustic imaging of human vasculature: feasibility by using a handheld probe,” Radiology 281, 256–263 (2016). [CrossRef]   [PubMed]  

6. M. Schwarz, M. Omar, A. Buehler, J. Aguirre, and V. Ntziachristos, “Implications of ultrasound frequency in optoacoustic mesoscopy of the skin,” IEEE Transactions on Med. Imaging 34, 672–677 (2015). [CrossRef]  

7. G. Xu, Z.-x. Meng, J.-d. Lin, C. X. Deng, P. L. Carson, J. B. Fowlkes, C. Tao, X. Liu, and X. Wang, “High resolution physio-chemical tissue analysis: towards non-invasive in vivo biopsy,” Sci. Reports 6, 16937 (2016). [CrossRef]  

8. X. Deán-Ben, E. Merčep, and D. Razansky, “Hybrid-array-based optoacoustic and ultrasound (opus) imaging of biological tissues,” Appl. Phys. Lett. 110, 203703 (2017). [CrossRef]  

9. M. Yang, L. Zhao, X. He, N. Su, C. Zhao, H. Tang, T. Hong, W. Li, F. Yang, L. Lin, B. Zhang, R. Zhang, Y. Jiang, and C. Li, “Photoacoustic/ultrasound dual imaging of human thyroid cancers: an initial clinical study,” Biomed. Opt. Express 8, 3449–3457 (2017). [CrossRef]   [PubMed]  

10. J. Kim, S. Park, Y. Jung, S. Chang, J. Park, Y. Zhang, J. F. Lovell, and C. Kim, “Programmable real-time clinical photoacoustic and ultrasound imaging system,” Sci. Reports 6, 35137 (2016). [CrossRef]  

11. X. L. Deán-Ben and D. Razansky, “Adding fifth dimension to optoacoustic imaging: volumetric time-resolved spectrally enriched tomography,” Light. Sci. & Appl. 3, e137 (2014). [CrossRef]  

12. E. Merčep, X. L. Deán-Ben, and D. Razansky, “Imaging of blood flow and oxygen state with a multi-segment optoacoustic ultrasound array,” Photoacoustics 10, 48–53 (2018). [CrossRef]  

13. M. U. Arabul, M. Heres, M. C. Rutten, M. R. van Sambeek, F. N. van de Vosse, and R. G. Lopata, “Toward the detection of intraplaque hemorrhage in carotid artery lesions using photoacoustic imaging,” J. Biomed. Opt. 22, 041010 (2016). [CrossRef]  

14. A. Oraevsky, B. Clingman, J. Zalev, A. Stavros, W. Yang, and J. Parikh, “Clinical optoacoustic imaging combined with ultrasound for coregistered functional and anatomical mapping of breast tumors,” Photoacoustics 12, 30–45 (2018). [CrossRef]   [PubMed]  

15. A. Dima and V. Ntziachristos, “In-vivo handheld optoacoustic tomography of the human thyroid,” Photoacoustics 4, 65–69 (2016). [CrossRef]   [PubMed]  

16. P. J. van den Berg, R. Bansal, K. Daoudi, W. Steenbergen, and J. Prakash, “Preclinical detection of liver fibrosis using dual-modality photoacoustic/ultrasound system,” Biomed. Opt. Express 7, 5081–5091 (2016). [CrossRef]   [PubMed]  

17. M. K. A. Singh, W. Steenbergen, and S. Manohar, “Handheld probe-based dual mode ultrasound/photoacoustics for biomedical imaging,” Front. Biophotonics for Transl. Medicine pp. 209–247 (2016). [CrossRef]  

18. M. W. Schellenberg and H. K. Hunt, “Hand-held optoacoustic imaging: A review,” Photoacoustics 7, 1 (2018). [CrossRef]   [PubMed]  

19. X. L. Deán-Ben and D. Razansky, “Portable spherical array probe for volumetric real-time optoacoustic imaging at centimeter-scale depths,” Opt. Express 21, 28062–28071 (2013). [CrossRef]  

20. W. Xia, D. Piras, J. C. van Hespen, S. Van Veldhoven, C. Prins, T. G. van Leeuwen, W. Steenbergen, and S. Manohar, “An optimized ultrasound detector for photoacoustic breast tomography,” Med. Phys. 40, 032901 (2013). [CrossRef]  

21. J. A. Guggenheim, J. Li, T. J. Allen, R. J. Colchester, S. Noimark, O. Ogunlade, I. P. Parkin, I. Papakonstantinou, A. E. Desjardins, E. Z. Zhang, and P. C. Beard, “Ultrasensitive plano-concave optical microresonators for ultrasound sensing,” Nat. Photonics 11, 714 (2017). [CrossRef]  

22. A. Ramalli, E. Boni, A. S. Savoia, and P. Tortoli, “Density-tapered spiral arrays for ultrasound 3-d imaging,” IEEE Transactions on Ultrason. Ferroelectr. Freq. Control. 62, 1580 (2015).

23. E. Roux, A. Ramalli, P. Tortoli, C. Cachard, M. C. Robini, and H. Liebgott, “2-D ultrasound sparse arrays multidepth radiation optimization using simulated annealing and spiral-array inspired energy functions,” IEEE Transactions on Ultrason. Ferroelectr. Freq. Control. 63, 2138–2149 (2016). [CrossRef]  

24. P. Wong, I. Kosik, A. Raess, and J. J. Carson, “Objective assessment and design improvement of a staring, sparse transducer array by the spatial crosstalk matrix for 3d photoacoustic tomography,” PloS One 10, e0124759 (2015). [CrossRef]   [PubMed]  

25. E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. on Pure Appl. Math. A J. Issued by Courant Inst. Math. Sci. 59, 1207–1223 (2006). [CrossRef]  

26. B. Roman, A. Bastounis, B. Adcock, and A. C. Hansen, “On fundamentals of models and sampling in compressed sensing,” Preprint (2015).

27. B. Adcock, A. C. Hansen, C. Poon, and B. Roman, “Breaking the coherence barrier: A new theory for compressed sensing,” Forum Math. Sigma 5, 32 (2017). [CrossRef]  

28. M. A. Kalkhoran and D. Vray, “Theoretical characterization of annular array as a volumetric optoacoustic ultrasound handheld probe,” J. Biomed. Opt. 23, 025004 (2018). [CrossRef]  

29. J. Prakash, A. S. Raju, C. B. Shaw, M. Pramanik, and P. K. Yalavarthy, “Basis pursuit deconvolution for improving model-based reconstructed images in photoacoustic tomography,” Biomed. Opt. Express 5, 1363–1377 (2014). [CrossRef]   [PubMed]  

30. J. A. Fessler and W. L. Rogers, “Spatial resolution properties of penalized-likelihood image reconstruction: space-invariant tomographs,” IEEE Transactions on Image Processing 5, 1346–1358 (1996). [CrossRef]   [PubMed]  

31. B. Berthon, P. Morichau-Beauchant, J. Porée, A. Garofalakis, B. Tavitian, M. Tanter, and J. Provost, “Spatiotemporal matrix image formation for programmable ultrasound scanners,” Phys. Medicine & Biol. 63, 03NT03 (2018). [CrossRef]  

32. A. Rosenthal, V. Ntziachristos, and D. Razansky, “Acoustic inversion in optoacoustic tomography: A review,” Curr. Med. Imaging Rev. 9, 318–336 (2013). [CrossRef]  

33. F. Lingvall, T. Olofsson, and T. Stepinski, “Synthetic aperture imaging using sources with finite aperture: Deconvolution of the spatial impulse response,” The J. Acoust. Soc. Am. 114, 225–234 (2003). [CrossRef]   [PubMed]  

34. M. A. Kalkhoran, F. Varray, M. Vallet, and D. Vray, “Volumetric pulse echo and optoacoustic imaging by elaborating a weighted synthetic aperture technique,” Ultrason. Symp. (IUS), 2015 IEEE Int. pp. 1–4 (2015).

35. A. Lhémery, “Impulse-response method to predict echo-responses from targets of complex geometry. part i: Theory,” The J. Acoust. Soc. Am. 90, 2799–2807 (1991). [CrossRef]  

36. M. Xu and L. V. Wang, “Analytic explanation of spatial resolution related to bandwidth and detector aperture size in thermoacoustic or photoacoustic reconstruction,” Phys. Rev. E 67, 056605 (2003). [CrossRef]  

37. P. C. Hansen, Rank-deficient and discrete ill-posed problems: numerical aspects of linear inversion, vol. 4 (Siam, 2005).

38. E. J. Candes, C. A. Sing-Long, and J. D. Trzasko, “Unbiased risk estimates for singular value thresholding and spectral estimators,” IEEE Transactions on Signal Process. 61, 4643–4657 (2013). [CrossRef]  

39. P. C. Hansen, “Rank-deficient prewhitening with quotient SVD and ULV decompositions,” BIT Numer. Math. 38, 34–43 (1998). [CrossRef]  

40. C. R. Vogel, Computational methods for inverse problems, vol. 23 (Siam, 2002). [CrossRef]  

41. K. A. Mohan, S. Venkatakrishnan, J. W. Gibbs, E. B. Gulsoy, X. Xiao, M. De Graef, P. W. Voorhees, and C. A. Bouman, “TIMBIR: A method for time-space reconstruction from interlaced views,” IEEE Trans. Computat. Imaging 1, 96–111 (2015). [CrossRef]  

42. T. Goldstein and S. Osher, “The split Bregman method for l1-regularized problems,” SIAM J. on Imaging Sci. 2, 323–343 (2009). [CrossRef]  

43. C. Poon, “On the role of total variation in compressed sensing,” SIAM J. on Imaging Sci. 8, 682–720 (2015). [CrossRef]  

44. F. Krahmer and R. Ward, “Stable and robust sampling strategies for compressive imaging,” IEEE Transactions on Image Process. 23, 612–622 (2014). [CrossRef]  

45. E. Garduño, G. T. Herman, and R. Davidi, “Reconstruction from a few projections by l1-minimization of the Haar transform,” Inverse Probl. 27, 055006 (2011). [CrossRef]  

46. P. Rodríguez and B. Wohlberg, “Efficient minimization method for a generalized total variation functional,” IEEE Transactions on Image Process. 18, 322–332 (2009). [CrossRef]  

47. F. Zhao, D. C. Noll, J.-F. Nielsen, and J. A. Fessler, “Separate magnitude and phase regularization via compressed sensing,” IEEE Transactions on Med. Imaging 31, 1713–1723 (2012). [CrossRef]  

48. G. T. Herman, E. Garduño, R. Davidi, and Y. Censor, “Superiorization: An optimization heuristic for medical physics,” Med. Phys. 39, 5532–5546 (2012). [CrossRef]   [PubMed]  

49. M. V. Zibetti, C. Lin, and G. T. Herman, “Total variation superiorized conjugate gradient method for image reconstruction,” Inverse Probl. 34, 034001 (2018). [CrossRef]  

50. Y. Censor, R. Davidi, and G. T. Herman, “Perturbation resilience and superiorization of iterative algorithms,” Inverse Probl. 26, 065008 (2010). [CrossRef]  

51. R. Leary, Z. Saghi, P. A. Midgley, and D. J. Holland, “Compressed sensing electron tomography,” Ultramicroscopy 131, 70–91 (2013). [CrossRef]   [PubMed]  

52. M. Pramanik, G. Ku, and L. V. Wang, “Tangential resolution improvement in thermoacoustic and photoacoustic tomography using a negative acoustic lens,” J. Biomed. Opt. 14, 024028 (2009). [CrossRef]   [PubMed]  

53. V. Chan and A. Perlas, “Basics of ultrasound imaging,” Atlas of Ultrasound-Guided Procedures in Interventional Pain Management, pp. 13–19 (Springer, 2011). [CrossRef]  

54. J. Shin and L. Huang, “Spatial prediction filtering of acoustic clutter and random noise in medical ultrasound imaging,” IEEE Transactions on Med. Imaging 36, 396–406 (2017). [CrossRef]  

55. W. Lin and C.-C. J. Kuo, “Perceptual visual quality metrics: A survey,” J. Vis. Commun. Image Represent. 22, 297–312 (2011). [CrossRef]  

56. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Process. 13, 600–612 (2004). [CrossRef]  

57. D. Brunet, E. R. Vrscay, and Z. Wang, “On the mathematical properties of the structural similarity index,” IEEE Transactions on Image Process. 21, 1488–1499 (2012). [CrossRef]  

58. S. J. Norton, “Synthetic aperture imaging with arrays of arbitrary shape. ii. the annular array,” IEEE Transactions on Ultrason. Ferroelectr. Freq. Control. 49, 404–408 (2002). [CrossRef]  



Figures (4)

Fig. 1 The geometrical distribution of the 128 elements (convex transducers). The recorded signal for the large elements equates to the ensemble of signals recorded by λ/2 sub-elements of the same size, which constitutes the sampling pattern.

Fig. 2 Evaluation of the B-mode performance of the three virtual arrays based on contrast and resolution. I. Images of the anechoic cysts. II. CNR values based on the white (cyst) and red (background) frames. III. VE-WSAFT reconstructed B-mode images of the point reflectors. IV. Lateral profile of the reconstructed images for each of the arrays at a range of 20 mm.

Fig. 3 Optoacoustic imaging performance of the three virtual arrays for retrieving the point sources (point spread function) situated in the axial plane at a depth of z = 20 mm, in an xy plane of 16 × 16 mm² size. The images were reconstructed with VE-DSBP (upper row) and model-based CGLS (lower row) using only 10 iterations (10 basis vectors). The color bar is in dB.

Fig. 4 Model-based reconstruction using CGLS (left) and CGLS TV (right) for the segmented annular array. Ten and fifty iterations were used for the CGLS and CGLS TV algorithms, respectively.

Tables (1)

Table 1 Quantitative evaluation of sampling patterns.

Equations (15)

Equations on this page are rendered with MathJax.

U_{rec}(r,t) = \sum_{r \in FoV} h_{AIR}(t) * h_{SIR}(r,t) * S(r_s - r_n),
h_{IR} = h_{AIR}(t) * h_{SIR}(r_s - r_n, t),
U_{rec}^{n} = M \, S_{k_{x,y,z}},
\hat{S} = \arg\min_S \| M \cdot S - U_{rec} \|_2^2
\hat{S} = M^{\dagger} \cdot U_{rec}
\mathrm{TSVD}: \; W_\kappa = \mathrm{span}\{ v_1, v_2, \ldots, v_\kappa \}
S^{(\kappa)} = W_\kappa z^{(\kappa)}
z^{(\kappa)} = \arg\min_z \| (M W_\kappa) z - U_{rec} \|_2
\mathrm{Krylov}: \; \mathcal{K}_\kappa = \mathrm{span}\{ M^T U_{rec}, (M^T M) M^T U_{rec}, \ldots, (M^T M)^{\kappa-1} M^T U_{rec} \},
\mathrm{TV}(S) = \| \, |\nabla S|_\epsilon \, \|_1 = \sum_i \sqrt{ (\Delta_i^x S)^2 + (\Delta_i^y S)^2 + \epsilon^2 }
z^{(\kappa)} = \arg\min_z \| (M W_\kappa) z - U_{rec} \|_2 + \lambda_{TV} \, \mathrm{TV}(z)
M^T M S + \lambda_{TV} \nabla \mathrm{TV}(S) = M^T U_{rec}
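As a concrete illustration of the truncated-SVD inversion sketched above, the following minimal NumPy snippet solves the reduced least-squares problem on the subspace W_κ spanned by the first κ right singular vectors. The model matrix M and data here are random stand-ins for illustration only, not the paper's optoacoustic model matrix.

```python
import numpy as np

def tsvd_solve(M, u_rec, kappa):
    """TSVD reconstruction: restrict the solution to W_kappa = span{v_1..v_kappa}
    and solve min_z ||(M W_kappa) z - u_rec||_2 in the SVD basis."""
    # SVD of the model matrix: M = U diag(s) V^T
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # In the SVD basis the reduced problem decouples: z_i = (u_i^T u_rec) / s_i,
    # keeping only the kappa largest singular values (the small, noise-amplifying
    # ones are discarded -- that is the regularization).
    z = (U[:, :kappa].T @ u_rec) / s[:kappa]
    # Lift back to image space: S^(kappa) = W_kappa z^(kappa)
    return Vt[:kappa].T @ z

rng = np.random.default_rng(0)
M = rng.standard_normal((60, 30))        # stand-in model matrix
S_true = rng.standard_normal(30)         # stand-in initial-pressure vector
u_rec = M @ S_true                       # noise-free synthetic measurements
S_hat = tsvd_solve(M, u_rec, kappa=30)   # full rank: exact recovery
print(np.allclose(S_hat, S_true))        # True
```

With noisy data one would choose κ below the full rank, trading resolution for noise suppression, which is exactly the role of κ in the equations above.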
\mathrm{CNR} = \frac{ S_i - S_o }{ \sqrt{ \sigma_i^2 + \sigma_o^2 } }
\mathrm{PSNR}(f,g) = 20 \log_{10} \frac{ \mathrm{MAX}_I }{ \sqrt{ \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} ( f_{ij} - g_{ij} )^2 } }
\mathrm{SSIM}(f,g) = \frac{ (2 \mu_f \mu_g + c_1)(2 \sigma_{f,g} + c_2) }{ (\mu_f^2 + \mu_g^2 + c_1)(\sigma_f^2 + \sigma_g^2 + c_2) }
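The three quality metrics above translate almost line-for-line into code. The sketch below implements them with NumPy; note that it computes a global, single-window SSIM for simplicity, whereas the index of [56] is normally averaged over local windows. Region selections and constants are illustrative.

```python
import numpy as np

def cnr(inside, outside):
    # Contrast-to-noise ratio between the cyst (inside) and
    # background (outside) regions, per the CNR equation above.
    return abs(inside.mean() - outside.mean()) / np.sqrt(inside.var() + outside.var())

def psnr(f, g, max_i=1.0):
    # Peak signal-to-noise ratio in dB; max_i is the image peak value MAX_I.
    rmse = np.sqrt(np.mean((f - g) ** 2))
    return 20.0 * np.log10(max_i / rmse)

def ssim_global(f, g, c1=1e-4, c2=9e-4):
    # Single-window SSIM: luminance/contrast/structure terms combined
    # exactly as in the SSIM equation above.
    mu_f, mu_g = f.mean(), g.mean()
    cov = np.mean((f - mu_f) * (g - mu_g))
    return ((2 * mu_f * mu_g + c1) * (2 * cov + c2)) / \
           ((mu_f ** 2 + mu_g ** 2 + c1) * (f.var() + g.var() + c2))

rng = np.random.default_rng(1)
f = rng.random((32, 32))
g = f + 0.1                              # uniform 0.1 offset -> RMSE = 0.1
print(round(psnr(f, g), 2))              # 20.0 dB, since 20*log10(1/0.1) = 20
```

A perfect reconstruction gives SSIM = 1 and an infinite PSNR, so in practice these metrics are reported against a noisy or subsampled reconstruction, as in Table 1.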