Abstract

Pulsed lasers in photoacoustic tomography systems are expensive, which limits their use to a few clinics and small-animal labs. We present a method to realize tomographic ultrasound and photoacoustic imaging using a commercial LED-based photoacoustic and ultrasound system. We present two illumination configurations using LED array units and an optimal number of angular views for tomographic reconstruction. The proposed method can be a cost-effective solution for applications demanding tomographic imaging and can be easily integrated into conventional linear-array-based ultrasound systems. We present a potential application for finger joint imaging in vivo, which can be used for point-of-care rheumatoid arthritis diagnosis and monitoring.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In photoacoustic (PA) imaging, pulsed or modulated light induces thermoelastic expansion in optical absorbers, resulting in ultrasound (US) signals, which are detected to form an image [1]. This emerging modality can surpass the high optical scattering in biological tissue by exciting the tissue with light and detecting the much less scattered US [2]. This combination of optical absorption contrast with US resolution has attracted many medical applications in recent years, including cancer diagnosis [3], brain functional imaging [4], hemodynamics monitoring [5], surgical guidance [6,7] and many more [8]. A clinically interesting aspect is the capability to combine US and PA imaging in one system to obtain both structural and functional information about the target tissue. Linear transducer arrays offer real-time imaging and are widely used in clinics; hence they are preferred for combined PA and US imaging [9–11].

Due to their finite aperture, PA and US imaging using linear arrays suffers from limited-view artifacts. Additionally, due to the directional nature of the transducer, US signals from tissue structures oriented at a large angle to the array go undetected, resulting in a loss of information [12,13]. One way to overcome the limited view is tomographic imaging, as many clinical applications allow measurements around the target tissue, giving views from many angles. However, commercially available tomographic systems use pulsed (nanosecond) lasers, which are expensive, bulky and generally operate at low repetition rates ($10-20$ Hz) [14]. Additionally, acoustic detection for tomographic imaging is a trade-off between imaging speed, cost and the number of parallel transducer elements and acquisition channels [14]. Hence, an inexpensive, compact and real-time tomographic imaging system that performs combined PA and US imaging can be advantageous to leverage the potential of this modality for wide clinical use.

Two aspects can potentially bring down the system cost and improve the imaging speed of combined PA-US tomography. The first is the use of low-cost pulsed (nanosecond) light sources such as laser diodes and LEDs. LED-based PA imaging is gaining research interest with the availability of LEDs in a wide wavelength range, with high pulse repetition rates and low production cost [15–17]. A limitation of LEDs is their low power compared to a laser; they therefore need to be used in large numbers for an imaging application. The second aspect is the use of a linear array. Compared to custom-made transducer arrays for tomographic imaging, linear arrays can be produced in large numbers with high yield and low cost. Additionally, with a linear array a higher imaging speed can be achieved through fast switching between transmit and receive modes and real-time image reconstruction with widely used Fourier domain algorithms [18,19]. Most importantly, it is commonly used in conventional US imaging in the clinic and can be easily integrated with light sources for combined PA-US imaging. Hence, a combination of LED-based illumination and linear-transducer-array-based imaging can address many of the tomographic requirements.

Linear transducers to form PA images were first used by Oraevsky et al. in 1999 [20]. Tomographic PA and US imaging using spatial compounding of images from multiple views was first reported by Kruger et al. in 2003 [21]. They reported small-animal imaging by scanning around the sample using a linear array with Nd:YAG laser illumination. Yang et al. [22] reported a directivity-incorporated limited-view filtered backprojection to reconstruct images of the mouse brain from multiple angular views using a linear transducer array. Li et al. [10,23] developed a multi-view Hilbert transform for the circular scanning configuration using a linear array and showed resolution improvement with this approach. Image quality improvement in terms of resolution, signal-to-noise ratio (SNR) and contrast has also been studied for linear-array-based tomographic imaging [24,25]. The above-mentioned advantages apply to tomographic imaging with the long axis of the transducer array rotating around the target, where all the angular views image the same plane and the focusing of the transducer helps eliminate out-of-plane artifacts. Three-dimensional imaging can be done with the short axis of the array scanning around the sample [26–28]. In this case, the cylindrical focusing of the linear transducer array can degrade the image quality [29]. Due to the focused nature of the transducer, each angular view acquires acoustic signals from an entirely different imaging plane. Three-dimensional reconstruction using acoustic signals from these non-overlapping planes results in discontinuities and smearing of the structures [29]. To obtain a full view using this configuration, a linear scan at each angular view is required [26,27]. Hence, the 3D imaging configuration is not explored in our work. All the above works used a nanosecond pulsed laser source for illumination.

For applications demanding a point-of-care system, light sources with a small footprint, such as laser diodes and LEDs, are preferred over bulky pulsed lasers [30]. One such application is finger joint imaging for rheumatoid arthritis screening, where tomographic imaging is of clinical importance to obtain a full view of the joint for early diagnosis [31]. LED-based illumination incorporated in a tomographic system can potentially be used in this scenario. To the best of our knowledge, three groups [29,32,33] have reported LED-based illumination for tomographic PA imaging, providing initial results showing feasibility. However, the light delivery and transducer configuration need to be optimized to obtain high-quality PA and US images.

We propose tomographic imaging using a linear transducer array for PA-US imaging with LED-based excitation. To obtain high-quality PA and US tomographic images, we developed configurations for optimized light delivery and US detection. We present a method to determine the optimal number of angular views using a linear array for tomographic imaging. To obtain a higher illumination power, we propose the use of a large number of LEDs in two configurations: first, illumination from the top of the sample, and second, illumination from the sides of the sample. We have modeled the fluence distribution for both illumination configurations and studied the resulting image quality in a soft-tissue-mimicking phantom. Both configurations have potential applications in biomedical imaging, such as brain imaging for the former and finger joint imaging for the latter. Further, we demonstrate in vivo finger joint imaging, which can potentially be used for point-of-care rheumatoid arthritis diagnosis.

2. Materials and methods

We used a commercially available LED-based PA and US imaging system, AcousticX (CYBERDYNE Inc., Japan), in this study. A linear transducer array with $128$ elements and a $7$ MHz center frequency with $80\%$ bandwidth was used for the experiments. Four LED array units with $576$ elements in total ($36\times 4$ in each array) at $850$ nm wavelength were used for illumination. Each LED unit had a pulse energy of $200 \mu$J with a pulse duration of $70$ ns. In this section, we present the tomographic imaging configurations using this system and provide the details of our simulation and experimental studies.

2.1 Optimal number of angular views

We are interested in obtaining a 2D PA and US tomographic image of the target tissue. To achieve this, the linear US transducer is rotated around a rotational center. The rotational center should preferably lie in the focus-zone of the transducer, since then the center can be detected by the transducer from any angle. Further, the number of angular views for full-view tomographic imaging should be selected based on the directivity of the transducer. We first performed a characterization experiment to find the focal length, focus-zone and directivity of the transducer. A black suture wire (Vetsuture, France) of $30 \mu$m diameter was used as a PA target, and the experiments were performed in acoustic receive mode. The PA target and two LED arrays for illumination were fixed, and the transducer was moved using linear stages to map the acoustic field. The axial response was obtained by measuring the peak-to-peak PA signal at multiple depths. Directivity was measured by observing the change in the PA signal peak at multiple lateral locations. Additionally, the axial and lateral resolution of the system were measured using the same target.
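Resolution figures of this kind are commonly reported as the full width at half maximum (FWHM) of the profile through the point-target image. As an illustration only, a minimal Python sketch of such an FWHM estimate (the `fwhm` helper is our own, not part of the AcousticX system; the actual analysis was done in MATLAB):

```python
import numpy as np

def fwhm(x, y):
    """Estimate the full width at half maximum of a single-peaked
    profile y(x) by linear interpolation around the half-max crossings."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the left and right half-maximum crossings
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return xr - xl
```

Applied to the lateral and axial profiles of the suture-wire image, this yields the resolution values reported in Section 3.3.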

As explained in detail by Xu et al. in [34], object boundaries can have a stable reconstruction if the normal from every point on the boundary passes through a transducer element position. However, this condition only holds for ideal detectors with an opening angle of $180^{\circ }$. For a directional transducer, to reconstruct the boundary of an object, the normal from each point should fall within the opening angle of an element. Consider a line passing through the rotational center, making an angle with the transducer array. A theoretical value for the number of angular views can be obtained by dividing the full $360^{\circ }$ by the opening angle of the transducer, so that any random orientation of the line can be detected by a transducer element [34]. This theoretical value is a minimum, since the transducer sensitivity does not have a sharp cutoff and, more importantly, some points in the plane can only be detected by a small number of transducer elements. Hence we performed a simulation study to find the optimal number of angular views for tomographic imaging with the given system specifications. A specific numerical phantom was developed, as shown in Fig. 2, with three distinct features. First, $24$ line targets were placed at $15^{\circ }$ steps with the center as the origin. The thickness of these line targets is half the wavelength ($\lambda _{0}$ = $0.2$ mm) corresponding to the center frequency of the transducer ($7$ MHz). The second set of line targets was placed perpendicular to each other, with varying thicknesses of $\lambda _{0}/4, \lambda _{0}/2, \lambda _{0}, 2\lambda _{0}$ and $4 \lambda _{0}$ and four different initial pressure levels. Additionally, four circular targets of $\lambda _{0}/2, \lambda _{0}, 3\lambda _{0}/2$ and $2 \lambda _{0}$ diameter were placed at the corners of the phantom.
The structures were selected to make the image quality sensitive to the angle, initial pressure levels, and resolution of the structures in the phantom. The k-Wave toolbox of MATLAB was used for the simulation [35]. The acoustic properties of the medium were set to those of water, with a speed-of-sound of $1502$ m/s and a density of $1000$ kg/m$^3$. The linear sensor specifications were set to those of the practical transducer, with a center frequency of $7$ MHz, a $-6$ dB bandwidth from $4$ to $10$ MHz, and a pitch and elevation of $0.315$ mm for the $128$ elements in the array. The opening angle of the transducer was modeled in the k-Wave toolbox by assigning the size of the transducer and a directivity mask [35]. Forward acoustic wave propagation was performed using the first-order k-space model. Gaussian noise with an SNR of $50$ dB, defined with respect to the root-mean-squared value of the input signal, was added to mimic measurement noise. A Fourier domain reconstruction was used to form a B-scan image from the measurement [36]. More details on the reconstruction algorithm are provided in Section 2.2. The reconstructed image quality was measured in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [37] for several numbers of angular views. These image quality metrics were chosen to capture both the structural information and the noise level in the reconstructed image. The optimal number of angular views was estimated based on these metrics.
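For reference, the two metrics can be sketched in a few lines of NumPy. The PSNR below follows the standard definition; the SSIM shown is a simplified single-window (global) variant for illustration, whereas the study uses the standard sliding-window SSIM of [37]:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((np.asarray(ref) - np.asarray(img)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, img, peak=1.0):
    """Single-window (global) SSIM: one mean/variance/covariance over the
    whole image, with the usual stabilizing constants C1, C2."""
    ref, img = np.asarray(ref, float), np.asarray(img, float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = np.mean((ref - mu_x) * (img - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Higher PSNR indicates lower residual noise; SSIM close to 1 indicates that the reconstructed structures match the ground truth.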

2.2 Tomographic image reconstruction

Multi-angle spatial compounding has been used to form tomographic PA and US images [25]. As shown in the configuration (Fig. 1(a) and (b)), a linear array is rotated around a center, acquiring PA and US data from multiple angles. The collected PA and US raw data are first reconstructed into B-scan images for each angle. The B-scan images can be formed either directly by the system using its built-in real-time reconstruction algorithm or by offline reconstruction of both PA and plane-wave US data using a Fourier domain algorithm [36]. For the reconstruction of both experimental and simulated data we used an open-source Fourier domain algorithm [36]. The US system can perform both plane-wave and conventional line-by-line US imaging. Each B-scan image was then rotated to the angle from which it was acquired. The rotation is performed using spline interpolation in MATLAB from the original image to a predefined grid with rotated coordinates. In this way, we can define an arbitrary rotation center, and the computational complexity of this step depends only on the interpolation method used. The rotated images from all angles are then averaged to obtain the tomographic image.
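The compounding procedure can be sketched as follows. This is an illustrative Python version of the MATLAB implementation described above, assuming the rotation center coincides with the image center; `scipy.ndimage.rotate` with `reshape=False` plays the role of the spline interpolation to a rotated grid:

```python
import numpy as np
from scipy.ndimage import rotate

def compound(bscans, angles_deg):
    """Multi-angle spatial compounding: rotate each B-scan back to the
    angle it was acquired from, then average all views."""
    acc = np.zeros_like(bscans[0], dtype=float)
    for img, ang in zip(bscans, angles_deg):
        # order=3 cubic-spline interpolation, analogous to the MATLAB step
        acc += rotate(img, ang, reshape=False, order=3)
    return acc / len(bscans)
```

An arbitrary rotation center, as used in the paper, would require shifting the images before and after the rotation; this sketch omits that step.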

 figure: Fig. 1.

Fig. 1. System configurations and finger joint imager. (a) Illumination from the top using 4 LED array. (b) Illumination from the side of the sample, two LED units parallel to the long axis of the array ($30.8^{\circ }$ with the imaging plane) and two on either side of the sample ($105^{\circ }$ with the transducer). (c) Schematic and (d) photograph of finger joint imager.


2.3 Illumination configurations using LED units

Two illumination configurations were implemented, namely top illumination and side illumination, as shown in Fig. 1. To view the whole sample from different angles and to obtain an ideal tomographic image, uniform illumination of the entire sample is desired. To a large extent, this can be achieved with top illumination. However, for most applications, illumination from the side of the sample is required, as the target is not accessible for top illumination. In the side illumination configuration, with the light source rotating along with the transducer, uniform illumination of the entire sample cannot be achieved. A simulation study was conducted to understand the difference in the optical fluence maps of these two illumination approaches and to study the nature of the reconstructed tomographic images.

To model the light propagation from the LED array units into the tissue and to obtain the fluence map, we performed Monte Carlo simulations using the GPU-accelerated Monte Carlo eXtreme (MCX) photon transport simulator [38]. The above-described illumination strategies were modeled on a homogeneous cylindrical phantom with a $25$ mm diameter in water, having average soft-tissue optical properties ($\mu _{a} = 0.56$ mm$^{-1}$, $\mu _{s} = 9.9$ mm$^{-1}$, g = $0.90$, n = $1.4$) [39]. The LED elements were modeled to produce a cone of illumination with an opening angle of $120^{\circ }$. For the top illumination case, four adjacent LED units were positioned $5$ mm above the phantom, as depicted in Fig. 1(a). The distance between the object and the LEDs was selected to prevent part of the LED array from touching the transducer holder and to minimize light falling directly on the transducer surface. For the side illumination case, two LED bars were placed above and below the active part of the transducer. They were placed at angles of $30.8^{\circ }$ and $-30.8^{\circ }$ relative to the imaging plane, such that, in a non-scattering medium, the illumination intersects the imaging plane at the focus of the transducer (at $20$ mm). Two other LED bars were placed in the imaging plane, at angles of $105^{\circ }$ and $-105^{\circ }$ relative to the transducer array, with the left LED array unit tilted downwards by $5^{\circ }$ and the right LED array tilted upwards by $5^{\circ }$ (Fig. 1(b)). The positions and angles of the LED arrays were selected to enable imaging of objects up to $40$ mm in diameter and to minimize acoustic reflections from the LED arrays reaching the transducer. The tilt of $5^\circ$ with respect to the imaging plane was selected to direct the reflected acoustic waves out of the imaging plane.

The fluence maps obtained from the two LED array configurations were coupled with the acoustic simulation to further understand the differences in the reconstructed images. We normalized the fluence map and multiplied it with the ground truth to obtain the initial pressure. The Grüneisen parameter was not considered here, as we were interested only in the spatial variation of the initial pressure and not in its absolute value. A vascular structure obtained from a retinal image in the DRIVE database was used as the ground truth image [40]. A speed-of-sound of $1580$ m/s and a density of $1000$ kg/m$^3$ were used for the phantom to mimic the acoustic properties of soft tissue. We assigned the acoustic properties of water to the coupling medium, with a speed-of-sound of $1502$ m/s and a density of $1000$ kg/m$^3$. The acoustic attenuation was modeled as a power law with a pre-factor of $0.75$ dB/(MHz$^{1.5}$ cm). As in the previous simulation (Section 2.1), the directivity and bandlimited nature of the transducer were incorporated in the simulation. The first-order k-space model was used for forward acoustic wave propagation and, to mimic measurement noise, Gaussian noise with an SNR of $30$ dB (with respect to the PA signal) was added to the RF signals. For the reconstruction, considering that the spatial variation in the acoustic properties of the medium is unknown in a realistic scenario, the acoustic properties were assumed to be homogeneous and set to those of water. The reconstructed images from all angles were spatially compounded to form the tomographic image. For analysis, the reconstructed tomographic images were normalized such that the total intensity of the vascular structures in the reconstructed image is the same as that of the ground truth. The normalization is performed by segmenting the pixels corresponding to the vascular structures and scaling the whole image such that the sum of pixel values in the segmented region equals that of the ground truth [41]. Tomographic images from both illumination configurations were compared to understand the differences and validated against the ground truth.
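The fluence weighting and the ground-truth normalization described above amount to two small operations, sketched below. The function names are our own, and the segmentation `mask` of the vascular pixels is assumed to be given:

```python
import numpy as np

def initial_pressure(ground_truth, fluence):
    """Fluence-weighted initial pressure: normalize the Monte Carlo
    fluence map and multiply with the ground-truth absorption image
    (the Grueneisen parameter is omitted, as only the spatial
    variation of the initial pressure matters here)."""
    return ground_truth * (fluence / fluence.max())

def normalize_to_ground_truth(recon, ground_truth, mask):
    """Scale the reconstruction so the summed intensity over the
    segmented vessel mask matches that of the ground truth."""
    scale = ground_truth[mask].sum() / recon[mask].sum()
    return recon * scale
```

This normalization makes the line profiles of the two illumination configurations directly comparable against the ground truth.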

2.4 Experimental setup

Acquired data (US and PA) are reconstructed using the built-in Fourier-domain reconstruction algorithm and displayed in real time on a high-resolution monitor. The system can drive the LED arrays as well as transmit and acquire data in parallel from all $128$ elements of the US probe to generate interleaved PA and US (plane-wave) images at a maximum frame rate of $30.3$ Hz. The pulse energy of the LEDs is limited, with each unit providing a maximum of $200 \mu$J per pulse. However, with the maximum pulse repetition frequency (PRF) of $4$ kHz for the LEDs, the SNR can be improved by averaging more PA frames while still maintaining a frame rate high enough to qualify as real-time imaging. In our experiments, $64$ PA raw data frames were averaged on board within the DAQ and, further, $6$ frames were averaged on the computer before reconstruction. This results in a frame rate of $10.4$ Hz.
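The quoted frame rate follows directly from the averaging scheme: the $4$ kHz PRF divided by the $64 \times 6 = 384$ averaged raw-data frames gives approximately $10.4$ Hz. As a trivial sketch (function name is ours):

```python
def pa_frame_rate(prf_hz=4000, onboard_avg=64, pc_avg=6):
    """Effective PA frame rate after averaging: the LED pulse
    repetition frequency divided by the total number of averaged
    raw-data frames per displayed image."""
    return prf_hz / (onboard_avg * pc_avg)
```

The same relation shows the trade-off: doubling the averaging (for higher SNR) halves the displayed frame rate.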

The illumination configurations developed in the simulations were replicated in the experiments. Two system configurations were used. In the first one (Fig. 1(a)), all four LED units were stacked together and placed $5$ mm above the sample. The transducer was then rotated around the sample using a motorized stage. The center of rotation, in this case, was chosen at the focus of the transducer ($20$ mm) for high image quality. However, for larger samples, the center can be shifted further from the focus, within the focus-zone of the transducer. Multiple slices of the sample can also be obtained by translating the transducer to a different depth. Two samples were imaged using this configuration: a leaf skeleton stained with India ink (Skeleton Leaf Inc., UK) and an ex vivo mouse knee. Both samples were embedded in a $3\%$ agar phantom. The leaf phantom was selected as it has structures of different thicknesses and orientations, suitable for a resolution study in the tomographic setting [42].

In the second configuration, illumination from the sides of the sample was used, as shown in Fig. 1(b). In this case, the transducer and the illumination are rotated together around the sample for tomographic imaging. A holder attaching the transducer and the LED array units in the illumination configuration described in subsection 2.3 was 3D printed. A schematic of the finger joint imager is shown in Fig. 1(c), which consists of the imaging unit, scanning system and hand rest. A photograph of the finger joint imaging system with this configuration is shown in Fig. 1(d). The scanning system consists of a rotational motor with a $1:4$ belt, which can rotate the imaging unit through the full $360^{\circ }$ with an accuracy of $0.1^{\circ }$, and a translational stage with a maximum range of $157.7$ mm and an accuracy of $100 \mu$m. A hand rest and a stationary fingertip positioner were used to keep the finger in position and to reduce movements during the scan. The imaging probe (including the transducer and the LED arrays) and the hand rest were mounted in a water tank. The imaging experiments were performed in water for acoustic coupling.

3. Results and discussion

3.1 Optimal number of angular views

The focus of the transducer was measured to be at $20$ mm, with a near-symmetric drop in sensitivity in the axial direction. The measured directivity of a single element in the array was $26.8\pm 0.2^{\circ }$. In this study, the center of rotation is chosen to be $20$ mm from the transducer. The opening angle of a single element needs to be considered for PA mode as well as conventional line-by-line B-mode US imaging, as all elements are in receive mode. The acoustic field of the whole transducer was computed by summing $128$ laterally shifted replicas of the field of a single element. We calculated the directivity of the whole array to be $16.6\pm 1.5^{\circ }$. In the plane-wave US mode, the directivity of the whole transducer needs to be considered, as US transmission from all the elements is involved in this mode. Using the opening angle of the transducer, the theoretical minimum number of angular views required for tomographic imaging was calculated to be $14$ for PA mode and conventional B-mode US imaging and $24$ for the plane-wave US mode. To find the optimal number of angular views, we performed PA tomographic imaging simulations on the digital phantom shown in Fig. 2(a) with the number of angular views varying from $1$ to $128$. With the structures in the phantom designed to test the angular dependence, resolution and acoustic pressure levels, the image quality metrics SSIM and PSNR were calculated as shown in Fig. 2. It can be observed that SSIM and PSNR do not significantly improve beyond $16$ angular views. This is also evident in the reconstructed images in Fig. 2(c-f). Figure 2(c) was reconstructed using one view. Only the vertical lines and those falling within the directivity of the transducer are reconstructed well. This limited-view problem can also be observed in the circular targets, as only the top and bottom boundaries were reconstructed.
An additional view from $180^{\circ }$ provides only a small improvement in image quality, as no additional angular information is available. With $4$ angular views, all the vertical and horizontal structures are reconstructed. However, lines at larger angles and the circular targets are not fully reconstructed, and the noise level is still high. From $4$ to $16$ views there is a linear increase in reconstructed image quality. With $16$ angular views, all the structures at different angles, initial pressure levels and sizes are resolved well. The smallest circular target, with a diameter of $\lambda _{0}/2$, was also fully reconstructed. The initial pressure levels in the phantom are not fully recovered. This can be because not all structures were detected by an equal number of transducer elements, due to the directional nature of the transducer. Additionally, the low-frequency components were not detected due to the bandlimited nature of the transducer. Further increasing the number of views has little impact on image quality. Hence, we consider $16$ angular views with a step of $22.5^{\circ }$ as the optimum for tomographic imaging with our system configuration. This number matches well with the theoretical estimate of $14$ angular views.
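The theoretical minimum used above is simply the full circle divided by the opening angle, rounded up so that every line orientation falls within the opening angle of some view. A one-line sketch (function name is ours):

```python
import math

def min_angular_views(opening_angle_deg):
    """Theoretical minimum number of equispaced angular views:
    360 degrees divided by the transducer opening angle, rounded up."""
    return math.ceil(360.0 / opening_angle_deg)
```

For the measured single-element directivity of $26.8^{\circ}$, `min_angular_views(26.8)` returns 14, the PA-mode value quoted above.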

 figure: Fig. 2.

Fig. 2. Image quality with an increasing number of angular views. (a) Ground truth image of the digital phantom used as photoacoustic initial pressure. (b) Image quality measured with Structural Similarity (SSIM) index and Peak Signal to Noise Ratio (PSNR) for an increasing number of equispaced angular views. (c)-(f) Reconstructed photoacoustic tomographic images from 1,4,16 and 64 angular views respectively.


3.2 Tomographic imaging using top and side illumination

A uniform optical fluence in the entire imaging plane is ideal for tomographic PA imaging. In this simulation study, we investigated the difference in image quality between fixed top illumination and rotating side illumination. Figures 3(a) and (b) show the optical fluence from top and side illumination in a soft-tissue-mimicking phantom, generated using the Monte Carlo simulations. In the top illumination case, as expected, the fluence is mostly uniform, with an asymmetric drop at the edges that is stronger in the vertical direction than in the horizontal. This asymmetry comes from the stacked LED units, with an area of $50$ mm $\times 40$ mm, resulting in a rectangular illuminated region. A maximum drop of $46\%$ was observed from the center of the phantom to the edge. In the case of side illumination, the fluence at the center dropped to $30\%$ of that at the boundary. Considering the 1/e value of the fluence, a depth of $9.2$ mm was achieved from the surface of the sample using top illumination. In the case of side illumination, the 1/e depth of the fluence in the illuminated slice was computed to be $9.7$ mm. Although the sensitivity of the transducer was not considered in this calculation, these values give an indication of the achievable imaging depth.
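The 1/e depth figures can be extracted from a simulated fluence-versus-depth profile as sketched below. This is our own helper (linear interpolation between samples), shown to make the definition used above concrete:

```python
import numpy as np

def one_over_e_depth(z_mm, fluence):
    """Depth at which the fluence first drops below 1/e of its
    surface (maximum) value, found by linear interpolation between
    the two bracketing samples."""
    thresh = fluence.max() / np.e
    i = np.where(fluence < thresh)[0][0]
    # np.interp needs increasing xp: fluence[i] < thresh <= fluence[i-1]
    return np.interp(thresh, [fluence[i], fluence[i - 1]],
                     [z_mm[i], z_mm[i - 1]])
```

For an idealized exponential fluence decay with a $9.2$ mm decay constant, the helper recovers $9.2$ mm, matching the definition used for the top-illumination value.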

 figure: Fig. 3.

Fig. 3. A simulation study comparing top and side illumination configurations. (a) - (b) Normalized optical fluence maps in the two cases respectively. For the side illumination the probe is positioned on the right side of the phantom. (c) Ground truth vascular phantom. (d) - (e) Initial pressure obtained from top and side illumination respectively. (f) - (g) Reconstructed and normalized tomographic images from 16 angular views. (h) - (i) Comparison of line profiles between the ground truth (c) and the reconstructed images (f) and (g), along horizontal (green) and vertical (white) lines passing through the center of the phantom respectively.


A vascular phantom, as shown in Fig. 3(c), was used in the simulation as ground truth. The initial pressure maps obtained by multiplying the ground truth with the fluence maps are shown in Fig. 3(d) and (e). Tomographic images obtained from $16$ angular views are shown in Fig. 3(f) and (g). A primary observation is that the vascular phantom was reconstructed well in both configurations. A detailed inspection of the images shows that with top illumination the reconstructed pressure level is lower towards the boundary of the phantom, while for side illumination it is lower at the center of the phantom. This is further evident in the line profiles extracted from the reconstructed images along the vertical (Fig. 3(h)) and horizontal (Fig. 3(i)) directions. It can be concluded that both configurations can be used for tomographic imaging. However, with top illumination, a region of interest around the center is reconstructed well. This aspect can be helpful in applications like small-animal brain imaging. In the case of side illumination, with illumination from three sides of the phantom, there is a significant overlap of the illuminated regions, enabling tomographic reconstruction. The fluence at the center of the sample is lower, which can result in a reduction in reconstructed pressure. However, the number of transducer elements observing the center is higher, resulting in more averaging, which improves the SNR of these structures. Given that the rim of the object is illuminated fairly uniformly, side illumination is a potential configuration for finger joint tomographic imaging.

It should also be noted that the speed-of-sound of the phantom differs from that of the coupling medium but is assumed to be uniform in the reconstruction. A homogeneous speed-of-sound is assumed in order to use the real-time Fourier domain reconstruction algorithm. As a result of this assumption, the peaks in the line profiles are laterally shifted compared to the ground truth. From a spatial-compounding tomographic perspective, this can result in a change in the size of the structures and smearing artifacts from multiple angles. We preferred the Fourier domain algorithm, even with a slight degradation of image quality, because of its real-time nature.
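The magnitude of this shift can be estimated with a simple two-speed model: the arrival time of an echo is governed by the true speed in the phantom, while the reconstruction maps it back to distance with the assumed water speed. A sketch with a hypothetical helper name, ignoring refraction and the water path:

```python
def depth_error_mm(true_depth_mm, c_true=1580.0, c_assumed=1502.0):
    """Apparent depth error when reconstructing with the wrong
    speed-of-sound: travel time is set by c_true (phantom), but the
    reconstruction converts it back to distance with c_assumed (water)."""
    time_us = true_depth_mm / (c_true * 1e-3)   # mm / (mm per us)
    apparent_mm = (c_assumed * 1e-3) * time_us  # reconstructed depth
    return apparent_mm - true_depth_mm          # negative: toward probe
```

Under these assumptions, a target $10$ mm deep inside the phantom appears roughly $0.5$ mm closer to the transducer, consistent with the sub-millimeter peak shifts visible in the line profiles.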

3.3 Imaging experiment

In the first experiment, a leaf skeleton (Fig. 4(a)) was imaged using illumination from the top of the sample. Tomographic PA and US images obtained from $18$ angular views with a step of $20^{\circ }$ are shown in Fig. 4(b) and (c), respectively. Relative to the estimated optimum of $16$ angular views ($22.5^\circ$ steps), a slight oversampling with $18$ angular views ($20^\circ$ steps) was used in the experiments. There are four levels of vein structures based on their thickness, as shown in the zoomed-in photograph (Fig. 4(d)), which makes the leaf an ideal test object for resolution analysis. Additionally, the structures appear over a wide angular range, which demands tomographic imaging to fully reconstruct the image. It can be observed in the PA image in Fig. 4(b) that three levels of detail in the leaf structure are reconstructed well, leaving only the smallest veins undetected. Figure 4(e-h) shows tomographic PA images from $1, 4, 12$ and $18$ angular views, respectively. These images demonstrate how finer structures with different orientations are reconstructed with an increasing number of angular views. These structures can also be seen in the US tomographic image. However, the specular nature of US imaging resulted in high intensity along the boundary of the leaf and discontinuous low-intensity structures towards the center.

 figure: Fig. 4.

Fig. 4. Photoacoustic and ultrasound tomographic imaging of a leaf phantom. (a) Photograph of a leaf skeleton stained with India ink and embedded in an agar phantom. (b) Photoacoustic tomographic image of the phantom using top illumination and (c) the corresponding ultrasound image. (d) Zoomed-in photograph of the leaf. (e-h) Photoacoustic tomographic images obtained from 1, 4, 12 and 18 angular views.


The resolution study using the linear transducer array gave an axial resolution of $0.22$ mm and a lateral resolution of $0.47$ mm. An analysis of the tomographic PA image (Fig. 4(b)) shows that the smaller structures, with a mean diameter of $0.26$ mm, were resolved well. The lateral resolution, which suffers from the limited view of the linear array, is significantly improved with tomographic imaging, at the cost of a slight drop in axial resolution. It was also observed that streak artifacts from the individual angular views add together as background noise in the tomographic image. The contrast-to-noise ratio of the structures was $35.6\pm 11.3$ for the tomographic PA image and $14.5\pm 6.8$ for the US image. The contrast might be improved with combined reconstruction from all the views [10] or by utilizing an improved reconstruction algorithm to remove artifacts from the individual angular views [43]. These reconstruction approaches were not used in this work in order to retain the system's real-time image reconstruction.
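The contrast-to-noise ratio quoted here can be computed in the standard way, as the difference between the mean signal and mean background divided by the background standard deviation. The sketch below uses synthetic data, not the actual images:

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio:
    (mean signal - mean background) / background standard deviation."""
    s = image[signal_mask].mean()
    b = image[background_mask].mean()
    return (s - b) / image[background_mask].std()

# Toy example: a bright structure on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(10.0, 2.0, (64, 64))        # background: mean 10, std 2
img[20:30, 20:30] += 50.0                    # structure: +50 above background
sig = np.zeros_like(img, dtype=bool)
sig[20:30, 20:30] = True
bg = ~sig
print(round(cnr(img, sig, bg), 1))           # ~25 for these parameters
```

In practice the signal and background masks would be drawn over the vein structures and artifact-free regions of the tomographic images.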

To explore the potential of finger joint imaging, we performed imaging experiments on an ex vivo mouse knee sample and an in vivo human finger joint. Figure 5(a) shows a photograph of the mouse knee. Tomographic images formed using plane-wave and B-mode US imaging from $18$ angular views are presented in Fig. 5(b) and (c), respectively. The B-mode images are formed using conventional line-by-line scanning, resulting in a low frame rate. However, the structures are better visible in the B-mode tomographic image than in the plane-wave one: in plane-wave US imaging, the transmitted plane wave is highly directional, resulting in narrow directional sensitivity, while in B-mode the line-by-line acquisition provides a much broader directional sensitivity. The knee joint is visible with the two bones and the surrounding tissue. Figure 5(d) shows the PA tomographic image obtained using top illumination. Multiple blood vessels and clotting near the dissected point are visible in the image. A major blood vessel running through the joint with several branches, seen in Fig. 5(a), appears with some discontinuities as it runs partially out of the imaging plane. The combined PA and B-mode US image in Fig. 5(e) shows the blood vessels and the joint. The ability to detect vascularization using PA imaging and the structure of the joint using US imaging can be of clinical relevance for the early-stage diagnosis of rheumatoid arthritis.

 figure: Fig. 5.

Fig. 5. Photoacoustic and ultrasound tomographic imaging of an ex vivo mouse knee. (a) Photograph of the sample. The tomographic image from 18 angular views formed using (b) plane wave and (c) B-mode ultrasound imaging from multiple angles. (d) Photoacoustic image using top illumination. (e) Overlaid photoacoustic and ultrasound image.


Results from a proof-of-concept in vivo joint imaging of the index finger of a healthy female volunteer using the side-illumination tomographic configuration are shown in Fig. 6. Figure 6(a) shows the PA and US maximum intensity projection image from a linear scan along the finger. The finger joint is visible with several blood vessels around it. To reduce the imaging time in this first tomographic study, we performed only $12$ angular scans at $30^{\circ}$ steps. Additionally, plane-wave US imaging was used. The tomographic PA and US images at the joint (p1 in Fig. 6(a)) show a hypoechogenic region for the bones and the blood vessels. The joint is visible with two distinct parts, possibly from the curved region of the proximal interphalangeal joint (Fig. 6(c)). The skin and blood vessels are visible in the PA image in Fig. 6(b). A slice obtained $5$ mm away from the joint (p2 in Fig. 6(a)) is shown in Fig. 6(e-g). With more angles and B-mode imaging, the image quality could be further improved. However, methods need to be developed to minimize or correct for movements of the subject during scanning, which will be explored in the future. In the side illumination configuration, illumination and detection are performed from the same side. Hence, acoustic waves transmitted through the bone are not used for tomographic image formation, which reduces artifacts from acoustic waves traveling through the bone. However, acoustic reflections from the bone are visible as artifacts in the PA images in Fig. 6(b) and (e).

 figure: Fig. 6.

Fig. 6. In vivo finger joint tomographic imaging using side illumination configuration. (a) Overlaid photoacoustic and ultrasound maximum intensity projection image showing finger joint from a linear scan. (b - d) Photoacoustic, ultrasound and combined tomographic image of the finger joint (p1). (e - g) Photoacoustic, ultrasound and combined tomographic image 5 mm in front of the joint (p2) respectively. (The dynamic range of the color bar is not applicable for the maximum intensity projection image in (a))


A major advantage of the proposed tomographic system is imaging speed. One US (plane-wave) frame is acquired between every $64$ frame-averaged PA data sets to generate US/PA overlaid images at an interleaved frame rate of $10.3$ Hz. The frame rate can be increased with less averaging. For a single angle, $97$ ms is required for PA and US acquisition and image formation. The speed of the rotational stage is $8.8$ deg/sec. Using this system for tomographic imaging with $16$ angular views over the entire $360^{\circ}$, a full acquisition can be completed in $42.5$ sec. B-mode US yields better image quality (as in Fig. 5) than plane-wave US. This improvement comes at the expense of a lower frame rate in the combined imaging: $6.25$ Hz in B-mode compared to $10.3$ Hz in the plane-wave case. This setting was used in the small animal ex vivo experiment, since high-speed scanning was not required there. In this work, with the combination of $576$ LED elements, a maximum pulse energy of $800 \mu$J was achieved, resulting in high-quality tomographic imaging. Compared to pulsed lasers, this can still be a bottleneck for tomographic imaging of large and highly absorbing tissue. However, with the high PRF of the LEDs, the SNR can be improved by frame averaging [32]. In this way, inexpensive and compact tomographic systems can be developed that still retain high SNR and imaging speed. The imaging example in Fig. 4 indicates that applications such as small animal brain imaging [44] are feasible with this system. For finger joint imaging, where a maximum depth of $5$ mm from the surface of the skin is sufficient, the system is expected to find clinical use in point-of-care applications. Other applications, such as small animal whole-body imaging, will be explored in the future. Another feature of the system unexplored in this work is multi-wavelength PA imaging, which can be used to extract functional and molecular information from tissue.
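The timing figures quoted above are mutually consistent: the per-view acquisition time plus one full rotation of the stage reproduce the total. A quick check using only the numbers from the text:

```python
# Back-of-the-envelope check of the total tomographic imaging time.
views = 16                  # angular views over 360 degrees
acq_per_view_s = 0.097      # PA + US acquisition and image formation per angle
rotation_speed = 8.8        # rotational stage speed, deg/s

# Acquisition at all views plus one full 360-degree rotation.
total = views * acq_per_view_s + 360.0 / rotation_speed
print(f"{total:.1f} s")     # ~42.5 s, matching the quoted value
```

The rotation dominates (about 41 s of the total), so a faster stage, rather than faster acquisition, is the main lever for shortening a full tomographic scan.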

4. Conclusion

We have demonstrated photoacoustic and ultrasound tomographic imaging using LED-based illumination and linear transducer array-based acoustic detection. Since individual LEDs have low output power, we used $576$ elements to obtain sufficient pulse energy for tomographic imaging. Two illumination configurations were developed, and their efficacy for tomographic photoacoustic imaging was analyzed. Imaging using the two configurations was compared, and system aspects such as the optimal number of angular views for good image quality and imaging speed were explored. Both configurations have potential applications in biomedical imaging. We have demonstrated an application in joint imaging, both in a mouse knee ex vivo with illumination from the top of the sample and in an in vivo human finger using side illumination. The results show that the LED-based illumination is sufficient for our intended application, namely finger joint imaging. However, for larger samples such as the breast, a custom-developed LED array with a much larger number of elements, illuminating from all sides, should be considered. Provided that a pulse duration of several tens of nanoseconds or longer is acceptable, the inexpensive and compact LED-based light source and the fast imaging capability demonstrated in this work can find a wide range of clinical applications, especially in point-of-care imaging.

Funding

National Centre for the Replacement, Refinement and Reduction of Animals in Research (CRACKITRT-P1-3).

Disclosures

M. K. A. S. is employed by CYBERDYNE Inc. The authors have no financial interests or conflicts of interest to disclose.

References

1. P. Beard, “Biomedical photoacoustic imaging,” Interface Focus 1(4), 602–631 (2011). [CrossRef]  

2. J. Yao and L. V. Wang, “Recent progress in photoacoustic molecular imaging,” Curr. Opin. Chem. Biol. 45, 104–112 (2018). [CrossRef]  

3. S. Manohar and M. Dantuma, “Current and future trends in photoacoustic breast imaging,” Photoacoustics 16, 100134 (2019). [CrossRef]  

4. L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science 335(6075), 1458–1462 (2012). [CrossRef]  

5. V. Ntziachristos and D. Razansky, “Molecular imaging by means of multispectral optoacoustic tomography (msot),” Chem. Rev. 110(5), 2783–2794 (2010). [CrossRef]  

6. K. J. Francis and S. Manohar, “Photoacoustic imaging in percutaneous radiofrequency ablation: device guidance and ablation visualization,” Phys. Med. Biol. 64(18), 184001 (2019). [CrossRef]  

7. K. J. Francis, E. Rascevska, and S. Manohar, “Photoacoustic imaging assisted radiofrequency ablation: Illumination strategies and prospects,” in TENCON 2019-2019 IEEE Region 10 Conference (TENCON) (IEEE, 2019), pp. 118–122.

8. I. Steinberg, D. M. Huland, O. Vermesh, H. E. Frostig, W. S. Tummers, and S. S. Gambhir, “Photoacoustic clinical imaging,” Photoacoustics 14, 77–98 (2019). [CrossRef]  

9. M. Oeri, W. Bost, S. Tretbar, and M. Fournelle, “Calibrated linear array-driven photoacoustic/ultrasound tomography,” Ultrasound Med. Biol. 42(11), 2697–2707 (2016). [CrossRef]  

10. G. Li, L. Li, L. Zhu, J. Xia, and L. V. Wang, “Multiview hilbert transformation for full-view photoacoustic computed tomography using a linear array,” J. Biomed. Opt. 20(6), 066010 (2015). [CrossRef]  

11. R. G. Kolkman, P. J. Brands, W. Steenbergen, and T. G. van Leeuwen, “Real-time in vivo photoacoustic and ultrasound imaging,” J. Biomed. Opt. 13(5), 050510 (2008). [CrossRef]  

12. E. Mercep, G. Jeng, S. Morscher, P.-C. Li, and D. Razansky, “Hybrid optoacoustic tomography and pulse-echo ultrasonography using concave arrays,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 62(9), 1651–1661 (2015). [CrossRef]  

13. K. J. Francis, B. Chinni, S. S. Channappayya, R. Pachamuthu, V. S. Dogra, and N. Rao, “Characterization of lens based photoacoustic imaging system,” Photoacoustics 8, 37–47 (2017). [CrossRef]  

14. A. Fatima, K. Kratkiewicz, R. Manwar, M. Zafar, R. Zhang, B. Huang, N. Dadashzadesh, J. Xia, and M. Avanaki, “Review of cost reduction methods in photoacoustic computed tomography,” Photoacoustics 15, 100137 (2019). [CrossRef]  

15. Y. Zhu, G. Xu, J. Yuan, J. Jo, G. Gandikota, H. Demirci, T. Agano, N. Sato, Y. Shigeta, and X. Wang, “Light emitting diodes based photoacoustic imaging and potential clinical applications,” Sci. Rep. 8(1), 9885 (2018). [CrossRef]  

16. W. Xia, M. Kuniyil Ajith Singh, E. Maneas, N. Sato, Y. Shigeta, T. Agano, S. Ourselin, S. J West, and A. E Desjardins, “Handheld real-time led-based photoacoustic and ultrasound imaging system for accurate visualization of clinical metal needles and superficial vasculature to guide minimally invasive procedures,” Sensors 18(5), 1394 (2018). [CrossRef]  

17. E. Maneas, R. Aughwane, N. Huynh, W. Xia, O. J. Ansari, and J. Deprest, “Photoacoustic imaging of the human placental vasculature,” J. Biophotonics (2019).

18. L. V. Wang, “Multiscale photoacoustic microscopy and computed tomography,” Nat. Photonics 3(9), 503–509 (2009). [CrossRef]  

19. C. Lutzweiler and D. Razansky, “Optoacoustic imaging and tomography: reconstruction approaches and outstanding challenges in image performance and quantification,” Sensors 13(6), 7345–7384 (2013). [CrossRef]  

20. A. A. Oraevsky, V. A. Andreev, A. A. Karabutov, and R. O. Esenaliev, “Two-dimensional optoacoustic tomography: transducer array and image reconstruction algorithm,” in Laser-Tissue Interaction X: Photochemical, Photothermal, and Photomechanical, vol. 3601 (International Society for Optics and Photonics, 1999), pp. 256–267.

21. R. A. Kruger, W. L. Kiser Jr, D. R. Reinecke, and G. A. Kruger, “Thermoacoustic computed tomography using a conventional linear transducer array,” Med. Phys. 30(5), 856–860 (2003). [CrossRef]  

22. D. Yang, D. Xing, S. Yang, and L. Xiang, “Fast full-view photoacoustic imaging by combined scanning with a linear transducer array,” Opt. Express 15(23), 15566–15575 (2007). [CrossRef]  

23. X. Lin, J. Yu, N. Feng, and M. Sun, “Synthetic aperture-based linear-array photoacoustic tomography considering the aperture orientation effect,” J. Innovative Opt. Health Sci. 11(04), 1850015 (2018). [CrossRef]  

24. H. J. Kang, M. A. L. Bell, X. Guo, and E. M. Boctor, “Spatial angular compounding of photoacoustic images,” IEEE Trans. Med. Imaging 35(8), 1845–1855 (2016). [CrossRef]  

25. K. J. Francis, B. Chinni, S. S. Channappayya, R. Pachamuthu, V. S. Dogra, and N. Rao, “Multiview spatial compounding using lens-based photoacoustic imaging system,” Photoacoustics 13, 85–94 (2019). [CrossRef]  

26. J. Gateau, M. Á. A. Caballero, A. Dima, and V. Ntziachristos, “Three-dimensional optoacoustic tomography using a conventional ultrasound linear detector array: Whole-body tomographic system for small animals,” Med. Phys. 40(1), 013302 (2012). [CrossRef]  

27. M. Omar, J. Rebling, K. Wicker, T. Schmitt-Manderbach, M. Schwarz, J. Gateau, H. López-Schier, T. Mappes, and V. Ntziachristos, “Optical imaging of post-embryonic zebrafish using multi orientation raster scan optoacoustic mesoscopy,” Light: Sci. Appl. 6(1), e16186 (2017). [CrossRef]  

28. C. Liu, B. Zhang, C. Xue, W. Zhang, G. Zhang, and Y. Cheng, “Multi-perspective ultrasound imaging technology of the breast with cylindrical motion of linear arrays,” Appl. Sci. 9(3), 419 (2019). [CrossRef]  

29. S. Agrawal, C. Fadden, A. Dangi, and S.-R. Kothapalli, “Light-emitting-diode-based multispectral photoacoustic computed tomography system,” Sensors 19(22), 4861 (2019). [CrossRef]  

30. P. J. van den Berg, K. Daoudi, H. J. B. Moens, and W. Steenbergen, “Feasibility of photoacoustic/ultrasound imaging of synovitis in finger joints using a point-of-care system,” Photoacoustics 8, 8–14 (2017). [CrossRef]  

31. P. van Es, S. K. Biswas, H. J. B. Moens, W. Steenbergen, and S. Manohar, “Initial results of finger imaging using photoacoustic computed tomography,” J. Biomed. Opt. 19(6), 060501 (2014). [CrossRef]  

32. T. J. Allen and P. C. Beard, “High power visible light emitting diodes as pulsed excitation sources for biomedical photoacoustics,” Biomed. Opt. Express 7(4), 1260–1270 (2016). [CrossRef]  

33. J. Leskinen, A. Pulkkinen, J. Tick, and T. Tarvainen, “Photoacoustic tomography setup using led illumination,” in Opto-Acoustic Methods and Applications in Biophotonics IV, vol. 11077 (International Society for Optics and Photonics, 2019), p. 110770Q.

34. Y. Xu, L. V. Wang, G. Ambartsoumian, and P. Kuchment, “Reconstructions in limited-view thermoacoustic tomography,” Med. Phys. 31(4), 724–733 (2004). [CrossRef]  

35. B. E. Treeby and B. T. Cox, “k-wave: Matlab toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt. 15(2), 021314 (2010). [CrossRef]  

36. M. Jaeger, S. Schüpbach, A. Gertsch, M. Kitz, and M. Frenz, “Fourier reconstruction in optoacoustic imaging using truncated regularized inverse k-space interpolation,” Inverse Problems 23(6), S51–S63 (2007). [CrossRef]  

37. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

38. L. Yu, F. Nina-Paravecino, D. R. Kaeli, and Q. Fang, “Scalable and massively parallel monte carlo photon transport simulations for heterogeneous computing platforms,” J. Biomed. Opt. 23(1), 010504 (2018). [CrossRef]  

39. S. L. Jacques, “Optical properties of biological tissues: a review,” Phys. Medicine & Biol. 58(11), R37–R61 (2013). [CrossRef]  

40. J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. Van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Trans. Med. Imaging 23(4), 501–509 (2004). [CrossRef]  

41. Y. E. Boink, M. J. Lagerwerf, W. Steenbergen, S. A. Van Gils, S. Manohar, and C. Brune, “A framework for directional and higher-order reconstruction in photoacoustic tomography,” Phys. Medicine & Biol. 63(4), 045018 (2018). [CrossRef]  

42. J. Jose, R. G. Willemink, W. Steenbergen, C. H. Slump, T. G. van Leeuwen, and S. Manohar, “Speed-of-sound compensated photoacoustic tomography for accurate imaging,” Med. Phys. 39(12), 7262–7271 (2012). [CrossRef]  

43. S. Jeon, E.-Y. Park, W. Choi, R. Managuli, K. jong Lee, and C. Kim, “Real-time delay-multiply-and-sum beamforming with coherence factor for in vivo clinical photoacoustic imaging of humans,” Photoacoustics 15, 100136 (2019). [CrossRef]  

44. J. Gamelin, A. Maurudis, A. Aguirre, F. Huang, P. Guo, L. V. Wang, and Q. Zhu, “A real-time photoacoustic tomography system for small animals,” Opt. Express 17(13), 10489–10498 (2009). [CrossRef]  

[Crossref]

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. Van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Trans. Med. Imaging 23(4), 501–509 (2004).
[Crossref]

2003 (1)

R. A. Kruger, W. L. Kiser Jr, D. R. Reinecke, and G. A. Kruger, “Thermoacoustic computed tomography using a conventional linear transducer array,” Med. Phys. 30(5), 856–860 (2003).
[Crossref]

Other (4)

A. A. Oraevsky, V. A. Andreev, A. A. Karabutov, and R. O. Esenaliev, “Two-dimensional optoacoustic tomography: transducer array and image reconstruction algorithm,” in Laser-Tissue Interaction X: Photochemical, Photothermal, and Photomechanical, vol. 3601 (International Society for Optics and Photonics, 1999), pp. 256–267.

E. Maneas, R. Aughwane, N. Huynh, W. Xia, O. J. Ansari, and J. Deprest, “Photoacoustic imaging of the human placental vasculature,” J. Biophotonics (2019).

K. J. Francis, E. Rascevska, and S. Manohar, “Photoacoustic imaging assisted radiofrequency ablation: illumination strategies and prospects,” in TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON) (IEEE, 2019), pp. 118–122.

J. Leskinen, A. Pulkkinen, J. Tick, and T. Tarvainen, “Photoacoustic tomography setup using LED illumination,” in Opto-Acoustic Methods and Applications in Biophotonics IV, vol. 11077 (International Society for Optics and Photonics, 2019), p. 110770Q.
[Crossref]

P. J. van den Berg, K. Daoudi, H. J. B. Moens, and W. Steenbergen, “Feasibility of photoacoustic/ultrasound imaging of synovitis in finger joints using a point-of-care system,” Photoacoustics 8, 8–14 (2017).
[Crossref]

P. van Es, S. K. Biswas, H. J. B. Moens, W. Steenbergen, and S. Manohar, “Initial results of finger imaging using photoacoustic computed tomography,” J. Biomed. Opt. 19(6), 060501 (2014).
[Crossref]

J. Jose, R. G. Willemink, W. Steenbergen, C. H. Slump, T. G. van Leeuwen, and S. Manohar, “Speed-of-sound compensated photoacoustic tomography for accurate imaging,” Med. Phys. 39(12), 7262–7271 (2012).
[Crossref]

R. G. Kolkman, P. J. Brands, W. Steenbergen, and T. G. van Leeuwen, “Real-time in vivo photoacoustic and ultrasound imaging,” J. Biomed. Opt. 13(5), 050510 (2008).
[Crossref]

Steinberg, I.

I. Steinberg, D. M. Huland, O. Vermesh, H. E. Frostig, W. S. Tummers, and S. S. Gambhir, “Photoacoustic clinical imaging,” Photoacoustics 14, 77–98 (2019).
[Crossref]

Sun, M.

X. Lin, J. Yu, N. Feng, and M. Sun, “Synthetic aperture-based linear-array photoacoustic tomography considering the aperture orientation effect,” J. Innovative Opt. Health Sci. 11(04), 1850015 (2018).
[Crossref]

Tarvainen, T.

J. Leskinen, A. Pulkkinen, J. Tick, and T. Tarvainen, “Photoacoustic tomography setup using led illumination,” in Opto-Acoustic Methods and Applications in Biophotonics IV, vol. 11077 (International Society for Optics and Photonics, 2019), p. 110770Q.

Tick, J.

J. Leskinen, A. Pulkkinen, J. Tick, and T. Tarvainen, “Photoacoustic tomography setup using led illumination,” in Opto-Acoustic Methods and Applications in Biophotonics IV, vol. 11077 (International Society for Optics and Photonics, 2019), p. 110770Q.

Treeby, B. E.

B. E. Treeby and B. T. Cox, “k-wave: Matlab toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt. 15(2), 021314 (2010).
[Crossref]

Tretbar, S.

M. Oeri, W. Bost, S. Tretbar, and M. Fournelle, “Calibrated linear array-driven photoacoustic/ultrasound tomography,” Ultrasound Med. Biol. 42(11), 2697–2707 (2016).
[Crossref]

Tummers, W. S.

I. Steinberg, D. M. Huland, O. Vermesh, H. E. Frostig, W. S. Tummers, and S. S. Gambhir, “Photoacoustic clinical imaging,” Photoacoustics 14, 77–98 (2019).
[Crossref]

van den Berg, P. J.

P. J. van den Berg, K. Daoudi, H. J. B. Moens, and W. Steenbergen, “Feasibility of photoacoustic/ultrasound imaging of synovitis in finger joints using a point-of-care system,” Photoacoustics 8, 8–14 (2017).
[Crossref]

van Es, P.

P. van Es, S. K. Biswas, H. J. B. Moens, W. Steenbergen, and S. Manohar, “Initial results of finger imaging using photoacoustic computed tomography,” J. Biomed. Opt. 19(6), 060501 (2014).
[Crossref]

Van Gils, S. A.

Y. E. Boink, M. J. Lagerwerf, W. Steenbergen, S. A. Van Gils, S. Manohar, and C. Brune, “A framework for directional and higher-order reconstruction in photoacoustic tomography,” Phys. Medicine & Biol. 63(4), 045018 (2018).
[Crossref]

Van Ginneken, B.

J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. Van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Trans. Med. Imaging 23(4), 501–509 (2004).
[Crossref]

van Leeuwen, T. G.

J. Jose, R. G. Willemink, W. Steenbergen, C. H. Slump, T. G. van Leeuwen, and S. Manohar, “Speed-of-sound compensated photoacoustic tomography for accurate imaging,” Med. Phys. 39(12), 7262–7271 (2012).
[Crossref]

R. G. Kolkman, P. J. Brands, W. Steenbergen, and T. G. van Leeuwen, “Real-time in vivo photoacoustic and ultrasound imaging,” J. Biomed. Opt. 13(5), 050510 (2008).
[Crossref]

Vermesh, O.

I. Steinberg, D. M. Huland, O. Vermesh, H. E. Frostig, W. S. Tummers, and S. S. Gambhir, “Photoacoustic clinical imaging,” Photoacoustics 14, 77–98 (2019).
[Crossref]

Viergever, M. A.

J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. Van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Trans. Med. Imaging 23(4), 501–509 (2004).
[Crossref]

Wang, L. V.

J. Yao and L. V. Wang, “Recent progress in photoacoustic molecular imaging,” Curr. Opin. Chem. Biol. 45, 104–112 (2018).
[Crossref]

G. Li, L. Li, L. Zhu, J. Xia, and L. V. Wang, “Multiview hilbert transformation for full-view photoacoustic computed tomography using a linear array,” J. Biomed. Opt. 20(6), 066010 (2015).
[Crossref]

L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science 335(6075), 1458–1462 (2012).
[Crossref]

L. V. Wang, “Multiscale photoacoustic microscopy and computed tomography,” Nat. Photonics 3(9), 503–509 (2009).
[Crossref]

J. Gamelin, A. Maurudis, A. Aguirre, F. Huang, P. Guo, L. V. Wang, and Q. Zhu, “A real-time photoacoustic tomography system for small animals,” Opt. Express 17(13), 10489–10498 (2009).
[Crossref]

Y. Xu, L. V. Wang, G. Ambartsoumian, and P. Kuchment, “Reconstructions in limited-view thermoacoustic tomography,” Med. Phys. 31(4), 724–733 (2004).
[Crossref]

Wang, X.

Y. Zhu, G. Xu, J. Yuan, J. Jo, G. Gandikota, H. Demirci, T. Agano, N. Sato, Y. Shigeta, and X. Wang, “Light emitting diodes based photoacoustic imaging and potential clinical applications,” Sci. Rep. 8(1), 9885 (2018).
[Crossref]

Wang, Z.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

West, S. J

W. Xia, M. Kuniyil Ajith Singh, E. Maneas, N. Sato, Y. Shigeta, T. Agano, S. Ourselin, S. J West, and A. E Desjardins, “Handheld real-time led-based photoacoustic and ultrasound imaging system for accurate visualization of clinical metal needles and superficial vasculature to guide minimally invasive procedures,” Sensors 18(5), 1394 (2018).
[Crossref]

Wicker, K.

M. Omar, J. Rebling, K. Wicker, T. Schmitt-Manderbach, M. Schwarz, J. Gateau, H. López-Schier, T. Mappes, and V. Ntziachristos, “Optical imaging of post-embryonic zebrafish using multi orientation raster scan optoacoustic mesoscopy,” Light: Sci. Appl. 6(1), e16186 (2017).
[Crossref]

Willemink, R. G.

J. Jose, R. G. Willemink, W. Steenbergen, C. H. Slump, T. G. van Leeuwen, and S. Manohar, “Speed-of-sound compensated photoacoustic tomography for accurate imaging,” Med. Phys. 39(12), 7262–7271 (2012).
[Crossref]

Xia, J.

A. Fatima, K. Kratkiewicz, R. Manwar, M. Zafar, R. Zhang, B. Huang, N. Dadashzadesh, J. Xia, and M. Avanaki, “Review of cost reduction methods in photoacoustic computed tomography,” Photoacoustics 15, 100137 (2019).
[Crossref]

G. Li, L. Li, L. Zhu, J. Xia, and L. V. Wang, “Multiview hilbert transformation for full-view photoacoustic computed tomography using a linear array,” J. Biomed. Opt. 20(6), 066010 (2015).
[Crossref]

Xia, W.

W. Xia, M. Kuniyil Ajith Singh, E. Maneas, N. Sato, Y. Shigeta, T. Agano, S. Ourselin, S. J West, and A. E Desjardins, “Handheld real-time led-based photoacoustic and ultrasound imaging system for accurate visualization of clinical metal needles and superficial vasculature to guide minimally invasive procedures,” Sensors 18(5), 1394 (2018).
[Crossref]

E. Maneas, R. Aughwane, N. Huynh, W. Xia, O. J. Ansari, and J. Deprest, “Photoacoustic imaging of the human placental vasculature,” Journal of biophotonics (2019).

Xiang, L.

Xing, D.

Xu, G.

Y. Zhu, G. Xu, J. Yuan, J. Jo, G. Gandikota, H. Demirci, T. Agano, N. Sato, Y. Shigeta, and X. Wang, “Light emitting diodes based photoacoustic imaging and potential clinical applications,” Sci. Rep. 8(1), 9885 (2018).
[Crossref]

Xu, Y.

Y. Xu, L. V. Wang, G. Ambartsoumian, and P. Kuchment, “Reconstructions in limited-view thermoacoustic tomography,” Med. Phys. 31(4), 724–733 (2004).
[Crossref]

Xue, C.

C. Liu, B. Zhang, C. Xue, W. Zhang, G. Zhang, and Y. Cheng, “Multi-perspective ultrasound imaging technology of the breast with cylindrical motion of linear arrays,” Appl. Sci. 9(3), 419 (2019).
[Crossref]

Yang, D.

Yang, S.

Yao, J.

J. Yao and L. V. Wang, “Recent progress in photoacoustic molecular imaging,” Curr. Opin. Chem. Biol. 45, 104–112 (2018).
[Crossref]

Yu, J.

X. Lin, J. Yu, N. Feng, and M. Sun, “Synthetic aperture-based linear-array photoacoustic tomography considering the aperture orientation effect,” J. Innovative Opt. Health Sci. 11(04), 1850015 (2018).
[Crossref]

Yu, L.

L. Yu, F. Nina-Paravecino, D. R. Kaeli, and Q. Fang, “Scalable and massively parallel monte carlo photon transport simulations for heterogeneous computing platforms,” J. Biomed. Opt. 23(1), 010504 (2018).
[Crossref]

Yuan, J.

Y. Zhu, G. Xu, J. Yuan, J. Jo, G. Gandikota, H. Demirci, T. Agano, N. Sato, Y. Shigeta, and X. Wang, “Light emitting diodes based photoacoustic imaging and potential clinical applications,” Sci. Rep. 8(1), 9885 (2018).
[Crossref]

Zafar, M.

A. Fatima, K. Kratkiewicz, R. Manwar, M. Zafar, R. Zhang, B. Huang, N. Dadashzadesh, J. Xia, and M. Avanaki, “Review of cost reduction methods in photoacoustic computed tomography,” Photoacoustics 15, 100137 (2019).
[Crossref]

Zhang, B.

C. Liu, B. Zhang, C. Xue, W. Zhang, G. Zhang, and Y. Cheng, “Multi-perspective ultrasound imaging technology of the breast with cylindrical motion of linear arrays,” Appl. Sci. 9(3), 419 (2019).
[Crossref]

Zhang, G.

C. Liu, B. Zhang, C. Xue, W. Zhang, G. Zhang, and Y. Cheng, “Multi-perspective ultrasound imaging technology of the breast with cylindrical motion of linear arrays,” Appl. Sci. 9(3), 419 (2019).
[Crossref]

Zhang, R.

A. Fatima, K. Kratkiewicz, R. Manwar, M. Zafar, R. Zhang, B. Huang, N. Dadashzadesh, J. Xia, and M. Avanaki, “Review of cost reduction methods in photoacoustic computed tomography,” Photoacoustics 15, 100137 (2019).
[Crossref]

Zhang, W.

C. Liu, B. Zhang, C. Xue, W. Zhang, G. Zhang, and Y. Cheng, “Multi-perspective ultrasound imaging technology of the breast with cylindrical motion of linear arrays,” Appl. Sci. 9(3), 419 (2019).
[Crossref]

Zhu, L.

G. Li, L. Li, L. Zhu, J. Xia, and L. V. Wang, “Multiview hilbert transformation for full-view photoacoustic computed tomography using a linear array,” J. Biomed. Opt. 20(6), 066010 (2015).
[Crossref]

Zhu, Q.

Zhu, Y.

Y. Zhu, G. Xu, J. Yuan, J. Jo, G. Gandikota, H. Demirci, T. Agano, N. Sato, Y. Shigeta, and X. Wang, “Light emitting diodes based photoacoustic imaging and potential clinical applications,” Sci. Rep. 8(1), 9885 (2018).
[Crossref]

Appl. Sci. (1)

C. Liu, B. Zhang, C. Xue, W. Zhang, G. Zhang, and Y. Cheng, “Multi-perspective ultrasound imaging technology of the breast with cylindrical motion of linear arrays,” Appl. Sci. 9(3), 419 (2019).
[Crossref]

Biomed. Opt. Express (1)

Chem. Rev. (1)

V. Ntziachristos and D. Razansky, “Molecular imaging by means of multispectral optoacoustic tomography (msot),” Chem. Rev. 110(5), 2783–2794 (2010).
[Crossref]

Curr. Opin. Chem. Biol. (1)

J. Yao and L. V. Wang, “Recent progress in photoacoustic molecular imaging,” Curr. Opin. Chem. Biol. 45, 104–112 (2018).
[Crossref]

IEEE Trans. Med. Imaging (2)

H. J. Kang, M. A. L. Bell, X. Guo, and E. M. Boctor, “Spatial angular compounding of photoacoustic images,” IEEE Trans. Med. Imaging 35(8), 1845–1855 (2016).
[Crossref]

J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. Van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Trans. Med. Imaging 23(4), 501–509 (2004).
[Crossref]

IEEE Trans. on Image Process. (1)

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

IEEE Trans. Ultrason., Ferroelect., Freq. Contr. (1)

E. Mercep, G. Jeng, S. Morscher, P.-C. Li, and D. Razansky, “Hybrid optoacoustic tomography and pulse-echo ultrasonography using concave arrays,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 62(9), 1651–1661 (2015).
[Crossref]

Interface Focus (1)

P. Beard, “Biomedical photoacoustic imaging,” Interface Focus 1(4), 602–631 (2011).
[Crossref]

Inverse Problems (1)

M. Jaeger, S. Schüpbach, A. Gertsch, M. Kitz, and M. Frenz, “Fourier reconstruction in optoacoustic imaging using truncated regularized inverse k-space interpolation,” Inverse Problems 23(6), S51–S63 (2007).
[Crossref]

J. Biomed. Opt. (5)

L. Yu, F. Nina-Paravecino, D. R. Kaeli, and Q. Fang, “Scalable and massively parallel monte carlo photon transport simulations for heterogeneous computing platforms,” J. Biomed. Opt. 23(1), 010504 (2018).
[Crossref]

B. E. Treeby and B. T. Cox, “k-wave: Matlab toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt. 15(2), 021314 (2010).
[Crossref]

P. van Es, S. K. Biswas, H. J. B. Moens, W. Steenbergen, and S. Manohar, “Initial results of finger imaging using photoacoustic computed tomography,” J. Biomed. Opt. 19(6), 060501 (2014).
[Crossref]

G. Li, L. Li, L. Zhu, J. Xia, and L. V. Wang, “Multiview hilbert transformation for full-view photoacoustic computed tomography using a linear array,” J. Biomed. Opt. 20(6), 066010 (2015).
[Crossref]

R. G. Kolkman, P. J. Brands, W. Steenbergen, and T. G. van Leeuwen, “Real-time in vivo photoacoustic and ultrasound imaging,” J. Biomed. Opt. 13(5), 050510 (2008).
[Crossref]

J. Innovative Opt. Health Sci. (1)

X. Lin, J. Yu, N. Feng, and M. Sun, “Synthetic aperture-based linear-array photoacoustic tomography considering the aperture orientation effect,” J. Innovative Opt. Health Sci. 11(04), 1850015 (2018).
[Crossref]

Light: Sci. Appl. (1)

M. Omar, J. Rebling, K. Wicker, T. Schmitt-Manderbach, M. Schwarz, J. Gateau, H. López-Schier, T. Mappes, and V. Ntziachristos, “Optical imaging of post-embryonic zebrafish using multi orientation raster scan optoacoustic mesoscopy,” Light: Sci. Appl. 6(1), e16186 (2017).
[Crossref]

Med. Phys. (4)

J. Gateau, M. Á. A. Caballero, A. Dima, and V. Ntziachristos, “Three-dimensional optoacoustic tomography using a conventional ultrasound linear detector array: Whole-body tomographic system for small animals,” Med. Phys. 40(1), 013302 (2012).
[Crossref]

Y. Xu, L. V. Wang, G. Ambartsoumian, and P. Kuchment, “Reconstructions in limited-view thermoacoustic tomography,” Med. Phys. 31(4), 724–733 (2004).
[Crossref]

R. A. Kruger, W. L. Kiser Jr, D. R. Reinecke, and G. A. Kruger, “Thermoacoustic computed tomography using a conventional linear transducer array,” Med. Phys. 30(5), 856–860 (2003).
[Crossref]

J. Jose, R. G. Willemink, W. Steenbergen, C. H. Slump, T. G. van Leeuwen, and S. Manohar, “Speed-of-sound compensated photoacoustic tomography for accurate imaging,” Med. Phys. 39(12), 7262–7271 (2012).
[Crossref]

Nat. Photonics (1)

L. V. Wang, “Multiscale photoacoustic microscopy and computed tomography,” Nat. Photonics 3(9), 503–509 (2009).
[Crossref]

Opt. Express (2)

Photoacoustics (7)

S. Jeon, E.-Y. Park, W. Choi, R. Managuli, K. jong Lee, and C. Kim, “Real-time delay-multiply-and-sum beamforming with coherence factor for in vivo clinical photoacoustic imaging of humans,” Photoacoustics 15, 100136 (2019).
[Crossref]

K. J. Francis, B. Chinni, S. S. Channappayya, R. Pachamuthu, V. S. Dogra, and N. Rao, “Characterization of lens based photoacoustic imaging system,” Photoacoustics 8, 37–47 (2017).
[Crossref]

A. Fatima, K. Kratkiewicz, R. Manwar, M. Zafar, R. Zhang, B. Huang, N. Dadashzadesh, J. Xia, and M. Avanaki, “Review of cost reduction methods in photoacoustic computed tomography,” Photoacoustics 15, 100137 (2019).
[Crossref]

I. Steinberg, D. M. Huland, O. Vermesh, H. E. Frostig, W. S. Tummers, and S. S. Gambhir, “Photoacoustic clinical imaging,” Photoacoustics 14, 77–98 (2019).
[Crossref]

S. Manohar and M. Dantuma, “Current and future trends in photoacoustic breast imaging,” Photoacoustics 16, 100134 (2019).
[Crossref]

P. J. van den Berg, K. Daoudi, H. J. B. Moens, and W. Steenbergen, “Feasibility of photoacoustic/ultrasound imaging of synovitis in finger joints using a point-of-care system,” Photoacoustics 8, 8–14 (2017).
[Crossref]

K. J. Francis, B. Chinni, S. S. Channappayya, R. Pachamuthu, V. S. Dogra, and N. Rao, “Multiview spatial compounding using lens-based photoacoustic imaging system,” Photoacoustics 13, 85–94 (2019).
[Crossref]

Phys. Med. Biol. (1)

K. J. Francis and S. Manohar, “Photoacoustic imaging in percutaneous radiofrequency ablation: device guidance and ablation visualization,” Phys. Med. Biol. 64(18), 184001 (2019).
[Crossref]

Phys. Medicine & Biol. (2)

S. L. Jacques, “Optical properties of biological tissues: a review,” Phys. Medicine & Biol. 58(11), R37–R61 (2013).
[Crossref]

Y. E. Boink, M. J. Lagerwerf, W. Steenbergen, S. A. Van Gils, S. Manohar, and C. Brune, “A framework for directional and higher-order reconstruction in photoacoustic tomography,” Phys. Medicine & Biol. 63(4), 045018 (2018).
[Crossref]

Sci. Rep. (1)

Y. Zhu, G. Xu, J. Yuan, J. Jo, G. Gandikota, H. Demirci, T. Agano, N. Sato, Y. Shigeta, and X. Wang, “Light emitting diodes based photoacoustic imaging and potential clinical applications,” Sci. Rep. 8(1), 9885 (2018).
[Crossref]

Science (1)

L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science 335(6075), 1458–1462 (2012).
[Crossref]

Sensors (3)

W. Xia, M. Kuniyil Ajith Singh, E. Maneas, N. Sato, Y. Shigeta, T. Agano, S. Ourselin, S. J West, and A. E Desjardins, “Handheld real-time led-based photoacoustic and ultrasound imaging system for accurate visualization of clinical metal needles and superficial vasculature to guide minimally invasive procedures,” Sensors 18(5), 1394 (2018).
[Crossref]

C. Lutzweiler and D. Razansky, “Optoacoustic imaging and tomography: reconstruction approaches and outstanding challenges in image performance and quantification,” Sensors 13(6), 7345–7384 (2013).
[Crossref]

S. Agrawal, C. Fadden, A. Dangi, and S.-R. Kothapalli, “Light-emitting-diode-based multispectral photoacoustic computed tomography system,” Sensors 19(22), 4861 (2019).
[Crossref]

Ultrasound Med. Biol. (1)

M. Oeri, W. Bost, S. Tretbar, and M. Fournelle, “Calibrated linear array-driven photoacoustic/ultrasound tomography,” Ultrasound Med. Biol. 42(11), 2697–2707 (2016).
[Crossref]

Other (4)

K. J. Francis, E. Rascevska, and S. Manohar, “Photoacoustic imaging assisted radiofrequency ablation: Illumination strategies and prospects,” in TENCON 2019-2019 IEEE Region 10 Conference (TENCON) (IEEE, 2019), pp. 118–122.

A. A. Oraevsky, V. A. Andreev, A. A. Karabutov, and R. O. Esenaliev, “Two-dimensional optoacoustic tomography: transducer array and image reconstruction algorithm,” in Laser-Tissue Interaction X: Photochemical, Photothermal, and Photomechanical, vol. 3601 (International Society for Optics and Photonics, 1999), pp. 256–267.

E. Maneas, R. Aughwane, N. Huynh, W. Xia, O. J. Ansari, and J. Deprest, “Photoacoustic imaging of the human placental vasculature,” Journal of biophotonics (2019).

J. Leskinen, A. Pulkkinen, J. Tick, and T. Tarvainen, “Photoacoustic tomography setup using led illumination,” in Opto-Acoustic Methods and Applications in Biophotonics IV, vol. 11077 (International Society for Optics and Photonics, 2019), p. 110770Q.


Figures (6)

Fig. 1. System configurations and finger joint imager. (a) Illumination from the top using four LED array units. (b) Illumination from the side of the sample, with two LED units parallel to the long axis of the array ($30.8^{\circ }$ with the imaging plane) and two on either side of the sample ($105^{\circ }$ with the transducer). (c) Schematic and (d) photograph of the finger joint imager.
Fig. 2. Image quality with an increasing number of angular views. (a) Ground truth image of the digital phantom used as the photoacoustic initial pressure. (b) Image quality measured with the structural similarity (SSIM) index and peak signal-to-noise ratio (PSNR) for an increasing number of equispaced angular views. (c)-(f) Reconstructed photoacoustic tomographic images from 1, 4, 16, and 64 angular views, respectively.
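The SSIM and PSNR evaluation behind Fig. 2(b) can be sketched in a few lines of NumPy. The snippet below is an illustrative sketch, not the authors' code: it assumes images normalized to a data range of 1.0, and it uses a single-window (global-statistics) SSIM as a simplification of the sliding-window index of Wang et al., so its absolute values differ from windowed SSIM even though the trend with increasing view count is comparable.

```python
import numpy as np


def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a reconstruction."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)


def ssim_global(ref, img, data_range=1.0):
    """Single-window (global) SSIM: one set of means/variances over the whole
    image, a simplification of the usual sliding-window SSIM."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

Evaluating reconstructions from 1, 4, 16, and 64 views against the ground-truth initial pressure with these two functions yields monotonically improving scores, mirroring the curves in Fig. 2(b).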
Fig. 3. A simulation study comparing the top and side illumination configurations. (a)-(b) Normalized optical fluence maps for the two configurations, respectively. For side illumination, the probe is positioned on the right side of the phantom. (c) Ground truth vascular phantom. (d)-(e) Initial pressure obtained from top and side illumination, respectively. (f)-(g) Reconstructed and normalized tomographic images from 16 angular views. (h)-(i) Comparison of line profiles between the ground truth (c) and the reconstructed images (f) and (g), along horizontal (green) and vertical (white) lines passing through the center of the phantom, respectively.
Fig. 4. Photoacoustic and ultrasound tomographic imaging of a leaf phantom. (a) Photograph of a leaf skeleton stained with India ink and embedded in an agar phantom. (b) Photoacoustic tomographic image of the phantom using top illumination and (c) the corresponding ultrasound image. (d)-(g) Photoacoustic tomographic images obtained from 1, 4, 12, and 18 angular views, respectively.
Fig. 5. Photoacoustic and ultrasound tomographic imaging of an ex vivo mouse knee. (a) Photograph of the sample. Tomographic images from 18 angular views formed using (b) plane-wave and (c) B-mode ultrasound imaging from multiple angles. (d) Photoacoustic image using top illumination. (e) Overlaid photoacoustic and ultrasound image.
Fig. 6. In vivo finger joint tomographic imaging using the side illumination configuration. (a) Overlaid photoacoustic and ultrasound maximum intensity projection image showing the finger joint from a linear scan. (b)-(d) Photoacoustic, ultrasound, and combined tomographic images of the finger joint (p1). (e)-(g) Photoacoustic, ultrasound, and combined tomographic images 5 mm in front of the joint (p2), respectively. (The dynamic range of the color bar does not apply to the maximum intensity projection image in (a).)
