Optica Publishing Group

Reducing artifacts in photoacoustic imaging by using multi-wavelength excitation and transducer displacement

Open Access

Abstract

The occurrence of artifacts is a major challenge in photoacoustic imaging. Artifacts negatively affect the quality and reliability of the images. An approach using multi-wavelength excitation has previously been reported for in-plane artifact identification; however, out-of-plane artifacts cannot be tackled with that method. Here we propose a new method based on ultrasound transducer array displacement. By displacing the ultrasound transducer array axially, we can de-correlate out-of-plane artifacts from in-plane image features and thus remove them. Combining this new method with the previous one can potentially remove both in-plane and out-of-plane artifacts completely in photoacoustic imaging. We demonstrate this experimentally in phantoms as well as in vivo.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Recent research has shown numerous potential clinical applications of photoacoustic imaging (PAI) [1–3]. This imaging technique is based on the photoacoustic (PA) effect: samples are illuminated with short-pulsed laser light, and the local absorption of light generates ultrasound (US) waves which are then detected by a US transducer. PA images are reconstructed from the detected signals, providing localized information about the optical absorption properties of the samples. In clinical applications, the obtained information on endogenous chromophores such as hemoglobin helps in diagnosing early stages of various diseases [2,4–6].

A typical PAI system consists of a light source and a US transducer array. Transducer arrays can be classified as one-dimensional or two-dimensional [7]. While a two-dimensional transducer array provides 3D images, it requires significant user effort and experience to acquire and interpret these 3D images [8]. Additionally, two-dimensional transducer arrays and the associated scanners are unaffordable for many clinical applications [8]. One-dimensional transducer arrays, in contrast, are widely used for clinical studies [9,10], so from the point of view of clinical translation the incorporation of PAI in a one-dimensional array is preferred.

Several compact and low-cost PAI systems for clinical use have been developed; integrating a laser source into a handheld US probe stands out among the approaches [9–12]. However, the occurrence of artifacts related to acoustic inhomogeneity of the tissue (clutter) is a major drawback of using a linear US transducer array. The artifacts addressed in this work include in-plane artifacts (IPAs), also called reflection artifacts, and out-of-plane artifacts (OPAs). While IPAs are caused by signals being reflected inside the imaging plane, OPAs are caused by absorbers located outside the imaging plane of the transducer array [13,14]. These artifacts appear as real image features, such as blood vessels, in the acquired image, leading to misinterpretation. Correcting artifacts in PAI is therefore important.

Previously, we proposed a method, photoacoustic-guided focused ultrasound (PAFUSion), to reduce IPAs [13,15]. This method has several limitations: a large number of US images is needed; the imaging plane has to be perpendicular to the PA sources; and it is limited to reflectors within the angular aperture of the US probe. We then proposed another method using multi-wavelength excitation [14] which overcomes those limitations. In this method, the sample is imaged with laser light at multiple wavelengths to obtain the PA spectral response of the image features. It is assumed that the spectral response of an IPA is highly correlated with that of its corresponding real absorber. Additionally, due to the longer propagation path of reflected US waves, IPAs appear at larger depths and with weaker signals than their corresponding real features. These properties can therefore reveal IPAs. However, neither this method nor PAFUSion works for OPAs.

Several approaches for reducing OPAs have been reported, such as Deformation Compensated Averaging (DCA) and Localized Vibration Tagging (LOVIT) [16–18]. These approaches employ tissue deformation and motion tracking to de-correlate the OPAs and can remove OPAs almost completely. However, they strongly rely on motion tracking and only work for easily deformable tissue. Additionally, the assumption that OPAs are deformed differently from in-plane features might be incorrect. Furthermore, a large number of US frames is required in these methods.

In this work, we propose a method for suppression of the OPAs by axially displacing the US transducer array. During the displacement, OPAs move up while in-plane features remain at their original position relative to the initial position of the transducer array. Comparing a sequence of images acquired during the displacement can therefore remove OPAs. We then combine this method with our previous work using multi-wavelength excitation to identify and remove both IPAs and OPAs.

We demonstrate the method in phantoms and in vivo. Results show that this is a promising approach for correcting artifacts in PAI.

2. Theory

2.1. Artifacts in photoacoustic imaging

Figure 1 illustrates how artifacts arise in PAI. Figure 1(a) shows a configuration in which several artifacts are present in the acquired image, Fig. 1(b). This acquired image represents the imaging plane (orange plane in Fig. 1(a)) of the transducer array. There are two absorbers in this configuration: one in the imaging plane and the other outside of it. Since the light source excites a large volume, an absorber located outside the imaging plane may absorb the light and generate signals. If the sensitivity of the transducer array at that angle is high enough [19], the out-of-plane absorber appears in the acquired image as an OPA, also called a direct OPA (downward yellow arrow). In addition, underneath these absorbers are one or more acoustic reflectors. As a consequence, two more artifacts appear in the acquired image: an IPA, also called a reflection artifact (reflection of the in-plane absorber, upward dashed blue arrow), and an indirect OPA (reflection of the out-of-plane absorber, downward dashed yellow arrow).

Fig. 1 Artifacts in PAI. (a) Configuration resulting in artifacts. (b) Acquired PA image.

Only one feature (upward blue arrow) should be present in the correct PA image of this configuration. The three additional visible features, which are artifacts, might lead to misinterpretation. In clinical imaging, direct OPAs might originate from features on the skin surface such as pigmented spots and birthmarks, while skin and bone layers, which are strong acoustic reflectors, can cause indirect OPAs.

3. Method

3.1. Transducer array displacement

The principle of this method for identifying OPAs is based on their depth in the acquired image. OPAs appear at depths equal to the distance of their corresponding absorbers to the transducer array. If the transducer array is axially displaced upwards, in-plane features move down in the acquired image by exactly the displacement distance, while the OPAs move down by a lesser amount. In the coordinate system of the initial position (before the displacement) of the probe, the in-plane features thus stay at the same position whereas the OPAs move up. By exploiting these different behaviors, OPAs can be differentiated from the in-plane features.

Figure 2 shows a configuration with one in-plane and one out-of-plane absorber. The x, y, and z axes represent the elevational, lateral, and axial axes respectively. Figure 2(c) shows a PA image of the configuration depicted in Fig. 2(a). The transducer array is then axially shifted up by Δz, as seen in Fig. 2(b), and another PA image is acquired at the new position of the transducer array, Fig. 2(d). An OPA is present in both acquired images.

Fig. 2 Displacing the transducer array. (a) Initial, and (b) displaced configuration. (c) Acquired PA image of configuration (a). (d) Acquired PA image of configuration (b).

At the initial position of the transducer array, Fig. 2(a), the distance between the out-of-plane absorber and the transducer is r. The OPA therefore appears at a depth of r in the acquired image, as seen in Fig. 2(c). After the transducer array is shifted up, the distance between the out-of-plane absorber and the transducer array is r′. Since r′ < r + Δz, the OPA appears, relative to the initial position of the transducer array, at a smaller depth than in the previous image, while the in-plane feature stays at the same position. Exploiting these different behaviors, the OPA can be identified and potentially removed.

Denote by s the shift of the OPA. It is determined as s = r + Δz − r′, where r = √(xo² + zo²) and r′ = √(xo² + (zo + Δz)²). s, therefore, is a function of Δz and can be rewritten as:

s(Δz) = Δz + √(xo² + zo²) − √(xo² + (zo + Δz)²). (1)
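As a numerical sketch of Eq. (1) (the function name and example coordinates below are our own, not from the paper), the shift can be computed as:

```python
import numpy as np

def opa_shift(dz, x_o, z_o):
    """Apparent upward shift s of an OPA (Eq. 1) after the transducer
    array is displaced up by dz. x_o is the elevational distance of the
    out-of-plane absorber to the imaging plane; z_o is its axial
    distance to the initial array position."""
    r = np.hypot(x_o, z_o)            # initial absorber-to-array distance
    r_new = np.hypot(x_o, z_o + dz)   # distance after the displacement
    return dz + r - r_new
```

An in-plane feature (x_o = 0) gives a shift of exactly zero for any displacement, while an out-of-plane feature shifts by an amount between 0 and Δz, consistent with the behavior described above.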

Images acquired along the displacement are cropped to remove the extra part introduced by the displacement, so that all images have the same size. Assume that they are perfectly segmented to separate features from background. The behavior along the transducer array displacement, Δz, of the in-plane and out-of-plane features on the dashed and dotted lines in Fig. 3(a) is depicted in Fig. 3(b) and Fig. 3(c) respectively. Figures 3(b) and 3(c) show a sequence of segmented acquired images where blue and yellow represent background and foreground (features) respectively. It can be seen that the in-plane feature remains at the same position while the out-of-plane feature moves up.

Fig. 3 Behavior of in-plane and out-of-plane features, (b) and (c) on the dotted and dashed lines in (a) respectively, as an effect of the transducer array displacement. (d) OPA correction along the displacement.

In the segmented images, background and foreground pixels are set to 0 and 1 respectively. Since the OPA moves along Δz into the background of the previous images, as seen in Fig. 3(c), multiplying all segmented images together gradually removes the OPA, as seen in Fig. 3(d). The in-plane feature, in contrast, is not affected. To remove the OPA completely in this way, the OPA must move entirely out of its initial position. At Δz = ΔzT and beyond, the entire OPA is in the background of the initial image and is therefore completely removed.
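The multiplication step can be sketched with synthetic segmented frames (the array size and feature positions below are illustrative assumptions):

```python
import numpy as np

# Three segmented frames along the displacement: an in-plane feature
# stays at a fixed depth while an OPA moves up one pixel per frame.
frames = np.zeros((3, 8), dtype=int)   # (frame, depth) binary masks
for k in range(3):
    frames[k, 6] = 1       # in-plane feature: fixed position
    frames[k, 3 - k] = 1   # OPA: shifts up with each displacement step

# Pixel-wise product over the sequence acts as a logical AND:
corrected = np.prod(frames, axis=0)
assert corrected[6] == 1          # in-plane feature retained
assert corrected[:4].sum() == 0   # OPA removed once it left its start pixel
```

Because the OPA occupies a different pixel in every frame, the product is zero everywhere it ever appeared, while the stationary in-plane feature survives.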

To estimate ΔzT, we assume that the axial size of the OPA, a (seen in Fig. 3(c)), is a constant during the displacement. The shift of the OPA at Δz = ΔzT must then fulfill s(ΔzT) = a. Substituting Eq. (1) into this gives:

ΔzT + √(xo² + zo²) − √(xo² + (zo + ΔzT)²) = a. (2)

Solving this equation, ΔzT is determined as:

ΔzT = (2a√(xo² + zo²) − a²) / (2(√(xo² + zo²) − zo − a)). (3)

Equation (3) holds when the denominator is positive. This equation shows that ΔzT is dependent on the OPA’s axial size, a, the distance between the OPA and the imaging plane, xo, and the axial distance between the OPA and the transducer array, zo.
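Eq. (3) can be checked numerically against Eq. (2); the helper function and example parameters below are illustrative, not from the paper:

```python
import numpy as np

def dz_threshold(a, x_o, z_o):
    """Minimum displacement Delta z_T (Eq. 3) for an OPA of axial size a
    to move entirely out of its initial position."""
    r = np.hypot(x_o, z_o)
    denom = 2.0 * (r - z_o - a)
    if denom <= 0:
        raise ValueError("Eq. (3) requires a positive denominator")
    return (2.0 * a * r - a**2) / denom

# Consistency check: the returned value must satisfy s(dz_T) = a (Eq. 2).
a, x_o, z_o = 1.0, 5.0, 10.0
dz_t = dz_threshold(a, x_o, z_o)
shift = dz_t + np.hypot(x_o, z_o) - np.hypot(x_o, z_o + dz_t)
assert abs(shift - a) < 1e-9
```

Note that for absorbers close to the imaging plane (small xo relative to zo) the denominator shrinks, so ΔzT grows rapidly, matching the trends in Fig. 4.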

Figure 4 illustrates the dependence of ΔzT on these 3 parameters. It is worth noting that the larger zo is, the larger ΔzT is; the larger a is, the larger ΔzT is, as seen in Fig. 4(a); and the larger xo is, the smaller ΔzT is, as seen in Fig. 4(b).

Fig. 4 Dependence of ΔzT on the 3 parameters zo, a, and xo.

The analysis above considers a single OPA, for which two images are sufficient (one at the initial position and another at ΔzT or beyond) to remove the OPA. When multiple OPAs are present, ΔzT differs between them, and two images might be insufficient. On the other hand, the larger Δz is, the more OPAs can be removed. At a certain Δz, when all OPAs are removed, the corrected image no longer changes with further displacement since no OPAs remain. At this point, the process can be stopped, giving the corrected image.

The method is summarized in the following flow chart, Fig. 5. The segmentation algorithm, which is based on the Sobel edge detection algorithm [20], and the de-segmentation step are the same as presented in [14].
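The segmentation details are given in [14]; as a rough, self-contained sketch of a Sobel-based segmentation (the threshold fraction and the direct kernel application are our simplifications, not the paper's algorithm), one could write:

```python
import numpy as np

def sobel_segment(img, frac=0.2):
    """Threshold the Sobel gradient magnitude to obtain a binary
    feature mask. frac (fraction of the maximum magnitude) is an
    illustrative choice, not the paper's parameter."""
    kx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    mag = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            mag[i, j] = np.hypot((win * kx).sum(), (win * ky).sum())
    return mag > frac * mag.max()

# A bright square yields an edge mask along its boundary only.
img = np.zeros((20, 20)); img[5:10, 5:10] = 1.0
mask = sobel_segment(img)
assert mask[5, 5] and not mask[7, 7] and not mask[0, 0]
```

In practice a filled feature mask (rather than an edge mask) is needed for the multiplication step, which the de-segmentation step in the flow chart accounts for.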

Fig. 5 Flow chart of the transducer array displacement method.

To estimate ΔzT, we have assumed that the acquired images are perfectly segmented and that the axial size of the OPA does not change during the displacement. However, this is not the case in reality, as will be further discussed in the Discussion section. In addition, the displacement distance, Δz, might be limited; this will also be addressed in the Discussion section.

3.2. Combining two methods

We have previously reported a method for identifying IPAs (reflection artifacts) using multi-wavelength excitation [14]. By imaging with multiple wavelengths, spectral responses of features can be obtained. The spectral responses are then correlated to each other using the Pearson correlation coefficient [21]. Based on the correlation coefficients, IPAs can be revealed. However, this method does not work for OPAs. In this work, we combine the previous method with the transducer array displacement method described above as a complete method to remove all artifacts in PAI.
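To illustrate the spectral test (with made-up spectra; the correlation threshold and the omission of the depth condition are our simplifications):

```python
import numpy as np

# Assumed PA spectral responses at 8 wavelengths (720-790 nm): an IPA
# is a weaker, noisier copy of its real absorber's spectrum.
real = np.array([1.0, 0.9, 0.85, 0.8, 0.7, 0.65, 0.6, 0.55])
noise = np.array([0.01, -0.008, 0.006, -0.01, 0.009, -0.005, 0.007, -0.009])
artifact = 0.4 * real + noise

rho = np.corrcoef(real, artifact)[0, 1]   # Pearson correlation coefficient
# Flag as IPA: highly correlated spectrum with a weaker signal (the
# larger-depth condition is omitted here for brevity).
is_ipa = rho > 0.9 and artifact.max() < real.max()
assert is_ipa
```

Since the artifact spectrum is a scaled copy of the real one plus small noise, its Pearson correlation with the real spectrum stays close to 1 while its amplitude is clearly lower.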

The previous method for IPAs can work with or without segmentation. The approach without segmentation does not rely on the segmentation algorithm; however, it requires significantly more data processing time and computation. On the other hand, image segmentation is needed in the method for OPAs described above. Therefore, we use segmentation in this work.

Recall that in the previous method, of the images acquired at different wavelengths, the one with the highest signal was selected for segmentation. Since we combine the two methods here, the image acquired at the mutual wavelength is selected for segmentation instead. The two methods are combined in the following steps:

  1. Image with multiple wavelengths (λ1...λn).
  2. Image along the transducer array displacement, Δz, at the fixed wavelength λn.
  3. Segment the image acquired at λn, Δz = 0 mm.
  4. Correct IPAs.
  5. Correct OPAs, giving the final corrected image.
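The steps above can be sketched as follows; the threshold-based segmentation and the precomputed IPA keep-mask are stand-ins for the Sobel-based segmentation and the multi-wavelength correlation test described in the text:

```python
import numpy as np

def segment(img, thresh=0.5):
    """Stand-in for the Sobel-based segmentation (assumed threshold)."""
    return (img > thresh).astype(float)

def combined_correction(disp_stack, ipa_keep_mask):
    """Steps 3-5: disp_stack holds the PA images acquired along the
    displacement at the fixed wavelength (first frame at Delta z = 0);
    ipa_keep_mask is the binary mask (1 = real feature) produced by the
    multi-wavelength IPA test of steps 1 and 4."""
    opa_keep_mask = np.prod([segment(f) for f in disp_stack], axis=0)  # step 5
    return disp_stack[0] * ipa_keep_mask * opa_keep_mask

# Toy example: an in-plane feature at depth index 6, an OPA moving up.
frames = np.zeros((3, 8))
for k in range(3):
    frames[k, 6] = 1.0
    frames[k, 3 - k] = 0.8
out = combined_correction(frames, np.ones(8))
assert out[6] == 1.0 and out[3] == 0.0
```

The two corrections factor into independent masks applied to the Δz = 0 image, which is why the order of steps 4 and 5 does not affect the result in this simplified sketch.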

4. Setup

Figure 6 shows the experimental setup. A handheld US array probe is connected to a commercial US scanner MyLabOne (Esaote Europe BV, The Netherlands) which is presented in [14]. The US transducer array in the handheld probe comprises 128 elements with a pitch of 0.24 mm. It has a center frequency of 7.5 MHz with a bandwidth of 66%. In our study, the central 64 elements were used.

Fig. 6 Schematic drawing of the setup, with the probe movable in the vertical direction and the optical fiber in a fixed position.

The handheld probe is mounted on a motorized translation stage (MTS50A-Z8, Thorlabs, Germany) which can translate along the axial axis. The light source is an Opolette 532 (Laser2000, The Netherlands) which can emit laser light at a tunable wavelength in two ranges (680-960 nm or 1200-2400 nm) at a repetition rate of 20 Hz. Laser light is coupled into a custom-made multi-mode optical fiber bundle (LightGuideOptics Germany GmbH, Germany) with a core diameter of 6.5 mm. The fiber bundle is held in a fixed position. A function generator (AFG 3102, Tektronix, Germany) is used to synchronize the system and to externally trigger the laser. The hardware components are controlled by a LabVIEW program.

In our experiments, laser light at wavelengths in the range of 680-960 nm was used. We used the multi-wavelength method with 8 wavelengths rather than 4 as in [14]. The pulse energy at the output of the fiber was 4.4 ± 0.4 mJ.

5. Experimental results

We performed experiments in both phantoms and in vivo to demonstrate the method. In each experiment, a total number of 13 images were acquired: 1 US image, 8 PA images at 8 different wavelengths (720:10:790 nm), and 5 PA images at a wavelength of 790 nm (1 image at every 1 mm of total 5 mm displacement of the transducer array). The US image was reconstructed from 61 angles (−30:1:30°) of plane waves. The PA images were averaged over 100 pulses and reconstructed using a Fourier transform based reconstruction algorithm [22].

Since the PA images during the transducer array displacement were acquired at a wavelength of 790 nm, the image selected for segmentation when correcting IPAs using multiple wavelengths was the one at 790 nm instead of the image giving the highest signal as presented in [14].

Compared to the images in our previous study [14], the signal-to-noise ratio is improved due to the higher laser pulse energy. In presenting the results, the x, y, and z axes in the figures represent the elevational, lateral, and axial axes respectively.

5.1. Phantom 1

A phantom was made of two black absorbers with a thickness of ~0.7 mm, cut from a black thread, embedded in agarose (1.5%) in a petri dish lid (Greiner Bio-One GmbH, Germany), as shown in Fig. 7(a). The petri dish lid, 750 μm thick, was used as an acoustic reflector. Figure 7(b) is a schematic elevational view of the experiment configuration. One absorber was placed underneath the probe and the other was positioned ~3-4 mm off the probe’s edge to make sure it was outside of the imaging plane. The coupling medium was a suspension of 2% Intralipid 20% in demi-water with an estimated μs' = 4.2 cm−1 at a wavelength of 750 nm [23], mimicking scattering in human soft tissue [24]. A combined PA and US image is shown in Fig. 7(c). The grayscale part represents the US image, showing the surfaces and reverberations of the petri dish lid. The hot-colored part is the PA image, in which the in-plane absorber (upward blue arrow) is visualized at the expected position relative to the lid. Underneath the lid are some more image features (upward dashed blue arrow), which are IPAs of the in-plane absorber. A direct OPA (downward yellow arrow) is also visible at the expected position relative to the lid and the in-plane absorber. At the bottom left corner of the image are some features (downward dashed yellow arrow) which are probably indirect OPAs of the out-of-plane absorber.

Fig. 7 Phantom 1. (a) Two absorbers embedded in agarose in a petri dish lid. (b) Schematic elevation view of the experiment configuration. (c) Combined PA and US image.

Figure 8 shows the result of correcting IPAs using 8 wavelengths. Most of the IPAs (reflections of the in-plane absorber) are removed. However, some parts of the reflections remain (dashed blue arrows in Fig. 8(b)). The reason is likely that the spectral response of these features is strongly affected by noise, as was also pointed out in [14].

Fig. 8 Correcting IPAs. (a) Acquired PA image. (b) IPA corrected image.

The acquired image was then OPA corrected by displacing the transducer array. Figure 9(b) and Fig. 9(c) respectively show the behavior of the out-of-plane and in-plane features, marked with a dotted line and a dashed line in Fig. 9(a), during the displacement, as seen in Visualization 1. The out-of-plane features clearly shift up, especially the direct OPA. The shift of the indirect OPAs is less pronounced since they have a larger axial distance to the transducer array. The size of the in-plane features can be seen to change due to the depth dependence of the axial resolution of the transducer array. In addition, the position of the in-plane features also changes slightly. The reason might be that the in-plane absorber was slightly off the imaging plane, that the displacement was not perfectly along the axial axis, or that displacing the transducer array produced some positioning error.

Fig. 9 Behavior of in-plane and out-of-plane features, (b) and (c) on the dotted and dashed lines in (a) respectively, as an effect of the transducer array displacement.

Figure 10 presents the result of correcting OPAs with 5 mm of transducer array displacement, see also Visualization 2. One OPA (dashed yellow arrow in Fig. 10(b)) is not completely removed, indicating that ΔzT is larger than 5 mm for this OPA. Additionally, as there is some slight movement of the in-plane features, part of them is consequently overcorrected.

Fig. 10 Correcting OPAs. (a) Acquired PA image. (b) OPA corrected image.

Figure 11 shows the result of combining the two methods for IPA and OPA correction. Compared to the acquired PA image, Fig. 11(a), most of the artifacts are identified and removed, giving a final corrected image containing only the one true in-plane absorber, as seen in Fig. 11(b).

Fig. 11 Final corrected image.

The transducer array displacement method does not work for IPAs. In this experiment, a few more IPAs were nevertheless removed by this method due to their slight movement during the displacement.

5.2. Phantom 2

Another phantom was made to mimic a situation in which an OPA appears at the same position as an in-plane feature. Two black absorbers (the same as in phantom 1) were embedded in agarose (1.5%) in a petri dish, Fig. 12(a). A solution of 2% Intralipid 20% in demi-water was used as the coupling medium. The two absorbers were placed under the probe as in Fig. 12(b), where one absorber was in the imaging plane and the other was outside of it, with both absorbers at the same distance to the probe. As a consequence, the two absorbers appeared at the same position in the acquired PA image.

Fig. 12 Phantom 2. (a) Two black absorbers embedded in agarose in a petri dish. (b) Schematic elevation view of the experiment configuration. (c) Acquired images along the displacement.

Figure 12(c) shows the PA images during the transducer array displacement. At the initial position, Δz = 0 mm, only one feature is visible. The curve at the same position as this feature is a reconstruction artifact. When the probe is lifted up, Δz > 0 mm, another feature starts to appear and moves up along with the displacement while the in-plane feature remains at the same position, as seen in Visualization 3.

Figure 13(a) presents the result of this experiment, in which the OPA is completely removed. The OPA, as expected, moves up and away from the real in-plane feature, which stays at the same position. These behaviors can be seen clearly in Fig. 13(b), which shows the profile along the dashed blue line in the acquired image. Similarly, Fig. 13(c) illustrates the behavior of the reconstruction artifact marked by the dotted blue line in the acquired image. It is worth noting that this type of reconstruction artifact depends on the distance to the probe: since the artifact becomes less curved with increasing distance, its part on the dotted line moves down. As a result, this reconstruction artifact is partly removed, as seen in Fig. 13(a).

Fig. 13 OPA corrected image. (a) Acquired and OPA corrected images. (b) and (c) Behavior of features in dashed and dotted lines in (a) respectively.

It is notable that when an OPA appears at the same position as a real in-plane feature, forming a single feature, the recorded amplitude of that overlapped feature is the sum of the amplitudes of the two features. Though the OPA can be removed, the true amplitude of the real in-plane feature cannot be recovered. This will be discussed further in the Discussion section.

5.3. In vivo

We also assessed our method with in vivo experiments. In these experiments, we imaged fingers, which give clear IPAs. In addition, we put a black ink mark on the skin of the imaged finger to mimic a pigmented skin spot, which gives strong OPAs [25].

Figure 14 shows the configuration of the in vivo experiments. A volunteer’s finger with an ink mark was placed in a water tank filled with demi-water as a coupling medium, Fig. 14(a). The white line in Fig. 14(b) depicts approximately the imaging plane in the experiments. The ink mark was a few millimeters outside of the imaging plane.

Fig. 14 In vivo experiment. (a) Experiment configuration. (b) Ink mark mimicking a pigmented skin spot. (c) Acquired PA image. (d) Acquired US image.

The acquired PA and US images are presented in Fig. 14(c) and Fig. 14(d) respectively. In principle, they show similar structures to those observed in [14]: the US image shows the skin and bone layers, while the PA image shows the skin, superficial blood vessels and IPAs. However, in this PA image there are a few more features (dashed blue circle) appearing at a position in agreement with the ink mark’s position on the skin surface. These are probably OPAs.

Figure 15 shows the IPA and OPA corrected images. In the IPA corrected image, Fig. 15(b), most of the IPAs caused by the bone layers are removed, which matches the results reported in [14]. Two features, f1 and f2 in Fig. 15(a), are of interest. Observation of the OPA correction shows that f1 is an OPA and f2 is an IPA (as seen in Visualization 4, f1 moves up and f2 remains at the same position). f1 could be an indirect OPA of the ink mark reflected by the skin, while f2 could be a reflection of the skin signal. However, in the IPA corrected image, f1 is removed while f2 is not. The reason may be that the intensity of these two features is close to the noise. On the other hand, there are some OPAs at the same position as f2; this can be seen in Visualization 4, in which some features move up and away from f2 during the transducer array displacement. The true intensity of f2 is therefore affected by these features, as mentioned in the phantom 2 experiment, and thus it is not identified as an IPA. The reconstruction artifact curve of the skin is also removed by the OPA correcting method. In addition, some real in-plane features are partly removed in the OPA corrected image because there was some slight movement of the finger during the transducer array displacement.

Fig. 15 IPA and OPA corrected images.

Combining the IPA and OPA corrections gives the final corrected image, Fig. 16.

Fig. 16 Final corrected image.

6. Discussion

Correcting artifacts is of importance for reliable imaging. Our new method for OPAs offers various advantages over previously reported methods. First, no motion tracking is needed, as is the case in LOVIT and DCA [16–18]. Second, these existing methods require a large number of US images, while no US image is needed in the proposed method. Third, deforming tissue by applying force, as in DCA, might affect the optical properties of the tissue; as a consequence, the detected signals might not truly represent the source. In the proposed method, OPAs can be corrected without tissue deformation. Lastly, deforming tissue with a strong focused pressure, as is done in LOVIT, might violate US safety limits.

The proposed method for correcting OPAs relies on segmentation. In this work, a simple segmentation approach was used for its low computational cost and time consumption. However, it might not segment images properly, giving inaccurate axial dimensions of OPAs. As a result, OPAs might not be correctly removed. Over-segmentation might also happen, as pointed out in [14]. A more effective segmentation algorithm should be considered for better performance.

In our experiments, while the probe was displaced, the fiber bundle remained fixed. The purpose of this was to maintain the signal strength of the image features. If the light source were displaced with the probe, the laser beam would also be repositioned and thus excite different tissue volumes; acquired images along the displacement might then show different structures, resulting in miscorrection.

In clinical applications, the displacement distance, Δz, might be limited. Depending on the location of out-of-plane absorbers, ΔzT might not be achieved, as discussed in section 3.1. As a result, OPAs are not completely removed, in which case another approach is needed. However, our results show that within 5 mm displacement, OPAs can be completely removed for a large range of locations and axial dimensions.

In this work, out-of-plane absorbers were positioned outside of the imaging plane elevationally. Lateral out-of-plane absorbers can also cause OPAs. The principle of the proposed method still holds for these OPAs. The quantity xo, used to describe out-of-plane absorbers in section 3.1, in this situation will be the lateral distance between the out-of-plane absorber and the imaging plane. Therefore, lateral OPAs can be identified and removed.

Axially displacing the probe in essence adjusts the distance between the probe and the in-plane and out-of-plane absorbers. Displacing the probe in other directions might also be able to de-correlate in-plane and out-of-plane image features. In a configuration such as phantom 1, if the probe is elevationally displaced in the direction of the out-of-plane absorber, real in-plane features will move down and OPAs will move up. However, if there is another out-of-plane absorber on the other side of the imaging plane, its OPAs will also move down. Therefore, elevationally displacing the probe in both directions is required to identify all OPAs, doubling the amount of work compared to axial displacement. Nevertheless, displacing the probe in other manners and comparing this with the proposed method will be investigated in our future work.

In a situation where an OPA appears at the same position as a real in-plane feature, forming a single feature, the image value is the sum of the OPA and in-plane contributions. Displacing the transducer array can separate the two and remove the OPA; however, the true image value of the real in-plane feature cannot be recovered. Interpolating or extrapolating the image values of the OPA along the displacement might make it possible to estimate its value at the superposition, so that the true image value of the real in-plane feature can be recovered. Additionally, in the proposed method, OPAs are removed by setting their pixel values to 0, which might also remove the background information behind the OPAs. If the image value of the OPAs can be estimated, the background information can be retained while removing OPAs by subtracting the estimated value from the recorded one. This will be investigated in our future work.
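As a sketch of the suggested extrapolation idea (all numbers below are made up for illustration; this is not an implemented part of the method):

```python
import numpy as np

# OPA amplitudes measured at displacements where the OPA is separated
# from the in-plane feature (assumed data), extrapolated to Delta z = 0.
dz = np.array([2.0, 3.0, 4.0, 5.0])       # displacements (mm)
amp = np.array([0.42, 0.40, 0.39, 0.37])  # isolated OPA amplitudes
slope, intercept = np.polyfit(dz, amp, 1) # linear amplitude model
opa_at_zero = intercept                   # estimated OPA value at Delta z = 0

# Subtract the estimate from the overlapped pixel instead of zeroing it,
# recovering an approximation of the in-plane feature's true value.
overlapped_value = 0.80                   # assumed recorded sum at Delta z = 0
recovered = overlapped_value - opa_at_zero
assert 0.3 < opa_at_zero < 0.5
```

A linear model is the simplest choice here; the actual dependence of the OPA amplitude on Δz would follow the geometry of Eq. (1) and the transducer's elevational sensitivity.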

In this work, the volunteer had to keep the finger still for ~5 minutes. Slight movements were inevitable, resulting in some miscorrection. This is not ideal for clinical applications; however, the long experiment time was due to technical limitations. In particular, the translation stage was slow: it took ~2 minutes to acquire a total of 5 PA images along the 5 mm of probe displacement. Using a higher-speed translation stage will significantly reduce the experiment time. Data acquisition with 8 wavelengths also took ~2 minutes, due to the laser pulse repetition rate of 20 Hz. Using a high-repetition-rate laser would potentially enable real-time artifact correction, as shown in [14].

7. Conclusion

We have proposed a new method to remove out-of-plane artifacts by exploiting the different behaviors of out-of-plane artifacts and in-plane image features when the transducer array is axially displaced. By combining this new method with our previous multi-wavelength method for in-plane artifacts [14], both in-plane and out-of-plane artifacts in photoacoustic imaging can be identified and removed. Experiments in phantoms and in vivo were carried out to evaluate the combination of the two methods as a proof of concept. The results show the potential of this combined method to provide true photoacoustic images with no ultrasound images needed. In addition, a handheld probe suitable for clinical applications was used in the experiments, bringing this method a step forward towards clinical translation.

Funding

European Union’s Horizon 2020 research and innovation programme (CVENT, H2020-731771)

Acknowledgements

The authors would like to thank Altaf Hussain for helpful discussions. The authors are also grateful to Johan van Hespen and Tom Knop, University of Twente, for their help with the experimental setup.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. K. S. Valluru, K. E. Wilson, and J. K. Willmann, "Photoacoustic imaging in oncology: translational preclinical and early clinical experience," Radiology 280(2), 332–349 (2016).

2. M. Heijblom, D. Piras, F. M. van den Engh, M. van der Schaaf, J. M. Klaase, W. Steenbergen, and S. Manohar, "The state of the art in breast imaging using the Twente Photoacoustic Mammoscope: results from 31 measurements on malignancies," Eur. Radiol. 26(11), 3874–3887 (2016).

3. M. Heijblom, W. Steenbergen, and S. Manohar, "Clinical photoacoustic breast imaging: the Twente experience," IEEE Pulse 6(3), 42–46 (2015).

4. P. J. van den Berg, K. Daoudi, H. J. Bernelot Moens, and W. Steenbergen, "Feasibility of photoacoustic/ultrasound imaging of synovitis in finger joints using a point-of-care system," Photoacoustics 8, 8–14 (2017).

5. M. Toi, Y. Asao, Y. Matsumoto, H. Sekiguchi, A. Yoshikawa, M. Takada, M. Kataoka, T. Endo, N. Kawaguchi-Sakita, M. Kawashima, E. Fakhrejahani, S. Kanao, I. Yamaga, Y. Nakayama, M. Tokiwa, M. Torii, T. Yagi, T. Sakurai, K. Togashi, and T. Shiina, "Visualization of tumor-related blood vessels in human breast by photoacoustic imaging system with a hemispherical detector array," Sci. Rep. 7(1), 41970 (2017).

6. J. Jo, G. Xu, M. Cao, A. Marquardt, S. Francis, G. Gandikota, and X. Wang, "A functional study of human inflammatory arthritis using photoacoustic imaging," Sci. Rep. 7(1), 15026 (2017).

7. B. W. Drinkwater and P. D. Wilcox, "Ultrasonic arrays for non-destructive evaluation: A review," NDT Int. 39(7), 525–541 (2006).

8. C. D. Herickhoff, M. R. Morgan, J. S. Broder, and J. J. Dahl, "Low-cost volumetric ultrasound by augmentation of 2D systems: Design and prototype," Ultrason. Imaging 40(1), 35–48 (2018).

9. M. K. A. Singh, W. Steenbergen, and S. Manohar, "Handheld probe-based dual mode ultrasound/photoacoustics for biomedical imaging," in Frontiers in Biophotonics for Translational Medicine (Springer, 2016), pp. 209–247.

10. K. Daoudi, P. J. van den Berg, O. Rabot, A. Kohl, S. Tisserand, P. Brands, and W. Steenbergen, "Handheld probe integrating laser diode and ultrasound transducer array for ultrasound/photoacoustic dual modality imaging," Opt. Express 22(21), 26365–26374 (2014).

11. C. Kim, T. N. Erpelding, L. Jankovic, M. D. Pashley, and L. V. Wang, "Deeply penetrating in vivo photoacoustic imaging using a clinical ultrasound array system," Biomed. Opt. Express 1(1), 278–284 (2010).

12. C. Haisch, K. Eilert-Zell, M. M. Vogel, P. Menzenbach, and R. Niessner, "Combined optoacoustic/ultrasound system for tomographic absorption measurements: possibilities and limitations," Anal. Bioanal. Chem. 397(4), 1503–1510 (2010).

13. M. K. A. Singh and W. Steenbergen, "Photoacoustic-guided focused ultrasound (PAFUSion) for identifying reflection artifacts in photoacoustic imaging," Photoacoustics 3(4), 123–131 (2015).

14. H. N. Y. Nguyen, A. Hussain, and W. Steenbergen, "Reflection artifact identification in photoacoustic imaging using multi-wavelength excitation," Biomed. Opt. Express 9(10), 4613–4630 (2018).

15. M. K. A. Singh, M. Jaeger, M. Frenz, and W. Steenbergen, "In vivo demonstration of reflection artifact reduction in photoacoustic imaging using synthetic aperture photoacoustic-guided focused ultrasound (PAFUSion)," Biomed. Opt. Express 7(8), 2955–2972 (2016).

16. M. Jaeger, L. Siegenthaler, M. Kitz, and M. Frenz, "Reduction of background in optoacoustic image sequences obtained under tissue deformation," J. Biomed. Opt. 14(5), 054011 (2009).

17. T. Petrosyan, M. Theodorou, J. Bamber, M. Frenz, and M. Jaeger, "Fast scanning wide-field clutter elimination in epi-optoacoustic imaging using comb-LOVIT," in 2017 IEEE International Ultrasonics Symposium (IUS) (IEEE, 2017), p. 1.

18. M. Jaeger, J. C. Bamber, and M. Frenz, "Clutter elimination for deep clinical optoacoustic imaging using localised vibration tagging (LOVIT)," Photoacoustics 1(2), 19–29 (2013).

19. M. F. Beckmann, H.-M. Schwab, and G. Schmitz, "Optimizing a single-sided reflection mode photoacoustic setup for clinical imaging," in 2015 IEEE International Ultrasonics Symposium (IUS) (IEEE, 2015), pp. 1–4.

20. I. Sobel and G. Feldman, "A 3x3 isotropic gradient operator for image processing," a talk at the Stanford Artificial Intelligence Project, 271–272 (1968).

21. J. Benesty, J. Chen, Y. Huang, and I. Cohen, "Pearson correlation coefficient," in Noise Reduction in Speech Processing (Springer, 2009), pp. 1–4.

22. M. Jaeger, S. Schüpbach, A. Gertsch, M. Kitz, and M. Frenz, "Fourier reconstruction in optoacoustic imaging using truncated regularized inverse k-space interpolation," Inverse Probl. 23(6), S51–S63 (2007).

23. R. Michels, F. Foschum, and A. Kienle, "Optical properties of fat emulsions," Opt. Express 16(8), 5907–5925 (2008).

24. S. L. Jacques, "Optical properties of biological tissues: a review," Phys. Med. Biol. 58(11), R37–R61 (2013).

25. S. Preisser, G. Held, H. G. Akarçay, M. Jaeger, and M. Frenz, "Study of clutter origin in in-vivo epi-optoacoustic imaging of human forearms," J. Opt. 18(9), 094003 (2016).

Supplementary Material (4)

Visualization 1: acquired PA images along the transducer array displacement.
Visualization 2: acquired images vs. out-of-plane artifact corrected images.
Visualization 3: an out-of-plane artifact appearing at the same position as an in-plane image feature.
Visualization 4: out-of-plane artifacts in in vivo PA images.

Figures (16)

Fig. 1. Artifacts in PAI. (a) Configuration resulting in artifacts. (b) Acquired PA image.
Fig. 2. Displacing the transducer array. (a) Initial and (b) displaced configuration. (c) Acquired PA image of configuration (a). (d) Acquired PA image of configuration (b).
Fig. 3. In-plane and out-of-plane feature behavior, (b) and (c) along the dotted and dashed lines in (a), respectively, as an effect of the transducer array displacement. (d) OPA correction along the displacement.
Fig. 4. Dependence of Δz_T on the three parameters z_o, a, and x_o.
Fig. 5. Flow chart of the transducer array displacement method.
Fig. 6. Schematic drawing of the setup with the probe movable in the vertical direction and the optical fiber in a fixed position.
Fig. 7. Phantom 1. (a) Two absorbers embedded in agarose in a petri dish lid. (b) Schematic elevation view of the experiment configuration. (c) Combined PA and US image.
Fig. 8. Correcting IPAs. (a) Acquired PA image. (b) IPA-corrected image.
Fig. 9. In-plane and out-of-plane feature behavior, (b) and (c) along the dotted and dashed lines in (a), respectively, as an effect of the transducer array displacement.
Fig. 10. Correcting OPAs. (a) Acquired PA image. (b) OPA-corrected image.
Fig. 11. Final corrected image.
Fig. 12. Phantom 2. (a) Two black absorbers embedded in agarose in a petri dish. (b) Schematic elevation view of the experiment configuration. (c) Acquired images along the displacement.
Fig. 13. OPA-corrected image. (a) Acquired and OPA-corrected images. (b) and (c) Behavior of features along the dashed and dotted lines in (a), respectively.
Fig. 14. In vivo experiment. (a) Experiment configuration. (b) Ink mark mimicking a human spot. (c) Acquired PA image. (d) Acquired US image.
Fig. 15. IPA- and OPA-corrected images.
Fig. 16. Final corrected image.

Equations (4)

\[ s(\Delta z) = \Delta z + \sqrt{x_o^2 + z_o^2} - \sqrt{x_o^2 + (z_o + \Delta z)^2}. \]

\[ \Delta z_T + \sqrt{x_o^2 + z_o^2} - \sqrt{x_o^2 + (z_o + \Delta z_T)^2} = a. \]

\[ \Delta z_T = \frac{2a\sqrt{x_o^2 + z_o^2} - a^2}{2\left(\sqrt{x_o^2 + z_o^2} - z_o - a\right)}. \]
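The closed-form threshold displacement can be checked numerically against the defining shift equation. A minimal sketch, following the notation of the equations above; the numerical values are illustrative assumptions, and a solution requires a < sqrt(x_o^2 + z_o^2) - z_o (the limiting shift for large displacement).

```python
import math

def opa_shift(dz, x_o, z_o):
    """Apparent axial shift s(dz) of an out-of-plane feature with
    offset x_o and depth z_o, per the first equation."""
    return dz + math.sqrt(x_o**2 + z_o**2) - math.sqrt(x_o**2 + (z_o + dz)**2)

def threshold_displacement(a, x_o, z_o):
    """Closed-form displacement dz_T at which the shift reaches the
    threshold a, per the third equation."""
    r = math.sqrt(x_o**2 + z_o**2)
    return (2 * a * r - a**2) / (2 * (r - z_o - a))

# Illustrative values (mm, assumed): x_o = 10, z_o = 20, threshold a = 0.5
dz_t = threshold_displacement(0.5, 10.0, 20.0)
# Substituting dz_t back into the shift equation recovers a = 0.5
```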