A photoacoustic tomograph based on optical ultrasound detection is demonstrated, which is capable of high resolution real-time projection imaging and fast three-dimensional (3D) imaging. Snapshots of the pressure field outside the imaged object are taken at defined delay times after photoacoustic excitation by use of a charge coupled device (CCD) camera in combination with an optical phase contrast method. From the obtained wave patterns photoacoustic projection images are reconstructed using a back propagation Fourier domain reconstruction algorithm. Applying the inverse Radon transform to a set of projections recorded over a half rotation of the sample provides 3D photoacoustic tomography images in less than one minute with a resolution below 100 µm. The sensitivity of the device was experimentally determined to be 5.1 kPa over a projection length of 1 mm. In vivo images of the vasculature of a mouse demonstrate the potential of the developed method for biomedical applications.
© 2014 Optical Society of America
Photoacoustic (or optoacoustic) tomography (PAT) is a non-invasive imaging technique that can provide images with ultrasound resolution related to the distribution of optical absorption within biological tissue. Its increasing significance in medical and biological research arises from the possibility to visualize intrinsic absorbers like hemoglobin. With multispectral PAT, functional information such as blood oxygenation can also be extracted. Hence, the field of application is very broad, including preclinical studies of tumor growth, breast imaging, imaging of the microvasculature and small animal imaging [1–3].
In general, PAT relies on the thermoelastic effect, the generation of acoustic waves due to the absorption of pulsed laser light. Local absorption and rapid heating result in the generation of broadband ultrasound waves. By recording these waves, the initial absorbed energy distribution can be reconstructed. Thus, photoacoustic imaging is a hybrid technique making use of optical absorption and ultrasonic wave propagation, leading to high contrast and high spatial resolution.
The recording of the broadband, time resolved signals can be done with piezoelectric, optical, or capacitive micromachined ultrasound detectors. Due to their high ultrasound detection sensitivity even for small sensing areas, their uniform amplitude response over a wide frequency range and their optical transparency, optical detection (OD) methods are ideally suited for high-resolution, three-dimensional PAT [4–10].
Despite the advantages of OD compared to piezoelectric detection, OD will have difficulties entering a wide field of applications as long as it is not possible to implement compact optical parallel detection for fast in vivo imaging. However, in the last decade efforts have been made to overcome this drawback by using a charge coupled device (CCD) camera as an optical array detector for ultrasound. The CCD array has parallel detection capabilities, but its very limited frame rate precludes the recording of single-shot time series. An alternative approach for fast data acquisition with a CCD camera is to take snapshots of the acoustic field and use the spatial information content of a single image captured at a certain time. This information is equivalent to time resolved signals recorded at defined detector positions. The proof of principle of using a CCD camera in fast three-dimensional (3D) or 2D PAT has already been demonstrated in two different optical arrangements, recording either a specific slice or the projection of the acoustic field. While Köstli et al. [11] and Paltauf et al. [12] used a camera in combination with a Fresnel reflectance sensor, Lamont et al. [13] used a camera in combination with a planar polymer film Fabry-Perot interferometer (FPI) sensor to record slices of the acoustic field. The former approach is robust and simple, but its sensitivity of about one percent reflectance change per bar of acoustic pressure is not sufficient for PAT. The polymer film FPI sensor can in principle achieve a sensitivity about two orders of magnitude higher than the reflectance sensor; however, the lack of homogeneity of the sensitivity over a large area for a given interrogation wavelength and film thickness makes this method difficult to apply.
Niederhauser et al. [14,15] used a camera in a Schlieren and a dark field setup to visualize projections of the acoustic field in real time. We have shown the applicability of an optical phase contrast detection (OPCD) method in PAT. In acoustics, this method has already been used to measure ultrasonic fields [17,18]. However, to our knowledge it had never been used before in PAT. The main advantage compared to Schlieren or dark field techniques is the contrast of the recorded images, which is proportional to the pressure integrated along the light path. Therefore, the recorded images are well suited for image reconstruction algorithms.
The information gained with the OPCD method about the optical phase shift is therefore a representation of the pressure field at a given time, projected or integrated along a certain direction. This method is related to the concept of using integrating line detectors in PAT [19–21]. Instead of a single detector that measures time resolved signals from a single line detector (a free or guided light beam), full field detection uses a detector array, interrogating the integrated pressure at a given time from a 2D array of line detectors. To obtain the initial 3D pressure distribution, the acoustic wave pattern first has to be captured from several directions. Each individual pattern is used to calculate a 2D projection of the pressure distribution at time zero. The second step is to apply the inverse Radon transform to the projection data set calculated in the previous step, to obtain the initial 3D pressure distribution. While our earlier work focused on the 2D reconstruction of phantoms from projections of the pressure field, we now demonstrate the full 3D reconstruction, provide a thorough characterization of the imaging setup and show first in vivo images of mouse vasculature.
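The second step of the procedure, combining the 2D projections recorded over a half rotation via the inverse Radon transform, can be sketched numerically. The snippet below is a minimal, illustrative filtered back projection (Ram-Lak filter, nearest-neighbor interpolation) applied to a hypothetical rotationally symmetric phantom; the grid size, phantom, and number of projections are our assumptions for the demo, not the parameters of the actual setup.

```python
import numpy as np

def inverse_radon(sinogram, angles_deg):
    """Minimal filtered back projection (Ram-Lak filter, nearest-neighbor
    interpolation). sinogram has shape (n_detector_bins, n_angles)."""
    n_det, n_ang = sinogram.shape
    # Ramp filter applied along the detector axis in the Fourier domain
    ramp = np.abs(np.fft.fftfreq(n_det))[:, None]
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp, axis=0))
    # Back project every filtered projection over the image grid
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    img = np.zeros((n_det, n_det))
    for j, a in enumerate(np.deg2rad(angles_deg)):
        t = X * np.cos(a) + Y * np.sin(a)               # detector coordinate
        idx = np.clip(np.round(t).astype(int) + mid, 0, n_det - 1)
        img += filtered[idx, j]
    return img * np.pi / len(angles_deg)

# Hypothetical phantom: a centered disk, whose projection is the same for
# every angle, so the sinogram is one projection repeated
n = 128
xs = np.arange(n) - n // 2
X, Y = np.meshgrid(xs, xs, indexing="ij")
phantom = (X**2 + Y**2 < 20**2).astype(float)
angles = np.linspace(0.0, 180.0, 200, endpoint=False)
sinogram = np.repeat(phantom.sum(axis=1)[:, None], len(angles), axis=1)
reconstruction = inverse_radon(sinogram, angles)
```

The reconstruction recovers a value close to 1 inside the disk and close to 0 outside; in the real setup, this step is applied slice by slice to the stack of reconstructed projection images.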
The work is organized as follows: The method section contains information about the working principle of the OPCD setup, the reconstruction method, the data acquisition procedure and the experimental estimation of sensitivity. The results section provides information about the achievable resolution and the applicability of the setup for in-vivo imaging. Finally, the results are discussed and an outlook is given for future developments.
2.1 Experimental setup
The OPCD setup for capturing pictures of acoustic wave patterns (the phase object) is shown in Fig. 1. Laser pulses with 10 ns pulse duration coming from the frequency doubled output (532 nm) of a 10 Hz Nd:YAG laser system were used for photoacoustic excitation. All PAT experiments shown in this work were performed in backward mode by illuminating the sample from below with a radiant exposure below the American National Standards Institute (ANSI) limit of 20 mJ/cm2. The sample was arranged in the center of the sample holder (petri dish) filled with water or ultrasound gel. The sample holder was mounted on a rotary table and only slightly dipped in water for acoustic coupling and positioned in the rear focal plane (object plane) of the Fourier transforming lens (L1, f1 = 200 mm). The pulsed probe laser beam coming from a diode pumped solid state laser system (wavelength 527 nm, pulse duration 8 ns) was collimated and expanded to a beam diameter of about 35 mm, matched to the size of the field of view (FOV) of the detection system. This ensured that the detection aperture was sufficiently large to capture the outgoing acoustic waves coming from the optically absorbing regions inside the sample and that the total area of the CCD element was used.
The working principle of the detection system is based on the interaction of the acoustic and optical fields (elasto-optic interaction). The latter attains a phase variation proportional to the pressure integrated along the probe beam path. This phase variation is converted into a measurable intensity modulation by arranging an optical signal processing element (PP: phase plate) in the Fourier plane of L1. As in optical phase contrast microscopy, an additional relative phase shift of either plus or minus π/2 between the non-diffracted and diffracted portions of the probe laser beam leads to constructive or destructive interference between both parts. This results in a so-called positive or negative phase contrast image. The lens L2 (f2 = 90 mm) is arranged such that its rear focal plane coincides with the phase plate, producing a reversed image of the phase object on the CCD element of the camera. A narrow bandpass filter (NBPF) is placed directly in front of the camera to avoid exposure with stray light from the excitation laser pulse. Snapshots of the pressure distribution are taken with a CCD camera (pco.2000s, 14 bit dynamic range) at a certain time delay with respect to the excitation laser pulse. The time delay can be adjusted using a delay generator.
The pressure-induced optical phase variation is given by the integral of the pressure field over the probe beam path in the y direction [17,18,22].
In principle, the resolution is adjustable with the MO. The natural limit is the optical diffraction limit, determined by the wavelength and the numerical aperture of the imaging system. Note that the acoustic field is integrated along the propagation direction of the probe laser beam. Therefore, the depth of field (~8 mm) of the optical imaging system is also an important parameter.
A recorded camera image reveals a map of the acoustic field around the object, integrated or projected in the direction of the detection beam for a certain orientation of the sample. A very fast and easy-to-implement reconstruction algorithm is the Fourier-domain algorithm. It obtains projection images by back propagating the waves to time zero, when they were produced by the pulsed laser excitation, applying the appropriate phase shift to each spatial frequency component of the recorded snapshot.
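The back propagation step can be illustrated with a small numerical experiment. The sketch below simulates a snapshot of the wave field emitted by a hypothetical Gaussian source (the forward model, grid, source size and delay time are our assumptions, not the experimental parameters) and then back propagates it in the spatial Fourier domain by multiplying each plane wave component with exp(i·c·|k|·T0); the counter-propagating part ends up as a weak ring artifact far from the source.

```python
import numpy as np

N, dx = 256, 0.1            # grid points, pixel size (mm)
c, T0 = 1.5, 5.0            # speed of sound (mm/us), snapshot delay (us)

x = (np.arange(N) - N // 2) * dx
X, Z = np.meshgrid(x, x, indexing="ij")

# Hypothetical initial pressure: Gaussian absorber in the center
p0 = np.exp(-(X**2 + Z**2) / (2 * 0.5**2))

# Wavenumber magnitude |k| on the FFT grid
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KZ = np.meshgrid(k, k, indexing="ij")
K = np.hypot(KX, KZ)

# Forward model: free-space wave equation with zero initial velocity,
# so every k-mode evolves as cos(c*|k|*t); the snapshot shows a ring
snapshot = np.real(np.fft.ifft2(np.fft.fft2(p0) * np.cos(c * K * T0)))

# Back propagation: the phase factor exp(+i*c*|k|*T0) refocuses the
# outgoing wave at the source position (the factor 2 compensates for the
# split into outgoing and incoming parts)
back = np.fft.fft2(snapshot) * np.exp(1j * c * K * T0)
rec = 2 * np.real(np.fft.ifft2(back))
```

The snapshot peaks on a ring of radius c·T0 around the source, while the back propagated image peaks at the original source position with nearly the original amplitude.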
2.3 Data acquisition method
The measurement system is sensitive to any phase object along the optical path of the probe beam. Besides the desired phase object, the acoustic field, undesired phase objects such as impurities on and in optical components, temperature fluctuations and floating particles in the water also appear on recorded images. Hence, it is almost impossible to record an image with a homogeneous background. To solve this problem, a “background image” (BG, image without wave pattern) and a “foreground image” (FG, image with wave pattern) have to be recorded for each orientation α of the sample (see Fig. 2(a) and Fig. 2(b)). The subsequent subtraction of these images yields wave pattern images with an almost constant background. To improve the contrast to noise ratio (CNR), this procedure can be repeated several times for averaging. In the final preprocessed image (Fig. 2(c)) the background pixel values fluctuate around zero due to the noise level of the CCD camera. On closer examination of Fig. 2(c) it can be seen that the time varying part of the background, such as floating particles (bright spots in Fig. 2(a) and Fig. 2(b)), is not completely removed, because the particles move during the time delay of 0.1 s (determined by the frame rate of the camera) between the recordings. The stripe with negative pixel values along the x direction at position z = 7.5 mm also results from a time varying background that could not be removed by the subtraction method. The reconstructed projection image is shown in Fig. 2(d). Spurious bright spots and stripes in the preprocessed image cause arc and line shaped artifacts in the reconstructed projection image, since they diverge during the back propagation. This underlines the necessity of preventing such artifacts by using optically clean water as the acoustic propagation medium and by avoiding turbulence in the water, so that the background subtraction method can work efficiently.
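The FG−BG preprocessing can be mimicked with synthetic data. In this sketch the static background, the toy "wave pattern" and the noise level are invented for illustration; it shows that the subtraction removes the static background and that averaging four image pairs roughly doubles the CNR.

```python
import numpy as np

rng = np.random.default_rng(42)
n, sigma = 128, 5.0                             # image size, camera noise (counts)

static_bg = 100.0 + 20.0 * rng.random((n, n))   # dirt, illumination pattern
wave = np.zeros((n, n))
wave[60:68, :] = 10.0                           # toy "wave pattern" (10 counts)

def frame(signal):
    """One camera exposure: static background + signal + camera noise."""
    return static_bg + signal + rng.normal(0.0, sigma, (n, n))

def preprocess(n_avg):
    """Average of n_avg (FG - BG) image pairs."""
    return np.mean([frame(wave) - frame(0.0) for _ in range(n_avg)], axis=0)

def cnr(img):
    """Contrast-to-noise ratio: mean wave amplitude over background std."""
    return img[60:68, :].mean() / img[:50, :].std()

img1, img4 = preprocess(1), preprocess(4)
```

Time varying phase objects, like the floating particles discussed above, would appear as a signal that differs between the FG and BG exposures and therefore survives the subtraction, exactly as seen in Fig. 2(c).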
For both the BG and the FG image, the sample is irradiated with the excitation laser pulse; the difference is the chosen delay time between excitation and probe laser pulse. The delay time, set by the delay generator, is switched for each exposure between the values t = 0 µs and t = T0 > 0 µs. The traveling time T0 should be smaller than 10 µs, the time acoustic waves need to cross half of the FOV with its size of about 30 mm. The traveling time and the resulting traveling distance c·T0 are directly linked to the aperture γ of the visible wave pattern from a point source. If γ approaches 180°, a half circular pattern is visible in the image and the horizontal resolution is comparable to the vertical resolution. Smaller angles, however, lead to a deterioration of the lateral resolution (limited view problem). Since such large detection angles are only possible for superficial structures, a compromise has to be found between optimum lateral resolution and imaging depth.
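The link between delay time, source depth and detection aperture can be made concrete with a simple geometric sketch. We assume a point source at depth z below the coupling interface and a wavefront of radius c·T0: only the spherical cap that has crossed the interface is visible, giving an aperture γ = 2·arccos(z/(c·T0)). This geometry and the numbers (c = 1.5 mm/µs, T0 = 7 µs) are illustrative assumptions, not a statement of the authors' exact model.

```python
import numpy as np

c = 1.5       # speed of sound in water (mm/us), assumed
T0 = 7.0      # delay between excitation and probe pulse (us), assumed

def aperture_deg(depth_mm, T0_us=T0):
    """Visible aperture (degrees) of the wave pattern of a point source
    at a given depth, under the spherical-cap geometry assumed above."""
    ratio = np.clip(depth_mm / (c * T0_us), -1.0, 1.0)
    return 2.0 * np.degrees(np.arccos(ratio))

shallow = aperture_deg(0.0)   # superficial source: full 180 deg aperture
deep = aperture_deg(8.0)      # deeper source: reduced aperture
```

The sketch reproduces the qualitative statement in the text: a superficial source is seen under the full 180° aperture, while the aperture, and hence the lateral resolution, shrinks with depth.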
2.4 Sensitivity and dynamic range
The sensitivity of the OPCD system was estimated using a well-defined photoacoustic source. This source was generated by coupling the excitation laser pulses into a 1 mm glass fiber. The flat fiber exit was submerged in a cuvette filled with an absorbing dye solution (Orange G/water mixture). Knowing the excitation pulse energy, the absorption coefficient of the dye solution and the shape of the acoustic source, the pressure field could be simulated for a certain propagation distance. For a distance of 15 mm, matched to half the size of the FOV, the simulated pressure fields were integrated along the y direction, yielding maximum pressure-length-product values for the six different laser pulse energies used in the experiment. By plotting the corresponding CNR values from the recorded preprocessed wave pattern images against these values and applying a linear regression to the data points, a noise equivalent pressure length product (NEPLP) of 5.1 kPa mm was obtained from the inverse of the slope k of the regression line. The CNR values of the preprocessed images were estimated as the ratio of the peak signal amplitude to the standard deviation of the background noise. Figure 3 shows one of the recorded preprocessed wave pattern images and a profile along the z direction taken from that image at x = 0, indicating that both positive and negative pressures can be detected. The linear dependency of the CNR on the pressure-length product, which was used to estimate the NEPLP value, is shown in part (c) of the figure.
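The regression step of this calibration can be reproduced with synthetic numbers. The pressure-length products and the noise level below are invented for illustration; the example only demonstrates how the NEPLP follows as the inverse slope of the CNR-versus-pressure-length-product line.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data: pressure-length products (kPa mm)
# simulated for six pulse energies, and the CNR measured in the
# corresponding preprocessed wave pattern images
plp = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0])
true_neplp = 5.1                                    # kPa mm (assumed)
cnr = plp / true_neplp + rng.normal(0.0, 0.3, plp.size)

# Linear regression CNR = k * plp + d; the NEPLP is the pressure-length
# product at which CNR = 1, i.e. the inverse of the slope
k, d = np.polyfit(plp, cnr, 1)
neplp = 1.0 / k
```

With realistic scatter in the CNR values, the fit recovers the assumed NEPLP to within a few percent, which is why six pulse energies suffice for the calibration.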
The theoretically achievable sensitivity of the imaging system can be derived via the CNR of the CCD camera, given by the ratio of the number of generated signal charge carriers (signal electrons) to the number of unwanted charge carriers (noise electrons) (Eq. (2)). Since the phase contrast method assumes that the contribution of the signal noise to the total noise is small, it can also be neglected. With these simplifications, combining the resulting CNR expression with Eq. (1) yields the theoretically achievable sensitivity, expressed by the NEPLP.
The upper limit of the dynamic range can be estimated from the requirement of a linear relation between the phase shift caused by the integrated pressure and the intensity modulation. Assuming that the linearity holds up to a phase shift of π/10, the related pressure value is 200 kPa (from Eq. (1)), integrated over a length of 1 mm. With the NEPLP from above, this gives a theoretical dynamic range of 56 dB.
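The quoted 56 dB figure can be checked with a one-line calculation, assuming the theoretical NEPLP of 0.31 kPa mm given in the discussion section as the noise floor:

```python
import math

p_max_plp = 200.0     # kPa mm: linearity limit (phase shift of pi/10 over 1 mm)
neplp = 0.31          # kPa mm: theoretical noise floor (see discussion section)
dynamic_range_db = 20.0 * math.log10(p_max_plp / neplp)   # ~56 dB
```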
3.1 Phantom imaging experiment
To determine the spatial resolution and to test whether the proposed method is suitable for fast imaging an experiment was conducted using a phantom sample containing black human hair loops with diameters ranging from 55 to 70 µm and black polystyrene microspheres with 110 µm diameter embedded in a mixture of 2% agar and 5% intralipid in water. The sample was arranged in the setup as described in section 2.1. The chosen radiant exposure was 12 mJ/cm2, about two times below the ANSI limit for 10 ns laser pulses with a wavelength of 532 nm. A delay time of T0 = 7 µs was chosen, corresponding to a wave propagation distance of 10.5 mm, to maximize the detection aperture.
Before reconstruction, the recorded wave pattern images were preprocessed as described in 2.3. As the number of averages was set to four, we recorded eight images (four FG and four BG images) for one projection image, requiring less than one second with the 10 Hz pulse repetition and camera frame rate. To obtain the 3D image, two hundred averaged projection images were recorded over a half rotation. Consequently, the recording of the 3D data took just about 3 minutes. From this experimental result (Fig. 4) we estimated the vertical/axial (z-direction) and horizontal/lateral resolution (xy-planes) to be better than 73 µm and 80 µm, respectively.
3.2 In-vivo imaging experiment of mouse vasculature
The experiment started with the preparation of a 14 week old female CD1 mouse. The mouse was anesthetized by a single intraperitoneal injection of ketamine (70 mg/kg; Pharmacia & Upjohn, Germany) and xylazine (12 mg/kg; Bayer, Leverkusen, Germany) to prevent extensive movement, and hence motion artifacts, during the experiment lasting about 30 minutes. The musculus biceps femoris was defined as the region of interest (ROI) and all hair was removed from the hindlimb using depilation cream prior to scanning. Then the mouse was carefully placed in an upright position in the petri dish filled with water and gently fixed to the sample holder with a soft elastic tape. The sample holder was mounted on the rotary table and slightly dipped in water for acoustic coupling (see Fig. 1).
The chosen radiant exposure was 15 mJ/cm2, which is still below the ANSI limit. The excitation wavelength in the visible range (532 nm) limited the imaging depth to a few millimeters. The wave pattern images were recorded at a delay time of T0 = 9 µs and, due to eightfold averaging, the recording time of each projection image shown in Fig. 5 was 1.6 seconds. The projection images can in principle be obtained in real time (without averaging) and already reveal the complex vessel structure. However, a single projection image does not directly show the absorbed energy density. For instance, a bright region in a photoacoustic projection image (a few of them are indicated by arrows in Fig. 5) can mean either enhanced local light absorption or an absorbing structure oriented along the projection direction. In applications where projection images are sufficient, such as the monitoring of biopsies, this mode is a fast alternative to full 3D imaging.
The 3D absorbed energy distribution in the hind leg of the mouse was obtained by recording one hundred projection images over a half rotation. Consequently, the recording of the 3D data again took about 3 minutes. In Fig. 6, maximum amplitude projection images are depicted from the top and the side view, showing the vasculature network in the muscle of the hind leg of the mouse. The appearance of the blood vessels visible in the zoomed image (Fig. 6(c)) clearly indicates a lateral resolution below 100 µm.
Approval for this study by the Animal Protocol Review Board of the City of Vienna was obtained prior to conducting the experiments. They were carried out in accordance with the guidelines for the “Care and Use of Laboratory Animals” of the National Institutes of Health.
The presented photoacoustic tomography technique takes advantage of the high detector density and detector number offered by a CCD array. We could demonstrate that it can generate 3D photoacoustic images with high resolution in a short time. Its working principle is similar to that of a high density line detector array, but contrary to an arrangement of single ultrasound detectors it captures snapshots of the acoustic wave pattern at a given delay time after the excitation laser pulse, rather than time dependent pressure signals at predefined detector positions. Consequently, the information on the acoustic field depends on the field of view of the camera and on the chosen delay time between excitation and probe laser pulses.
In the setup, the sample was separated from the coupling liquid by a thin plastic film (the base of the petri dish). This ensured that no impurities from the sample penetrated the optical imaging volume. Unlike in a previous study, where the full acoustic field was detected after leaving the object, in this setup only the acoustic waves entering the half space adjacent to the sample surface are captured. This gives rise to some limited-data artifacts and reduced lateral resolution with increasing imaging depth and normal distance to the rotation axis, but corresponds more closely to a real imaging situation where only one side of an object is accessible. This effect can be clearly seen in the MAP image shown in Fig. 6(b): blood vessels in the central region are clearly resolved, but those located towards the periphery of the imaging area become increasingly blurred.
3D imaging with the integrating detection approach requires recording projection images from different orientations of the sample. Hence, the detection aperture for off-axis absorbing structures varies with their normal distance to the rotation axis while the sample rotates. This circumstance improves the off-axis 3D image resolution compared to the resolution in a single projection image, since all projection images from the different orientations of the sample contribute to the 3D image. This is an advantage of the integrating line detection concept compared to approaches where a point detector scans over a plane [4,7–9], where the detection aperture of a certain reconstructed image voxel is fixed. The horizontal and vertical 3D image resolution estimated from the phantom experiment (see Fig. 4) in the central region of the imaging volume is better than 80 µm and 73 µm, respectively, in accordance with the theoretical limit of 66 µm. Currently, only the PAT system based on the thin film Fabry-Perot sensor achieves a comparable resolution of <100 µm.
The imaging time of the proposed OPCD tomograph depends on the operating mode: the system can operate in a projection or a 3D imaging mode. The projection mode without averaging allows recording of projection images in real time. The currently achievable frame rate for projection imaging is 10 Hz, limited by the CCD camera and the repetition rate of the pulsed laser system. This practically real-time mode could be used for applications where a full 3D image is not needed, such as guided needle biopsy. The 3D imaging mode requires the rotation of the sample while recording the projection images. For a required resolution Λmin within a section of the image with side length SL, the angular step size should not exceed Δαmax = (π/2)·(Λmin/SL), derived from the Fourier slice theorem and the Nyquist criterion [23]. Assuming a resolution of 100 µm and a cubic imaging volume with a side length of 10 mm, the maximum angular step size is 0.9°. Hence, the data acquisition for 3D imaging without averaging takes less than 1 minute with the 10 Hz frame rate, considering that 200 projection images must be recorded over an angular range of 180°.
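The acquisition-time estimate can be verified numerically. We assume the angular step limit Δαmax = (π/2)·(Λmin/SL), which is consistent with both the 0.9° step and the 200 projections quoted in the text; frame rate and geometry are taken from the text.

```python
import math

frame_rate = 10.0     # Hz: camera frame rate / laser repetition rate
res = 0.1             # mm: required resolution Lambda_min
side = 10.0           # mm: side length SL of the imaging section

step_rad = 0.5 * math.pi * res / side     # assumed Nyquist step limit
step_deg = math.degrees(step_rad)         # maximum angular step (0.9 deg)
n_proj = round(180.0 / step_deg)          # projections over a half rotation
t_acq = n_proj / frame_rate               # acquisition time (s), no averaging
```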
The experimentally and theoretically estimated sensitivities, expressed as NEPLP values, are 5.1 kPa mm and 0.31 kPa mm, respectively. The large discrepancy between these values is a clear hint that many things can still be optimized: the background subtraction method could be improved by measuring the probe pulse energy and using it to normalize the FG and BG images before subtraction; the optical phase plate could be improved by adding an anti-reflection coating to avoid ghost images; and unwanted phase objects (floating particles), which limit the dynamic range by saturating the CCD camera, could be reduced.
The experimentally estimated NEPLP value of 5.1 kPa mm for acoustic wave monitoring is about 10 times larger (less sensitive) than that of a free beam optical line detection system (0.5 kPa mm) based on two beam interference (Mach-Zehnder interferometer). Compared to a 1 mm diameter piezoelectric PVDF (28 µm) sensor terminated with a high quality preamplifier (0.66 kPa), the NEPLP is about 8 times worse. Despite the somewhat poorer sensitivity of the OPCD method in comparison with other detection methods, high quality images (see Fig. 4 and Fig. 6) can be obtained without exceeding the permissible exposure levels. The reason is the large number of detection positions (pixels) contributing to the reconstruction compared to other detection approaches. For instance, the wave pattern of a point-like source recorded 7 µs after the photoacoustic excitation appears as a semicircle in the recorded image. Dividing the arc length of the semicircle (~33 mm) by the pixel size in the object plane (~33 µm) yields about 1000 pixels (detection positions), all of which contribute with their pixel values to one specific voxel in the reconstructed projection image, improving the CNR in the final image.
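The pixel-count argument is simple arithmetic, assuming a speed of sound in water of 1.5 mm/µs and the pixel size quoted above:

```python
import math

c = 1.5            # mm/us: speed of sound in water (assumed)
delay = 7.0        # us: snapshot delay after excitation
pixel = 0.033      # mm: pixel size in the object plane

radius = c * delay              # wavefront radius: 10.5 mm
arc = math.pi * radius          # semicircle arc length: ~33 mm
n_positions = arc / pixel       # ~1000 detection positions on the wavefront
```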
The presented technique provides real-time 2D and fast 3D photoacoustic imaging using a purely optical, parallel detection method working in backward imaging mode. It combines the advantages of optical detection with the possibility to acquire the required large amount of data within a relatively short time using parallel detection with a CCD camera. The next steps will be multispectral in-vivo imaging to answer scientific questions from biological and medical research.
The minimum ultrasound detection frequency of the OPCD system can be derived from the light diffraction caused by the acousto-optic interaction, the refraction of the transmitted light at the water-air interface, the properties of the imaging lens and the diameter of the central spot of the phase plate. An acoustic wave with wavelength Λ diffracts the probe light by an angle of approximately λ/Λ, where λ is the optical wavelength, so that the diffracted light is focused by the lens (L1 in Fig. 1) with focal length f1 at a normal distance f1·λ/Λ to the optical axis. Assuming that DPP is the diameter of the central spot of the phase plate, the diffracted light only acquires the additional phase shift if f1·λ/Λ ≥ DPP/2. With the speed of sound cs, the minimum detectable acoustic frequency follows from these relations as fmin = cs·DPP/(2·f1·λ).
This work has been supported by the Austrian Science Fund (FWF), Project No. S10508-N20.
References and links
1. M. H. Xu and L. V. Wang, “Photoacoustic Imaging in Biomedicine,” Rev. Sci. Instrum. 77(4), 041101 (2006). [CrossRef]
4. E. Z. Zhang, J. Laufer, and P. Beard, “Backward-mode multiwavelength photoacoustic scanner using a planar Fabry-Perot polymer film ultrasound sensor for high-resolution three-dimensional imaging of biological tissues,” Appl. Opt. 47(4), 561–577 (2008). [CrossRef] [PubMed]
5. J. Laufer, P. Johnson, E. Z. Zhang, B. Treeby, B. Cox, B. Pedley, and P. Beard, “In vivo preclinical photoacoustic imaging of tumor vasculature development and therapy,” J. Biomed. Opt. 17(5), 056016 (2012). [CrossRef] [PubMed]
6. R. Nuster, M. Holotta, C. Kremser, H. Grossauer, P. Burgholzer, and G. Paltauf, “Photoacoustic microtomography using optical interferometric detection,” J. Biomed. Opt. 15(2), 021307 (2010). [CrossRef] [PubMed]
8. A. Hochreiner, J. Bauer-Marschallinger, P. Burgholzer, B. Jakoby, and T. Berer, “Non-contact photoacoustic imaging using a fiber based interferometer with optical amplification,” Biomed. Opt. Express 4(11), 2322–2331 (2013). [CrossRef] [PubMed]
10. A. Maxwell, S. Huang, T. Ling, J. S. Kim, S. Ashkenazi, and L. J. Guo, “Polymer Microring Resonators for High-Frequency Ultrasound Detection and Imaging,” IEEE J. Sel. Top. Quantum Electron. 14, 191–197 (2008).
11. K. P. Köstli, M. Frenz, H. P. Weber, G. Paltauf, and H. Schmidt-Kloiber, “Optoacoustic tomography: time-gated measurement of pressure distributions and image reconstruction,” Appl. Opt. 40(22), 3800–3809 (2001). [CrossRef] [PubMed]
12. G. Paltauf, H. Schmidt-Kloiber, K. P. Koestli, M. Frenz, and H. P. Weber, “Optoacoustic imaging using two-dimensional ultrasonic detection,” in A. A. Oraevsky, ed. (SPIE, 2000).
13. M. Lamont and P. C. Beard, “2D imaging of ultrasound fields using a CCD array to detect the output of a Fabry Perot Polymer film sensor,” Electron. Lett. 42(3), 187–189 (2006). [CrossRef]
14. J. J. Niederhauser, D. Frauchiger, H. P. Weber, and M. Frenz, “Real-Time Optoacoustic Imaging Using a Schlieren Transducer,” Appl. Phys. Lett. 81(4), 571–573 (2002). [CrossRef]
15. J. J. Niederhauser, M. Jaeger, and M. Frenz, “Real-Time Three-Dimensional Optoacoustic Imaging Using an Acoustic Lens System,” Appl. Phys. Lett. 85(5), 846–848 (2004). [CrossRef]
17. T. A. Pitts, A. Sagers, and J. F. Greenleaf, “Optical Phase Contrast Measurement of Ultrasonic Fields,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control 48(6), 1686–1694 (2001). [CrossRef] [PubMed]
18. E. K. Reichel and B. G. Zagar, “Phase contrast method for measuring ultrasonic fields,” IEEE Trans. Instrum. Meas. 55(4), 1356–1361 (2006). [CrossRef]
19. P. Burgholzer, C. Hofer, G. Paltauf, M. Haltmeier, and O. Scherzer, “Thermoacoustic tomography with integrating area and line detectors,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control 52(9), 1577–1583 (2005). [CrossRef] [PubMed]
20. G. Paltauf, R. Nuster, M. Haltmeier, and P. Burgholzer, “Photoacoustic tomography using a Mach-Zehnder interferometer as an acoustic line detector,” Appl. Opt. 46(16), 3352–3358 (2007). [CrossRef] [PubMed]
21. G. Paltauf, R. Nuster, M. Haltmeier, and P. Burgholzer, “Photoacoustic Tomography with Integrating Area and Line Detectors,” in Photoacoustic Imaging and Spectroscopy, L. V. Wang, ed. (CRC Press Taylor & Francis Group, 2009), pp. 251–263.
22. G. S. Settles, Schlieren and Shadowgraph Techniques (Springer, 2001).
23. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE Press, 1988).
24. R. Nuster, S. Gratt, K. Passler, H. Grün, T. Berer, P. Burgholzer, and G. Paltauf, “Comparison of optical and piezoelectric integrating line detectors,” (SPIE, 2009).
25. E. Z. Zhang and P. C. Beard, “Ultra high sensitivity, wideband Fabry Perot ultrasound sensors as alternative to piezoelectric PVDF transducers for biomedical photoacoustic detection,” Proc. SPIE 5320 (2004).