## Abstract

Optical diffusion tomography is an emerging technology that generates images of objects imbedded in turbid media using scattered light. To date, however, most demonstrations of this technology use a sphere or a collection of spheres as the imbedded object. Here we use a backpropagation algorithm and a planar geometry to reconstruct images of resolved objects (airplane models) imbedded in tissue phantoms. In addition, we show that we can locate the resolved objects in three dimensions in the turbid medium using only a single planar view. The imaging system uses diffuse photon density waves produced using kilohertz modulation (that is, essentially dc illumination).

© 2000 Optical Society of America

## 1. Introduction

Optical diffusion tomography is an emerging technology that uses optical radiation to probe a turbid medium in order to image the structure of the (generally inhomogeneous) medium. In many cases the optical depth of the medium is large enough to scatter essentially all of the incident illumination, leaving only scattered light to accomplish the desired imaging. Many different schemes have been devised to reconstruct images from the information contained in the scattered light, such as finite element equation solvers [1,2], iterative reconstruction techniques [3], model fitting [4], backprojection [5], and least-squares-based [6,7] and wavelet-based [8] conjugate gradient descent methods. Recently we developed a diffraction tomographic approach to imaging objects imbedded in turbid media [9,10] and used the associated backpropagation algorithm [11,12] to obtain three-dimensional reconstructions of spherical objects. Spherical objects are commonly used to demonstrate image reconstruction algorithms because they are simple to model and relatively easy to reconstruct. A notable exception to using spherical objects can be found in Ref. [7]. However, most objects to be imaged in turbid media cannot be well modeled as spherical in shape. As a result, there is a need to test these algorithms on non-spherical objects with structure that can be resolved by the imaging system. In this paper, we present image reconstructions from laboratory data of resolvable objects imbedded in turbid media using a transillumination planar experimental geometry, as is common in mammography. Our algorithm reconstructs the detected scattered wave throughout the entire three-dimensional turbid medium. We use the full three-dimensional reconstruction to locate the object inside the turbid medium by determining where in the reconstructed volume we obtain the highest image quality.
The outline of the paper is as follows: in Section 2 we describe our image reconstruction algorithm, in Section 3 we describe the experimental setup used to obtain the data that were input to the reconstruction algorithm, in Section 4 we present reconstructed images, and in Section 5 we present conclusions and future research directions.

## 2. Reconstruction algorithm

We used a single-view backpropagation algorithm [11,12] to reconstruct images from data obtained using a planar experimental geometry. The development of the backpropagation algorithm assumes that the object to be imaged is imbedded in a homogeneous and infinite medium; however, the algorithm also works well for slab geometries. A conceptual diagram showing the laboratory data collection and reconstruction geometry is shown in Fig. 1. The turbid medium volume is illuminated on one side and the transmitted light is measured on the other side in a square grid pattern. The light scattered from the imbedded object is reconstructed in planes between the detection and illumination planes that are parallel to the detection plane and are separated by a user-chosen distance Δz. For the data presented here, we used a scanning detector to measure the transmitted intensity; however, for unmodulated illumination, a CCD camera can be used to measure the intensity at all grid points simultaneously. Our algorithm is as follows:

1. Obtain measured data for the medium that contains the object to be imaged (the inhomogeneous data).
2. Obtain measured data from a medium with the same optical properties but without any imbedded object (the homogeneous data).
3. Subtract the homogeneous data from the inhomogeneous data to obtain the light scattered from the imbedded object.
4. Fourier transform the scattered data.
5. Backpropagate the Fourier-transformed scattered data to the desired reconstruction plane using the appropriate transfer function.
6. Inverse Fourier transform the result.
7. Repeat steps 5 and 6 for all reconstruction planes.

The backpropagation step requires the selection of a filter to regularize the reconstruction process; both the type of filter and its cutoff frequency must be chosen [12]. We typically choose the cutoff frequency to be where the signal-to-noise ratio (SNR) in the Fourier transform of the measured data drops to one.
Also, we obtain good reconstructions using either the pillbox filter or the modified Hamming filter [12]. Once the scattered wave is reconstructed throughout the volume, images can be visualized in any plane at any orientation. For the results here, all images are in planes parallel to the detection plane. Our backpropagation algorithm is written using the image processing software package Interactive Data Language (IDL) [13] and all reconstruction results shown herein were produced using the IDL software.
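As a concrete illustration, the subtraction, Fourier transform, and backpropagation steps can be sketched in a few lines of Python (our actual implementation is in IDL [13]). The transfer-function form, the dc decay-rate parameter `mu_eff`, and all function and parameter names below are illustrative assumptions rather than the exact expressions of Refs. [11,12]:

```python
import numpy as np

def backpropagate(inhomog, homog, dz, n_planes, mu_eff, dx, f_cutoff):
    """Sketch of steps 3-7: subtract the homogeneous data, Fourier
    transform, backpropagate plane by plane with a regularized inverse
    transfer function, and inverse Fourier transform each plane."""
    scattered = inhomog - homog                          # step 3
    F = np.fft.fft2(scattered)                           # step 4
    fx = np.fft.fftfreq(scattered.shape[0], d=dx)        # cycles/cm
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    f_rad = np.hypot(FX, FY)
    # assumed dc decay rate of a diffuse wave vs. spatial frequency
    kappa = np.sqrt((2 * np.pi * f_rad) ** 2 + mu_eff ** 2)
    pillbox = (f_rad <= f_cutoff).astype(float)          # regularizing filter
    planes = []
    for n in range(1, n_planes + 1):
        H_inv = np.exp(kappa * n * dz) * pillbox         # step 5, filtered
        planes.append(np.real(np.fft.ifft2(F * H_inv)))  # step 6
    return np.stack(planes)                              # step 7
```

Swapping the pillbox for a modified Hamming filter would change only the `pillbox` line; the exponential amplification and per-plane loop are unchanged.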

## 3. Experimental setup

Data for our image reconstruction examples were collected with a frequency-domain photon migration instrument that has been described in detail previously [14]; we summarize it briefly here. Our experimental setup uses a transillumination geometry with a laser diode providing the necessary illumination. Because the objects are rather large, we illuminated the medium at several locations in order to fully illuminate the imbedded object. We had only one laser diode; therefore, we slightly modified the algorithm described in Section 2 in the following manner. We sequentially illuminated the inhomogeneous phantom at several locations, collected a dataset for each location, and accomplished steps (1) through (3) for each dataset. Only one homogeneous dataset was needed to accomplish background subtraction for the multiple inhomogeneous datasets. We then added together all of the background-subtracted measurements and used the result for the remaining steps of the algorithm. The laser diode can be modulated at either kilohertz or megahertz rates, but for the data here we used only kilohertz modulation. A primary motivation for using kilohertz illumination is to test our ability to reconstruct images using only amplitude information. We had previously determined that image reconstructions of unresolved targets such as spheres and cubes were as good as or better with kilohertz modulation than with megahertz modulation [14], as expected from theoretical predictions [10]. The results in this paper indicate that good quality image reconstructions can also be obtained with kilohertz illumination for objects that are resolvable. We detected the light using an avalanche photodiode that was scanned across a square grid with equally-spaced stops to facilitate digital processing of the data. We used heterodyne detection with an intermediate frequency of 25 kHz, recording the data with a lock-in amplifier.
The data were stored in a personal computer that also controlled the scanning photodiode and hosted the IDL image reconstruction software. It took a few seconds per data point to collect the data, so a 16×16 scan took approximately 20 minutes per sample to complete. Once the data were collected, reconstruction times for the full three-dimensional reconstruction were on the order of a few seconds for a 16×16 scan with 50 reconstruction planes.
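The multiple-source modification described above amounts to a single background subtraction per source-position scan followed by a sum; a minimal sketch, with hypothetical array names of our own choosing:

```python
import numpy as np

def combine_source_scans(inhomog_scans, homog_scan):
    """One homogeneous scan serves as the background for every source
    position; the background-subtracted scans are then summed before
    the Fourier-domain steps of the algorithm."""
    return np.sum([scan - homog_scan for scan in inhomog_scans], axis=0)
```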

## 4. Reconstructed Images

For the image reconstruction results presented here, we created phantoms made of plastic resin with TiO_{2} powder added to provide the necessary scattering properties. Airplane models were imbedded in the phantoms approximately halfway between the illumination and detection planes. The absorption and reduced scattering coefficients for all the phantoms were 0.01 cm^{-1} and 18 cm^{-1}, respectively. Homogeneous phantoms were made at the same time as the inhomogeneous phantoms in order to facilitate background subtraction. In practice, of course, accurate material properties usually must be determined from the data themselves; we are currently working on methods to accomplish this. The phantom depths for the results shown herein were either 4.5 cm or 5.5 cm, and their widths were made sufficiently large to minimize edge effects in the reconstructions. All data were collected using a 16×16 scan covering an 11 cm by 11 cm area. Denser scans were not necessary because the highest spatial frequency measured in the data was approximately 0.5 cm^{-1}, and thus a 16×16 sampling grid provides a sufficiently dense spatial sampling to satisfy the Nyquist theorem. The measured data were imbedded in a 128×128 array in the computer prior to reconstructing the scattered wave in order to produce denser frequency-domain sampling, which enhances filter creation.
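The sampling argument can be checked directly with the numbers quoted above; this is a standalone arithmetic sketch (we assume the 16 stops span the full 11 cm, so the spacing divides by 15):

```python
# Nyquist check for a 16x16 scan covering an 11 cm x 11 cm area
n_stops, extent_cm = 16, 11.0
spacing_cm = extent_cm / (n_stops - 1)   # ~0.73 cm between stops
f_nyquist = 1.0 / (2.0 * spacing_cm)     # ~0.68 cycles/cm
f_max_in_data = 0.5                      # highest measured frequency, cycles/cm
assert f_nyquist > f_max_in_data         # sampling is dense enough
```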

Our first image reconstruction result is of a Boeing 747 airplane model that is 7.5 cm in length from the nose to the tail and 6.5 cm in width from wingtip to wingtip. It was imbedded in a 5.5 cm deep phantom (optical depth of ~100) in the orientation shown in Fig. 2(a). Data on the inhomogeneous and the homogeneous phantoms were collected and the resulting data input into the image reconstruction algorithm. The SNRs in the Fourier transform of the scattered wave dropped to one at a spatial frequency of approximately 0.5 cm^{-1}, so we used that value for the cutoff frequency of our low-pass regularizing filter. We used a pillbox filter which, for that cutoff frequency, gives a resolution in the image domain of 1 cm. A movie showing the three-dimensional reconstruction of the wave scattered by the airplane model is shown in Fig. 2(b). The images in the movie are annotated with the corresponding depth in the phantom in units of optical depth. Notice that, in the detection plane (optical depth of 0), essentially no detail of the airplane can be seen. As the reconstructed scattered wave is viewed further and further back into the phantom, detail starts appearing. At approximately halfway through the phantom (optical depth of ~45) the image quality is a maximum. For images reconstructed further into the phantom, the airplane image dissolves into a random speckle pattern. Because the airplane model is located at the halfway point in the phantom, we conclude that we can use the image quality as a way of locating the airplane; that is, the airplane model is located at the depth where the image quality is a maximum. We noticed a similar phenomenon when reconstructing images of spherical objects imbedded in phantoms [11,12]. In that case, however, the object is essentially unresolved, and thus the location of the object was determined by finding the peak of the reconstructed scattered wave.
The reason that image quality is a maximum at the true location of the object is that the backpropagation algorithm optimally inverts the blurring produced by propagation through the turbid medium when reconstructing an image at the object's correct location. For shallower locations, not enough blurring is removed; for deeper locations, the backpropagation algorithm overcompensates for the turbid-medium blurring and thus overemphasizes the high-spatial-frequency information, producing speckle because of noise at these frequencies. A detailed explanation of these issues can be found in Ref. [12].

We show another airplane model and a movie of the reconstructed scattered wave throughout the phantom in Fig. 3. The airplane model is of a Japanese Zero aircraft with a nose-to-tail length of 4.5 cm and a wingtip-to-wingtip width of 5.5 cm. As for the Boeing 747 result, data from the inhomogeneous and homogeneous phantoms were collected and processed using the backpropagation algorithm. The phantoms in this case were 4.5 cm in depth (optical depth of ~80), but the SNRs in the Fourier transform of the scattered wave dropped below one at approximately the same point (0.5 cm^{-1}) as for the Boeing 747 model, even though one might normally expect the SNRs to be higher for thinner phantoms. We believe that this is due to the behavior of the lock-in amplifier, which integrates the measured signal until it achieves its desired SNR in the data and then generates data for the acquisition computer. Thus we do not see improvements in the data SNR for thinner phantoms; rather, we collect the data a little faster because the lock-in amplifier does not have to integrate as long to achieve the preset SNR.

As for the Boeing 747 airplane model, we are able to locate the Japanese Zero airplane model in depth by determining the depth at which the optimum image quality is obtained. As can be seen from the movie, the maximal image quality occurs around an optical depth of 32. For optical depths smaller than this, the image is less resolved, and for optical depths greater than this, the image breaks up into a random speckle pattern.

To analyze more quantitatively the resolution improvements obtained by using a backpropagation algorithm to reconstruct images from the measured data, in Fig. 4 we plot the point-spread functions (PSFs) of the measured data for both airplane models as well as the reconstruction PSF corresponding to the regularizing filter. The PSFs for the measured data were calculated from the data themselves, while the reconstruction PSF was generated analytically by inverse Fourier transforming the filter used in the backpropagation algorithm. Using the width of a PSF as a metric for spatial resolution is a well-accepted method dating back at least to Lord Rayleigh. Although there are different ways to characterize the width of a PSF (full width at half maximum, distance from the maximum to the first zero, etc.), each of which produces a slightly different answer, we choose the Rayleigh criterion (distance from the maximum to the first zero) because it is commonly used as an imaging resolution metric. We must modify the Rayleigh criterion slightly because the PSFs due to scattering never reach zero. Therefore, we define the resolution of a PSF as the distance from the maximum of the PSF to 10% of its maximum value. We can see in Fig. 4 that the measured-data PSFs drop to 10% of their maximum values at approximately 4 cm, indicating that the resolution in the measured data is 4 cm. Similarly, the reconstruction PSF indicates that the resolution in the reconstructed data is approximately 1 cm. Thus we have quadrupled our resolution by postprocessing the data.
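The modified Rayleigh criterion described above is straightforward to compute from a sampled PSF. The sketch below is our illustration (the function name and the one-dimensional-profile assumption are ours, not from the paper):

```python
import numpy as np

def resolution_10pct(psf, dx):
    """Modified Rayleigh criterion: distance from the PSF maximum to the
    first sample at or below 10% of the maximum value.  `psf` is a 1-D
    profile (e.g., a radial cut through the 2-D PSF), `dx` its sample
    spacing in cm."""
    psf = np.asarray(psf, dtype=float)
    i_max = int(np.argmax(psf))
    threshold = 0.1 * psf[i_max]
    for i in range(i_max, len(psf)):
        if psf[i] <= threshold:
            return (i - i_max) * dx
    return np.inf  # PSF never falls to 10% within the sampled window
```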

## 5. Discussions and future work

We have shown that detail can be resolved in images of objects imbedded in turbid media when their sizes are greater than the resolution limits imposed by the SNRs in the measured data, even when there is no discernable detail in the measured data. The amount of resolution increase is determined by where the SNRs in the Fourier transform of the measured data are greater than one. For the results in this paper, we were able to increase image resolution from 4 cm to 1 cm. It has been previously shown [15] that SNR considerations limit the resolvable detail in reconstructed images to approximately 0.5 cm or larger. Therefore, our results provide qualitative validation of this conclusion. In addition, we believe that we can improve our resolution by a factor of two or so with a different experimental setup. The noise levels in the data presented in this paper were several orders of magnitude larger than we expected for photon noise. Apparently the lock-in amplifier was integrating for too short a time to reduce the noise to the levels that we were expecting. Reducing the noise levels down to what we would expect given dynamic range limitations would meaningfully increase the data collection time with the current heterodyne setup. Because our theoretical results [10] have shown that dc illumination produces the highest SNR in the measured data, we have opted to develop an experimental setup based on using a CCD camera for data collection. We have taken some preliminary data using the CCD camera setup and have seen noise levels reduced by several orders of magnitude with collection times that also have been reduced by orders of magnitude (depending upon the number of data samples). We will publish these results in the near future.

We have also shown that we can locate an object in depth by determining which plane in the reconstructed scattered wave has the highest quality image reconstruction. Admittedly, this localization method is inherently qualitative in nature, unlike in our previous work [12] where we showed that unresolvable objects can be located in depth by determining the location of the peak amplitude in the reconstructed wave, a nicely quantitative measure. This quantitative method does not work as well for resolved objects because the object intensity pattern is spread out spatially. The visual determination of image quality is not as ad hoc as it might first appear, however, because images tend to have spatially-correlated structure while noise tends to be spatially uncorrelated. Therefore, the visual determination method could be extended to more quantitative methods by incorporating correlation metrics into the localization algorithm.
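One way such a correlation metric might look is sketched below: each reconstructed plane is scored by its mean-removed nearest-neighbor correlation, and the object depth is taken as the plane with the highest score. This is our illustration of the idea, not an algorithm from the paper, and all names are hypothetical:

```python
import numpy as np

def correlation_score(img):
    """Mean-removed nearest-neighbor correlation of a reconstructed plane.
    Spatially-correlated object structure scores well above zero, while
    uncorrelated speckle scores near zero (alternating patterns score
    negative)."""
    img = np.asarray(img, dtype=float)
    img = img - img.mean()
    denom = np.sum(img ** 2)
    if denom == 0.0:
        return 0.0  # perfectly flat plane carries no structure
    horiz = np.sum(img[:, :-1] * img[:, 1:])
    vert = np.sum(img[:-1, :] * img[1:, :])
    return (horiz + vert) / (2.0 * denom)

def locate_depth(volume, dz):
    """Depth of the plane with the highest correlation score, for a stack
    of reconstructed planes separated by dz."""
    scores = [correlation_score(plane) for plane in volume]
    return int(np.argmax(scores)) * dz
```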

Our future work includes optimizing our CCD camera experimental setup for optimal image resolution and data collection speed. Optimal resolution depends upon maximizing the SNR in the data, which means minimizing all other noise sources so that photon noise dominates. We also plan on developing absolute SNR expressions based upon this experimental setup in order to validate the expressions with laboratory data. Our previous SNR work resulted in relative SNR expressions which gave insight into the relative importance of various system and medium parameters on achievable resolution. Extending these results to absolute SNR expressions will result in closed-form predictions of absolute resolution limits in optical diffusion tomography.

## References and Links

**1. **H.B. Jiang, K.D. Paulsen, U.L. Osterberg, and M.S. Patterson, “Frequency-domain optical image reconstruction in turbid media - an experimental study of single-target detectability,” Appl. Opt. **36**, 52–63 (1997) [CrossRef] [PubMed]

**2. **H.B. Jiang, K.D. Paulsen, U.L. Osterberg, and M.S. Patterson, “Frequency-domain optical image reconstruction in turbid media - an experimental study of single-target detectability: erratum,” Appl. Opt. **36**, 2995–2996 (1997) [CrossRef] [PubMed]

**3. **M.A. O’Leary, D.A. Boas, B. Chance, and A.G. Yodh, “Experimental images of heterogeneous turbid media by frequency-domain diffusing-photon tomography,” Opt. Lett. **20**, 426–428 (1995) [CrossRef]

**4. **S. Fantini, S.A. Walker, M.A. Franceschini, M. K, P.M. Schlag, and K.T. Moesta, “Assessment of the size, position, and optical properties of breast tumors in vivo by noninvasive optical methods,” Appl. Opt. **37**, 1982–1989 (1998) [CrossRef]

**5. **S.A. Walker, S. Fantini, and E. Gratton, “Image reconstruction by backprojection from frequency-domain optical measurements in highly scattering media,” Appl. Opt. **36**, 170–179 (1997) [CrossRef] [PubMed]

**6. **W. Zhu, Y. Wang, Y. Yao, J. Chang, H.L. Graber, and R.L. Barbour, “Iterative total least-squares image reconstruction algorithm for optical tomography by the conjugate gradient method,” J. Opt. Soc. Am. A **14**, 799–807 (1997) [CrossRef]

**7. **A.H. Hielscher, A.D. Klose, and K.M. Hanson, “Gradient-based iterative image reconstruction scheme for time-resolved optical tomography,” IEEE Trans. Med. Imag. **18**, 262–271 (1999) [CrossRef]

**8. **W. Zhu, Y. Wang, Y. Deng, Y. Yao, and R.L. Barbour, “A wavelet-based multiresolution regularized least squares reconstruction approach for optical tomography,” IEEE Trans. Med. Imag. **16**, 210–217 (1997) [CrossRef]

**9. **C.L. Matson, “A diffraction tomographic model of the forward problem using diffuse photon density waves,” Opt. Express **1**, 6–11 (1997) [CrossRef] [PubMed]

**10. **C.L. Matson and H. Liu, “Analysis of the forward problem with diffuse photon density waves in turbid media by use of a diffraction tomography model,” J. Opt. Soc. Am. A **16**, 455–466 (1999) [CrossRef]

**11. **C.L. Matson, N. Clark, L. McMackin, and J.S. Fender, “Three-dimensional tumor localization in thick tissue with the use of diffuse photon-density waves,” Appl. Opt. **36**, 214–220 (1997) [CrossRef] [PubMed]

**12. **C.L. Matson and H. Liu, “Backpropagation in turbid media,” J. Opt. Soc. Am. A **16**, 1254–1265 (1999) [CrossRef]

**13. **The Interactive Data Language software package is available from Research Systems in Boulder, CO USA

**14. **H. Liu, C.L. Matson, K. Lau, and R.R. Mapakshi, “Experimental validation of a backpropagation algorithm for three-dimensional breast tumor localization,” IEEE J. Select. Topics Quantum Electron. **5**, 1049–1057 (1999) [CrossRef]

**15. **D.A. Boas, M.A. O’Leary, B. Chance, and A.G. Yodh, “Detection and characterization of optical inhomogeneities with diffuse photon density waves: a signal-to-noise analysis,” Appl. Opt. **36**, 75–92 (1997) [CrossRef] [PubMed]