## Abstract

Fast acquisition and high axial resolution are two primary requirements for three-dimensional microscopy. However, they are sometimes conflicting: imaging modalities such as confocal imaging can deliver superior resolution at the expense of sequential acquisition at different axial planes, which is a time-consuming process. Optical scanning holography (OSH) promises to deliver a good trade-off between these two goals. With just a single scan, we can capture the entire three-dimensional volume in a digital hologram; the data can then be processed to obtain the individual sections. An accurate modeling of the imaging system is key to devising an appropriate image reconstruction algorithm, especially for real data, where random noise and other imaging imperfections must be taken into account. In this paper we demonstrate sectional image reconstruction by applying an inverse imaging sectioning technique to experimental OSH data of biological specimens and visualizing the sections using the OSA Interactive Science Publishing software.

© 2009 Optical Society of America

^{◊}Data sets associated with this article are available at http://hdl.handle.net/10376/1428. Links such as View 1 that appear in figure captions and elsewhere will launch custom data views if ISP software is present.

## 1. Introduction

A fast and high-resolution capture of tiny three-dimensional objects—commonly biological specimens—can greatly facilitate a scientist’s understanding of the microscopic world. Toward this end, many three-dimensional microscopy techniques have been developed. On the one hand is confocal microscopy, which illuminates a single spot on the sample at any given time through a pinhole. The light reflected from the sample is then imaged by the objective back to the pinhole [1]. This gives excellent rejection of out-of-focus information, leading to high-quality images. However, the three-dimensional image is formed by a three-dimensional raster scan of the entire object, which is a slow process. This is a major disadvantage of confocal microscopy in applications that demand fast processing or high throughput, such as in [2]. Similarly, optical coherence tomography (OCT) achieves true depth slicing by rejecting the out-of-focus information before detection, this time by interference between two beams with short temporal coherence. As noted in [3], OCT suffers from the same drawback of a time-consuming acquisition process.

On the other hand we have holographic imaging, which aims to capture the three-dimensional information in a two-dimensional hologram of complex data. Indeed, holograms were invented primarily for microscopy [4, 5], and their use in three-dimensional microscopy of living biological specimens has long been demonstrated [6]. The advent of digital holography [7, 8] has made the recording and the processing of holograms much easier, fueled by the exponential gain in computing speed as computer technologies ride on Moore’s Law. In this setting, data capture at only a single two-dimensional plane is necessary. Holographic imaging therefore holds a significant advantage in terms of capture speed and is particularly appropriate for imaging fast-moving *in vivo* objects.

After the three-dimensional information has been acquired and stored, we often want to view individual two-dimensional planes of the object. This is called sectioning [9]. For confocal microscopy and OCT, this is a trivial process, as data are captured section by section in the first place. For holographic imaging, however, this often involves significant computation using techniques such as deconvolution, which is challenging because of the ill-posed nature of the problem [10]. Furthermore, in reconstructing any given plane, the hologram can be viewed as containing defocused light and many spurious structures from the other sections, reducing the signal-to-noise ratio of the resulting sectional image [3]. Depending on the architecture, holographic imaging may also suffer from speckle noise due to the coherent nature of its recording [11].

A specific form of digital holography that we focus on in this paper is optical scanning holography (OSH) [12]. Unlike other microscopes, it can capture the holographic information of fluorescent specimens in three dimensions [13]. In fact, it has been demonstrated that resolution better than $1\text{\hspace{0.17em}}\mathrm{\mu m}$ is possible in holographic fluorescent microscopy [14]. We will give the operating principles and mathematical models of OSH in Section 2. Suffice it to say that OSH is a two-pupil system [15] combining active optical scanning and optical heterodyning: the former means that the information is raster-scanned in a two-dimensional fashion, while the latter is the technique that preserves the phase information in the hologram. OSH has been applied to three-dimensional image recognition, display, and cryptography, but its main application remains microscopy [16]. In this specific case it is also commonly called scanning holographic microscopy [17].

In what follows, we will first describe the OSH system in some detail to facilitate the subsequent discussions on the computational results. This includes both an overview of the physical system and the mathematical model used to represent it. The main computational task in three-dimensional microscopy is the reconstruction of sectional images. We therefore give an overview of the algorithm we use, as well as some alternative approaches, in Section 3. Computational results using experimental OSH data are then given in Section 4, together with visualization of the sections using the OSA Interactive Science Publishing software.

## 2. Principles and Mathematical Models for Optical Scanning Holography

The detailed operating principle of OSH is given in [13]. Here, we intend to provide only a succinct description that is sufficient for the readers to appreciate the data-recording power of OSH and the challenge of sectional image reconstruction that necessarily follows the processing of the hologram.

#### 2A. Physical System

Consider Fig. 1, which shows generically the OSH system architecture used in microscopy. Two coherent beams at different temporal frequencies, ${\omega}_{0}$ (or equivalently, at wavelength ${\lambda}_{0}$) and ${\omega}_{0}+\alpha $, where ${\omega}_{0}\gg \alpha $, are produced to illuminate the system. They pass through two pupils, ${p}_{1}(x,y)$ and ${p}_{2}(x,y)$, respectively. The lenses ${\text{L}}_{1}$ and ${\text{L}}_{2}$ following the two pupils both have focal length *f*. A beam splitter (BS) combines the light after the two lenses, and a two-dimensional scanner projects it onto an object, which is at a distance *z* away. Alternatively, the sample rather than the projected pattern can be scanned. A photodetector (PD) collects the transmitted and scattered light from the object and converts it to an electronic signal. The demodulation part that follows—including a bandpass filter (BPF) centered at frequency *α*, electronic multipliers, and lowpass filters (LPF)—processes the signal and generates two quadrature electronic holograms in a computer. The capture of the hologram phase using on-line heterodyning methods offers the advantage of superior noise rejection capability but is relatively slow. The fast deep-memory data acquisition systems available today allow an alternative to phase-sensitive detection. The entire modulated signal can be captured on-line and processed (also on-line if needed) to extract the complex holographic data [14].

#### 2B. Mathematical Representation

The OSH system described in the previous section is considered as a linear space-invariant system. As such, it has an optical transfer function (OTF) $\mathcal{H}({k}_{x},{k}_{y};z)$, which governs its behavior for different signal inputs. Here, ${k}_{x}$ and ${k}_{y}$ are the transverse spatial frequency coordinates.

For brevity, we omit the detailed derivation of the OTF, which can be found in [13, 18]. The complete expression of the OTF of the two-pupil optical heterodyne scanner is, in the paraxial approximation,

$$\mathcal{H}(k_x,k_y;z)=\exp\left[j\frac{z}{2k_0}\left(k_x^2+k_y^2\right)\right]\iint p_1^*(x',y')\,p_2\!\left(x'+\frac{f}{k_0}k_x,\,y'+\frac{f}{k_0}k_y\right)\exp\left[j\frac{z}{f}\left(x'k_x+y'k_y\right)\right]dx'\,dy', \tag{1}$$

where $k_0=2\pi/\lambda_0$ is the wavenumber of the light.

In OSH, we choose specifically that ${p}_{1}(x,y)=1$ (omitting finite size effects) and ${p}_{2}(x,y)=\delta (x,y)$, i.e., a Dirac delta function. The purpose is to construct an impulse response whose phase is a quadratic function of *x* and *y*, known as a Fresnel zone pattern impulse response. To see why, we note that with this choice of pupil functions, Eq. (1) becomes

$$\mathcal{H}(k_x,k_y;z)=\exp\left[-j\frac{z}{2k_0}\left(k_x^2+k_y^2\right)\right]. \tag{2}$$

The corresponding impulse response is the two-dimensional inverse Fourier transform of the OTF,

$$h_{\text{OSH}}(x,y;z)=\mathcal{F}^{-1}\left\{\mathcal{H}(k_x,k_y;z)\right\}, \tag{3}$$

which, for the OTF in Eq. (2), evaluates to the free-space impulse response

$$h(x,y;z)=\frac{k_0}{j2\pi z}\exp\left[j\frac{k_0}{2z}\left(x^2+y^2\right)\right], \tag{4}$$

whose phase is indeed a quadratic function of *x* and *y*.
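To build intuition for this Fresnel zone pattern impulse response, the following sketch samples $h(x,y;z)$ on a discrete grid. This is a hypothetical NumPy illustration: the grid size, pixel pitch, wavelength, and depth are made-up values, not the experimental parameters.

```python
import numpy as np

def fzp_impulse_response(n=256, pitch=0.5e-6, wavelength=632.8e-9, z=100e-6):
    """Sampled free-space impulse response h(x, y; z).

    All parameters are illustrative: n is the grid size in pixels,
    pitch the sampling interval, wavelength the value of lambda_0,
    and z the depth of the section.
    """
    k0 = 2 * np.pi / wavelength                  # wavenumber
    x = (np.arange(n) - n // 2) * pitch
    xx, yy = np.meshgrid(x, x)
    # Quadratic-phase kernel: the real and imaginary parts are the
    # cosine and sine Fresnel zone patterns, respectively.
    return (k0 / (1j * 2 * np.pi * z)) * np.exp(1j * k0 * (xx**2 + yy**2) / (2 * z))

h = fzp_impulse_response()
```

The magnitude of `h` is constant; all the depth information is carried by the quadratic phase, which is why the ring spacing of the zone pattern encodes the depth *z*.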

When we have an object with complex amplitude $\mathcal{O}(x,y,z)$, the complex hologram is then given by [13]

$$g(x,y)=\int \left|\mathcal{O}(x,y,z)\right|^2 * h(x,y;z)\,dz. \tag{5}$$

In practice, we discretize the object along the axial direction into *N* sections denoted by ${z}_{1},{z}_{2},\dots ,{z}_{N}$, so that

$$g(x,y)=\sum_{i=1}^{N} \left|\mathcal{O}(x,y,z_i)\right|^2 * h(x,y;z_i). \tag{6}$$

The symbol * denotes a two-dimensional convolution. Note that we have dropped the designation “OSH” in the impulse response whenever no confusion should arise. Furthermore, the real part of the complex hologram is called the sine hologram, while the imaginary part is called the cosine hologram. In the OSH system architecture depicted in Fig. 1, we have indicated the recording of the cosine and sine holograms.
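The forward model in Eq. (6) is straightforward to simulate. The sketch below synthesizes a complex hologram of a two-section object using FFT-based circular convolution; it is a hypothetical NumPy illustration in which the sampling grid, wavelength, depths, and point objects are all made-up parameters.

```python
import numpy as np

def fresnel_kernel(n, pitch, wavelength, z):
    """Sampled free-space impulse response h(x, y; z)."""
    k0 = 2 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * pitch
    xx, yy = np.meshgrid(x, x)
    return (k0 / (1j * 2 * np.pi * z)) * np.exp(1j * k0 * (xx**2 + yy**2) / (2 * z))

def conv2(a, b):
    """Two-dimensional circular convolution with a centered kernel."""
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(np.fft.ifftshift(b)))

n, pitch, wl = 128, 0.2e-6, 632.8e-9     # illustrative sampling parameters
depths = [85e-6, 120e-6]                 # two sections

# Intensity sections |O(x, y; z_i)|^2: one point object per section.
sections = [np.zeros((n, n)), np.zeros((n, n))]
sections[0][40, 40] = 1.0
sections[1][90, 80] = 1.0

# Complex hologram: the sum over sections of |O|^2 convolved with h.
g = sum(conv2(s, fresnel_kernel(n, pitch, wl, z))
        for s, z in zip(sections, depths))
```

The real and imaginary parts of `g` then play the roles of the two quadrature holograms recorded by the system.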

## 3. Sectional Image Reconstruction

Visualizing a two-dimensional section of an object entails reconstructing $|\mathcal{O}(x,y,z)|$, for a specific value of *z*, from the recorded complex hologram $g(x,y)$ (or equivalently, the sine and cosine holograms together). We know that this is not a trivial process; at least visually, it is nearly impossible to tell what a cross section of the three-dimensional image looks like from the holograms.

Mathematically, we notice from Eq. (5) that recovering $|\mathcal{O}(x,y,z)|$ (or $|\mathcal{O}(x,y,z){|}^{2}$) from $g(x,y)$ is an inverse problem, an observation that underpins our reconstruction technique in Subsection 3E. It is understood that many inverse problems are sensitive to random noise, which must not be overlooked [21]. Some of the techniques described below have proved successful with synthetic images. However, when applied to experimental data, as is the case in the present paper, we must pay particular attention to the effect of random noise.

#### 3A. Identifying the Impulse Response

The free-space impulse response $h(x,y;z)$ given by Eq. (4) is dependent on the depth *z*. For experimental data, the depth locations where we can identify interesting objects may not be known *a priori*. In that case, the reconstruction of $|\mathcal{O}(x,y,z){|}^{2}$ given $g(x,y)$ in Eq. (5) is an instance of blind reconstruction because of the lack of information about the impulse response [22]. A separate identification method, such as the edge-based technique in [23], may be needed.

Even if we know the values of *z* where we would like to obtain a sectional image, another issue is that Eq. (4) is derived under an idealized scenario. For example, we have to take ${p}_{1}(x,y)=1$ and ${p}_{2}(x,y)=\delta (x,y)$ in Eq. (1), which are not strictly possible in practice. Instead, we often model these pupil functions with a broad and a narrow Gaussian profile, respectively [24].

In imaging, one practical way to estimate the impulse response is, in fact, to measure the output for various inputs that resemble an impulse. In our case, the input is essentially a point in the three-dimensional space. This is the approach we will take in Section 4.

#### 3B. Conventional Method in Sectioning

Assume that, out of the *N* sections in the object $\mathcal{O}(x,y,{z}_{i})$, we would like to reconstruct the section at ${z}_{1}$ from $g(x,y)$ as given by Eq. (6). We can show, with simple algebraic manipulations, that

$$g(x,y)*h^*(x,y;z_1)=\left|\mathcal{O}(x,y,z_1)\right|^2 * h(x,y;z_1)*h^*(x,y;z_1)+\sum_{i=2}^{N}\left|\mathcal{O}(x,y,z_i)\right|^2 * h(x,y;z_i)*h^*(x,y;z_1). \tag{8}$$

The first term is the desired section, since $h(x,y;z_1)*h^*(x,y;z_1)$ is approximately a delta function. The second term is referred to as the *defocus noise*, as it is contributed by signals at sections away from our target. We call this reconstruction by convolving with a matched filter the conventional method, as it was the first method suggested for sectional image reconstruction [13]. Note also that the first term above is real, while the second term is complex. We typically discard the imaginary part of the reconstructed signal, as it carries no signal content.
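The conventional method amounts to a single matched filtering, which is efficient to carry out in the frequency domain. Below is a self-contained sketch; it is a hypothetical NumPy illustration with made-up parameters, in which a two-point, two-section object stands in for real data.

```python
import numpy as np

def fresnel_kernel(n, pitch, wavelength, z):
    """Sampled free-space impulse response h(x, y; z)."""
    k0 = 2 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * pitch
    xx, yy = np.meshgrid(x, x)
    return (k0 / (1j * 2 * np.pi * z)) * np.exp(1j * k0 * (xx**2 + yy**2) / (2 * z))

def conv2(a, b):
    """Two-dimensional circular convolution with a centered kernel."""
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(np.fft.ifftshift(b)))

def matched_filter(g, h):
    """Correlate the hologram with the impulse response, i.e., apply
    the conjugate (matched) filter in the frequency domain."""
    return np.fft.ifft2(np.fft.fft2(g) *
                        np.conj(np.fft.fft2(np.fft.ifftshift(h))))

n, pitch, wl = 128, 0.2e-6, 632.8e-9     # illustrative parameters
z1, z2 = 85e-6, 120e-6

# Synthesize a hologram of two point sections, as in Eq. (6).
s1 = np.zeros((n, n)); s1[40, 40] = 1.0
s2 = np.zeros((n, n)); s2[90, 80] = 1.0
g = (conv2(s1, fresnel_kernel(n, pitch, wl, z1)) +
     conv2(s2, fresnel_kernel(n, pitch, wl, z2)))

# Reconstruct section z1: the point at (40, 40) refocuses to a sharp
# peak, while the z2 section remains as spread-out defocus noise.
rec = np.real(matched_filter(g, fresnel_kernel(n, pitch, wl, z1)))
```

The reconstructed section peaks sharply at the in-focus point, with the out-of-focus section contributing only a low-level background, illustrating both the signal term and the defocus noise term of the decomposition above.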

#### 3C. Sectioning using a Wiener Filter

The Wiener filter approach is designed as an improvement to the conventional method [3]. Instead of discarding the imaginary part of Eq. (8), both the real and the imaginary parts of $g(x,y)*{h}^{*}(x,y;{z}_{1})$ are retained. The squares of their spectral magnitudes, after smoothing with rectangular pulses, are labeled ${\mathcal{P}}_{\text{real}}({k}_{x},{k}_{y})$ and ${\mathcal{P}}_{\text{imag}}({k}_{x},{k}_{y})$, respectively, and are used to estimate the power spectra of the signal and the noise. A Wiener filter using the OTF $\mathcal{H}({k}_{x},{k}_{y};{z}_{1})$ and these power spectra is then given by

$$\mathcal{H}_{W}(k_x,k_y)=\frac{\mathcal{H}^{*}(k_x,k_y;z_1)}{\left|\mathcal{H}(k_x,k_y;z_1)\right|^{2}+\mathcal{P}_{\text{imag}}(k_x,k_y)/\mathcal{P}_{\text{real}}(k_x,k_y)}. \tag{9}$$

This method has been demonstrated on synthetic data with two sections. We note, however, that the computation of the power spectra relies on idealized assumptions (such as the absence of random noise) and would incur significant errors in practical scenarios, degrading the resulting reconstruction quality.
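The power-spectra bookkeeping of this approach can be sketched as follows. This is a simplified, hypothetical rendering rather than the exact filter of [3]: the smoothed spectrum of the real part of the matched-filter output serves as the signal power estimate, that of the imaginary part as the noise power estimate, and all simulation parameters are made up.

```python
import numpy as np

def fresnel_kernel(n, pitch, wavelength, z):
    """Sampled free-space impulse response h(x, y; z)."""
    k0 = 2 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * pitch
    xx, yy = np.meshgrid(x, x)
    return (k0 / (1j * 2 * np.pi * z)) * np.exp(1j * k0 * (xx**2 + yy**2) / (2 * z))

def conv2(a, b):
    """Two-dimensional circular convolution with a centered kernel."""
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(np.fft.ifftshift(b)))

def smoothed_power(u, w=5):
    """|FFT|^2 of u, smoothed with a w-by-w rectangular pulse."""
    P = np.abs(np.fft.fft2(u))**2
    box = np.ones((w, w)) / w**2
    return np.real(np.fft.ifft2(np.fft.fft2(P) * np.fft.fft2(box, s=P.shape)))

n, pitch, wl = 128, 0.2e-6, 632.8e-9     # illustrative parameters
z1, z2 = 85e-6, 120e-6
h1 = fresnel_kernel(n, pitch, wl, z1)

# Hologram of two point sections.
s1 = np.zeros((n, n)); s1[40, 40] = 1.0
s2 = np.zeros((n, n)); s2[90, 80] = 1.0
g = conv2(s1, h1) + conv2(s2, fresnel_kernel(n, pitch, wl, z2))

# Matched-filter output: real part ~ signal, imaginary part ~ defocus noise.
r = conv2(g, np.conj(h1))
P_real = smoothed_power(np.real(r))          # signal power estimate
P_imag = smoothed_power(np.imag(r))          # noise power estimate

# Wiener-type filter built from the OTF and the estimated spectra.
H1 = np.fft.fft2(np.fft.ifftshift(h1))
W = np.conj(H1) / (np.abs(H1)**2 + P_imag / np.maximum(P_real, 1e-30))
rec = np.real(np.fft.ifft2(np.fft.fft2(g) * W))
```

In a noise-free simulation such as this one the spectral estimates are benign; as noted above, with experimental data the estimation errors become the limiting factor.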

#### 3D. Sectioning using the Wigner Distribution

Time-frequency analysis provides another approach to separating the signal and the defocus noise in Eq. (8). Consider for simplicity that there are only two sections. The Wigner distribution function (WDF) is a fundamental time-frequency analysis tool; the WDF of the complex hologram after a fractional Fourier transform [25] consists of three terms: the WDF of the focused signal, the WDF of the defocused signal, and a cross term [26]. Properly chosen filters can then be designed to remove the latter two.

There are several drawbacks to the WDF approach, however. First, the computational load is significant, as the WDF is a four-dimensional function (twice the dimension of the original signal). Second, the cross term degrades the reconstruction result. Cross terms are a common problem in time-frequency analysis, especially for the WDF, although there are alternative time-frequency analysis tools that can suppress them better [27]. In any case, the number of cross terms grows with the number of sections, making them increasingly difficult to suppress or ignore. Third, as with the Wiener filter technique, random noise is ignored in the problem formulation. This presents another important source of error in practical use of this technique for optical sectioning. So far, the WDF approach has only been demonstrated feasible for synthetic two-section objects under ideal conditions.

#### 3E. Sectioning using Inverse Imaging

The inverse imaging approach was first reported in [28] and later refined in [29]. It attempts to recover *all* sections together by considering them as the “input” to the forward problem, and the observed hologram as the “output,” under a known system.

Referring back to Eq. (6), we convert the two-dimensional hologram $g(x,y)$, of size $N\times N$, into a length-${N}^{2}$ vector **g** by lexicographic ordering. Similarly, $|\mathcal{O}(x,y,{z}_{i}){|}^{2}$ becomes ${\varphi}_{i}$. An ${N}^{2}\times {N}^{2}$ matrix ${H}_{i}$ is formed from $h(x,y;{z}_{i})$ such that ${H}_{i}{\varphi}_{i}$ is equivalent to $h(x,y;{z}_{i})*|\mathcal{O}(x,y,{z}_{i}){|}^{2}$ after lexicographic ordering.

With these quantities, Eq. (6) can be written as

$$\mathbf{g}=\sum_{i=1}^{N}H_i\varphi_i+\eta, \tag{10}$$

where *η* is a length-${N}^{2}$ vector representing the Gaussian random noise. This can be further simplified to [30]

$$\mathbf{g}=H\varphi+\eta, \tag{11}$$

where $H=[H_1\ H_2\ \cdots\ H_N]$ and $\varphi={\left[\varphi_1^T\ \varphi_2^T\ \cdots\ \varphi_N^T\right]}^{T}$.

The inverse problem of finding *φ* in Eq. (11) can be solved by the minimization

$$\hat{\varphi}=\arg\min_{\varphi}\ \left\|\mathbf{g}-H\varphi\right\|_2^2+\mu\left\|\varphi\right\|_2^2, \tag{12}$$

where *μ* is the Lagrange multiplier. This minimization, which results from an underdetermined system of equations, has an analytic solution [31]

$$\hat{\varphi}=H^{H}\left(HH^{H}+\mu I\right)^{-1}\mathbf{g}, \tag{13}$$

where $H^{H}$ denotes the conjugate transpose of $H$ and $I$ is the identity matrix.

It is also possible to use other norms, such as the ${\ell}_{1}$ norm, in lieu of the square of the ${\ell}_{2}$ norm in Eq. (12) in one or both places, as demonstrated by the vast digital image restoration literature. Solving such problems is an instance of convex programming [32], for which fast computational tools exist. For example, a conjugate-gradient based algorithm is given in [28], which the readers can refer to for implementation details.

## 4. Experiments

#### 4A. Holograms of Point Sources

We construct a sequence of point objects $|{\mathcal{O}}_{i}(x,y;z){|}^{2}$ experimentally, where

$$\left|\mathcal{O}_i(x,y;z)\right|^{2}=\delta(x,y)\,\delta(z-z_i),\qquad i=1,2,\dots,N. \tag{14}$$

This gives rise to *N* complex holograms, which can be used to find the impulse responses $h(x,y;{z}_{i})$ experimentally.

We measure the holograms of a point object (a pinhole of $0.5\text{\hspace{0.17em}}\mathrm{\mu m}$ in diameter) located at every $5\text{\hspace{0.17em}}\mathrm{\mu m}$ from $70\text{\hspace{0.17em}}\mathrm{\mu m}$ to $125\text{\hspace{0.17em}}\mathrm{\mu m}$, with a numerical aperture $\mathrm{NA}=0.4$ in the system. Figure 2 shows the holograms of the pinhole at $z=125\text{\hspace{0.17em}}\mathrm{\mu m}$, with a field of view equal to $100\text{\hspace{0.17em}}\mathrm{\mu m}\times 100\text{\hspace{0.17em}}\mathrm{\mu m}$. Both the real and imaginary parts consist of concentric rings, with fringes at a frequency commensurate with the depth of the object: as the depth increases, the spatial frequency of the fringes decreases, identically in both parts. (Online readers can view the datasets using the OSA Interactive Science Publishing software by clicking on View 1 and View 2 in the caption of Fig. 2.)

The holograms of a point source can then be reconstructed to confirm that the necessary information about the object is indeed embedded in them. Since each object has only one depth, the conventional method suffices, as there is no defocus noise. We can also compare the reconstruction results using the experimental impulse response with those using the theoretical values given by Eq. (4). The images reconstructed by the two functions are accessible online at View 3 and View 4, respectively. Not surprisingly, the reconstruction using the experimental impulse response shown in View 3 gives a single sharp point. This is because the matched filter, being the complex conjugate of the experimental impulse response, has taken the imaging imperfections into account, so that the reconstructed image is free from such aberrations. On the other hand, if we reconstruct the object using the theoretical impulse response, we get the images in View 4, one of which is shown in Fig. 3. On the left we have the reconstructed image at full scale, which appears quite good as it seems to contain only a single point. However, if we magnify the area marked by the yellow square, we obtain the image shown in Fig. 3b, where we can clearly see that the point is blurred with some ringing, resembling an Airy disc.

#### 4B. Holograms of a Biological Specimen

We also obtain the complex hologram of a three-dimensional object consisting of a slide of fluorescent beads (Duke R0200, $2\text{\hspace{0.17em}}\mathrm{\mu m}$ in diameter, excitation around $542\text{\hspace{0.17em}}\mathrm{nm}$, and emission around $612\text{\hspace{0.17em}}\mathrm{nm}$). The beads tend to stick either to the top surface of the mounting slide or to the bottom surface of the coverslip, giving us a simple three-dimensional test sample with two dominant sections. In the sample used, the distance between the two planes was around $35\text{\hspace{0.17em}}\mathrm{\mu m}$ [33]. An emission filter and a fluorescence detector are attached to the system to measure the emitted light from the beads.

Experimentally, we have to estimate the values ${z}_{1}$ and ${z}_{2}$ that correspond to the two sections. This is done by reconstructing the object using all the impulse responses from $70\text{\hspace{0.17em}}\mathrm{\mu m}$ to $125\text{\hspace{0.17em}}\mathrm{\mu m}$. For speed considerations, we use the conventional method, i.e., we convolve the hologram with the conjugates of the various impulse responses measured earlier. Figure 4 shows the images for two particular depths, $85\text{\hspace{0.17em}}\mathrm{\mu m}$ and $120\text{\hspace{0.17em}}\mathrm{\mu m}$, which exhibit the most visible signals despite the presence of the defocus noise. View 5 provides the full set of reconstructions. We then determine that these are the planes at which we should reconstruct the images, and we apply the inverse imaging technique with these values of ${z}_{1}$ and ${z}_{2}$. The results are shown in Fig. 5, with the full set of reconstructions given online at View 6. We note that the defocus noise is suppressed in the reconstruction by inverse imaging, and we can observe the beads with good resolution. Referring to View 6, beads are recovered not only at the $z=85\text{\hspace{0.17em}}\mathrm{\mu m}$ and $z=120\text{\hspace{0.17em}}\mathrm{\mu m}$ sections but in several other sections as well. The first reason is that the beads are spherical, not flat, so parts of them also appear in adjacent sections. The other reason is that not all the beads settle exactly at the top and bottom surfaces; a few may be in transit toward the surfaces or resting at intermediate sections.

## 5. Concluding Remarks

In this paper we have demonstrated the principle and application of optical scanning holography in three-dimensional microscopy. We have focused in particular on sectional image reconstruction and shown that inverse imaging is a powerful technique that can be used on experimental data with rich content. This is an important milestone in the development of OSH toward delivering high-resolution sectional images for practical applications.

This work was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region, China under Project 713408E and by the University Research Committee of the University of Hong Kong under Project 10208291.

**1. **T. R. Corle and G. S. Kino, *Confocal Scanning Optical Microscopy and Related Imaging Systems* (Academic, 1996).

**2. **Y. Shu, R. Chung, Z. Tan, J. Cheng, E. Y. Lam, K. S. Fung, and F. Wang, “Projection optics design for tilted projection of fringe patterns,” Opt. Eng. **47**, 053002 (2008).

**3. **T. Kim, “Optical sectioning by optical scanning holography and a Wiener filter,” Appl. Opt. **45**, 872–879 (2006). [CrossRef]

**4. **D. Gabor, “A new microscope principle,” Nature **161**, 777–778 (1948).

**5. **J. W. Goodman, *Introduction to Fourier Optics* (Roberts & Company, 2004), 3rd ed.

**6. **C. Knox, “Holographic microscopy as a technique for recording dynamic microscopic subjects,” Science **153**, 989–990 (1966). [CrossRef]

**7. **J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. **11**, 77–79 (1967).

**8. **I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. **22**, 1268–1270 (1997). [CrossRef]

**9. **E. N. Leith, W.-C. Chien, K. D. Mills, B. D. Athey, and D. S. Dilworth, “Optical sectioning by holographic coherence imaging: A generalized analysis,” J. Opt. Soc. Am. A **20**, 380–387 (2003). [CrossRef]

**10. **E. Y. Lam and J. W. Goodman, “Iterative statistical approach to blind image deconvolution,” J. Opt. Soc. Am. A **17**, 1177–1184 (2000). [CrossRef]

**11. **J. W. Goodman, *Speckle Phenomena in Optics: Theory and Applications* (Roberts & Company, 2007).

**12. **T.-C. Poon, K. Doh, B. Schilling, M. Wu, K. Shinoda, and Y. Suzuki, “Three-dimensional microscopy by optical scanning holography,” Opt. Eng. **34**, 1338–1344 (1995).

**13. **T.-C. Poon, *Optical Scanning Holography with MATLAB* (Springer, 2007).

**14. **G. Indebetouw and W. Zhong, “Scanning holographic microscopy of three-dimensional fluorescent specimens,” J. Opt. Soc. Am. A **23**, 1699–1707 (2006). [CrossRef]

**15. **A. W. Lohmann and W. T. Rhodes, “Two-pupil synthesis of optical transfer functions,” Appl. Opt. **17**, 1141–1151 (1978). [CrossRef]

**16. **J. Swoger, M. Martínez-Corral, J. Huisken, and E. H. K. Stelzer, “Optical scanning holography as a technique for high-resolution three-dimensional biological microscopy,” J. Opt. Soc. Am. A **19**, 1910–1918 (2002). [CrossRef]

**17. **G. Indebetouw, P. Klysubun, T. Kim, and T.-C. Poon, “Imaging properties of scanning holographic microscopy,” J. Opt. Soc. Am. A **17**, 380–390 (2000). [CrossRef]

**18. **T.-C. Poon, “Scanning holography and two-dimensional image processing by acousto-optic two-pupil synthesis,” J. Opt. Soc. Am. A **2**, 521–527 (1985). [CrossRef]

**19. **M. Born and E. Wolf, *Principles of Optics* (Cambridge University Press, 1999), 7th ed.

**20. **R. N. Bracewell, *The Fourier Transform and Its Applications* (McGraw-Hill, 2000), 3rd ed.

**21. **E. Y. Lam, “Noise in superresolution reconstruction,” Opt. Lett. **28**, 2234–2236 (2003). [CrossRef]

**22. **Z. Xu and E. Y. Lam, “Maximum a posteriori blind image deconvolution with Huber-Markov random-field regularization,” Opt. Lett. **34**, 1453–1455 (2009). [CrossRef]

**23. **X. Zhang, E. Y. Lam, T.-C. Poon, T. Kim, and Y. S. Kim, “Blind sectional image reconstruction for optical scanning holography,” submitted to Opt. Lett.

**24. **B. D. Duncan and T.-C. Poon, “Gaussian beam analysis of optical scanning holography,” J. Opt. Soc. Am. A **9**, 229–236 (1992). [CrossRef]

**25. **H. M. Ozaktas, Z. Zalevsky, and M. A. Kutay, *The Fractional Fourier Transform: with Applications in Optics and Signal Processing*, 1st ed. (Wiley, 2001).

**26. **H. Kim, S.-W. Min, B. Lee, and T.-C. Poon, “Optical sectioning for optical scanning holography using phase-space filtering with Wigner distribution functions,” Appl. Opt. **47**, D164–D175 (2008). [CrossRef]

**27. **L. Cohen, *Time Frequency Analysis: Theory and Applications* (Springer-Verlag, 2007), 1st ed.

**28. **X. Zhang, E. Y. Lam, and T.-C. Poon, “Reconstruction of sectional images in holography using inverse imaging,” Opt. Express **16**, 17215–17226 (2008). [CrossRef]

**29. **X. Zhang, E. Y. Lam, and T.-C. Poon, “Fast iterative sectional image reconstruction in optical scanning holography,” in OSA Topical Meeting in Digital Holography and Three- Dimensional Imaging (2009).

**30. **X. Zhang, T.-C. Poon, and E. Y. Lam, “An inverse imaging approach to sectional image reconstruction in optical scanning holography,” in International Topical Meeting on Information Photonics (2008).

**31. **A. Ribés and F. Schmitt, “Linear inverse problems in imaging,” IEEE Signal Process. Mag. **25**, 84–99 (2008).

**32. **Y. Shen, E. Y. Lam, and N. Wong, “Binary image restoration by positive semidefinite programming,” Opt. Lett. **32**, 121–123 (2007). [CrossRef]

**33. **G. Indebetouw, “Scanning holographic microscopy with spatially incoherent sources: Reconciling the holographic advantage with the sectioning advantage,” J. Opt. Soc. Am. A **26**, 252–258 (2009). [CrossRef]