## Abstract

Digital holographic microscopy allows three-dimensional information, encoded in a single 2D snapshot of the coherent superposition of a reference and a scattered beam, to be retrieved numerically. Since no mechanical scans are involved, holographic techniques offer superior performance in terms of achievable frame rates. Unfortunately, numerical reconstruction of the scattered field by back-propagation leads to poor axial resolution. Here we show that overlapping the three numerical reconstructions obtained from tilted red, green, and blue beams results in a great improvement in the axial resolution and sectioning capabilities of holographic microscopy. A strong reduction in the coherent background noise is also observed when combining the volumetric reconstructions of the light fields at the three different wavelengths. We discuss the performance of our technique with two test objects: an array of four glass beads stacked along the optical axis and a freely diffusing rod-shaped *E. coli* bacterium.

© 2014 Optical Society of America

## 1. Introduction

Obtaining 3D microscopy images over a large three-dimensional volume and at a high frame rate is of fundamental importance for the study of dynamical processes in physics and biology. Most existing techniques, such as confocal microscopy [1], two-photon femtosecond laser scanning microscopy [2], and light sheet microscopy [3], make use of mechanical or optical scanning to retrieve a complete 3D image of a sample. However, the need for scans imposes strong constraints on the achievable frame rates. Recent improvements in confocal microscopy have led to acquisition speeds as high as 200 frames/s for a 512×512 image [4]. Adding a further scan along the axial direction quickly drops the frame rate below video rate for thick volumes.

When a very high frame rate is desired, techniques that encode the required 3D information in a 2D snapshot are the most effective. Most of those techniques are so far limited to position tracking of small objects and do not provide accurate 3D morphological information. For example, the 3D position of small particles can be retrieved using specially designed point spread functions that encode the information about the axial coordinate in different ways [5]. Much of the work on 3D tracking of micro-particles has been developed in the field of microfluidics and nanofluidics, where tracer particles are used to measure the local velocity field throughout the liquid sample [5].

Another approach to 3D tracking is provided by stereoscopic techniques, where the axial coordinate of the object is obtained from the relative lateral displacement of stereoscopic images. Structured two-color illumination from a digital projector in a low tilt angle geometry allowed gathering a stereoscopic view of trapped particles on an RGB camera [6]. Alternatively, the two stereoscopic images produced by white light illumination from two tilted fibers can be sorted using a spatial filter and two wedge prisms placed in the Fourier plane of the objective [7]. The spatial filter and the wedge prisms can also be replaced by an SLM, allowing several microscopy techniques to be implemented at the same time [8, 9].

More recently, refractive index tomography has been shown to provide detailed information on the morphology and internal structure of cells. Those techniques are based on the interferometric recording of the scattered field in both amplitude and phase. A 3D map of the refractive index is then obtained by numerically solving an inverse problem. The ill-conditioned nature of the numerical problem usually requires acquiring multiple independent images by scanning the sample angle [10, 11], the direction of the incident field [12–14], or the focal plane [15]. Although the obtained structural information can be very detailed, the need for scanning and the complexity of the reconstruction algorithms may make those techniques impractical when one is interested in tracking the position and orientation of a large number of simple objects at a very high frame rate.

Digital holographic microscopy (DHM) has proved to be one of the most effective techniques for 3D tracking of micrometer-sized objects over a large depth of field [16]. In on-axis DHM, the coherent illumination beam interferes with the scattered light on the image plane, giving rise to a fringe pattern called a hologram that encodes the phase information of the scattered light in the intensity. By direct back-propagation one can numerically reconstruct the 3D structure of the scattered light from a single two-dimensional image, so that the acquisition speed is only limited by the frame rate and the sensitivity of the camera [17–20]. The basic assumption behind DHM is that the reconstructed scattered field will have intensity maxima at the locations of the scattering particles. Dilute samples of spherical particles and solutions of various swimming microorganisms have been 3D tracked at frame rates of hundreds of Hz and at micrometer resolution in thick samples (up to mm) [18, 21, 22]. Notwithstanding these advantages, DHM reconstructions are usually poor in terms of optical-axis resolution [23], unless prior knowledge of the scatterer geometry is available [24, 25]. This is due to the fact that the light scattered by micron-sized objects has a small angular divergence, which results in poor refocusing power upon back-propagation. Moreover, when multiple objects are stacked along the optical axis, direct numerical back-propagation, which assumes an optically homogeneous medium, gives rise to poor volumetric reconstructions [23]. When using lasers as the coherent source, the coherent superposition of light scattered by unwanted objects leads to speckle noise and artifacts in the reconstruction. Using partially coherent sources such as LEDs suppresses coherent noise [26], but at the same time reduces fringe visibility, deteriorating the axial resolution of numerical reconstructions.

Here we show that the combination of DHM and simultaneous multi-axis illumination yields accurate volumetric reconstructions with increased axial resolution and sectioning capability. Using a color camera we can simultaneously record the three on-axis holograms produced by tilted red, green, and blue beams and obtain three independent volumetric reconstructions of the scattered fields. As in standard DHM, the individual reconstructed fields suffer from poor resolution along their corresponding axes. However, their high-intensity regions only overlap in small volumes centered at the locations of the scattering objects. The coherent background noise, which appears to be strongly sensitive to beam wavelength and direction, is also suppressed when overlapping the three reconstructions. We demonstrate the performance of our technique with two 3D test objects: an array of four glass beads stacked along the optical axis and a freely diffusing rod-shaped *E. coli* bacterium.

## 2. Methods

In standard on-axis DHM, the condenser stage of a conventional microscope is replaced by a collimated coherent source directed along the optical axis of the objective. As shown in Fig. 1, we used three collimated light beams from LED sources of different wavelengths (Thorlabs M455L3, M530L3, and M625L3). The three beams form an angle of approximately 45° with the optical axis and are equally spaced along the azimuthal coordinate. The LEDs' central wavelengths (455±9 nm, 530±16 nm, and 625±9 nm) are chosen to match the spectral response of the three color channels of the RGB camera (Basler avA1000-100gm). Using partially coherent sources such as our LEDs (coherence length 20 μm) removes most of the speckle noise [27, 28]. However, an extended transverse coherence is still required for good fringe visibility. Transverse coherence can be easily increased, at the cost of illumination intensity, by using iris diaphragms to reduce the numerical aperture of the condenser lenses, as shown in Fig. 1.

For each wavelength we can decompose the complex amplitude of the light field on the image plane into an incident wave *E*_{i} and a scattered wave *E*_{s}. In a coordinate system centered in the image plane, the incident field is given by *E*_{i}(**r**) = |*E*_{i}| *e*^{j**k**·**r**}, with **k** = (2*πn/λ*) [sin *θ* cos *ϕ*, sin *θ* sin *ϕ*, cos *θ*], where *ϕ* and *θ* are the direction angles of the illumination beam, *n* is the refractive index of the medium inside the sample, and *λ* is the LED peak wavelength. The incident and scattered fields interfere in the image plane, giving rise to the intensity pattern:

*I*(*x*, *y*, 0) = |*E*_{i} + *E*_{s}|^{2} = |*E*_{i}|^{2} + 2 Re[*E*_{i}^{∗} *E*_{s}] + |*E*_{s}|^{2}   (1)

The effect of the limited coherence length on *I*(*x*, *y*, 0) is a progressive reduction of fringe visibility as the distance from the illumination axis increases. This results in an apparent reduction of scattering at large angles [29]. Since |*E*_{i}|^{2} is the flat intensity pattern that one would obtain with an empty field of view, it can be easily subtracted from Eq. (1). As the starting point for back-propagation we then take the field:

*E*(*x*, *y*, 0) = *I*(*x*, *y*, 0) − |*E*_{i}|^{2}   (2)

The field on a generic plane, *E*(*x*, *y*, *z*), is given by the convolution between *E*(*x*, *y*, 0) and the free-space propagation kernel *G*(*x*, *y*, *z*) (calculated as in [30]):

*E*(*x*, *y*, *z*) = *E*(*x*, *y*, 0) ∗ *G*(*x*, *y*, *z*)   (3)

where *k* = 2*πn/λ* is the wavenumber appearing in the kernel. Using the convolution theorem we can efficiently evaluate the back-propagated fields with 2D FFTs on a CUDA-capable GPU (NVIDIA Tesla C2075). The time required to back-propagate a 512 × 512 complex field is only 1.3 ms.
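As an illustrative sketch of this FFT-based evaluation (here using the angular-spectrum transfer function, a standard equivalent of convolving with the free-space kernel; the function name and grid parameters are our own, not from the paper):

```python
import numpy as np

def propagate(E0, z, wavelength, n=1.33, dx=0.1):
    """Propagate the complex field E0(x, y, 0) to the plane at distance z.

    Uses the convolution theorem: multiply the 2D FFT of the field by the
    free-space (angular-spectrum) transfer function, then invert the FFT.
    A negative z back-propagates. All lengths share the same units.
    """
    k = 2 * np.pi * n / wavelength             # wavenumber in the medium
    ny, nx = E0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)  # transverse wavevector grids
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = k**2 - KX**2 - KY**2
    # keep only propagating (non-evanescent) plane-wave components
    H = np.where(kz2 > 0, np.exp(1j * np.sqrt(np.abs(kz2)) * z), 0)
    return np.fft.ifft2(np.fft.fft2(E0) * H)
```

A plane wave filling the aperture simply acquires a phase under this operation, and forward- then back-propagating by the same distance returns a band-limited field unchanged, which makes the sketch easy to sanity-check.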

Although the LED central wavelengths are chosen to match the sensitivity peaks of the camera color channels, the tails of the LED spectra will always leak into the other channels. After an initial calibration procedure, the three independent holograms can be obtained as a simple linear combination of the camera's RGB channels. The intensities of each scattered field, *I*^{R}, *I*^{G}, and *I*^{B}, are then reconstructed in a cubic volume of linear size 56 μm. Each of the three independent intensities represents a standard DHM reconstruction showing an extended axial focal region that is maximal at a point shifted downstream along the beam propagation direction. The position of this maximum represents the focal point of the microsphere acting as a ball lens, while a better estimate of the true particle position is given by the maximum of the overlapped intensities. We therefore choose to calculate the final volumetric image *V* as the overlap:

*V*(**r**) = [*I*^{R}(**r**) *I*^{G}(**r**) *I*^{B}(**r**)]^{1/3}   (4)

*V* has the dimensions of an intensity, which we can directly compare to standard DHM without any gamma correction.
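A minimal sketch of the two numerical steps above, assuming the overlap is computed as the voxel-wise geometric mean of the three intensities (which keeps *V* in units of intensity); the 3×3 crosstalk matrix values below are hypothetical placeholders for a real calibration:

```python
import numpy as np

# Hypothetical crosstalk matrix: column j holds the camera's RGB response
# to LED j switched on alone, measured in a one-time calibration.
M = np.array([[0.92, 0.10, 0.02],
              [0.06, 0.85, 0.08],
              [0.02, 0.05, 0.90]])

def unmix(rgb_frame):
    """Recover the three independent holograms from one raw RGB frame.

    rgb_frame : (ny, nx, 3) camera counts -> returns (3, ny, nx),
    one hologram per LED, via a linear combination of the RGB channels.
    """
    ny, nx, _ = rgb_frame.shape
    pixels = rgb_frame.reshape(-1, 3).T     # (3, npix) channel vectors
    holograms = np.linalg.solve(M, pixels)  # undo the spectral leakage
    return holograms.reshape(3, ny, nx)

def overlap(I_R, I_G, I_B):
    """Voxel-wise geometric mean of the three reconstructed intensities."""
    return np.cbrt(I_R * I_G * I_B)
```

The geometric mean is large only where all three reconstructions are simultaneously bright, which is how the overlap suppresses the elongated single-beam focal regions.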

Our setup is also equipped with an infrared (*λ* = 1064 nm) holographic trapping system based on an LCOS spatial light modulator (Hamamatsu X10468) (Fig. 1). The holographic traps are used to build 3D arrangements of beads with which to test our volumetric imaging technique.

## 3. Results

Figure 2(a) shows the RGB hologram of a 2 μm silica bead in water trapped at a distance of 12 μm above the objective focal plane. The volumetric reconstruction of the separate intensities *I*^{R}, *I*^{G}, and *I*^{B} is shown in Fig. 2(b). Each color channel, which can be thought of as a standard DHM reconstruction, refocuses in an elongated blob directed along the incident beam axis. Figure 2(c) plots the intensity *I*^{G} of the green channel along the incident beam axis (red solid line) together with a transverse section at the beam waist (red dashed line). Due to the high tilt angle of the beams, their overlap *V* (represented in white in Fig. 2(b)) is only mildly elongated along the axial direction. The axial and transverse profiles of *V* are plotted in Fig. 2(d), respectively as a solid and a dashed black line. The ratio between the standard deviations of the axial and transverse profiles is 1.6. If we assume that, in the neighborhood of the overlap region, each beam has a Gaussian transverse profile, then the overlap *V* will be a 3D Gaussian with a ratio between the axial and transverse standard deviations *w*_{z}/*w*_{x} = (1/sin^{2}*θ* − 1/2)^{1/2}. In our experiment we directly measure a tilt angle of the beams in water of *θ* = 33°, leading to *w*_{z}/*w*_{x} = 1.7, close to the measured value.
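The quoted ratio follows directly from the measured tilt angle; a quick numerical check (not part of the original analysis):

```python
import math

theta = math.radians(33)  # measured beam tilt angle in water
# axial-to-transverse width ratio of the three-beam Gaussian overlap
ratio = math.sqrt(1 / math.sin(theta)**2 - 0.5)
print(f"{ratio:.2f}")  # ~1.7, close to the measured value of 1.6
```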

Another limitation of standard DHM is the "shadowing" effect produced by particles that are stacked along the optical axis. Using three-axis illumination avoids this problem and results in clean volume sectioning even in the presence of multiple stacked particles. As a demonstration, we holographically trapped [31] four beads along the *z* axis. The RGB hologram of these four trapped beads is shown in Fig. 3(a). As the distance of a bead from the image plane increases, the corresponding set of fringes is laterally shifted, reducing its overlap with the other colors. The 3D volumetric reconstruction of the hologram, calculated with Eq. (4), is shown in Fig. 3(b) together with the profile of *V*(0, 0, *z*) (Fig. 3(c)). A standard DHM hologram of the same beads of Figs. 3(a)–3(c) was also obtained using a single LED (*λ* = 530±16 nm) to vertically illuminate the sample (Fig. 3(d)). The volumetric reconstruction of this hologram, shown in Fig. 3(e), has been obtained by taking the intensity of the back-propagated field. The plot in Fig. 3(f), where the intensity is plotted along the vertical axis, clearly shows that the reconstruction fails to resolve the particles (see also [23]).

For objects with more complex shapes we expect our method to still provide accurate morphological information whenever the Born approximation for scattering holds. This is the case for weak scatterers and for thin or slender bodies. To this end, we tested our technique on a much more challenging 3D object. *Escherichia coli* is a rod-shaped bacterium measuring 0.5 μm in thickness and a few microns in length. We acquired RGB holograms of a freely diffusing bacterium at a frame rate of 20 Hz. Figure 4 shows a few snapshots from the video, containing the three orthogonal projections of the volumetric reconstruction of our bacterium along with the raw RGB holograms. The position, orientation, and shape of the bacterium's prolate body are clearly visible.

For a more quantitative analysis we first apply a threshold to convert the image into a binary volumetric image. The center of mass of the cell body is then obtained from the first-order moments. The central second-order moments can be arranged in a 3 × 3 covariance matrix whose eigenvectors represent the three principal axes of the cell body. The cell body length is extracted as the total extent of the binary image along the eigenvector corresponding to the largest eigenvalue. Cell thickness is estimated as the diameter of the equivalent circular cross-section. In Figure 5 we report histograms of cell length and thickness for a total of about 600 frames. In particular, cell lengths have an average of 2.1 μm with a relative standard deviation of 5%, while cell thicknesses are distributed around a mean value of 0.59 μm, also with a relative standard deviation of 5%. The relatively narrow distributions demonstrate how well cell morphology is preserved across frames corresponding to different cell positions and orientations in the imaging volume.
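The moment-based shape analysis described above can be sketched as follows (the function name and the equivalent-cross-section estimate are our own illustration; units are voxels):

```python
import numpy as np

def cell_shape(binary):
    """Length and thickness of a rod-shaped cell from a binary 3D image.

    binary : boolean array (nx, ny, nz), True inside the cell body.
    Returns (length, thickness) in voxel units, assuming isotropic voxels.
    """
    pts = np.argwhere(binary).astype(float)  # voxel coordinates of the body
    com = pts.mean(axis=0)                   # first-order moments: center of mass
    cov = np.cov((pts - com).T)              # central second-order moments (3x3)
    evals, evecs = np.linalg.eigh(cov)       # principal axes of the cell body
    axis = evecs[:, np.argmax(evals)]        # direction of the largest eigenvalue
    proj = (pts - com) @ axis
    length = proj.max() - proj.min()         # total extent along the major axis
    # thickness: diameter of the circle with the same mean cross-sectional area
    area = len(pts) / length                 # mean voxel count per unit length
    thickness = 2.0 * np.sqrt(area / np.pi)
    return length, thickness
```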
An alternative approach to accurate morphological characterization is quantitative phase imaging [32, 33], which provides precise topographic maps of phase objects; however, it requires objects lying on a flat substrate and is therefore not suited for tracking over a large depth of field.

In our experiment, the acquisition frame rate is limited by the camera's exposure time (50 ms), which must be long to compensate for our weak illumination intensity. Using brighter LEDs (now commercially available) would allow us to reduce the exposure time, leaving the camera's data transmission bandwidth as the only fundamental limit on the frame rate.

## 4. Conclusions

We have shown that DHM can produce accurate and detailed volumetric reconstructions by combining information from the on-axis holograms produced by red, green, and blue LED beams traveling along three independent directions. The reconstructed 3D images have comparable resolution along the transverse and axial directions, low coherent noise, and high sectioning capability. Our technique is particularly suited to studying the full 3D motion of highly motile cells, providing a unique combination of high resolution and high frame rate.

## Acknowledgments

The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007–2013)/ERC grant agreement no. 307940. We also acknowledge funding from MIUR-FIRB Project No. RBFR08WDBE and NVIDIA for support with GPU hardware.

## References and links

**1.** J. B. Pawley, *Handbook of Biological Confocal Microscopy*, 3rd ed. (Springer, 2006).

**2.** W. R. Zipfel, R. M. Williams, and W. W. Webb, "Nonlinear magic: multiphoton microscopy in the biosciences," Nat. Biotechnol. **21**, 1369–1377 (2003).

**3.** P. Santi, "Light sheet fluorescence microscopy: a review," J. Histochem. Cytochem. **59**, 129–138 (2011).

**4.** K. B. Im, S. Han, H. Park, D. Kim, and B. M. Kim, "Simple high-speed confocal line-scanning microscope," Opt. Express **13**, 5151–5156 (2005).

**5.** S. J. Lee and S. Kim, "Advanced particle-based velocimetry techniques for microscale flows," Microfluid. Nanofluid. **6**, 577–588 (2009).

**6.** J. S. Dam, I. R. Perch-Nielsen, D. Palima, and J. Glückstad, "Three-dimensional imaging in three-dimensional optical multi-beam micromanipulation," Opt. Express **16**, 7244–7250 (2008).

**7.** R. Bowman, G. Gibson, and M. Padgett, "Particle tracking stereomicroscopy in optical tweezers: control of trap shape," Opt. Express **18**, 11785–11790 (2010).

**8.** M. P. Lee, G. M. Gibson, R. Bowman, S. Bernet, M. Ritsch-Marte, D. B. Phillips, and M. J. Padgett, "A multi-modal stereo microscope based on a spatial light modulator," Opt. Express **21**, 16541–16551 (2013).

**9.** P. Memmolo, A. Finizio, M. Paturzo, L. Miccio, and P. Ferraro, "Twin-beams digital holography for 3D tracking and quantitative phase-contrast microscopy in microfluidics," Opt. Express **19**, 25833–25842 (2011).

**10.** F. Charrière, A. Marian, F. Montfort, J. Kuehn, T. Colomb, E. Cuche, P. Marquet, and C. Depeursinge, "Cell refractive index tomography by digital holographic microscopy," Opt. Lett. **31**, 178–180 (2006).

**11.** F. Charrière, N. Pavillon, T. Colomb, C. Depeursinge, T. J. Heger, E. A. D. Mitchell, P. Marquet, and B. Rappaz, "Living specimen tomography by digital holographic microscopy: morphometry of testate amoeba," Opt. Express **14**, 7005–7013 (2006).

**12.** W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, "Tomographic phase microscopy," Nat. Methods **4**, 717–719 (2007).

**13.** Y. Cotte, F. Toy, P. Jourdain, N. Pavillon, D. Boss, P. Magistretti, P. Marquet, and C. Depeursinge, "Marker-free phase nanoscopy," Nat. Photon. **7**, 113–117 (2013).

**14.** K. Kim, K. S. Kim, H. Park, J. C. Ye, and Y. Park, "Real-time visualization of 3-D dynamic microscopic objects using optical diffraction tomography," Opt. Express **21**, 32269–32278 (2013).

**15.** T. Kim, R. Zhou, M. Mir, S. D. Babacan, P. S. Carney, L. L. Goddard, and G. Popescu, "White-light diffraction tomography of unlabeled live cells," Nat. Photon. **8**, 256–263 (2014).

**16.** M. K. Kim, *Digital Holography and Microscopy: Principles, Techniques, and Applications* (Springer, 2011).

**17.** W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, "Digital in-line holography of microspheres," Appl. Opt. **41**, 5367–5375 (2002).

**18.** J. Sheng, E. Malkiel, and J. Katz, "Digital holographic microscope for measuring three-dimensional particle distributions and motions," Appl. Opt. **45**, 3893–3901 (2006).

**19.** Y. Park, G. Popescu, K. Badizadegan, R. R. Dasari, and M. S. Feld, "Fresnel particle tracing in three dimensions using diffraction phase microscopy," Opt. Lett. **32**, 811–813 (2007).

**20.** L. Cavallini, G. Bolognesi, and R. Di Leonardo, "Real-time digital holographic microscopy of multiple and arbitrarily oriented planes," Opt. Lett. **36**, 3491–3493 (2011).

**21.** J. Sheng, E. Malkiel, J. Katz, J. Adolf, R. Belas, and A. R. Place, "Digital holographic microscopy reveals prey-induced changes in swimming behavior of predatory dinoflagellates," Proc. Natl. Acad. Sci. U. S. A. **104**, 17512–17517 (2007).

**22.** J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, "Digital in-line holographic microscopy," Appl. Opt. **45**, 836–850 (2006).

**23.** S. Lee and D. G. Grier, "Holographic microscopy of holographically trapped three-dimensional structures," Opt. Express **15**, 1505–1512 (2007).

**24.** S. Lee, Y. Roichman, G. Yi, S. Kim, S. Yang, A. van Blaaderen, P. van Oostrum, and D. G. Grier, "Characterizing and tracking single colloidal particles with video holographic microscopy," Opt. Express **15**, 18275–18282 (2007).

**25.** F. Saglimbeni, S. Bianchi, G. Bolognesi, G. Paradossi, and R. Di Leonardo, "Optical characterization of an individual polymer-shelled microbubble structure via digital holography," Soft Matter **8**, 8822–8825 (2012).

**26.** L. Repetto, E. Piano, and C. Pontiggia, "Lensless digital holographic microscope with light-emitting diode illumination," Opt. Lett. **29**, 1132–1134 (2004).

**27.** F. Dubois, L. Joannes, and J. Legros, "Improved three-dimensional imaging with a digital holography microscope with a source of partial spatial coherence," Appl. Opt. **38**, 7085–7094 (1999).

**28.** D. Tseng, O. Mudanyali, C. Oztoprak, S. O. Isikman, I. Sencan, O. Yagliderea, and A. Ozcan, "Lensfree microscopy on a cellphone," Lab Chip **10**, 1787–1792 (2010).

**29.** M. Born and E. Wolf, *Principles of Optics* (Pergamon, 1993).

**30.** F. Shen and A. Wang, "Fast-Fourier-transform based numerical integration method for the Rayleigh-Sommerfeld diffraction formula," Appl. Opt. **45**, 1102–1110 (2006).

**31.** S. Bianchi and R. Di Leonardo, "Real-time optical micro-manipulation using optimized holograms generated on the GPU," Comput. Phys. Commun. **181**, 1444–1448 (2010).

**32.** E. Cuche, P. Marquet, and C. Depeursinge, "Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms," Appl. Opt. **38**, 6994–7001 (1999).

**33.** P. Marquet, B. Rappaz, P. J. Magistretti, E. Cuche, Y. Emery, T. Colomb, and C. Depeursinge, "Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy," Opt. Lett. **30**, 468–470 (2005).