## Abstract

Digital in-line holographic microscopy (DIHM) has attracted attention because of its simple but powerful three-dimensional (3D) imaging capability. To improve the spatial resolution, 3D image reconstruction algorithms use numerical magnification, which generates distortions in the reconstructed images. We propose a method to overcome this problem by using the simple relation between the object and image positions in 3D space. Several holograms were taken while translating a resolution target to different axial positions with a motorized stage. We demonstrated the effectiveness of our method by reconstructing the 3D positions of 3-μm-diameter polymer beads on a tilted slide glass from a single measured hologram.

© 2017 Optical Society of America

## 1. Introduction

In digital holography, the phase as well as the amplitude information of a diffracted object wave from a sample is recorded as an interference pattern with a digital image sensor, such as a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) detector. Because the interference pattern includes mixed information of both the object and the reference waves, a pre-designed simple reference wave such as a collimated plane wave or a spherical wave is normally used to extract only the object wave information from a measured interference pattern. Once the phase and the amplitude of the object wave are known, the object wave can be numerically traced backward to form a series of two-dimensional (2D) images with different depths of focus by using a diffraction formula [1–5]. It is known that the intensity contrast of a calculated image for a scattering object becomes a maximum near its focused axial position, whereas that of a phase object is a minimum around its focused axial position [6–8]. The 3D structure of a sample can be obtained from a shape-from-focus algorithm, where the axial location of each pixel in a 3D volume is calculated with a certain criterion of focus measure [9, 10].

Since the first pioneering idea of holography was introduced by Gabor [11], many different holographic imaging systems have been proposed and demonstrated. Depending on the configuration of the experimental setup, optical holographic systems are categorized as either off-axis or in-line holography. In off-axis holography, the propagation directions of the reference wave and the object wave have an offset angle [12–14]. The major advantage of off-axis holography is that the DC terms and the conjugate mirror image term in a hologram can be completely removed by simple frequency-domain filtering without an iterative algorithm [15, 16] or multiple phase-shifted measurements [14, 17]. Because the frequency components of the conjugate mirror image and the DC term at the center of a hologram must be excluded in the frequency domain, less than one-quarter of the measured information is used for the real image reconstruction. This results in poor utilization of the available pixel count of an image sensor.

The original setup proposed by Gabor was an in-line holographic system, where a collimated light beam passing through a transparent part of a semitransparent sample works as a reference wave, and the light scattered by the sample becomes an object wave. Even though in-line holography has a major problem related to the coexistence of a twin image and a DC term, digital in-line holographic microscopy (DIHM), or on-chip microscopy, has attracted much attention. Disadvantages of these holography-based imaging systems include low spatial resolution, heavy computational load, and the coexistence of twin images. The important advantages of in-line holography are its simple structure, compact size, imaging capability without a lens or a beam splitter, effective usage of the pixel count of an imaging sensor with little redundancy, and robustness against external vibrations. Because of these technical and practical advantages, it is considered one of the most promising tools for applications such as cell imaging [18, 19], field-portable microscopy [20, 21], telemedicine [22, 23], and optofluidic applications [24, 25]. These lensfree on-chip microscopy systems often use raw DIHM images without numerical 3D reconstruction processes [26, 27]. For most of these applications, it is highly important to obtain accurate 3D position information of a sample.

A point source is used as a reference wave to obtain a hologram in DIHM, and there exists a certain spherical field curvature in the reference wave on the detector plane. If a plane wave is used as the reference wave during the image reconstruction process, this spherical field curvature produced by the point source works as a magnifying lens [28, 29]. This generates a magnified image located farther away than the original object position. When the lateral magnification is *M*, the axial magnification becomes *M*^{2} because of the lens equation [30, 31]. Because of this discrepancy between the magnifications along the transverse and the axial directions, the 3D shape of a sample reconstructed by a numerical back-propagation algorithm is always distorted unless the transverse magnification is one (*M* = 1). By adjusting the field curvature of the reference wave in the reconstruction process, we can obtain a variable magnification. In conventional film-based analog holography, a focused light whose field curvature is exactly the same as that of the reference wave is used to reproduce a realistic undistorted image of unit magnification [31]. In digital holography, we can adjust the magnification of a measured interferogram with the numerical parametric lens method by multiplying the measured interferogram by a quadratic phase term before performing numerical back propagation [28, 29, 32, 33]. If we know the field curvature of the reference wave on the detector plane, we can satisfy the *M* = 1 condition. In this case, however, we cannot take advantage of the resolution enhancement associated with the transverse magnification, which is one of the most important positive aspects of DIHM.

Another way to obtain unit magnification is to use collimated light for a reference wave. When a sample is almost in contact with an image sensor, the separation between the sample and the image sensor is negligible compared to the distance from a point source to the image sensor. This kind of DIHM is known as contact-mode lensfree imaging. Because the radius of curvature of the object wave is much shorter than that of the reference wave on the detector plane, the reference wave on the detector plane can be considered collimated in this case. Most compact DIHM systems designed for portable microscopy, flow cytometry, and microfluidic applications use this configuration [20, 22, 23, 34]. The transverse spatial resolution of contact-mode lensfree imaging becomes twice the pixel pitch of a detector array, typically around 3 µm. Here, the transverse resolution of a system is defined as the minimum lateral distance at which two neighboring points can be distinguished. Various super-resolution techniques have been proposed to enhance the resolution of contact-mode lensfree imaging [20, 21]. Because the conjugate or mirror image of an object is located very close to the real image, erroneous intrusion by this closely located mirror image is a major problem in contact-mode lensfree imaging. Many clever algorithms and experimental demonstrations have been reported to overcome this twin image problem in contact-mode lensfree systems [15, 35]. In order to obtain the 3D information of a sample in contact-mode lensfree imaging, a complicated setup with multiple point sources has recently been proposed [36].

In this paper, we propose a simple scheme to obtain an undistorted 3D image of an object in DIHM with a transverse magnification larger than one (*M* > 1). Our method consists of three steps. First, we calculate the 3D image position of an object by using a conventional numerical reconstruction algorithm. This generates a series of magnified 2D images with different transverse magnifications. Second, the transverse magnification *M* of each 2D image is calculated on the basis of the geometry of the measurement setup. Third, we reduce each 2D image to the real object scale by using the magnification information *M*. The axial position of each 2D image is also recalculated to its object scale by using a simple lens equation. An undistorted 3D image of an object is obtained from the series of recalculated 2D images. We have simulated the distortions of a calculated 3D image along the axial direction depending on the axial position of an object and the geometry of the experimental setup. The validity of our proposed method was demonstrated experimentally.

## 2. 3D image reconstruction procedure

In DIHM, the 3D structure of a sample can be reconstructed from a single measured hologram. Figure 1 shows a standard DIHM configuration composed of a point source and an imaging sensor array [37, 38]. A point source is located at *z* = −*z _{s}*, while a half-transparent sample and a detector array are located at *z* = −*z _{o}* and *z* = 0, respectively.

#### 2.1 Numerical reconstruction algorithm

Many different algorithms have been proposed and demonstrated for numerical beam propagation of an object wave and image reconstruction in digital holography. The Fresnel transform method (FTM) [30, 39–41] and the angular spectrum method (ASM) [42–44] are the two most popular algorithms. FTM is based on the Fresnel diffraction formula [30, 31]. When the paraxial approximation is used, the Fresnel diffraction formula, which is a complex 2D convolution integral, takes the form of a simple Fourier transform. This specific Fourier transform is called the Fresnel transform [45]. FTM is fast because only a single 2D numerical Fourier transform is needed to calculate the output electric field from an input electric field after a certain propagation distance. However, it is applicable only when the propagation distance is large enough to satisfy the Fresnel approximation. The field of view, or the physical size of a produced 2D image, increases as the propagation depth increases in FTM, which in turn decreases the resolution of the image. This property enables us to generate an image whose size is larger than that of the image sensor. The problem of decreased image resolution in FTM can be overcome by using the zero-padding method [33, 39] at the expense of increased computational load.
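The single-FFT structure of FTM and its depth-dependent output sampling can be sketched in a few lines of NumPy. The function below is a minimal illustration with our own naming and normalization, not the implementation used in this work; it assumes a square input grid and a positive propagation distance.

```python
import numpy as np

def fresnel_transform(field, wavelength, z, pitch):
    """Single-FFT Fresnel (FTM) propagation of a sampled field over z > 0.

    Returns the propagated field and the output-plane pixel pitch,
    which grows linearly with z (the depth-dependent field of view
    described in the text)."""
    n = field.shape[0]                      # assume a square n x n grid
    k = 2.0 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * pitch
    xx, yy = np.meshgrid(x, x)
    # Quadratic phase applied in the input plane before the single FFT
    pre_chirp = np.exp(1j * k * (xx**2 + yy**2) / (2.0 * z))
    spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * pre_chirp)))
    # Output sampling: pitch' = wavelength * z / (n * pitch)
    out_pitch = wavelength * z / (n * pitch)
    xo = (np.arange(n) - n // 2) * out_pitch
    xxo, yyo = np.meshgrid(xo, xo)
    post_chirp = np.exp(1j * k * z) / (1j * wavelength * z) \
               * np.exp(1j * k * (xxo**2 + yyo**2) / (2.0 * z))
    return post_chirp * spectrum * pitch**2, out_pitch
```

Doubling the propagation distance doubles the output pixel pitch, which is exactly the resolution loss that the zero-padding method compensates for.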

ASM is based on the linear decomposition of an interference pattern into many plane waves, or 2D spatial frequencies. A measured interference pattern is decomposed into its spatial frequency components by using a 2D Fourier transform. The output of each spectral component after propagating a certain distance is calculated simply by multiplying it by the transfer function of that spatial frequency component for the given propagation distance. The total output electric field is calculated by taking the inverse Fourier transform of the output spectral components. ASM is slower than FTM because it requires two numerical Fourier transforms. The major advantage of ASM is that it can be applied for a short propagation distance where the Fresnel approximation may not be valid. Unlike the case of FTM, the field of view of a produced 2D image is constant and independent of the propagation distance in ASM, which makes the pixel pitch of a calculated image invariant with respect to the longitudinal propagation distance. This restricts the physical size of an object to be smaller than that of the image sensor. This serious handicap of ASM can be overcome by deliberately padding null data around a measured interference pattern, which behaves as if the physical size of the image sensor were larger [17, 39].

We used ASM for numerical 3D image reconstruction in this study. In ASM, we multiplied the kernel *G*, defined in Eq. (1), by the spectral components of the input spectrum to obtain the output spectrum after propagating a certain distance *z* along the *z*-axis:

$$G(z,{f}_{x},{f}_{y})=\exp \left[iz\sqrt{{k}^{2}-4{\pi }^{2}\left({f}_{x}^{2}+{f}_{y}^{2}\right)}\right]. \qquad (1)$$

Here, *k* is the wave number for a given center wavelength *λ* of a source, and the spatial frequencies along the *x* and *y* axes are *f _{x}* and *f _{y}*, respectively. The reconstructed image *R*(*z*,*x*,*y*) is computed by the inverse Fourier transform of the output spectrum,

$$R(z,x,y)={\mathcal{F}}^{-1}\left\{\mathcal{F}\left\{H(0,x,y)\right\}G(z,{f}_{x},{f}_{y})\right\}, \qquad (2)$$

where *H*(0, *x*, *y*) is the hologram measured by the detector array. Note that because the hologram image is discrete data, we should use a discretized kernel *G* whose number of data points is the same as that of the detector array.
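The ASM propagation step described by Eqs. (1) and (2) maps directly onto a short NumPy routine. The sketch below is a hypothetical implementation (the function name and the suppression of evanescent components are our choices, not the authors' code):

```python
import numpy as np

def asm_propagate(hologram, wavelength, z, pitch):
    """Angular spectrum propagation of a sampled complex field over a
    distance z.  The field of view and the pixel pitch of the output are
    identical to those of the input, as noted for the ASM in the text."""
    ny, nx = hologram.shape
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(nx, d=pitch)          # spatial frequency along x
    fy = np.fft.fftfreq(ny, d=pitch)          # spatial frequency along y
    fxx, fyy = np.meshgrid(fx, fy)
    arg = k**2 - 4.0 * np.pi**2 * (fxx**2 + fyy**2)
    kz = np.sqrt(np.maximum(arg, 0.0))
    # Discretized kernel G of Eq. (1); evanescent components (arg < 0)
    # are simply suppressed in this sketch.
    G = np.exp(1j * z * kz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(hologram) * G)   # R(z, x, y) of Eq. (2)
```

A negative *z* performs the numerical backward tracing of the object wave; propagating forward and then backward by the same distance recovers the input field when no evanescent components are present.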

The axial position of each pixel on a measured hologram is calculated using a shape-from-focus algorithm. The reconstructed image of a scattering point has its maximum contrast near its focused axial position, and we can calculate the focused axial position of each pixel with this property. In this study, we used the Sobel operator to calculate the variance of the Tenenbaum gradient intensity and used it as the focus measure to find the axial positions of an object [6, 10]. We tested our Sobel operator-based shape-from-focus algorithm by using a scattering target (USAF 1951). Figure 2(a) shows a hologram of the USAF 1951 target measured by the standard DIHM setup shown in Fig. 1. The separation between the point source and the detector array was 15 cm, and the distance from the resolution target to the detector was 2 cm. A series of 2D images with different axial positions were calculated with the ASM explained in Eq. (2). Figure 2(b) is the best-focused image with the maximum focus measure value calculated by the Sobel operator. The magnified view of the small central area in Fig. 2(b) clearly shows the smallest structures and characters of the USAF 1951 target. No effort was made to eliminate the twin image of the sample during the numerical reconstruction procedure in this case. Blurring on the edges of rectangular blocks and the non-uniform background in Fig. 2(b) are typical artifacts of closely located twin images in DIHM.
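A minimal sketch of the Sobel-based Tenenbaum (Tenengrad) focus measure follows, using only NumPy; the 3×3 kernels are the standard Sobel pair, while the helper names are our own and this is not the authors' implementation:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def _conv3x3(img, kernel):
    """Valid-mode 3x3 convolution implemented with array slicing."""
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * img[i:i + out.shape[0], j:j + out.shape[1]]
    return out

def tenengrad_variance(img):
    """Variance of the Tenenbaum (Tenengrad) gradient: squared Sobel
    gradient magnitude, then its variance over the image.  Larger values
    indicate a sharper (better focused) image."""
    gx = _conv3x3(img, SOBEL_X)
    gy = _conv3x3(img, SOBEL_Y)
    g = gx**2 + gy**2
    return g.var()
```

Evaluating this measure on each reconstructed slice and taking the maximum along *z* yields the best-focused axial position.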

#### 2.2 Errors due to nonlinear axial scale in a reconstructed 3D image

Here we consider the numerical 3D reconstruction of a measured object from a single hologram measured by the standard DIHM setup shown in Fig. 1. If we have a half-transparent sample, for example the USAF 1951 target shown in Fig. 2, then light passing through the sample without scattering works as a reference wave. The interference pattern between the light scattered by the sample and the reference light is measured by the detector array. The radius of curvature of the reference wave on the detector plane is *z _{s}*, centered at the origin, and the radius of curvature of the scattered light on the detector plane is *z _{o}*, centered at the transverse position of each scattering point. The details of the mathematical expressions for a measured hologram and its measurement setup are described in [14, 26]. Figure 3 shows the geometrical interpretation of the relation among the transverse magnification *M*, the axial positions of an object, a point source, an image, and the detection plane.

When the reference light is a diverging spherical wave propagating from a point (−*z _{s}*, 0, 0) toward the positive *z* direction, the electric field on the transverse plane at *z* = 0 can be written in the paraxial approximation as

$${E}_{s}(0,x,y)={A}_{s}\exp \left[\frac{ik\left({x}^{2}+{y}^{2}\right)}{2{z}_{s}}\right]. \qquad (3)$$

Similarly, for the object light scattered from a point (−*z _{o}*, *x _{o}*, *y _{o}*) and propagating toward the positive *z* direction, the electric field on the detector plane at *z* = 0 can be written with the paraxial approximation as

$${E}_{o}(0,x,y)={A}_{o}\exp \left[\frac{ik\left[{(x-{x}_{o})}^{2}+{(y-{y}_{o})}^{2}\right]}{2{z}_{o}}\right]. \qquad (4)$$

The cross term ${E}_{s}^{*}{E}_{o}$ of a measured hologram can then be rearranged as

$${E}_{s}^{*}{E}_{o}={A}_{s}^{*}{A}_{o}\exp \left(i{\varphi }_{0}\right)\exp \left[\frac{ik\left[{(x-{x}_{i})}^{2}+{(y-{y}_{i})}^{2}\right]}{2{z}_{i}}\right], \qquad (5)$$

where *z _{i}*, *x _{i}*, *y _{i}*, and *M* are defined as

$$\frac{1}{{z}_{i}}=\frac{1}{{z}_{o}}-\frac{1}{{z}_{s}},\quad \text{i.e.,}\quad {z}_{i}=\frac{{z}_{s}{z}_{o}}{{z}_{s}-{z}_{o}}, \qquad (6)$$

$$M=\frac{{z}_{i}}{{z}_{o}}=\frac{{z}_{s}}{{z}_{s}-{z}_{o}},\quad {x}_{i}=M{x}_{o},\quad {y}_{i}=M{y}_{o}. \qquad (7)$$

The first exponential phase term in Eq. (5) is not a function of (*x*, *y*). The second exponential term in Eq. (5) is nothing but the expression for a spherical wave diverging from a point (−*z _{i}*, *x _{i}*, *y _{i}*). Note that Eq. (6) is the same as the lens equation used to calculate the image distance when the focal length of a lens and the object distance are given: *z _{i}*, *z _{o}*, and *z _{s}* correspond to the image distance, the object distance, and the focal length of a lens, respectively. *M* as defined in Eq. (7) is the transverse magnification. Similarly, it can be shown that ${E}_{s}{E}_{o}^{*}$ is a spherical wave converging to a point (*z _{i}*, *x _{i}*, *y _{i}*). ${E}_{s}{E}_{o}^{*}$ is the mirror image of ${E}_{s}^{*}{E}_{o}$ placed on the opposite side of the detector plane at *z* = 0.
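The lens-equation relation of Eq. (6) and the magnification of Eq. (7) reduce to two one-line helpers; the sketch below uses our own naming and simply makes the analogy explicit:

```python
def image_distance(z_o, z_s):
    """Eq. (6): image distance z_i for object distance z_o and
    source-detector distance z_s (playing the role of a focal length)."""
    return z_s * z_o / (z_s - z_o)

def magnification(z_o, z_s):
    """Eq. (7): transverse magnification M = z_i / z_o."""
    return z_s / (z_s - z_o)
```

For example, with *z _{s}* = 20 mm and *z _{o}* = 5 mm, the image forms at *z _{i}* = 20/3 mm with *M* = 4/3, and *M* approaches 1 as *z _{o}* goes to zero.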

From the preceding arguments, we can conclude that a hologram taken in the *z* = 0 plane includes four terms: two DC terms, a term for the real image, and another for the conjugate image. When an object is located on the *z* = −*z _{o}* plane, the real image term can be traced backward to form a focused image on the *z* = −*z _{i}* plane, and the conjugate term can be traced forward to form another focused image on the *z* = *z _{i}* plane. Both images have the same transverse magnification *M* = *z _{i}*/*z _{o}*. In order to make the system compact, the distance from a point light source to the detector array (*z _{s}*) is set to less than 20 mm in most DIHM systems. The distance from an object to the detector array (*z _{o}*) ranges from 0 to *z _{s}*. A contact-mode DIHM is used to enable a larger field of view, where *z _{o}* is set as small as possible: *z _{o}* ≈ 0. If a higher spatial resolution is needed, *z _{o}* is set as large as possible: *z _{o}* ≈ *z _{s}*. When *z _{o}* ≈ 0, we have unit magnification, *M* = 1, and *M* becomes larger than 1 as *z _{o}* increases. If *M* deviates from 1, the calculated image position *z _{i}* obtained by using a numerical image reconstruction algorithm in DIHM deviates from the actual object position *z _{o}* even if the numerical reconstruction algorithm is perfect. Because *z _{i}* and *z _{o}* are related by the nonlinear expression of Eq. (6), we can estimate distortions in a generated 3D image by defining a relative percentage error of *z _{i}* compared to the actual object position *z _{o}*:

$$Error=\frac{{z}_{i}-{z}_{o}}{{z}_{o}}\times 100=\left(M-1\right)\times 100\,[\%]. \qquad (8)$$

Figure 4(a) shows the relative percentage error defined in Eq. (8) for various values of *z _{i}* and *z _{o}*, when *z _{o}* is varied from 1 to 10 mm while *z _{s}* is varied from 15 to 30 mm. Figure 4(b) shows the variation of the relative percentage error as a function of the object distance *z _{o}* for four specific values of *z _{s}*. Black, red, blue, and green lines show the relative errors between *z _{i}* and *z _{o}* when the distance between the light source and the detector, *z _{s}*, has fixed values of 15, 20, 25, and 30 mm, respectively. When *z _{s}* is 30 mm, the relative error increases almost linearly with *z _{o}*. As *z _{s}* becomes smaller, the relative error increases rapidly and varies nonlinearly as a function of *z _{o}*.

## 3. Experiments

In order to demonstrate the effectiveness of our proposed distortion correction scheme in DIHM, we built a simple DIHM system. Figure 5(a) shows our DIHM setup, consisting of only three components: a point light source, half-transparent sample, and a detector array. A fiber-coupled super-luminescent diode (SLD; 261-HP) was used as a point light source, with a 683-nm center wavelength and 8-nm full-width at half maximum spectral bandwidth. The output of the SLD was delivered through a single-mode fiber (Thorlabs SM600) whose core diameter and numerical aperture were 4.5 μm and 0.12, respectively. Holograms were acquired by a CMOS image sensor (DMK 24UJ003), with 3856 × 2764 pixels and a 1.67 μm pixel pitch. Measured optical intensities were digitized and stored as 8-bit binary data.

In our first experiment, a USAF 1951 resolution target was used as a half-transparent scattering object and was placed between the point light source and the detector array. The aim was to experimentally demonstrate our proposed scheme for obtaining corrected axial positions and undistorted transverse scales in DIHM. The USAF 1951 resolution target was placed parallel with the image sensor array. The distance between the point light source and the detector array (*z _{s}*) was 20 mm. The distance from the USAF 1951 resolution target to the detector array (*z _{o}*) was changed from 2 to 7 mm with a 100-µm step size by using a linear translation stage. A hologram was taken at each object position. For a given object distance *z _{o}*, a series of 2D images with different axial positions were calculated from a measured hologram by using the ASM explained in Section 2.1. The best-focused 2D image was selected among these 2D images on the basis of the focus measure calculated with the Sobel operator. We took the axial position of this best-focused image as the image distance *z _{i}* corresponding to the given object distance *z _{o}*. The black dotted line in Fig. 5(b) shows the axial positions of the focused reconstructed images (*z _{i}*) obtained by the ASM and the Sobel operator-based Tenenbaum gradient. The green dotted line indicates the actual scanned object positions from 2 to 7 mm. When the object position is small, near 2 mm, the offset between the reconstructed image position and the object position is small, whereas the offset becomes large when the object position is large, near 7 mm. These offsets must be corrected to obtain the true 3D information of an object. Each calculated image position can be mapped to the real object position by using the simple relation given in Eq. (6). The distance *z _{s}* between the point light source and the detector array was set to 20 mm. The red dotted line is the corrected image position calculated with Eq. (6). It clearly shows that the offset between the corrected object positions and the actual axial positions becomes zero. For each given axial position *z _{i}*, the transverse scale of the calculated 2D image should be corrected by the transverse magnification *M* defined in Eq. (7).

In order to experimentally demonstrate the distortions in a reconstructed 3D image from a measured hologram, we performed another experiment with a tilted planar object of a known tilt angle. From a single measured hologram of the tilted 2D planar object, we reconstructed its 3D shape. The distortions in the reconstructed 3D shape of the tilted 2D sample were corrected with our proposed scheme, and the corrected 3D positions of the beads and the tilt angle of the slide glass plate were compared with the real object positions. Polystyrene beads with a 3-μm diameter (Polysciences Inc.) were placed on a microscope slide glass and covered with a cover glass. As shown in Fig. 6(a), the slide glass with microbeads was tilted by 20° from the horizontal and was placed between the fiber-coupled SLD light source and the CMOS image sensor. We used a CMOS sensor (DMK 24UJ003) with 3856 × 2764 pixels covering an imaging area of 6.4 × 4.6 mm^{2}. A measured hologram of the microbeads on the tilted slide glass is shown in Fig. 6(b). A 2D image at any axial position *z _{i}* can be generated from the measured hologram by using the ASM algorithm. Figure 6(c) shows the central part of a reconstructed 2D image calculated from the hologram shown in Fig. 6(b) on a specific axial image plane at *z* = 6.2 mm. An enlarged square image at the upper center shows two focused beads, and another enlarged square image shows three off-focused beads. A thick red blurred vertical line is the focused region in the 2D reconstructed image at *z* = 6.2 mm. The corresponding focused area is highlighted with a blue spot located at the intersection of the thick horizontal gray line and the white line in Fig. 6(a). The thick gray line in Fig. 6(a) represents the calculated 2D image at *z* = 6.2 mm, and the tilted thick white line is the reconstructed 3D profile of the microspheres on the slide glass plate.

We can obtain the depth information of an object by using a series of 2D images numerically generated from a single measured hologram. We made 501 equally spaced images along the *z*-axis from *z* = 5.0 mm to *z* = 7.5 mm. These images form a 3D intensity function *I*(*x*,*y*,*z*), which covers a 3D volume of 1.7 × 1.7 × 2.0 mm^{3} and consists of 1024 × 1024 × 501 pixels. A typical example of *I*(*x*,*y*,*z*) can be seen in Fig. 6(c), which is a sliced 2D intensity function of *I*(*x*,*y*,*z*) at *z* = 6.2 mm. Because we used the ASM algorithm to generate 2D images at different axial positions, each generated image has the same physical dimensions and number of pixels as that of the initial hologram used to generate the 3D volume image.

In order to find the 3D structures of an object from the 3D volume image *I*(*x*,*y*,*z*), another 3D function *F*(*x*,*y*,*z*) is generated that represents the focus measure of *I*(*x*,*y*,*z*). For a given axial position *z*, a 2D function *F*(*x*,*y*,*z*) is calculated by applying the Sobel operator to the sliced 2D intensity image *I*(*x*,*y*,*z*). The Sobel operator calculates the amplitude of the gradient of a 2D image [6, 10]. For given transverse coordinates (*x*,*y*) within the 3D image volume, we assume there is a scattering point if there exists a local maximum of *F*(*x*,*y*,*z*) along the *z*-axis. We kept *I*(*x*,*y*,*z*) at that local maximum point and reset *I*(*x*,*y*,*z*) to zero for the rest of the *z* values. We define this newly generated 3D intensity function as *I*′(*x*,*y*,*z*). If there is no local maximum of *F*(*x*,*y*,*z*) along the *z* axis, we assume that there is no scattering point or bead at the given transverse coordinates (*x*,*y*). Figure 7(a) shows a 3D distribution of the reconstructed microbeads on the slide glass. The beads on the red plate at the top were reconstructed directly from the measured hologram: a sphere with a 3-μm diameter was drawn at each (*x*,*y*,*z*) coordinate where *I*′(*x*,*y*,*z*) has a non-zero value. However, this image has distortion along the *z* axis due to the nonlinear relation between the object and the image distances described in Eq. (6). It has distortion along the transverse direction as well because of the axial position-dependent transverse magnification in Eq. (7). We corrected these distortions by using Eq. (6) and Eq. (7). The microbeads shown at the bottom of Fig. 7(a) are the corrected reconstruction. The blue and red plates in Fig. 7(a) represent the slide glasses that hold the microbeads. Figure 7(b) is a side view of Fig. 7(a). The two red lines in Fig. 7(b) are the best-fitting lines for all the microbeads in the two glass slides.
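The local-maximum selection described above can be sketched as follows, where `I` and `F` are (nz, ny, nx) NumPy stacks of reconstructed slices and their focus-measure maps. Treating an interior argmax along *z* as the local maximum is a simplification of the criterion in the text, and the function name is our own:

```python
import numpy as np

def extract_surface(I, F):
    """Shape-from-focus sketch: for each transverse pixel (x, y), keep
    I(x, y, z) only at the z index where the focus measure F(x, y, z)
    attains an interior maximum; elsewhere set it to zero.  Returns the
    sparse volume I' and a depth map (-1 where no peak was found)."""
    nz = F.shape[0]
    z_best = np.argmax(F, axis=0)               # candidate focus plane
    # An argmax strictly inside the z range is taken as a genuine local
    # maximum, i.e. a scattering point at that depth.
    has_peak = (z_best > 0) & (z_best < nz - 1)
    I_prime = np.zeros_like(I)
    yy, xx = np.nonzero(has_peak)
    I_prime[z_best[yy, xx], yy, xx] = I[z_best[yy, xx], yy, xx]
    return I_prime, np.where(has_peak, z_best, -1)
```

Drawing a 3-μm sphere at each non-zero voxel of the returned `I_prime`, and then applying the axial and transverse corrections of Eqs. (6) and (7), reproduces the two renderings of Fig. 7(a).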
The best-fitting line for the upper, distorted beads is tilted 33.32° from the horizontal, and the best-fitting line for the lower, position-corrected beads has a tilt angle of 19.39°. The actual tilt angle of the slide glass in the experimental setup was 20°. Because *z _{s}* = 20 mm and *z _{o}* = 4.4–4.9 mm, we cannot see a clear nonlinear relation between the *x* axis and the *z* axis for the upper distribution of beads in Fig. 7(b). This result is consistent with the two graphs shown in Figs. 4(b) and 5(b).

## 4. Conclusions

Because DIHM can be built simply by using a point source, a half-transparent object, and a detector array, it has broad biomedical and industrial applications. One of the most important advantages of DIHM is that the field of view and the magnification can be adjusted freely, simply by changing the position of an object between the point source and the detector array. When the diverging spherical reference wave from the point source is canceled exactly during the numerical reconstruction process of DIHM by using a phase-conjugated reference beam, we can obtain an undistorted 3D image that has the same 3D structure as the original object. In this case, the transverse magnification becomes one, and the spatial resolution of DIHM is determined by the pixel pitch of the detector array, which is much larger than that of a conventional microscope. This can be a critical limitation of DIHM. If the spherical phase of the reference wave is not canceled exactly during the DIHM reconstruction procedure, we obtain a magnified reconstructed 3D image. The reconstructed 3D image is distorted in this case, because the transverse magnification is not constant but varies depending on the axial position of the reconstruction plane. We provide a simple relation that can compensate for the axial and transverse distortions created during the 3D image reconstruction procedure of a hologram. We verified our proposed distortion compensation method by taking a hologram and reconstructing its 3D structure. A series of holograms were taken for a plane USAF 1951 resolution target at different axial positions. The nonlinear relation between the object position and the reconstructed image position was experimentally verified and compared with the predicted formula. By using a sample consisting of hundreds of 3-μm-diameter microbeads on a tilted microscope slide glass, we verified the effectiveness of our proposed method for correcting distortions in the transverse and the axial directions. Because our proposed distortion correction scheme is quite simple and easy to implement, it can be used in many 3D image reconstruction procedures in DIHM and lensfree imaging applications.

## Acknowledgment

This work was financially supported by the MEST through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (2017R1A2B4003950), the Basic Science Research Program through the National Research Foundation of Korea (NRF-2013R1A1A2062448), the Center for Advanced Meta-Materials (CAMM) funded by the Ministry of Science, ICT and Future Planning as a Global Frontier Project (CAMM-2014M3A6B3063712), the Technology Innovation Program (10062417) funded by the Ministry of Trade, Industry and Energy (MI), and the Ministry of Education, Science and Technology of Korea through the BK21 program.

## References and links

**1. **X. Yu, J. Hong, C. Liu, and M. K. Kim, “Review of digital holographic microscopy for three-dimensional profiling and tracking,” Opt. Eng. **53**(11), 112306 (2014). [CrossRef]

**2. **J. Sheng, E. Malkiel, and J. Katz, “Digital holographic microscope for measuring three-dimensional particle distributions and motions,” Appl. Opt. **45**(16), 3893–3901 (2006). [CrossRef] [PubMed]

**3. **U. Schnars and W. P. Jüptner, “Digital recording and numerical reconstruction of holograms,” Meas. Sci. Technol. **13**(9), R85–R101 (2002). [CrossRef]

**4. **L. Ma, H. Wang, Y. Li, and H. Jin, “Numerical reconstruction of digital holograms for three-dimensional shape measurement,” J. Opt. A **6**(4), 396–400 (2004). [CrossRef]

**5. **A. Ayoub, P. Divós, S. Tóth, and S. Tõkés, “Software Algorithm to Reconstruct 2D Images from Recorded 3D In-Line DHM Holograms,” Computer and Automation Research Institute **5** (2006).

**6. **F. Dubois, C. Schockaert, N. Callens, and C. Yourassowsky, “Focus plane detection criteria in digital holography microscopy by amplitude analysis,” Opt. Express **14**(13), 5895–5908 (2006). [CrossRef] [PubMed]

**7. **P. Langehanenberg, B. Kemper, D. Dirksen, and G. von Bally, “Autofocusing in digital holographic phase contrast microscopy on pure phase objects for live cell imaging,” Appl. Opt. **47**(19), D176–D182 (2008). [CrossRef] [PubMed]

**8. **S. Lee, J. Y. Lee, W. Yang, and D. Y. Kim, “Autofocusing and edge detection schemes in cell volume measurements with quantitative phase microscopy,” Opt. Express **17**(8), 6476–6486 (2009). [CrossRef] [PubMed]

**9. **Y.-S. Bae, J.-I. Song, and D. Y. Kim, “Volumetric reconstruction of Brownian motion of a micrometer-size bead in water,” Opt. Commun. **309**, 291–297 (2013). [CrossRef]

**10. **Y. Sun, S. Duthaler, and B. J. Nelson, “Autofocusing in computer microscopy: selecting the optimal focus algorithm,” Microsc. Res. Tech. **65**(3), 139–149 (2004). [CrossRef] [PubMed]

**11. **D. Gabor, “A new microscopic principle,” Nature **161**(4098), 777–778 (1948). [CrossRef] [PubMed]

**12. **E. N. Leith and J. Upatnieks, “Reconstructed wavefronts and communication theory,” J. Opt. Soc. Am. **52**(10), 1123–1130 (1962). [CrossRef]

**13. **E. Cuche, F. Bevilacqua, and C. Depeursinge, “Digital holography for quantitative phase-contrast imaging,” Opt. Lett. **24**(5), 291–293 (1999). [CrossRef] [PubMed]

**14. **N. Verrier and M. Atlan, “Off-axis digital hologram reconstruction: some practical considerations,” Appl. Opt. **50**(34), H136–H146 (2011). [CrossRef] [PubMed]

**15. **B. K. Chen, T.-Y. Chen, S. G. Hung, S.-L. Huang, and J.-Y. Lin, “Twin image removal in digital in-line holography based on iterative inter-projections,” J. Opt. **18**(6), 065602 (2016). [CrossRef]

**16. **L. Rong, Y. Li, S. Liu, W. Xiao, F. Pan, and D. Wang, “Iterative solution to twin image problem in in-line digital holography,” Opt. Lasers Eng. **51**(5), 553–559 (2013). [CrossRef]

**17. **I. Yamaguchi, T. Matsumura, and J. Kato, “Phase-shifting color digital holography,” Opt. Lett. **27**(13), 1108–1110 (2002). [CrossRef] [PubMed]

**18. **G. Zheng, S. A. Lee, Y. Antebi, M. B. Elowitz, and C. Yang, “The ePetri dish, an on-chip cell imaging platform based on subpixel perspective sweeping microscopy (SPSM),” Proc. Natl. Acad. Sci. U.S.A. **108**(41), 16889–16894 (2011). [CrossRef] [PubMed]

**19. **V. Micó and Z. Zalevsky, “Superresolved digital in-line holographic microscopy for high-resolution lensless biological imaging,” J. Biomed. Opt. **15**(4), 046027 (2010). [CrossRef] [PubMed]

**20. **A. Greenbaum, U. Sikora, and A. Ozcan, “Field-portable wide-field microscopy of dense samples using multi-height pixel super-resolution based lensfree imaging,” Lab Chip **12**(7), 1242–1245 (2012). [CrossRef] [PubMed]

**21. **W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express **18**(11), 11181–11191 (2010). [CrossRef] [PubMed]

**22. **O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip **10**(11), 1417–1428 (2010). [CrossRef] [PubMed]

**23. **A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods **9**(9), 889–895 (2012). [CrossRef] [PubMed]

**24. **S. O. Isikman, W. Bishara, H. Zhu, and A. Ozcan, “Optofluidic tomography on a chip,” Appl. Phys. Lett. **98**(16), 161109 (2011). [CrossRef] [PubMed]

**25. **S. Seo, T.-W. Su, D. K. Tseng, A. Erlinger, and A. Ozcan, “Lensfree holographic imaging for on-chip cytometry and diagnostics,” Lab Chip **9**(6), 777–787 (2009). [CrossRef] [PubMed]

**26. **M. Roy, D. Seo, S. Oh, J.-W. Yang, and S. Seo, “A review of recent progress in lens-free imaging and sensing,” Biosens. Bioelectron. **88**, 130–143 (2017). [CrossRef] [PubMed]

**27. **M. Roy, G. Jin, D. Seo, M.-H. Nam, and S. Seo, “A simple and low-cost device performing blood cell counting based on lens-free shadow imaging technique,” Sens. Actuators B Chem. **201**, 321–328 (2014). [CrossRef]

**28. **W. Qu, O. C. Chee, Y. Yu, and A. Asundi, “Recording and reconstruction of digital Gabor hologram,” Optik **121**(23), 2179–2184 (2010). [CrossRef]

**29. **T. Colomb, F. Montfort, J. Kühn, N. Aspert, E. Cuche, A. Marian, F. Charrière, S. Bourquin, P. Marquet, and C. Depeursinge, “Numerical parametric lens for shifting, magnification, and complete aberration compensation in digital holographic microscopy,” J. Opt. Soc. Am. A **23**(12), 3177–3190 (2006). [CrossRef] [PubMed]

**30. **U. Schnars and W. Jueptner, *Digital Holography: Digital Hologram Recording, Numerical Reconstruction, and Related Techniques* (Springer Science & Business Media, 2005).

**31. **J. W. Goodman, *Introduction to Fourier Optics* (Roberts and Company Publishers, 2005).

**32. **A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics **6**(5), 283–292 (2012). [CrossRef]

**33. **F. Zhang, I. Yamaguchi, and L. P. Yaroslavsky, “Algorithm for reconstruction of digital holograms with adjustable magnification,” Opt. Lett. **29**(14), 1668–1670 (2004). [CrossRef] [PubMed]

**34. **A. Greenbaum and A. Ozcan, “Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy,” Opt. Express **20**(3), 3129–3143 (2012). [CrossRef] [PubMed]

**35. **T. Latychevskaia and H.-W. Fink, “Solution to the twin image problem in holography,” Phys. Rev. Lett. **98**(23), 233901 (2007). [CrossRef] [PubMed]

**36. **T.-W. Su, S. O. Isikman, W. Bishara, D. Tseng, A. Erlinger, and A. Ozcan, “Multi-angle lensless digital holography for depth resolved imaging on a chip,” Opt. Express **18**(9), 9690–9711 (2010). [CrossRef] [PubMed]

**37. **J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, “Digital in-line holographic microscopy,” Appl. Opt. **45**(5), 836–850 (2006). [CrossRef] [PubMed]

**38. **L. Repetto, E. Piano, and C. Pontiggia, “Lensless digital holographic microscope with light-emitting diode illumination,” Opt. Lett. **29**(10), 1132–1134 (2004). [CrossRef] [PubMed]

**39. **T. M. Kreis, M. Adams, and W. P. Jüptner, “Methods of digital holography: a comparison,” in *Lasers and Optics in Manufacturing III* (International Society for Optics and Photonics, 1997), pp. 224–233.

**40. **U. Schnars and W. Jüptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. **33**(2), 179–181 (1994). [CrossRef] [PubMed]

**41. **E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Appl. Opt. **38**(34), 6994–7001 (1999). [CrossRef] [PubMed]

**42. **É. Lalor, “Conditions for the validity of the angular spectrum of plane waves,” J. Opt. Soc. Am. **58**(9), 1235–1237 (1968). [CrossRef]

**43. **J. E. Harvey, “Fourier treatment of near-field scalar diffraction theory,” Am. J. Phys. **47**(11), 974–980 (1979). [CrossRef]

**44. **J. J. Stamnes, “Focusing of two-dimensional waves,” J. Opt. Soc. Am. **71**(1), 15–31 (1981). [CrossRef]

**45. **M. K. Kim, “Principles and techniques of digital holographic microscopy,” J. Photonics Energy, 018005 (2010). [CrossRef]