## Abstract

Self-interference digital holography (SIDH) and Fresnel incoherent correlation holography (FINCH) are recently introduced holographic imaging schemes that record and reconstruct three-dimensional (3D) information of objects using incoherent light. Unlike conventional holography, the reference wave in incoherent holography is not predetermined by the experimental setup but changes with the target object. This makes the relation between the 3D position of an object and the information stored in a measured hologram quite complicated. In this paper, we provide simple analytic equations for an effective 3D mapping between the object space and the image space in incoherent holography. We have validated our proposed method with numerical simulations and off-axis SIDH experiments.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Ever since the invention of holography by Gabor [1], many efforts have been made to use it for 3D imaging and display. A hologram is an interference pattern recorded between an object wave and a reference wave. In conventional holography, the reference wave is either a plane wave or a spherical wave fixed by the experimental setup. The 3D information of an object can be retrieved from a measured hologram with a known reference wave. Several significant advances and brilliant ideas have lately been reported in optical holography [2–4]. In 1999, E. Cuche demonstrated quantitative phase microscopy with digital holography, which achieved measurements on the scale of one-tenth of a nanometer in the axial direction [5].

Moreover, it has been shown that the 3D information of an object can be acquired from a single hologram by numerically reconstructing wavefields from digitally recorded holograms using diffraction formulas [6,7]. However, digital holography requires a coherent light source to generate an interference pattern. Unlike coherent laser light, natural light sources, such as incandescent or fluorescent lamps, have short coherence lengths, which makes it hard to observe interference patterns.

Several ingenious methods have thus been proposed for incoherent holography: scanning holography [8,9], Fresnel incoherent correlation holography (FINCH) [10–12], and self-interference incoherent digital holography (SIDH) [13–16]. In scanning holography, interference patterns are obtained by using a heterodyne detection scheme with a single-pixel detector. A 2D hologram of a fluorescent sample is obtained with an incoherent light source after scanning the sample along the transverse plane. This method requires a complicated measurement system and heavy numerical post-processing, and it does not have the advantage of single-shot data acquisition. The FINCH and SIDH methods use self-interference, where the object light scattered from or transmitted through a sample is divided into two. In FINCH or SIDH, light diffracted from one object point can interfere only with itself. FINCH uses a spatial light modulator (SLM), and SIDH uses a Michelson interferometer to obtain self-interference holograms [10–15]. Additionally, there are several approaches to improve FINCH and SIDH [16–22]. A measured hologram can be expressed with the following equation:

$$I = |R+O|^{2} = |R|^{2}+|O|^{2}+RO^{*}+R^{*}O. \tag{1}$$

It is the intensity of an interference pattern measured between a reference light $(R)$ and an object light $(O)$. The first two terms on the right side do not contain any phase information and are known as the DC terms of a hologram. The remaining two terms are complex conjugates of each other. The third term is called the twin image term, and the last term has the object wave information we need to retrieve. A phase-shifting method is often used in in-line holography to remove the two DC terms and the twin image term, where three or four phase-shifted holograms are measured by using a precision phase-shifting device [10,13,23]. In off-axis holography, the propagation directions of the reference wave $(R)$ and the object wave $(O)$ are different. When the reference and the object waves are not parallel, the DC terms and the twin image term are placed at different positions in the Fourier domain and can be easily removed [24–26]. In 2013, Kim et al. reported an off-axis SIDH with an LED source [16]. Muhammad et al. proposed another incoherent off-axis holography system without a spatial light modulator [17].

In traditional digital holography, a laser with long temporal and spatial coherence is used to generate an interference pattern even when the path length difference between the reference and object waves is significant. In this case, the reference wave comes directly from the laser source and does not contain any information about the object. The reference wave is fixed on the detection plane of the holographic imaging system, and only the object wave changes with the sample and carries its 3D information. The 3D information of a sample can be retrieved from a hologram by numerical beam propagation, and to do this we need to know the mathematical expression of the reference wave. Therefore, accurate calibration of the reference wave is necessary for coherent holography.
Unlike the case of coherent holography, in incoherent holography such as FINCH or SIDH, the reference wave changes with the sample position, as does the object wave. When the reference wave changes with the sample position, the relation between the 3D position of an object and that stored in a hologram does not follow the conventional relation [27], and the 3D reconstruction of an object is quite a complicated procedure in FINCH and SIDH [28].
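The off-axis Fourier filtering described above can be sketched numerically. In the sketch below, a synthetic hologram is formed with a tilted plane reference; the object term then sits on a spatial-frequency sideband, which is cropped, re-centered ("cut and pasted") to remove the carrier, and inverse transformed. The grid size, carrier frequency, and phase object are illustrative assumptions, not the experimental values.

```python
import numpy as np

# Synthetic off-axis hologram: unit-amplitude object wave O = exp(i*phi)
# interfering with a tilted plane reference R = exp(i*2*pi*f0*x).
N = 256
y, x = np.mgrid[0:N, 0:N]
phi = np.exp(-((x - N/2)**2 + (y - N/2)**2) / (2 * 30.0**2))  # smooth phase bump (rad)
f0 = 0.25                                                     # carrier, cycles/pixel (assumed)
hologram = np.abs(np.exp(1j * 2*np.pi * f0 * x) + np.exp(1j * phi))**2

# Fourier-domain spatial filtering: keep only the +f0 sideband.
F = np.fft.fftshift(np.fft.fft2(hologram))
cy, cx = N//2, N//2 + int(f0 * N)          # sideband center
mask = (x - cx)**2 + (y - cy)**2 < 20**2   # circular crop window
side = np.where(mask, F, 0)

# "Cut and paste" the sideband to the center of the Fourier plane to remove
# the carrier, then inverse transform to recover a complex-valued term.
side = np.roll(side, -int(f0 * N), axis=1)
recovered = np.fft.ifft2(np.fft.ifftshift(side))

# The +f0 sideband carries R*conj(O) = exp(i*2*pi*f0*x) * exp(-i*phi),
# so after carrier removal the recovered phase tracks -phi.
phase = np.angle(recovered)
corr = np.corrcoef(phi.ravel(), -phase.ravel())[0, 1]
print(round(corr, 3))
```

The other sideband would give the conjugate (twin) phase instead; which one is kept is a matter of convention.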

In this paper, we have investigated the relationship between the 3D position of an object and that stored in a hologram in incoherent holography. We provide simple analytic equations for an effective 3D mapping between the object and image spaces in incoherent holography. Using these equations, we have analyzed the axial and transverse magnifications of an incoherent holographic imaging system with two lenses. The validity of our proposed equations and their usage in practical 3D mapping applications were demonstrated experimentally by using an off-axis SIDH system with an artificial point object. The artificial point object was made by guiding an LED light into a single-mode fiber. We took a series of holograms while scanning the artificial point object along the x, y, and z directions in the object space. The x-y-z coordinates of the point source were calculated from each measured hologram. A 3D volumetric mapping between the object and the image spaces of our SIDH system was obtained by comparing the actual 3D positions of the point source with the 3D positions calculated from the measured holograms. We have demonstrated that the actual 3D position of an object can be retrieved well from a measured incoherent hologram.

## 2. Theoretical analysis of the three-dimensional mapping in incoherent holography

#### 2.1 Incoherent holography

The coherence of a light source determines the visibility of interference patterns in a measured hologram. The visibility of an interference pattern becomes maximum when a perfectly coherent light source is used. The spatial coherence of an extended light source decreases as the solid angle of the light source, observed from an object it illuminates, increases. For a point source, the solid angle of the light source is zero, and the spatial coherence and the visibility of a measured hologram become maximum. Coherence length is a parameter representing the degree of temporal coherence of a light source; it is inversely proportional to the spectral bandwidth of the source. As the path length difference between the reference and object waves increases in an interferometer, the fringe visibility of the interference pattern decreases and drops by half when the path length difference equals the coherence length. Since the coherence length of a typical incoherent light source, such as an LED, is only on the order of ten micrometers, it is hard to take a hologram with incoherent light; the path lengths of the two interferometer arms must be matched to within the coherence length. Even when the two paths of an interferometer are perfectly matched, the maximum measurable distance of an object along the axial direction is limited to the coherence length of the light source for a conventional holographic imaging system. To avoid this problem, neither FINCH nor SIDH uses a reference wave with a separate light path. Instead, interference patterns are formed in incoherent holography by mixing two copies of the object light: the object light is split into two, and the two copies are arranged to have slightly different radii of curvature. A hologram is obtained by overlapping them on a detection screen.
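The order-of-magnitude claim above can be checked with the common estimate $l_c \approx \lambda^2/\Delta\lambda$ (the exact prefactor depends on the assumed spectral line shape). For the 640 nm LED with 14 nm bandwidth used later in this paper, it gives roughly 30 µm:

```python
# Coherence length estimate l_c ~ lambda^2 / d_lambda (a standard rule of
# thumb; order-one prefactors vary with the assumed spectral line shape).
center_wavelength = 640e-9   # m, LED used in the experiment
bandwidth = 14e-9            # m, spectral width of the LED

coherence_length = center_wavelength**2 / bandwidth
print(f"coherence length ~ {coherence_length * 1e6:.1f} um")  # ~29 um
```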

#### 2.2 Extracted image position in incoherent holography

When two spherical object waves with different radii of curvature $Z_1$ and $Z_2$ are overlapped on a detection screen, we obtain an interference pattern with a new radius of curvature $Z_c$ given by Eq. (2) [29]:

$$\frac{1}{Z_c}=\frac{1}{Z_1}-\frac{1}{Z_2}. \tag{2}$$

$Z_1$ and $Z_2$ are the relative axial positions from the detection plane to the focusing points of the two object waves. $Z_c$ is the retrieved axial position of an image obtained by incoherent holography. $Z_1$, $Z_2$, and $Z_c$ are positive when the waves are converging on the observation plane and negative when they are diverging. When the axial position of an object changes, both $Z_1$ and $Z_2$ change. Since we can only obtain $Z_c$ from a measured hologram by using a numerical focusing method, retrieving the axial position of a point object from a measured hologram is not a straightforward process. Thorough calibration of the optical system is required to calculate the axial position of an object from a measured hologram. Since multiple ($Z_1$, $Z_2$) pairs satisfy Eq. (2) for a given $Z_c$, we should find proper constraints of the optical system that allow only a single solution of ($Z_1$, $Z_2$). The axial position of the point object can then be obtained from $Z_1$ and $Z_2$ by using the thin-lens equation.

Figure 1(a) shows a schematic diagram of a simple incoherent holographic imaging system, where an object is imaged by two colocated lenses whose focal lengths are $f_1$ and $f_2$. This can be realized by using a spatial light modulator (SLM) in FINCH or a Michelson interferometer in SIDH. “$O$” on the optical axis represents an object, and “$L$” is the axial position of the two lenses. $P_1$ and $P_2$ are the images produced by the two lenses. Light passing through these two lenses works as two object waves producing interference patterns. The pink, green, and red lines labeled “$A$”, “$B$”, and “$C$” on the right side of the two lenses are three possible positions of an observation plane in incoherent holography. Figure 1(b) illustrates the axial and transverse positions of the two images formed by the two lenses in incoherent holography. A red arrow on the far left side of the diagram is an object.
The distance between the object and the two lenses is $a$. The distance from the two lenses to the measurement plane is $S$. Both $a$ and $S$ are positive in this notation. The two lenses produce two different images on the right side of the lenses. $(Z_1+S)$ and $(Z_2+S)$ are the distances from the two lenses to the images. By using the thin-lens equation, we have the following relation:

$$\frac{1}{a}+\frac{1}{Z_i+S}=\frac{1}{f_i}, \qquad i = 1, 2. \tag{3}$$

$Z_1$ and $Z_2$ are the radii of curvature of the two object waves observed on the measurement plane. In Fig. 1(b), $Z_1$ and $Z_2$ are both positive because the two images are formed on the right side of the screen. $Z_1$ or $Z_2$ becomes negative if its image is located on the left side of the measurement plane. Putting Eq. (3) into Eq. (2), we obtain the relation between the axial position of an object $a$ and the radius of curvature $Z_c$, which is the axial position of the image reconstructed from a measured hologram. $Z_c$ is a relative axial position with respect to the measurement plane; it is negative (positive) when the image is on the left (right) side of the measurement plane.
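Combining the thin-lens relation for each arm with the curvature-difference relation for the hologram gives $Z_c$ directly from the object distance $a$. A small sketch, using the two-lens simulation parameters quoted below ($f_1 = 75$ mm, $f_2 = 100$ mm) and assuming the sign conventions defined above:

```python
def image_curvature(a, f, S):
    """Radius of curvature Z of one object wave at the measurement plane:
    the thin lens forms an image a*f/(a - f) behind the lens, and Z is
    measured relative to a screen placed a distance S behind the lens."""
    return a * f / (a - f) - S

def reconstruction_distance(a, f1, f2, S):
    """Axial image position Z_c from 1/Zc = 1/Z1 - 1/Z2 (Eq. (2))."""
    z1 = image_curvature(a, f1, S)
    z2 = image_curvature(a, f2, S)
    return 1.0 / (1.0 / z1 - 1.0 / z2)

# Case "A" of Fig. 2: S = 80 mm, both waves converging (Z1, Z2 > 0).
f1, f2 = 75.0, 100.0   # mm
print(reconstruction_distance(190.0, f1, f2, S=80.0))  # ~66 mm
print(reconstruction_distance(350.0, f1, f2, S=80.0))  # ~21 mm; Zc decreases with a
```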

#### 2.3 The effect of a measurement plane

Unlike conventional coherent holography, the axial position of the measurement plane produces different results in reconstructed images in incoherent holography. When the measurement plane is at the “$A$” position in Fig. 1(a), a hologram is made with two converging object waves. Since “$P_1$” and “$P_2$” in Fig. 1(a) are both on the right side of the measurement plane, $Z_1$ and $Z_2$ are both positive. When the measurement plane is at the “$B$” position, $Z_1$ is negative, and $Z_2$ is positive. $Z_1$ and $Z_2$ are both negative when the screen is at the “$C$” position, and a hologram is made with two diverging waves.

Figure 2 shows simulation data for the relation between the object space and the image space of a simplified incoherent holographic imaging system for the three different positions of a measurement plane illustrated in Fig. 1(a). We considered an incoherent holographic imaging system composed of two colocated lenses whose focal lengths are $f_1 = 75 mm$ and $f_2 = 100 mm$. We have calculated the axial position of an image and the axial and transverse magnifications while the axial position of an object is varied from $190 mm$ to $350 mm$. The first case is when the observation plane is at $S = 80 mm$. $Z_1$ and $Z_2$ are both positive; this corresponds to the case when the measurement plane is at the “$A$” position. Figure 2(a) shows that the axial position of an image $Z_c$ is positive and decreases as the distance between the two lenses and the object increases from $190 mm$ to $350 mm$. Figure 2(b) shows the transverse and the axial magnifications of the incoherent imaging system as a function of the object position. The axial magnification $M_z$ ranges from 0.11 to 0.65, while the transverse magnification $M_t$ ranges from -0.77 to -0.29. The second case is when the measurement plane is located $140 mm$ ($S = 140 mm$) to the right of the two lenses. $Z_1$ is negative, and $Z_2$ is positive; this corresponds to the case when the measurement plane is at the “$B$” position. Figure 2(c) shows that $Z_c$ is negative and is a U-shaped function of $a$. The variation in $Z_c$ is only $16.3 mm$ while $a$ changes from $190 mm$ to $350 mm$. Since 3D positions in the image space do not have a one-to-one correspondence to those in the object space, this case should be avoided for practical 3D imaging. The axial magnification $M_z$ starts from 0.24 and ends at -0.16; it changes sign from positive to negative. The transverse magnification $M_t$ ranges from -0.40 to -0.67. The last case we have considered is when the measurement plane is located at $S = 220 mm$.
Figure 2(e) shows that $Z_c$ is positive and is almost a linear function of $a$. Since $Z_1$ and $Z_2$ are both negative, this corresponds to the case where the measurement plane is at the “$C$” position. Figure 2(f) shows that the axial and transverse magnifications are both negative and large in this case. Since the variation of $Z_c$ with respect to $a$ is large, this is the most preferred situation of the three cases. However, since there is a trade-off between the axial and lateral resolutions in incoherent holographic imaging [10–16], this condition inevitably provides a very low transverse resolution.
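The three measurement-plane cases can be checked numerically. Under the same assumptions as above (thin lenses, $1/Z_c = 1/Z_1 - 1/Z_2$), the signs of $Z_1$ and $Z_2$ and the size of the numerical axial magnification $|dZ_c/da|$ reproduce the trends described for the “$A$”, “$B$”, and “$C$” positions:

```python
def zc(a, f1, f2, S):
    """Return (Z1, Z2, Zc) for object distance a and screen distance S."""
    z1 = a * f1 / (a - f1) - S   # curvature of arm 1 at the screen
    z2 = a * f2 / (a - f2) - S   # curvature of arm 2 at the screen
    return z1, z2, 1.0 / (1.0 / z1 - 1.0 / z2)

f1, f2 = 75.0, 100.0  # mm
for label, S in [("A", 80.0), ("B", 140.0), ("C", 220.0)]:
    z1, z2, zc_near = zc(190.0, f1, f2, S)
    # numerical axial magnification |dZc/da| at a = 190 mm
    _, _, zc_eps = zc(190.5, f1, f2, S)
    mz = abs(zc_eps - zc_near) / 0.5
    print(f"{label}: Z1={z1:+.1f}, Z2={z2:+.1f}, Zc={zc_near:+.1f}, |Mz|~{mz:.2f}")
```

At $a = 190$ mm this yields $Z_1, Z_2 > 0$ with $Z_c > 0$ for case A, $Z_1 < 0 < Z_2$ with $Z_c < 0$ for case B, and $Z_1, Z_2 < 0$ with $Z_c > 0$ for case C, matching the sign discussion above.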

Figure 3 shows the mapping of 3D volumes from the object space to the image space. We considered the same incoherent holographic imaging system that produced the results of Fig. 2. A cube with a side length of $40 mm$ is used as an object. Holographic images are calculated for the object when its center is located at $a = 220 mm$. Figure 3(a) shows 3D volume images obtained from the three different measurement planes that produce the results in Fig. 2. The three volume images are shown in a single figure by placing the axial position of each measurement plane at the same position ($0 mm$). A cyan plane in the middle of the figure is the measurement plane. A green square frustum located very close to the screen on its left side is the image obtained when the measurement plane is at the “$B$” position with $S = 140 mm$. This corresponds to the results of Fig. 2(c). Since $Z_c$ is not a one-to-one function of $a$, the 3D volume in the image space shrinks to a very small size. A large red square frustum on the right side of the cyan plane is the image obtained when the measurement plane is at the “$C$” position with $S = 220 mm$. It has the largest 3D volume because $Z_c$ is a fast-varying and almost linear function of $a$, as shown in Fig. 2(e). A small yellow square frustum inside the red one is the 3D volume image when the measurement plane is located at the “$A$” position with $S = 80 mm$. This corresponds to the results shown in Fig. 2(a). Since $Z_c$ is a slowly varying function of $a$, it covers only a small volume in the image space. Figure 3(b) shows more examples of volume mapping from the object space to the image space when the measurement plane is at the “$C$” position with $S = 220 mm$. A cube with a side length of $30 mm$ is used as an object. We have calculated four 3D image volumes while the center of the object is shifted from $210 mm$ to $330 mm$ with a step size of $40 mm$. The four 3D volume images in Fig. 3(b) show that the transverse magnification, as well as the axial magnification, of this holographic imaging system decreases as $a$ is increased. They show that a large 3D volume in the object space is mapped almost linearly to another large 3D volume in the image space with little distortion in this case.

## 3. Method

#### 3.1 Off-axis SIDH setup

In order to verify the complex relations of 3D volume mapping from the object space to the image space in incoherent holography, we built the simple incoherent holographic imaging system shown in Fig. 4. It is an off-axis self-interference digital holography (SIDH) system based on a Michelson interferometer. We made a series of measurements while scanning a point object along the axial and transverse directions in the object space. Light from an LED is butt-coupled into a single-mode fiber (630-HP, Nufern). The center wavelength of the LED is 640 nm, and its spectral width is 14 nm. The core diameter of the fiber is 3.5 µm, and the numerical aperture is 0.13. The other end of the fiber is cleaved, and the diverging light from the fiber end is used as a point object. The optical power of this artificial point object was measured to be about 90 nW. The light from the fiber end is collimated with an achromatic lens $L$ ($f_1 = 48 mm$). We used a non-polarizing beam splitter cube (CCM1-BS013, Thorlabs) to divide the object light into two parts with a 50:50 intensity ratio. The thickness of the beam splitter $\Delta$ is 25.4 mm. It is made of BK7 glass, which has a refractive index of 1.515 at a 640 nm wavelength.

Two concave mirrors ($CM1$ and $CM2$) are used to make an interference pattern on the hologram plane. Their focal lengths are $f_2 = 50 mm$ and $f_3 = 100 mm$. $d_2$ is the distance from the lens $L$ to the center of the beam splitter cube. $d_3$ and $d_4$ are the distances from the center of the beam splitter to the two concave mirrors $CM1$ and $CM2$, respectively. We made $d_3$ and $d_4$ equal by adjusting $d_4$ with a micrometer while monitoring the visibility of the fringe patterns. We used an EMCCD (Luca-S, Andor) to obtain holograms. $d_5$ is the distance from the EMCCD to the center of the beam splitter cube. We have $d_2 = 105 mm$, $d_3 = d_4 = 28 mm$, and $d_5 = 292.5 mm$. We tilted $CM1$ by about 0.9 degrees to obtain an off-axis hologram. We used a 3-axis motorized stage (MX310/M, Thorlabs) to move the point object (fiber tip) along the x, y, and z directions in the object space.

#### 3.2 Ray transfer matrix analysis for incoherent holography

The ray transfer matrix method uses the paraxial approximation to find the output ray vector of an optical system from an input ray vector [27]. A ray vector is defined as $\left(\begin {smallmatrix} \theta \\ y \end {smallmatrix}\right)$, where $\theta$ is the angle of the ray with respect to the optical axis, and $y$ is the transverse distance of the ray from the optical axis. We derived the ray transfer matrix of our optical system to obtain the analytic expressions of $Z_1$ and $Z_2$ by using the parameters of the experimental setup shown in Fig. 4. The output rays on the measurement plane through the two beam paths of the Michelson interferometer can be calculated by using the two system matrices $M_{sys1}$ and $M_{sys2}$.
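As an illustration of the bookkeeping involved (not the specific system matrices of the paper, which also include the beam-splitter glass path), ABCD matrices for free space and a thin lens can be composed, and the radius of curvature of a spherical wave transforms as $R' = (AR + B)/(CR + D)$. Note the sketch uses the common $(y, \theta)$ ordering rather than the $(\theta, y)$ ordering above; the physics is the same up to a basis swap.

```python
import numpy as np

def free_space(d):
    """ABCD matrix for propagation over a distance d, (y, theta) convention."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """ABCD matrix for a thin lens (or mirror of equivalent focal length) f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def curvature_out(M, R_in):
    """Transform a spherical wave's radius of curvature R = y/theta
    through the system matrix M: R' = (A*R + B) / (C*R + D)."""
    (A, B), (C, D) = M
    return (A * R_in + B) / (C * R_in + D)

# Sanity check against the two-lens example of Section 2: a point source
# a = 190 mm in front of a f = 75 mm lens, screen S = 80 mm behind it.
M = free_space(80.0) @ thin_lens(75.0)
R_out = curvature_out(M, R_in=190.0)   # R_in = a: diverging wave at the lens
print(R_out)  # ~ -43.9 mm in this sign convention: focus 43.9 mm past screen
```

The magnitude of `R_out` matches the $Z_1 \approx 43.9$ mm obtained from the thin-lens equation for the same geometry.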

#### 3.3 Experimental 3D mapping from an image space onto an object space

We have experimentally verified the relation between the axial position $x (= d_1)$ of an object and that of a reconstructed image $Z_c$ by measuring multiple holograms of a point object. We used a fiber tip as a point object and calculated its 3D position from a measured incoherent hologram. A commercially available LED with a 640 nm center wavelength, 14 nm spectral width, and 3 W average power was used as the light source. The output of the LED is butt-coupled into a single-mode fiber (630-HP, Nufern) whose core diameter is 3.5 µm. The output power from the other end of the fiber was measured to be about 100 nW. The fiber was sandwiched between two slide glasses using double-sided tape, as shown in Fig. 5. Figure 6 shows the reconstruction sequence for finding the axial position of a point object from a measured hologram in SIDH.

Figure 6(a) shows a typical hologram of a fiber tip measured by our SIDH system. About 10 thin curved lines along the diagonal direction from the top left corner to the bottom right corner are off-axis interference patterns formed by the two beam paths. Three thick circular fringe patterns at the center and the lower right corner are due to dust. Figure 6(b) shows the intensity of the Fourier transform of the hologram shown in Fig. 6(a). We used the spatial filtering method [26] for zero-order and twin-image elimination. All the frequency components are set to zero except for those within the red square in Fig. 6(b). These frequency components are cut and pasted to the center of the Fourier plane before doing the numerical propagation calculation to find the axial position of a focused image. We used the angular spectrum method (ASM) [30] to generate a series of propagated intensity images along the optical axis from the detection plane. Tamura’s coefficient is used to measure the focus level of each generated image [31]. Figure 6(c) shows the best-focused image of the fiber tip among the many generated images. Note that the full width at half maximum (FWHM) of the best-focused spot along the +45 degree direction is about three times larger than that along the -45 degree direction. This is because the interference patterns in Fig. 6(a) are not circular but are curved diagonal lines. Since the measurement plane of our SIDH setup is located too far from the two focused image points of the fiber tip, only a fraction of the elliptic interference pattern is captured by the array sensor. In order to have a circularly symmetric intensity profile for a focused image of the fiber tip, we need to put the array sensor close to one of the two focusing points, such that the size of the elliptic interference pattern becomes small and its center is within the sensor area. Figure 6(d) shows the normalized Tamura’s coefficient as a function of propagation distance.
The axial position of an image ($Z_c$) is the axial position at which Tamura’s coefficient becomes maximum.
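The refocusing loop described above (angular spectrum propagation followed by a Tamura-coefficient focus metric) can be sketched as follows. The grid size, pixel pitch, and the synthetic converging wave are illustrative assumptions, not the experimental values:

```python
import numpy as np

wavelength = 640e-9   # m (LED center wavelength)
dx = 10e-6            # m, pixel pitch (illustrative)
N = 256

def asm_propagate(field, z):
    """Angular spectrum method: propagate a complex field by distance z."""
    fx = np.fft.fftfreq(N, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def tamura(intensity):
    """Tamura's coefficient, sqrt(std/mean) of the intensity: a focus metric."""
    return np.sqrt(intensity.std() / intensity.mean())

# Synthetic test: an aperture with a quadratic phase converging at z0 = 60 mm.
y, x = (np.mgrid[0:N, 0:N] - N / 2) * dx
r2 = x**2 + y**2
z0 = 0.060
field = (r2 < (0.8e-3)**2) * np.exp(-1j * np.pi * r2 / (wavelength * z0))

# Scan the propagation distance and pick the plane of maximum Tamura coefficient.
zs = np.arange(0.040, 0.081, 0.002)
scores = [tamura(np.abs(asm_propagate(field, z))**2) for z in zs]
z_best = zs[int(np.argmax(scores))]
print(f"best focus at z = {z_best * 1e3:.0f} mm")  # expected near 60 mm
```

In the experiment the input `field` would be the spatially filtered, re-centered hologram term rather than a synthetic wave.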

Figure 7 shows the relation between the axial position $x (= d_1)$ of a point object (fiber tip) and that of a reconstructed image $Z_c$ measured by our SIDH setup. A series of holograms was taken while translating the point object along the axial direction with a stepper motor. Figure 7(a) shows 110 data points showing the relation between the object position $x$ and the reconstructed image position $Z_c$. They consist of 10 sets of data points covering 20 mm of axial distance in the object space. Each set has 11 equally spaced data points covering a 300 µm object distance with a step size of 30 µm along the axial direction. Blue solid circles are reconstructed image positions calculated from measured holograms by the sequence illustrated in Fig. 6. The solid orange curve shows a theoretical curve calculated from the analytic expression of $Z_c$ in Eq. (10) with the measured parameters $f_1, f_2, f_3, d_2, d_3, d_4, d_5$, and $\Delta$. Figure 7(b) is an expanded view of Fig. 7(a) when the axial position of the object is near $x = 38 mm$. The solid green line is the least-squares fit of the experimental data with Eq. (10) for $C_1, C_2, C_3$, and $C_4$. Note that the least-squares fit (green line) matches the experimental data better than the system analysis curve (orange curve). This is because some parameters ($d_2, d_3, d_4, d_5$) were measured with a ruler with 1 mm marks and are not very accurate. The half-length of an error bar represents the standard deviation of 11 repeated measurements for a given object position. These results show that the accuracy (or the mean of standard deviations) of the measured axial position of our method is about 0.5 mm. Since the axial magnification is about 43 at $x = 38 mm$ (Fig. 8), this corresponds to 12 µm in the object space.
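Eq. (10) itself is not reproduced in this excerpt, but ABCD analysis generally suggests a four-parameter rational calibration of the form $Z_c(x) = (C_1 x + C_2)/(C_3 x + C_4)$; this parameterization is an assumption here. With $C_4$ normalized to 1, such a curve can be fitted by linear least squares after multiplying through by the denominator:

```python
import numpy as np

def fit_bilinear(x, zc):
    """Fit Zc = (C1*x + C2) / (C3*x + 1) by linear least squares:
    rearranged, Zc = C1*x + C2 - C3*(x*Zc)."""
    A = np.column_stack([x, np.ones_like(x), -x * zc])
    c1, c2, c3 = np.linalg.lstsq(A, zc, rcond=None)[0]
    return c1, c2, c3

def bilinear(x, c1, c2, c3):
    return (c1 * x + c2) / (c3 * x + 1.0)

# Self-test on synthetic calibration data with hypothetical coefficients.
x = np.linspace(26.0, 44.0, 110)   # object positions, mm (range as in Fig. 8)
true = (2.0, -30.0, -0.02)         # hypothetical C1, C2, C3
zc = bilinear(x, *true)

fit = fit_bilinear(x, zc)
print(np.allclose(bilinear(x, *fit), zc))  # exact data -> curve recovered
```

With noisy measurements the same linearized fit works as a starting point, optionally refined by nonlinear least squares.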

For a given axial position, we measured the transverse magnification of our SIDH system. Eleven holograms were measured while translating the fiber tip over a 300 µm distance along the transverse direction with a step size of 30 µm. Because of the low temporal coherence of the light source and the tilt between the two object lights in our off-axis SIDH system, we could only obtain holograms for objects close to the optical axis of the imaging system. We expect that this problem can be removed when a phase-shifting in-line holographic imaging system is used. Figure 8 shows changes in the axial and the transverse magnifications of our SIDH system with respect to the axial object position. Each of the axial and the transverse magnifications is calculated by fitting 11 data points with straight lines along the axial and the transverse directions. The blue and orange curves are best-fitted curves with second-order polynomial functions. When the axial position of the point object is scanned from 26 mm to 44 mm, the transverse magnification varies only slightly, from 4.0 to 5.5, while the axial magnification changes substantially, from 21.0 to 65.6. In a conventional imaging system, the axial magnification is the square of the transverse magnification. Our results clearly show that this relation does not hold in incoherent holography.
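The conventional-imaging relation invoked here is easy to verify numerically for a single thin lens, which highlights how different the measured SIDH values are (e.g., $M_t \approx 4.0$ paired with $M_z \approx 21.0$ rather than $4.0^2 = 16$):

```python
# Single thin lens: image distance b = a*f/(a - f), transverse magnification
# Mt = -b/a, axial magnification Mz = db/da. Conventionally |Mz| = Mt**2.
f, a, eps = 50.0, 75.0, 1e-6   # example focal length and object distance, mm

def image_distance(a):
    return a * f / (a - f)

mt = -image_distance(a) / a
mz = (image_distance(a + eps) - image_distance(a - eps)) / (2 * eps)
print(mt, mz)  # Mt = -2, Mz = -4 = -(Mt**2)
```

The sign of $M_z$ reflects that the image moves toward the lens as the object moves away; the magnitude relation $|M_z| = M_t^2$ is what fails in the incoherent holographic system.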

#### 3.4 Imaging an artificial 3D object

In order to demonstrate the 3D imaging capability of our SIDH system, we took hologram images of an artificial 3D object made of three fiber tips. A homemade fiber bundle was prepared by putting ten fibers into a capillary glass tube. One end of the fiber bundle was cut and polished for light coupling and directly butt-coupled to an LED whose center wavelength is 640 nm. The output powers of the coupled lights range from 60 to 300 nW. Three fibers with similar output powers, around 150 nW, were selected to make the artificial 3D object. The three fiber tips were sandwiched between two slide glasses using double-sided tape. Figure 9 shows the schematic diagram of the artificial 3D object composed of three fiber tips. The three fibers are aligned on a straight line along the transverse direction by the two slide glasses. The axial positions of the three fiber tips are all different.

We took a hologram of this artificial 3D object with our SIDH setup and calculated the 3D positions of the fiber tips in the object space. Figure 10(a) is the measured hologram. Intensity images of the 3D object were calculated for various propagation distances by using the ASM. Figure 10(b) shows the reconstructed images of the three fiber tips at their focused positions. Red squares represent the areas within which Tamura’s coefficient was calculated. Each square consists of 60 by 60 image pixels. Figure 10(c) is an image of the three fiber tips viewed from the side, taken by a microscope with 10X magnification. The transverse and axial distances between the fibers were calibrated by using the fact that the outer diameter of a commercial fiber (630-HP, Nufern) is 125 µm.

Figure 10(d) shows normalized Tamura’s coefficient as a function of the axial position for the three fiber tips. Calculated axial positions of the three fiber tips correspond to the peak positions of the three Tamura’s coefficient curves.

Table 1 shows the 3D positions of the three fiber tips in both the image space and the object space. The 3D coordinates of the three fiber tips ($x'_i, y'_i, z'_i$) were obtained by the numerical focusing method illustrated in Fig. 10 for $i = 1, 2, 3$. According to the axial position of each fiber tip ($z'_i$) in the image space, the axial and the transverse magnifications were obtained from Fig. 8 and used to find the corresponding 3D position in the object space ($x_i, y_i, z_i$). We have calculated the distances between pairs of point objects ($\Delta {}x_{ij}, \Delta {}y_{ij}, \Delta {}z_{ij}$) along the $x$, $y$, and $z$ directions, and the results are given in Table 2. These distances were compared with the measured distances shown in Fig. 10(c). We have good agreement between the results by SIDH and those obtained by direct side-view imaging with a microscope. The axial distances agree with each other to within 0.2%. However, the transverse distances agree only to within about 12%. These large errors along the transverse direction are caused by the large distortion due to the off-axis geometry of our SIDH setup and the large pixel pitch (10 µm) of the array sensor we used.

## 4. Conclusion

In this study, we have investigated the 3D mapping problem in incoherent holography. We carried out a theoretical analysis of a simple incoherent holographic imaging system with two lenses. It is demonstrated that the axial position of the observation plane is a critical parameter. In order to obtain a high axial resolution in incoherent holography, the observation screen of a holographic imaging system should be located farther away from the two images of an object, such that the interference patterns are formed by two diverging waves. In Eq. (4) we provided a simple analytic expression with five unknown parameters for the relation between the axial object position $a$ and the axial image position $Z_c$ for the simple two-lens incoherent holographic imaging system. We also provided an analytic expression, Eq. (5), for the transverse magnification between a transverse plane in the object space and the corresponding transverse plane in the image space for a given axial position of an object. In theory, we can thus map between the 3D object space and the 3D image space of a simple incoherent holographic imaging system using two lenses.

In order to demonstrate simple 3D mapping relations between the object and the image spaces in incoherent holography, we built an off-axis SIDH imaging system with three lenses. We provided analytic expressions relating the axial position of an object $a$ and that of the produced image $Z_c$ with ray transfer matrices in Eq. (10), which has four unknown parameters. By using a fiber tip as a point source, we mapped the 3D positions in the object space to those in the image space for the SIDH system we built. As predicted by theory, we demonstrated that only a few points along the axial direction are needed to fit the four unknown parameters in the analytic expression for the 3D mapping between object and image positions. We tested the precision of our 3D mapping method with an artificial 3D object composed of three fiber tips. The axial and transverse distances between the three fiber tips were measured both with our SIDH system and with a microscope. We have verified that the results of the two different methods show good agreement with each other.

## Funding

National Research Foundation of Korea (2017R1A2B4003950); Korea Institute for Advancement of Technology (P0011925).

## Disclosures

The authors declare that there are no conflicts of interest related to this article.

## References

**1. **D. Gabor, “Microscopy by Reconstructed Wave-Fronts,” Proc. R. Soc. London, Ser. A **197**(1051), 454–487 (1949). [CrossRef]

**2. **J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. **11**(3), 77–79 (1967). [CrossRef]

**3. **U. Schnars, “Direct phase determination in hologram interferometry with use of digitally recorded holograms,” J. Opt. Soc. Am. A **11**(7), 2011–2015 (1994). [CrossRef]

**4. **U. Schnars and W. Jüptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. **33**(2), 179–181 (1994). [CrossRef]

**5. **E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Appl. Opt. **38**(34), 6994–7001 (1999). [CrossRef]

**6. **S. Grilli, P. Ferraro, S. D. Nicola, A. Finizio, G. Pierattini, and R. Meucci, “Whole optical wavefields reconstruction by Digital Holography,” Opt. Express **9**(6), 294–302 (2001). [CrossRef]

**7. **C. J. Mann, L. Yu, and M. K. Kim, “Movies of cellular and sub-cellular motion by digital holographic microscopy,” BioMed Eng OnLine **5**(1), 21 (2006). [CrossRef]

**8. **T.-C. Poon, K. B. Doh, B. W. Schilling, M. H. Wu, K. K. Shinoda, and Y. Suzuki, “Three-dimensional microscopy by optical scanning holography,” Opt. Eng. **34**(5), 1338–1344 (1995). [CrossRef]

**9. **G. Indebetouw and W. Zhong, “Scanning holographic microscopy of three-dimensional fluorescent specimens,” J. Opt. Soc. Am. A **23**(7), 1699–1707 (2006). [CrossRef]

**10. **J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. **32**(8), 912–914 (2007). [CrossRef]

**11. **J. Rosen and G. Brooker, “Non-scanning motionless fluorescence three-dimensional holographic microscopy,” Nat. Photonics **2**(3), 190–195 (2008). [CrossRef]

**12. **J. Rosen and G. Brooker, “Fluorescence incoherent color holography,” Opt. Express **15**(5), 2244–2250 (2007). [CrossRef]

**13. **M. K. Kim, “Full color natural light holographic camera,” Opt. Express **21**(8), 9636–9642 (2013). [CrossRef]

**14. **D. C. Clark and M. K. Kim, “Nonscanning three-dimensional differential holographic fluorescence microscopy,” J. Electron. Imaging **24**(4), 043014 (2015). [CrossRef]

**15. **J. Hong and M. Kim, “Overview of techniques applicable to self-interference incoherent digital holography,” JEOS:RP **8**, 13077 (2013). [CrossRef]

**16. **J. Hong and M. K. Kim, “Single-shot self-interference incoherent digital holography using off-axis configuration,” Opt. Lett. **38**(23), 5196–5199 (2013). [CrossRef]

**17. **D. Muhammad, C. M. Nguyen, J. Lee, and H. Kwon, “Spatially incoherent off-axis Fourier holography without using spatial light modulator (SLM),” Opt. Express **24**(19), 22097–22103 (2016). [CrossRef]

**18. **X. Quan, O. Matoba, and Y. Awatsuji, “Single-shot incoherent digital holography using a dual-focusing lens with diffraction gratings,” Opt. Lett. **42**(3), 383–386 (2017). [CrossRef]

**19. **D. Liang, Q. Zhang, and J. Liu, “Single-shot Fresnel incoherent digital holography based on geometric phase lens,” in *Digital Holography and Three-Dimensional Imaging 2019* (Optical Society of America, 2019), paper M5A.6.

**20. **T. Tahara, T. Kanno, Y. Arai, and T. Ozawa, “Single-shot phase-shifting incoherent digital holography,” J. Opt. **19**(6), 065705 (2017). [CrossRef]

**21. **T. Nobukawa, T. Muroi, Y. Katano, N. Kinoshita, and N. Ishii, “Single-shot phase-shifting incoherent digital holography with multiplexed checkerboard phase gratings,” Opt. Lett. **43**(8), 1698–1701 (2018). [CrossRef]

**22. **V. Anand, T. Katkus, S. Lundgaard, D. Linklater, E. P. Ivanova, S. H. Ng, and S. Juodkazis, “Fresnel incoherent correlation holography with single camera shot,” arXiv:1911.08291 [physics] (2019).

**23. **I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. **22**(16), 1268–1270 (1997). [CrossRef]

**24. **E. N. Leith and J. Upatnieks, “Reconstructed wavefronts and communication theory,” J. Opt. Soc. Am. **52**(10), 1123–1130 (1962). [CrossRef]

**25. **E. N. Leith and J. Upatnieks, “Wavefront reconstruction with diffused illumination and three-dimensional objects,” J. Opt. Soc. Am. **54**(11), 1295–1301 (1964). [CrossRef]

**26. **E. Cuche, P. Marquet, and C. Depeursinge, “Spatial filtering for zero-order and twin-image elimination in digital off-axis holography,” Appl. Opt. **39**(23), 4070–4075 (2000). [CrossRef]

**27. **E. Hecht, *Optics*, 4th ed. (Addison Wesley, 2002).

**28. **J. Rosen, N. Siegel, and G. Brooker, “Theoretical and experimental demonstration of resolution beyond the Rayleigh limit by FINCH fluorescence microscopic imaging,” Opt. Express **19**(27), 26249–26268 (2011). [CrossRef]

**29. **H. Lee, P. Jeon, and D. Kim, “3d image distortion problem in digital in-line holographic microscopy and its effective solution,” Opt. Express **25**(18), 21969–21980 (2017). [CrossRef]

**30. **M. K. Kim, “Basic Methods of Numerical Diffraction,” in *Digital Holographic Microscopy: Principles, Techniques, and Applications* (Springer, 2011), pp. 43–54.

**31. **Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, “Edge sparsity criterion for robust holographic autofocusing,” Opt. Lett. **42**(19), 3824–3827 (2017). [CrossRef]