## Abstract

We propose virtual phase conjugation (VPC) based optical tomography (VPC-OT) for realizing single-shot optical tomographic imaging systems. Using computer-based numerical beam propagation, the VPC combines pre-modulation and post-demodulation of the probe beam’s wavefront, which provides an optical sectioning capability for resolving the depth coordinates. In VPC-OT, the physical optical microscope system and the VPC are coupled through digital holography. Therefore, in contrast to conventional optical tomographic imaging (OTI) systems, this method does not require additional elements such as low-coherence light sources or confocal pinholes. Obtaining single-shot three-dimensional (3D) tomographic images with a conventional OTI system is challenging; VPC-OT achieves this by employing both digital holography and computer-based numerical beam propagation. In addition, because VPC-OT is based on complex amplitude detection using digital holography, the method simultaneously yields quantitative phase contrast images. Using an objective lens with a numerical aperture (NA) of 0.8, we demonstrate single-shot 3D imaging of frog blood cells with a depth resolution of 0.94 *μ*m.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Optical tomographic imaging (OTI) provides three-dimensional (3D) tomographic images as well as the surface profiles of objects. The technique is non-invasive and provides high spatial resolution. OTI-based methods such as optical coherence tomography (OCT) [1–3] have been studied and employed in various applications such as the testing of industrial products [4] and biological imaging [5]. However, in order to obtain 3D tomographic images, these methods require 3D scanning with a probe beam, which reduces the imaging speed. In order to improve the imaging speed of OCT, Fourier domain OCT (FD-OCT) [6,7] and full-field OCT (FF-OCT) [8,9] techniques have been developed. FD-OCT simultaneously acquires the depth information of the object using a combination of spectral domain measurements (performed using a swept laser source or a grating) and an electrical Fourier transform. In contrast, FF-OCT simultaneously acquires the en-face image using a Linnik interferometer and a two-dimensional (2D) image sensor. However, it remains challenging to measure 3D images in a single shot without mechanical scanning, as FD-OCT and FF-OCT still require scanning along the transverse or depth direction, respectively.

In the field of microscopy, confocal laser scanning microscopy (CLSM) [10–12] has been developed as a technique for 3D imaging. In CLSM, although cross-sectional scanning can be avoided by incorporating a spinning disk (Nipkow disk) [13,14] or microlens and pinhole arrays [15,16], depth scanning is still necessary. Furthermore, optical diffraction tomography [17,18] and tomographic phase microscopy [19] have been proposed for quantitative 3D mapping of the refractive index in live biological cells using a heterodyne Mach-Zehnder interferometer. In addition, a 3D optical transfer function (OTF) for the incoherent microscope system was presented by Streibl in 1985 [20], together with a 3D postprocessing method for enhancing the higher frequencies. However, these microscopy methods are based on multiple measurements and processing steps. Therefore, achieving single-shot 3D tomographic imaging with OTI techniques is extremely challenging.

In this article, we propose virtual phase conjugation (VPC) based optical tomography (VPC-OT) for realizing a single-shot 3D tomographic imaging system. VPC is a method that uses computer-based numerical phase-conjugated beam propagation, and it is coupled to the actual optics through digital holography. By combining digital holography with numerical beam propagation, VPC-OT achieves single-shot 3D tomographic imaging with multi-focus imaging. In contrast to conventional OTI, VPC-OT provides an optical sectioning capability using the combination of two digital processes: pre-encryption of the probe beam using a non-periodic phase modulation, and post-decryption of the probe beam using the time-reversal property of optical phase conjugation. Unlike the above-mentioned OTI methods, because all the information of a 3D object is stored in a single recorded hologram, VPC-OT can reconstruct a 3D image via the optical sectioning properties yielded by the VPC processing, which resolves the depth information. Therefore, for acquiring tomographic images, this method does not require low-coherence interferometry, confocal optical systems, or multiple interferometric detections. In addition, the complex-amplitude detection using digital holography allows us to simultaneously obtain not only intensity-contrast tomographic images but also quantitative phase-contrast tomographic images.

This article is organized as follows. In Section 2, the basic principles of VPC-OT, as well as our experimental setup, are discussed. In Section 3, experimental results on the 3D structure of a cover glass and a commercially-prepared specimen of frog blood cells are reported; in addition, imaging characteristics such as the spatial resolution and imaging speed are discussed. The conclusions of this study are presented in Section 4.

## 2. Virtual phase conjugation based optical tomography

#### 2.1. Basic operation

As illustrated in Fig. 1, VPC-OT can be divided into the following three steps: generation of a diffusive phase-conjugated probe beam (encryption step, Fig. 1(a)), specimen measurement by digital holography (measurement step, Fig. 1(b)), and reconstruction of a 3D tomographic image (decryption step, Fig. 1(c)). Note that the optical measurement is performed after the encryption step and before the decryption step. The encryption and decryption steps are operated digitally by a computer; they are numerical calculations that do not involve any additional optical elements. In the following, we assume objects for which scattering and absorption can be neglected. In addition, the objects are regarded as multi-layered along the depth direction; in such a situation, the reflectance determined by the refractive index of the object in each layer is measured.

During the encryption step (Fig. 1(a)), a window function *A*(*x*, *y*) is assigned to the input plane (hereafter this plane will be referred to as the Fourier plane with *z* = 0), defined by

*A*(*x*, *y*) = 1 for |*x*| ≤ *A _{x}*/2 and |*y*| ≤ *A _{y}*/2, and 0 otherwise,

where *A _{x}* and *A _{y}* are the window sizes along the *x*- and *y*-axes. The complex amplitude field *E*(*x*, *y*) is obtained by multiplying *A*(*x*, *y*) with the transmission function of the non-periodic phase distribution exp[i*h _{d}*(*x*, *y*)]:

*E*(*x*, *y*) = *A*(*x*, *y*) exp[i*h _{d}*(*x*, *y*)].

A two-dimensional inverse fast Fourier transform (2D IFFT) is applied to *E*(*x*, *y*), and the phase-conjugated diffusive beam is obtained by inverting the sign of the phase term of *IFFT*[*E*(*x*, *y*)], where *IFFT*[· · · ] denotes the IFFT operator. *IFFT*[*E*(*x*, *y*)] has an arbitrary intensity, and only the phase distribution is important for the Fourier hologram [21,22]. Therefore, the hologram image *H _{dis}*(*x*, *y*) incident on the spatial light modulator (SLM) can be expressed as

*H _{dis}*(*x*, *y*) = arg{(*IFFT*[*E*(*x*, *y*)])*},

where * denotes complex conjugation.
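The encryption step above can be sketched numerically. The following is a minimal NumPy illustration, not the experimental implementation; the grid size, window dimensions, and the uniform random phase `h_d` are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                       # grid size (arbitrary for illustration)

# Window function A(x, y): unity inside an Ax x Ay rectangle, zero outside.
A = np.zeros((N, N))
Ax = Ay = 128
x0 = (N - Ax) // 2
A[x0:x0 + Ay, x0:x0 + Ax] = 1.0

# Non-periodic (random) phase distribution h_d(x, y) in [0, 2*pi).
h_d = rng.uniform(0.0, 2.0 * np.pi, size=(N, N))

# Complex amplitude field at the input (Fourier) plane.
E = A * np.exp(1j * h_d)

# Diffusive beam: 2D IFFT of E. Phase conjugation inverts the sign of
# its phase term; since only the phase matters for the Fourier hologram,
# the hologram displayed on the SLM is the conjugated phase alone.
diffusive = np.fft.ifft2(E)
H_dis = np.angle(np.conj(diffusive))    # phase-only hologram
```

`H_dis` is the phase-only (kinoform-like) pattern that would be mapped to SLM gray levels in the actual system.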

At the measurement step, the reference beam *r _{slm}*(*x*, *y*), which is a plane wave reflected by the beam splitter (BS), is phase-modulated with *H _{dis}*(*x*, *y*) displayed on the SLM. By focusing the modulated beam on the specimen through a 2*f* system using an objective lens, *E*^{*′}(*x*, *y*), a replica of *E*^{*}(*x*, *y*), is generated and incident on the specimen as the actual probe beam. The beam *E*^{*} _{r}(*x*, *y*), which is reflected by the specimen, is collimated by the objective lens and interferes with the reference beam *r _{cam}*(*x*, *y*) that passed through the BS. The interference fringe pattern *H _{rep}*(*x*, *y*) is detected by a camera as

*H _{rep}*(*x*, *y*) = |*E*^{*} _{r}(*x*, *y*) + *r _{cam}*(*x*, *y*)|^{2},

where *E*^{*} _{r}(*x*, *y*) is the field returned after the probe beam *ℱ*^{−1}[*E*^{*′}(*x*, *y*)] is reflected by the specimen; *ℱ*^{−1} represents the inverse Fourier transform performed by the 2*f* system using the objective lens. The reference beam *r _{cam}*(*x*, *y*) is given by

*r _{cam}*(*x*, *y*) = √(*I _{r}*(*x*, *y*)) exp[i*k*(*x* sin *θ _{x}* + *y* sin *θ _{y}*)],

where *k* is the wavenumber, *θ _{x}* and *θ _{y}* are the incident angles of *r _{cam}*(*x*, *y*) with respect to the *x*- and *y*-directions, and *I _{r}*(*x*, *y*) is the intensity distribution of *r _{cam}*(*x*, *y*). The specimen is positioned at a plane equivalent to the input plane, and it is regarded as a multi-layered object along the depth direction. Note that *E*^{*} _{r}(*x*, *y*) is affected by the 3D distribution of the complex refractive index *S*(*x*, *y*, *z*) of the specimen.
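The off-axis recording and its Fourier-domain demodulation (the fringe analysis of [23,24] applied in the decryption step) can be sketched as follows. The object field, carrier frequency, grid size, and filter window are hypothetical values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
x = np.arange(N)
X, Y = np.meshgrid(x, x)

# Hypothetical band-limited field standing in for the detected object
# beam; its spectrum occupies only low frequencies so that the off-axis
# sidebands separate cleanly in the Fourier domain.
spec = np.zeros((N, N), dtype=complex)
spec[:8, :8] = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
E_r = np.fft.ifft2(spec)

# Tilted plane-wave reference r_cam with a carrier of 64 cycles per axis
# (the role played by the incident angles theta_x, theta_y).
r_cam = np.exp(1j * 2 * np.pi * 64 * (X + Y) / N)

# Recorded fringe pattern: the camera detects intensity only.
H_rep = np.abs(E_r + r_cam) ** 2

# Fourier-transform fringe analysis: the cross term E_r * conj(r_cam)
# sits in a sideband around frequency index 192 on each axis; select it
# with a square window W (the Nyquist aperture), then undo the carrier.
F = np.fft.fft2(H_rep)
W = np.zeros((N, N))
W[184:208, 184:208] = 1.0
E_rec = np.fft.ifft2(F * W) * r_cam       # recovered complex field
```

Because the synthetic object spectrum and the carrier were chosen so the four hologram terms do not overlap, `E_rec` reproduces `E_r` up to numerical precision.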

During the decryption step, *E*^{*} _{r}(*x*, *y*) is reproduced digitally by applying a fringe analysis to the measured *H _{rep}*(*x*, *y*) using a fast Fourier transform (FFT) [23,24]: the signal term of *FFT*[*H _{rep}*(*x*, *y*)] is extracted with the Nyquist aperture *W* and demodulated by the carrier of *r _{cam}*(*x*, *y*). The intensity distribution |*r _{cam}*(*x*, *y*)|^{2} can be obtained with the camera before recording *H _{rep}*(*x*, *y*). Assuming the specimen contains multiple parallel reflecting surfaces (layers) along the depth dimension, *E*^{*} _{r}(*x*, *y*) can be expressed as the sum of the components reflected from those surfaces:

*E*^{*} _{r}(*x*, *y*) = Σ _{*j*=1}^{*M*} *S _{j}*(*x*, *y*) *E*^{*}(*x*, *y*, 2*j*Δ*z*).

Here, the subscript *j* denotes the layer’s position in the specimen; *M* is the total number of layers; Δ*z* is the distance between neighboring layers; *E*^{*}(*x*, *y*, 2*j*Δ*z*) is *E*^{*}(*x*, *y*) propagated over a distance 2*j*Δ*z*; and *S _{j}*(*x*, *y*) is the complex refractive index of the specimen at the *j*th layer. Note that the propagation distance is 2*j*Δ*z*, rather than *j*Δ*z*, as it includes the propagation distance after the reflection from the *j*th layer. Let us assume that the desired 2D tomographic image is the image of the *l*th layer. The complex amplitude field at the input plane of the encryption step is then obtained by digitally defocusing by −2*l*Δ*z* using the numerical beam propagation method [25,26], followed by multiplication of *E*^{*} _{r}(*x*, *y*) with the transmission function exp[i*h _{d}*(*x*, *y*)], which has the phase distribution used in the encryption step. The phase distribution exp[i*h _{d}*(*x*, *y*)] applied in the encryption step is cancelled out when *j* = *l*. When *j* ≠ *l*, the phase distribution is not cancelled, owing to the residual defocus term 2(*j* − *l*)Δ*z*. Therefore, when the IFFT is applied to the resulting complex amplitude field, the undesirable components are widely spread, owing to the uncorrected phase modulation. The virtual spatial filter (applied after the IFFT) then transmits the desired component, while most of the undesired components are blocked. Following this step, the complex amplitude field of the desired component at *z* = *l*Δ*z* is obtained by calculating the FFT and using the numerical beam propagation method. Simultaneously, the intensity and phase images are obtained by calculating the norm and the argument of the resulting complex amplitude field. The 3D tomographic image is reconstructed as volume data from all the 2D tomographic images (i.e., one for each depth position) by iteratively implementing the above calculations for each value of *l*. As noted above, this discussion assumes the absence of scattering and absorption (i.e., the Born approximation [20] is valid). If this assumption does not hold, the wavefront of the reflected signal is distorted by scattering and absorption, and the phase distribution exp[i*h _{d}*(*x*, *y*)] cannot be cancelled out even for a reflected signal from the correct depth position (*j* = *l*). The distorted signal is then diffused by the random diffuser and blocked by the virtual spatial filter. In other words, VPC acts as an imaging gate that passes only the purely reflected (backscattered) beam.
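The sectioning mechanism can be illustrated in isolation: a field carrying the correct *h _{d}* collapses back into a compact spot after the IFFT, whereas a field carrying an uncorrelated phase (standing in here for the uncancelled defocus of a wrong layer) spreads across the plane and is rejected by the virtual spatial filter. The following is a minimal sketch under these simplified assumptions, not the full reconstruction pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256

# Window A(x, y) used in the encryption step (64 x 64 square, arbitrary).
A = np.zeros((N, N))
s = (N - 64) // 2
A[s:s + 64, s:s + 64] = 1.0

h_d = rng.uniform(0, 2 * np.pi, (N, N))     # encryption phase
E_conj = A * np.exp(-1j * h_d)              # ideal in-focus field (j = l)

# A defocused layer is emulated by an uncorrelated residual phase
# (standing in for the uncancelled defocus term 2(j - l)*dz).
E_wrong = A * np.exp(-1j * rng.uniform(0, 2 * np.pi, (N, N)))

def filtered_energy(field, h_d, r=32):
    """Decryption core: re-apply exp(i*h_d), IFFT, then return the
    energy fraction passing a centered virtual spatial filter of radius r."""
    u = np.fft.fftshift(np.fft.ifft2(field * np.exp(1j * h_d)))
    Y, X = np.ogrid[:N, :N]
    mask = (X - N // 2) ** 2 + (Y - N // 2) ** 2 <= r ** 2
    p = np.abs(u) ** 2
    return p[mask].sum() / p.sum()

good = filtered_energy(E_conj, h_d)   # concentrated inside the filter
bad = filtered_energy(E_wrong, h_d)   # spread out and mostly rejected
```

The correctly encoded layer delivers most of its energy through the filter, while the wrong-layer field passes only a fraction roughly equal to the filter's area ratio, which is the gating effect exploited above.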

As discussed, the VPC provides optical sectioning for achieving depth resolution through the digital processes of pre-encryption and post-decryption. Therefore, this method provides depth imaging without using low-coherence interferometry or a confocal optical system. An important advantage of using VPC is that the 3D tomographic images can be obtained by measuring only a single digital hologram, as the complex amplitude contains both the intensity and phase 3D information.

#### 2.2. Experimental optical setup

Figure 2 illustrates the optical setup of the experiment. A Mach-Zehnder interferometer was employed in order to simplify the adjustment of the incident angle of *r _{cam}*(*x*, *y*). The intensity distribution of the window function *A*(*x*, *y*), as well as the phase distributions *h _{d}*(*x*, *y*) and *H _{dis}*(*x*, *y*) used for the experiment, are shown in Figs. 3(a)–3(c), respectively. The output of the fiber-coupled light source (a diode-pumped solid-state continuous-wave laser, *λ* = 532 nm) was collimated using a collimating lens (CL, *f* = 300 mm). The collimated beam was divided into two components, *r _{slm}*(*x*, *y*) and *r _{cam}*(*x*, *y*), using a polarizing BS (PBS). The power ratio between these two beams was controlled using a half-wave plate (HWP) placed before the PBS. The phase of *r _{slm}*(*x*, *y*) was modulated by a phase-only spatial light modulator (PSLM, Hamamatsu Photonics, X10468-04; pixel size = 20 × 20 *μ*m^{2}; number of pixels = 800 × 600) displaying *H _{dis}*(*x*, *y*), and the diffusive beam *E*^{*}(*x*, *y*) was thereby generated. Note that *H _{dis}*(*x*, *y*) was treated as a grayscale image; the maximum gray level was set to 157, at which the maximum phase-modulation depth is 2*π*. The probe beam (modulated by the PSLM) was reduced and relayed to the back focal plane of the objective lens (OBJ, focal length *f* = 2.0 mm, working distance = 2.0 mm) by a 4*f* optical system (L1–L2 in Fig. 2). The undesired diffraction beam emerging from the PSLM was removed using an iris positioned between L1 and L2. Then, the probe beam was focused on the specimen by the OBJ. The beam reflected by the specimen was collimated again and relayed to the complementary metal-oxide-semiconductor image sensor (CMOS, FLIR Integrated Imaging Solutions Inc., Grasshopper3 GS3-U3-41C6M-C; pixel size = 5.5 × 5.5 *μ*m^{2}; number of pixels = 2048 × 2048) by a 4*f* optical system (L2–L3 in Fig. 2). The reflected beam interferes with *r _{cam}*(*x*, *y*) at BS3 (Fig. 2), and the interference fringe pattern was recorded as a digital hologram *H _{rep}*(*x*, *y*) using the CMOS image sensor. The contrast of the interference fringe pattern was maximized by adjusting the HWP placed before the PBS. In addition, the incident angles of the reference beam, *θ _{x}* and *θ _{y}*, were set within 2.77° (using mirrors M1 and M2, as illustrated in Fig. 2), so that the interval of the interference fringe pattern satisfies the sampling theorem for the pixel size of the CMOS. The 3D tomographic image of the specimen was reconstructed from the measured *H _{rep}*(*x*, *y*) by applying the VPC decryption step described in the previous section.

The VPC requires an accurate alignment between the PSLM and the CMOS, similar to ordinary digital phase conjugation experiments [27,28]. In our VPC experiment, this alignment is easier to perform, as the PSLM and CMOS are fully relayed with lenses. An image with a chessboard pattern was displayed on the PSLM in order to align the spatial positions of the PSLM and CMOS. The focal lengths of the relay lenses L1 and L3 were chosen so that one PSLM pixel corresponds to 4 × 4 CMOS pixels. The focal length of L2 was chosen so that the diameter of the probe beam approximately matches the pupil size of the OBJ. Moreover, although the reconstructed 3D image can contain speckle noise due to the diffused wavefront of the probe beam, this noise can be removed by averaging the reconstructed 3D image, as follows. First, multiple *H _{dis}*(*x*, *y*) are generated using different *h _{d}*(*x*, *y*). Then, multiple *H _{rep}*(*x*, *y*) are acquired while sequentially changing the *H _{dis}*(*x*, *y*) displayed on the PSLM. Finally, the intensity and phase distributions of the 3D tomographic images obtained from the multiple *H _{rep}*(*x*, *y*) through the digital decryption process are averaged. In our experiment, the number of averaging iterations was arbitrarily set to 50.
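The benefit of averaging over independent phase patterns can be sketched with a toy model: fully developed speckle (emulated here as the intensity of a circular complex Gaussian field, a standard statistical stand-in, not the actual reconstruction) has a contrast of about 1, and averaging *K* independent patterns lowers it toward 1/√*K*:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 256, 50          # image size and number of averaged patterns

def speckle_intensity():
    """One reconstruction: emulate fully developed speckle as the
    intensity of a circular complex Gaussian random field."""
    field = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return np.abs(field) ** 2

single = speckle_intensity()
averaged = np.mean([speckle_intensity() for _ in range(K)], axis=0)

def contrast(I):
    """Speckle contrast: standard deviation over mean of the intensity."""
    return I.std() / I.mean()

c1 = contrast(single)      # ~1 for fully developed speckle
cK = contrast(averaged)    # ~1/sqrt(K) after averaging K patterns
```

With *K* = 50, as in the experiment, the expected contrast reduction is a factor of about 7.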

#### 2.3. Compensation of the optical aberration using measured actual phase-rotation factors

In the VPC decryption step, the 3D tomographic image is obtained from a single measured hologram by refocusing 2D tomographic images at each depth position using numerical beam propagation, followed by a reconstruction of these images. The effect of the actual beam propagation has to be cancelled out numerically using the beam propagation method; in other words, the actual phase rotation factor of the propagation has to be consistent with that of the numerically generated propagation. However, the actual phase rotation factor is imperfect owing to the aberrations of the objective lens, which seriously deteriorates the image quality. This can be addressed by measuring, in advance, the actual phase rotation factors for each depth position. In the optical setup shown in Fig. 2, a mirror is positioned at the focal plane of the objective lens in order to obtain the actual phase rotation factors, and a plane image is displayed on the PSLM. Thus, through the objective lens, *r _{slm}*(*x*, *y*) directly irradiates the mirror and is reflected from it. The interference fringes of the reflected *r _{slm}*(*x*, *y*) and *r _{cam}*(*x*, *y*) are sequentially captured by the CMOS image sensor while sweeping the mirror position along the depth direction using a translation stage. The actual phase rotation factors are then obtained by applying a fringe analysis to the recorded interference fringe patterns. Figures 4(a)–4(c) show examples of the obtained actual phase rotation factors, where the mirror defocus distances are −50.0 *μ*m, 0.0 *μ*m, and 50.0 *μ*m, respectively. In contrast, Figs. 4(d)–4(f) show the phase rotation factors calculated by the angular spectrum method [25,26] using the same defocus distances as those in Figs. 4(a)–4(c). Comparing Figs. 4(a) and 4(d), it can be seen that the experimentally obtained phase image differs from the calculated one, owing to the aberrations of the objective lens. As mentioned above, the deterioration of the reproduced tomographic images can be prevented by storing (in a computer) the measured actual phase rotation factors for each depth position and using them for the numerical beam propagation performed in the VPC decryption step. We discuss the effect of the actual phase rotation factors in Section 3.
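The numerically generated phase rotation factors of Figs. 4(d)–4(f) follow from the angular spectrum method [25,26]: each plane-wave component acquires a phase exp[i·*dz*·√(*k*² − *k _{x}*² − *k _{y}*²)]. A minimal propagation kernel is sketched below; the grid spacing and field are illustrative values, not calibrated to the setup:

```python
import numpy as np

wavelength = 532e-9          # m (laser wavelength used in the setup)
dx = 5.5e-6                  # m, sample pitch (illustrative)
N = 256

def angular_spectrum_propagate(field, dz):
    """Propagate a complex field by dz using the angular spectrum method.
    Evanescent components (kx^2 + ky^2 > k^2) are suppressed."""
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(N, d=dx)
    KX, KY = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    kz_sq = k ** 2 - KX ** 2 - KY ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.where(kz_sq >= 0, np.exp(1j * kz * dz), 0.0)  # phase rotation factor
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagating forward and then backward by the same distance is the
# identity for propagating components, which is what the decryption
# step exploits when it digitally refocuses by -2*l*dz.
f0 = np.exp(-((np.arange(N) - N / 2) ** 2) / (2 * 20.0 ** 2))
f0 = np.outer(f0, f0).astype(complex)
f1 = angular_spectrum_propagate(f0, 50e-6)
f2 = angular_spectrum_propagate(f1, -50e-6)
```

In the aberration-compensated variant described above, the analytic kernel `H` would simply be replaced, depth by depth, with the measured phase rotation factors.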

## 3. Results and discussion

#### 3.1. 3D and depth imaging

The first application of the VPC in this study is the depth imaging of a cover glass (Matsunami Glass, C218181, with a thickness of 150 *μ*m and a refractive index of 1.53) positioned at the OBJ focal plane. The optical thickness is approximately 98.0 *μ*m, obtained from the shift of the focal plane due to refraction. The calculations in the decryption step were performed iteratively (256 times), using a numerical propagation distance of 0.5 *μ*m. A 3D tomographic image of 256 × 256 × 256 voxels (along the *x*, *y*, and *z* directions, respectively) was obtained as volume data using a computer. Figure 5(a) shows the intensity image at the *yz*-plane obtained from the 3D tomographic image reconstructed by the digital decryption process. Two signals, from the front and back surfaces of the cover glass, were obtained. The distance along the *z* direction between these two signals was approximately 96.5 *μ*m, which is in good agreement with the optical thickness of the cover glass. In order to confirm the effectiveness of the measured actual phase rotation factors (discussed in Section 2.3), the result obtained without using the actual phase rotation factors is shown in Fig. 5(b). Comparing the background noise of the images shown in Figs. 5(a) and 5(b), we confirm that the background noise around the signals is considerably suppressed by the calibration of the phase images. Further, in order to confirm the effectiveness of the VPC optical sectioning, we performed calculations for the case in which *h _{d}*(*x*, *y*) is not cancelled out in the digital decryption process; the obtained image at the *yz*-plane is shown in Fig. 5(c). Comparing Fig. 5(a) with Fig. 5(c), we can see that the optical sectioning property arises from cancelling out *h _{d}*(*x*, *y*) only at the depth positions of the signals. These results show that VPC-based depth imaging is achievable and that VPC is capable of resolving multiple objects along the depth direction.

Next, we demonstrate 3D imaging of a commercially-prepared specimen of frog blood cells. The prepared specimen was positioned at the OBJ focal plane, as illustrated in Fig. 6. In this measurement, the calculations in the decryption step were performed iteratively (128 times) using a numerical propagation distance of 0.1 *μ*m, and volume data of 256 × 256 × 128 voxels was obtained using a computer. Figures 7(a) and 7(b) show intensity images at several depth positions, sliced from the 3D tomographic image on the *xy*-plane, as well as intensity images sliced on the *yz*-plane, without and with VPC, respectively. The structures of the nucleus and cytoplasm of the frog blood cell are not visible in Fig. 7(a), as the cell’s signal overlaps with that of the cover glass. The structure is clearly visible in Fig. 7(b), as outlined by the red dotted circle. This confirms that the VPC suppresses the blurring outside the focal plane. In addition, the intensity signals coming from the surface of the cover glass and the cell nucleus correspond to *z* = 0.0 *μ*m and *z* = −1.0 *μ*m, respectively (the cell is behind the cover glass, as shown in Fig. 6). The corresponding phase images are shown in Fig. 7(c). The refractive index difference between the nucleus and the cytoplasm appears as a phase image, outlined by the black dotted circle in Fig. 7(c). Therefore, we verified that the VPC can simultaneously obtain the 3D reflectance and the refractive index distribution of the prepared specimen using a single-shot measurement. Note that the signal from the cover-glass surface behind the cell was weak and discontinuous because the cell and the cover glass were in close contact and their refractive indices were similar, as shown in Figs. 7(b) and 7(c).

#### 3.2. Characterization of VPC-OT

In this section, we discuss the spatial resolution and imaging speed of VPC-OT. First, the depth resolution for an ideal object (i.e., one without scattering and absorption) was experimentally analyzed by estimating the point spread function (PSF) along the depth direction. To estimate the PSF, we measured a flat mirror located at the OBJ focal plane. The reconstructed image at the *yz*-plane is shown in Fig. 8(a). The intensity profile along the orange dotted line outlined in Fig. 8(a) is plotted as an *I*-*z* curve in Fig. 8(b). The intensity peak of the *I*-*z* curve is at *z* = 0.0 *μ*m, reflecting the VPC optical sectioning property. The full width at half maximum (FWHM) of the intensity peak represents the PSF along the depth direction (by the same argument as in conventional CLSM). Using the curve fitting tool in MATLAB, we estimated the FWHM of our experiment to be approximately 0.94 *μ*m. For the presented method, the depth resolution depends on the depth of focus (DOF) of the objective lens. The DOF of the objective lens used in the experiment was 0.80 *μ*m; the estimated FWHM is reasonable, as it is close to this DOF value. The lateral (cross-sectional) resolution was estimated by measuring a USAF 1951 resolution test target (RES-2, Newport), also located at the OBJ focal plane. The obtained image at the *xy*-plane is shown in Fig. 8(c), and the intensity profile along the orange dotted line (Fig. 8(c)) is plotted in Fig. 8(d). The line pair outlined with the orange dotted line in Fig. 8(c) was the smallest observed pattern of the test target; its spatial frequency and spatial resolution were 228 lines/mm and 2.19 *μ*m, respectively. Here, the size of the virtual spatial filter (see Fig. 1(c)) determines the cross-sectional resolution, because the filter removes not only most of the noise but also the high-frequency signal. The virtual spatial filter used in this experiment was a 512.0 *μ*m circular aperture. Therefore, efforts to remove noise efficiently are required even if a large spatial filter is used. When the given random phase is combined with the defocus phase of each layer, it may be possible to separate the signal and noise without a spatial filter if they are uncorrelated. For example, when “defocus phase + random phase of layer A” and “defocus phase + random phase of layer B” are orthogonal, each layer could be completely separated even without a spatial filter.
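The FWHM estimate of Fig. 8(b) can be reproduced on synthetic data: locate the half-maximum crossings of the axial intensity profile and read off their separation. A minimal version using linear interpolation in place of MATLAB's curve-fitting tool, on a hypothetical Gaussian-shaped axial response:

```python
import numpy as np

def fwhm(z, I):
    """Full width at half maximum of a single-peaked profile I(z),
    with the half-maximum crossings located by linear interpolation."""
    half = I.max() / 2.0
    above = np.nonzero(I >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the left and right half-maximum crossings
    zl = np.interp(half, [I[i0 - 1], I[i0]], [z[i0 - 1], z[i0]])
    zr = np.interp(half, [I[i1 + 1], I[i1]], [z[i1 + 1], z[i1]])
    return zr - zl

# Synthetic I-z curve: a Gaussian whose true FWHM equals the 0.94 um
# value measured in the experiment (sigma = FWHM / (2*sqrt(2*ln 2))).
z = np.linspace(-5.0, 5.0, 2001)                     # um
sigma = 0.94 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
I = np.exp(-z ** 2 / (2.0 * sigma ** 2))
w = fwhm(z, I)
```

On measured data, a fitted model (as used by the authors) is more robust to noise than direct interpolation, but both reduce to the same width for a clean peak.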

The imaging speed of our method is determined by the frame rates of the CMOS and PSLM. In our experiment, the measurement speed is limited by the PSLM, as its frame rate and number of pixels (60 fps, 800 × 600 pixels) are lower than those of the CMOS (82 fps, 2048 × 2048 pixels). Therefore, a 3D image of 256 × 256 × 256 voxels can be obtained in approximately 833.33 ms when the image averaging is performed over 50 holograms. However, the averaging is unnecessary if the speckle noise is suppressed by optimizing the optical system shown in Fig. 2, for example by modulating not only the phase but also the intensity distribution of the probe beam. Furthermore, the difference between the discrete FFT and the continuous optical Fourier transform is considered to be a source of noise; the phase distribution of the random diffuser should be optimized so that this difference is compensated. The best results are obtained when the phase modulation distributions of the encoding and decoding processes are optimized independently. In addition, the high-frequency terms of the phase modulation distribution should be handled appropriately, as they may yield additional speckle noise. With the speckle noise suppressed, a 3D image can be obtained in approximately 16.67 ms. Moreover, given that a method for high-speed complex amplitude modulation using a digital micromirror device (DMD) has already been proposed [29], VPC-OT has a large potential for very high-speed 3D tomographic imaging.

## 4. Conclusion

We proposed VPC-OT for achieving 3D tomographic imaging with single-shot detection. In VPC-OT, the digital encryption and decryption of the probe beam provide an optical sectioning capability without using low-coherence interferometry or a confocal optical system. An important advantage of the VPC-OT technique is the ability to perform 3D tomographic imaging using a single-shot detection, which is challenging in conventional OTI. We successfully demonstrated the depth imaging of a cover glass as well as 3D tomographic imaging of frog blood cells, thus verifying the imaging abilities of VPC. In addition, imaging properties of VPC-OT, such as the spatial resolution, were studied.

As mentioned in Section 3.2, to improve the intensity ratio between the signal and the background noise (a disadvantage of VPC-OT), the non-periodic phase distribution *h _{d}*(*x*, *y*) can be optimized with methods such as simulated annealing and genetic algorithms. The imaging of an object through a scattering medium, such as an agar gel including microparticles, could be demonstrated in a future study for applying VPC to bioimaging; in this case, it must be carefully considered that the VPC operates as an imaging gate allowing only backscattered beams, as described above. In addition, the spatial resolutions along the depth and cross-sectional directions under actual conditions, with objects that include scattering, absorption, and optical aberrations, will be analyzed experimentally in our future work. Further, by introducing a high-speed SLM based on a DMD as well as a high frame rate imager, and by optimizing their properties, the large potential of VPC-OT for very high-speed 3D tomographic imaging will be verified in a future study.

## Funding

Japan Society for the Promotion of Science (JSPS) (JP17J01545).

## References and links

**1. **D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical Coherence Tomography,” Science **254**(5035), 1178–1181 (1991). [CrossRef] [PubMed]

**2. **A. F. Fercher, “Optical Coherence Tomography,” J. Biomed. Opt. **1**(2), 157–173 (1996). [CrossRef] [PubMed]

**3. **J. G. Fujimoto, C. Pitris, S. A. Boppart, and M. E. Brezinski, “Optical Coherence Tomography: An Emerging Technology for Biomedical Imaging and Optical Biopsy,” Neoplasia **2**(1–2), 9–25 (2000). [CrossRef] [PubMed]

**4. **D. Stifter, “Beyond biomedicine: a review of alternative applications and developments for optical coherence tomography,” Appl. Phys. B **88**(3), 337–357 (2007). [CrossRef]

**5. **R. F. Spaide, J. M. Klancnik Jr, and M. J. Cooney, “Retinal Vascular Layers Imaged by Fluorescein Angiography and Optical Coherence Tomography Angiography,” JAMA Ophthalmol. **133**(1), 45–50 (2015). [CrossRef]

**6. **R. Leitgeb, C. K. Hitzenberger, and A. F. Fercher, “Performance of fourier domain vs. time domain optical coherence tomography,” Opt. Express **11**(8), 889–894 (2003). [CrossRef] [PubMed]

**7. **M. Wojtkowski, V. J. Srinivasan, T. H. Ko, J. G. Fujimoto, A. Kowalczyk, and J. S. Duker, “Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation,” Opt. Express **12**(11), 2404–2422 (2004). [CrossRef] [PubMed]

**8. **E. Beaurepaire, A. C. Boccara, M. Lebec, L. Blanchot, and H. Saint-Jalmes, “Full-field optical coherence microscopy,” Opt. Lett. **23**(4), 244–246 (1998). [CrossRef]

**9. **M. Akiba, K. P. Chan, and N. Tanno, “Full-field optical coherence tomography by two-dimensional heterodyne detection with a pair of CCD cameras,” Opt. Lett. **28**(10), 816–818 (2003). [CrossRef] [PubMed]

**10. **M. Minsky, U.S. Patent 3013467 (1961).

**11. **T. R. Corle, C.-H. Chou, and G. S. Kino, “Depth response of confocal optical microscopes,” Opt. Lett. **11**(12), 770–772 (1986). [CrossRef] [PubMed]

**12. **S. W. Paddock, “Confocal laser scanning microscopy,” BioTechniques , **27**(5), 992–1004 (1999). [PubMed]

**13. **G. Q. Xiao and G. S. Kino, “A real-time confocal scanning optical microscope,” Proc. SPIE **809**(22), 107–112 (1987). [CrossRef]

**14. **T. Tanaami, S. Otsuki, N. Tomosada, Y. Kosugi, M. Shimizu, and H. Ishida, “High-speed 1-frame/ms scanning confocal microscope with a microlens and Nipkow disks,” Appl. Opt. **41**(22), 4704–4708 (2002). [CrossRef] [PubMed]

**15. **H. J. Tiziani and H. M. Uhde, “Three-dimensional analysis by a microlens-array confocal arrangement,” Appl. Opt. **33**(4), 567–572 (1994). [CrossRef] [PubMed]

**16. **M. Eisner, N. Lindlein, and J. Schwider, “Confocal microscopy with a refractive microlens-pinhole array,” Opt. Lett. **23**(10), 748–749 (1998). [CrossRef]

**17. **E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. **1**(4), 153–156 (1969). [CrossRef]

**18. **Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express **17**(1), 266–277 (2009). [CrossRef] [PubMed]

**19. **W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, “Tomographic phase microscopy,” Nature Methods **4**(9), 717–719 (2007). [CrossRef] [PubMed]

**20. **N. Streibl, “Three-dimensional imaging by a microscope,” J. Opt. Soc. Am. A **2**(2), 121–127 (1985). [CrossRef]

**21. **L. B. Lesem, P. M. Hirsch, and J. A. Jordan, “The Kinoform: A New Wavefront Reconstruction Device,” IBM J. Res. Dev. **13**(2), 150–155 (1969). [CrossRef]

**22. **J. Amako, H. Miura, and T. Sonehara, “Speckle-noise reduction on kinoform reconstruction using a phase-only spatial light modulator,” Appl. Opt. **34**(17), 3165–3171 (1995). [CrossRef] [PubMed]

**23. **M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. **72**(1), 156–160 (1982). [CrossRef]

**24. **D. J. Bone, H.-A. Bachor, and R. J. Sandeman, “Fringe-pattern analysis using a 2-D Fourier transform,” Appl. Opt. **25**(10), 1653–1660 (1986). [CrossRef] [PubMed]

**25. **F. Shen and A. Wang, “Fast-Fourier-transform based numerical integration method for the Rayleigh-Sommerfeld diffraction formula,” Appl. Opt. **45**(6), 1102–1110 (2006). [CrossRef] [PubMed]

**26. **K. Matsushima and T. Shimobaba, “Band-Limited Angular Spectrum Method for Numerical Simulation of Free-Space Propagation in Far and Near Fields,” Opt. Express **17**(22), 19662–19673 (2009). [CrossRef] [PubMed]

**27. **M. Cui and C. Yang, “Implementation of a digital optical phase conjugation system and its application to study the robustness of turbidity suppression by phase conjugation,” Opt. Express **18**(4), 3444–3455 (2010). [CrossRef] [PubMed]

**28. **M. Jang, H. Ruan, H. Zhou, B. Judkewitz, and C. Yang, “Method for auto-alignment of digital optical phase conjugation systems based on digital propagation,” Opt. Express **22**(12), 14054–14071 (2014). [CrossRef] [PubMed]

**29. **S. A. Goorden, J. Bertolotti, and A. P. Mosk, “Superpixel-based spatial amplitude and phase modulation using a digital micromirror device,” Opt. Express **22**(15), 17999–18009 (2014). [CrossRef] [PubMed]