## Abstract

In this manuscript, we propose a quantitative phase imaging method based on deep learning that uses single-wavelength illumination to realize dual-wavelength phase-shifting phase recovery. Using a conditional generative adversarial network (CGAN), from one interferogram recorded at a single wavelength we obtain interferograms at other wavelengths, the corresponding wrapped phases, and then the phases at synthetic wavelengths. The feasibility of the proposed method is verified by simulation and experiments. The results demonstrate that the measurement range of single-wavelength interferometry (SWI) is improved while keeping a simple setup, avoiding the difficulty of using two wavelengths simultaneously. This provides an effective solution to the problem of phase unwrapping and the measurement-range limitation in phase-shifting interferometry.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Optical interferometry is widely used for quantitative phase imaging, the investigation of mechanical components, the visualization of droplet evaporation and other fields [1–3]. Single-wavelength interferometry (SWI) [4] is a well-established method for high-accuracy phase measurement, in which the intensity of the recorded interferogram is a sine (cosine) function of the phase, so that the phase information can be obtained by an inverse cosine or arctangent operation. When the optical path difference is greater than the wavelength, the real phase exceeds 2π. The calculated phase is then wrapped, and phase unwrapping must be applied to remove the 2π discontinuities [5–7]. However, the fundamental assumption in phase unwrapping is that the height variation of the measured sample between adjacent sampling points is less than half a wavelength. This assumption limits the measurement range of SWI. For samples with complex structure and large surface gradients, the phase changes between adjacent pixels exceed half a wavelength, and in general the existing unwrapping algorithms fail to correctly retrieve the three-dimensional shape of the investigated sample. Dual-wavelength interferometry (DWI) [8] can overcome the 2π ambiguity of SWI by using two wavelengths to form a longer synthetic wavelength [9]. Moreover, by selecting two suitable wavelengths, the synthetic wavelength of holographic interferometry can be extended to the micrometer or even millimeter range [10–13].

In recent years, many DWI methods have been developed [14–22]. Dual-wavelength interferometry with single-wavelength phase shifting (SWPS-DWI) was proposed in [14,15]; this method needs to collect a group of interference fringes at each wavelength to extract the wrapped phases. However, it is time-consuming to perform acquisition and phase calculation for both sets of interferograms. To simplify interferogram acquisition and phase calculation, a series of methods were proposed to separate the single-wavelength wrapped phases directly from a hybrid interferogram, but at the cost of accuracy [16–19]. In [20,21], a color CCD is used to simultaneously record interferograms at two wavelengths, and the color interferograms are divided into different groups. Compared with SWPS-DWI, the acquisition time is halved, but two problems arise. One is that, when the wavelength difference is too small, it is difficult to ensure the correct separation of the single-wavelength interferograms; the other is that incomplete alignment of the interferograms at different wavelengths causes errors. Single-exposure DWI based on parallel phase shifting can achieve real-time dynamic measurement, but with reduced spatial resolution and the need for expensive image sensors [22]. All the methods mentioned above need two lasers with different wavelengths to obtain hybrid interferograms or interferograms at different wavelengths. Therefore, the complexity of the system and the cost of the experiment limit the application of DWI.

Deep learning (DL) has shown its advantages in many applications, such as optical imaging [23,24], phase recovery and holographic image reconstruction [25,26], digital holographic microscopy [27], super-resolution microscopy [28], phase unwrapping [29,30], etc. [31]. In this paper, we propose a DWI method based on DL that uses a single wavelength to realize dual-wavelength phase-shifting phase recovery. A conditional generative adversarial network (CGAN) [32,33] is used to learn the correspondence between interferograms at different wavelengths, so that only interferograms at one wavelength are needed to obtain interferograms at other wavelengths. Two- or multi-step phase shifting is used to obtain the wrapped phases at the different wavelengths, and then the phase at the synthetic wavelength. In the proposed method, the interferogram at a given wavelength (such as 632.8 nm) yields the interferograms at other wavelengths. The measurement range of SWI is thereby improved while keeping a simple setup, avoiding the experimental difficulty of using two wavelengths simultaneously. This provides an effective solution to the problem of phase unwrapping and the measurement-range limitation in phase-shifting interferometry. Simulation and experiment verify the feasibility of the method.

## 2. Methods

#### 2.1 Brief review of DWI

In DWI, it is assumed that lasers with wavelengths ${\lambda _l}$ ($l = 1,2$) are used as light sources. The phase distribution of the light wave passing through a sample having height $h(x,y)$ is:

The relationship between the real phase and the wrapped phase is:

where ${\varphi _l}(x,y)$ denotes the corresponding wrapped phase obtained from the interferogram. The relations between the heights of the measured sample and the wrapped phases are:

A CGAN is constructed for the phase retrieval, and the flow chart of the proposed method is shown in Fig. 1. By using the interferogram at ${\lambda _1}$ and the shared parameters in the network, the corresponding interferogram at another wavelength is obtained. Repeating this process *M* times with different phase-shifting interferograms yields a sequence of phase-shifted interferograms at different wavelengths. The phase-shifting phase retrieval method is used to obtain the wrapped phases at the individual wavelengths from these interferograms, and the phase at the synthetic wavelength is then obtained through Eq. (5). The CGAN is trained to learn the principle of recovering the interferogram at another wavelength from the interferogram recorded at 632.8 nm.
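As a numerical illustration of the synthetic-wavelength idea reviewed above, the sketch below computes the longer synthetic wavelength from two illumination wavelengths and lifts the wrapped-phase difference into $[0, 2\pi)$. The function names are ours, and the subtraction-based formula is the standard dual-wavelength one; since Eqs. (1)–(5) are not reproduced in this text, treat this as an assumed, not verbatim, implementation.

```python
import numpy as np

def synthetic_wavelength(lam1, lam2):
    """Longer synthetic wavelength formed from two single wavelengths."""
    return lam1 * lam2 / abs(lam1 - lam2)

def synthetic_phase(phi1, phi2):
    """Difference of the two wrapped phases, lifted into [0, 2*pi):
    a phase wrapped on the synthetic wavelength rather than on lambda_1."""
    dphi = np.asarray(phi1) - np.asarray(phi2)
    return np.where(dphi < 0, dphi + 2 * np.pi, dphi)

# 632.8 nm and 532 nm give a synthetic wavelength of about 3.34 um,
# extending the unambiguous range well beyond either single wavelength
lam_s = synthetic_wavelength(632.8, 532.0)  # nm
```

The closer the two wavelengths are, the longer the synthetic wavelength, at the price of amplified phase noise in the difference.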

#### 2.2 Preparation of training data

Simulated data are used for training, and input initial interferograms (interferograms at wavelength ${\lambda _1}$) are calculated using Eqs. (6) and (7):

where *x* and *y* are pixel coordinates with the center of the interferogram as the origin, and $n = 1,2,3 \cdots N$ are the sequence numbers of the interferograms. According to Eq. (3), we can obtain the interferograms at wavelength ${\lambda _2}$:

In this method, there is a well-defined input-output relationship between the interferograms at different wavelengths, as shown in Eqs. (6) and (8). This relationship can be used to generate a large amount of simulated data covering various random shapes. The training data are produced by performing arithmetic operations on Gaussian functions with different mean and variance values and on shape-generating functions. Furthermore, Gaussian noise or multiplicative noise at different levels is added to these data. For training, 20,000 interferograms with 256 × 256 pixels at the wavelength of 632.8 nm and the corresponding interferograms at 532 nm were generated. A few simulated interferograms at 632.8 nm are shown in Fig. 2.
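The data-generation recipe of this subsection can be sketched as follows. The cosine fringe model and the phase-height relation $\varphi = 2\pi h/\lambda$ are assumptions standing in for Eqs. (6)–(8), which are not reproduced in this text; the blob counts, scales, and noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x, y = np.meshgrid(np.arange(N) - N // 2, np.arange(N) - N // 2)

def random_height(n_blobs=5, max_h=1000.0):
    """Random height map (nm) built as a sum of Gaussians, as in Sec. 2.2."""
    h = np.zeros((N, N))
    for _ in range(n_blobs):
        cx, cy = rng.uniform(-N / 4, N / 4, 2)    # blob center
        s = rng.uniform(10, 60)                   # blob width (pixels)
        h += rng.uniform(0, 1) * np.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                                        / (2 * s ** 2))
    return max_h * h / h.max()

def interferogram(h, lam, delta=0.0, noise=0.01):
    """Phase-shifted fringe pattern at wavelength lam (nm), with additive
    Gaussian noise; phi = 2*pi*h/lam is the assumed phase-height relation."""
    phi = 2 * np.pi * h / lam
    return 0.5 + 0.5 * np.cos(phi + delta) + noise * rng.standard_normal(h.shape)

h = random_height()
I1 = interferogram(h, 632.8)   # network input at lambda_1
I2 = interferogram(h, 532.0)   # corresponding label at lambda_2
```

Because both fringe patterns come from the same height map, the pair (I1, I2) directly encodes the wavelength-to-wavelength mapping the CGAN must learn.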

#### 2.3 Network structure

A CGAN is designed to generate the interferogram at a different wavelength from an interferogram at a given wavelength. The training strategy is shown in Fig. 3. The CGAN contains a generator network and a discriminator network. A deep U-Net structure is adopted in the generator network. Taking the 532 nm interferogram as an example, the trained generator network produces the interferogram at 532 nm using the interferogram at 632.8 nm as input. During training, the interferogram at 632.8 nm is first fed into the generator network and passed through 8 convolution layers to extract the feature maps of the input image; the estimated interferogram at 532 nm is then produced by passing through 8 deconvolution layers. In addition, skip connections between the convolution and deconvolution layers of the generator supplement the high-frequency information of the reconstructed image. The discriminator network adopts the PatchGAN structure. There are two groups of inputs for the discriminator, and each group is convolved five times to obtain a 30×30 feature map. The input interferogram X and the output G(X) are concatenated as group 1 and fed to the discriminator to generate a feature map D1. The label interferogram and the interferogram at 632.8 nm are concatenated as group 2 and fed to the discriminator to generate a feature map D2. D1 and D2 are used to build the loss function of the discriminator, which continuously optimizes the parameters of the discriminator network. D1, G(X) and the label interferogram are used to build the loss function of the generator, and are likewise used to optimize the parameters of the generator network. When the discriminator network can no longer distinguish the two groups of inputs (G(X) is very close to the label interferogram), the optimized generator network is considered to have learned the correspondence between the interferograms at different wavelengths.
The specific parameters of each layer, including the size and stride of the convolution kernel, the regularization mode, etc., are shown in Fig. 3. In addition, Dropout (p = 0.5) is applied to the first three layers of the generator network to prevent overfitting. After training, the model can be used to retrieve the interferogram at 532 nm from the interferogram at 632.8 nm. The designed CGAN architecture is implemented in Python 3.7 on a PC with an NVIDIA GeForce GTX 1080 Ti GPU.
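The 30×30 PatchGAN output quoted above can be checked with the standard convolution output-size formula. The 4×4 kernels, padding of 1, and the (2, 2, 2, 1, 1) stride pattern are assumptions consistent with the usual pix2pix discriminator; the exact parameters used by the authors are given only in Fig. 3.

```python
def conv_out(size, kernel, stride, padding):
    """Output spatial size of one convolution layer (floor division)."""
    return (size + 2 * padding - kernel) // stride + 1

# Assumed pix2pix-style discriminator: 4x4 kernels, padding 1,
# three stride-2 layers followed by two stride-1 layers
size = 256
for stride in (2, 2, 2, 1, 1):
    size = conv_out(size, kernel=4, stride=stride, padding=1)
# 256 -> 128 -> 64 -> 32 -> 31 -> 30
```

Each of the 30×30 output scores then judges the realism of one local patch of the 256×256 input rather than the whole image.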

During prediction, the interferogram at 632.8 nm recorded by the Mach-Zehnder interferometer is input into the optimized network to obtain the predicted interferogram at 532 nm. When the resolution of the interferogram is less than 256×256 pixels, its edges can be zero-padded to 256×256 pixels for prediction. When the resolution is greater than 256×256 pixels, the interferogram can be predicted by segmentation and reassembly. The corresponding loss functions are defined as follows:

where *n* denotes the number of sample data, **W** represents the weights, and *γ* is the weight of the L1 loss, set to *γ* = 100. *Loss*(**D**) is the loss function of the discriminator network, *Loss*(**G**) is the loss function of the generator network, **D1** and **D2** are the outputs of the discriminator network, **G(X)** is the generated interferogram, and **Y** is the corresponding label interferogram.
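A minimal sketch of the two objectives as described: since the printed loss equations are not reproduced in this text, this follows the standard pix2pix form, with binary cross-entropy on the 30×30 score maps and a γ-weighted L1 term, and should be read as an assumed reconstruction rather than the authors' exact expressions.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy between a score map and an all-0/all-1 target."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def loss_discriminator(D1, D2):
    """D1 scores (input, generated) pairs -> pushed toward 0;
    D2 scores (input, label) pairs -> pushed toward 1."""
    return bce(D1, np.zeros_like(D1)) + bce(D2, np.ones_like(D2))

def loss_generator(D1, GX, Y, gamma=100.0):
    """Adversarial term (fool the discriminator into scoring 1) plus a
    gamma-weighted L1 distance between generated and label interferograms."""
    return bce(D1, np.ones_like(D1)) + gamma * np.mean(np.abs(GX - Y))
```

The large γ = 100 makes the L1 fidelity term dominate, so the adversarial term mainly sharpens fringe detail rather than driving the overall fit.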

## 3. Numerical simulation and discussion

#### 3.1 Simulation results

Numerical simulation is used to analyze the performance of the method. In the first test, a vortex phase plate with a maximum height of 0.899 microns is simulated. Four-frame phase-shifting interferograms with 256 × 256 pixels at 632.8 nm and the corresponding interferograms at 532 nm (ground truth) are generated. Examples of the interferograms at 632.8 nm and 532 nm are shown in Figs. 4(a) and (b), respectively. The simulated interferograms at 632.8 nm are input into the trained network, and the reconstructed interferograms at 532 nm are obtained; one of the generated interferograms is shown in Fig. 4(c). To compare the output with the ground truth, the intensity distributions along the 85th line of the output and the ground truth are drawn in Fig. 4(d). The cross-section curve of the reconstructed interferogram almost coincides with the ground truth, which demonstrates the feasibility of this approach. Furthermore, by using the advanced least-squares iterative algorithm (AIA) [34], the wrapped phases at 632.8 nm and 532 nm are calculated and shown in Figs. 5(a) and (b). The preset height map of the vortex phase plate is given in Fig. 5(c). The height map of the vortex phase plate at the synthetic wavelength [Fig. 5(d)] is determined by Eq. (5). Moreover, Fig. 5(e) gives the height map obtained by combining the immune algorithm for phase ambiguity [19,35] with the height map at the synthetic wavelength. In addition, the corresponding height distributions along the 85th line in Figs. 5(c)-(e) are shown in Fig. 5(f). It can be seen that the proposed method correctly reconstructs the height of the sample, and that the noise in the height distribution after using the immune algorithm is effectively decreased and is basically consistent with the preset height distribution. These results further demonstrate the feasibility of the proposed method. The training and testing error curves for the trained network are shown in Fig. 6. These curves converge quickly to almost the same value, which demonstrates that the network performs well on the testing dataset.
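The four-frame phase-shifting retrieval used above reduces, for known π/2 shifts, to the classical four-step formula; AIA [34] generalizes this to unknown random shifts, which the sketch below does not attempt.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four frames with known pi/2 phase shifts.
    AIA [34] additionally estimates the (random) shifts; this is the
    fixed-step special case only."""
    return np.arctan2(I4 - I2, I1 - I3)

# round trip on synthetic cosine fringes, I = a + b*cos(phi + delta)
phi = np.linspace(-3.0, 3.0, 100)
frames = [0.5 + 0.4 * np.cos(phi + d)
          for d in (0.0, np.pi / 2, np.pi, 1.5 * np.pi)]
rec = four_step_phase(*frames)   # recovers phi exactly on (-pi, pi)
```

Because I1 − I3 = 2b cos φ and I4 − I2 = 2b sin φ, the arctangent cancels both the background a and the modulation b.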

#### 3.2 Discussion of the measurement range for the CGAN based DWI

In this section, the measurement range of the proposed method for samples with different heights is discussed. We simulated four groups of interferograms for vortex phase plates of different heights, as shown in Fig. 7, column (a); the corresponding interferograms at 532 nm obtained from the trained generator network are shown in column (b). Column (c) gives the four preset height distributions, with peak-to-valley (PV) heights of 0.153, 0.479, 0.786 and 1.534 microns, respectively. The phase distributions are obtained by applying the AIA approach to the four-frame phase-shifting interferograms at 632.8 nm and then unwrapping; the corresponding height distributions are shown in column (d). For comparison, the height distributions at the synthetic wavelength are obtained from the wrapped phases at 632.8 nm and 532 nm using Eq. (5), and the results are shown in column (e). For quantitative analysis, the heights reconstructed by the different methods are listed in Table 1. For step samples, the measurement range of SWI is only half of the illumination wavelength (316.4 nm), but the measurement range can be increased to half of the synthetic wavelength (1.67 µm) by the proposed method. When the height variation between adjacent pixels is less than 316.4 nm, both the SWI method and our method give correct results, as shown in the first row of Table 1. However, when the height variation between adjacent pixels exceeds half of the wavelength, the phase recovered by the SWI method cannot be correctly unwrapped. In this case, even when the height variation between adjacent pixels exceeds half of the illumination wavelength, or even one full illumination wavelength (632.8 nm), the trained CGAN can still correctly estimate the single-wavelength interferogram at 532 nm from the adjacent fringe information, and thus recover the accurate height of the sample, as shown in the second and third rows of Table 1. 
In addition, we also investigated samples with a height close to half of the synthetic wavelength, and the correct phase distribution can still be obtained with our method; the PV values of the recovered heights are shown in the fourth row of Table 1. That is to say, for samples with discontinuous phase, the measurement range can be extended to half of the synthetic wavelength.
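The range figures quoted in this subsection follow directly from the two wavelengths; a quick check:

```python
lam1, lam2 = 632.8, 532.0                    # illumination wavelengths, nm

swi_range = lam1 / 2                         # step-height range of SWI: ~316.4 nm
synthetic = lam1 * lam2 / abs(lam1 - lam2)   # longer synthetic wavelength: ~3.34 um
dwi_range = synthetic / 2                    # step-height range of the method: ~1.67 um
```

The roughly fivefold gain in unambiguous range comes entirely from the 100.8 nm wavelength difference in the denominator.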

#### 3.3 Detailed analysis for the simple input-output method

In our case, it should be noted that a simple input-output network structure mapping the interferogram directly to the phase does not perform well, because it is difficult to accurately reproduce the phase wraps in the phase map. Therefore, instead of using the network to estimate the phase directly, the CGAN-based network is trained to predict intermediate results, i.e., the interferograms at the other wavelength needed in Eq. (8), from which a better estimate of the phase is derived. To compare the simple input-output method (extracting the phase directly from the interferogram at a single wavelength) with the proposed CGAN-based DWI (predicting the intermediate results from the interferogram at a single wavelength), we trained the same structure to estimate the phase directly from the interferogram at 632.8 nm. Three groups of interferograms with different heights were simulated, as shown in column (a) of Fig. 8. Column (b) gives the three preset height distributions, with PV heights of 0.153, 0.479, and 0.786 microns, respectively. The height distributions reconstructed directly from the single-wavelength interferograms are shown in column (c). Quantitatively, the PV values of the retrieved heights in the different cases are 1.084, 0.81 and 1.042 microns, respectively. These results show that, although the shape can be recovered, the correct height cannot be achieved with the simple input-output method.

## 4. Analysis and discussion of experimental results

A Mach-Zehnder interferometer-based single-wavelength in-line phase-shifting interference system is used to collect two sets of interferograms, of a Jurkat cell and of a vortex phase plate (RPC Photonics VPP-1c). None of the interferograms used in the experiment were involved in the training. In the first test, a sequence of 150 phase-shifting interferograms with a size of 256 × 256 pixels is input into the trained network to obtain the corresponding phase-shifting interferograms at 532 nm. One of the interferograms at 632.8 nm and the corresponding generated interferogram at 532 nm are shown in Figs. 9(a) and (b). The reference phase distribution is calculated by the AIA approach from the interferograms at 632.8 nm, and the height of the sample determined by Eq. (5) is shown in Fig. 9(c). Using the same approach, the height of the sample at 532 nm is shown in Fig. 9(d), and the deviations between the reference height and the height calculated at 532 nm are given in Fig. 9(e). The root-mean-square error (RMSE) is calculated to quantitatively assess the accuracy of this method; its value is 0.0281 microns. These results show that the sample height obtained from the interferogram at 532 nm is almost the same as that at 632.8 nm, proving the feasibility and advantages of the proposed method.
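The RMSE figure quoted above is the usual pixel-wise root-mean-square deviation between the reference and predicted height maps; a minimal helper (our naming, same units for both inputs assumed):

```python
import numpy as np

def rmse(h_ref, h_test):
    """Root-mean-square error between two height maps (same units)."""
    h_ref, h_test = np.asarray(h_ref), np.asarray(h_test)
    return np.sqrt(np.mean((h_ref - h_test) ** 2))
```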

In the second test, the vortex phase plate (height of 607 nm) is used to verify the feasibility of the method; 43 phase-shifting interferograms are captured. One of the experimentally recorded interferograms and the corresponding generated interferogram at 532 nm are shown in Figs. 10(a) and (b). An area of 200 × 200 pixels is selected for calculation, and the wrapped phases retrieved with the AIA approach at 632.8 nm and 532 nm are shown in Figs. 10(c) and (d). As a comparison, the height map of the vortex phase plate recovered at 632.8 nm is shown in Fig. 10(e). After removing the background phase, the height map ${h_{sub}}$ at the longer synthetic wavelength is calculated with Eq. (5), as shown in Fig. 10(f). Figure 10(g) shows the higher-accuracy height map ${h_{add}}$ at the shorter synthetic wavelength, ${\lambda _{seq}} = \frac{{{\lambda _1}{\lambda _2}}}{{{\lambda _1} + {\lambda _2}}}$, obtained by using the immune algorithm. The PV value of the retrieved height at the synthetic wavelength is 609 nm. In addition, the corresponding height distributions along the 22nd line in Figs. 10(e), (f) and (g) are shown in Fig. 10(h). The experimental results show that the proposed method is effective for measuring samples with large phase-gradient changes. Furthermore, the proposed method is applied to a vortex phase plate with a larger height of 1.081 µm. Similarly, one of the experimentally recorded interferograms with 256 × 256 pixels and the corresponding generated interferogram at 532 nm are shown in Figs. 11(a) and (b). The height distributions retrieved with the AIA approach at 632.8 nm and with the proposed method are shown in Figs. 11(c) and (d). Quantitatively, the PV value of the retrieved height at the synthetic wavelength is 1.094 µm. 
Since the height profile of this sample is relatively steep, the wrapped phase obtained by the SWI method cannot be unwrapped correctly, whereas the height calculated by the proposed method is close to the actual height of the vortex phase plate, which further proves the effectiveness of the method.

## 5. Conclusion

In this paper, we have proposed a deep learning (DL) method for dual-wavelength interferometry (DWI). A conditional generative adversarial network (CGAN) is used to statistically learn the mapping between the interferogram at 632.8 nm and interferograms at other wavelengths, so that only the interferogram at 632.8 nm is needed to obtain the corresponding interferograms at other wavelengths and retrieve the wrapped phases. The proposed method requires recording phase-shifting interferograms at only a single wavelength, so only half of the original acquisition time is required. Moreover, the experimental system is greatly simplified because only a single illumination wavelength is used. Furthermore, the trained network conveniently provides the interferograms at other wavelengths, offering a novel strategy for DWI. Simulation and experiment prove that this method provides an effective solution to the problem of discontinuous wrapped phases in phase unwrapping and to the measurement-range limitation of single-wavelength phase-shifting interferometry. However, for some isolated step samples, interferograms whose heights differ by one illumination wavelength are identical owing to the periodicity of the fringes; the trained CGAN can therefore only learn interferogram information for which the height variation between adjacent pixels changes within one wavelength. Note that even in this case the proposed method still improves the measurement range for samples with discontinuous phase compared with the SWI method: taking the illumination wavelength of 632.8 nm as an example, the measurement range is increased from ∼300 nm to ∼630 nm. Further improving the measurement range for isolated step samples will be an interesting direction for future work.

## Funding

National Natural Science Foundation of China (61805086, 61727814, 61875059); China Postdoctoral Science Foundation (2018M643114).

## Disclosures

The authors declare no conflicts of interest.

## References

**1. **T. A. Ramirez-Delreal, M. Mora-Gonzalez, F. J. Casillas-Rodriguez, J. Muñoz-Maciel, and M. A. Paz, “Steps length error detector algorithm in phase-shifting interferometry using Radon transform as a profile measurement,” Opt. Express **25**(6), 7150–7160 (2017). [CrossRef]

**2. **J. W. Zhang, J. L. Di, Y. Li, T. L. Xi, and J. L. Zhao, “Dynamical measurement of refractive index distribution using digital holographic interferometry based on total internal reflection,” Opt. Express **23**(21), 27328–27334 (2015). [CrossRef]

**3. **Y. Park, C. Depeursinge, and G. Popescu, “Quantitative phase imaging in biomedicine,” Nat. Photonics **12**(10), 578–589 (2018). [CrossRef]

**4. **E. Cuche, F. Bevilacqua, and C. Depeursinge, “Digital holography for quantitative phase-contrast imaging,” Opt. Lett. **24**(5), 291–293 (1999). [CrossRef]

**5. **R. M. Goldstein, H. A. Zebker, and C. L. Werner, “Satellite radar interferometry: Two-dimensional phase unwrapping,” Radio Sci. **23**(4), 713–720 (1988). [CrossRef]

**6. **J. Arines, “Least-squares modal estimation of wrapped phases: application to phase unwrapping,” Appl. Opt. **42**(17), 3373–3378 (2003). [CrossRef]

**7. **J. Martinez-Carranza, K. Falaggis, and T. Kozacki, “Fast and accurate phase-unwrapping algorithm based on the transport of intensity equation,” Appl. Opt. **56**(25), 7079–7088 (2017). [CrossRef]

**8. **P. S. Lam, J. D. Gaskill, and J. C. Wyant, “Two-wavelength holographic interferometer,” Appl. Opt. **23**(18), 3079–3081 (1984). [CrossRef]

**9. **C. Wagner, W. Osten, and S. Seebacher, “Direct shape measurement by digital wavefront reconstruction and multiwavelength contouring,” Opt. Eng. **39**(1), 79–85 (2000). [CrossRef]

**10. **A. Khmaladze, A. Restrepo-Martínez, M. Kim, R. Castañeda, and A. Blandón, “Simultaneous dual-wavelength reflection digital holography applied to the study of the porous coal samples,” Appl. Opt. **47**(17), 3203–3210 (2008). [CrossRef]

**11. **T. Colomb, S. Krivec, H. Hutter, A. A. Akatay, N. Pavillon, F. Montfort, E. Cuche, J. Kühn, C. Depeursinge, and Y. Emery, “Digital holographic reflectometry,” Opt. Express **18**(4), 3719–3731 (2010). [CrossRef]

**12. **X. Q. Xu, Y. W. Wang, J. Ying, Y. Y. Xu, M. Xie, and H. Han, “A novel dual-wavelength iterative method for generalized dual-wavelength phase-shifting interferometry with second-order harmonics,” Opt. Laser. Eng. **106**, 39–46 (2018). [CrossRef]

**13. **K. Q. Wang, Q. Kemao, J. L. Di, and J. L. Zhao, “Y4-Net: a deep learning solution to one-shot dual-wavelength digital holographic reconstruction,” Opt. Lett. **45**(15), 4220–4223 (2020). [CrossRef]

**14. **R. Onodera and Y. Ishii, “Two-wavelength interferometry that uses a fourier-transform method,” Appl. Opt. **37**(34), 7988–7994 (1998). [CrossRef]

**15. **D. G. Abdelsalam and D. Kim, “Two-wavelength in-line phase-shifting interferometry based on polarizing separation for accurate surface profiling,” Appl. Opt. **50**(33), 6153–6161 (2011). [CrossRef]

**16. **D. G. Abdelsalam, R. Magnusson, and D. Kim, “Single-shot, dual-wavelength digital holography based on polarizing separation,” Appl. Opt. **50**(19), 3360–3368 (2011). [CrossRef]

**17. **W. P. Zhang, X. X. Lu, L. H. Fei, H. Zhao, H. L. Wang, and L. Y. Zhong, “Simultaneous phase-shifting dual-wavelength interferometry based on two-step demodulation algorithm,” Opt. Lett. **39**(18), 5375–5378 (2014). [CrossRef]

**18. **D. G. Abdelsalam, “Simultaneous dual-wavelength digital holographic microscopy with compensation of chromatic aberration for accurate surface characterization,” Appl. Opt. **58**(23), 6388–6395 (2019). [CrossRef]

**19. **J. X. Xiong, L. Y. Zhong, S. D. Liu, X. Qiu, Y. F. Zhou, J. D. Tian, and X. X. Lu, “Improved phase retrieval method of dual-wavelength interferometry based on a shorter synthetic-wavelength,” Opt. Express **25**(7), 7181–7191 (2017). [CrossRef]

**20. **A. Pförtner and J. Schwider, “Red-green-blue interferometer for the metrology of discontinuous structures,” Appl. Opt. **42**(4), 667–673 (2003). [CrossRef]

**21. **J. W. Min, M. L. Zhou, X. Yuan, K. Wen, X. H. Yu, T. Peng, and B. L. Yao, “Optical thickness measurement with single-shot dual-wavelength in-line digital holography,” Opt. Lett. **43**(18), 4469–4472 (2018). [CrossRef]

**22. **Y. Lee, Y. Ito, T. Tahara, J. Inoue, P. Xia, Y. Awatsuji, K. Nishio, S. Ura, and O. Matoba, “Single-shot dual-wavelength phase unwrapping in parallel phase-shifting digital holography,” Opt. Lett. **39**(8), 2374–2377 (2014). [CrossRef]

**23. **Y. Z. Li, Y. J. Xue, and T. Lei, “Deep speckle correlation: a deep learning approach towards scalable imaging through scattering media,” Optica **5**(10), 1181–1190 (2018). [CrossRef]

**24. **A. Sinha, G. Barbastathis, J. Lee, D. Mo, and L. Shuai, “Imaging through glass diffusers using densely connected convolutional networks,” Optica **5**(7), 803–813 (2018). [CrossRef]

**25. **Y. Rivenson, Y. Zhang, H. Günaydın, T. Da, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. **7**(2), 17141 (2018). [CrossRef]

**26. **H. Wang, M. Lyu, and G. H. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express **26**(18), 22603–22614 (2018). [CrossRef]

**27. **A. Ozcan, H. Günaydin, H. Wang, Y. Rivenson, Y. Zhang, and Z. Göröcs, “Deep learning microscopy,” Optica **4**(11), 1437–1443 (2017). [CrossRef]

**28. **W. Ouyang, A. Aristov, M. L. Lelek, X. Hao, and C. Zimmer, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. **36**(5), 460–468 (2018). [CrossRef]

**29. **K. Q. Wang, Y. Li, Q. Kemao, J. L. Di, and J. L. Zhao, “One-step robust deep learning phase unwrapping,” Opt. Express **27**(10), 15100–15115 (2019). [CrossRef]

**30. **T. Zhang, S. W. Jiang, Z. X. Zhao, K. Dixit, X. F. Zhou, J. Hou, Y. B. Zhang, and C. G. Yan, “Rapid and robust two-dimensional phase unwrapping via deep learning,” Opt. Express **27**(16), 23173–23185 (2019). [CrossRef]

**31. **S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics **1**(2), 025001 (2019). [CrossRef]

**32. **I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (2014), 2672–2680.

**33. **P. Isola, J. Y. Zhu, T. H. Zhou, and A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition. (IEEE, 2017), 1125–1134.

**34. **Z. Y. Wang and B. Han, “Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms,” Opt. Lett. **29**(14), 1671–1673 (2004). [CrossRef]

**35. **J. Gass, A. Dakoff, and M. K. Kim, “Phase imaging without 2π ambiguity by multiwavelength digital holography,” Opt. Lett. **28**(13), 1141–1143 (2003). [CrossRef]