## Abstract

This paper proposes a low-cost snapshot quantitative phase imaging approach. The setup is simple, adding only a printed film to a conventional microscope. The phase of a sample is regarded as an additional aberration of the optical imaging system, and the image captured through a phase object is modeled as a distorted version of a projected pattern. An optimization algorithm recovers the phase information via distortion estimation. We demonstrate our method on various samples, including a micro-lens array, IMR90 cells and the dynamic evaporation process of a water drop; our approach enables real-time phase imaging of highly dynamic phenomena using a traditional microscope.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Many label-free biological samples are transparent, which makes them difficult to investigate with conventional microscopy. For a long time, Zernike phase contrast microscopy [1] and differential interference contrast (DIC) microscopy [2] have been the two most popular methods to visualize transparent samples via qualitative phase imaging. However, the lack of quantitative phase measurement prevents further applications such as measuring the refractive index [3] and surface profiling [4]. In the past decade, much work has been done in the field of quantitative phase imaging (QPI) [5] to improve resolution, speed and simplicity.

QPI techniques can be broadly categorized as interferometric and non-interferometric. Interferometric methods such as digital holography can estimate the optical path length distribution at sub-wavelength resolution [6]. To increase the capture speed, off-axis holography [7] and parallel quasi-phase-shifting holography [8] have been proposed to multiplex multiple holograms in a single shot. Since these methods usually need a temporally coherent source for interferometry, they are expensive and difficult to align. Spatial light interference microscopy further extends holography to white light and works as an add-on to phase contrast microscopy [9], but the spatial light modulator (SLM) and phase contrast objective lens required in [9] still make it complex and expensive for QPI.

Transport of Intensity Equation (TIE) [10] and Differential Phase Contrast (DPC) [11] are two partially coherent QPI techniques that can be used as simple add-ons to a conventional microscope. With suitable boundary conditions, high-resolution optical path length can be estimated [12, 13]. Despite their low cost, conventional TIE and DPC approaches still require multiple images to measure the gradient-related phase information, so many modifications have been introduced to turn them into single-shot methods. Waller et al. exploit the chromatic aberration of the lenses to obtain three-plane imaging with an RGB camera for TIE-based reconstruction [14]. Wavelength multiplexing on the illumination side encodes the two-axis DPC information into a single RGB image and enables single-shot reconstruction of the complex field [15]. However, the Bayer filter in front of an RGB sensor reduces throughput and light efficiency. SLM-based [16] and multi-camera [17] setups achieve more flexibility within a single shot but lose the simplicity and low-cost advantage. Pavani et al. utilize the aberration of an additional amplitude mask to recover the phase information [18], but this only works for thick phase objects and fails to image most biological samples such as cells. In short, a simple and low-cost approach that quantitatively measures the phase of thin samples in a single shot at high precision is still needed.

Here, we report a new snapshot computational QPI method that adds only a printed film to a conventional microscope. Different from TIE and DPC, we estimate the phase information by observing the distortion of a reference image instead of the intensity contrast introduced by defocusing or angular illumination. The reference image is provided by the printed film or a projector and can be pre-calibrated before the experiments. As shown in Fig. 1(a), we place the sample at the defocused plane instead of the image plane and regard the phase of the sample as an additional aberration of the optical system. The small phase change of the sample is then encoded into a clearly distorted image of the mask, which is magnified by the defocus distance. We can recover the phase information through distortion estimation as shown in Fig. 1(b). To test the scheme, we print a mask with a binary pattern and insert it between the condenser and the light source, without any other hardware changes to a conventional microscope. The whole modification can be completed in 5 minutes and costs less than 1 US dollar. Experiments on a micro-lens array, IMR90 cells and the dynamic evaporation process of a water drop show performance comparable with TIE-based methods. We anticipate that researchers can use our open-source algorithm (see Code 1 [19]) to obtain quantitative phase results by simply placing a mask on a conventional microscope without careful alignment.

## 2. Theory

We treat the target transparent sample as an element that causes additional aberration to the optical system. To infer this introduced aberration quantitatively, we introduce a pattern with abundant textures to reveal the aberration cues.

The scheme of our model is illustrated in Fig. 2(a). The target transparent sample and the image of a textured pattern are located at the *z* = Δ*z* and *z* = 0 planes, respectively. With the target sample removed and a reference mask in place, we first capture a reference image. Then, with the sample placed, the captured image is distorted by the phase of the sample. While capturing the reference and distorted images, we keep the camera focused on the *z* = 0 plane (i.e., the focus plane). Comparing the two images without and with the sample [Figs. 2(b) and 2(c)], we can see obvious distortion revealing the phase information of the sample.

In order to obtain the relationship between the distortion and the phase of the sample, we need to analyze the relationship between the reference and distorted images, namely, the complex fields at the focus plane with and without the sample (i.e., *U*_{1}(*x*, *y*, 0) and *U*_{0}(*x*, *y*, 0)). As shown in Fig. 2(a), the derivation includes three steps: light propagation from the *z* = 0 to the *z* = Δ*z* plane without the sample, placing the sample on the *z* = Δ*z* plane, and light propagation from the *z* = Δ*z* back to the *z* = 0 plane with the sample.

#### 2.1. Light propagation without sample

For the case without a sample (i.e., the step 1 in Fig. 2), we discretize the complex field *U*_{0}(*x*, *y*, 0) at *z* = 0 plane into small patches labeled by (*m*, *n*):

where (*x*, *y*) denotes the 2D lateral coordinates, *d* is the patch size, *U*_{0}(*x*, *y*, 0; *m*, *n*) is the complex patch image of *U*_{0}(*x*, *y*, 0) whose central point is (*md*, *nd*) on the *z* = 0 plane, and rect(*x*, *y*) is the rectangular function.

The patch size *d* is determined by the image pixel size at the focus plane (*z* = 0), and the image patch *U*_{0}(*x*, *y*, 0; *m*, *n*) can be approximated as

where *A*_{0}(*md*, *nd*, 0) and *ϕ*_{0}(*md*, *nd*, 0) are the amplitude and phase of *U*_{0}(*x*, *y*, 0) at the central point (*md*, *nd*) on the *z* = 0 plane, respectively.

Then we can propagate the complex field on *z* = 0 plane *U*_{0}(*x*, *y*, 0; *m*, *n*) to the *z* = Δ*z* plane as:

where *F*_{0}(*f*_{x}, *f*_{y}, Δ*z*; *m*, *n*) and *F*_{0}(*f*_{x}, *f*_{y}, 0; *m*, *n*) denote the Fourier transforms of the complex fields *U*_{0}(*x*, *y*, Δ*z*; *m*, *n*) and *U*_{0}(*x*, *y*, 0; *m*, *n*), respectively, and *H*(*f*_{x}, *f*_{y}; Δ*z*) denotes the transfer function of free-space propagation over the distance Δ*z*.

Based on Eqs. (2) and (3) and the Fresnel diffraction theory, the complex field *U*_{0}(*x*, *y*, Δ*z*; *m*, *n*) at the *z* = Δ*z* plane without the sample is a Fresnel diffraction pattern of a rectangular aperture. We can ignore the sidelobes of the diffraction pattern, whose amplitude is small, and approximate *U*_{0}(*x*, *y*, Δ*z*; *m*, *n*) as a patch image with patch size *d′* and central point (*md*, *nd*) (i.e., *U*_{0}(*x*, *y*, Δ*z*; *m*, *n*) ≈ *U*_{0}(*x*, *y*, Δ*z*; *m*, *n*)rect [(*x* − *md*)/*d′*, (*y* − *nd*)/*d′*]). The patch size *d′* at the *z* = Δ*z* plane is determined by the distance Δ*z* (more details in Sec. 5).
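The free-space propagation step between Eqs. (2) and (3) can be sketched numerically with an FFT-based Fresnel transfer-function propagator. This is a minimal illustration, not the authors' code; the sampling grid, the 532 nm wavelength and the patch geometry are assumptions:

```python
import numpy as np

def fresnel_propagate(u0, wavelength, dz, dx):
    """Propagate a sampled complex field u0 over distance dz using the Fresnel
    transfer function H(fx, fy; dz) = exp(i k dz) exp(-i pi lam dz (fx^2 + fy^2))."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    H = np.exp(1j * k * dz) * np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# A square patch (cf. the rect() patches above) spreading over the defocus dz:
n, dx = 256, 1e-6                      # 1 um sampling grid, assumed
u0 = np.zeros((n, n), dtype=complex)
u0[118:138, 118:138] = 1.0             # ~20 um square patch
u1 = fresnel_propagate(u0, 532e-9, 100e-6, dx)
```

Since |*H*| = 1, the propagator conserves the total energy of the patch while spreading it laterally, which is the origin of the enlarged patch size *d′*.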

#### 2.2. Placing sample on *z* = Δ*z* plane

When the sample is placed on *z* = Δ*z* plane, the complex field *U*_{1}(*x*, *y*, Δ*z*; *m*, *n*) at *z* = Δ*z* plane becomes (i.e., the step 2 in Fig. 2)

where *ϕ*(*x*, *y*) is the phase of the transparent sample. The phase of the sample is approximately linear within each single patch, so Eq. (4) can be rewritten as

where *Ax* + *By* + *C* is the linear representation of the sample's phase *ϕ*(*x*, *y*) within the (*m*, *n*) patch:

Based on Eqs. (3) and (5), we can derive the relationship between *U*_{0}(*x*, *y*, 0; *m*, *n*) and *U*_{1}(*x*, *y*, Δ*z*; *m*, *n*) in Fourier domain as:

where *F*_{1}(*f*_{x}, *f*_{y}, Δ*z*; *m*, *n*) denotes the Fourier transform of *U*_{1}(*x*, *y*, Δ*z*; *m*, *n*).

#### 2.3. Light propagation with sample

For the case with a sample, we can further back-propagate the complex field *F*_{1}(*f*_{x}, *f*_{y}, Δ*z*; *m*, *n*) on the sample plane to the *z* = 0 plane (i.e., the step 3 in Fig. 2):

where *F*_{1}(*f*_{x}, *f*_{y}, 0; *m*, *n*) denotes the Fourier transform of the complex field *U*_{1}(*x*, *y*, 0; *m*, *n*) at the *z* = 0 plane with the sample. According to the Fresnel diffraction theory, the transfer function term in Eq. (7) can be transformed as:

where *k* is the wave number and *λ* the wavelength of the illumination light. Substituting Eqs. (7) and (9) into Eq. (8), we find the relationship between *U*_{0}(*x*, *y*, 0; *m*, *n*) and *U*_{1}(*x*, *y*, 0; *m*, *n*) in the Fourier domain:

By applying inverse Fourier transform to Eq. (10), we can represent the complex field *U*_{0}(*x*, *y*, 0; *m*, *n*) without sample as:

This is a discrete expression of the relationship between the complex field *U*_{0}(*x*, *y*, 0) without sample and *U*_{1}(*x*, *y*, 0) with a sample. The continuous formulation of Eq. (11) is:

#### 2.4. Distortion caused by the phase of sample

Based on Eq. (12), the intensity of the complex field at the *z* = 0 plane with the sample is distorted by the phase of the sample:

where *I*_{0}(*x*, *y*) is the intensity of *U*_{0}(*x*, *y*, 0), *I*_{1}(*x*, *y*) is the intensity of *U*_{1}(*x*, *y*, 0), and **w**(*x*, *y*) = (*u*(*x*, *y*), *v*(*x*, *y*)) denotes the distortion between the distorted image *I*_{1}(*x*, *y*) and the reference image *I*_{0}(*x*, *y*), which is determined by the phase of the sample *ϕ*, the defocus distance Δ*z* and the wave number of the illumination light *k*:
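To make the geometry concrete, the paraxial relation implied by Eq. (14) and the discussion in Sec. 5 — a displacement field proportional to the phase gradient, **w** ≈ (Δ*z*/*k*)∇*ϕ* — can be sketched as follows. The Gaussian test phase and all numerical values are illustrative assumptions, not the paper's data:

```python
import numpy as np

def distortion_from_phase(phi, dz, wavelength, dx):
    """Displacement field w = (u, v) = (dz / k) * grad(phi): the lateral ray
    shift that a phase gradient produces after propagating the defocus dz."""
    k = 2 * np.pi / wavelength
    dphi_dy, dphi_dx = np.gradient(phi, dx)   # np.gradient returns axis 0 (y) first
    return (dz / k) * dphi_dx, (dz / k) * dphi_dy

# Illustrative smooth phase bump (a lens-like sample).
n, dx = 128, 1e-6
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
phi = 5.0 * np.exp(-(X**2 + Y**2) / (20e-6) ** 2)   # phase in radians
u, v = distortion_from_phase(phi, dz=100e-6, wavelength=532e-9, dx=dx)
```

At the top of the bump the gradient vanishes, so the distortion is zero there and largest on the flanks, matching the ring-like distortion seen around each micro-lens.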

## 3. Algorithm

Based on the above model, we propose an optimization framework to recover the dynamic phase video with a pre-calibrated binary reference image. According to the framework in Fig. 1, we first estimate the distortion **w**(**x**, *t*) = (*u*(**x**, *t*), *v*(**x**, *t*)) between each distorted video frame and the reference image. Based on Eq. (13), the estimation can be conducted by minimizing an objective function *J*(**w**(**x**, *t*)):

where *E*_{d}(**w**(**x**, *t*)) and *E*_{m}(**w**(**x**, *t*)) are the data term and the regularization term, respectively. Here **x** = (*x*, *y*) is the 2D spatial coordinate of image pixels, and *α* > 0 is a regularization parameter that balances the data term and the regularization term.

Specifically, based on Eq. (13), the data term can be formulated as:

where *T* is the number of video frames, Ω ⊂ ℝ^{2} denotes the spatial range of valid image pixels, *I*_{0}(**x**) denotes the reference image, and *I*_{1}(**x**, *t*) denotes the distorted image captured through the sample at time *t*. To reduce the influence of slight changes in brightness, a gradient term is used here, and *γ* is a weight that balances the image term and the gradient term. $\psi ({\xi}^{2})=\sqrt{{\xi}^{2}+{\epsilon}^{2}}$ is applied to reduce the influence of outliers on the distortion estimation and increase robustness [20, 21], and *∊* is set to a small positive constant (empirically 0.001, much smaller than |*ξ*|) so that *ψ*(*ξ*^{2}) remains convex while approximating the L1 function *ψ*(*ξ*) = |*ξ*|.
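This robust penalty is the Charbonnier function, and its two limiting behaviors (smooth and quadratic near zero, |*ξ*|-like beyond *∊*) are easy to verify. A minimal sketch with the paper's *∊* = 0.001:

```python
import numpy as np

def psi(xi_sq, eps=1e-3):
    """Charbonnier penalty psi(xi^2) = sqrt(xi^2 + eps^2): convex everywhere
    and close to the L1 function |xi| once |xi| >> eps, hence robust to outliers."""
    return np.sqrt(xi_sq + eps**2)

xi = np.linspace(-1.0, 1.0, 2001)
vals = psi(xi**2)   # smooth near xi = 0, asymptotically |xi| away from it
```

Unlike a quadratic penalty, large residuals (e.g. dust on the mask) grow the cost only linearly, so they cannot dominate the distortion estimate.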

The regularization term is defined from a piecewise smoothness assumption on the gradient field of the sample's phase (i.e., the distortion **w**(**x**, *t*)). This piecewise smoothness constraint can eliminate the influence of inaccurate estimates at some image pixels and thus increase the robustness of our approach. The regularization term can be formulated as:

where *u*(**x**, *t*) and *v*(**x**, *t*) are the distortions between the distorted image *I*_{1}(**x**, *t*) and the reference image *I*_{0}(**x**) along the *x* and *y* directions, respectively. Since this is very similar to the optical flow problem in computer vision, we use the algorithm in [20–22] to solve the optimization problem with a few modifications: (1) Before applying the algorithm, brightness normalization is applied to the reference and distorted images. (2) To improve accuracy and correct the distortion caused by misalignment of the optical system and movement of the objective lens, we pre-shift the reference image with a fixed distortion to match the distorted images; this pre-shift can be calibrated before measurement with no sample placed. (3) After the optimization, we remove the defocus aberration caused by the optical system itself, which can also be calibrated according to the defocus distance before measurement with no sample placed.
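As a rough, self-contained stand-in for the variational flow solver of [20–22], a global phase-correlation estimator already illustrates the distortion-estimation idea on a synthetic pattern. It recovers only a uniform shift, not the dense field the paper needs, and the random texture below is an assumption standing in for the printed mask:

```python
import numpy as np

def phase_correlation_shift(ref, dist):
    """Estimate the global (dy, dx) translation of `dist` relative to `ref` via
    phase correlation -- a coarse, whole-image stand-in for dense optical flow."""
    F0, F1 = np.fft.fft2(ref), np.fft.fft2(dist)
    cross = np.conj(F0) * F1
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real  # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    dy = dy if dy <= ref.shape[0] // 2 else dy - ref.shape[0]  # unwrap negative shifts
    dx = dx if dx <= ref.shape[1] // 2 else dx - ref.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.random((128, 128))        # random texture standing in for the printed pattern
dist = np.roll(ref, 2, axis=1)      # "distorted" frame: uniform 2-pixel shift along x
```

A dense estimator such as the variational method of Brox et al. [20, 21] effectively solves this matching problem per pixel, with the smoothness term coupling neighboring estimates.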

Next, we calculate the gradient of the phase at each image pixel from the estimated distortion via Eq. (14). Finally, we recover the phase video of the dynamic object from its gradient field by solving the Poisson equation, which has been well studied [23, 24].
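The integration step can be sketched with a Fourier-domain least-squares (Poisson) solver in the style of [23, 24]. This assumes periodic boundaries and is a minimal sketch, not the authors' implementation:

```python
import numpy as np

def integrate_gradients(gx, gy):
    """Recover a surface from its gradient field by least-squares (Poisson)
    integration in the Fourier domain. Assumes periodic boundaries; the
    result is defined up to an additive constant (set to zero mean here)."""
    h, w = gx.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(w), np.fft.fftfreq(h))
    dx_op = 2j * np.pi * FX                 # Fourier-domain d/dx operator
    dy_op = 2j * np.pi * FY                 # Fourier-domain d/dy operator
    denom = dx_op**2 + dy_op**2             # Fourier-domain Laplacian
    denom[0, 0] = 1.0                       # DC offset is arbitrary; avoid 0/0
    num = dx_op * np.fft.fft2(gx) + dy_op * np.fft.fft2(gy)
    return np.fft.ifft2(num / denom).real

# Demo: a smooth periodic "phase" and its analytic gradient field.
n = 64
X, Y = np.meshgrid(np.arange(n), np.arange(n))
phi0 = np.sin(2 * np.pi * X / n) + np.cos(2 * np.pi * Y / n)
gx = (2 * np.pi / n) * np.cos(2 * np.pi * X / n)
gy = -(2 * np.pi / n) * np.sin(2 * np.pi * Y / n)
phi_rec = integrate_gradients(gx, gy)       # matches phi0 up to numerical error
```

Dividing by the Fourier-domain Laplacian is exactly the least-squares solution of the Poisson equation, which is why noisy, slightly inconsistent gradient estimates still integrate to a smooth phase map.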

## 4. Experimental results

To validate the proposed method and demonstrate its simplicity and convenience, we introduce only a printed binary mask into a conventional microscope to build a snapshot quantitative phase microscope, and show its capability by imaging a micro-lens array, IMR90 cells and the evaporation process of a water drop.

Figure 3 shows the schematic of our system and a photograph of our prototype setup. We print a film with a binary mask and insert it between the condenser and the light source of a conventional microscope without any other hardware changes. We do not insert the mask directly between the condenser and the sample, to prevent the mask from touching the sample. The binary mask serves as a prior that enhances the contrast of the projected pattern and improves robustness for semi-opaque samples. An Andor Zyla 5.5 sCMOS camera is used to capture images (6.5 *μm* pixel size, 2560 × 2160 pixels, up to 100 fps). During capture, we place the sensor and mask on conjugate focus planes (i.e., the *z* = 0 plane) by adjusting the focusing knob of the condenser and the objective lens. The focus plane is slightly offset from the sample plane by a distance Δ*z*, and the aperture of the condenser is set to a small size. The distance Δ*z* is adjusted to achieve a suitable distortion; more details on the setting of Δ*z* are given in Sec. 5. The whole operation can be completed in 5 minutes and costs less than 1 US dollar. All samples here are imaged in air.

As shown in Fig. 3(a), either before or after the capture, the reference image produced by the mask can be obtained by removing the sample (the dotted line in Fig. 3). When the sample is placed on the stage, the aberration caused by the phase sample results in a shift on the sensor for each point of the reference image (the solid line in Fig. 3). We can then use this distortion from the reference image to reconstruct the phase of the sample based on the framework illustrated in Fig. 1(b).

To demonstrate the accuracy and robustness of our proposed approach, we use a standard micro-lens array as the target sample (RPC Photonics MLA-S100-f8, 100 *μm* pitch, *f*/# = 7.8, index of refraction 1.56). The pitch size of the binary pattern at the focus plane is around 6.5 *μm*, and the distance between the focus plane and the sample plane is 100 *μm*. The phase reconstruction results of our approach and the TIE approach in [25] are shown in Figs. 4(c) and 4(e), respectively, captured with a Nikon Eclipse Ti microscope and a Nikon CFI Plan Apochromat VC 20 × 0.75 NA objective. Here we display the reconstructed phase *ϕ* as the height *h* of the sample for better visualization, where *h* = *ϕλ*/2*π*Δ*n* and Δ*n* is the refractive index difference between the specimen and the air. The reference image is shown in Fig. 4(a) and the distorted image in Fig. 4(b). The binary pattern in Figs. 4(a) and 4(b) loses some contrast due to the projection optics, and our method is robust to the dust and contrast loss in the reference image of Fig. 4(a). Figure 4(d) displays the two defocused images used for TIE reconstruction. The comparison of a micro-lens cross-section is shown in Fig. 4(f). Our approach achieves a better result at the image edges than the conventional TIE approach, despite using only a single shot.
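The phase-to-height conversion used for Fig. 4 is a one-line computation; the 532 nm wavelength below is an assumed illustrative value, not stated in the text:

```python
import numpy as np

# h = phi * lambda / (2 * pi * dn): reconstructed phase (radians) to physical height.
wavelength = 532e-9            # illumination wavelength (assumed)
dn = 1.56 - 1.0                # index difference: micro-lens (1.56) vs. air
phi = np.pi                    # example measured phase, in radians
h = phi * wavelength / (2 * np.pi * dn)   # physical height in meters
```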

Experiments on IMR90 cells are demonstrated in Fig. 5. We use a 3.5 *μm* mask, and the distance between the focus plane and the sample plane is 80 *μm*. A Zeiss Axio Observer Z1 microscope with a Zeiss EC Plan-Neofluar 40 × 0.75 NA objective is used to capture images. Before applying our algorithm, we remove the dark edge of the original image. Figure 5(a) shows the images distorted by the samples. The nuclei and F-actin are labeled by DAPI and Alexa Fluor 532, respectively. The fluorescence images of the same areas are shown in Fig. 5(b). Figure 5(d) displays the two defocused images used for TIE reconstruction (the defocus distance is ± 60 *μm*). Figures 5(c) and 5(e) are the phase reconstruction results of our approach and the TIE approach in [25], respectively. Our result reveals details of the nucleus and cytoplasm distribution, corresponding well with the fluorescence images and the result of the TIE method.

To further validate our proposed method for quantitative phase imaging of highly dynamic events, we use our setup to observe the dynamic evaporation process of a water drop, as shown in Fig. 6. To demonstrate the robustness of our approach to a different pattern, here we use a binary pattern with less texture as the printed mask. The patch size of the binary pattern on the focus plane is 20 *μm* and the distance between the focus plane and the sample plane is 100 *μm*. We then capture a distorted video through a water drop on a slide [Fig. 6(a)] with a Zeiss Plan-Apochromat 10 × 0.45 NA objective at 33.3 frames per second (fps). Here we also remove the dark edge of the original image before applying our method. Figure 6(b) and Visualization 1 show the reconstructed phase video of the drop at different stages of its evaporation. Our dynamic quantitative phase result accurately visualizes the high-speed evaporation process of the water drop, demonstrating the advantage of our snapshot imaging method.

## 5. Discussion

The key parameter of our approach is the distance Δ*z* between the sample plane and the focus plane. During capture, Δ*z* is adjusted to achieve a suitable distortion. From Eq. (14), a smaller phase gradient $\frac{\partial \varphi (x,y)}{\partial \mathbf{x}}$ of the sample requires a larger Δ*z* to reveal the distortion. However, an overly large Δ*z* degrades accuracy. Here we analyze the distance setting mathematically.

Based on Eqs. (2) and (3), when the aperture of the illumination is small, the complex field *U*_{0}(*x*, *y*, Δ*z*; *m*, *n*) at the *z* = Δ*z* plane is

where *U*_{0}(*md*, *nd*, 0) is the complex field at the central point (*md*, *nd*) of the *z* = 0 plane, with *d* being the size of the rectangular aperture. The second term *U*_{rec}(*x*, *y*, Δ*z*) is the Fresnel diffraction pattern of a rectangular aperture for propagation distance Δ*z*:

where *C*(*ξ*) and *S*(*ξ*) are the Fresnel integral functions, with *ξ*_{1}, *ξ*_{2}, *η*_{1} and *η*_{2} given by

Therefore, the complex field *U*_{0}(*x*, *y*, Δ*z*; *m*, *n*) at the *z* = Δ*z* plane without the sample is the Fresnel diffraction pattern of a rectangular aperture with constant amplitude and phase, and the amplitude of this complex field is:

For a suitable propagation distance Δ*z*, the diffraction pattern is of limited size and we can ignore its sidelobes, whose amplitude is small:

where *ε* is a small constant threshold. We can then regard the complex field *U*_{0}(*x*, *y*, Δ*z*; *m*, *n*) as a complex patch image with patch size *d′* located at (*md*, *nd*):

where *d′* is determined by the amplitude of the diffraction pattern, which depends on the distance Δ*z*, the wave number *k* and the patch size *d* [i.e., Eq. (22)]. As Δ*z* increases, the correspondingly larger patch size *d′* on the sample plane decreases the accuracy of the final reconstruction.
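The rectangular-aperture diffraction amplitude governing *d′* can be evaluated directly with the Fresnel integrals *C* and *S*. This sketch uses the standard Fresnel-diffraction scaling for the *ξ* and *η* arguments, which may differ in constants from the paper's exact Eqs. (20)–(22); the threshold ε = 0.1 and all numbers are assumptions:

```python
import numpy as np
from scipy.special import fresnel  # fresnel(z) returns (S(z), C(z))

def rect_diffraction_amplitude(x, y, d, dz, wavelength):
    """Amplitude of the Fresnel diffraction pattern of a d x d rectangular
    aperture (unit-amplitude illumination) at propagation distance dz."""
    s = np.sqrt(2.0 / (wavelength * dz))   # standard Fresnel scaling factor
    S1, C1 = fresnel(s * (-d / 2 - x))
    S2, C2 = fresnel(s * (d / 2 - x))
    T1, D1 = fresnel(s * (-d / 2 - y))
    T2, D2 = fresnel(s * (d / 2 - y))
    ax = (C2 - C1) + 1j * (S2 - S1)
    ay = (D2 - D1) + 1j * (T2 - T1)
    return 0.5 * np.abs(ax * ay)

# Scan along x to see how far a 6.5 um patch spreads after 100 um:
d, dz, lam, eps = 6.5e-6, 100e-6, 532e-9, 0.1
x = np.linspace(-3 * d, 3 * d, 1201)
amp = rect_diffraction_amplitude(x, 0.0, d, dz, lam)
d_prime = np.ptp(x[amp > eps])   # crude estimate of the effective patch size d'
```

The region where the amplitude stays above the threshold extends beyond the geometric aperture, illustrating why a larger Δ*z* enlarges *d′* and thereby lowers the achievable spatial resolution.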

Furthermore, the maximum distortion can be neither too small nor too large for our algorithm. Thus in practice we adjust the defocus distance Δ*z* so that the maximum distortion of the sample is around 10 pixels, which suits both our model and our algorithm. To improve accuracy and robustness for semi-opaque samples, we also use the binary mask as a prior and include a gradient image term in the optimization function.

In addition, the spatial resolution of our approach depends on the patch size *d′* on the sample plane. Thus for samples with fine structure such as cells, we need a mask with a small pitch size, while for samples with less structure such as water drops, a larger pattern can be used instead. Based on Eq. (14), the resolution of the phase gradient Δ*∂*_{x}*ϕ*(**x**) that can be estimated by our system is

where *βw* is the smallest distortion that our algorithm can estimate, i.e., *β* pixels of size *w*. Thus the phase resolution of our approach is determined by the wave number *k*, the defocus distance Δ*z*, the smallest distortion *β* distinguishable by our algorithm and the pixel size *w* on the image plane.
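Plugging representative numbers into this relation gives a feel for the gradient resolution. The form Δ*∂*_{x}*ϕ* ≈ *kβw*/Δ*z* follows from the displacement relation of Eq. (14); the wavelength, *β* and the effective pixel size below are all assumed illustrative values:

```python
import numpy as np

# From Eq. (14), a displacement of beta pixels of size w at defocus dz
# corresponds to a phase-gradient step of roughly k * beta * w / dz.
# All numbers below are illustrative assumptions, not values from the paper.
wavelength = 532e-9              # assumed illumination wavelength
k = 2 * np.pi / wavelength       # wave number
dz = 100e-6                      # defocus distance, as in the micro-lens experiment
w = 6.5e-6 / 20                  # pixel size referred to the sample plane (20x objective)
beta = 0.1                       # assumed smallest resolvable distortion, in pixels

grad_phi_res = k * beta * w / dz   # smallest resolvable phase gradient (rad/m)
```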

## 6. Conclusion

In this paper, we propose a novel single-shot quantitative phase imaging approach that is highly compatible with a conventional microscope, with only a printed film added. The phase of the sample is regarded as an additional aberration of the optical system, and a model is built to infer this aberration from the distortion with respect to a reference image. Based on this model, we develop an optimization algorithm to reconstruct the phase information via distortion analysis. We validate the effectiveness and accuracy of the proposed approach through various experiments, in comparison with TIE-based approaches. Quantitative phase images can be acquired at the camera frame rate. Our method provides a practical, low-cost and open-source solution for snapshot quantitative phase imaging.

## Funding

Project of NSFC (No. 61327902, No. 61722110 and No. 61671265).

## Acknowledgments

The authors thank Dr. Xu Zhang for providing the sample of IMR90 cells.

## References and links

**1. **F. Zernike, “Das phasenkontrastverfahren bei der mikroskopischen beobachtung,” Z. Techn. Phys. **16**, 454–457 (1935).

**2. **G. Nomarski, “Nouveau dispositif pour lobservation en contraste de phase differentiel,” J. Phys. Radium **16**, S88 (1955).

**3. **W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, “Tomographic phase microscopy,” Nat. Methods **4**, 717 (2007). [CrossRef] [PubMed]

**4. **K. Stout and L. Blunt, *Three-Dimensional Surface Topography* (Elsevier, 2000).

**5. **G. Popescu, *Quantitative Phase Imaging of Cells and Tissues* (McGraw Hill Professional, 2011).

**6. **C. J. Mann, L. Yu, C.-M. Lo, and M. K. Kim, “High-resolution quantitative phase-contrast microscopy by digital holography,” Opt. Express **13**, 8693–8698 (2005). [CrossRef] [PubMed]

**7. **S. Witte, A. Plauşka, M. C. Ridder, L. van Berge, H. D. Mansvelder, and M. L. Groot, “Short-coherence off-axis holographic phase microscopy of live cell dynamics,” Biomed. Opt. Express **3**, 2184–2189 (2012). [CrossRef] [PubMed]

**8. **Y. Awatsuji, M. Sasada, and T. Kubota, “Parallel quasi-phase-shifting digital holography,” Appl. Phys. Lett. **85**, 1069–1071 (2004). [CrossRef]

**9. **Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Opt. Express **19**, 1016–1026 (2011). [CrossRef] [PubMed]

**10. **M. R. Teague, “Deterministic phase retrieval: a green’s function solution,” J. Opt. Soc. Am. **73**, 1434–1441 (1983). [CrossRef]

**11. **S. B. Mehta and C. J. Sheppard, “Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast,” Opt. Lett. **34**, 1924–1926 (2009). [CrossRef] [PubMed]

**12. **L. Tian, J. C. Petruccelli, and G. Barbastathis, “Nonlinear diffusion regularization for transport of intensity phase imaging,” Opt. Lett. **37**, 4131–4133 (2012). [CrossRef] [PubMed]

**13. **C. Zuo, Q. Chen, and A. Asundi, “Boundary-artifact-free phase retrieval with the transport of intensity equation: fast solution with use of discrete cosine transform,” Opt. Express **22**, 9220–9244 (2014). [CrossRef] [PubMed]

**14. **L. Waller, S. S. Kou, C. J. Sheppard, and G. Barbastathis, “Phase from chromatic aberrations,” Opt. Express **18**, 22817–22825 (2010). [CrossRef] [PubMed]

**15. **Z. F. Phillips, M. Chen, and L. Waller, “Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC),” PLoS ONE **12**, 1–14 (2017). [CrossRef]

**16. **C. Zuo, Q. Chen, W. Qu, and A. Asundi, “Noninterferometric single-shot quantitative phase microscopy,” Opt. Lett. **38**, 3538–3541 (2013). [CrossRef] [PubMed]

**17. **J. Wu, X. Lin, Y. Liu, J. Suo, and Q. Dai, “Coded aperture pair for quantitative phase imaging,” Opt. Lett. **39**, 5776–5779 (2014). [CrossRef] [PubMed]

**18. **S. R. P. Pavani, A. R. Libertun, S. V. King, and C. J. Cogswell, “Quantitative structured-illumination phase microscopy,” Appl. Opt. **47**, 15–24 (2008). [CrossRef]

**19. **M. Zhang, “The code for snapshot quantitative phase microscopy with a printed film,” GitHub (2018). [retrieved 28 Jun. 2018], https://github.com/zmj1203/Snapshot-quantitative-phase-microscopy-with-a-printed-film.

**20. **T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, “High accuracy optical flow estimation based on a theory for warping,” in “European Conference on Computer Vision,” (Springer, 2004), pp. 25–36.

**21. **T. Brox and J. Malik, “Large displacement optical flow: Descriptor matching in variational motion estimation,” IEEE Trans. Pattern Anal. Mach. Intell. **33**, 500–513 (2011). [CrossRef]

**22. **C. Liu, “Beyond pixels: Exploring new representations and applications for motion analysis,” Ph.D. thesis, MIT, Cambridge, MA, USA (2009).

**23. **A. Agrawal, R. Raskar, and R. Chellappa, “What is the range of surface reconstructions from a gradient field,” in “European Conference on Computer Vision,” (Springer, 2006), pp. 578–591.

**24. **A. Agrawal, R. Chellappa, and R. Raskar, “An algebraic approach to surface reconstruction from gradient fields,” in “IEEE International Conference on Computer Vision,” (IEEE, 2005), pp. 174–181.

**25. **C. Zuo, Q. Chen, and A. Asundi, “Boundary-artifact-free phase retrieval with the transport of intensity equation: fast solution with use of discrete cosine transform,” Opt. Express **22**, 9220–9244 (2014). [CrossRef] [PubMed]