This paper proposes a low-cost snapshot quantitative phase imaging approach. The setup is simple: only a printed film is added to a conventional microscope. The phase of a sample is regarded as an additional aberration of the optical imaging system, and the image captured through a phase object is modeled as a distorted version of a projected pattern. An optimization algorithm recovers the phase information via distortion estimation. We demonstrate our method on various samples, including a micro-lens array, IMR90 cells, and the dynamic evaporation process of a water drop; our approach enables real-time phase imaging of highly dynamic phenomena using a traditional microscope.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Many label-free biological samples are transparent, which makes them difficult to investigate with conventional microscopy. For a long time, Zernike phase contrast microscopy  and differential interference contrast (DIC) microscopy  have been the two most popular methods for visualizing transparent samples by qualitative phase imaging. However, the lack of quantitative phase measurement prevents further applications such as measuring the refractive index  and surface profiling . In the past decade, much work has been done in the field of quantitative phase imaging (QPI)  to improve resolution, speed, and simplicity.
QPI techniques can be broadly categorized as interferometric and non-interferometric. Interferometric methods such as digital holography can estimate the optical path length distribution at sub-wavelength resolution . To increase the capture speed, off-axis holography  and parallel quasi-phase-shifting holography  have been proposed to multiplex multiple holograms into a single shot. Since these methods usually need a temporally coherent source for interferometry, they are expensive and difficult to align. Spatial light interference microscopy further extends holography to white light and works as an add-on to phase contrast microscopy . However, the use of a spatial light modulator (SLM) and a phase contrast objective lens in  still makes it complex and expensive for QPI.
The Transport of Intensity Equation (TIE)  and Differential Phase Contrast (DPC)  are two partially coherent QPI techniques that can be used as simple add-ons to a conventional microscope. With suitable boundary conditions, high-resolution optical path length maps can be estimated [12, 13]. Despite their low cost, conventional TIE and DPC approaches still require multiple images to measure the gradient-related phase information, so many modifications have been introduced to turn them into single-shot methods. Waller et al. exploit the chromatic aberration of the lenses to obtain three-plane imaging with an RGB camera for TIE-based reconstruction . Wavelength multiplexing on the illumination side encodes the two-axis DPC information into a single RGB image and realizes single-shot reconstruction of the complex field . However, the use of an RGB camera reduces throughput and light efficiency due to the Bayer filter in front of the sensor. SLM-based  and multi-camera  setups achieve more flexibility within a single shot but lose the simplicity and low-cost advantage. Pavani et al. utilize the aberration of an additional amplitude mask to recover the phase information , but this only works for thick phase objects and fails to image most biological samples such as cells. In short, a simple and low-cost approach is needed to quantitatively measure the phase of thin samples in a single shot with high precision.
Here, we report a new snapshot computational QPI method that adds only a printed film to a conventional microscope. Unlike TIE and DPC, we estimate the phase information by observing the distortion of a reference image instead of the intensity contrast introduced by defocusing or angular illumination. The reference image is provided by the printed film or a projector and can be pre-calibrated before the experiments. As shown in Fig. 1(a), we place the sample at a defocused plane instead of the image plane and regard the phase of the sample as an additional aberration of the optical system. The small phase change of the sample is then encoded into a clearly distorted image of the mask, magnified by the defocus distance. We recover the phase information through distortion estimation, as shown in Fig. 1(b). To test this scheme, we print a mask with a binary pattern and insert it between the condenser and the light source without any other hardware changes to a conventional microscope. The whole modification can be completed in 5 minutes and costs less than 1 US dollar. Experiments on a microlens array, IMR90 cells, and the dynamic evaporation process of a water drop show performance comparable with TIE-based methods. We anticipate that researchers can use our algorithm as open-source software (see Code 1 ) to obtain quantitative phase results by simply putting a mask on a conventional microscope without careful alignment.
We treat the target transparent sample as an element that introduces additional aberration to the optical system. To infer this aberration quantitatively, we introduce a pattern with abundant texture to reveal the aberration cues.
The scheme of our model is illustrated in Fig. 2(a). The target transparent sample and the image of a textured pattern are located at the z = Δz and z = 0 planes, respectively. By placing a reference mask and removing the target sample, we first capture a reference image. Then, with the sample in place, the captured image is distorted by the phase of the sample. While capturing the reference and distorted images, we keep the camera focused on the z = 0 plane (i.e., the focus plane). Comparing the two images without and with the sample [Figs. 2(b) and 2(c)], we can see obvious distortion revealing the phase information of the sample.
In order to obtain the relationship between the distortion and the phase of the sample, we need to analyze the relationship between the reference and distorted images, namely, the complex field at the focus plane with and without the sample (i.e., U1(x, y, 0) and U0(x, y, 0)). As shown in Fig. 2(a), the derivation includes three steps: light propagation from the z = 0 to the z = Δz plane without the sample, placing the sample on the z = Δz plane, and light propagation from the z = Δz plane back to the z = 0 plane with the sample.
2.1. Light propagation without sample
For the case without a sample (i.e., step 1 in Fig. 2), we discretize the complex field U0(x, y, 0) at the z = 0 plane into small patches labeled by (m, n):
The patch size d is determined by the image pixel size at the focus plane (z = 0), and the image patch U0(x, y, 0; m, n) can be approximated as
Then we can propagate the complex field U0(x, y, 0; m, n) at the z = 0 plane to the z = Δz plane as:
Based on Eqs. (2) and (3) and Fresnel diffraction theory, the complex field U0(x, y, Δz; m, n) at the z = Δz plane without the sample is the Fresnel diffraction pattern of a rectangular aperture. We can ignore the sidelobes of the diffraction pattern, whose amplitude is small, so U0(x, y, Δz; m, n) is approximated as a patch image with patch size d′ and central point (md, nd), i.e., U0(x, y, Δz; m, n) ≈ U0(x, y, Δz; m, n) rect[(x − md)/d′, (y − nd)/d′]. The patch size d′ at the z = Δz plane is determined by the distance Δz (more details in Sec. 5).
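Steps 1 and 3 both rest on Fresnel propagation of a sampled complex field between the z = 0 and z = Δz planes. As a minimal numerical sketch of this building block (the function name, grid parameters, and use of the transfer-function form of the Fresnel integral are our own choices for illustration, not the paper's implementation):

```python
import numpy as np

def fresnel_propagate(u0, wavelength, dz, dx):
    """Propagate a sampled complex field u0 by a distance dz using the
    Fresnel transfer function in the Fourier domain; dx is the pixel pitch."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel approximation of the free-space transfer function
    H = np.exp(1j * 2 * np.pi * dz / wavelength) * \
        np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```

Because the transfer function has unit modulus, propagation conserves energy and is exactly inverted by propagating back over −Δz, which is what makes step 3 (back-propagation to z = 0) well defined.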
2.2. Placing sample on z = Δz plane
When the sample is placed on the z = Δz plane, the complex field U1(x, y, Δz; m, n) at the z = Δz plane becomes Eq. (4) (step 2 in Fig. 2). Eq. (4) can be rewritten as
2.3. Light propagation with sample
For the case with the sample, we can further back-propagate the complex field at the sample plane, F1(fx, fy, Δz; m, n), to the z = 0 plane (step 3 in Fig. 2). Eq. (7) can be transformed as Eq. (9); substituting Eqs. (7) and (9) into Eq. (8), we can find the relationship between U0(x, y, 0; m, n) and U1(x, y, 0; m, n) in the Fourier domain as:
By applying the inverse Fourier transform to Eq. (10), we can represent the complex field U0(x, y, 0; m, n) without the sample as:
This is a discrete expression of the relationship between the complex field U0(x, y, 0) without the sample and U1(x, y, 0) with the sample. The continuous formulation of Eq. (11) is:
2.4. Distortion caused by the phase of sample
Based on Eq. (12), the intensity of the complex field at the z = 0 plane with the sample is distorted by the phase of the sample:
Based on the above model, we propose an optimization framework to recover a dynamic phase video from a pre-calibrated binary reference image. Following the framework in Fig. 1, we first estimate the distortion w(x, t) = (u(x, t), v(x, t)) between each distorted video frame and the reference image. Based on Eq. (13), this estimation can be conducted by minimizing an objective function J(w(x, t)):
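A toy evaluation of such an objective, combining a warped data term with a smoothness regularizer under the robust penalty ψ, might look as follows. The integer-pixel warping, the weight α, and all function names are simplifications for illustration, not the paper's solver:

```python
import numpy as np

def psi(x2, eps=1e-3):
    # convex robust penalty approximating the L1 function |xi|
    return np.sqrt(x2 + eps**2)

def objective(I_ref, I_dist, u, v, alpha=1.0):
    """J(w) = sum psi((I_dist(x + w) - I_ref(x))^2) + alpha * smoothness(w).
    Integer-pixel warping only; a real solver would interpolate sub-pixel."""
    H, W = I_ref.shape
    yy, xx = np.mgrid[:H, :W]
    xs = np.clip(xx + np.rint(u).astype(int), 0, W - 1)
    ys = np.clip(yy + np.rint(v).astype(int), 0, H - 1)
    data = psi((I_dist[ys, xs] - I_ref) ** 2).sum()
    # penalize spatial gradients of both flow components
    smooth = sum(psi(g ** 2).sum() for f in (u, v) for g in np.gradient(f))
    return data + alpha * smooth
```

The true distortion field should score a lower energy than the zero flow, which is the property the variational minimization exploits.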
Specifically, based on Eq. (13), the data term can be formulated as Eq. (15), where ψ is a robust penalty function [20, 21] and ∊ is set to a small positive constant (empirically 0.001, much smaller than |ξ|) so that ψ(ξ²) remains a convex function while approximating the L1 function ψ(ξ) = |ξ|.
The regularization term is derived from a piecewise smoothness assumption on the gradient field of the sample's phase (i.e., the distortion w(x, t)). This piecewise smoothness constraint eliminates the influence of inaccurately estimated image pixels and thus increases the robustness of our approach. The regularization term can be formulated as Eq. (16). We adopt a variational optical flow algorithm [20–22] to solve this optimization problem, with a few modifications: (1) Before applying the algorithm, image brightness normalization is applied to the reference image and the distorted images. (2) To improve the accuracy of our algorithm and correct the distortion caused by misalignment of our optical system and movement of the objective lens, we pre-shift the reference image with a fixed distortion to match the distorted images; this pre-shift distortion can be calibrated before measurement with no sample placed. (3) After the optimization, we remove the defocus aberration caused by the optical system itself; this aberration can likewise be calibrated before measurement with no sample placed, according to the defocus distance.
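The first two modifications reduce to simple array operations. The paper does not specify the exact normalization; zero-mean/unit-variance is one plausible choice, and the names below are ours:

```python
import numpy as np

def normalize_brightness(img):
    # zero-mean, unit-variance normalization of a frame (one plausible choice;
    # the exact normalization used in the paper is not specified here)
    img = np.asarray(img, dtype=float)
    return (img - img.mean()) / (img.std() + 1e-12)

def remove_preshift(flow, preshift_flow):
    # subtract the system-induced distortion calibrated with no sample placed
    return flow - preshift_flow
```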
Next, we calculate the gradient of the phase at each image pixel from the estimated distortion via Eq. (14). Finally, we recover the phase video of the dynamic object from its gradient field by solving the Poisson equation, which has been well studied [23, 24].
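One standard way to integrate the recovered gradient field is a DCT-based Poisson solver with Neumann boundaries, in the spirit of [13, 23, 24]. This sketch (names and discretization are ours) assumes forward-difference gradients with a zero last row/column:

```python
import numpy as np
from scipy.fft import dctn, idctn

def poisson_dct(gx, gy):
    """Integrate a gradient field (gx, gy) into a scalar phase map by solving
    the Poisson equation with homogeneous Neumann boundaries via the DCT."""
    H, W = gx.shape
    # divergence of the gradient field (backward differences)
    div = np.zeros((H, W))
    div[:, 0] += gx[:, 0]
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[0, :] += gy[0, :]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    # solve laplacian(phi) = div in the DCT-II eigenbasis of the Neumann Laplacian
    d = dctn(div, type=2, norm='ortho')
    i = np.arange(H)[:, None]
    j = np.arange(W)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / H) - 1.0) + 2.0 * (np.cos(np.pi * j / W) - 1.0)
    denom[0, 0] = 1.0          # avoid division by zero at the DC term
    d /= denom
    d[0, 0] = 0.0              # the phase offset is unconstrained; fix it to zero
    phi = idctn(d, type=2, norm='ortho')
    return phi - phi.mean()    # phase is recovered up to an additive constant
```

Because the discrete Neumann Laplacian is diagonalized exactly by the DCT-II, the reconstruction is exact (up to an additive constant) for gradients that are consistent with some underlying phase map.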
4. Experimental results
To validate the simplicity and convenience of the proposed method, we introduce only a printed binary mask into a conventional microscope to build a snapshot quantitative phase microscope, and show its capability by imaging a microlens array, IMR90 cells, and the evaporation process of a water drop.
Figure 3 shows the schematic of our system and a photograph of our prototype setup. We print a film with a binary mask and insert it between the condenser and the light source of a conventional microscope without any other hardware changes. We do not insert the mask directly between the condenser and the sample, to prevent the mask from touching the sample. The binary mask serves as a prior that enhances the contrast of the projected pattern and improves robustness for semi-opaque samples. An Andor Zyla 5.5 sCMOS camera is used to capture images (6.5 μm pixel size, 2560 × 2160 pixels, up to 100 fps). During capture, we place the sensor and the mask on conjugate focus planes (i.e., the z = 0 plane) by adjusting the focusing knob of the condenser and the objective lens. The focus plane is slightly offset from the sample plane by a distance Δz, and the aperture of the condenser is set to a small size. The distance Δz is adjusted to achieve a suitable distortion; more details on the setting of Δz are given in Sec. 5. The whole operation can be completed in 5 minutes and costs less than 1 US dollar. All samples here are imaged in air.
As shown in Fig. 3(a), either before or after the capture, the reference image produced by the mask can be obtained by removing the sample (the dotted line in Fig. 3). When the sample is placed on the stage, the aberration caused by the phase sample results in a shift on the sensor for each point of the reference image (the solid line in Fig. 3). We then use this distortion relative to the reference image to reconstruct the phase of the sample, based on the framework illustrated in Fig. 1(b).
To demonstrate the accuracy and robustness of our approach, we use a standard micro-lens array as the target sample (RPC Photonics MLA-S100-f8, 100 μm pitch, f/# = 7.8, refractive index 1.56). The pitch size of the binary pattern at the focus plane is around 6.5 μm, and the distance between the focus plane and the sample plane is 100 μm. The phase reconstruction results of our approach and the TIE approach in  are shown in Figs. 4(c) and 4(e), respectively, captured with a Nikon Eclipse Ti microscope and a Nikon CFI Plan Apochromat VC 20× 0.75 NA objective. Here we display the phase reconstruction results ϕ as the height h of the sample for better visualization, where h = ϕλ/2πΔn and Δn is the refractive index difference between the specimen and air. The reference image is shown in Fig. 4(a) and the distorted image in Fig. 4(b). The binary pattern in Figs. 4(a) and 4(b) loses some contrast due to the projection optics, and our method is robust to the dust and contrast loss visible in the reference image of Fig. 4(a). Figure 4(d) displays the two defocused images used for TIE reconstruction. A comparison along a microlens cross-section is shown in Fig. 4(f). Our approach achieves a better result at the edges of the image than the conventional TIE approach, despite using only a single shot.
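The phase-to-height conversion stated above is a one-liner; the 550 nm wavelength in the usage note below is a hypothetical value, since the illumination wavelength is not given in this excerpt:

```python
import numpy as np

def phase_to_height(phi, wavelength, dn):
    # h = phi * lambda / (2 * pi * dn), with dn the refractive index
    # difference between the specimen and the surrounding medium (air)
    return phi * wavelength / (2 * np.pi * dn)
```

For the microlens array (n = 1.56 in air, so Δn = 0.56), a full 2π of phase at an assumed 550 nm wavelength corresponds to roughly 0.98 μm of height.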
Experiments on IMR90 cells are shown in Fig. 5. We use a 3.5 μm mask, and the distance between the focus plane and the sample plane is 80 μm. A Zeiss Axio Observer Z1 microscope with a Zeiss EC Plan-Neofluar 40× 0.75 NA objective is used to capture images. Before applying our algorithm, we remove the dark edge of the original image. Figure 5(a) shows the images distorted by the samples. The nucleus and F-actin are labeled with DAPI and Alexa Fluor 532, respectively; the fluorescence images of the same areas are shown in Fig. 5(b). Figure 5(d) displays the two defocused images used for TIE reconstruction (the defocus distance is ±60 μm). Figures 5(c) and 5(e) are the phase reconstruction results of our approach and the TIE approach in , respectively. Our result reveals details of the nucleus and cytoplasm distribution, which corresponds well with the fluorescence images and the result of the TIE method.
To further validate our method for quantitative phase imaging of highly dynamic events, we use our setup to observe the dynamic evaporation of a water drop, as shown in Fig. 6. To demonstrate robustness to a different pattern, here we use a binary pattern with less texture as the printed mask. The patch size of the binary pattern on the focus plane is 20 μm, and the distance between the focus plane and the sample plane is 100 μm. We capture a distorted video through a drop on a slide [Fig. 6(a)] with a Zeiss Plan-Apochromat 10× 0.45 NA objective at 33.3 frames per second (fps). Here we also remove the dark edge of the original image before applying our method. Figure 6(b) and Visualization 1 show the reconstructed phase video of the drop at different stages of its evaporation. Our dynamic quantitative phase result accurately visualizes the evaporation process of the water drop at high speed, demonstrating the advantage of our snapshot imaging method.
The key parameter of our approach is the distance Δz between the sample and the focus plane. During capture, Δz is adjusted to achieve a suitable distortion. From Eq. (14), a smaller gradient of the sample's phase requires a larger Δz to reveal the distortion. However, an overly large Δz degrades accuracy. Here we analyze the distance setting mathematically.
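Under the common geometric reading that a lateral ray shift scales as w ≈ Δz · (λ/2π) · ∇ϕ (our assumption; Eq. (14) itself is not reproduced in this excerpt), the defocus needed to map a given phase gradient to a target pixel shift can be estimated as:

```python
import numpy as np

def required_dz(target_shift, max_phase_gradient, wavelength):
    # invert the assumed scaling w = dz * (lambda / (2*pi)) * grad(phi) for dz;
    # the scaling and all parameter names here are illustrative assumptions
    return target_shift / (wavelength / (2 * np.pi) * max_phase_gradient)
```

Consistent with the text, a sample with a smaller maximum phase gradient requires a larger Δz to produce the same distortion.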
Therefore, the complex field U0(x, y, Δz; m, n) at the z = Δz plane without the sample is the Fresnel diffraction pattern of a rectangular aperture with constant amplitude and phase, and the amplitude of this complex field is:
For a suitable spreading distance Δz, the diffraction pattern is of limited size and we can ignore its sidelobes, whose amplitude is small [Eq. (22)]. As Δz increases, the correspondingly increasing patch size d′ on the sample plane decreases the accuracy of the final reconstruction.
Furthermore, the maximum distortion can be neither too small nor too large for our algorithm. In practice, we adjust the defocus distance Δz so that the maximum distortion of the sample is around 10 pixels, which is suitable for both our model and our algorithm. To improve accuracy and robustness for semi-opaque samples, we also use the binary mask as a prior and add a gradient-image term to the optimization function.
In addition, the spatial resolution of our approach depends on the patch size d′ on the sample plane. For samples with small structures, such as cells, we need a mask with a small pitch size; for samples with less structure, such as water drops, a larger pattern can be used instead. Based on Eq. (14), the resolution of the phase gradient Δ∂x ϕ(x) that can be estimated by our system is
In this paper, we propose a novel single-shot quantitative phase imaging approach that is highly compatible with a conventional microscope, requiring only the introduction of a printed film. The phase of the sample is regarded as an additional aberration of the optical system, and a model is built to infer this aberration from the distortion with respect to a reference image. Based on this model, we develop an optimization algorithm to reconstruct the phase information via distortion analysis. We validate the effectiveness and accuracy of our approach in various experiments, comparing against TIE-based approaches. Quantitative phase images can be acquired at the camera frame rate. Our method provides a practical, low-cost, and open-source solution for snapshot quantitative phase imaging.
National Natural Science Foundation of China (NSFC) (Nos. 61327902, 61722110, 61671265).
The authors thank Dr. Xu Zhang for providing the sample of IMR90 cells.
References and links
1. F. Zernike, “Das phasenkontrastverfahren bei der mikroskopischen beobachtung,” Z. Techn. Phys. 16, 454–457 (1935).
2. G. Nomarski, “Nouveau dispositif pour l'observation en contraste de phase différentiel,” J. Phys. Radium 16, S88 (1955).
4. K. Stout and L. Blunt, Three-Dimensional Surface Topography (Elsevier, 2000).
5. G. Popescu, Quantitative Phase Imaging of Cells and Tissues (McGraw Hill Professional, 2011).
7. S. Witte, A. Plauşka, M. C. Ridder, L. van Berge, H. D. Mansvelder, and M. L. Groot, “Short-coherence off-axis holographic phase microscopy of live cell dynamics,” Biomed. Opt. Express 3, 2184–2189 (2012). [CrossRef] [PubMed]
8. Y. Awatsuji, M. Sasada, and T. Kubota, “Parallel quasi-phase-shifting digital holography,” Appl. Phys. Lett. 85, 1069–1071 (2004). [CrossRef]
9. Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Opt. Express 19, 1016–1026 (2011). [CrossRef] [PubMed]
10. M. R. Teague, “Deterministic phase retrieval: a green’s function solution,” J. Opt. Soc. Am. 73, 1434–1441 (1983). [CrossRef]
11. S. B. Mehta and C. J. Sheppard, “Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast,” Opt. Lett. 34, 1924–1926 (2009). [CrossRef] [PubMed]
13. C. Zuo, Q. Chen, and A. Asundi, “Boundary-artifact-free phase retrieval with the transport of intensity equation: fast solution with use of discrete cosine transform,” Opt. Express 22, 9220–9244 (2014). [CrossRef] [PubMed]
15. Z. F. Phillips, M. Chen, and L. Waller, “Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC),” PLoS ONE 12, 1–14 (2017). [CrossRef]
18. S. R. P. Pavani, A. R. Libertun, S. V. King, and C. J. Cogswell, “Quantitative structured-illumination phase microscopy,” Appl. Opt. 47, 15–24 (2008). [CrossRef]
19. M. Zhang, “The code for snapshot quantitative phase microscopy with a printed film,” GitHub (2018). [retrieved 28 Jun. 2018], https://github.com/zmj1203/Snapshot-quantitative-phase-microscopy-with-a-printed-film.
20. T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, “High accuracy optical flow estimation based on a theory for warping,” in “European Conference on Computer Vision,” (Springer, 2004), pp. 25–36.
21. T. Brox and J. Malik, “Large displacement optical flow: Descriptor matching in variational motion estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 500–513 (2011). [CrossRef]
22. C. Liu, “Beyond pixels: Exploring new representations and applications for motion analysis,” Ph.D. thesis, MIT, Cambridge, MA, USA (2009).
23. A. Agrawal, R. Raskar, and R. Chellappa, “What is the range of surface reconstructions from a gradient field,” in “European Conference on Computer Vision,” (Springer, 2006), pp. 578–591.
24. A. Agrawal, R. Chellappa, and R. Raskar, “An algebraic approach to surface reconstruction from gradient fields,” in “IEEE International Conference on Computer Vision,” (IEEE, 2005), pp. 174–181.