Fourier ptychography (FP) is an imaging technique that applies angular diversity functions for high-resolution complex image recovery. The FP recovery routine switches between two working domains: the spectral and spatial domains. In this paper, we investigate the spectral-spatial data redundancy requirement of the FP recovery process. We report a sparsely sampled FP scheme by exploring the sampling interplay between these two domains. We demonstrate the use of the reported scheme for bypassing the high-dynamic-range combination step in the original FP recovery routine. As such, it is able to shorten the acquisition time of the FP platform by ~50%. As a special case of the sparsely sampled FP, we also discuss a sub-sampled scheme and demonstrate its application in solving the pixel aliasing problem that plagues the original FP algorithm. We validate the reported schemes with both simulations and experiments. This paper provides insights for the development of the FP approach.
© 2014 Optical Society of America
Fourier ptychography (FP) is a recently developed imaging approach that facilitates microscopic imaging well beyond the cutoff frequency of the employed optics [1–3]. Essentially, it brings together two innovations in computational optics to bypass the resolution barrier of conventional imaging platforms. The first innovation is the aperture synthesis technique originally developed for radio telescopes [4]. This technique synthesizes many complex measurements in the Fourier space to expand the passband and improve the achievable resolution [5–11]. The reconstruction process of this technique requires the knowledge of both the intensity and phase information of the incoming light field. The second innovation is the phase retrieval technique that uses intensity-only measurements to recover the phase information [12–20]. This technique typically consists of alternating enforcements of the known information of the object in the spatial and/or Fourier domains. In particular, FP shares its roots with ptychography [21–26], a lensless phase retrieval technique that uses translational diversities (i.e., moving the sample laterally) for complex image recovery. Different from the lensless ptychography approach, FP introduces angular diversity functions to expand the Fourier passband and recover the complex sample image at the same time.
A typical FP platform consists of an LED array and a conventional microscope with a low numerical aperture (NA) objective lens [1–3]. Each LED element illuminates the sample from a different incident angle. FP acquires multiple low-resolution intensity images of the sample under different plane wave illuminations. The acquired images are then iteratively synthesized in the Fourier space to produce a high-resolution complex sample image. The detailed recovery procedures can be found in [1, 3]. There are two working domains for the FP recovery process: the spatial domain and the Fourier domain. In the spatial domain, the amplitudes of the acquired images are used to constrain the FP solution, similar to the strategy of the phase retrieval technique. In the Fourier domain, the panning Fourier constraints are imposed to reflect the angular variation of the plane wave illuminations. Such Fourier constraints also enable passband expansion in the Fourier space, sharing the same strategy as the aperture synthesis technique. Compared to the lensless ptychography approach, the use of lens elements in FP platforms also provides a higher signal-to-noise ratio in the acquired raw images and a lower spatial-coherence requirement on the illumination beams. The resolution of the final FP reconstruction is determined by the largest incident angle of the illumination. As such, FP is able to bypass the design conflicts of conventional microscopes to achieve high-resolution, wide field-of-view imaging capabilities.
Drawing connections and distinctions between the FP approach and two related microscopy techniques, tomographic microscopy [27–30] and contact-imaging microscopy [31, 32], also helps to clarify the operating principle of FP. Tomographic microscopy uses angle-varied plane waves for sample illumination and uses the tomographic reconstruction routine to recover a 3D image of the sample. It is clear that both FP and tomographic microscopy capture multiple perspective images of a sample under different plane-wave illuminations. Instead of recovering the 3D information, FP synthesizes different angular perspectives to increase a 2D object’s spatial resolution. Furthermore, the resolution of tomographic microscopy is in general limited by the NA of the objective lens, while FP is able to bypass this limitation by imposing the panning Fourier constraint. We also note that an FP data set of a 3D object can be processed in a similar manner as tomographic microscopy to perform 3D sample refocusing and rendering. Contact-imaging microscopy [31, 32] is a lensless imaging approach that uses angle-varied illuminations to introduce sub-pixel shifts of the acquired images. These images are then registered in the spatial domain and deconvolved with the pixel point-spread function to bypass the pixel aliasing problem of the image sensor. To achieve high-resolution imaging capability, this approach requires the sample to be placed in close proximity to the sensor chip. FP, on the other hand, alternately imposes the object support constraints between the spatial and spectral domains to recover the high-resolution complex sample image. By introducing a phase factor in the recovery procedures, FP is able to correct for aberrations and extend the depth-of-focus beyond the physical limitation of the objective lens.
A key aspect of a successful FP reconstruction is the data redundancy requirement of the recovery process. In particular, such a data redundancy requirement is important for recovering the ‘lost’ phase information of the sample. In this paper, we analyze such a requirement in both the spatial and Fourier domains. This paper is structured as follows: we will first discuss the spectral-spatial sampling requirement of the FP recovery process. We will then report a sparsely sampled FP scheme by selectively updating the pixel values in the spatial domain. We will also demonstrate the use of the reported scheme for bypassing the high-dynamic-range (HDR) combination step in the original FP algorithm. Next, we will discuss a sub-sampled FP scheme and use it to solve the pixel aliasing problem that plagues the original FP algorithm. Finally, we will summarize the results and discuss future directions of the FP approach.
2. Sampling in the spectral domain
In an FP experiment, the interaction between a plane wave illumination and a sample can be modeled as o(x, y)·exp(j(k_x·x + k_y·y)), where o(x, y) is the complex transmission function of the sample and exp(j(k_x·x + k_y·y)) is a plane wave illumination with wavevector (k_x, k_y). Multiplication by the plane wave illumination in the spatial domain is equivalent to shifting the sample spectrum in the Fourier domain. Thus, images with different plane wave illuminations correspond to different spectrum regions in the Fourier space. Intuitively, a certain amount of spectrum overlapping between successive acquisitions is needed to connect all acquired images in the Fourier space. If there is no overlapping between these spectrum regions, FP reduces to a conventional phase retrieval procedure where each image can be processed independently. In this section, we investigate the spectrum overlapping requirement of the FP recovery process. Specifically, we want to answer the following question: how much spectrum overlapping is needed for a successful FP reconstruction?
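This spatial-domain/Fourier-domain equivalence is easy to check numerically. The following toy sketch (the variable names are ours, not from any experimental code) verifies that multiplying an object by a tilted plane wave circularly shifts its discrete spectrum, provided the tilt falls on integer frequency bins:

```python
import numpy as np

# Toy object: a random complex field on a small grid.
rng = np.random.default_rng(0)
n = 64
obj = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Plane-wave illumination tilted by (kx0, ky0), in integer frequency bins.
kx0, ky0 = 5, -3
y, x = np.mgrid[0:n, 0:n]
illum = np.exp(2j * np.pi * (kx0 * x + ky0 * y) / n)

# Multiplication in the spatial domain ...
spec_tilted = np.fft.fft2(obj * illum)
# ... equals a circular shift of the object spectrum in the Fourier domain.
spec_shifted = np.roll(np.fft.fft2(obj), (ky0, kx0), axis=(0, 1))

assert np.allclose(spec_tilted, spec_shifted)
```

For non-integer tilts the shift is no longer a pure circular permutation of the DFT bins, which is why the continuous-domain picture in the text is the more general statement.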
The spectrum overlapping percentage is determined by the angular variation between two successive illuminations. It is defined as the overlapping spectral region of two successive acquisitions divided by the entire region of the objective’s pupil function. A typical FP platform uses an LED array to provide angle-varied illuminations. As such, the spectrum overlapping percentage is determined by the spacing of the LED elements and the distance between the LED array and the sample. In Fig. 1, we investigate the spectrum overlapping requirement using simulations. The simulation parameters were chosen to realistically model a light microscope experiment, with an incident wavelength of 632 nm, a pixel size of 2.75 µm, and an objective NA of 0.08. We simulated the use of a 15 × 15 LED array for illuminating the sample with different incident angles. Different spectrum overlapping percentages were achieved by adjusting the distance between the LED array and the sample.
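Since each acquisition occupies a circular pupil region in the Fourier plane, the overlapping percentage follows in closed form from the circle-circle overlap area. The sketch below uses a hypothetical LED pitch and array-to-sample distance (illustrative numbers of our own, not the experimental values) together with the small-angle approximation:

```python
import numpy as np

# Assumed geometry (for illustration only): LED pitch and LED-to-sample
# distance set the illumination-angle step; the objective NA sets the
# pupil radius in spatial frequency.
wavelength = 632e-9      # m
na = 0.08
led_pitch = 4e-3         # m, assumed LED spacing
led_distance = 80e-3     # m, assumed array-to-sample distance

r = na / wavelength                           # pupil radius (cycles/m)
d = (led_pitch / led_distance) / wavelength   # spectrum shift between neighbors

def overlap_percentage(d, r):
    """Area overlap of two equal circles (radius r, center distance d),
    normalized by one circle's area."""
    if d >= 2 * r:
        return 0.0
    area = 2 * r**2 * np.arccos(d / (2 * r)) - (d / 2) * np.sqrt(4 * r**2 - d**2)
    return area / (np.pi * r**2)

# Roughly 61% overlap for these assumed numbers.
print(f"spectrum overlap: {overlap_percentage(d, r):.1%}")
```

Moving the LED array farther from the sample shrinks the angular step d and raises the overlap, which is exactly the knob adjusted in the simulations above.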
The high-resolution input intensity and phase profiles are shown in Figs. 1(a1) and 1(a2), which serve as the ground truth of the simulated complex object. We then simulated the low-resolution measurements under different incident angles by imposing a low-pass filter at the corresponding regions of the Fourier space. These low-resolution images were then used to reconstruct the high-resolution complex sample image following the FP recovery procedures [1, 3]. Figures 1(b)-1(d) demonstrate the FP reconstructions under different spectrum overlapping percentages. It is obvious that the reconstruction quality of Fig. 1(b) (with an 18% overlapping percentage) is worse than those with higher overlapping percentages. The image qualities of different FP reconstructions are quantified in Fig. 1(e), where root-mean-square (RMS) errors (i.e., the difference between the ground truth and the recovered images) are plotted as a function of the spectrum overlapping percentage. It is shown that the RMS error decreases as the spectrum overlapping percentage increases, and a minimum of ~35% overlapping percentage is needed for a successful FP reconstruction.
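A minimal sketch of the simulated measurement step reads as follows. The function name and interface are our own, and for simplicity the low-pass-filtered image is kept on the high-resolution grid rather than downsampled to the camera grid:

```python
import numpy as np

def low_res_measurement(obj_highres, pupil_radius, kx, ky):
    """Simulate one low-resolution intensity image for an illumination
    wavevector (kx, ky), all quantities in pixels of the hi-res grid.
    (Assumed interface, for illustration only.)"""
    n = obj_highres.shape[0]
    spec = np.fft.fftshift(np.fft.fft2(obj_highres))
    fy, fx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    # Circular low-pass filter centered at the illumination-dependent offset.
    mask = (fx - kx) ** 2 + (fy - ky) ** 2 <= pupil_radius ** 2
    field = np.fft.ifft2(np.fft.ifftshift(spec * mask))
    return np.abs(field) ** 2   # the camera records intensity only
```

Each illumination angle thus selects a different circular sub-region of the object spectrum, and the set of intensity images forms the input to the recovery routine.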
3. Sampling in the spatial domain
The FP recovery process uses the amplitudes of the acquired images to constrain the high-resolution reconstruction in the spatial domain. In order to discuss the sampling requirement in the spatial domain, we first review the amplitude updating process in the FP algorithm. The FP algorithm starts with a high-resolution spectrum estimate of the sample. For each illumination angle, we select a small sub-region of this spectrum and perform an inverse Fourier transform to generate a low-resolution target image. The amplitude component of this target image is then replaced by that of the acquired image while the phase component is kept unchanged. This amplitude updating process is repeated for all intensity measurements, and we iterate through the process several times until the solution converges. In this section, we want to answer the following question: how many pixels need to be updated in the spatial domain for a successful FP reconstruction?
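The updating loop described above can be sketched in a few lines. This is a deliberately simplified version with interfaces of our own choosing: a single fixed circular pupil, a flat initial guess, and no refinements such as pupil-function or aberration correction:

```python
import numpy as np

def fp_recover(images, offsets, pupil_radius, n_hi, n_iters=10):
    """Sketch of the FP recovery loop (assumed interfaces): `images` are
    low-resolution intensity measurements, `offsets` the corresponding
    spectrum-center positions in pixels of the hi-res Fourier grid."""
    n_lo = images[0].shape[0]
    spec = np.zeros((n_hi, n_hi), dtype=complex)
    spec[n_hi // 2, n_hi // 2] = 1.0          # flat (constant) initial guess
    fy, fx = np.mgrid[-n_lo // 2:n_lo // 2, -n_lo // 2:n_lo // 2]
    pupil = fx ** 2 + fy ** 2 <= pupil_radius ** 2
    for _ in range(n_iters):
        for img, (cy, cx) in zip(images, offsets):
            # Select the spectrum sub-region for this illumination angle.
            ys = slice(n_hi // 2 + cy - n_lo // 2, n_hi // 2 + cy + n_lo // 2)
            xs = slice(n_hi // 2 + cx - n_lo // 2, n_hi // 2 + cx + n_lo // 2)
            sub = spec[ys, xs] * pupil
            target = np.fft.ifft2(np.fft.ifftshift(sub))
            # Replace the amplitude with the measurement; keep the phase.
            target = np.sqrt(img) * np.exp(1j * np.angle(target))
            updated = np.fft.fftshift(np.fft.fft2(target))
            spec[ys, xs] = np.where(pupil, updated, spec[ys, xs])
    return np.fft.ifft2(np.fft.ifftshift(spec))
```

The amplitude-replacement line is the spatial-domain constraint whose sampling requirement is analyzed in this section; the sparsely sampled schemes below modify only that line.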
The simulation parameters are the same as those of the previous section. We introduce a sparsely sampled mask in the updating process, as shown in Figs. 2(a3)-2(c3). This mask contains only two types of pixel values: 0 and 1. The regions corresponding to value ‘1’ are updated as in the original FP algorithm, while those corresponding to ‘0’ are kept unchanged in the updating process. A pixel with value ‘0’ is termed an empty pixel. Figures 2(a1)-2(c1) and 2(a2)-2(c2) demonstrate the recovered FP images with different empty pixel percentages. We can see that the reconstruction quality of Fig. 2(c) (with 90% empty pixels) is worse than those with lower empty pixel percentages. We also quantified the FP reconstruction qualities using the RMS error metric in Fig. 2(d). It is shown that the FP algorithm is able to recover the complex image with a maximum of ~70% empty pixels.
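The masked update itself is a one-line modification of the amplitude replacement step: wherever the mask is 1 the amplitude is replaced by the measurement, and wherever it is 0 (an empty pixel) the current estimate is left untouched. A toy sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(1)
measured = rng.random((8, 8))   # amplitude of one acquired low-res image
target = rng.random((8, 8)) * np.exp(1j * rng.random((8, 8)))  # current estimate

# Binary sampling mask: 1 = update from the measurement, 0 = empty pixel.
mask = (rng.random((8, 8)) > 0.7).astype(float)   # ~30% of pixels updated here

updated = np.where(mask == 1,
                   measured * np.exp(1j * np.angle(target)),  # amplitude replaced
                   target)                                    # empty pixels kept
assert np.allclose(np.abs(updated)[mask == 1], measured[mask == 1])
```

The phase is always carried over from the current estimate; only the amplitude is selectively constrained, which is what lets the empty pixels be filled in by redundancy from neighboring spectrum regions.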
In Fig. 3, we further analyze the joint spectral-spatial sampling requirement of the FP recovery process. Different curves in Fig. 3 represent different empty pixel percentages. The convergence region is enclosed by the dashed line at the bottom right. It is shown that a higher spectral sampling percentage results in a lower spatial sampling requirement. The interplay between the spectral and spatial sampling requirements gives us more flexibility in designing FP imaging platforms. For example, we can trade off the spatial sampling by using more LED illuminations. In the following two sections, we will demonstrate two application examples of exploiting such a spectral-spatial sampling interplay.
4. Sparsely sampled Fourier ptychography
As discussed above, the sampling interplay between the spectral and spatial domains allows one to trade off the spatial sampling with an increased number of illuminations. In this section, we will report a sparsely sampled scheme following such a strategy. The reported scheme is able to bypass the HDR combination process in the original FP platform and shorten the acquisition time considerably.
As demonstrated in the original FP work, a typical FP platform needs to acquire multiple images of the same scene with different exposure times (normally, one short and one long exposure are needed). These raw images are then combined to produce an HDR image of the scene. Figures 4(a1)-4(c1) and 4(a2)-4(c2) demonstrate two examples of such an HDR combination process. Figures 4(a1) and 4(a2) are two different raw images of the same blood smear sample, where many regions are overexposed. Figures 4(b1) and 4(b2) demonstrate the reconstructed images following the HDR combination step.
The principle of the sparsely sampled FP is straightforward. In the amplitude updating process, it produces a sparsely sampled mask by binarizing the overexposed raw image, as shown in Figs. 4(c1) and 4(c2). This mask is then imposed in the amplitude updating process: the regions with overexposed pixels are kept unchanged while the other regions are updated with the intensity measurements. Depending on the empty pixel percentage, one may need to increase the number of plane wave illuminations to ensure solution convergence. In a typical microscope experiment, the percentage of overexposed pixels is no more than 15%, and thus the solution convergence condition still holds.
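Generating the mask amounts to thresholding the raw image at the sensor's saturation level. A minimal sketch for an 8-bit sensor (the helper name and the saturation value are assumptions for illustration):

```python
import numpy as np

def overexposure_mask(raw, saturation=255):
    """Assumed helper: 0 where the 8-bit raw image is saturated (these
    pixels are skipped in the amplitude update), 1 elsewhere."""
    return (raw < saturation).astype(np.uint8)

raw = np.array([[10, 255],
                [128, 254]], dtype=np.uint8)
mask = overexposure_mask(raw)
# Only the saturated pixel (value 255) is excluded from the update.
```

In practice the mask is computed per acquired image, so each illumination angle excludes only its own overexposed regions.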
We validated the sparsely sampled FP scheme using a light microscope experiment. The experimental geometry was similar to that of the simulation, and we used the same blood smear slide as in Fig. 4 as our sample. Figure 5(a) shows the raw image of the sample with a pixel size of 2.75 µm. Figures 5(b1) and 5(b2) are the recovered intensity and phase images without using the HDR combination process. These two FP reconstructions are corrupted by the overexposed pixels in the raw images. Figures 5(c1) and 5(c2) are the recovered images using the HDR combination process. The corresponding acquisition time is about 3 minutes (450 images in total). The results of the proposed sparsely sampled FP are shown in Figs. 5(d1) and 5(d2), and the corresponding acquisition time is about 1.6 minutes (225 images in total). From the comparisons shown in Fig. 5, we can see that the image quality of the reported scheme is comparable to that of the original FP with the HDR combination step. The advantage of the reported scheme is obvious: it eliminates the multi-exposure acquisition process and shortens the acquisition time by ~50%.
5. Sub-sampled Fourier ptychography
In an FP platform, the pixel size of the image sensor needs to be carefully chosen to match the optical transfer function of the objective lens. The Nyquist theorem dictates that the pixel size needs to be smaller than λ/(2∙NA), where λ is the wavelength of the light field and NA is the numerical aperture of the objective lens (the magnification factor is normalized in our discussion). A pixel size larger than this Nyquist limit may lead to the pixel aliasing problem in the Fourier domain (Fig. 6(a)). It will also significantly degrade the quality of the FP reconstruction. In this section, we will report a sub-sampled scheme, a special case of the sparsely sampled FP, to address the pixel aliasing problem. It is also important to acknowledge that a similar updating procedure has been discussed in the lensless ptychography approach [35].
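As a quick numerical check of the λ/(2∙NA) criterion, using the simulation parameters of this section (λ = 0.63 µm, NA = 0.1, pixel size 4.125 µm):

```python
# Nyquist pixel-size limit at the sample plane (magnification normalized out).
wavelength_um = 0.63
na = 0.1
nyquist_pixel_um = wavelength_um / (2 * na)   # 0.63 / 0.2 = 3.15 µm

pixel_um = 4.125
aliased = pixel_um > nyquist_pixel_um   # True: pixel aliasing is expected
```

The 4.125 µm pixel exceeds the 3.15 µm limit, so the raw images are aliased, which is the regime the sub-sampled scheme is designed to handle.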
The sub-sampled FP scheme is shown in Fig. 6(b). We divide one original pixel into 4 sub-pixels, and thus, the effective pixel size is only half of the original pixel size. We then generate a sub-sampled mask in the amplitude updating step, as shown in the left part of Fig. 6(b). Only 1 out of 4 sub-pixels is updated by the measurement and the other 3 sub-pixels are kept unchanged in the updating process. Essentially, this scheme is a special case of the sparsely sampled FP, with a 75% empty pixel percentage and a pre-defined sub-sampled mask.
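The sub-sampled mask can be generated deterministically: the measured grid is doubled in each dimension, and a single sub-pixel per 2 × 2 block is flagged for updating. A minimal sketch (the helper name is our own):

```python
import numpy as np

def subsampled_mask(n):
    """One-in-four sub-pixel mask for an n x n measured grid: each 2x2
    block of the doubled grid has a single '1' (updated) sub-pixel and
    three '0' (empty) sub-pixels."""
    mask = np.zeros((2 * n, 2 * n), dtype=np.uint8)
    mask[::2, ::2] = 1          # update the top-left sub-pixel of each block
    return mask

m = subsampled_mask(3)
# 75% of the sub-pixels are empty, matching the sparsely sampled analysis.
```

Unlike the overexposure masks of the previous section, this mask is fixed in advance and identical for every acquired image.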
We first validate this scheme using simulations. We chose a pixel size of 4.125 µm, a wavelength of 0.63 µm, and an NA of 0.1. Therefore, the pixel size is larger than the Nyquist limit of 3.15 µm. We simulated the use of a 15 × 15 LED array for illuminating the sample from different incident angles. The spectrum overlapping percentage is ~65%. Figure 7(a) shows one raw intensity image of the sample. Figures 7(b1) and 7(b2) demonstrate the FP reconstructions using the sub-sampled mask in the updating process. Figure 7(b3) shows the corresponding recovered spectrum in the Fourier space. Due to the pixel aliasing problem, there is not enough bandwidth to impose the circular pupil function in the Fourier space. As such, each low-resolution image in Fig. 7(b3) corresponds to a square region in the Fourier space. The case without using the sub-sampled mask is shown in Figs. 7(c1)-7(c3), where the reconstructions are corrupted by the pixel aliasing problem.
We then validated the sub-sampled FP scheme using a light microscope experiment. The experimental setting was the same as in the simulation, and a USAF resolution target was used as the sample. Figure 8(a) shows the raw intensity image of the sample. Figures 8(b1) and 8(b2) are the recovered high-resolution image and spectrum using the sub-sampled scheme. The FP reconstructions without using the sub-sampled scheme are shown in Figs. 8(c1) and 8(c2). It is obvious that the sub-sampled FP scheme is able to reconstruct an artifact-free sample image. On the other hand, the FP reconstructions without using the sub-sampled mask are corrupted by the pixel aliasing problem.
In conclusion, we have investigated the data redundancy requirements of the FP approach in both the spectral and spatial domains. We have reported a sparsely sampled FP scheme that selectively updates the pixel values in the spatial domain. Such a scheme eliminates the multi-exposure acquisition process of the original FP platform and considerably shortens the acquisition time. We have also discussed a sub-sampled FP scheme and used it to solve the pixel aliasing problem that plagues the original FP setting. Our on-going effort includes the development of single-pixel FP using the sub-sampled scheme.
Finally, we note that the data redundancy requirements may also depend on the chosen samples. The relationship between the image compressibility and the data redundancy requirement deserves further investigation. This relationship can also be related to the recent development of compressive sensing [36]. The study in this paper, however, provides an engineering guideline for designing FP experiments.
We are grateful for the constructive discussions with Mr. Xiaoze Ou. For more information on Fourier ptychography, please visit https://sites.google.com/site/gazheng/.
References and links
1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]
4. M. Ryle and A. Hewish, “The synthesis of large radio telescopes,” Mon. Not. R. Astron. Soc. 120, 220 (1960).
10. J. Di, J. Zhao, H. Jiang, P. Zhang, Q. Fan, and W. Sun, “High resolution digital holographic microscopy with a wide field of view based on a synthetic aperture technique and use of linear CCD scanning,” Appl. Opt. 47(30), 5654–5659 (2008). [CrossRef] [PubMed]
11. T. R. Hillman, T. Gutzler, S. A. Alexandrov, and D. D. Sampson, “High-resolution, wide-field object reconstruction with synthetic aperture Fourier holographic optical microscopy,” Opt. Express 17(10), 7873–7892 (2009). [CrossRef] [PubMed]
12. R. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik (Stuttg.) 35, 237 (1972).
15. R. A. Gonsalves, “Phase retrieval and diversity in adaptive optics,” Opt. Eng. 21, 215829 (1982).
16. R. A. Gonsalves, “Phase retrieval by differential intensity measurements,” J. Opt. Soc. Am. A 4(1), 166–170 (1987). [CrossRef]
17. L. Allen and M. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199(1-4), 65–75 (2001). [CrossRef]
23. J. Rodenburg, “Ptychography and related diffractive imaging methods,” Adv. Imaging Electron Phys. 150, 87–184 (2008). [CrossRef]
26. M. Humphry, B. Kraus, A. Hurst, A. Maiden, and J. Rodenburg, “Ptychographic electron microscopy using high-angle dark-field scattering for sub-nanometre resolution imaging,” Nat. Commun. 3, 730 (2012).
27. S. O. Isikman, W. Bishara, S. Mavandadi, F. W. Yu, S. Feng, R. Lau, and A. Ozcan, “Lens-free optical tomographic microscope with a large imaging volume on a chip,” Proc. Natl. Acad. Sci. U.S.A. 108(18), 7296–7301 (2011). [CrossRef] [PubMed]
29. Y. Sung, W. Choi, N. Lue, R. R. Dasari, and Z. Yaqoob, “Stain-free quantification of chromosomes in live cells using regularized tomographic phase microscopy,” PLoS ONE 7(11), e49502 (2012). [CrossRef] [PubMed]
30. C. Fang-Yen, W. Choi, Y. Sung, C. J. Holbrow, R. R. Dasari, and M. S. Feld, “Video-rate tomographic phase microscopy,” J. Biomed. Opt. 16, 011005 (2011).
31. G. Zheng, S. A. Lee, Y. Antebi, M. B. Elowitz, and C. Yang, “The ePetri dish, an on-chip cell imaging platform based on subpixel perspective sweeping microscopy (SPSM),” Proc. Natl. Acad. Sci. U.S.A. 108(41), 16889–16894 (2011). [CrossRef] [PubMed]
35. T. B. Edo, D. J. Batey, A. M. Maiden, C. Rau, U. Wagner, Z. D. Pešić, T. A. Waigh, and J. M. Rodenburg, “Sampling in x-ray ptychography,” Phys. Rev. A 87(5), 053850 (2013). [CrossRef]
36. R. G. Baraniuk, “Compressive sensing [lecture notes],” IEEE Sig. Proc. Mag. 24(4), 118–121 (2007). [CrossRef]