Fourier ptychography (FP) is a recently developed imaging approach that bypasses the resolution limit defined by the lens’ aperture. In current FP imaging platforms, systematic noise sources come from the intensity fluctuation of multiple LED elements and the pupil aberrations of the employed optics. These system uncertainties can significantly degrade the reconstruction quality and limit the achievable resolution, imposing a restriction on the effectiveness of the FP approach. In this paper, we report an optimization procedure that performs adaptive system correction for Fourier ptychographic imaging. Similar to the techniques used in phase retrieval, the reported procedure involves the evaluation of an image-quality metric at each iteration step, followed by the estimation of an improved system correction. This optimization process is repeated until the image-quality metric is maximized. As a demonstration, we used this process to correct for illumination intensity fluctuation, to compensate for pupil aberration of the optics, and to recover several unknown system parameters. The reported adaptive correction scheme may improve the robustness of Fourier ptychographic imaging by factoring out system imperfections and uncertainties.
© 2013 Optical Society of America
Fourier ptychography (FP) [1, 2] is a recently developed phase retrieval technique that applies angular diversity for high-resolution complex image recovery. The recovery procedure of FP shares its roots with synthetic aperture concepts [3–9] and other phase retrieval techniques [10–16]. In a typical Fourier ptychographic imaging platform, a fixed LED array serves as a set of partially coherent light sources for angle-varied illumination. At each illumination angle, FP records a low-resolution intensity image of the sample, with the resolution defined by the numerical aperture (NA) of the objective lens. By iteratively stitching together many of these low-resolution intensity images in the Fourier space, FP recovers a high-resolution complex (phase and amplitude) image of the sample. The resolution of the final FP reconstruction is determined by the largest incident angle of the LED source. In this regard, FP bypasses the conventional design constraints in microscope platforms, such as resolution versus field-of-view, resolution versus depth-of-focus, and so on.
Drawing connections and distinctions between the FP approach and a closely related modality, ptychography, helps to clarify FP's principle of operation and to understand the experimental challenges in FP platforms. Ptychography is a phase retrieval technique that applies translational diversity for complex image recovery. While there exist many possible implementations [14, 17–22], the general ptychographic approach uses a spatially confined probe to illuminate the sample. The sample is then mechanically translated to multiple spatial locations, and the corresponding diffraction patterns are used as constraints in an iterative phase retrieval algorithm to invert the diffraction process. With ptychography, the object support for phase retrieval is imposed by the confined illumination probe in the spatial domain. Similar to conventional ptychography, FP also records multiple intensity images of the sample and uses them for complex image recovery. With FP, however, angle-varied illuminations serve as the angular diversity functions for phase retrieval, and the corresponding object supports are imposed by the confined coherent transfer function in the Fourier domain.
The differences between conventional ptychography (with translational diversity) and FP (with angular diversity) lead to different experimental challenges in their implementations. In ptychography, positional uncertainty in the mechanical scanning process imposes a restriction on the reconstruction quality and the achievable resolution. Accurate knowledge of the translation positions is essential for a successful complex image recovery. Along this line, different ptychographic recovery routines have been developed for correcting positional errors, including the conjugate gradient algorithm, the annealing algorithm, the genetic algorithm, the global drift model, and the cross-correlation approach. In FP settings, on the other hand, the sample is fixed during the data acquisition process, while multiple light sources are used for angle-varied illuminations. One of the major sources of systematic noise in FP settings is the illumination uncertainty of the multiple LED elements, the counterpart of the positional uncertainty problem in conventional ptychography settings. Specifically, this challenge arises from four aspects: 1) the illumination intensities and incident angles of different LED elements are all different, so we need to calibrate the intensities of the different LED elements and measure their angular emission characteristics; 2) we also need to characterize the angular response of the light-collecting optics as well as the pixel point-spread function of the image sensor; 3) in wide field-of-view settings, the illumination intensity varies spatially across the entire field-of-view, and such spatial variations differ between LED elements; 4) the illumination intensities of different LED elements behave differently over time. In our FP prototype, we observe ~40% intensity drift for certain LED elements over a time period of several hours.
We note that it is possible to alleviate the illumination uncertainty problem by developing a sophisticated calibration procedure with real-time intensity-monitoring setups and associated time-synchronized electronics. However, such a development may require substantial maintenance effort and may not be accessible to most biologists or microscopists.
Another major difference between conventional ptychography and FP is their optical configurations. Conventional ptychography is a lensless approach, while FP employs a low-NA objective lens for image acquisition. The use of a low-NA objective lens in FP settings naturally offers a fixed, large field-of-view, a higher signal-to-noise ratio (with focusing elements), and no mechanical scanning. However, the field-dependent pupil aberrations of the objective lens need to be characterized in the recovery process. Furthermore, any modification to the FP setup may require a new calibration process, imposing a severe restriction on the robustness of the FP approach.
In this paper, we report an optimization procedure that performs adaptive system correction for Fourier ptychographic imaging. Similar to the techniques used in phase retrieval, the reported procedure involves the evaluation of an image-quality metric at each iteration step, followed by the estimation of an improved system correction. This optimization process is repeated until the image-quality metric is maximized. In the following, we will first outline the concept of the proposed adaptive FP recovery routine. Next, we will use the reported scheme to correct for illumination intensity fluctuation, to perform automatic aberration correction, and to recover several unknown system parameters. Finally, we will summarize the results and discuss other future opportunities for FP algorithm developments. The reported adaptive correction scheme may make FP accessible to biologists and microscopists.
2. The adaptive Fourier ptychographic recovery framework
There are two key components in conventional adaptive optics systems: a wavefront sensor for measuring the distortion and an adaptive optical element (for example, a deformable mirror) for performing system correction. If a wavefront sensor is not available, an image-quality metric can be used as the guide star for providing feedback on system corrections. To extend the concepts of adaptive optics to Fourier ptychographic imaging, we need to first define an image-quality metric as a guide star for the optimization process, and then perform system corrections to maximize such a guide star. We note that FP reduces the optical system to a complex transfer function in an iterative recovery process. Therefore, the system correction in FP can be performed in a purely computational manner (no adaptive optical component is needed), offering a unique advantage in system simplicity and reliability.
In order to define an appropriate image-quality metric (i.e., the guide star) for Fourier ptychographic imaging, it is helpful to review the FP setup and the key steps of the recovery algorithm (see the left and the middle parts of Fig. 1). As reported in Ref. [1], we used an LED matrix and a low-NA objective lens in the FP setup. We sequentially turned on individual LED elements in the matrix and acquired the corresponding low-resolution intensity images of the sample. Based on all the images we collected, we then reconstructed the high-resolution complex sample image following an iterative algorithm. This FP recovery algorithm starts with a high-resolution spectrum estimate of the sample, \(\hat{S}_h(k_x, k_y)\), which can be a random guess. Next, this sample spectrum estimate is sequentially updated with the low-resolution intensity measurements \(I_{mi}\) (subscript 'm' stands for measurement and 'i' stands for the ith LED). For each update step, we select a small sub-region of \(\hat{S}_h\), corresponding to a low-pass filter, and apply an inverse Fourier transformation to generate a new low-resolution target image \(\sqrt{I_{li}}\,e^{i\varphi_{li}}\) (subscript 'l' stands for low-resolution and 'i' stands for the ith LED). We then replace the target image's amplitude component \(\sqrt{I_{li}}\) with the square root of the measurement, \(\sqrt{I_{mi}}\), to form an updated, low-resolution target image \(\sqrt{I_{mi}}\,e^{i\varphi_{li}}\). This image is then used to update its corresponding sub-region of \(\hat{S}_h\). The replace-and-update sequence is repeated for all intensity measurements, and we iterate through the above process several times until solution convergence, at which point \(\hat{S}_h\) is transformed to the spatial domain to produce a high-resolution complex sample image.
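For concreteness, the replace-and-update loop described above can be sketched in a few lines of NumPy. This is a minimal illustration rather than the authors' implementation: it assumes each LED's pupil support is supplied as a pre-computed rectangular boolean mask on the high-resolution spectrum grid, and it omits refinements such as sub-pixel spectrum shifts and pupil-function weighting.

```python
import numpy as np

def fp_recover(images, masks, n_iters=10):
    """Sketch of the iterative FP update loop.

    images: low-resolution intensity measurements I_mi (one per LED).
    masks:  boolean masks on the high-resolution spectrum grid selecting,
            for each LED, the rectangular sub-region passed by the objective.
    """
    spectrum = np.ones(masks[0].shape, dtype=complex)  # initial spectrum guess
    for _ in range(n_iters):
        for I_m, mask in zip(images, masks):
            # low-pass filter: extract the sub-spectrum seen under this LED
            sub = spectrum[mask].reshape(I_m.shape)
            target = np.fft.ifft2(np.fft.ifftshift(sub))   # low-res target image
            # replace the amplitude with the square root of the measurement,
            # keeping the current phase estimate
            target = np.sqrt(I_m) * np.exp(1j * np.angle(target))
            # write the updated sub-spectrum back into the high-res estimate
            spectrum[mask] = np.fft.fftshift(np.fft.fft2(target)).ravel()
    # transform the converged spectrum to the spatial domain
    return np.fft.ifft2(np.fft.ifftshift(spectrum))
```

As a sanity check, with a single mask covering the whole spectrum, one pass simply enforces the measured amplitude on the recovered image.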
A typical image-quality metric for conventional adaptive optical systems is the sharpness metric, which can be calculated through the gradient of the image. In FP settings, however, the sharpness of the reconstructed image can be a result of incorrect modeling. For example, if there are some unknown aberrations associated with the objective lens and they have not been modeled in the pupil transfer function, the resulting FP reconstruction may contain many sharp features that are artifacts of the incorrect modeling. In this paper, we use a convergence-related property [10–16] to quantify the quality of FP reconstructions. We define the image-quality metric of Fourier ptychographic imaging as follows:

\[ \text{Convergence index} = \sum_i \frac{\operatorname{mean}\left(\sqrt{I_{mi}}\right)}{\sum_{x,y} \left| \sqrt{I_{li}} - \sqrt{I_{mi}} \right|} \tag{1} \]

The '(x, y)' summation in Eq. (1) adds up all the pixel values in the spatial domain, and the 'i' summation adds up the contributions of all intensity images. If the difference between \(\sqrt{I_{li}}\) and \(\sqrt{I_{mi}}\) is small for all 'i', the FP reconstruction is considered consistent with the intensity measurements, and the resulting value of the convergence index is high. In other words, the convergence index can be used to quantify the quality of the FP reconstruction; the higher the convergence index, the better the quality of the FP reconstruction. We note that such a convergence metric has been widely used in phase retrieval techniques [10–16] for quantifying the quality of reconstruction. In the following, we will use this metric as a guide star for various optimization processes in our FP settings.
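A direct transcription of the convergence index of Eq. (1) might look as follows. This is a sketch; the small guard constant in the denominator is our addition, to avoid division by zero for a perfectly consistent reconstruction.

```python
import numpy as np

def convergence_index(measured, predicted):
    """Convergence index of Eq. (1): for each LED i, the mean measured
    amplitude divided by the total absolute amplitude mismatch, summed
    over all images. Higher values mean a more self-consistent
    reconstruction."""
    index = 0.0
    for I_m, I_l in zip(measured, predicted):
        mismatch = np.abs(np.sqrt(I_l) - np.sqrt(I_m)).sum()
        index += np.sqrt(I_m).mean() / max(mismatch, 1e-12)  # divide-by-zero guard
    return index
```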
3. Correction of illumination intensity
In this section, we will demonstrate the use of the convergence metric for correcting the illumination uncertainty problem in FP settings. Following the concepts developed in phase retrieval techniques, we introduce an intensity correction factor \(c_i\) [29, 30] to minimize the difference term \(\left|\sqrt{I_{li}} - \sqrt{I_{mi}}\right|\) in Eq. (1). The flow chart of the adaptive Fourier ptychographic algorithm for intensity correction is shown in Fig. 1.
In the first iteration, the algorithm follows the FP recovery routine to produce a better sample estimate \(\hat{S}_{h1}\) (subscript '1' denotes the first iteration). Starting from the second iteration, we introduce a set of intensity correction factors \(c_i\) to compensate for the intensity error of the LED elements. At the end of the second iteration, we recover a better sample estimate \(\hat{S}_{h2}\) and a set of intensity correction factors \(c_i\). Lastly, we repeat the procedures in the second iteration until the standard deviation of \(c_i\) (i = 1, 2, …, N) is less than a pre-defined value. In our implementation, we used a pre-defined standard deviation of 0.01, and the corresponding number of iterations is typically 5-10.
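One plausible way to realize the correction-factor update is to rescale each raw measurement so that its total energy matches the low-resolution image predicted from the current sample estimate. The sketch below illustrates this idea rather than the exact update rule of the paper; the function names and the convergence test on the factor ratios are our own.

```python
import numpy as np

def update_correction_factors(measured, predicted):
    """Refresh the per-LED correction factors c_i: scale each raw
    measurement so its total energy matches the forward-predicted
    low-resolution image from the current sample estimate."""
    return [I_l.sum() / I_m.sum() for I_m, I_l in zip(measured, predicted)]

def has_converged(factors, prev_factors, tol=0.01):
    """Stop when the factors settle: the standard deviation of the
    iteration-to-iteration ratio across all LEDs drops below the
    pre-defined threshold (0.01 in the text)."""
    ratios = np.asarray(factors) / np.asarray(prev_factors)
    return float(np.std(ratios)) < tol
```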
We first evaluate the effectiveness of the intensity correction algorithm using simulations. The parameters in the simulations were chosen to realistically model a light microscope experiment, with an incident wavelength of 632 nm, a pixel size of 2.75 µm, and an objective NA of 0.08. The high-resolution input intensity and phase profiles are shown in Figs. 2(a1) and 2(a2); they serve as the ground truth of the simulated complex sample. We used a 15 × 15 LED matrix as the light sources for providing angle-varied illuminations. The distance between adjacent LED elements is 4 mm, and the distance between the sample and the LED matrix is 70 mm. A set of 225 low-resolution intensity images was simulated under this setting, and each raw image corresponds to one plane-wave illumination. Intensity uncertainty was artificially introduced by multiplying each raw image with a random constant. We then used the conventional FP reconstruction routine (without intensity correction) to recover the high-resolution images. Figures 2(b)–2(e) demonstrate the FP reconstructions under different levels of illumination fluctuation. Figures 2(b1)–2(e1) and Figs. 2(b2)–2(e2) show the recovered intensity and phase profiles in the spatial domain, while Figs. 2(b3)–2(e3) show the recovered spectra in the Fourier domain. The corresponding maximum synthetic NA in Figs. 2(b3)–2(e3) is ~0.5 under this setting. This set of simulations illustrates how intensity drift can affect the quality of FP reconstructions.
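The illumination geometry used in this simulation maps directly to a set of plane-wave wavevectors. The helper below (a sketch; the function name is ours) computes the transverse spatial frequency, in units of 1/µm, contributed by each LED from the stated geometry: a 15 × 15 matrix, 4 mm pitch, 70 mm from the sample, 632 nm illumination.

```python
import numpy as np

def led_k_vectors(n_side=15, pitch_mm=4.0, distance_mm=70.0, wavelength_um=0.632):
    """Transverse illumination spatial frequencies (kx, ky) = sin(theta)/lambda
    for each LED in the matrix, computed from the LED positions relative to
    the sample. Units: 1/um."""
    idx = np.arange(n_side) - (n_side - 1) / 2.0      # LED indices about center
    x, y = np.meshgrid(idx * pitch_mm, idx * pitch_mm)  # LED positions (mm)
    r = np.sqrt(x**2 + y**2 + distance_mm**2)           # distance LED -> sample
    kx = (x / r) / wavelength_um                        # sin(theta_x)/lambda
    ky = (y / r) / wavelength_um
    return kx, ky
```

Under these numbers, the corner LEDs illuminate at an angle whose sine is roughly 0.49, which sets the maximum spectrum shift each raw image contributes in the Fourier domain.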
In Fig. 3, we demonstrate the FP reconstructions using the intensity correction routine. In this simulation, we introduced 100% intensity fluctuation to the raw images and used 10 iterations for the reconstruction. Figure 3(a) shows the intensity-corrected FP reconstructions. As a comparison, the results without using the intensity correction routine are shown in Fig. 3(b). The improvement in image quality is quantified in Fig. 3(c), where the root-mean-square (RMS) errors (between the ground truth and the recovered image) are plotted as a function of the intensity drift. From Fig. 3, it is clear that the intensity correction routine can substantially improve the quality of FP reconstructions in the presence of illumination uncertainty.
Using the same input images, we also investigate the robustness of the intensity correction routine under different conditions. In Fig. 4(a), we first test the reported routine under extreme intensity drift. The black curve shows the results without intensity correction; in this case, the RMS error saturates at 200% intensity drift. The red, blue, and green curves in Fig. 4(a) show the results of the intensity correction routine, with 10, 15, and 20 iterations, respectively. Two conclusions can be drawn from this figure: 1) the reported scheme is robust even under extreme intensity drift; 2) more iterations are needed to compensate for large intensity drift.
In Fig. 4(b), we also study the noise performance of the reported scheme. In this simulation, different amounts of speckle noise were introduced to the raw images before performing the intensity-corrected FP reconstruction. The black, red, and blue curves in Fig. 4(b) demonstrate the cases of 0, 0.01, and 0.05 speckle noise, respectively. Mixing additive noise with intensity drift only raises the floor of the RMS error; it does not affect the convergence of the reconstruction. In this regard, the reported scheme is robust against additive noise.
The reported scheme was also validated using a light microscope experiment. The experimental geometry was similar to that of the simulation, and we used a pathology slide (human adenocarcinoma of breast) as our specimen. In this experiment, we did not calibrate the intensity of the light sources; instead, we made two simplifying assumptions about the illumination intensity: 1) the light intensities and angular emission profiles of different LED elements are identical; 2) the LED emission is isotropic over the range from −40 to +40 degrees. Figure 5(a) shows the captured raw data using a 2X objective lens (0.08 NA), with a pixel size of 2.75 µm (Truesense KAI-29050). In Fig. 5(b), we followed the conventional FP routine to perform image reconstruction; Figs. 5(b1) and 5(b2) are the recovered intensity and phase profiles of the sample. Without accurate knowledge of the illumination intensity, the FP reconstructions in Fig. 5(b) are of low quality, and the cellular details of the sample cannot be resolved. On the other hand, the FP reconstructions using the reported scheme are shown in Fig. 5(c), which clearly demonstrates the improved image quality and resolution. We used 10 iterations for recovering the images in Figs. 5(b) and 5(c).
4. Correction of pupil aberrations
The reported adaptive FP scheme can also be used for correcting the pupil aberrations of the objective lens. Figure 6 demonstrates the use of the reported scheme for correcting second-order pupil aberrations. We use three Zernike modes (i.e., three unknown parameters) to model the second-order pupil transfer function of the objective lens. We then apply the Generalized Pattern Search (GPS) algorithm [31] to maximize the convergence index defined in Eq. (1). Figure 6(a) shows the raw data of the USAF target. Figure 6(b) shows the FP reconstruction using the adaptive correction scheme, while Fig. 6(c) shows the control result without using the adaptive correction scheme. The recovered pupil function is shown in Fig. 6(d). From Fig. 6(b), we can resolve group 9, element 3 (0.78 µm line width) of the USAF target, which clearly demonstrates the effectiveness of the adaptive FP scheme.
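The GPS step treats the convergence index as a black-box objective over the three Zernike coefficients. The following is a minimal coordinate pattern search in the same spirit (poll ± step along each coefficient, move on improvement, halve the step when no poll improves); it is a simplified stand-in for the full GPS machinery, not the exact implementation used here.

```python
import numpy as np

def pattern_search(metric, x0, step=0.5, tol=1e-4, max_evals=500):
    """Maximize a black-box metric (e.g., the convergence index as a
    function of three Zernike coefficients) by coordinate polling with
    step halving, in the spirit of generalized pattern search."""
    x = np.asarray(x0, dtype=float)
    best = metric(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step   # poll one coefficient at a time
                val = metric(trial)
                evals += 1
                if val > best:
                    x, best, improved = trial, val, True
        if not improved:
            step *= 0.5                   # refine the mesh and poll again
    return x, best
```

In a real run, `metric` would wrap a full FP reconstruction with the candidate Zernike coefficients folded into the pupil transfer function, returning the convergence index of Eq. (1).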
In Fig. 7, we also demonstrate the automatic aberration correction capability using a biological sample (a blood smear). Figure 7(a) shows the raw data of the blood smear. Figure 7(b) shows the FP reconstructions using the adaptive correction scheme, while Fig. 7(c) shows the control results without using the adaptive correction scheme. We note that FP is capable of recovering both the high-resolution intensity and phase images of the sample. Along this line, Figs. 7(b1) and 7(c1) show the high-resolution recovered intensity images, and Figs. 7(b2) and 7(c2) show the high-resolution recovered phase images.
5. Recovering of unknown system parameters
In a typical FP setup, there are many system parameters of the optical configuration that need to be determined or measured. In this section, we will demonstrate the use of the adaptive FP scheme to recover unknown system parameters in the FP setup.
In the first experiment, we use the reported scheme to recover the sample position of a USAF resolution target. The USAF target is placed at a defocused position (treated as an unknown parameter). In the FP recovery process, we calculate the convergence index with respect to different defocus distances, as shown in Fig. 8. For the red curve shown in Fig. 8(c), we placed the sample at z = −150 µm, and Figs. 8(a1)–8(a3) show the FP reconstructions using three different defocus distances: −200 µm, −150 µm, and −100 µm. When the defocus distance agrees with the actual sample position, we get the best FP reconstruction, as shown in Fig. 8(a2). As we discussed before, the maximum value of the convergence index indicates the best FP reconstruction. For the red curve shown in Fig. 8(c), the maximum value of the convergence index corresponds to a defocus distance of −150.1 µm, in good agreement with the actual sample position. We also repeated this experiment by placing the sample at z = −50 µm, with the result shown in Fig. 8(b) and the blue curve in Fig. 8(c).
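The defocus search reduces to evaluating the convergence index over a set of candidate distances and keeping the maximizer. The sketch below pairs that one-dimensional sweep with the standard angular-spectrum defocus phase that would be folded into the pupil transfer function. Both helpers are illustrative: `reconstruct_metric` stands in for a full FP reconstruction at a given defocus, and the function names are ours.

```python
import numpy as np

def defocus_pupil(shape, pixel_um, wavelength_um, na, z_um):
    """NA-limited pupil with the angular-spectrum defocus phase
    exp(i * kz * z), where kz = sqrt(k^2 - kx^2 - ky^2)."""
    fy = np.fft.fftshift(np.fft.fftfreq(shape[0], pixel_um))
    fx = np.fft.fftshift(np.fft.fftfreq(shape[1], pixel_um))
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength_um
    kr2 = (2 * np.pi * FX) ** 2 + (2 * np.pi * FY) ** 2
    pupil = np.sqrt(kr2) <= k * na            # circular NA cutoff
    kz = np.sqrt(np.maximum(k**2 - kr2, 0.0))  # clamp evanescent components
    return pupil * np.exp(1j * kz * z_um)

def best_defocus(reconstruct_metric, z_candidates):
    """Sweep candidate defocus distances, evaluate the convergence index
    for each, and return the maximizing distance -- the strategy of Fig. 8."""
    scores = [reconstruct_metric(z) for z in z_candidates]
    i = int(np.argmax(scores))
    return z_candidates[i], scores[i]
```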
The capability of automatically recovering the sample position is highly useful for imaging samples that are not on a flat surface (for example, circulating tumor cells captured on a transparent filter [32]). In conventional microscope platforms, it is difficult to keep the sample at the in-focus position over a large field-of-view, as the depth-of-focus of a high-NA objective lens is typically on the order of microns. To get a high-resolution image over a large field-of-view, conventional platforms require precise 3D mechanical scanning (x-y scanning to increase the field-of-view, and z scanning to refocus the sample). In FP settings, the depth-of-focus is typically on the order of 0.3 mm or more, as we can model the sample defocus in the pupil transfer function. Therefore, the remaining problem for FP is to find the defocus positions of different regions across the entire field-of-view. The adaptive scheme shown in Fig. 8 provides a solution along this line. In a typical adaptive FP implementation for imaging thin samples on a curved surface, we first divide the entire field-of-view into many small segments. Next, we determine the defocus position for each small segment by maximizing the convergence index. Finally, we perform the corresponding FP reconstructions and combine them into one image.
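The first step of the per-segment procedure, dividing the field-of-view into small tiles, can be written compactly; each returned slice pair would then get its own defocus search and FP reconstruction. This is a sketch, and the tile size is an arbitrary choice.

```python
def tile_field_of_view(img_shape, tile=256):
    """Divide the full field-of-view into small segments, returning
    (row_slice, col_slice) pairs that together cover the whole image.
    Edge tiles are clipped to the image boundary."""
    H, W = img_shape
    return [(slice(r, min(r + tile, H)), slice(c, min(c + tile, W)))
            for r in range(0, H, tile)
            for c in range(0, W, tile)]
```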
Following the same strategy, we also recover two other system parameters in Fig. 9: the position of the LED matrix and the central wavelength of the LED emission. In Fig. 9(a), we plot the convergence index as a function of the LED matrix position. The maximum value of the convergence index corresponds to an LED position 85.0 mm beneath the sample, in very good agreement with our independent measurement of 85.0 mm. In Fig. 9(b), we plot the convergence index as a function of the emission wavelength. The maximum value of the index corresponds to a central wavelength of 635.0 nm, again in very good agreement with our independent measurement of 635.2 nm.
6. Conclusion
In conclusion, we have demonstrated an optimization procedure that performs adaptive system correction for Fourier ptychographic imaging. Similar to the concepts in adaptive optics, we define an image-quality metric to quantify the quality of FP reconstructions. System corrections are performed by maximizing such a metric. We have demonstrated the use of this optimization procedure to correct for the illumination uncertainty problem, to recover several unknown system parameters, and to perform automatic aberration correction. The reported procedure may improve the robustness of Fourier ptychographic imaging by factoring out system imperfections and uncertainties. It may also provide an alternative approach for inferring unknown physical quantities in various experimental setups, including the spectrum of the light emission, the position of the sample, the position/orientation of the light source, the relative intensities of different light sources, and the pupil aberrations of the lens, among others.
Finally, we note that there are two future directions for the development of the FP recovery algorithm: 1) incorporating the difference map approach to recover both the high-resolution sample image and the coherent transfer function of the objective lens; 2) incorporating the position correction schemes [16, 23, 25, 26, 30] into the FP approach, an important step for implementing FP with electron and X-ray sources.
We are grateful for the constructive discussions and generous help from Mr. Roarke Horstmeyer and Mr. Xiaoze Ou from Prof. Changhuei Yang’s group at Caltech.
References and links
1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]
6. J. Di, J. Zhao, H. Jiang, P. Zhang, Q. Fan, and W. Sun, “High resolution digital holographic microscopy with a wide field of view based on a synthetic aperture technique and use of linear CCD scanning,” Appl. Opt. 47(30), 5654–5659 (2008). [CrossRef] [PubMed]
7. L. Granero, V. Micó, Z. Zalevsky, and J. García, “Synthetic aperture superresolved microscopy in digital lensless Fourier holography by time and angular multiplexing of the object information,” Appl. Opt. 49(5), 845–857 (2010). [CrossRef] [PubMed]
8. T. Gutzler, T. R. Hillman, S. A. Alexandrov, and D. D. Sampson, “Coherent aperture-synthesis, wide-field, high-resolution holographic microscopy of biological tissue,” Opt. Lett. 35(8), 1136–1138 (2010). [CrossRef] [PubMed]
9. A. E. Tippie, A. Kumar, and J. R. Fienup, “High-resolution synthetic-aperture digital holography with digital phase and pupil correction,” Opt. Express 19(13), 12027–12038 (2011). [CrossRef] [PubMed]
10. R. A. Gonsalves, “Phase retrieval and diversity in adaptive optics,” Opt. Eng. 21, 215829 (1982).
12. L. Allen and M. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199(1–4), 65–75 (2001). [CrossRef]
17. J. Rodenburg, “Ptychography and related diffractive imaging methods,” Adv. Imaging Electron Phys. 150, 87–184 (2008). [CrossRef]
20. M. J. Humphry, B. Kraus, A. C. Hurst, A. M. Maiden, and J. M. Rodenburg, “Ptychographic electron microscopy using high-angle dark-field scattering for sub-nanometre resolution imaging,” Nat. Commun. 3, 730 (2012). [CrossRef] [PubMed]
21. W. Hoppe and G. Strube, “Diffraction in inhomogeneous primary wave fields. 2. Optical experiments for phase determination of lattice interferences,” Acta Crystallogr. A 25, 502–507 (1969). [CrossRef]
22. P. Nellist, B. McCallum, and J. Rodenburg, “Resolution beyond the ‘information limit’ in transmission electron microscopy,” Nature 374, 630–632 (1995).
23. A. M. Maiden, M. J. Humphry, M. C. Sarahan, B. Kraus, and J. M. Rodenburg, “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy 120, 64–72 (2012). [CrossRef] [PubMed]
24. A. Shenfield and J. M. Rodenburg, “Evolutionary determination of experimental parameters for ptychographical imaging,” J. Appl. Phys. 109(12), 124510 (2011). [CrossRef]
26. F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express 21(11), 13592–13606 (2013). [CrossRef] [PubMed]
30. P. Thibault and M. Guizar-Sicairos, “Maximum-likelihood refinement for coherent diffractive imaging,” New J. Phys. 14(6), 063004 (2012). [CrossRef]
31. C. Audet and J. E. Dennis Jr., “Analysis of generalized pattern searches,” SIAM J. Optim. 13(3), 889–903 (2002). [CrossRef]
32. S. Zheng, H. Lin, J.-Q. Liu, M. Balic, R. Datar, R. J. Cote, and Y.-C. Tai, “Membrane microfilter device for selective capture, electrolysis and genomic analysis of human circulating tumor cells,” J. Chromatogr. A 1162(2), 154–161 (2007). [CrossRef] [PubMed]