## Abstract

Fourier ptychographic microscopy (FPM) was recently proposed as a computational imaging method that bypasses the space-bandwidth product limitation of traditional optical systems. It employs a sequence of low-resolution images captured under angularly varying illumination and applies a phase retrieval algorithm to iteratively reconstruct a wide-field, high-resolution image. In current FPM imaging systems, system uncertainties, such as the pupil aberration of the employed optics, may significantly degrade the quality of the reconstruction. In this paper, we develop and test a nonlinear optimization algorithm that improves the robustness of the FPM imaging system by simultaneously considering the reconstruction and the system imperfections. Analytical expressions for the gradient of a squared-error metric with respect to the object and illumination allow joint optimization of the object and system parameters. The algorithm achieves superior reconstructions when the system parameters are inaccurately known or in the presence of noise, and corrects the pupil aberrations simultaneously. Experiments on both synthetic and real captured data validate the effectiveness of the proposed method.

© 2015 Optical Society of America

## 1. Introduction

A microscope, an instrument used to see objects that are too small to be observed with the naked eye, usually contains one or more lenses that produce an enlarged image of a sample placed in the focal plane. However, the traditional optical microscope always forces the user to compromise between a high-resolution image and a wide-field one, limited by the space-bandwidth product (SBP) [1] of the optical system. Fourier ptychographic microscopy (FPM) [2] was recently proposed as a computational imaging method that enhances the SBP of the optical system via post-processing. In this method, a simple light-emitting diode (LED) matrix illumination module, which provides angularly varying illumination, is added to a traditional optical microscope. The required low-resolution images are captured by sequentially lighting up a single LED and taking a snapshot. A phase retrieval algorithm is then employed to reconstruct a wide-field, high-resolution image. This method is able to transform a conventional optical microscope into a high-resolution (0.78 *µm* half-pitch resolution, 0.5 NA), wide-FOV (120 *mm*^{2}) microscope with a final SBP of 0.23 gigapixels.

In current FPM imaging systems, system uncertainties may significantly degrade the reconstruction quality. To exploit the full throughput of the FPM imaging system, Zheng *et al.* [2] introduced a digital wavefront correction strategy to correct for spatially varying aberrations [3–5]. One drawback of this strategy is that it requires pre-characterization of the spatially varying aberration of the microscopy system, which can be computationally onerous and sensitive to the movement of elements in the system. Bian *et al.* [6] put forward an adaptive Fourier ptychographic recovery framework for wavefront correction under the guidance of an image-quality metric. However, this framework includes a global optimization process, which imposes a heavy load on computational resources and is only able to correct a limited number of low-order aberrations in a reasonable time. Ou *et al.* [7] proposed an embedded pupil function recovery method with reasonable computational cost, based on the extended ptychographic iterative engine [8–10]. However, this method is susceptible to the system initialization and is easily trapped in a locally optimal solution due to the raster grid artefact problem [11].

In this paper, we develop and test a nonlinear optimization algorithm that improves the robustness of the FPM imaging system by simultaneously recovering the object and correcting system imperfections and uncertainties. By defining a squared-error metric for the reconstruction problem, we employ a gradient-descent-based algorithm to jointly optimize over the object and the pupil function, which not only improves the quality of the reconstructed object but also refines the estimation of the pupil function. In this way, an aberration-free reconstruction of the object can be recovered and the pupil aberration of the imaging system can be corrected without a complicated calibration process.

The remainder of this paper is organized as follows. In Section 2, we briefly review the procedure of the conventional FPM method and explain the importance of a precise pupil function. We then introduce a nonlinear factor into the original convergence-related metric and define a squared-error metric for the FPM reconstruction problem. Furthermore, we provide analytic expressions for the gradient of the metric with respect to the object and the pupil function and introduce a conjugate-gradient routine to update the sample function and the pupil function iteratively. In Section 3, we verify the effectiveness of the proposed algorithm by simulation; we also introduce a quality metric to quantify the quality of the reconstructions and propose an empirical strategy for determining an appropriate nonlinear factor. In Section 4, we demonstrate that our method improves the quality of the FPM imaging system and corrects the pupil aberration by applying the proposed method to real captured data. In Section 5, we summarize the present work and briefly outline future work.

## 2. Theory and method

#### 2.1. Fourier ptychographic microscopy

As described in detail in previous publications [2, 12–16], the FPM method iteratively stitches together a number of variably illuminated, low-resolution intensity images in Fourier space to produce a wide-field, high-resolution complex sample image. Before explaining the procedure of FPM, we note that this method is based upon three assumptions:

- The recovery process alternates between the spatial and Fourier domains.
- Illuminating a thin sample by an oblique plane wave is equivalent to shifting the center of the sample’s spectrum in the Fourier domain.
- The filtering function of the objective lens (that is, coherent optical transfer function) in the Fourier space is a circular pupil.

In the acquisition procedure, sequentially scanning through the LEDs in the array creates angularly varying illumination. The required low-resolution images are captured while the sample is illuminated from different angles, which corresponds to shifts of the sample's Fourier spectrum in the pupil plane. Here, we define $\mathbf{r} = (x, y)$ as the coordinate in the spatial domain and $\mathbf{k} = (u, v)$ as the coordinate in the spatial frequency domain (Fourier domain), and model the acquisition process as a complex multiplication: the exit light wave from a thin sample $s(\mathbf{r})$, illuminated by an oblique plane wave (from the $n$th LED) with wavevector $\mathbf{k}_n = (u_n, v_n)$, can be expressed as $e(\mathbf{r}) = s(\mathbf{r})\exp(i\mathbf{k}_n \cdot \mathbf{r})$. The light that propagates to the detector is the convolution of the exit wave and the spatially invariant point spread function $p(\mathbf{r})$ of the microscope system, such that

$$I_n(\mathbf{r}) = \left|\mathcal{F}^{-1}\left\{S(\mathbf{k}-\mathbf{k}_n)P(\mathbf{k})\right\}\right|^2, \tag{1}$$

where $I_n$ is the intensity of the image captured under the illumination of the $n$th LED, $S(\mathbf{k})$ is the Fourier spectrum of the sample, and $P(\mathbf{k})$ is the pupil function of the imaging system.

The goal of the reconstruction algorithm is to recover the functions $S(\mathbf{k})$ and $P(\mathbf{k})$ that satisfy Eq. (1) for all captured images. In the conventional FPM algorithm, we are supposed to have a precise estimation of the pupil function, and therefore the reconstruction problem is transformed into finding a proper sample function $S(\mathbf{k})$ that satisfies Eq. (1). Traditional phase retrieval algorithms [17–21] reconstruct the sample function by an iterative approach. Such an iterative approach relies on a precise estimation of the pupil function, which is difficult to acquire in practice because the pupil aberration of the employed optics is usually unknown. That is, the conventional FPM algorithm only updates the sample function $S(\mathbf{k})$ and neglects the influence of the potential pupil aberration. When the pupil aberration is small enough, the conventional FPM algorithm works well, but under severe pupil aberration it may produce poor reconstructions.
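To make the forward model concrete, the image formation of Eq. (1) can be sketched in a few lines of NumPy. This is an illustrative sketch only: the array sizes, the pixel-unit spectrum shift, and the function name `simulate_lowres` are our own choices, not part of the original method.

```python
import numpy as np

def simulate_lowres(S, P, shift):
    """Forward model of Eq. (1): crop the shifted sample spectrum S with the
    pupil P, transform back to the spatial domain, and record intensity."""
    m, n = P.shape
    cy = S.shape[0] // 2 + shift[0]   # centre of the sub-region, i.e. S(k - k_n)
    cx = S.shape[1] // 2 + shift[1]
    sub = S[cy - m // 2: cy + m // 2, cx - n // 2: cx + n // 2]
    field = np.fft.ifft2(np.fft.ifftshift(sub * P))   # complex field at the detector
    return np.abs(field) ** 2                         # only intensity is measured

# toy example: 128x128 complex sample, 32x32 circular pupil, on-axis LED
obj = np.random.rand(128, 128) * np.exp(1j * np.random.rand(128, 128))
S = np.fft.fftshift(np.fft.fft2(obj))
ky, kx = np.ogrid[-16:16, -16:16]
P = (ky**2 + kx**2 <= 15**2).astype(complex)
I0 = simulate_lowres(S, P, (0, 0))    # one low-resolution intensity image
```

Each off-axis LED simply shifts the cropped sub-region of the spectrum, which is how the dataset of angularly varying images is generated.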

Before explaining our algorithm in detail, it is necessary to give a brief overview of the reconstruction procedure of the conventional FPM algorithm. During the acquisition procedure, we capture low-resolution images $I_n$ under angularly varying illumination, where $n = 1, 2, \ldots, L$ corresponds to the index of the LED and $L$ is the number of LEDs employed in the acquisition procedure. At the beginning, an initial guess of the sample function $S_g(\mathbf{k})$ is provided together with an initial guess of the pupil function $P_g(\mathbf{k})$ to start the algorithm, where the subscript $g$ denotes guess. The notations ${S}_{g}^{(j)}(\mathbf{k})$ and ${P}_{g}^{(j)}(\mathbf{k})$ denote the sample function and the pupil function in the $j$th loop.

The first step is an extraction procedure. According to Eq. (1), a sub-region is extracted from the sample function with the pupil function and transformed into the spatial domain to generate the simulated image:

$$\phi_{s,n}^{(j)}(\mathbf{r}) = \mathcal{F}^{-1}\left\{S_g^{(j)}(\mathbf{k}-\mathbf{k}_n)P_g^{(j)}(\mathbf{k})\right\}, \qquad I_{s,n}^{(j)}(\mathbf{r}) = \left|\phi_{s,n}^{(j)}(\mathbf{r})\right|^2. \tag{2}$$

The second step is a replacement procedure. The amplitude of the simulated image is replaced by the amplitude of the captured image to obtain the corrected image

$$\phi_{c,n}^{(j)}(\mathbf{r}) = \sqrt{I_n(\mathbf{r})}\,\frac{\phi_{s,n}^{(j)}(\mathbf{r})}{\left|\phi_{s,n}^{(j)}(\mathbf{r})\right|}, \tag{3}$$

where the subscripts $c$ and $s$ denote corrected and simulated, respectively.

The third step, which can be described as an update procedure, uses the corrected image to update the sample function. Specifically, the corrected image is transformed into the Fourier domain, and the corresponding sub-region of the sample spectrum covered by the pupil is updated:

$$S_g^{(j)}(\mathbf{k}-\mathbf{k}_n) \leftarrow \mathcal{F}\left\{\phi_{c,n}^{(j)}(\mathbf{r})\right\}, \quad \text{for } \mathbf{k} \text{ within the pupil support}. \tag{4}$$

It should be noted that the pupil function remains unchanged during the iteration, which can be expressed as

$$P_g^{(j+1)}(\mathbf{k}) = P_g^{(j)}(\mathbf{k}). \tag{5}$$

In the fourth step, the updated sample function is used as the input of the next extract-replace-update procedure, and steps 1–3 are repeated until all the captured images have been employed. Empirically, one pass is not enough for a convergent reconstruction, so in the fifth step, steps 1–4 are repeated until the algorithm reaches convergence.
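The five-step loop above can be sketched as a compact NumPy routine. This is an illustrative re-implementation under simplifying assumptions (pixel-unit spectrum shifts, a crude delta-spectrum initialization, hard sub-spectrum replacement inside the pupil support), not the authors' exact code.

```python
import numpy as np

def fpm_recover(imgs, shifts, pupil, hi_shape, n_iter=5):
    """Conventional FPM: iterate extract -> replace -> update over all LEDs,
    keeping the pupil function fixed throughout."""
    m, n = pupil.shape
    S = np.zeros(hi_shape, dtype=complex)
    S[hi_shape[0] // 2, hi_shape[1] // 2] = 1.0        # crude initial spectrum guess
    support = np.abs(pupil) > 0
    for _ in range(n_iter):                            # step 5: repeat to convergence
        for I, (dy, dx) in zip(imgs, shifts):          # step 4: sweep all LEDs
            cy, cx = hi_shape[0] // 2 + dy, hi_shape[1] // 2 + dx
            ys = slice(cy - m // 2, cy + m // 2)
            xs = slice(cx - n // 2, cx + n // 2)
            # step 1: extract sub-spectrum, filter by pupil, go to spatial domain
            phi_s = np.fft.ifft2(np.fft.ifftshift(S[ys, xs] * pupil))
            # step 2: replace the simulated amplitude with the measured one
            phi_c = np.sqrt(I) * np.exp(1j * np.angle(phi_s))
            # step 3: write the corrected spectrum back inside the pupil support
            S[ys, xs][support] = np.fft.fftshift(np.fft.fft2(phi_c))[support]
    return np.fft.ifft2(np.fft.ifftshift(S))           # high-resolution complex image

# minimal smoke run: one uniform image, on-axis shift, 16x16 circular pupil
ky, kx = np.ogrid[-8:8, -8:8]
P = (ky**2 + kx**2 <= 7**2).astype(complex)
rec = fpm_recover([np.ones((16, 16))], [(0, 0)], P, (64, 64))
```

Note how the pupil enters every extraction and update but is itself never modified, which is precisely the weakness the next subsection addresses.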

#### 2.2. Nonlinear optimization approach

The limitation of the traditional FPM algorithm is that it relies on a precisely estimated pupil function, which can have a negative effect on reconstruction quality when the system suffers from severe pupil aberration. A better solution to this problem is to jointly optimize over the sample function and the pupil function, which eliminates the effects of pupil aberration.

A proper metric, which describes the difference between the reconstruction and the correct one, can be of great help in finding a good solution for the FPM recovery routine. However, a typical image-quality metric, such as the sharpness metric [22], can be misled by incorrect modeling. For example, conventional FPM with an incorrect pupil function may produce reconstructions that contain noise artifacts yet also many sharp features, and therefore perform well in terms of the sharpness metric [6]. Considering the reconstruction procedure of FPM (Eqs. (1) and (2)), a convergence-related metric, which measures how well our reconstruction matches the captured data, is given as follows:

$$\varepsilon = \sum_n \sum_{x,y} W_n(x,y)\left[\sqrt{I_n(x,y)} - \sqrt{I_{s,n}(x,y)}\right]^2, \tag{6}$$

where $W_n(x,y)$ is a weighting function that can be used to emphasize regions with high SNR, or set to zero for regions with low SNR or where no signal was measured. The weighting function can also be used to eliminate the effects of a beam stop or dead detector pixels, if needed. In this paper, we use a uniform unity weighting function for all pixels.

With the provided metric, we are able to achieve a better FPM reconstruction with a gradient-descent-based algorithm. However, the metric in Eq. (6) is, mathematically, a necessary but not sufficient condition for the FPM recovery routine. That is, a good FPM reconstruction satisfies the convergence-related metric well, but a solution derived from the metric may still be incorrect for FPM. Inspired by previous works [10, 23–25], we introduce a nonlinear factor to generalize the original convergence-related metric and define a squared-error metric for the FPM recovery routine as follows:

$$\varepsilon = \sum_n \sum_{x,y} W_n(x,y)\left[\left(I_n(x,y)+\delta\right)^{\gamma} - \left(I_{s,n}(x,y)+\delta\right)^{\gamma}\right]^2, \tag{7}$$

where $\delta$ is a small constant that prevents problems in the gradient computation when $I_{s,n}$ and $I_n$ are close to zero, and $\gamma$ is a real-valued constant that serves as the nonlinear factor.

The squared-error metric in Eq. (7) can be considered a generalized version of that in Eq. (6), and each given *γ* yields an FPM reconstruction. In particular, when *γ* = 0.5, the squared-error metric is identical to the convergence-related metric. Mathematically, the generalization expands the solution domain. We therefore employ a gradient-descent-based algorithm to produce different reconstructions and then select the best among the results. In this way, we may achieve a better reconstruction than the one obtained with the convergence-related metric alone.
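For reference, the metric of Eq. (7) is straightforward to evaluate. The function below is a minimal sketch (the name `squared_error` and the default value of δ are our own choices); setting γ = 0.5 with δ = 0 reproduces the convergence-related metric of Eq. (6).

```python
import numpy as np

def squared_error(I_meas, I_sim, gamma=0.5, delta=1e-12, W=None):
    """Generalized squared-error metric of Eq. (7). gamma is the nonlinear
    factor; W is an optional per-pixel weighting function (unity by default)."""
    W = 1.0 if W is None else W
    d = (I_meas + delta) ** gamma - (I_sim + delta) ** gamma
    return float(np.sum(W * d * d))

# example: compare a measured stack against a (hypothetical) simulated one
I = np.random.rand(8, 8)
err = squared_error(I, 0.5 * I, gamma=0.15)   # mismatch -> positive error
```

Sweeping `gamma` and re-running the reconstruction is how the different candidate solutions discussed above are produced.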

To run the gradient-descent-based algorithm, we first need to calculate the gradients of *ε*. The gradient of *ε* with respect to the real and imaginary parts of the sample function,
${S}_{g}^{(j)}(\mathbf{k})={S}_{g,R}^{(j)}(\mathbf{k})+i{S}_{g,I}^{(j)}(\mathbf{k})$, is obtained by computing the expression

$$\frac{\partial\varepsilon}{\partial S_{g,R}^{(j)}(\mathbf{k})}+i\frac{\partial\varepsilon}{\partial S_{g,I}^{(j)}(\mathbf{k})} = \frac{-4\gamma}{MN}\sum_{n}\overline{P_{g}^{(j)}(\mathbf{k}+\mathbf{k}_{n})}\;\mathcal{F}\!\left\{W_{n}\!\left[\left(I_{n}+\delta\right)^{\gamma}-\left(I_{s,n}^{(j)}+\delta\right)^{\gamma}\right]\left(I_{s,n}^{(j)}+\delta\right)^{\gamma-1}\phi_{s,n}^{(j)}(\mathbf{r})\right\}\!(\mathbf{k}+\mathbf{k}_{n}), \tag{8}$$

where $\phi_{s,n}^{(j)}(\mathbf{r})$ is the simulated complex image under the $n$th illumination, $M \times N$ is the size of the captured image, and the subscripts $R$ and $I$ denote the real and imaginary parts.

The gradient with respect to the real and imaginary parts of the pupil function, ${P}_{g}^{(j)}(\mathbf{k})={P}_{g,R}^{(j)}(\mathbf{k})+i{P}_{g,I}^{(j)}(\mathbf{k})$, can be computed in a similar fashion:

$$\frac{\partial\varepsilon}{\partial P_{g,R}^{(j)}(\mathbf{k})}+i\frac{\partial\varepsilon}{\partial P_{g,I}^{(j)}(\mathbf{k})} = \frac{-4\gamma}{MN}\sum_{n}\overline{S_{g}^{(j)}(\mathbf{k}-\mathbf{k}_{n})}\;\mathcal{F}\!\left\{W_{n}\!\left[\left(I_{n}+\delta\right)^{\gamma}-\left(I_{s,n}^{(j)}+\delta\right)^{\gamma}\right]\left(I_{s,n}^{(j)}+\delta\right)^{\gamma-1}\phi_{s,n}^{(j)}(\mathbf{r})\right\}\!(\mathbf{k}). \tag{9}$$

With the expressions for the gradients given by Eq. (8) and Eq. (9), we update the sample function and the pupil function iteratively with a conjugate-gradient routine [25]:

$$S_g^{(j+1)}(\mathbf{k}) = S_g^{(j)}(\mathbf{k}) + \alpha\,\Delta S^{(j)}(\mathbf{k}), \tag{10}$$
$$P_g^{(j+1)}(\mathbf{k}) = P_g^{(j)}(\mathbf{k}) + \beta\,\Delta P^{(j)}(\mathbf{k}), \tag{11}$$

where $\Delta S^{(j)}$ and $\Delta P^{(j)}$ are the conjugate-gradient search directions constructed from the gradients in Eqs. (8) and (9), and *α* and *β* are real-valued constants that can be adjusted to alter the step size of the update. In this paper, *α* = 1 and *β* = 1 are used for all results.
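As one concrete way to build a conjugate-gradient search direction from successive complex gradient arrays, the Polak–Ribière rule can be applied element-wise. This sketch is our own illustration of a standard conjugate-gradient step, not a transcription of the authors' routine.

```python
import numpy as np

def cg_direction(g_new, g_old, d_old):
    """Polak-Ribiere conjugate-gradient direction for a complex gradient array.
    Falls back to steepest descent (beta = 0) when the PR coefficient is negative."""
    beta = np.real(np.vdot(g_new - g_old, g_new)) / (np.real(np.vdot(g_old, g_old)) + 1e-30)
    beta = max(0.0, beta)
    return -g_new + beta * d_old

g = np.ones((4, 4), dtype=complex)
d = cg_direction(g, g, -g)   # identical gradients -> beta = 0 -> steepest descent
```

The same routine can be applied independently to the sample-function gradient and the pupil-function gradient, with the two step sizes playing the roles of α and β.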

Notably, the pupil function is also corrected during the reconstruction procedure. Jointly optimizing over the sample function and the pupil function also helps suppress noise, because more constraints can be introduced into the procedure. For example, as an assumption of the FPM imaging system, the pupil function is a circularly shaped low-pass filter, so the area of the pupil function outside this circle should always be zero. During the reconstruction procedure, the zero-valued points of the pupil function may become non-zero, which we call non-zero errors, when updated with noisy data. We can then use the constraint to correct the pupil function by eliminating the non-zero errors and use the corrected pupil function to update the sample function. Applied iteratively, this constraint eliminates or suppresses the adverse effects of noise during the reconstruction procedure.
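The circular-support constraint described here amounts to masking the pupil after each update. A minimal sketch, assuming the support radius is expressed in pixels (a choice made for this illustration):

```python
import numpy as np

def enforce_pupil_support(P, radius):
    """Zero the pupil outside its circular NA support, removing the
    'non-zero errors' that noisy updates introduce."""
    m, n = P.shape
    ky, kx = np.ogrid[-(m // 2): m - m // 2, -(n // 2): n - n // 2]
    return np.where(ky**2 + kx**2 <= radius**2, P, 0.0)

P_noisy = np.ones((8, 8), dtype=complex)       # pretend noise leaked everywhere
P_clean = enforce_pupil_support(P_noisy, 2)    # only the central disc survives
```

Calling this after every pupil update keeps the estimate consistent with the circular-aperture assumption of the imaging model.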

## 3. Simulation results

To verify the effectiveness of our method, we first test the algorithms on simulated FPM datasets, where we can evaluate the quality of the reconstructions both qualitatively and quantitatively. Without loss of generality, we employ different types of samples, including an image of a boat (512 × 512 pixels, from [26]) and an image of a pathological slide (500 × 500 pixels, from [27]). The correct pupil function is set to be circularly shaped, and its phase is set to zero for simplicity, as shown in Figs. 1(d) and 2(d). We simulated a sequence of 121 images with enough overlap in the Fourier domain to ensure convergence of the algorithm [28]; the simulation procedure is similar to that of [7, 15, 16].

As mentioned in previous work [2], the reconstruction procedure starts with a pupil function guess, set as a circularly shaped low-pass filter, and a sample function guess, which is the spectrum of the upsampled low-resolution image. In this case, the upsampling ratio is 10. Comparisons between the results reconstructed by the conventional FPM method and by our method are shown in Figs. 1 and 2. It is clear that the imprecise pupil function degrades the quality of the FPM reconstruction severely, whereas our method produces a successful reconstruction. This is because the imprecise pupil function repeatedly influences the low- and high-frequency components of the sample spectrum, leading to a significant degree of crosstalk between the sample intensity and phase. With a proper nonlinear factor and joint optimization over the sample function and the pupil function, our method eliminates the adverse effect of the pupil aberration and achieves a much better reconstruction. Meanwhile, our method also corrects the pupil aberration of the system during the reconstruction procedure, which can be further studied to characterize the behavior of the lenses.

In addition, we introduce a quantitative metric to evaluate the reconstruction quality. Although the error metric given in Eq. (7) is a good measure of convergence to a solution, it only measures how well our estimation matches the captured data. In numerical simulations, we actually have the correct solution (the ground truth for the reconstruction), so an ideal quantitative metric should not only measure the agreement with the captured data but also indicate convergence to the correct solution. Inspired by the work of [7, 25, 29], we introduce the normalized invariant field root mean square error (NIF-RMSE) as

$$\text{NIF-RMSE} = \left[\frac{\sum_{n}\sum_{\mathbf{k}}\left|S_{t}(\mathbf{k}-\mathbf{k}_{n})P_{t}(\mathbf{k})-\rho_{n}\,S_{g}(\mathbf{k}-\mathbf{k}_{n})P_{g}(\mathbf{k})\right|^{2}}{\sum_{n}\sum_{\mathbf{k}}\left|S_{t}(\mathbf{k}-\mathbf{k}_{n})P_{t}(\mathbf{k})\right|^{2}}\right]^{1/2}, \tag{12}$$

where $S_t(\mathbf{k})$ is the true sample spectrum, $P_t(\mathbf{k})$ is the true pupil function with the subscript $t$ denoting true, and the parameter $\rho_n$ is given by:

$$\rho_{n} = \frac{\sum_{\mathbf{k}}S_{t}(\mathbf{k}-\mathbf{k}_{n})P_{t}(\mathbf{k})\,\overline{S_{g}(\mathbf{k}-\mathbf{k}_{n})P_{g}(\mathbf{k})}}{\sum_{\mathbf{k}}\left|S_{g}(\mathbf{k}-\mathbf{k}_{n})P_{g}(\mathbf{k})\right|^{2}}. \tag{13}$$

This parameter allows the error metric to be invariant to a constant multiplication and a constant phase offset. As shown in Fig. 3, we employ the NIF-RMSE metric to evaluate the quality of reconstructions on both simulated datasets. Clearly, an imprecise estimation of the pupil function severely degrades the quality of the FPM reconstructions, which leads to a large NIF-RMSE. For the pathological-slide dataset, the NIF-RMSE value even increases with the number of iterations (the red curve in Fig. 3(b)). In contrast, the reconstructions produced by our method perform much better in terms of NIF-RMSE on both simulated datasets. The quality of the reconstructions improves iteration by iteration, and the NIF-RMSE finally reaches a value much smaller than that of the conventional FPM method.
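The scale-and-phase-invariant comparison behind the NIF-RMSE can be sketched for a single pair of complex fields; the least-squares factor `rho` below plays the role of the per-image parameter ρ_n described above, and the function name is our own.

```python
import numpy as np

def invariant_rmse(f_true, f_rec):
    """Normalized RMSE between complex fields, invariant to a constant
    multiplication and a constant phase offset (after Fienup [29])."""
    rho = np.vdot(f_rec, f_true) / np.vdot(f_rec, f_rec)   # least-squares scale
    num = np.sum(np.abs(f_true - rho * f_rec) ** 2)
    den = np.sum(np.abs(f_true) ** 2)
    return float(np.sqrt(num / den))

f = np.exp(1j * np.random.rand(16, 16))
# a global complex scale (here 2e^{i pi/3}) does not change the metric
err = invariant_rmse(f, 2 * np.exp(1j * np.pi / 3) * f)
```

Summing the numerators and denominators over all illumination angles before taking the square root gives the aggregate metric used in the figures.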

As discussed in Section 2.2, an appropriate nonlinear factor is important for our method and leads to a satisfying reconstruction; choosing a proper nonlinear factor is therefore the key to our method. Here, we define the normalized error ${\epsilon}_{N}=\frac{\epsilon}{{\epsilon}_{o}}$, where *ε _{o}* is the squared error of the initial sample function guess, to evaluate the convergence speed of the algorithms and compare different values of *γ*. As shown in Fig. 4, when *γ* is very small (red curve), the algorithm needs many more iterations to converge. When *γ* is large (blue curve), the performance of the algorithm becomes unsteady. A proper *γ* converges quickly and steadily for different types of samples.

A good reconstruction of the sample function and pupil function should match the captured data well, that is, it should have a small value of the squared-error metric (convergence error). A reconstruction with a large convergence error is therefore a bad solution. As shown in Fig. 5(a), when the nonlinear factor is large, the algorithm converges with a large convergence error, whereas when the nonlinear factor is small, the reconstructions reach a much smaller value of the squared-error metric. Meanwhile, we calculate the NIF-RMSE value of all reconstructions, as shown in Fig. 5(b). It is clear that the reconstructions obtained with small *γ* have a much smaller NIF-RMSE value than those obtained with large *γ*, verifying that a large convergence error leads to low reconstruction quality.

Based on the above analysis, we know that an appropriate nonlinear factor should be neither too small nor too large, since a small one requires more iterations for convergence and a large one leads to a large convergence error and low reconstruction quality. However, it is still not clear which *γ* leads to the best reconstruction. A main problem is that, at present, there is no effective way other than visual comparison to identify the best reconstruction. A solution with a large convergence error is certainly a bad reconstruction, but the one with the smallest convergence error may not be the best reconstruction. This is because the sample function we reconstruct is actually the Fourier spectrum of the sample. The solution that performs best in convergence error yields a spectrum closest to the correct one, but when it is transformed back into the spatial domain, its performance may be good yet not the best. As we can see in Figs. 1(c) and 1(g), the FPM reconstructions show a clear view of texture but look much darker than the ground truth, even though they perform well in NIF-RMSE (almost near zero), which is also a metric based on the spectrum difference.

To give a perceptual illustration of this conclusion, we make a visual comparison between reconstructions with different *γ* on both simulated datasets. As shown in Fig. 6, each reconstruction takes 20 iterations. When *γ* is too small (*γ* = 0.01), 20 iterations are not enough for the algorithm to converge, and the reconstruction suffers severe noise artefacts (Fig. 6(a)). When *γ* is too large (*γ* = 0.9), the convergence error reaches a large value, greater than 10^{8}. As a result, the reconstruction suffers severe crosstalk between intensity and phase, and the reconstructed intensity is much darker than the others (Figs. 6(d) and 6(h)). When *γ* is neither too large nor too small (*γ* = 0.15 and *γ* = 0.5), the convergence error is small (less than 100). However, the reconstructed phase with *γ* = 0.5 suffers significant noise (white spots in Fig. 6(g)), while the reconstruction with *γ* = 0.15 performs much better. Similarly, we find that varying *γ* influences the performance of the reconstructions on the pathological-slide dataset as well. As shown in Fig. 7, a *γ* that is too small or too large leads to bad reconstructions. For the results that reach a small convergence error (*γ* = 0.2 and *γ* = 0.4), the quality of the reconstructions differs: as shown in Fig. 7(g), the reconstructed phase suffers significant noise that degrades the quality of the reconstruction. In this case, *γ* = 0.2 is a proper choice.

In conclusion, a small *γ* leads to noise-free reconstructions provided enough iterations are taken. When *γ* is too small (e.g., *γ* = 0.01), about 100 iterations are necessary for a successful reconstruction (tested on the two simulated datasets above). A large *γ* may lead to a bad reconstruction. Empirically, a reasonable choice of the nonlinear factor is a small one, but not too small.

## 4. Experimental results

In this section, we demonstrate the performance of our method on experimental datasets captured with a real FPM imaging system. The setup includes a conventional microscope with an *NA* = 0.1 objective lens, a CCD camera with a 1.8545 *µm* pixel size, and a programmable color LED matrix. The distance between the LED matrix and the sample plane is 90.88 *mm*, and the lateral distance between two adjacent LEDs is 4 *mm*. Figure 8 shows the reconstructions of the FPM blood smear dataset, which includes a sequence of 225 images captured under 0.63 *µm* illumination. Under the low-NA objective lens, the captured raw data is not clear enough to distinguish blood cells from each other. The initial guess of the pupil function is set as a circularly shaped low-pass filter with zero phase, whose radius is determined by the NA. The initial guess of the sample function is the Fourier spectrum of the up-sampled raw data. Figures 8(b) and 8(e) show the intensity and phase reconstruction of the blood smear with the conventional FPM method. Due to the significant pupil aberration of the objective lens, it is difficult to recognize the contours of the blood cells and distinguish them from each other. Figures 8(c) and 8(f) show a high-quality reconstruction produced by our method with *γ* = 0.1. In the reconstructed intensity image, the morphology of the blood cells is clear and the shape of the cell nuclei is recognizable. We can also see the donut shape of the blood cells and distinguish them from each other. Meanwhile, the intensity and phase of the pupil function are recovered during the reconstruction procedure, as shown in Figs. 8(d) and 8(g).

In addition, our method is able to suppress noise in the reconstructions as well. Without loss of generality, we apply our method to the dataset of a USAF resolution target, which contains a sequence of 225 images captured with a similar setup. Figure 9(a) shows the raw data of the USAF dataset (cropped from the whole resolution target). Due to the simple structure of the target and the slight pupil aberration, both reconstructions perform well, and the line pairs in group 9, element 3 (0.775 *µm*) can be resolved. However, the imprecise estimation of the pupil function adds considerable noise to the image reconstructed by the conventional FPM method (Fig. 9(b)) and degrades the quality of the reconstruction. As shown in Fig. 9(c), our method jointly optimizes over the sample function and the pupil function, eliminating the adverse effect of the pupil aberration and improving the quality of the reconstruction.

## 5. Conclusion and discussion

In this paper, we developed and tested a nonlinear optimization algorithm to improve the robustness of the FPM imaging system and the performance of the FPM reconstruction under unknown pupil aberration. By introducing a proper nonlinear factor and jointly optimizing over the sample function and the pupil function, our method reconstructs a better estimation of the sample function and refines the estimation of the pupil function. In this way, without the time-consuming and laborious acquisition of a pupil characterization, an aberration-free estimation of the object can be recovered and the robustness of the imaging system can be improved. Both simulation and experimental results demonstrate the validity of our method.

The limitation of our method is that its running time is somewhat longer than that of the conventional FPM method. Since we jointly optimize both the sample function and the pupil function while the conventional FPM method only updates the sample function, the running time of our method may be roughly twice that of the conventional FPM method; in other words, the extra cost is spent on recovering the pupil function. In addition, the lack of a reliable quality metric for experimental reconstructions makes it difficult to determine the best *γ* for the reconstruction procedure, so *γ* is currently set empirically.

More broadly speaking, the efficiency of FPM will strongly promote the development of digital pathology, hematology, and neuroanatomy, which require high-SBP observations. With our method, the imaging system becomes more robust and is no longer influenced by pupil aberrations. Meanwhile, the reconstructed pupil function can be further studied to characterize the behavior of the lenses. Therefore, improving the robustness of the system and applying it to more research areas will be our research emphasis in the near future.

## Acknowledgments

We are grateful to the editor and the anonymous reviewers for their insightful comments on the manuscript. The authors acknowledge funding support from the National Natural Science Foundation of China under Grant U1301257, 61571254 and U1201255.

## References and links

**1. **A. W. Lohmann, R. G. Dorsch, D. Mendlovic, Z. Zalevsky, and C. Ferreira, “Space-bandwidth product of optical signals and systems,” J. Opt. Soc. Am. A **13**(3), 470–473 (1996). [CrossRef]

**2. **G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics **7**(9), 739–745 (2013). [CrossRef]

**3. **G. Zheng, X. Ou, R. Horstmeyer, and C. Yang, “Characterization of spatially varying aberrations for wide field-of-view microscopy,” Opt. Express **21**(13), 15131–15143 (2013). [CrossRef] [PubMed]

**4. **H. Nomura and T. Sato, “Techniques for measuring aberrations in lenses used in photolithography with printed patterns,” Appl. Opt. **38**(13), 2800–2807 (1999).

**5. **J. Wesner, J. Heil, and Th. Sure, “Reconstructing the pupil function of microscope objectives from the intensity PSF,” Proc. SPIE **4767**, 4845 (2002).

**6. **Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express **21**(26), 32400–32410 (2013). [CrossRef]

**7. **X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express **22**(5), 4960–4972 (2014). [CrossRef] [PubMed]

**8. **J. Marrison, L. Rty, P. Marriott, and P. O’Toole, “Ptychography-a label free, high-contrast imaging technique for live cells using quantitative phase information,” Sci. Rep. **3**, 2369 (2013). [CrossRef]

**9. **A. M. Maiden, J. M. Rodenburg, and M. J. Humphry, “A new method of high resolution, quantitative phase scanning microscopy,” Proc. SPIE **7729**, 77291I (2010). [CrossRef]

**10. **A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy **109**(10), 1256–1262 (2009). [CrossRef] [PubMed]

**11. **K. Guo, S. Dong, P. Nanda, and G. Zheng, “Optimization of sampling pattern and the design of Fourier ptychographic illuminator,” Opt. Express **23**(5), 6171–6180 (2015). [CrossRef] [PubMed]

**12. **X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. **38**(22), 4845–4848 (2013). [CrossRef] [PubMed]

**13. **L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express **5**(7), 2376–2389 (2014). [CrossRef] [PubMed]

**14. **S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express **5**(6), 1757–1767 (2014). [CrossRef] [PubMed]

**15. **Y. Zhang, W. Jiang, L. Tian, L. Waller, and Q. Dai, “Self-learning based Fourier ptychographic microscopy,” Opt. Express **23**(14), 18471–18486 (2015). [CrossRef] [PubMed]

**16. **W. Jiang, Y. Zhang, and Q. Dai, “Multi-channel super-resolution with Fourier ptychographic microscopy,” Proc. SPIE **9273**, 927336 (2014). [CrossRef]

**17. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**(15), 2758–2769 (1982). [CrossRef] [PubMed]

**18. **J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. **3**(1), 27–29 (1978). [CrossRef] [PubMed]

**19. **J. R. Fienup, “Phase retrieval algorithms: a personal tour [invited],” Appl. Opt. **52**(1), 45–56 (2013). [CrossRef] [PubMed]

**20. **R. Horstmeyer and C. Yang, “A phase space model of Fourier ptychographic microscopy,” Opt. Express **22**(1), 338–358 (2014). [CrossRef] [PubMed]

**21. **J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. **85**(20), 1385–1391 (2004). [CrossRef]

**22. **J. R. Fienup and J. J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. A **20**(4), 609–620 (2003). [CrossRef]

**23. **P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-Resolution scanning X-ray diffraction microscopy,” Science **321**(5887), 379–382 (2008). [CrossRef] [PubMed]

**24. **P. Thibaulta, M. Dierolfa, O. Bunka, A. Menzela, and F. Pfeiffera, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy **109**(4), 338–343 (2009). [CrossRef]

**25. **M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express **16**(10), 7264–7278 (2008). [CrossRef] [PubMed]

**26. **University of Southern California, “SIPI image database,” http://sipi.usc.edu/database/

**27. ** The computational image lab at University of California Berkeley, “LED array Fourier ptychography dataset,” http://www.laurawaller.com/opensource/.

**28. **S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express **22**(5), 5455–5464 (2014). [CrossRef] [PubMed]

**29. **J. R. Fienup, “Invariant error metrics for image reconstruction,” Appl. Opt. **36**(32), 8352–8357 (1997). [CrossRef]