Abstract

Recently, Fourier Ptychography (FP) has attracted great attention due to its marked effectiveness in trading snapshot number for spatial resolution in large field-of-view imaging. To acquire high signal-to-noise-ratio (SNR) images under angularly varying illuminations for subsequent reconstruction, FP requires a long exposure time, which largely limits its practical applications. In this paper, based on the recently reported Wirtinger flow algorithm, we propose an iterative optimization framework that incorporates phase retrieval and noise relaxation together, to realize FP reconstruction using low-SNR images captured under short exposure time. Experiments on both synthetic and real captured data validate the effectiveness of the proposed reconstruction method. Specifically, the proposed technique can save around 80% of the exposure time while achieving retrieval accuracy similar to that of conventional FP. Besides, we have released our source code for non-commercial use.

© 2015 Optical Society of America

1. Introduction

Fourier Ptychography (FP) is a newly reported technique for large field-of-view (FOV) and high-resolution (HR) imaging [1–3]. This technique sequentially captures a set of low-resolution (LR) images describing different spatial spectrum bands of the sample, and then stitches these spectrum bands together in the Fourier domain to reconstruct the entire HR spatial spectrum, including both amplitude and phase. This HR spectrum can be transformed to the spatial domain to recover the HR image of the sample. Mathematically, FP reconstruction can be treated as a typical phase retrieval problem, which recovers a complex function given only magnitude measurements of its Fourier transform. Specifically, we only obtain the magnitudes of images corresponding to the sub-bands of the HR spatial spectrum, and intend to retrieve the complex HR spectrum. So far, all the FP applications [1,4–6] utilize the alternating projection (AP) algorithm [7, 8], a widely used method for phase retrieval, to implement the reconstruction process.

Recently, the FP technique has been successfully applied to microscopic imaging, leading to Fourier ptychographic microscopy (FPM) [1]. FPM assumes plane-wave illuminations from the LED array. Therefore, according to [9], by sequentially lighting LEDs at different positions in the illumination plane, we can obtain different shifted versions of the sample’s spatial spectrum. Since the whole light field is filtered by the microscope’s objective lens, changing the incident angle results in a set of LR images carrying different sub-bands of the entire spectrum. Finally, by utilizing the FP reconstruction technique (the AP algorithm), FPM can achieve large-FOV and HR microscopic imaging. As stated in [1], the synthetic NA of the FPM setup is ~0.5 and the FOV can reach ~120 mm², which greatly improves the throughput of existing microscopes.

However, the AP algorithm utilized in the reconstruction process is sensitive to the input noise [10], and thus long exposure time is required for capturing high signal-to-noise-ratio (SNR) inputs. As reported in [1], the FPM setup needs around 3 minutes to acquire high SNR images under 137 angularly varying illuminations, for subsequent reconstruction of a gigapixel grayscale image. This largely limits the practical applications of FPM and other FP applications. To shorten the acquisition time, this paper proposes a new phase retrieval method for FP reconstruction, which is able to deal with low SNR input images.

Existing phase retrieval methods can typically be classified into two categories, namely alternating projection algorithms and semi-definite programming (SDP) based algorithms [10]. The former class alternately operates between the spatial and Fourier domains, imposing corresponding spatial-plane and Fourier-plane constraints on the retrieved complex function. These constraints include consistency with the measured magnitudes [11], non-negativity of magnitudes [12], signal support constraints [7], and so on. These methods are efficient, but risk non-convergence or getting trapped in a local optimum [10]. The latter class of approaches relies on the observation that the quadratic equations in the phase retrieval problem can be rewritten as linear equations in a higher-dimensional space [13]. Typical SDP based algorithms include PhaseLift [14, 15] and PhaseCut [16], and PhaseLift has been successfully applied to the common phase retrieval task from captured coded diffraction patterns [17]. Such methods can converge to a global optimum through a series of convex relaxations. However, they require matrix lifting to work in a higher-dimensional space, and thus incur a high computational cost, which makes them less competitive.

Recently, based on Wirtinger derivatives [18, 19], Candes et al. [20] developed a non-convex formulation of the phase retrieval problem and used a gradient descent scheme to derive a computationally efficient solution, termed the Wirtinger flow (WF) algorithm. As stated in [20], although the quadratic model is non-convex, the Wirtinger flow algorithm can rigorously retrieve exact phase information from a nearly minimal number of random measurements, provided a relatively accurate initialization. In this paper, we apply the Wirtinger flow scheme to FP, and further introduce a noise relaxation constraint to form a new FP reconstruction framework, termed WFP (Wirtinger flow optimization for Fourier Ptychography). The advantages of the proposed WFP are threefold:

  • Compared to the existing AP algorithm for FP, WFP better handles the detector noise, and thus largely reduces the required exposure time;
  • Compared to SDP based algorithms such as PhaseLift and PhaseCut, WFP does not need matrix lifting, and thus has a much lower computational cost;
  • WFP is a general optimization framework that can incorporate various priors and constraints, and hence can be extended to further reduce acquisition cost and increase retrieval accuracy.

The remainder of this paper is organized as follows: modeling and derivation of the optimization algorithm are explained in Sec. 2. Then, we conduct a series of experiments on both synthetic and real captured data to validate the proposed approach in Sec. 3. Finally, we conclude this paper with some summaries and discussions in Sec. 4.

2. Optimization framework

In this section, we first review the Wirtinger flow algorithm [20], and then introduce the Wirtinger flow formulation into the FP reconstruction process, and incorporate the noise constraint. Finally, we derive the WFP reconstruction algorithm robust to the measurement noise.

2.1. Review of the Wirtinger flow algorithm

The Wirtinger flow algorithm [20] is a recently reported technique for solving the standard phase retrieval problem. Specifically, it retrieves a complex signal x ∈ ℂ^n from a series of real measurements b ∈ ℝ^m, with the measurement formation defined as b = |Ax|^2 = (Ax)* ⊙ (Ax). Here A ∈ ℂ^{m×n} is a linear sampling matrix, and ⊙ stands for the element-wise (Hadamard) product.

Based on the quadratic loss function, the Wirtinger flow algorithm transforms the phase retrieval task into a minimization problem as

\[
\min f(x) = \frac{1}{2}\,\big\| (Ax)^{*} \odot (Ax) - b \big\|_{F}^{2}, \qquad x \in \mathbb{C}^{n}.
\]

Here ‖·‖_F is the Frobenius norm, calculated as ‖X‖_F = √(Σ_{ij} |X_{ij}|^2).

Such an optimization model can be solved in an iterative manner using the gradient descent scheme. According to [18, 19], the derivative of the complex quadratic cost function with respect to x*, i.e. ∂f/∂x*, is needed to update x in each iteration. In implementation, x is updated in a gradient-descent manner as [20]

\[
x^{(k+1)} = x^{(k)} - \Delta \left.\frac{\partial f}{\partial x^{*}}\right|_{x = x^{(k)}}.
\]

Here Δ is the gradient descent step size set by the user, and ∂f/∂x* can be easily calculated according to the Wirtinger derivatives as

\[
\frac{\partial f}{\partial x^{*}} = \frac{\partial}{\partial x^{*}}\left( \frac{1}{2}\,\big\| (Ax)^{*} \odot (Ax) - b \big\|_{F}^{2} \right) = A^{H}\big[ \big( |Ax|^{2} - b \big) \odot (Ax) \big].
\]

With the above derivations, the Wirtinger flow algorithm is summarized as Alg. 1.


Algorithm 1: Wirtinger flow algorithm
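
To make the update rule concrete, the following is a minimal Python sketch of the plain WF iteration for a generic sampling matrix A. It is an illustration only: the fixed step size and iteration count are simplifications of the growing step-size schedule and spectral initialization used in [20], and the variable names are ours.

```python
import numpy as np

def wirtinger_flow(A, b, x0, step=1e-2, n_iter=500):
    """Plain Wirtinger flow: minimize (1/2) * || |A x|^2 - b ||_F^2.

    A    : (m, n) complex sampling matrix
    b    : (m,)   real intensity measurements
    x0   : (n,)   complex initial guess
    step : gradient descent step size (Delta in the text)
    """
    x = x0.astype(complex).copy()
    for _ in range(n_iter):
        Ax = A @ x
        # Wirtinger gradient: A^H [ (|Ax|^2 - b) * (Ax) ]
        grad = A.conj().T @ ((np.abs(Ax) ** 2 - b) * Ax)
        x = x - step * grad
    return x
```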

2.2. Wirtinger flow optimization for Fourier Ptychography—WFP

In terms of FP reconstruction, the target is to recover the HR spatial spectrum from a series of LR images captured in the spatial domain. To begin with, we briefly review the image formation of FP from an optical standpoint. First, applying a non-normal incident illumination shifts the sample’s HR spectrum by a corresponding offset in the pupil plane, and the shifted spectrum is then truncated by the pupil of the lens. This process is equivalent to sequentially sampling different sub-bands of the latent HR spectrum. Then the LR sub-spectra are Fourier transformed onto the CCD to form the captured LR images. Thus, the relation between the HR reconstruction and the LR observations corresponds to two sequential linear operations: (i) down-sampling caused by the pupil of the lens, and (ii) a Fourier transform of the LR spectrum bands performed by the microscope imaging system. We treat these two operations as a whole and use A ∈ ℂ^{m×n} to represent this combined sampling process. Specifically, denoting the HR spatial spectrum as a complex vector x ∈ ℂ^n, the corresponding sampling matrix A = FS is composed of two components: the Fourier transform F and the down-sampling S.
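
As an illustration of how the combined operator A = FS acts, the sketch below forms one LR intensity measurement for a single illumination angle: it crops a pupil-limited sub-band of the centered HR spectrum at the offset determined by that angle and inverse-transforms it to the image plane. The ideal circular pupil, the even LR size and the variable names are illustrative assumptions rather than the exact parameters of the FPM setup.

```python
import numpy as np

def lr_measurement(hr_spectrum, center, lr_size, pupil_radius):
    """One FP observation: crop a pupil-limited sub-band of the
    (fftshift-ed) HR spectrum and transform it to an LR intensity image.

    hr_spectrum  : (N, N) complex HR spectrum, DC at the array center
    center       : (row, col) of the sub-band center, set by the LED angle
    lr_size      : side length m of the LR image (assumed even here)
    pupil_radius : pupil radius in pixels of the LR spectrum
    """
    r0, c0 = center
    half = lr_size // 2
    sub = hr_spectrum[r0 - half:r0 + half, c0 - half:c0 + half]

    # ideal pupil: ones inside the NA circle, zeros outside
    yy, xx = np.mgrid[-half:half, -half:half]
    pupil = (yy ** 2 + xx ** 2) <= pupil_radius ** 2

    lr_field = np.fft.ifft2(np.fft.ifftshift(sub * pupil))
    return np.abs(lr_field) ** 2  # the camera records intensity only
```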

In addition, we also consider the measurement noise explicitly, and the measurement formation model becomes

\[
b = |Ax|^{2} + n,
\]
where n ∈ ℝ^m denotes the measurement noise, which we assume to be Gaussian. We use σ ∈ ℝ to represent the standard deviation of the noise. The three-sigma rule states that nearly all (99.73%) samples of a Gaussian random variable lie within three standard deviations of its mean. Therefore, we can approximately formulate our noise constraint as
\[
|n| \le 3\sigma.
\]

Introducing a relaxation vector ε ∈ ℝ^m, we can transform the above inequality into an equality

\[
n \odot n - 9\sigma^{2} + \varepsilon \odot \varepsilon = 0.
\]

Combining the above noise constraint (Eq. 6) with the measurement formation (Eq. 4), we can get the optimization model for FP reconstruction as

\[
\min f(x) = \frac{1}{2}\,\big\| (Ax)^{*} \odot (Ax) + n - b \big\|_{F}^{2} \qquad \text{s.t.} \quad n \odot n - 9\sigma^{2} + \varepsilon \odot \varepsilon = 0.
\]

In the following, according to the Wirtinger flow algorithm, we derive the WFP optimization algorithm to solve the above model. First, we introduce a weighting parameter μ to incorporate the noise constraint into the objective function, and the model becomes

\[
\min f(x) = \frac{1}{2}\,\big\| (Ax)^{*} \odot (Ax) + n - b \big\|_{F}^{2} + \frac{\mu}{2}\,\big\| n \odot n - 9\sigma^{2} + \varepsilon \odot \varepsilon \big\|_{F}^{2}.
\]

This is similar to the augmented Lagrangian function [21], which can be solved by sequentially updating each variable while keeping the others constant [22]. Each update can be conducted either by setting the function’s partial derivative with respect to the variable being updated to zero, or by the gradient descent technique. Here we utilize a similar scheme, and sequentially update the optimization variables, i.e., x, n and ε, in Eq. 8.

For x, by calculating the partial derivative of f with respect to x*, we can get its updating rule using the gradient descent technique as

\[
x^{(k+1)} = x^{(k)} - \Delta_{x} \left.\frac{\partial f}{\partial x^{*}}\right|_{x = x^{(k)}} = x^{(k)} - \Delta_{x}\, A^{H}\big[ \big( |Ax|^{2} + n - b \big) \odot (Ax) \big] \Big|_{x = x^{(k)}},
\]
with Δ_x being the gradient descent step size of x. Similarly, we can set the step size of n as Δ_n, and update n following
\[
n^{(k+1)} = n^{(k)} - \Delta_{n} \left.\frac{\partial f}{\partial n}\right|_{n = n^{(k)}} = n^{(k)} - \Delta_{n}\big[ \big( |Ax|^{2} + n - b \big) + 2\mu\,\big( n \odot n - 9\sigma^{2} + \varepsilon \odot \varepsilon \big) \odot n \big] \Big|_{n = n^{(k)}}.
\]

For ε, we set the partial derivative of f with respect to ε to zero, and derive the closed-form updating rule as

\[
\left.\frac{\partial f}{\partial \varepsilon}\right|_{\varepsilon = \varepsilon^{(k+1)}} = \big[ 2\mu\,\big( n \odot n - 9\sigma^{2} + \varepsilon \odot \varepsilon \big) \odot \varepsilon \big] \Big|_{\varepsilon = \varepsilon^{(k+1)}} = 0 \;\;\Longrightarrow\;\; \varepsilon^{(k+1)} = \sqrt{\max\big( 9\sigma^{2} - n \odot n,\, 0 \big)}.
\]

Based on the above derivations, the proposed WFP algorithm is summarized in Alg. 2. As for the initialization, we set x^(0) as the spatial spectrum of the up-sampled version of the LR image captured under normal incident illumination, as sketched below.


Algorithm 2: WFP algorithm
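
A minimal sketch of this initialization, assuming the normal-illumination LR intensity image is available as a NumPy array; taking the square root of the intensity as an amplitude guess and using scikit-image's resize for up-sampling are our assumptions, not necessarily the choices made in the released code.

```python
import numpy as np
from skimage.transform import resize

def init_spectrum(lr_normal, hr_shape):
    """x^(0): spectrum of the up-sampled normal-illumination LR image."""
    hr_guess = resize(np.sqrt(lr_normal), hr_shape, order=1)  # amplitude guess
    return np.fft.fftshift(np.fft.fft2(hr_guess))
```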

As for the parameter settings of Δ_x and Δ_n, similar to WF, we assign Δ_x^(k) = θ^(k)/‖x^(0)‖^2 and Δ_n^(k) = θ^(k)/σ^2, where θ^(k) = min(1 − e^(−k/k_0), θ_max). As stated in [20], k_0 = 330 and θ_max = 0.4 work well, so we also use these settings in our algorithm. We have released our source code for non-commercial use, which can be downloaded from http://www.sites.google.com/site/lihengbian.
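
For completeness, here is a minimal Python sketch of the WFP iteration in Alg. 2, combining the updates of Eqs. (9)-(11) with the step-size schedule above. It is not the released reference code: the dense matrix A, the default value of μ and the exact step-size scaling for n are assumptions and may need tuning in practice.

```python
import numpy as np

def wfp(A, b, x0, sigma, mu=1.0, n_iter=500, k0=330, theta_max=0.4):
    """WFP, a minimal sketch.

    A     : (m, n) complex sampling matrix (down-sampling + Fourier transform)
    b     : (m,)   noisy LR intensity measurements
    x0    : (n,)   initial HR spectrum (see init_spectrum above)
    sigma : standard deviation of the measurement noise
    """
    x = x0.astype(complex).copy()
    n = np.zeros_like(b, dtype=float)
    eps = np.zeros_like(n)
    for k in range(1, n_iter + 1):
        theta = min(1.0 - np.exp(-k / k0), theta_max)
        dx = theta / np.linalg.norm(x0) ** 2  # step for x, as quoted above
        dn = theta / sigma ** 2               # step for n (scaling assumed)

        Ax = A @ x
        resid = np.abs(Ax) ** 2 + n - b
        # x update, Eq. (9)
        x = x - dx * (A.conj().T @ (resid * Ax))
        # n update, Eq. (10)
        n = n - dn * (resid + 2.0 * mu * (n * n - 9 * sigma ** 2 + eps * eps) * n)
        # closed-form eps update, Eq. (11)
        eps = np.sqrt(np.maximum(9 * sigma ** 2 - n * n, 0.0))
    return x
```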

3. Experiments

In this section, we conduct a series of experiments on both synthetic and real captured data, to validate the proposed WFP algorithm.

3.1. Experiment on synthetic data

Algorithms for comparison

To demonstrate the performance and advantages of the proposed WFP algorithm, we run WFP as well as the AP algorithm on simulated FP data. Besides, to investigate WFP’s ability to handle acquisition noise, we further compare its results with those produced by applying denoising before or after the AP reconstruction. Here we use BM3D [23] for denoising, considering its promising results and high efficiency [24]. For simplicity, we refer to these two denoising-included methods as “BM3D+AP” and “AP+BM3D”, respectively.

Criteria

Besides the visual results, we also utilize two quantitative criteria to assess the recovery performance of the above methods. The first is the peak signal-to-noise ratio (PSNR), which has been widely used to assess the quality of processed images against a benchmark. PSNR intuitively describes the intensity difference between two images, and is smaller for lower-quality recovery. The other criterion is the structural similarity (SSIM) index [25], which measures the spatial structural closeness between two images and is thus more consistent with human perception than PSNR. The SSIM score ranges from 0 to 1, and is higher when two images share more similar structural information. Note that both PSNR and SSIM are calculated on the intensity images here.
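
For reference, the two metrics can be computed as in the short sketch below. The explicit PSNR formula assumes images scaled to [0, peak], and the SSIM call relies on scikit-image's implementation; these are our choices, not necessarily those used to produce the tables here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(recovered, reference, peak=1.0):
    """Peak signal-to-noise ratio in dB (higher means closer to the reference)."""
    mse = np.mean((recovered - reference) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim(recovered, reference, peak=1.0):
    """Structural similarity in [0, 1] (higher means more similar structures)."""
    return structural_similarity(recovered, reference, data_range=peak)
```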

Experiment parameter settings

The convergence experiment in [20] shows that the Wirtinger flow algorithm works successfully when the number of measurements is more than about 6 times the number of signal entries to be recovered. In terms of the FP problem, assuming that the overlap ratio is ξ, the LR images are of m×m pixels, and we capture k×k LR images, the sampling ratio between measurements and signal entries can be calculated as

\[
\eta = \frac{m^{2}k^{2}}{\big[(1-\xi)\,m\,(k-1) + m\big]^{2}} \approx \frac{1}{(1-\xi)^{2}}.
\]

Similar to [1], by setting ξ = 65%, we obtain a sampling ratio of around 8, which is higher than the minimum convergence requirement (~6) in [20]. Therefore, we adopt these settings in our simulation experiment, namely ξ = 65%, k = 15, m = 100.

Results

Based on the above specifications, the captured image volume in our simulation experiment is synthesized in three steps: 1) we apply the FFT to the original HR image, and select the sub-regions corresponding to different incident angles by multiplying the HR spectrum with an ideal pupil function (ones inside the pupil circle and zeros outside); 2) we shift these sub-spectra to the origin, and apply an inverse Fourier transform to obtain the LR complex images in the spatial domain; 3) we retain only the intensity of these LR complex images, and add Gaussian white noise to obtain the simulated noisy captures. In our implementation, we use the ’Lena’ and ’Aerial’ images (512×512 pixels) from the USC-SIPI image database [26] as the HR intensity and phase images, respectively. The LR images’ pixel numbers are set to one tenth of the HR image along both dimensions.
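
A minimal sketch of this synthesis pipeline is given below; the even LR size, the fixed random seed and the ideal circular pupil are illustrative assumptions matching the description above, and the sub-band centers would be set by the LED positions.

```python
import numpy as np

def simulate_captures(hr_complex, centers, lr_size, pupil_radius, sigma, seed=0):
    """Synthesize noisy LR captures from an HR complex image (steps 1-3)."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fftshift(np.fft.fft2(hr_complex))      # step 1: HR spectrum
    half = lr_size // 2
    yy, xx = np.mgrid[-half:half, -half:half]
    pupil = (yy ** 2 + xx ** 2) <= pupil_radius ** 2          # ideal pupil
    captures = []
    for r0, c0 in centers:
        sub = spectrum[r0 - half:r0 + half, c0 - half:c0 + half] * pupil
        lr_field = np.fft.ifft2(np.fft.ifftshift(sub))        # step 2: LR complex image
        intensity = np.abs(lr_field) ** 2                     # step 3: keep intensity...
        captures.append(intensity + sigma * rng.standard_normal(intensity.shape))  # ...add noise
    return np.stack(captures)
```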

First, we apply the proposed WFP to the simulated data with varying noise levels to study the algorithm’s performance. Specifically, the standard deviation σ of the additive noise ranges from 0.002 to 0.008 with a 0.002 interval. Empirically, 500 iterations are enough for WFP to converge, and hence we set the iteration number of WFP to 500. The visual and quantitative results are shown in Fig. 1 and Table 1, respectively. From the results we can see that WFP works well in reconstructing both intensity and phase information. Besides, as the noise level increases, the reconstruction quality does not degrade much. This demonstrates the robustness of WFP to different noise levels, and thus its wider applicability.


Fig. 1 Reconstruction results by WFP on different noise-level simulated data. The top row shows the intensity images, while the bottom row shows the phase images.



Table 1. Quantitative comparison among the recovery of WFP at different noise levels.

Then, we compare WFP with the three other methods mentioned above, i.e., AP, BM3D+AP, and AP+BM3D, to show their pros and cons. Here the noise level is fixed at σ = 0.004. The iteration number of conventional AP is set to 50 to ensure convergence. The simulated acquisition image under normal illumination is shown in Fig. 2(a). The visual and quantitative reconstruction results are shown in Figs. 2(b)–2(e) and Table 2, respectively.


Fig. 2 Reconstruction results of different methods on the same dataset.



Table 2. Quantitative comparison among different reconstruction methods.

Due to the noise corruption, the SNR of the captured images is very low, especially for the images corresponding to high spatial frequencies. As a result, the intensity and phase images reconstructed by AP in Fig. 2(b) are very noisy. When BM3D is applied before the AP reconstruction, much of the high-frequency information is filtered out, so serious artifacts appear in the final recovery, as shown in Fig. 2(c). If we apply BM3D after the AP reconstruction, though most of the noise is removed, many crucial image details are filtered out as well (see the hat-tassel areas in Fig. 2(d)). In contrast, the proposed WFP incorporates noise suppression into the reconstruction framework and conducts the two operations jointly. This largely avoids error accumulation in successive processing and achieves higher reconstruction performance. Consequently, WFP obtains satisfactory results with less noise and more details, as shown in Fig. 2(e). In short, WFP largely outperforms the other three methods, both visually and quantitatively.

The superiority of WFP’s performance is theoretically attributed to two points. First, introducing the noise term relaxes the reconstruction model and enlarges the solution search space; hence, WFP has a better chance to achieve high-quality reconstruction than conventional FP. Second, WFP is guaranteed to converge to the global optimum provided a good initialization, whereas conventional AP frequently gets stuck in a local minimum. According to [20], the Wirtinger framework converges to a global optimum when ‖x^(0) ± x‖ ≤ (1/8)‖x‖, where x is the latent signal to be retrieved and x^(0) is its initial estimate. As stated in Sec. 2.2, we initialize x as the spatial spectrum of the up-sampled version of the LR image captured under normal incident illumination. The convergence of WFP with such an initialization is experimentally validated on a series of images. Here we respectively use each image from the USC-SIPI dataset as the latent amplitude of the target scene, and use the ’Aerial’ image as its latent phase. Then we apply WFP and AP to reconstruct the scenes under different measurement noise levels, with the other parameters the same as in the above experiments, and plot the quantitative results in Fig. 3. Due to limited space, we only show the results for several images, including ’CameraMan’, ’Barbara’, ’Peppers’ and ’House’. From the figure, we can see that WFP works well on various scenes with our initialization, and largely outperforms AP in terms of both PSNR and SSIM. Specifically, WFP is statistically superior to conventional AP by at least 5 dB in PSNR and 0.3 in SSIM.


Fig. 3 Quantitative comparison between reconstruction results of WFP and AP on different scenes.


However, the advantageous performance of WFP comes at the expense of a relatively high computational cost. We implement all four methods in Matlab on a computer with an Intel i7 3.6 GHz CPU, 16 GB RAM and a 64-bit Windows 7 system. The running times of the different methods are listed in the bottom row of Table 2. Obviously, WFP takes longer than the other methods.

3.2. Experiment on real captured data

To further validate the effectiveness of WFP, we build an FPM setup to capture LR images as inputs for FP reconstruction. Similar to [27], an upright microscope with a 2× (NA = 0.1) objective is used in the platform. The LED array is placed around 8 cm below the specimen, and the lateral distance between two adjacent LEDs is 4 mm. The central wavelength of the incident light is 632 nm. The pixel size of the captured raw images is ~1.85 μm.

First, we capture the LR images corresponding to the 15×15 LEDs, with the exposure time for each LED set to 1 ms, and apply both AP and WFP to the image set for performance comparison. Figure 4(a) shows the LR images of the USAF chart and the blood smear sample captured under normal illumination. The HR reconstruction results of AP and WFP are presented in Fig. 4(b) and Fig. 4(c), respectively, where the left columns show the recovered amplitudes and the right columns show the recovered phases.


Fig. 4 Reconstruction comparison between AP and WFP on real captured data.


From the results we can conclude three advantages of WFP over conventional AP. First, the reconstruction results of WFP have higher resolution than those of AP, and thus contain more details (see the close-ups for a clearer comparison). Second, WFP effectively suppresses noise (see the smooth regions of the USAF chart). Third, WFP reconstructs much more accurate phase than AP. Note that there exist some phase jumps in the WFP recovery in the feature areas of the USAF chart, where the magnitudes are close to zero. This is because in these areas the phase can take any value without affecting successful magnitude recovery.

Then, we take the reconstruction result of WFP from the LR inputs captured under 1 ms exposure time as a benchmark, and successively increase the exposure time of AP’s inputs for comparison. As shown in Fig. 5, compared to the result of WFP, AP achieves comparable performance when the exposure time grows to 5 ms. This indicates that WFP can save around 80% of the exposure time compared to AP while achieving the same reconstruction quality.


Fig. 5 Resolution demonstration of AP and WFP under different exposure time. (a) shows the amplitude recovery results by WFP with inputs captured under 1 ms exposure time. (b) and (c) respectively show the results by AP with inputs taken under 4 ms and 5 ms.


In all, WFP offers a feasible way for the FP technique to reconstruct highly accurate results from noise-corrupted inputs, which will greatly benefit practical applications.

4. Conclusions and discussions

This paper proposes a reconstruction framework termed Wirtinger flow optimization for Fourier Ptychography (WFP). Based on the recently reported Wirtinger flow algorithm, WFP formulates FP recovery as a quadratic optimization problem and solves it with a gradient descent scheme. By incorporating priors on the measurement noise, WFP saves around 80% of the exposure time of the present FP technique without obvious performance degradation. Results on both synthetic and real captured data validate the effectiveness of WFP.

One extension of WFP is to handle non-uniform noise. This can easily be realized by treating the standard deviation σ of the noise in Eq. (5) as spatially non-uniform, namely by changing σ ∈ ℝ to σ ∈ ℝ^m. Besides, as a flexible optimization framework, WFP can easily be extended by introducing other priors and constraints. For example, we can incorporate the sparsity of the latent HR image [28] into our framework, which may further reduce the snapshot number and thus the acquisition time. We can also introduce the total variation prior [29, 30] into WFP to further suppress noise in the reconstruction results.

Moreover, in the optimization model in Eq. (7), the sampling matrix can be composed of any kind of linear operations (down-sampling and Fourier transform in FP). In other words, WFP is applicable to various linear optical systems that require phase retrieval. For example, WFP can be applied to conventional ptychographic reconstruction [31], which is important for X-ray microscopy. In this case, according to the imaging mechanism, the sampling matrix A is the same as in FP, while the latent signal x becomes the complex scene map in the spatial domain. WFP can also be applied to different variants of conventional FP, such as multiplexed FP [32, 33] and extended FP for fluorescence imaging [5]. For these applications, we only need to change the specifications of the latent signal x and the sampling matrix A. Therefore, WFP is an important technique for both the image processing and optics communities.

Although WFP has advantages over the AP algorithm for FP reconstruction in multiple ways, it is limited in running efficiency, i.e., WFP needs more running time than AP. Therefore, shortening the running time of WFP is part of our future work. Utilizing accelerated gradient descent methods and introducing parallel computation techniques are two promising options for speeding it up.

Acknowledgment

This work was supported by the National Natural Science Foundation of China, Nos. 61171119, 61120106003, and 61327902. The authors thank the editor and the anonymous reviewers for their insightful comments on the manuscript.

References and links

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013). [CrossRef]  

2. G. Zheng, “Breakthroughs in photonics 2013: Fourier ptychographic imaging,” IEEE Photonics J. 6, 0701207 (2014).

3. G. Zheng, X. Ou, R. Horstmeyer, J. Chung, and C. Yang, “Fourier ptychographic microscopy: A gigapixel superscope for biomedicine,” Opt. Photonics News 25, 26–33 (2014). [CrossRef]  

4. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via fourier ptychographic microscopy,” Opt. Lett. 38, 4845–4848 (2013). [CrossRef]   [PubMed]  

5. S. Dong, P. Nanda, R. Shiradkar, K. Guo, and G. Zheng, “High-resolution fluorescence imaging via patternilluminated fourier ptychography,” Opt. Express 22, 20856–20870 (2014). [CrossRef]   [PubMed]  

6. S. Dong, K. Guo, P. Nanda, R. Shiradkar, and G. Zheng, “FPscope: a field-portable high-resolution microscope using a cellphone lens,” Biomed. Opt. Express 5, 3305–3310 (2014). [CrossRef]   [PubMed]  

7. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Optics 21, 2758–2769 (1982). [CrossRef]  

8. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its fourier transform using a support constraint,” J. Opt. Soc. Am. A 4, 118–123 (1987). [CrossRef]  

9. J. Goodman, Introduction to Fourier Optics (McGraw-hill, 2008).

10. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging,” arXiv preprint arXiv:14027350 (2014).

11. R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237 (1972).

12. J. R. Fienup, “Reconstruction of an object from the modulus of its fourier transform,” Opt. Lett. 3, 27–29 (1978). [CrossRef]   [PubMed]  

13. L. Vandenberghe and S. Boyd, “Semidefinite programming,” SIAM Rev. 38, 49–95 (1996). [CrossRef]  

14. E. J. Candes, Y. C. Eldar, T. Strohmer, and V. Voroninski, “Phase retrieval via matrix completion,” SIAM J. Imag. Sci. 6, 199–225 (2013). [CrossRef]  

15. E. J. Candes, T. Strohmer, and V. Voroninski, “Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming,” Commun. Pure Appl. Math. 66, 1241–1274 (2013). [CrossRef]  

16. I. Waldspurger, A. d’Aspremont, and S. Mallat, “Phase recovery, maxcut and complex semidefinite programming,” Math. Program. Ser. A 45, 1–35 (2012).

17. E. J. Candes, X. Li, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” arXiv preprint arXiv:13103240 (2013).

18. R. Remmert, Theory of Complex Functions (Springer, 1991). [CrossRef]  

19. R. F. Fischer, Precoding and Signal Shaping for Digital Transmission (John Wiley & Sons, 2005).

20. E. Candes, X. Li, and M. Soltanolkotabi, “Phase retrieval via wirtinger flow: Theory and algorithms,” arXiv preprint arXiv:14071065 (2014).

21. D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods (Academic, 1982).

22. Y. Deng, Q. Dai, and Z. Zhang, “An overview of computational sparse models and their applications in artificial intelligence,”in Artif. Intell. Evol. Comput. and Metaheuristics, (2013), pp. 345–369. [CrossRef]  

23. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE T. Image Process. 16, 2080–2095 (2007). [CrossRef]  

24. P. Milanfar, “A tour of modern image filtering: new insights and methods, both practical and theoretical,” IEEE Signal Proc. Mag. 30, 106–128 (2013). [CrossRef]  

25. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE T. Image Process. 13, 600–612 (2004). [CrossRef]  

26. A. G. Weber, “The usc-sipi image database version 5,” USC-SIPI Rep. 315, 1–24 (1997).

27. L. Bian, J. Suo, G. Situ, G. Zheng, F. Chen, and Q. Dai, “Content adaptive illumination for fourier ptychography,” Opt. Lett. 39, 6648–6651 (2014). [PubMed]  

28. X. Li and V. Voroninski, “Sparse signal recovery from quadratic measurements via convex programming,” SIAM J. Math. Anal. 45, 3019–3033 (2013). [CrossRef]  

29. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D. 60, 259–268 (1992). [CrossRef]  

30. A. Buades, B. Coll, and J.-M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Sim. 4, 490–530 (2005). [CrossRef]  

31. A. M. Maiden, J. M. Rodenburg, and M. J. Humphry, “Optical ptychography: a practical implementation with useful resolution,” Opt. Lett. 35, 2585–2587 (2010). [CrossRef]   [PubMed]  

32. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in fourier ptychographic imaging,” Biomed. Opt. Express 5, 1757–1767 (2014). [CrossRef]   [PubMed]  

33. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for fourier ptychography with an led array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). [CrossRef]   [PubMed]  
