Realizing high resolution across large volumes is challenging for 3D imaging techniques with high-speed acquisition. Here, we describe a new method for 3D intensity and phase recovery from 4D light field measurements, achieving enhanced resolution via Fourier ptychography. Starting from geometric-optics light field refocusing, we incorporate phase retrieval and correct diffraction artifacts. Further, we incorporate dark-field images to achieve lateral resolution beyond the diffraction limit of the objective (up to 5× larger NA) and axial resolution better than the depth of field, using a low-magnification objective with a large field of view. Our iterative reconstruction algorithm uses a multislice coherent model to estimate the 3D complex transmittance function of the sample at multiple depths, without any weak or single-scattering approximations. Data are captured by an LED array microscope with computational illumination, which enables rapid scanning of angles for fast acquisition. We demonstrate the method with thick biological samples in a modified commercial microscope, indicating the technique’s versatility for a wide range of applications.
© 2015 Optical Society of America
3D imaging techniques are crucial for thick samples and in situ studies; however, high-speed acquisition with high resolution remains a challenging task. Confocal microscopy [1] and multiphoton microscopy [2] are extremely popular for their high resolution, but require slow point scanning through the 3D volume. Wide-field 3D imaging generally involves tomography, in which projections of the sample are taken at many angles. Here, we consider phase imaging, which provides stain-free and label-free intrinsic contrast for transparent biological samples.
Tomography for 3D phase imaging usually requires two steps: first, the 2D phase is computed at each angle, and second, the data are used as input for a tomographic algorithm [3]. 2D phase retrieval methods are easily combined with tomography [4–12], but when wave-optical effects become more prominent (as in microscopy), diffraction tomography [13–19] becomes necessary. All these methods require angle scanning plus either multiple measurements at each angle or a reference beam.
Here, we describe an alternative method in which only a single intensity image is captured for each angle. This is possible because data with angular diversity provide both 3D information and phase contrast. The 2D phase of thin samples can be computed from images taken at multiple illumination angles [20–24] because of the asymmetry introduced in the pupil plane [25]. All of these approaches assume a thin sample; in thick samples, angle-dependent data usually represent tomographic information. Here, instead of choosing either 2D phase imaging of thin samples or 3D recovery of thick samples, we achieve both.
Further, we use angles of illumination greater than those allowed by the numerical aperture (NA) of the objective, so the resulting dark-field images contain subresolution feature information. With Fourier ptychography [23,24], we use these images to build up a larger effective NA, limited by the sum of the illumination and objective NAs. Thus, a low-magnification objective having a large field of view (FoV) can recover high-resolution, gigavoxel-sized 3D intensity and phase images.
We take a holistic approach to the inverse problem, in which an optimization procedure is used to recover 3D intensity and phase from the captured data and remove aberrations. The algorithm is inspired by 3D ptychography [26,27], in which a multislice approach models the sample as a series of thin slices [28]. The wave field propagates through the sample from slice to slice, with each slice modulating the field and the objective NA limiting the resolution of the captured images. To solve the inverse problem, we iteratively update the 3D complex transmittance function for each illumination angle, effectively implementing a nonlinear 3D deconvolution [29] that removes out-of-plane blur. Since the multislice model makes no weak or single-scattering approximations, it can correct for multiple scattering.
Our work is best understood by its relation to light fields, which use space and angle to parameterize rays in 3D. Light field microscopes [30] capture projections across a 2D range of angles, whereas tomography generally only scans angles in 1D. Thus, light fields describe 4D data that can be thought of as limited-angle tomography with multiple directions of rotation [31,32]. The standard algorithm for light field digital refocusing fully incorporates 3D geometric effects [33,34]. However, when wave-optical effects become more prominent (e.g., at smaller feature sizes), the lateral resolution in the digitally refocused images deteriorates due to unaccounted-for diffraction. Using the light field refocused result as an initial guess, our algorithm can be thought of as a diffraction correction routine for light fields, with embedded phase retrieval.
To collect 4D space-angle data, we use an LED array microscope, which scans through illumination angles quickly and with no moving parts. Similar LED array illumination was demonstrated previously for on-chip lensless imaging techniques [36,37]. Our system is built on a commercial microscope in which the illumination unit has been replaced by a programmable LED array. This simple, inexpensive hardware modification enables not only 4D light field capture [35,38], but also dark field [38,39], phase contrast [35,39], Fourier ptychography [23,24], and digital aberration removal [40].
A. Light Field Refocusing from LED Array Measurements
By placing the LED array sufficiently far above the sample that the illumination is considered spatially coherent, we can treat every LED’s illumination as a plane wave from a unique angle. Sequentially turning on each LED in the 2D array, while capturing images, therefore builds up a 4D data set of two spatial and two angular variables, similar to a light field measurement.
For 3D samples, light field refocusing is intuitively understood as a compensation for the geometric shift that occurs upon propagation. Figure 1 illustrates how off-axis illumination at angle θ causes the intensity from a feature at defocus z to shift from its original position in the plane of focus by a distance z tan θ. With higher angles or larger defocus, the rays shift further across the plane, as shown in Fig. 2(b). The slope of the line created by each feature is determined by its depth, while its in-focus location is defined by its intercept. The light field refocusing routine undoes the shift and sums over all angles to synthesize the intensity image.
However, diffraction and phase effects can cause the light to deviate from the straight lines predicted by geometrical optics. This is evidenced by the diffraction rings surrounding each dark line in Fig. 2(b). While light field refocusing corrects for geometric shifts, additional wave-optical effects degrade the resolution with defocus. Our algorithm starts from the light field refocused result, which captures most of the energy redistribution. We then iteratively estimate the phase and diffraction effects.
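The shift-and-add refocusing described above can be sketched in a few lines of numpy. This is a minimal illustration, not our reconstruction code: integer-pixel shifts, a unit pixel size, and the synthetic nine-angle point-source stack are simplifying assumptions (real implementations interpolate subpixel shifts).

```python
import numpy as np

def lf_refocus(images, tangents, z):
    """Geometric-optics light field refocusing: undo each angle's
    z*tan(theta) shift and average over angles (integer-pixel shifts
    via np.roll for simplicity)."""
    acc = np.zeros(images.shape[1:], dtype=float)
    for img, (tx, ty) in zip(images, tangents):
        dy, dx = int(round(z * ty)), int(round(z * tx))
        acc += np.roll(img, (-dy, -dx), axis=(0, 1))
    return acc / len(images)

# Synthetic check: a point source at defocus z0 = 3 appears shifted by
# z0*tan(theta) in each raw image; refocusing at z = z0 realigns it.
z0, n = 3, 32
tangents = [(tx, ty) for tx in (-1, 0, 1) for ty in (-1, 0, 1)]
stack = np.zeros((len(tangents), n, n))
for k, (tx, ty) in enumerate(tangents):
    stack[k, n // 2 + z0 * ty, n // 2 + z0 * tx] = 1.0

refocused = lf_refocus(stack, tangents, z0)   # sharp peak at the center
defocused = lf_refocus(stack, tangents, 0.0)  # energy spread over 9 spots
```

Refocusing at the wrong depth leaves the point spread over nine locations, which is exactly the geometric blur that summing over angles produces away from the plane of focus.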
To achieve resolution beyond the diffraction limit of the objective, we need dark-field illumination from LEDs at high angles. For thin samples, each illumination angle shifts the sample spectrum around in Fourier space, with the objective aperture selecting out different sections. Thus, by scanning through different angles, many sections of Fourier space are captured. These can be stitched together with synthetic aperture approaches [41,42] to create a high-resolution image in real space. The caveat is that phase is required, which the Fourier ptychography algorithm [23,24] provides by performing translational diversity phase retrieval [43–46] in Fourier space.
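For a thin sample, the synthetic aperture idea can be visualized directly in Fourier space: each illumination angle displaces the disc of spectrum that the objective pupil exposes, and the union of the discs is the enlarged passband. The grid size, pupil radius of 8 pixels, and illumination shift of 12 pixels below are arbitrary illustrative choices, not our experimental parameters.

```python
import numpy as np

# Illustrative grid: 64x64 Fourier plane, objective pupil of radius 8 px,
# illumination spatial frequencies out to 12 px.
n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
pupil = (xx**2 + yy**2) <= 8**2

# Each LED shifts the thin-sample spectrum by its illumination frequency,
# so the objective pupil exposes a displaced disc of Fourier space.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
shifts = [(0, 0)] + [(int(np.rint(12 * np.sin(a))), int(np.rint(12 * np.cos(a))))
                     for a in angles]

# The union of all exposed discs is the synthetic aperture, with radius
# ~ pupil radius + illumination shift (i.e., NA_obj + NA_illum).
coverage = np.zeros((n, n), dtype=bool)
for dy, dx in shifts:
    coverage |= np.roll(pupil, (dy, dx), axis=(0, 1))
```

Frequencies well outside the objective pupil (here, out to radius ~20 px instead of 8 px) are covered by the dark-field discs, which is what makes subresolution features recoverable once their phase is retrieved.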
When the sample is thick, each angle of illumination takes a different path through the sample. Thus, the Fourier spectrum of each illumination angle’s exit field is different, but all of these data are interrelated by the multislice model that we use here. Combining many angle-dependent low-resolution images can therefore still achieve enhanced resolution at all slices, limited by the sum of the illumination and objective NAs.
B. Multislice Forward Model
Our forward model assumes that the illumination from the nth LED is a tilted plane wave exp[i2π(u_n · r)], where the spatial frequency u_n is related to the illumination angle by u_n = (sin θ_{x,n}, sin θ_{y,n})/λ and λ is the wavelength.
The field propagating through the thick sample is modeled by a multislice approximation [26,28] that splits the 3D sample into a series of thin slices, each having a complex transmittance function t_m(r), where r = (x, y) denotes the lateral coordinates and m indexes the slices. As light passes through each slice, the field is first multiplied by the 2D transmittance function of that slice, and then propagated to the next slice. The spacing between neighboring slices is modeled as a uniform medium (e.g., air) of thickness Δz_m. Thus, the field exiting the sample can be calculated using a series of multiply-and-propagate operations: ψ_{m+1}(r) = Prop_{Δz_m}{t_m(r) ψ_m(r)}, where ψ_1(r) is the illumination field and Prop_{Δz} denotes free-space (angular spectrum) propagation over a distance Δz.
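The multiply-and-propagate recursion can be sketched with a minimal angular-spectrum propagator. Units are arbitrary here, the slice spacing is taken uniform, and evanescent frequencies are simply dropped; this is an illustration of the forward model, not our full implementation.

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Angular-spectrum propagation of a 2D complex field by distance dz
    through a uniform medium (evanescent components discarded)."""
    fy = np.fft.fftfreq(field.shape[0], d=dx)[:, None]
    fx = np.fft.fftfreq(field.shape[1], d=dx)[None, :]
    arg = np.maximum(1.0 / wavelength**2 - fx**2 - fy**2, 0.0)
    kernel = np.exp(2j * np.pi * dz * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def multislice(illum, slices, dz, wavelength, dx):
    """Multiply by each thin-slice transmittance, then propagate to the
    next slice; returns the field exiting the sample."""
    field = illum
    for t in slices:
        field = propagate(field * t, dz, wavelength, dx)
    return field

# Sanity checks in arbitrary units (dx = 1, wavelength = 0.5, dz = 5):
n, dx, wl, dz = 64, 1.0, 0.5, 5.0
illum = np.ones((n, n), complex)            # on-axis plane wave
empty = [np.ones((n, n), complex)] * 3      # "no sample": t = 1 everywhere
rng = np.random.default_rng(0)
phase_slab = [np.exp(1j * 0.5 * rng.standard_normal((n, n)))]

exit_empty = multislice(illum, empty, dz, wl, dx)      # stays a unit plane wave
exit_phase = multislice(illum, phase_slab, dz, wl, dx)  # energy is conserved
```

With unit-modulus transmittances and a unit-modulus propagation kernel, the model conserves energy, which is one reason it remains valid beyond the single-scattering regime.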
C. Reconstruction Algorithm
Using the multislice forward model, we develop an iterative reconstruction routine that makes explicit use of the light field result as an initial guess. Light field refocusing predicts the intensity image at a distance z from the actual focal plane to be the average of the angle-dependent images, each shifted back by z tan θ_n [35,38]: I_LF(r, z) = (1/N) Σ_n I_n(r + z tan θ_n).
We then improve the estimate for the sample’s intensity and phase at each slice, as well as an estimate of the pupil function aberrations [40,44–46], by an iterative Fourier ptychography [23,24] reconstruction process, combined with the multislice inversion procedure described above.
The reconstruction procedure aims to minimize the difference between the actual and estimated intensity measurements in a least-squares sense: min over {t_m(r)} of Σ_n Σ_r [I_n(r) − Î_n(r)]², where Î_n is the intensity predicted by the forward model for the nth LED.
- (1) Starting from the current guess of the multislice transmittance function, we use our forward model to generate the current estimate of the Fourier spectrum of the field at the camera plane when illuminating with the nth LED.
Since the two functions come as a product, we use a gradient descent procedure, described by Eq. (11), to separate the two updates [24,44]. The procedure is general for updating any function from the product of its previous estimate with another function: given an updated product ψ′ replacing ψ = a·b, the factor a is updated as a ← a + α [b*/max|b|²] (ψ′ − ψ), and similarly for b:
- (4) The field is back-propagated through the 3D sample and the following steps are repeated until the first slice is reached. At the mth slice, the transmittance function and the incident field of this slice are updated using the same procedure as Eq. (11):
The updated exit field of the previous slice is related to the updated incident field of the current slice by back-propagation over the slice spacing:
- (5) At the first slice, the incident field is kept unchanged as the original illumination: the tilted plane wave from the nth LED.
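The generic product-separation step of Eq. (11) (an ePIE-style update [24,44]) can be sketched as follows. The symmetric update of both factors and the step size alpha follow the usual ptychographic form; the random-array check is only an illustration of the step, not the full multislice loop.

```python
import numpy as np

def update_factors(a, b, psi_new, alpha=1.0):
    """Given an updated product psi_new replacing psi = a*b, distribute
    the correction between the two factors (ePIE-style gradient step):
    each factor is nudged by the residual weighted by the conjugate of
    the other factor, normalized by its peak intensity."""
    delta = psi_new - a * b
    a_new = a + alpha * np.conj(b) / np.max(np.abs(b) ** 2) * delta
    b_new = b + alpha * np.conj(a) / np.max(np.abs(a) ** 2) * delta
    return a_new, b_new

# Illustration: a single update of either factor moves the product a*b
# toward the target psi_t.
rng = np.random.default_rng(1)
shape = (32, 32)
a = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
b = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
psi_t = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
a_new, b_new = update_factors(a, b, psi_t)
```

In the multislice loop, a is the slice transmittance and b the incident field of that slice; the updated incident field is then back-propagated (propagation with −Δz) to become the updated exit field of the previous slice.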
After looping through the images from each of the LEDs, we check the convergence of the current sample estimate by computing the mean squared difference between the measured and estimated intensity images from each angle. The algorithm converges reliably within only a few iterations in all cases we tested.
Although iterative methods like the one used here often get stuck in local minima [47,48], we find that the close initial guess provided by the light field result helps to avoid this problem. Our observation agrees well with recent phase retrieval theory for similar iterative methods [49,50], which guarantees convergence to a global solution, given proper initialization. Further, the data contain significant redundancy (4D data for a 3D reconstruction) and diversity (from angular variation), providing a highly constrained solution space. Similar robustness has also been shown with translational diversity data in real-space ptychography [44–46]. Thus, when our estimate correctly predicts the captured data and returns a good convergence criterion, we can be confident that the result is correct.
3. EXPERIMENTAL RESULTS
Our experimental setup (Fig. 1) consists of a custom-made LED array (4 mm spacing, 20 nm bandwidth around the central wavelength) placed above the sample, replacing the standard illumination unit of a commercial inverted microscope (Nikon TE300). The LED array is controlled by an ARM microcontroller and is synchronized with the camera (PCO.edge) to scan through the LEDs at camera-limited speeds. Our camera is capable of 100 frames per second at full frame with 16-bit data, although we use longer exposure times for dark-field images. Thus, acquisition time can be easily traded off against image quality or resolution. Each LED has a small square emitting area, resulting in a finite illumination coherence area at the sample plane (given by the van Cittert–Zernike theorem). Thus, our coherent plane-wave illumination assumption holds as long as we reconstruct the image in patches whose area is smaller than the coherence area. The final full-FoV reconstruction is obtained by stitching together all the patches.
A. Improved Light Field Refocusing
We first demonstrate improvement over light field refocusing with a 0.25 NA objective, using only bright-field LEDs; this corresponds to the 69 LEDs within the bright-field circle at the center of our array. The two-slice test sample consists of two resolution targets, one placed above the focal plane and the other placed below the focal plane and rotated relative to the first. Some raw images are shown in Fig. 2(a) for LEDs illuminating at varying angles. The image shifts with illumination angle, as shown in the space-angle plots in Fig. 2(b). In Fig. 2(c), we compare reconstructions from light field refocusing and our multislice method against images captured at physical focus (with all bright-field LEDs on). The resolution in the physically refocused images is 0.78 μm (Group 9, Element 3); the light field refocused image, however, cannot resolve such small features, due to diffraction blurring. Multislice reconstructions with a two-slice model, in contrast, do recover diffraction-limited resolution at both depths, providing a significant improvement over the light field refocused result.
Since our result represents the sample transmittance function, out-of-plane blur from the other resolution target is largely removed, unlike in physical focusing. In addition, our results have better high-frequency contrast. This is because the physically focused images are taken with all bright-field LEDs on, so the incoherent optical transfer function has a larger frequency cutoff than the coherent case, but with decreased response at higher spatial frequencies [51]. In our result, we synthesize a coherent transfer function (CTF) that has a uniform frequency response within the passband.
B. Multislice Fourier Ptychography
Next, we demonstrate multislice Fourier ptychography for obtaining resolution beyond the objective’s diffraction limit by including dark-field LEDs up to 0.41 illumination NA. To do this, we switch to a 0.1 NA objective, then use our method to recover lateral resolution with an effective NA of 0.51 (Fig. 3). We can resolve Group 9, Element 4, giving a resolution of 0.69 μm (five times better NA than the objective), as expected. The added benefit is that the FoV of the low-magnification objective is much larger, resulting in a large-volume reconstruction. Note that the physically focused images now display significant out-of-plane blur, since the small NA gives a large depth of field. Our multislice reconstruction successfully mitigates most of this blur, resulting in a clean image at each depth.
Finally, we demonstrate our method on a continuous thick Spirogyra algae sample (Carolina Biological) having both absorption and phase effects (Fig. 4). Here, we use the 0.25 NA objective with LEDs that provide a best possible lateral resolution of 0.59 μm (Rayleigh criterion with an effective NA of 0.66). The sample’s full thickness is split into 11 slices spaced by 10 μm, a step size midway between the axial resolution of the objective and our predicted axial resolution [Eq. (17)]. Although the sample is continuous through the entire depth range, our multislice method recovers slices that contain only the parts of the sample within the axial resolution range around each corresponding depth.
C. Analysis of Resolution
The stacked resolution targets provide a convenient way to experimentally characterize lateral resolution at multiple depths. Figure 5 plots simulated (theoretical) versus experimentally measured resolution for the two-slice situation using several methods and varying defocus depths, where the defocus distance refers to the relative distance of the test target from the physical focus plane of the microscope. We define resolution according to the closest set of bars that can be discriminated.
First, we examine the light field refocusing result. As the defocus distance increases, the lateral resolution degrades due to diffraction effects, as predicted by theory. Note that this is a different type of diffraction effect than that pointed out for the original light field microscope [30,52]. Our multislice method recovers resolution back to the full diffraction limit of the system, providing considerable improvement over light field refocusing as the defocus distance grows.
Using our multislice Fourier ptychography method, we expect to achieve lateral resolution at all slices that is limited by the sum of the NAs of the illumination and objective. When the target is at or near focus, we successfully achieve the maximum resolution expected of this system (0.59 μm). However, as the sample plane moves away from focus, the resolution degrades, a behavior not predicted by simulations (see Fig. 5). We believe this error to be due to LED position miscalibration, since a larger defocus is more sensitive to angle error. Thus, accurate calibration of LED positions will be crucial for extending our work to higher magnifications.
To analyze both lateral resolution and depth (z) sectioning theoretically, we use the 3D CTF of the imaging system [53–57], with the caveat that this theory assumes a single- or weak-scattering approximation (e.g., Born or Rytov model) [58]. Analytical theory for multiple scattering is not available, but many procedures start from single scattering and apply it recursively, so this should provide a good starting point for resolution analysis. The 3D CTF of our LED array microscope is sketched in Fig. 6, where the thick arcs describe the frequency coverage in the (u_x, u_z) space. As expected, the lateral bandwidth is determined by the sum of the objective NA (NA_obj) and illumination NA (NA_illum), giving a lateral cutoff of (NA_obj + NA_illum)/λ. The axial bandwidth follows from the axial extent of the arcs (see Supplement 1) and can be calculated from Fig. 6 as Δu_z = [2 − √(1 − NA_obj²) − √(1 − NA_illum²)]/λ, so the predicted axial resolution [Eq. (17)] is 1/Δu_z. This estimation sets a lower bound for the achievable axial resolution, since multiple scattering may reduce it further, as observed in simulations (see Supplement 1). In practice, we may suffer further loss at large defocus distances, due to LED miscalibration.
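The resolution limits discussed above can be turned into a small calculator. The Rayleigh prefactor and the arc-geometry axial bandwidth are the assumed forms here (the exact constants of Eq. (17) may differ); the NA values in the demo lines are those of our 0.25 NA objective with 0.41 illumination NA, with the wavelength normalized to 1.

```python
import numpy as np

def lateral_resolution(wl, na_obj, na_illum):
    """Rayleigh-type lateral resolution of the synthesized aperture."""
    return 0.61 * wl / (na_obj + na_illum)

def axial_resolution(wl, na_obj, na_illum):
    """Inverse of the axial CTF bandwidth: each arc in Fig. 6 spans axial
    frequencies (1 - sqrt(1 - NA^2))/wl for the objective and for the
    illumination apertures (assumed arc-geometry form)."""
    duz = (2 - np.sqrt(1 - na_obj**2) - np.sqrt(1 - na_illum**2)) / wl
    return 1.0 / duz

# Demo in units of the wavelength (wl = 1):
lat = lateral_resolution(1.0, 0.25, 0.41)
ax = axial_resolution(1.0, 0.25, 0.41)
```

As the formulas show, axial resolution remains much coarser than lateral resolution at these moderate NAs, and increasing the illumination NA improves both.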
The multislice Fourier ptychography method we present here is similar in concept to 2D Fourier ptychography [23,24] in the sense that it recovers resolution beyond the band limit of the objective. However, the extension to 3D becomes more similar to light fields or tomography. Although our inverse algorithm is quite similar in procedure to real-space 3D ptychography [26,27], the interpretation of the data is different. In our case, we collect real-space images for different angles of illumination, and thus different projection paths through the sample. This is why light field refocusing provides a close initial guess. The two situations can be related in phase space by a 90 degree rotation [59]. Ptychography collects the same data as a spectrogram [60,61], i.e., Fourier-space intensity with real-space aperture scanning [62], whereas Fourier ptychography collects real-space intensity with Fourier-space scanning. Thus, the light field refocusing initial guess that we use here could in principle be applied to real-space 3D ptychography. Interestingly, the connection to phase space also predicts the connection between ray and wave optics [63], where the shearing of a light field is analogous to the shearing of a Wigner phase-space function.
One of the key factors in the success of our method is the large amount of data collected, since it provides combined data redundancy and diversity, which improve the convergence of the algorithm. While it is difficult to theoretically quantify the number of images necessary, we find empirically that we always need to collect significantly more pixels of data than we reconstruct. For example, in the experiment of Fig. 4, we reconstruct 11 slices of 2D intensity and phase, plus a single complex pupil function for digital aberration removal, from a data set of 225 captured images, so several times more data are collected than recovered. These ratios are comparable to those typically used in 2D ptychography to achieve reliable convergence [64], although multiplexing has been shown to significantly reduce the captured data requirements [24]. Future work will explore the limits of data requirements for the 3D case.
We have presented a new method for multislice 3D Fourier ptychography that recovers 3D sample intensity and phase with resolution beyond the diffraction limit of the microscope objective used. Our data are captured by an LED array microscope, which is particularly attractive for commercial microscopy, since it can achieve rapid scanning of angles by LED array illumination. The method is label-free and stain-free, and so has wide application in the biological imaging of live samples.
National Science Foundation (NSF) (1351896); Office of Naval Research (ONR) (N00014-14-1-0083); United States Agency for International Development (USAID) (AID-OAA-A-12-00011, AID-OAA-A-13-00002).
The authors thank Ziji Liu for help with experiments and Shan Shan Kou for helpful discussions.
See Supplement 1 for supporting content.
1. C. J. Sheppard and D. M. Shotton, Confocal Laser Scanning Microscopy (BIOS Scientific, 1997).
2. W. R. Zipfel, R. M. Williams, and W. W. Webb, “Nonlinear magic: multiphoton microscopy in the biosciences,” Nat. Biotechnol. 21, 1369–1377 (2003). [CrossRef]
3. A. C. Kak and M. Slaney, Principle of Computerized Tomographic Imaging (Society for Industrial and Applied Mathematics, 2001).
4. C. M. Vest, Holographic Interferometry (Wiley, 1979), Vol. 476, p. 1.
5. W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, “Tomographic phase microscopy,” Nat. Methods 4, 717–719 (2007). [CrossRef]
6. M. H. Maleki and A. J. Devaney, “Phase-retrieval and intensity-only reconstruction algorithms for optical diffraction tomography,” J. Opt. Soc. Am. A 10, 1086–1092 (1993). [CrossRef]
7. T. Gureyev, D. Paganin, G. Myers, Y. Nesterets, and S. Wilkins, “Phase-and-amplitude computer tomography,” Appl. Phys. Lett. 89, 034102 (2006). [CrossRef]
8. L. Tian, J. C. Petruccelli, Q. Miao, H. Kudrolli, V. Nagarkar, and G. Barbastathis, “Compressive X-ray phase tomography based on the transport of intensity equation,” Opt. Lett. 38, 3418–3421 (2013). [CrossRef]
9. M. H. Jenkins, J. M. Long, and T. K. Gaylord, “Multifilter phase imaging with partially coherent light,” Appl. Opt. 53, D29–D39 (2014). [CrossRef]
10. M. Dierolf, A. Menzel, P. Thibault, P. Schneider, C. M. Kewish, R. Wepf, O. Bunk, and F. Pfeiffer, “Ptychographic x-ray computed tomography at the nanoscale,” Nature 467, 436–439 (2010). [CrossRef]
11. M. Guizar-Sicairos, A. Diaz, M. Holler, M. S. Lucas, A. Menzel, R. A. Wepf, and O. Bunk, “Phase tomography from x-ray coherent diffractive imaging projections,” Opt. Express 19, 21345–21357 (2011). [CrossRef]
12. M. Holler, A. Diaz, M. Guizar-Sicairos, P. Karvinen, E. Färm, E. Härkönen, M. Ritala, A. Menzel, J. Raabe, and O. Bunk, “X-ray ptychographic computed tomography at 16 nm isotropic 3D resolution,” Sci. Rep. 4, 3857 (2014). [CrossRef]
13. A. Devaney, “A filtered backpropagation algorithm for diffraction tomography,” Ultrason. Imag. 4, 336–350 (1982).
14. W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Extended depth of focus in tomographic phase microscopy using a propagation algorithm,” Opt. Lett. 33, 171–173 (2008). [CrossRef]
15. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17, 266–277 (2009). [CrossRef]
16. Y. Cotte, F. Toy, P. Jourdain, N. Pavillon, D. Boss, P. Magistretti, P. Marquet, and C. Depeursinge, “Marker-free phase nanoscopy,” Nat. Photonics 7, 113–117 (2013). [CrossRef]
17. G. Gbur and E. Wolf, “Diffraction tomography without phase information,” Opt. Lett. 27, 1890–1892 (2002). [CrossRef]
18. M. A. Anastasio, D. Shi, Y. Huang, and G. Gbur, “Image reconstruction in spherical-wave intensity diffraction tomography,” J. Opt. Soc. Am. A 22, 2651–2661 (2005). [CrossRef]
19. T. Kim, R. Zhou, M. Mir, S. D. Babacan, P. S. Carney, L. L. Goddard, and G. Popescu, “White-light diffraction tomography of unlabelled live cells,” Nat. Photonics 8, 256–263 (2014). [CrossRef]
20. A. Kirkland, W. Saxton, K.-L. Chau, K. Tsuno, and M. Kawasaki, “Super-resolution by aperture synthesis: tilt series reconstruction in CTEM,” Ultramicroscopy 57, 355–374 (1995). [CrossRef]
21. A. Kirkland, W. Saxton, and G. Chand, “Multiple beam tilt microscopy for super resolved imaging,” J. Electron. Microsc. 46, 11–22 (1997). [CrossRef]
22. S. B. Mehta and C. J. Sheppard, “Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast,” Opt. Lett. 34, 1924–1926 (2009). [CrossRef]
23. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier Ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013). [CrossRef]
24. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). [CrossRef]
25. D. Hamilton, C. Sheppard, and T. Wilson, “Improved imaging of phase gradients in scanning optical microscopy,” J. Microsc. 135, 275–286 (1984). [CrossRef]
26. A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29, 1606–1614 (2012). [CrossRef]
27. T. M. Godden, R. Suman, M. J. Humphry, J. M. Rodenburg, and A. M. Maiden, “Ptychographic microscope for three-dimensional imaging,” Opt. Express 22, 12513–12523 (2014). [CrossRef]
28. J. M. Cowley and A. F. Moodie, “The scattering of electrons by atoms and crystals. i. A new theoretical approach,” Acta Crystallogr. 10, 609–619 (1957). [CrossRef]
29. T. J. Holmes and N. O’connor, “Blind deconvolution of 3D transmitted light brightfield micrographs,” J. Microsc. 200, 114–127 (2000). [CrossRef]
30. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” SIGGRAPH ‘06, New York, USA, 2006, pp. 924–934.
31. R. Ng, “Fourier slice photography,” SIGGRAPH ‘05, New York, USA, 2005.
32. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–607 (2006). [CrossRef]
33. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001). [CrossRef]
34. M. Levoy, “Light fields and computational imaging,” Computer 39, 46–55 (2006). [CrossRef]
35. L. Tian, J. Wang, and L. Waller, “3D differential phase-contrast microscopy with computational illumination using an LED array,” Opt. Lett. 39, 1326–1329 (2014). [CrossRef]
36. T.-W. Su, S. O. Isikman, W. Bishara, D. Tseng, A. Erlinger, and A. Ozcan, “Multi-angle lensless digital holography for depth resolved imaging on a chip,” Opt. Express 18, 9690–9711 (2010). [CrossRef]
37. S. O. Isikman, W. Bishara, S. Mavandadi, W. Y. Frank, S. Feng, R. Lau, and A. Ozcan, “Lens-free optical tomographic microscope with a large imaging volume on a chip,” Proc. Natl. Acad. Sci. USA 108, 7296–7301 (2011).
38. G. Zheng, C. Kolner, and C. Yang, “Microscopy refocusing and dark-field imaging by using a simple LED array,” Opt. Lett. 36, 3987–3989 (2011). [CrossRef]
39. Z. Liu, L. Tian, S. Liu, and L. Waller, “Real-time brightfield, darkfield, and phase contrast imaging in a light-emitting diode array microscope,” J. Biomed. Opt. 19, 106002 (2014). [CrossRef]
40. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22, 4960–4972 (2014). [CrossRef]
41. S. A. Alexandrov, T. R. Hillman, T. Gutzler, and D. D. Sampson, “Synthetic aperture Fourier holographic optical microscopy,” Phys. Rev. Lett. 97, 168102 (2006). [CrossRef]
42. D. J. Lee and A. M. Weiner, “Optical phase imaging using a synthetic aperture phase retrieval technique,” Opt. Express 22, 9380–9394 (2014). [CrossRef]
43. J. M. Rodenburg and H. M. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85, 4795–4797 (2004). [CrossRef]
44. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16, 7264–7278 (2008). [CrossRef]
45. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109, 338–343 (2009). [CrossRef]
46. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109, 1256–1262 (2009). [CrossRef]
47. R. Gerchberg and W. Saxton, “Phase determination for image and diffraction plane pictures in the electron microscope,” Optik 34, 275–284 (1971).
48. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]
49. E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” arXiv:1310.3240 (2013).
50. E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: theory and algorithms,” arXiv:1407.1065 (2014).
51. J. Goodman, Introduction to Fourier Optics (Roberts & Co., 2005).
52. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013). [CrossRef]
53. C. W. McCutchen, “Generalized aperture and the three-dimensional diffraction image,” J. Opt. Soc. Am. 54, 240–242 (1964). [CrossRef]
54. B. R. Frieden, “Optical transfer of the three-dimensional object,” J. Opt. Soc. Am. 57, 56–65 (1967). [CrossRef]
55. N. Streibl, “Three-dimensional imaging by a microscope,” J. Opt. Soc. Am. A 2, 121–127 (1985). [CrossRef]
56. C. J. R. Sheppard, Y. Kawata, S. Kawata, and M. Gu, “Three-dimensional transfer functions for high-aperture systems,” J. Opt. Soc. Am. A 11, 593–598 (1994). [CrossRef]
57. S. S. Kou and C. J. Sheppard, “Image formation in holographic tomography: high-aperture imaging conditions,” Appl. Opt. 48, H168–H175 (2009). [CrossRef]
58. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th ed. (Cambridge University, 1999).
59. R. Horstmeyer and C. Yang, “A phase space model of Fourier ptychographic microscopy,” Opt. Express 22, 338–358 (2014). [CrossRef]
60. H. Bartelt, K. Brenner, and A. Lohmann, “The Wigner distribution function and its optical production,” Opt. Commun. 32, 32–38 (1980). [CrossRef]
61. L. Waller, G. Situ, and J. Fleischer, “Phase-space measurement and coherence synthesis of optical beams,” Nat. Photonics 6, 474–479 (2012). [CrossRef]
62. H. N. Chapman, “Phase-retrieval X-ray microscopy by Wigner—distribution deconvolution,” Ultramicroscopy 66, 153–172 (1996). [CrossRef]
63. Z. Zhang and M. Levoy, “Wigner distributions and how they relate to the light field,” in IEEE Conference on Computational Photography (IEEE, 2009).
64. O. Bunk, M. Dierolf, S. Kynde, I. Johnson, O. Marti, and F. Pfeiffer, “Influence of the overlap parameter on the convergence of the ptychographical iterative engine,” Ultramicroscopy 108, 481–487 (2008). [CrossRef]