We report an imaging scheme, termed aperture-scanning Fourier ptychography, for 3D refocusing and super-resolution macroscopic imaging. The reported scheme scans an aperture at the Fourier plane of an optical system and acquires the corresponding intensity images of the object. The acquired images are then synthesized in the frequency domain to recover a high-resolution complex sample wavefront; no phase information is needed in the recovery process. We demonstrate two applications of the reported scheme. In the first example, we use an aperture-scanning Fourier ptychography platform to recover the complex hologram of extended objects. The recovered hologram is then digitally propagated into different planes along the optical axis to examine the 3D structure of the object. We also demonstrate a reconstruction resolution better than the detector pixel limit (i.e., pixel super-resolution). In the second example, we develop a camera-scanning Fourier ptychography platform for super-resolution macroscopic imaging. By simply scanning the camera over different positions, we bypass the diffraction limit of the photographic lens and recover a super-resolution image of an object placed at the far field. This platform’s maximum achievable resolution is ultimately determined by the camera’s traveling range, not the aperture size of the lens. The FP scheme reported in this work may find applications in 3D object tracking, synthetic aperture imaging, remote sensing, and optical/electron/X-ray microscopy.
© 2014 Optical Society of America
Fourier ptychography (FP) is a phase retrieval technique that uses the concept of angular diversity to recover high-resolution complex sample images [1–3]. Similar to other phase retrieval techniques [4–14], the recovery process of FP consists of alternating enforcement of the known sample information in the spatial domain, and a fixed constraint in the Fourier domain. In particular, the recovery process of FP shares its roots with ptychography [15–27], a lensless imaging approach that applies translational diversity (i.e., shifting the sample laterally) to recover its complex image. FP, instead, imposes the panning spectrum constraint in the Fourier domain to simultaneously expand the Fourier passband and recover the complex sample image.
Current FP platforms are mainly developed for microscopy applications. In these platforms, an LED array is used to provide angle-varied illumination, and the corresponding intensity images of the sample are captured using a low numerical aperture (NA) objective lens. These images are then iteratively stitched together in the Fourier domain to recover a high-resolution, complex sample image. Prior work has demonstrated FP's microscopic imaging capabilities well beyond the cutoff frequency defined by the objective lens, its acquisition of quantitative phase, and its spectral multiplexing capability. Recent reviews of the FP approach can be found in [28, 29].
Despite these successful demonstrations, current Fourier ptychography platforms still share a major limitation: the imaged sample must be thin. Only under this assumption will the low-resolution images obtained at different incident angles uniquely map to different passbands of the 2D sample spectrum, allowing the FP algorithm to accurately impose the panning spectrum constraint and recover a high-resolution complex sample image. If the sample is not thin, this one-to-one mapping in the Fourier plane is invalid, and the panning spectrum constraint cannot be imposed.
Variable-angle illumination is not the only way to capture shifted versions of a sample’s spectrum. Instead, if the sample is illuminated with a single plane-wave, a linearly translating imaging aperture (perpendicular to the optical axis) can achieve a similar effect. Such a setup allows us to circumvent the thin specimen assumption noted above. In this paper, we demonstrate such a detection-path-based imaging scheme, termed aperture-scanning Fourier ptychography. The reported scheme imposes a scanning aperture at the Fourier plane of an imaging system and acquires multiple intensity images of a sample. The acquired images are then synthesized in the frequency domain to recover the optical wavefront exiting the sample at high resolution. Unlike the illumination-based FP approach, the reported scheme’s recovered images each depend upon how the complex wavefront exits the sample – not enters it. Therefore, the sample thickness becomes irrelevant during reconstruction. After recovery, the exiting complex wavefront can then be back-propagated to any plane along the optical axis for 3D holographic refocusing.
Furthermore, the aperture-scanning scheme extends the FP framework to macroscopic imaging settings, where a photographic lens's aperture naturally serves as a support constraint in the Fourier domain. By simply scanning the camera to different positions perpendicular to the optical axis, we can bypass its aperture-defined diffraction limit. The maximum achievable resolution of an FP-reconstructed image in this case will instead be defined by the camera's traveling range. We note that it is not possible to implement the FP concept in a macroscopic imaging setting using the original illumination-based scheme [1, 2].
In the following, we will first demonstrate the aperture-scanning FP scheme for 3D holographic refocusing. We will show that a resolution better than the detector pixel limit, i.e., pixel super-resolution, can be achieved using our prototype setup. Next, we will implement the reported scheme in a macroscopic imaging setting. We will show that our prototype setup is able to bypass the diffraction limit of a conventional photographic lens and recover a super-resolution image of an object placed in the far field. Finally, we will summarize the results and discuss future directions.
2. 3D refocusing via aperture-scanning Fourier ptychography
The aperture-scanning Fourier ptychographic imaging scheme is shown in Fig. 1(a), where a circular aperture mask is placed at the Fourier plane of a 4f system. Denoting the optical field exiting the sample as s(x,y), we assume the field at the Fourier plane is S(kx,ky), the Fourier transform of s, following the well-known property of 4f setups. We used an x-y motion stage to scan the circular aperture a(kx,ky), which selectively transmits different passbands of the optical field S(kx,ky) to the image plane, as shown in Fig. 1(b). For each position i of the circular mask, we acquire an intensity image of the sample that takes the form

I_i(x,y) = |F{ S(kx,ky) · a(kx − kx,i, ky − ky,i) }|²,

where F is the Fourier transform operator and (kx,i, ky,i) denotes the center of the ith aperture position. We then synthesize these acquired images in the frequency domain to produce a complex sample image.
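As an illustration, this forward model can be simulated in a few lines of Python (a sketch in pixel units; the function names and the use of an inverse transform to return to the spatial domain are our conventions here, not taken from the experimental implementation):

```python
import numpy as np

def circular_aperture(shape, center, radius):
    """Binary circular mask a(kx, ky), centered at `center` (in pixels)."""
    ky, kx = np.indices(shape)
    return ((kx - center[1]) ** 2 + (ky - center[0]) ** 2 <= radius ** 2).astype(float)

def simulate_measurements(sample, centers, radius):
    """Record one intensity image |FT^-1{ S(kx,ky) * a_i(kx,ky) }|^2 per aperture position."""
    spectrum = np.fft.fftshift(np.fft.fft2(sample))  # S(kx, ky), DC at array center
    images = []
    for c in centers:
        mask = circular_aperture(sample.shape, c, radius)
        field = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))  # band-passed field
        images.append(np.abs(field) ** 2)  # intensity-only detection
    return images
```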
The prototype setup is shown in Fig. 1(c), where we used two Nikon lenses (50 mm focal length, f/1.8) to form a 4f system and placed a 2.5 mm circular aperture at its Fourier plane. A CCD camera with a 5.5 µm pixel size (8-bit dynamic range) was used to capture the sample's intensity images. We chose an aperture size of 2.5 mm (0.025 NA) to match the 5.5 µm pixel's sampling requirement (i.e., to avoid problems associated with pixel aliasing). In all experiments shown in this paper, we used a collimated red LED (central wavelength 632 nm) as the light source. The bandwidth of the LED is 25 nm, and the size of the LED's active area is ~0.15 mm (leading to an approximately 1 mm transverse coherence length at the sample plane, as detailed in Appendix A). We used 15 x 15 scanning steps in our prototype, and the acquisition time was about 8 minutes (we did not optimize the scanning speed). We note that the mechanical scanning process in our prototype setup is only a proof-of-concept demonstration; it can be replaced by a spatial light modulator, as demonstrated in previous reports.
The recovery process of the aperture-scanning Fourier ptychographic imaging scheme is briefly outlined as follows. Figure 1(d) offers an algorithm sketch; further details are available in prior FP reports. We note that this basic recovery process assumes the illumination field extending across the sample's lateral extent is coherent; a more advanced FP recovery procedure may incorporate partial coherence modeling. The recovery algorithm starts with a high-resolution spectrum estimate of the sample, Ŝ(kx,ky). This initial guess can be random. Next, this sample spectrum estimate is sequentially updated with the low-resolution intensity measurements I_m,i (subscript m stands for measurement and i for the ith aperture position). For each update step, we select a small sub-region of Ŝ(kx,ky), corresponding to one position of the circular aperture, and apply inverse Fourier transformation to generate a new low-resolution target image s_l,i (subscript l stands for low-resolution and i for the ith aperture position). We then replace this target image's amplitude component with the square root of the measurement, √(I_m,i), to form an updated, low-resolution target image. This updated image is then used to update its corresponding sub-region of Ŝ(kx,ky). This replace-and-update sequence is repeated for all intensity measurements, and we iterate through the above process several times until solution convergence, at which point Ŝ(kx,ky) is Fourier transformed back to the spatial domain to produce a high-resolution complex sample image ŝ(x,y).
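The replace-and-update sequence above can be sketched as follows (illustrative Python; the flat-field initialization, the simple binary-mask update rule, and the fixed iteration count are simplifying assumptions rather than details of our implementation):

```python
import numpy as np

def fp_recover(images, masks, n_iter=20):
    """Basic aperture-scanning FP recovery sketch (binary apertures, coherent field).

    images: measured intensities I_m,i; masks: binary aperture functions a_i(kx, ky)
    defined on the same grid. Returns the recovered complex sample image.
    """
    shape = images[0].shape
    # initial high-resolution spectrum estimate (here: spectrum of a flat field)
    S = np.fft.fftshift(np.fft.fft2(np.ones(shape, dtype=complex)))
    for _ in range(n_iter):
        for I, a in zip(images, masks):
            # low-resolution target image from the current spectrum sub-region
            phi = np.fft.ifft2(np.fft.ifftshift(S * a))
            # replace its amplitude with the square root of the measurement
            phi = np.sqrt(I) * np.exp(1j * np.angle(phi))
            # write the updated sub-region back into the spectrum estimate
            S = S * (1 - a) + a * np.fft.fftshift(np.fft.fft2(phi))
    return np.fft.ifft2(np.fft.ifftshift(S))
```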
We first tested the reported scheme using a US Air Force (USAF) resolution target. Figure 2(a) shows the raw image captured by our prototype setup. The resolution of this raw image is limited by the size of the circular aperture (0.025 NA, 12.4 µm resolution). From Fig. 2(a), we can resolve group 5, element 3 (line width of 12.4 µm) of the USAF target, in good agreement with the diffraction limit of the optical system. In the acquisition process, we used a 0.85 mm step size for aperture scanning. The corresponding synthetic NA of the recovered image is about 0.1 (3.15 µm resolution), 4 times better than that of the original intensity image. Figure 2(b) shows the recovered image of the reported scheme. In this figure, group 7, element 3 (line width of 3.11 µm) of the USAF target can be clearly resolved, again in good agreement with the theoretical prediction. The processing time for Fig. 2(b) is less than one second using an Intel i7 CPU. For comparison, we also show an image captured by our system with the aperture fully open in Fig. 2(c). In this case, the NA of the optical system is about 0.27 (f-number of 1.8), corresponding to a 1.2 µm resolution. According to the sampling theorem, a 0.6 µm pixel size is needed to fully characterize the captured image. However, such a small detector pixel is currently not commercially available. The pixel size of our camera is 5.5 µm, a typical size for CCD image sensors. Therefore, the resolution of Fig. 2(c) remains 11 µm (twice the pixel size), limited by pixel aliasing.
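The pixel-aliasing arithmetic above can be checked directly (a short sketch using the coherent resolution estimate λ/(2NA); variable names are illustrative):

```python
# Sampling arithmetic for the fully open aperture (f/1.8) at 632 nm illumination.
wavelength_um = 0.632
na_open = 0.27       # NA of the fully open f/1.8 lens
pixel_um = 5.5       # CCD pixel size

diffraction_limit_um = wavelength_um / (2 * na_open)  # ~1.2 um lens-limited resolution
required_pixel_um = diffraction_limit_um / 2          # ~0.6 um pixel needed (Nyquist)
aliasing_limit_um = 2 * pixel_um                      # ~11 um actual, pixel-limited resolution
```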
Pixel aliasing is a limiting factor for many imaging applications [31, 32]. The experimental results shown in Fig. 2 may provide a Fourier-domain solution for this problem. In the recovery algorithm, we model the aperture using a binary pupil function; no phase factor is introduced. If the system has aberrations, we can also introduce a phase factor with different Zernike modes to compensate for them. We note that in the reported scheme, the phase factor is different for each position of the employed aperture. We can use an adaptive correction framework to find the globally optimal Zernike coefficients. The reported scheme may provide a unique computational solution for compensating aberrations in large-format, high-NA imaging systems.
We also note that, in modern optical system design, there is a gap between the information capacity (i.e., the space-bandwidth product, SBP) of an optical system and that of a digital recording device [35, 36]. For example, a simple closed-circuit-television (CCTV) lens can provide a field-of-view of 75 mm2 and a diffraction-limited resolution of 0.78 µm (characterized at 632 nm wavelength), leading to an SBP of 0.5 billion. To fully sample the field transferred by this lens to its image plane, we need at least 0.5 billion effective pixels on the image sensor, orders of magnitude more than the pixel count of existing CCD/CMOS image sensors. To this end, the result demonstrated in Fig. 2 may provide a solution to bridge the SBP gap between current optical elements and digital detectors. We can, for example, first match the information capacity of an optical system to that of a digital recording device by inserting a mask (a low-pass filter) at the pupil plane. Subsequently combining multiple acquired low-resolution images can then yield a wide-field, high-resolution image with a final pixel count matching the full information capacity of the optical system.
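The quoted SBP figure follows from simple arithmetic (a sketch; Nyquist sampling at half the resolution element is assumed):

```python
# Space-bandwidth product of the quoted CCTV lens: 75 mm^2 field of view
# at 0.78 um diffraction-limited resolution (632 nm).
fov_um2 = 75.0 * 1e6             # 75 mm^2 expressed in um^2
resolution_um = 0.78

nyquist_pixel_um = resolution_um / 2      # 0.39 um sampling period required
sbp = fov_um2 / nyquist_pixel_um ** 2     # effective pixels needed: ~0.5 billion
```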
Another advantage of the reported scheme is its ability to record the exiting complex wavefront, i.e., the hologram, of the sample. Unlike conventional holographic imaging techniques, the reported scheme requires no interferometric measurements and thus reduces the coherence requirement on the light source (see Appendix A for details regarding FP's required coherence conditions). We note that phase-contrast images can be derived using different apertures at the Fourier plane or oblique incident angles [38–40]. Our work, on the other hand, aims to synthesize different apertures and recover the complex wavefront at the same time.
In Fig. 3, we demonstrate 3D holographic refocusing with an LED light source. The experimental setup is the same as before, but the imaged sample is an axially tilted microscope slide (corn stem cell, B&H). Figure 3(a) shows a raw intensity image captured by the prototype setup. Figure 3(b) shows the recovered complex wavefront exiting the sample. Similar to other holographic imaging techniques, we can propagate the recorded hologram to any plane along the optical axis. Figure 3(c1)-3(c4) shows intensity images of the recovered field after digitally refocusing it to different axial locations. For example, in Fig. 3(c1) we propagated the recovered hologram by −1.3 mm axially. As the sample is tilted with respect to the optical axis, we can see that different parts of the sample are brought into focus in different spatial regions, as highlighted by red arrows in Fig. 3(c1)-3(c4).
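Digital refocusing of the recovered hologram can be sketched with a standard angular-spectrum propagator (illustrative Python; grid size, pixel pitch, and propagation distances are assumptions, not the values used for Fig. 3):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, dz):
    """Propagate a complex field by a distance dz (negative dz back-propagates)
    using the angular-spectrum method; evanescent components are dropped."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies (cycles / unit length)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    fz2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    # transfer function of free-space propagation over dz
    H = np.exp(2j * np.pi * np.sqrt(np.maximum(fz2, 0.0)) * dz) * (fz2 > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Propagating forward by dz and then back by −dz returns the original field (for non-evanescent content), which is the property exploited when refocusing the recovered hologram to different axial planes.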
We also tested the reported scheme using an extended 3D object (a spider's leg). Figure 4(a) shows the raw intensity image of this sample, and Fig. 4(b) shows its recovered FP hologram. We then back-propagated this recovered hologram to different planes along the optical axis. Figure 4(c) shows the intensity of the recovered hologram propagated to different axial sections of the sample (also refer to Media 1). We note that each recovered section contains information from the entire extended object; out-of-focus information is superimposed on the in-focus information (a feature also found in other holographic imaging techniques [41, 42]). Much like a through-focus image stack from a conventional microscope, this data still contains useful information regarding the sample's three-dimensional structure.
3. Macroscopic imaging beyond the diffraction limit via camera-scanning FP
The key innovation of the reported scheme is to impose a constraint at a Fourier conjugate plane within an imaging system. This simple concept may be directly implemented outside of a 4f system in a macroscopic imaging platform. Figure 5(a) demonstrates a camera-scanning FP scheme, where the object is placed at the far field and the camera is scanned over different x-y positions to acquire images corresponding to different passbands. We note that far-field propagation is equivalent to performing a Fourier transform of the light field. Therefore, the aperture of the camera lens naturally serves as a support constraint in Fourier space. By scanning the entire camera over different x-y positions, we are able to synthesize a large passband in Fourier space, and thus bypass the resolution limit imposed by the photographic lens.
Figure 5(b) shows the prototype camera-scanning FP setup. A USAF resolution target is used to characterize its imaging performance, and the entire camera is scanned through the x-y plane. We used a CCD camera with a 5.5 µm pixel size and a 50 mm Nikon photographic lens with a fixed f-number of 16 (we chose this f-number to avoid the pixel-aliasing problem of the image sensor; a smaller f-number could be used with a smaller pixel size). The same LED as in Section 2 is used as the illumination source, with a central wavelength of 632 nm. The step size of the mechanical scan is 1.2 mm in x and y, and we scanned the camera to 7 x 7 locations in the reported experiment.
Figure 6 demonstrates the camera-scanning FP setup's imaging performance. Figure 6(a1) displays a section of one of the camera's raw images, and Fig. 6(a2) shows the magnitude of its corresponding spectrum in Fourier space (on a log scale). Figure 6(b1) displays an example FP reconstruction from all 49 images, while the corresponding magnitude of this reconstructed image's spectrum is in Fig. 6(b2). It is clear that the reported scheme is able to recover an image with a resolution better than that of the photographic lens. For example, Fig. 6(b1) contains a resolved digit '4', which is impossible to discern within the raw image. However, a number of artifacts persist in this reconstruction. One source of artifacts may be the inadequate coherence of the illumination provided by the light source, which is detailed in Appendix A. Another source may be incorrect modeling of the aperture shape (we use a circular pupil in our FP reconstruction, whereas the actual aperture is an irregular pentagon set by the Nikon photographic lens's iris diaphragm). To reconstruct an improved-quality FP image, we can simultaneously recover both the high-resolution image and the irregular pupil function's shape.
Drawing connections and distinctions between camera-scanning FP and three related imaging modalities – pixel super-resolution imaging [31, 32, 44, 45], synthetic aperture imaging [46, 47], and integral imaging [48–50] – helps clarify the operation of our approach. Pixel super-resolution imaging stitches together many sub-pixel-shifted images in the spatial domain. Resolution improvement is only possible when the resolving power of the lens is greater than that of the pixel array; in other words, if the pixel size is smaller than the diffraction limit of the optics, there will be no resolution improvement from this technique. The lens aperture size still defines the system's fundamental resolution limit. The reported camera-scanning FP approach, on the other hand, can bypass this fundamental limit by scanning the entire camera in the x-y plane. The final achievable resolution of this approach is determined by the traveling range of the camera, not the aperture size of the employed lens. However, the reported approach requires relatively high-coherence illumination, while pixel super-resolution is nominally independent of source coherence.
Synthetic aperture imaging is a super-resolution technique originally developed for radio telescopes [46, 47]. The concept has also been adopted in microscopy imaging systems in recent years [51–57]. The basic idea of this technique is to combine images from a collection of telescopes in the Fourier domain to improve the achievable resolution. This technique's data fusion process requires that each telescope measure the incoming signal's amplitude and phase. While this is simple at radio frequencies, at optical frequencies it requires an accurate interferometric setup to recover the incoming light field's phase. This is why synthetic aperture imaging has been used successfully in radio astronomy since the 1950s but in optical astronomy only since the 2000s. Like synthetic aperture setups, the reported camera-scanning FP approach also expands the detected field's Fourier passband to improve the achievable resolution. However, it differs in two key regards: 1) the reported approach only records intensity information of the light field; no phase information is needed. 2) It requires aperture overlap during successive acquisitions to enable our recovery algorithm to estimate the missing phase; the synthetic aperture technique does not require such redundant data, whereas this redundancy is essential for FP's successful convergence.
Integral imaging [48–50] or light field imaging [58–61] is a multi-perspective imaging technique that uses multiple cameras to record multiple images of a scene from different perspectives. The acquired images are then shifted and added to perform 3D refocusing. Similar to integral imaging, camera-scanning FP also captures multiple perspective images of a sample. However, integral imaging and our approach differ in two key regards. First, integral imaging's resolution is still determined by the aperture size of a single photographic lens, whereas camera-scanning FP bypasses the resolution limit of the photographic lens. Second, integral and light field imaging work with incoherent illumination, whereas our approach requires coherent or partially coherent illumination and is thus limited to coherent (or partially coherent) imaging settings.
In conclusion, we have demonstrated an aperture-scanning Fourier ptychography scheme for 3D holographic refocusing and super-resolution macroscopic imaging. There are several advantages associated with the reported scheme. 1) It does not require any interferometric measurement. We used an LED as the light source for our demonstrations, which helps to suppress speckle noise and other coherent artifacts common to holographic imaging. 2) The reported scheme may provide a Fourier-domain solution to the pixel aliasing problem. 3) Aberrations of large-format, high-NA lenses can be modeled using coherent pupil functions. By introducing these pupil functions in the recovery process, the reported scheme may be able to compensate for these aberrations; gigapixel microscopes may then be built using high-capacity photographic lenses. 4) The reported scheme extends the FP concept to macroscopic imaging settings, where objects are placed in the far field. We show that by simply scanning a camera over a defined region, a recovered image can bypass the resolution limit set by the photographic lens. The final resolution limit is determined by the traveling range of the camera, not the lens's aperture size.
There are two limitations associated with the reported approach. 1) The mechanical scanning used in our prototype setups is a limiting factor for high-throughput applications. However, we note that for the aperture-scanning scheme, the scanning process can be implemented using a spatial light modulator or an array of MEMS mirrors; for the camera-scanning scheme, a camera array similar to multi-camera integral imaging setups [48–50] can remove this limitation. The ultimate throughput will then be determined by how fast the data can be transferred from the image sensor to the computer. 2) The use of coherent (or partially coherent) illumination in the reported approach may limit its potential applications. For example, the reported approach cannot be used for photographic imaging with outdoor ambient light. To image an object far away from the lens, a collimated laser beam is needed for illumination and a laser-line filter is needed to reject the ambient light from the environment. The coherence requirement of the reported scheme is an interesting topic and requires further investigation.
The reported scheme may find applications in microscopy, 3D object tracking, synthetic aperture imaging, remote sensing, and other defense-related applications. Our on-going efforts include 1) incorporating pupil-function correction into the reported scheme [1, 18, 21, 33, 43, 63], and 2) implementing the aperture-scanning scheme in a transmission electron microscope (TEM). We note that in most commercially available TEM platforms, aperture scanning is a routine functionality for darkfield imaging. The reported scheme can be implemented on these platforms without instrument modification. Compared to the original lensless ptychography approach, the use of a lens in TEM may provide a higher signal-to-noise ratio and a lower coherence requirement (detailed in Appendix A).
Statement of competing financial interests
Guoan Zheng is the named inventor on a number of related patent applications. Guoan Zheng also has a competing financial interest in Clearbridge Biophotonics, which, however, did not support this work.
Appendix A: Partial coherence of aperture-scanning Fourier ptychography
Both the aperture and camera scanning FP setups assume each captured image’s optical field is sufficiently coherent (spatially and temporally). Our successful implementation using LED illumination demonstrates that a high-coherence laser source is not required. In the following, we detail exactly how the illumination source’s spatial and temporal coherence impacts our measured data. We conclude that both forms of incoherence must be taken into account during FP system design, but do not significantly impact the accuracy of our demonstrations.
Spatial Coherence: To begin, we will assume that our LED emits a quasi-monochromatic field from a finite source area A, within which the field is completely incoherent. Considering a 1D system for simplicity, we can express the statistical nature of light at the LED source plane L using the cross-spectral density (CSD) function C_L as

C_L(x1, x2) = I(x1) δ(x1 − x2),   (1)

where I(x) is the source intensity distribution. Propagating this CSD a distance z to the sample plane yields, via the van Cittert-Zernike theorem,

C_z(x1, x2) ≈ Ĩ((x1 − x2)/(λz)),   (2)

where Ĩ is the Fourier transform of I and the approximation drops a quadratic phase factor for simplicity. Assuming A is a circular aperture with a diameter of w, Eq. (2) becomes a jinc function and leads to the often-used metric of coherence length, l_c = 1.22λz/w, which is the jinc's primary lobe width and the length over which we may consider the optical field to be a single deterministic function. Within this range, the field obeys propagation's well-known Fourier transforming property. In both the aperture- and camera-scanning FP setups, we use w = 150 µm, λ = 632 nm, and z = 20 cm to find an approximate l_c = 1 mm. The aperture-scanning data in Figs. 2–4 (FOV less than 1 mm) may thus be considered a coherent field, satisfying our FP reconstruction assumptions. In our camera-scanning setup, the lens is placed at the far field, ~70 cm away from the sample. Assuming that the sample does not significantly modify the coherence property, the transverse coherence length at the aperture plane of the Nikon lens is then ~3.5 mm. The aperture size we used is about 3.1 mm, smaller than the coherence area. Therefore, the light field remains correlated within each acquisition.
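The two quoted coherence lengths follow directly from this primary-lobe width (a short check using the distances stated above):

```python
# van Cittert-Zernike estimate of the transverse coherence length, l_c = 1.22 * lambda * z / w
wavelength = 632e-9      # central wavelength, m
w = 150e-6               # LED active-area diameter, m

l_c_sample = 1.22 * wavelength * 0.20 / w   # ~1.0 mm at the sample plane (z = 20 cm)
l_c_lens = 1.22 * wavelength * 0.70 / w     # ~3.6 mm at the Nikon lens aperture (z = 70 cm)
```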
If the sample is larger than the coherence area, it is beneficial to split the captured images into smaller coherent tiles (e.g., 1 mm tiles), apply the FP algorithm separately to each tile, and then re-combine the resolution-enhanced tiles into a final reconstruction. This procedure, also employed in prior FP work, lets us assume that each tile contains a fully coherent field and leads to an accurate resolution-enhanced image over the full FOV.
It is straightforward to show that imperfect spatial coherence degrades FP's captured data only gradually. Moreover, a digital filter can help remove these negative effects to recover raw intensity images that closely match images from a perfectly spatially coherent source (given accurate knowledge of the source shape). This becomes clear if we propagate our statistical description of the partially coherent field in Eq. (2) through the rest of the optical system. The light's CSD after the sample is C_z(x1, x2) s(x1) s*(x2), assuming the sample is thin. Neglecting coordinate scaling factors for simplicity, this CSD is transformed by both the aperture-scanning and camera-scanning setups to the image plane via a coherent transfer function

h_i(x) = ∫ a(k − k_i) exp(i2πkx) dk,   (3)

which lets us express the detected intensity as a function of both the image coordinate x and the aperture position i as

I_i(x) = ∬ C_z(x1, x2) s(x1) s*(x2) h_i(x − x1) h_i*(x − x2) dx1 dx2.   (4)

Equation (4) is identical in form to the partially coherent description of the FP microscope's captured data set in prior analysis (i.e., a spatially offset partially coherent source is equivalent to a shifted imaging aperture). Several manipulations help verify that the only difference between Eq. (4) and the description of a fully coherent FP setup is its inclusion of a non-negligible CSD C_z. As demonstrated in prior work, it is possible to remove the effects of C_z from Eq. (4) via post-processing: a deconvolution operation can recover the same data that a fully coherent FP setup would capture, assuming the source shape A is known and noise is negligible.
Finally, we note that a mixed-state formulation can be used to model partially coherent effects in FP platforms. The finite extent of the light source can be modeled as multiple independent point sources, and the finite spectrum of the light source can be modeled as multiple sources emitting at different wavelengths.
Temporal Coherence: The LED source’s finite temporal coherence may limit the FP algorithm’s maximum achievable resolution and should be taken into account during system design. While minimally important in our setup (as detailed below), a simple analysis helps determine an optimal system focal length and scan distance. The optical field at the 4f setup’s Fourier plane will spatially scale with wavelength, causing FP’s shifting aperture to filter different spatial frequencies as a function of wavelength. Uneven filtering will increase with the shifted aperture’s distance from the optical axis. Thus, for a maximum allowable fractional disparity in LED wavelength λmin/λmax, focal length f, and the fixed sub-aperture width w, a simple trigonometric relationship may establish a maximum off-axis aperture distance dmax.
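As an illustration of this design consideration, here is one plausible form of the bound (the drift criterion and the tolerance eps are our assumptions for the sketch, not quantities taken from the analysis above):

```python
# The Fourier-plane field scales linearly with wavelength, so an aperture centered
# a distance d off-axis selects a passband that drifts by roughly d * (dlambda / lambda)
# across the LED spectrum. Bounding this drift by a tolerated fraction eps of the
# aperture width w yields a maximum off-axis aperture distance d_max.
wavelength = 632e-9   # central wavelength, m
bandwidth = 25e-9     # LED bandwidth, m
w = 2.5e-3            # aperture width, m
eps = 0.5             # tolerated drift fraction (a design choice, assumed here)

d_max = eps * w * wavelength / bandwidth   # ~32 mm, well beyond our ~6 mm scan half-range
```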
We thank Prof. Changhuei Yang for helpful discussion. We also thank him for letting us use his motion controller. Huolin Xin acknowledges support from the Center for Functional Nanomaterials, Brookhaven National Laboratory, which is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-AC02-98CH10886. For more information on Fourier ptychography, please visit us at ‘Smart Imaging Lab at UConn’: https://sites.google.com/site/gazheng/.
References and links
1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]
3. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757–1767 (2014). [CrossRef]
4. R. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik (Stuttg.) 35, 237 (1972).
7. R. A. Gonsalves, “Phase retrieval and diversity in adaptive optics,” Opt. Eng. 21, 215829 (1982).
8. R. A. Gonsalves, “Phase retrieval by differential intensity measurements,” J. Opt. Soc. Am. A 4(1), 166–170 (1987). [CrossRef]
9. L. Allen and M. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199(1-4), 65–75 (2001). [CrossRef]
13. L. Taylor, “The phase retrieval problem,” IEEE Trans. Antennas Propag. 29(2), 386–391 (1981). [CrossRef]
19. M. Dierolf, P. Thibault, A. Menzel, C. M. Kewish, K. Jefimovs, I. Schlichting, K. von König, O. Bunk, and F. Pfeiffer, “Ptychographic coherent diffractive imaging of weakly scattering specimens,” New J. Phys. 12(3), 035017 (2010). [CrossRef]
21. F. Hüe, J. M. Rodenburg, A. M. Maiden, and P. A. Midgley, “Extended ptychography in the transmission electron microscope: Possibilities and limitations,” Ultramicroscopy 111(8), 1117–1123 (2011). [CrossRef] [PubMed]
22. A. Shenfield and J. M. Rodenburg, “Evolutionary determination of experimental parameters for ptychographical imaging,” J. Appl. Phys. 109(12), 124510 (2011). [CrossRef]
23. M. J. Humphry, B. Kraus, A. C. Hurst, A. M. Maiden, and J. M. Rodenburg, “Ptychographic electron microscopy using high-angle dark-field scattering for sub-nanometre resolution imaging,” Nat. Commun. 3, 730 (2012). [CrossRef] [PubMed]
24. T. B. Edo, D. J. Batey, A. M. Maiden, C. Rau, U. Wagner, Z. D. Pešić, T. A. Waigh, and J. M. Rodenburg, “Sampling in x-ray ptychography,” Phys. Rev. A 87(5), 053850 (2013). [CrossRef]
25. S. Marchesini, A. Schirotzek, C. Yang, H.-T. Wu, and F. Maia, “Augmented projections for ptychographic imaging,” Inverse Probl. 29(11), 115009 (2013). [CrossRef]
26. W. Hoppe and G. Strube, “Diffraction in inhomogeneous primary wave fields. 2. Optical experiments for phase determination of lattice interferences,” Acta Crystallogr. A 25, 502–507 (1969). [CrossRef]
27. J. M. Rodenburg and R. H. T. Bates, “The theory of super-resolution electron microscopy via Wigner-distribution deconvolution,” Philos. Trans. R. Soc. A 339(1655), 521–553 (1992). [CrossRef]
28. G. Zheng, “Fourier ptychographic imaging,” IEEE Photon. J. 6, April issue (2014).
29. G. Zheng, X. Ou, R. Horstmeyer, J. Chung, and C. Yang, “Fourier ptychographic microscopy: a gigapixel superscope for biomedicine,” Opt. Photon. News 25, 26–33 (2014).
30. C.-K. Liang, T.-H. Lin, B.-Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” ACM Trans. Graph. 27(3), 1–10 (2008). [CrossRef]
32. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18(11), 11181–11191 (2010). [CrossRef] [PubMed]
34. A. W. Lohmann, R. G. Dorsch, D. Mendlovic, Z. Zalevsky, and C. Ferreira, “Space-bandwidth product of optical signals and systems,” J. Opt. Soc. Am. A 13(3), 470–473 (1996). [CrossRef]
35. O. S. Cossairt, D. Miau, and S. K. Nayar, “Gigapixel computational imaging,” in Proceedings of IEEE International Conference on Computational Photography (ICCP) (IEEE, 2011), pp. 1–8. [CrossRef]
42. M. Daneshpanah, S. Zwick, F. Schaal, M. Warber, B. Javidi, and W. Osten, “3D holographic imaging and trapping for non-invasive cell identification and tracking,” J. Display Technol. 6, 490–499 (2010).
44. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20(3), 21–36 (2003). [CrossRef]
45. J. C. Gillette, T. M. Stadtmiller, and R. C. Hardie, “Aliasing reduction in staring infrared imagers utilizing subpixel techniques,” Opt. Eng. 34(11), 3130–3137 (1995). [CrossRef]
46. M. Ryle and A. Hewish, “The synthesis of large radio telescopes,” Mon. Not. R. Astron. Soc. 120, 220 (1960).
48. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications [Invited],” Appl. Opt. 52(4), 546–560 (2013). [CrossRef] [PubMed]
56. J. Di, J. Zhao, H. Jiang, P. Zhang, Q. Fan, and W. Sun, “High resolution digital holographic microscopy with a wide field of view based on a synthetic aperture technique and use of linear CCD scanning,” Appl. Opt. 47(30), 5654–5659 (2008). [CrossRef] [PubMed]
57. T. R. Hillman, T. Gutzler, S. A. Alexandrov, and D. D. Sampson, “High-resolution, wide-field object reconstruction with synthetic aperture Fourier holographic optical microscopy,” Opt. Express 17(10), 7873–7892 (2009). [CrossRef] [PubMed]
58. V. Vaish, M. Levoy, R. Szeliski, C. L. Zitnick, and S. B. Kang, “Reconstructing Occluded Surfaces Using Synthetic Apertures: Stereo, Focus and Robust Measures,” in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Volume 2, (IEEE Computer Society, 2006), pp. 2331–2338. [CrossRef]
59. B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24(3), 765–776 (2005).
60. B. S. Wilburn, M. Smulski, H.-H. K. Lee, and M. A. Horowitz, “Light field video camera,” in Electronic Imaging 2002 (International Society for Optics and Photonics, 2002), pp. 29–36.
62. A. Wax, Coherent Light Microscopy: Imaging and Quantitative Phase Analysis (Springer, 2011), Vol. 46.
64. D. B. Williams and C. B. Carter, The Transmission Electron Microscope (Springer, 1996).
65. D. J. Brady, Optical Imaging and Spectroscopy (John Wiley & Sons, 2009).