This paper presents a technique to image the complex index of refraction of a sample across three dimensions. The only required hardware is a standard microscope and an array of LEDs. The method, termed Fourier ptychographic tomography (FPT), first captures a sequence of intensity-only images of a sample under angularly varying illumination. Then, using principles from ptychography and diffraction tomography, it computationally solves for the sample structure in three dimensions. The experimental microscope demonstrates a lateral spatial resolution of 0.39 μm and an axial resolution of 3.7 μm at the Nyquist–Shannon sampling limit (0.54 and 5.0 μm at the Sparrow limit, respectively) across a total imaging depth of 110 μm. Unlike competing methods, this technique quantitatively measures the volumetric refractive index of primarily transparent and contiguous sample features without the need for interferometry or any moving parts. Wide field-of-view reconstructions of thick biological specimens suggest potential applications in pathology and developmental biology.
© 2016 Optical Society of America
It is challenging to image thick samples with a standard microscope. High-resolution objective lenses offer a shallow depth of field, which requires one to axially scan through the sample to visualize its three-dimensional (3D) shape. Unfortunately, refocusing does not remove light from areas above and below the plane of interest. This longstanding problem has inspired a number of solutions, the most widespread being confocal designs, two-photon excitation methods, light sheet microscopy, and optical coherence tomography. These methods “gate out” light from sample areas away from the point of interest and offer excellent signal enhancement, especially for thick, fluorescent samples.
Such gating techniques also encounter several problems. First, they typically must scan out each image, which might require physical movement and can be time consuming. Second, the available signal (i.e., the number of ballistic photons) decreases exponentially with depth. To overcome this limit, one must use a high-NA lens, which provides a proportionally smaller image field of view (FOV). Finally, little light is backscattered when imaging non-fluorescent samples that are primarily transparent, as is commonly seen in embryology, in model organisms such as zebrafish, and after the application of recent tissue clearing and expansion techniques.
Rather than capturing just the ballistic photons emerging from the sample, one might instead image the entire optical field, including light that has scattered. Several techniques have been proposed to enable depth selectivity without gating for ballistic light. One might perform optical sectioning through digital deconvolution of a focal stack. Light-field imaging and point-spread function engineering are two other alternatives. All three of these methods primarily operate with incoherent light, e.g., from fluorescent samples. They are thus not ideal tools for obtaining the refractive index distribution of a primarily transparent and non-fluorescent medium.
To do so, it is useful to use coherent illumination. For example, the amplitude and phase of a digital hologram may be computationally propagated to different depths within a thick sample, much like refocusing a microscope. However, the field at out-of-focus planes still influences the final result. Several techniques also aim for depth selectivity by using quasi-coherent illumination or through acquiring multiple images [7–10].
A very useful framework to summarize how coherent light scatters through thick samples is diffraction tomography (DT), as first developed by Wolf. In a typical DT experiment, one illuminates a sample of interest with a series of tilted plane waves and measures the resulting complex diffraction patterns in the far field. These measurements may then be combined with a suitable algorithm into a tomographic reconstruction. An early demonstration of DT by Lauer is a good example. Typically, the reconstruction algorithm assumes the first Born [13,14] or the first Rytov approximation. It is also possible to apply the projection approximation, which models light as a ray. As a synthetic aperture technique, DT comes with the additional benefit of improving the resolution of an imaging element beyond its traditional diffraction-limit cutoff.
However, as a technique that models both the amplitude and phase of a coherent field, most implementations of DT require a reference beam and holographic measurement, or some sort of phase-stable interference (including spatial light modulator coding strategies, e.g., as in Ref. ). Since it is critical to maintain interferometric stability and thus limit motion and phase drift to sub-micrometer variations, DT has been primarily implemented in well-controlled, customized setups. Several prior works have considered solving DT from intensity-only measurements to remove the need for a reference beam [17–26]. However, while some applied the first Born approximation, these works also required customized setups and typically imposed additional sample constraints (e.g., a known sample support). They did not operate within a standard microscope or connect their reconstruction algorithms to ptychography, from which improvements like computational aberration correction and multiplexed reconstruction may be easily adopted.
Here, we perform DT using standard intensity images captured under variable LED illumination from an array source. Our technique, termed Fourier ptychographic tomography (FPT), acquires a sequence of images while changing the light pattern displayed on the LED array. Then, it combines these images using a phase retrieval-based ptychographic reconstruction algorithm, which computationally segments a thick sample into multiple planes, as opposed to physically rejecting light from above and below one plane of interest. Similar to DT, FPT also improves the lateral image resolution beyond the standard cutoff of the imaging lens. The end result is an accurate three-dimensional map of the complex index of refraction of a volumetric sample obtained directly from a sequence of standard microscope images.
2. RELATED WORK
To begin, a number of techniques attempt 3D imaging without applying the first Born approximation. These include lensless on-chip devices, lensless setups that assume an appropriate linearization, and methods relying upon effects like defocusing (e.g., the transport of intensity equation) or spectral variations. These techniques do not necessarily fit within a standard microscope setup or offer the ability to simultaneously improve spatial resolution. Two related works for 3D imaging, which also do not use DT with the first Born approximation, are by Tian and Waller and Li et al. These two setups are quite similar to ours, and we discuss their operations in more detail below.
As mentioned above, there are also several prior works applying DT under the first Born approximation that use only intensity measurements [17–26]. While some of these works examine phase retrieval as a reconstruction algorithm, they must either shift the focal plane [22,24] or the source axially between each measurement, or must assume constraints on the sample [19,20,23,26] to successfully recover the phase. These prior works do not connect DT to ptychographic phase retrieval (i.e., they do not recover the phase by using diversity between the variably illuminated DT images). Connections between phase retrieval and DT under the first Born approximation have also been explored within the context of volume hologram design.
The field of x-ray ptychography also offers a number of methods to image 3D samples with intensity measurements [36–38]. However, to the best of our knowledge, none of these ptychography methods directly modifies DT under the first Born or Rytov approximation. A popular technique appears to use standard two-dimensional (2D) ptychographic solvers to determine the complex field for individual projections of a slowly rotated sample, which are subsequently combined using conventional DT techniques.
Fourier ptychography (FP) uses a standard microscope and no moving parts to simultaneously improve image resolution and measure quantitative phase, but is restricted to thin samples. FPT effectively extends FP into the third dimension. As noted above, Tian and Waller and Li et al. also examine the problem of 3D imaging from intensities in a standard microscope. These two examples adopted their reconstruction technique from a 3D ptychography method [37,38] that splits the sample into a specified number of infinitesimally thin slices (each under the projection approximation) and applies the beam propagation method (i.e., assumes small-angle scattering). This “multi-slice” approach remains accurate within a different domain of optical scattering than the first Born approximation (i.e., for different types of samples and setups; see chapter 2 in Ref. ). For example, one may include both forward- and backscattered light in a DT solver to accurately reconstruct a 3D sample under the Born approximation. However, the multi-slice method does not directly account for backscattered light, and its projection approximation assumes the lateral divergence of the optical field gradient at each slice is zero. Conversely, the validity of the first Born approximation breaks down when the total amount of absorption and phase shift from a sample is large, whereas this appears to impact the multi-slice approach less (e.g., it can image two highly absorbing layers separated by a finite amount of free space [33,34]).
Thus, while the experimental setup of FPT is similar to prior work [33,34,40], our reinterpretation of ptychography within the physical framework of DT (under the first Born approximation) allows us to accurately reconstruct new specimen types in 3D without measuring phase. For example, primarily transparent samples of continuously varying optical density, which are often encountered in biology, typically obey the first Born approximation. Accordingly, we have used FPT to compute some of the first quantitatively accurate 3D maps of clear samples that contain contiguous features (e.g., an unstained nematode parasite and starfish embryo) from standard microscope images.
In addition, FPT offers a clear picture of the location and amount of data it captures in 3D Fourier space. Such knowledge currently helps us establish various solution guarantees for 2D image phase retrieval. These guarantees may also extend to the current case of 3D tomographic phase retrieval (e.g., to support its potential clinical use). Furthermore, instead of specifying an arbitrary number of sample slices and their locations in a 3D volume, FPT simply inserts measured data into its appropriate 3D Fourier space location and ensures phase consistency between each measurement. By solving for the first term in the Born expansion, we intend this approach as a general framework for eventually forming quantitatively accurate tomographic maps of complex biological samples at sub-micrometer resolution.
3. METHOD OF FPT
In this section, we develop a mathematical expression for our image measurements using the FPT framework and then summarize our reconstruction algorithm. We use the vector to define the 3D sample coordinates and the vector to define the corresponding k-space (wave vector) coordinates (see Fig. 1).
A. Image Formation in FPT
It is helpful to begin our discussion by introducing a quantity termed the scattering potential, which contains the complex index of refraction of an arbitrarily thick volumetric sample,
Here, is the spatially varying and complex refractive index profile of the sample, is the index of refraction of the background (which we assume is constant), and is the wavenumber in vacuum. We note that , where is associated with the sample’s refractive index, is associated with its absorptivity, and we define for notational clarity. We typically neglect the dependence of on since we illuminate with quasi-monochromatic light. This dependence cannot be neglected when imaging with polychromatic light. Finally, we use the term “thick” for samples that do not obey the thin sample approximation, which requires the sample thickness to be much less than , where is the magnitude of the maximum scattering angle.
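The scattering potential equation itself is elided in this copy. As a reference point, the standard diffraction-tomography form (following Wolf) reads as below; the symbols V, n, n₀, and k₀ are our own labels chosen to match the surrounding text, and conventions differ between works by an overall sign and constant prefactor:

```latex
V(\mathbf{r}) \;=\; k_0^2 \left[ n^2(\mathbf{r}) - n_0^2 \right],
\qquad k_0 = \frac{2\pi}{\lambda},
```

where n(**r**) is the complex refractive index profile and n₀ the constant background index, as defined in the text.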
Next, to understand what happens to light when it passes through this volumetric sample, we define the complex field that results from illuminating the thick sample, , as a sum of two fields: . Here, is the field “incident” upon the sample (i.e., from one LED) and is the resulting field that “scatters” off of the sample. We may insert this decomposition into the scalar wave equation for light propagating through an inhomogeneous medium and use Green’s theorem to determine the scattered field as 
Here, is the Green’s function connecting light scattered from various sample locations, denoted by , to an arbitrary location . is the scattering potential from Eq. (1). Since is unknown at all sample locations, it is challenging to solve Eq. (2). Instead, it is helpful to apply the first Born approximation, which replaces in the integrand with . This approximation assumes that . It is the first term in the Born expansion that describes the scattering response of an arbitrary sample, and it assumes a weakly scattering medium. Specifically, the first Born approximation remains valid when the relative index shift and sample thickness obey the relation . We expect FPT to remain quantitatively accurate with samples obeying this condition. By including higher-order terms, the above framework may in principle include samples with stronger scattering [46,47].
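For reference, the Born linearization described above can be written out explicitly. This is the textbook form (e.g., as in Born and Wolf, or Kak and Slaney), not a verbatim copy of the paper's equations, and the validity bound quoted at the end is the commonly cited sufficient condition rather than necessarily the exact relation elided from this copy:

```latex
u_s(\mathbf{r}) \;\approx\; \int G(\mathbf{r} - \mathbf{r}')\,
V(\mathbf{r}')\, u_{\mathrm{in}}(\mathbf{r}')\, \mathrm{d}^3 r',
\qquad \text{valid when} \quad \Delta n \cdot t \;\lesssim\; \frac{\lambda}{4},
```

where Δn is the relative index shift and t the sample thickness.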
Our system sequentially illuminates the sample with an LED array, which contains sources positioned a large distance from the sample (in a uniform grid, with inter-LED spacing ; see Fig. 1). It is helpful to label each LED with a 2D counter variable , where and , as well as a single counter variable , where . Assuming each LED acts as a spatially coherent and quasi-monochromatic source (central wavelength ) placed at a large distance from the sample, the incident field takes the form of a plane wave traveling at a variable angle such that and with respect to the - and -axes, respectively. We may express the th field incident upon the sample as
As and vary, will always assume values along a spherical shell in 3D space (i.e., the Ewald sphere), since the value of is a deterministic function of and .
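The mapping from LED position to an incident wave vector on the Ewald sphere can be sketched numerically. The function name and the geometry below (LED array a distance `dist` beneath the sample, millimeter units, an assumed 0.632 μm center wavelength) are illustrative assumptions, not values stated in this copy:

```python
import numpy as np

def incident_wavevector(led_x, led_y, dist, wavelength):
    """Incident plane-wave vector (kx, ky, kz) for an LED at lateral
    position (led_x, led_y), a distance `dist` below the sample.
    All components lie on the Ewald sphere of radius k0 = 2*pi/lambda,
    so kz is a deterministic function of kx and ky."""
    k0 = 2 * np.pi / wavelength
    norm = np.sqrt(led_x**2 + led_y**2 + dist**2)
    kx = k0 * led_x / norm   # k0 * sin(theta_x)
    ky = k0 * led_y / norm   # k0 * sin(theta_y)
    kz = k0 * dist / norm    # fixed by kx, ky: kz = sqrt(k0^2 - kx^2 - ky^2)
    return kx, ky, kz

# The on-axis (center) LED illuminates the sample straight on:
kx, ky, kz = incident_wavevector(0.0, 0.0, 135.0, 0.632e-3)  # mm units
```

Stepping `led_x` and `led_y` across the array traces the shell of incident wave vectors described in the text.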
After replacing in Eq. (2) with from Eq. (3) and additionally approximating the Green’s function as a far-field response, the following relationship emerges between the scattering potential and the Fourier transform of the th scattered field, , in the far field:
We refer to as the k-space scattering potential, which is the three-dimensional Fourier transform of , with the scattered wave vector in the far field. For simplicity, we have left out a multiplicative pre-factor on the right-hand side of Eq. (5), and instead assume it is included within the function for the remainder of this presentation. The field scattered by the sample and viewed at a large distance, , is given by the values along a specific manifold (or spherical “shell”) of the k-space scattering potential, here written as . We illustrate the geometric connection between and for a 2D optical geometry in Fig. 2(b). The center of the th shell is defined by the incident wave vector, . For a given shell center, each value of lies on a spherical surface at a radial distance of [see colored arcs in Fig. 2(b)]. As varies with the changing LED illumination, the shell center shifts along a second shell with the same radius [since is itself constrained to lie on an Ewald sphere; see gray circle in Fig. 2(b)].
The goal of DT is to determine all the complex values within the volumetric function from a set of scattered fields, , which is often measured holographically [12,15]. Each 2D holographic measurement maps to the complex values of along one 2D shell. The values from multiple measurements [i.e., the multiple shells in Fig. 2(b)] can be combined to form a k-space scattering potential estimate, . Nearly all stationary optical setups will yield only an estimate, since it is challenging to measure data from the entire k-space scattering potential without rotating the sample. Figures 1(c) and 2(c) display typical measurable volumes, also termed a bandpass, from a limited-angle illumination and detection setup. Once sampled, an inverse 3D Fourier transform of the band-limited yields the desired complex scattering potential estimate, , which contains the quantitative index of refraction.
In FPT, we do not measure the scattered fields holographically. Instead, we use a standard microscope to detect image intensities and apply a ptychographic phase-retrieval algorithm to solve for the unknown complex potential. The scattered fields in Eq. (5) are defined at the microscope objective back focal plane (i.e., its Fourier plane), whose 2D coordinates are Fourier conjugate to the microscope focal plane coordinates . If we neglect the effect of the constant background plane wave term (i.e., in the sum ), we may now write the th shifted field at our microscope back focal plane as . These new coordinates highlight the 3D to 2D mapping from to , where again is a deterministic function of , and the same applies between and .
Each shifted, scattered field is then bandlimited by the microscope aperture function, , before propagating to the image plane. The limited extent of (defined by the imaging system NA) sets the maximum extent of each shell along and . The th intensity image acquired by the detector is given by the squared Fourier transform of the bandlimited field at the microscope back focal plane:
Here, denotes a 2D Fourier transform with respect to , and we neglect the effects of magnification (for simplicity) by assuming the image plane coordinates match the sample plane coordinates, . The goal of FPT is to determine the complex 3D function from the real, non-negative data matrix . A final 3D Fourier transform of yields the desired scattering potential, and subsequently the refractive index distribution, of the thick sample.
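A schematic restatement of this forward model may help the reader; the following is written in our own notation (with the background term neglected and unit magnification, as in the text), where A is the pupil function, V̂ the k-space scattering potential, and k⁽ʲ⁾ the j-th incident wave vector. It is not a verbatim reproduction of the paper's Eq. (6):

```latex
I_j(x, y) \;=\; \Bigg| \mathcal{F}_{2\mathrm{D}}\Big[ A(k_x, k_y)\,
\hat{V}\big(k_x - k_x^{(j)},\, k_y - k_y^{(j)},\, k_z - k_z^{(j)}\big)
\Big|_{\,k_z = \sqrt{k_0^2 - k_x^2 - k_y^2}} \Big] \Bigg|^2 .
```

The restriction of k_z to the Ewald sphere expresses the 3D-to-2D mapping discussed above.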
B. FPT Reconstruction Algorithm
Equation (6) closely resembles the data matrix measured by FP, but now the intensities are sampled from shells within a 3D space (i.e., the curves in Fig. 2). We use an iterative reconstruction procedure, mirroring that from FP, to “fill in” the k-space scattering potential with data from each recorded intensity image. Ptychography and FP require approximately 50%–60% data redundancy (i.e., overlapping measurements in k-space) to ensure successful convergence of the phase retrieval process. Given this similar problem structure, FPT also requires overlap between shell regions in 3D k-space. With one extra dimension, overlap is less frequent and more images are needed for an accurate reconstruction. Both a smaller LED array pitch and a larger array–sample distance along increase the amount of k-space overlap. An example cross section of FPT k-space overlap is shown in Fig. 2(c). As we demonstrate experimentally, several hundred images are sufficient for a complex reconstruction that offers an increase in resolution along and contains approximately 30 unique axial slices. Additional overlap (i.e., more images across the same angular range) will increase robustness to noise.
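The amount of k-space overlap can be estimated with the standard 2D Fourier-ptychography proxy: the fractional area overlap of two pupil circles whose centers are separated by the k-space step between adjacent LEDs. The sketch below uses the LED pitch (4 mm) and array distance (135 mm) quoted in Section 4, but assumes an objective NA of 0.4 for illustration, since the actual NA is elided in this copy; the full 3D shell overlap will be somewhat smaller than this 2D estimate:

```python
import numpy as np

def pupil_overlap_fraction(na, wavelength, led_pitch, led_dist):
    """Fractional area overlap of two pupil circles (radius NA*k0) whose
    centers are shifted by the k-space step between adjacent LEDs."""
    k0 = 2 * np.pi / wavelength
    r = na * k0                                          # pupil radius in k-space
    d = k0 * led_pitch / np.hypot(led_pitch, led_dist)   # shell-center shift
    if d >= 2 * r:
        return 0.0
    # area of intersection of two equal circles ("lens" area), normalized
    lens = 2 * r**2 * np.arccos(d / (2 * r)) - (d / 2) * np.sqrt(4 * r**2 - d**2)
    return lens / (np.pi * r**2)

# Adjacent LEDs in the assumed geometry overlap heavily in k-space:
f = pupil_overlap_fraction(0.4, 0.632e-3, 4.0, 135.0)  # mm units
```

A large per-step overlap (well above the 50%–60% threshold) is consistent with the dense LED grid used in the experiments.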
It is important to select the correct limits and discretization of 3D k-space (i.e., the FOV and resolution of the complex sample reconstruction). The maximum resolvable wave vector along and is proportional to , where is the objective NA and is the maximum NA of LED illumination. This lateral spatial resolution limit matches FP. The maximum resolvable wave vector range along is also determined as a function of the objective and illumination NA as . As shown in Ref. , this relationship is easily derived from the geometry of the k-space bandpass volume in Fig. 2. We typically specify the maximum imaging range along the axial dimension, , to approximately match twice the expected sample thickness. This then sets the discretization level along : . The total number of resolved slices along is set by the ratio .
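These sampling relations can be sketched numerically. The NA values (0.40 objective, 0.41 illumination) and wavelength (0.632 μm) below are assumptions chosen to be consistent with the resolution figures quoted in Section 4, not values stated in this copy:

```python
import numpy as np

def fpt_sampling(na_obj, na_illum, wavelength, z_range):
    """Nyquist-limited FPT reconstruction parameters.

    The lateral cutoff is set by the synthetic-aperture NA (na_obj + na_illum);
    the axial bandwidth follows from the axial extent of the k-space bandpass
    volume (Fig. 2 geometry). Units: wavelength and z_range in micrometers.
    """
    lateral_period = wavelength / (na_obj + na_illum)    # smallest resolved period
    kz_extent = (2 - np.sqrt(1 - na_obj**2)
                   - np.sqrt(1 - na_illum**2)) / wavelength
    axial_resolution = 1 / kz_extent                     # Nyquist axial period
    n_slices = int(np.ceil(z_range / axial_resolution))  # resolved z-slices
    return lateral_period, axial_resolution, n_slices

# Assumed values, chosen to reproduce the numbers quoted in Section 4:
lp, ar, ns = fpt_sampling(0.40, 0.41, 0.632, 110.0)
```

With these assumptions the sketch returns a ~0.78 μm lateral period, ~3.7 μm axial resolution, and ~30 axial slices over a 110 μm depth, matching the values reported in the text.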
We now summarize the FPT reconstruction algorithm:
- 1. Initialize a discrete estimate of the unknown k-space scattering potential, , using an appropriate 3D array size (see above). In our experiments, we form a refocused light field with the raw intensity image set and use its 3D Fourier transform for initialization. However, we have noticed that simpler alternative initializers, such as the 3D Fourier transform of a single raw image padded along all three dimensions or a 3D array containing a constant value, also often lead to an accurate reconstruction.
- 2. For to images, compute the center coordinate, , and select values along its associated shell (radius , maximum width ). This selection process samples a discrete 2D function, , from the 3D k-space volume. The selected voxels must partially overlap with voxels from adjacent shells. Currently, no interpolation is used to map voxels from the discrete shell to pixels within .
- 3. Fourier transform to the image plane to create and constrain its amplitudes to match the measured amplitudes from the th image. For our experiments, we use the amplitude update form, . More advanced alternating projection-based updates are also available.
- 4. Inverse 2D Fourier transform the image plane update, , back to 2D k-space to form . Use the values of to replace the voxel values of at locations where voxel values were extracted in step 2.
- 5. Repeat steps 2–4 for all to images. This completes one iteration of the FPT algorithm. Continue for a fixed number of iterations, or until satisfying some error metric. At the end, 3D inverse Fourier transform to recover the complex scattering potential, .
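The steps above can be sketched in a simplified form. The sketch below omits pupil recovery and abstracts the Ewald-shell selection of step 2 into precomputed voxel-index lists, so the function name, signature, and data layout are hypothetical rather than the authors' implementation:

```python
import numpy as np

def fpt_iterate(V_hat, shell_voxels, measured_amps, n_iters=5):
    """Simplified sketch of the FPT update loop (steps 2-4 above).

    V_hat         : complex 3D array, current k-space scattering potential.
    shell_voxels  : list of index triples; shell_voxels[j] selects the voxels
                    of the j-th Ewald shell out of V_hat (precomputed from
                    the j-th LED's incident wave vector).
    measured_amps : list of 2D arrays, square roots of the recorded intensities.
    """
    for _ in range(n_iters):
        for vox, amp in zip(shell_voxels, measured_amps):
            # Step 2: extract the 2D shell function from 3D k-space.
            S = V_hat[vox].reshape(amp.shape)
            # Step 3: transform to the image plane and enforce the
            # measured amplitudes while keeping the current phase.
            u = np.fft.ifft2(S)
            u = amp * np.exp(1j * np.angle(u))
            # Step 4: back to k-space; overwrite the extracted voxels.
            V_hat[vox] = np.fft.fft2(u).ravel()
    return V_hat
```

A final 3D inverse Fourier transform of `V_hat` (step 5) would then yield the scattering potential estimate.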
In practice, we also implement a pupil function recovery procedure as we update each extracted shell from k-space, which helps remove possible microscope aberrations. As with other alternating projections-based ptychography solvers, the per-iteration cost of the above FPT algorithm is , in big-O notation. Additional details regarding algorithm robustness and convergence are in Supplement 1.
4. RESULTS AND DISCUSSION
We experimentally verify our reconstruction technique using a standard microscope outfitted with an LED array. The microscope uses an infinity corrected objective lens (, Olympus MPLN, ) and a digital detector containing 4.54 μm pixels (Prosilica GX 1920, pixel count). The LED array contains surface-mounted elements (model SMD3528, center wavelength , 20 nm approximate bandwidth, 4 mm LED pitch, 150 μm active area diameter). We position the LED array 135 mm beneath the sample to create a maximum illumination NA of . This leads to an effective lateral NA of and a lateral resolution gain along of slightly over a factor of 2 (from a 1.6 μm minimum resolved spatial period in the raw images to a 0.78 μm minimum resolved spatial period in the reconstruction). The associated axial Nyquist resolution is computed at 3.7 μm. We reconstruct samples across a total depth range of approximately , which is approximately 20 times larger than the stated objective lens DOF of 5.8 μm.
For most of the reconstructions presented below, we capture and process images from the same fixed pattern of LEDs. Some LEDs from within the array produce images containing a shadow of the microscope back focal plane. We do not use these LEDs so as to avoid certain reconstruction artifacts. We typically use the following parameters for FPT reconstruction: each raw image is cropped to pixels, the reconstruction voxel size is for Nyquist–Shannon rate sampling, the reconstruction array contains approximately voxels (110 μm total depth), and the algorithm runs for 5 iterations. The first Born approximation should remain valid across this total imaging depth. The primarily transparent and somewhat sparse samples that we test next justify the relatively large 110 μm depth that we typically use.
A. Quantitative Verification
First, we verify the ability of FPT to improve the lateral image resolution. The sample consists of 800 nm-diameter microspheres (index of refraction ) immersed in oil (index of refraction ). We highlight a small group of these microspheres in Fig. 3. The single raw image in Fig. 3(a) (generated from the center LED) cannot resolve the individual spheres gathered in small clusters. Based upon the coherent Sparrow limit for resolving two points (), this raw image cannot resolve points that are closer than 1.1 μm. After FPT reconstruction, we obtain the complex index of refraction in Fig. 3(b), where we show the real component of the recovered index. The FPT reconstruction along the slice clearly resolves the spheres within each cluster. This 800 nm distance is close to the expected Sparrow limit for the FPT reconstruction: . The ringing features around each sphere indicate a jinc-like point-spread function, as expected theoretically from the circular shape of the finite FPT bandpass along [in Fig. 1(c)] for all reconstructions. The resulting constructive interference forms the undesired dip feature at the center of each cluster.
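As a numerical check on these limits, the coherent two-point Sparrow criterion is commonly written d ≈ 0.68 λ/NA. The wavelength (0.632 μm) and NA values (0.40 raw, 0.81 synthetic) below are assumptions consistent with, but elided from, this copy:

```python
def sparrow_limit(wavelength, na, coeff=0.68):
    """Two-point coherent Sparrow distance, d = coeff * wavelength / NA.
    The 0.68 prefactor is the commonly quoted coherent-illumination value."""
    return coeff * wavelength / na

raw_limit = sparrow_limit(0.632, 0.40)  # single-LED raw image, objective NA only
fpt_limit = sparrow_limit(0.632, 0.81)  # synthetic NA after FPT reconstruction
```

Under these assumptions the raw-image limit comes out near the 1.1 μm figure quoted here, and the reconstruction limit near the 0.54 μm Sparrow value quoted in the abstract.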
Second, we check the quantitative accuracy of FPT by imaging microspheres that extend across more than just a few reconstruction voxels. Figure 4 displays a reconstruction of 12 μm diameter microspheres (index of refraction ) immersed in oil (index of refraction ). We use the same data capture and post-processing steps as in Fig. 3. Here, we display a cropped section ( voxels) of the full 3D reconstruction, which required 221 seconds of computation time on a standard laptop. We again display the real (non-absorptive) component of the recovered index across both a lateral slice (along the plane) and a vertical slice (along the plane). We also include detailed 1D traces along the center of the vertical slice.
Three observations here are noteworthy. First, the measured index shift approximately matches the expected shift of across the entire bead, thus demonstrating quantitatively accurate performance across this limited volume. Currently, we only expect quantitative recovery for samples that meet the first Born approximation condition (). This 12 μm microsphere sample approximately satisfies the condition (). Second, for any given one-dimensional trace through a microsphere center, we would ideally expect a perfect rect function (between to ). This is unlike 2D FP, which reconstructs the phase delay through each sphere and forms a parabolically shaped phase measurement (due to the varying thickness of each sphere along the optical axis). While FPT resolves an approximate step function through the center of the sphere along the lateral () dimension, it does not along the axial () direction. This is caused by the limited volume of 3D k-space that FPT measures (i.e., the limited bandpass or “missing cone” of information surrounding the axis). While our stationary sample/detector setup cannot avoid this missing cone, various methods are available to computationally fill it in.
Finally, we compare FPT with two alternative techniques for 3D imaging in Fig. 4(c). First, we use the same dataset to perform 2D FP and then holographically refocus its reconstructed optical field. We obtain this FP reconstruction using the same number of images () and follow the procedure in Ref. , after focusing the objective lens at the axial center of the 12 μm microspheres. The “out-of-focus noise” above and below the plane of the microsphere, created by digital propagation of the complex field via the angular spectrum method, noticeably hides its spherical shape. Second, we interpret the same raw image set as a light field and perform light-field refocusing. While the refocused light field approximately resolves the outline of the microsphere along , it does not offer a quantitative picture of the sample interior, nor a measure of its complex index of refraction. The areas above the microsphere are very bright due to its lensing effect (i.e., the light field displays the optical intensity at each plane and thus displays high energy where the microsphere focuses light).
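Light-field refocusing here amounts to shift-and-add: each tilted-illumination image is laterally shifted in proportion to the refocus depth and illumination angle, then averaged. A minimal sketch, with pixel-unit geometry and a hypothetical function name (not the authors' implementation):

```python
import numpy as np

def lightfield_refocus(images, angles, z):
    """Shift-and-add light-field refocus to depth z.

    images : list of 2D intensity images, one per illumination angle.
    angles : list of (tan_theta_x, tan_theta_y) illumination tangents.
    z      : refocus depth, in units such that z * tan(theta) is in pixels.
    """
    out = np.zeros_like(images[0], dtype=float)
    for img, (tx, ty) in zip(images, angles):
        dx = int(round(z * tx))  # lateral shift induced by the tilt at depth z
        dy = int(round(z * ty))
        out += np.roll(np.roll(img, dx, axis=0), dy, axis=1)
    return out / len(images)
```

Features located at depth z realign under these shifts and add constructively, while features at other depths blur out; unlike FPT, the result is an intensity map only, with no refractive index information.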
We verify the axial resolution of FPT in Fig. 5 using a sample containing two closely separated layers of 2 μm microspheres () distributed across two glass surfaces with oil in between (). The axial separation between the two microsphere layers, measured from the center of each sphere along , is 3.9 μm [i.e., the separation between the microscope slide surfaces is 5.9 μm; see Fig. 5(a)]. This nearly matches the expected axial resolution limit of 3.7 μm for the FPT microscope.
Conventional microscope images of the sample, using the center LED for illumination, are in Figs. 5(b) and 5(c). Here, we focus on the center of the two layers () as well as the top microsphere layer () in an attempt to distinguish the two separate layers. At the top of each image (where microspheres in the two layers overlap), it is especially hard to resolve each sphere or determine which sphere is in a particular layer. These challenges are due in part to the limited amount of information contained within the optical intensity at each plane, as opposed to the sample’s complex refractive index.
Next, we return the focus to the plane and implement FPT. We display three slices of our 3D scattering potential reconstruction in Figs. 5(d)–5(f). Here, we show the absolute values of the potential near the plane of the top layer, at the center, and near the plane of the bottom layer. The originally indistinguishable spheres within the top and bottom layers are now clearly resolved in each -plane. Due to the system’s limited axial resolution, the reconstruction at the middle plane () still shows the presence of spheres from both layers. Comparing Figs. 5(b) and 5(c) with Figs. 5(e) and 5(f), it is clear that the axial resolution of FPT is sharper than manual refocusing. Not only is each sphere layer distinguishable (as predicted theoretically), but we now also have quantitative information about the sample’s complex refractive index.
B. Biological Experiments
For our first biological demonstration, we reconstruct a Trichinella spiralis parasite in 3D (see Fig. 6 and Visualization 1 for the complete tomogram). Since the worm extended along a larger distance than the width of our detector, we performed FPT twice and shifted the FOV between acquisitions to capture the left and right sides of the worm. We then merged the two tomographic reconstructions with a simple averaging operation (matching that from FP, 10% overlap). The total captured volume here is . If our setup included a digital detector that occupied the entire microscope FOV, the fixed imaging volume would be , and no movement would be needed for this example.
A thresholded 3D reconstruction of the parasite index is at the top of Fig. 6 (real component, threshold applied at after normalization to 1, under-sampled for clarity). The maximum real index variation across the tomogram before normalization is approximately 0.06, which can be seen without thresholding in Visualization 1. The parasite’s 3D curved trajectory is especially clear in the three separate -slices of the reconstructed tomogram in Fig. 6(a). The two downward bends in the parasite body sit lower than the upward bend in the middle, as well as than its front and back ends. It is very challenging to resolve these depth-dependent sample features by simply refocusing a standard microscope. Figure 6(b) displays such an attempt, where the same three planes are brought into focus manually. Since the sample is primarily transparent, in-focus areas in each standard image actually exhibit minimal contrast, as marked by arrows in Fig. 6(b). We plot the intensity through a fixed worm section (black dash) in each of the three insets. The intensity contrast drops by over a factor of 2 at in-focus locations, which will pose a significant challenge to any depth segmentation technique (e.g., focal stack deconvolution). Since FPT effectively offers 3D phase contrast, points along the parasite instead show maximum contrast within its reconstruction voxels, which enables direct segmentation via thresholding, as shown in the plots in Fig. 6(a).
For our second 3D biological example, we tomographically reconstruct a starfish embryo at its larval stage [see Fig. 7(a) and Visualization 2]. Here, we again show three different closely spaced -slices of the reconstructed scattering potential (, no thresholding applied). Each -slice contains sample features that are absent from the adjacent -slices. For example, the large oval structure in the upper left of the plane, which is a developing stomach, nearly completely disappears in the plane. At this -slice, however, small structures, which we expect to be developing mesenchyme cells  and various epithelial cells , clearly appear in the lower right. We confirm the presence of these structures with a differential interference contrast (DIC) confocal microscope in Fig. 7(c) (Zeiss LSM 510, 0.8 NA, , 0.2 μm scan step). A DIC confocal scan is one of the few viable imaging options for this thick, primarily transparent sample, but it covers a much smaller FOV (approximately 4% of the total effective FPT FOV), does not quantitatively measure the refractive index, and requires mechanical scanning. Finally, we attempt to refocus through the embryo using a standard microscope () in Fig. 7(b). Both the particular plane of the developing stomach and even the presence of the mesenchyme cells are completely missing from the refocused images. This failure stems from the standard microscope's inability to segment each plane of interest, its inability to accurately render transparent structures without a phase contrast mechanism, and its inferior lateral resolution with respect to FPT.
We have performed diffraction tomography using intensity measurements captured with a standard microscope and an LED illuminator. The current system offers a lateral resolution of approximately 400 nm at the Nyquist–Shannon sampling limit (550 nm at the Sparrow limit and 800 nm at the full-period limit) and an axial resolution of 3.7 μm at the sampling limit. The maximum axial extent attempted thus far is 110 μm along z, and we have demonstrated quantitative measurement of the complex index of refraction through several types of thick specimens with contiguous features.
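As a rough consistency check on the quoted numbers, the standard synthetic-aperture resolution formulas can be evaluated directly. The wavelength and NA values below are assumptions chosen for illustration, not stated in this section:

```python
import math

# Back-of-the-envelope check of the quoted resolutions; the wavelength
# and NA values below are ASSUMPTIONS, not taken from this text.
wavelength = 0.632   # μm, assumed LED center wavelength
na_obj = 0.40        # assumed objective NA
na_ill = 0.40        # assumed maximum illumination NA

# Synthetic-aperture lateral resolution: full period = lambda/(NA_obj+NA_ill);
# the Nyquist-Shannon (half-period) resolution is half of that.
lateral_full_period = wavelength / (na_obj + na_ill)
lateral_nyquist = lateral_full_period / 2

# In diffraction tomography the axial bandwidth spans
# (2 - cos(theta_obj) - cos(theta_ill)) / lambda, giving an axial resolution of:
axial = wavelength / (2 - math.cos(math.asin(na_obj))
                        - math.cos(math.asin(na_ill)))
```

Under these assumed values the formulas land near the quoted 0.39 μm lateral and 3.7 μm axial figures, which is the expected order of magnitude for an LED-array synthetic aperture.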
To improve the experimental setup, an alternative LED array geometry that enables a higher illumination angle would increase the resolution. Also, we set the number of captured images here to match the data redundancy required by ptychography . However, we have observed that reconstructions succeed with far fewer images than expected. Along with a multiplexed illumination strategy , this may help shorten the tomogram capture time. In addition, we did not explicitly account for the finite LED spectral bandwidth or attempt polychromatic capture, which can potentially provide additional information about volumetric samples [32,35]. Finally, we set our reconstruction range along the z-axis somewhat arbitrarily at 110 μm. We expect to be able to extend this axial range further in the future.
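The multiplexed illumination idea rests on the fact that mutually incoherent LEDs lit simultaneously produce the sum of their single-LED intensity images, so one frame can encode several illumination angles. A minimal sketch with a hypothetical per-LED image stack:

```python
import numpy as np

# With mutually incoherent LEDs lit together, the camera records the SUM of
# the single-LED intensity images; multiplexed schemes exploit this to cut
# capture time. The stack and on/off pattern below are hypothetical.
rng = np.random.default_rng(1)
single_led = rng.random((8, 32, 32))           # 8 hypothetical single-LED images
pattern = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # LEDs switched on this frame

# one multiplexed camera frame = pattern-weighted sum over the LED axis
multiplexed = np.tensordot(pattern, single_led, axes=1)
```

A multiplexed solver then recovers the single-LED information from a set of such frames captured under different on/off patterns.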
Subsequent work should also examine connections between FPT, multi-slice-based techniques for ptychography [33,34], and machine learning for 3D reconstruction . While prior work already specifies sample conditions under which the first Born  and multi-slice  approximations remain accurate, it is not yet clear whether these conditions translate directly to reconstructions that require phase retrieval. By merging these approaches, it may be possible to extend the domain of sample validity beyond what each technique currently achieves independently.
Finally, FPT may also adopt alternative computational tools to improve ptychographic DT under the first Born approximation. We used the well-known alternating projections phase retrieval update; other solvers based on convex optimization  or alternative gradient descent techniques [56,57] may perform better in the presence of noise. Approximations other than the first Born approximation (e.g., Rytov ) are also available to simplify the Born series. In addition, the resolution is currently impacted by the missing cone in 3D k-space, and various methods can fill in this cone by assuming the sample is positive-only, sparse, or of finite spatial support . Finally, methods exist to solve for the full Born series by accounting for multiple scattering [46,47]. Connecting such a multiple-scattering solver to FPT may aid the reconstruction of increasingly turbid biological samples.
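The alternating projections update mentioned above can be illustrated with a minimal error-reduction loop that alternates between a Fourier-magnitude constraint and a real-space positivity/support constraint (the latter being one of the missing-cone priors noted here). This 2D sketch only illustrates the class of update; the actual FPT solver additionally models the pupil and the 3D k-space geometry:

```python
import numpy as np

def alternating_projections(measured_mag, support, n_iter=50, seed=0):
    """Minimal error-reduction sketch: alternate between enforcing the
    measured Fourier magnitudes and a real-space positivity/support
    constraint. Illustrative only; not the full FPT solver."""
    rng = np.random.default_rng(seed)
    x = rng.random(measured_mag.shape)
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        # projection 1: keep the phase, replace magnitude with measurement
        X = measured_mag * np.exp(1j * np.angle(X))
        x = np.fft.ifft2(X).real
        # projection 2: positivity inside the support, zero outside
        x = np.clip(x, 0, None) * support
    return x
```

Swapping this loop for a gradient-descent or convex solver changes only the update rule; the two constraint sets stay the same.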
National Institutes of Health (NIH) (1R01AI096226-01); The Caltech Innovation Initiative (CI2) Program (13520135).
The authors would like to thank J. Brake, B. Judkewitz, and I. Papadopoulos for the helpful discussions and feedback.
See Supplement 1 for supporting content.
1. V. Ntziachristos, “Going deeper than microscopy: the optical imaging frontier in biology,” Nat. Methods 7, 603–614 (2010). [CrossRef]
2. K. Chung and K. Deisseroth, “CLARITY for mapping the nervous system,” Nat. Methods 10, 508–513 (2013). [CrossRef]
3. F. Chen, P. W. Tillberg, and E. S. Boyden, “Expansion microscopy,” Science 347, 543–548 (2015). [CrossRef]
4. D. A. Agard, “Optical sectioning microscopy: cellular architecture in three dimensions,” Annu. Rev. Biophys. Bioeng. 13, 191–219 (1984). [CrossRef]
5. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3D deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013). [CrossRef]
6. S. R. P. Pavani and R. Piestun, “Three dimensional tracking of fluorescent microparticles using a photon-limited double-helix response system,” Opt. Express 16, 22048–22057 (2008). [CrossRef]
7. A. Dubois, L. Vabre, A. C. Boccara, and E. Beaurepaire, “High-resolution full-field optical coherence tomography with a Linnik microscope,” Appl. Opt. 41, 805–812 (2002). [CrossRef]
8. S. G. Adie, B. W. Graf, A. Ahmad, P. S. Carney, and S. A. Boppart, “Computational adaptive optics for broadband optical interferometric tomography of biological tissue,” Proc. Natl. Acad. Sci. USA 109, 7175–7180 (2012). [CrossRef]
9. T. E. Matthews, M. Medina, J. R. Maher, H. Levinson, W. J. Brown, and A. Wax, “Deep tissue imaging using spectroscopic analysis of multiply scattered light,” Optica 1, 105–111 (2014). [CrossRef]
10. N. Streibl, “Three-dimensional imaging by a microscope,” J. Opt. Soc. Am. A 2, 121–127 (1985). [CrossRef]
11. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1, 153–156 (1969). [CrossRef]
12. V. Lauer, “New approach to optical diffraction tomography yielding a vector equation of diffraction tomography and a novel tomographic microscope,” J. Microsc. 205, 165–176 (2002). [CrossRef]
13. M. Debailleul, B. Simon, V. Georges, O. Haeberle, and V. Lauer, “Holographic microscopy and diffractive microtomography of transparent samples,” Meas. Sci. Technol. 19, 074009 (2008). [CrossRef]
14. Y. Cotte, F. Toy, P. Jourdain, N. Pavillon, D. Boss, P. Magistretti, P. Marquet, and C. Depeursinge, “Marker-free phase nanoscopy,” Nat. Photonics 7, 113–117 (2013). [CrossRef]
15. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17, 266–277 (2009). [CrossRef]
16. K. Kim, Z. Yaqoob, K. Lee, J. W. Kang, Y. Choi, P. Hosseini, T. C. So, and Y. Park, “Diffraction optical tomography using a quantitative phase imaging unit,” Opt. Lett. 39, 6935–6938 (2014). [CrossRef]
17. A. J. Devaney, “Structure determination from intensity measurements in scattering experiments,” Phys. Rev. Lett. 62, 2385–2388 (1989). [CrossRef]
18. M. H. Maleki, A. J. Devaney, and A. Schatzberg, “Tomographic reconstruction from optical scattered intensities,” J. Opt. Soc. Am. A 9, 1356–1363 (1992). [CrossRef]
19. M. H. Maleki and A. J. Devaney, “Phase-retrieval and intensity-only reconstruction algorithms for optical diffraction tomography,” J. Opt. Soc. Am. A 10, 1086–1092 (1993). [CrossRef]
20. T. C. Wedberg and J. J. Stamnes, “Comparison of phase retrieval methods for optical diffraction tomography,” Pure Appl. Opt. 4, 39–54 (1995). [CrossRef]
21. T. Takenaka, D. J. N. Wall, H. Harada, and M. Tanaka, “Reconstruction algorithm of the refractive index of a cylindrical object from the intensity measurements of the total field,” Microwave Opt. Technol. Lett. 14, 182–188 (1997). [CrossRef]
22. G. Gbur and E. Wolf, “Diffraction tomography without phase information,” Opt. Lett. 27, 1890–1892 (2002). [CrossRef]
23. T. E. Gureyev, T. J. Davis, A. Pogany, S. C. Mayo, and S. W. Wilkins, “Optical phase retrieval by use of first Born and Rytov-type approximations,” Appl. Opt. 43, 2418–2430 (2004). [CrossRef]
24. M. A. Anastasio, D. Shi, Y. Huang, and G. Gbur, “Image reconstruction in spherical-wave intensity diffraction tomography,” J. Opt. Soc. Am. A 22, 2651–2661 (2005). [CrossRef]
25. Y. Huang and M. A. Anastasio, “Statistically principled use of in-line measurements in intensity diffraction tomography,” J. Opt. Soc. Am. A 24, 626–642 (2007). [CrossRef]
26. M. D’Urso, K. Belkebir, L. Crocco, T. Isernia, and A. Litman, “Phaseless imaging with experimental data: facts and challenges,” J. Opt. Soc. Am. A 25, 271–281 (2008). [CrossRef]
27. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22, 4960–4972 (2014). [CrossRef]
28. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). [CrossRef]
29. S. O. Isikman, W. Bishara, S. Mavandadi, F. W. Yu, S. Feng, R. Lau, and A. Ozcan, “Lens-free optical tomographic microscope with a large imaging volume on a chip,” Proc. Natl. Acad. Sci. USA 108, 7296–7301 (2011). [CrossRef]
30. T. E. Gureyev, D. M. Paganin, G. R. Myers, Y. I. Nesterets, and S. W. Wilkins, “Phase-and-amplitude computer tomography,” Appl. Phys. Lett. 89, 034102 (2006). [CrossRef]
31. A. V. Bronnikov, “Theory of quantitative phase-contrast computed tomography,” J. Opt. Soc. Am. A 19, 472–480 (2002). [CrossRef]
32. T. Kim, R. Zhou, M. Mir, S. D. Babacan, P. S. Carney, L. L. Goddard, and G. Popescu, “White light diffraction tomography of unlabeled live cells,” Nat. Photonics 8, 256–263 (2014). [CrossRef]
33. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015). [CrossRef]
34. P. Li, D. J. Batey, T. B. Edo, and J. M. Rodenburg, “Separation of three-dimensional scattering effects in tilt-series Fourier ptychography,” Ultramicroscopy 158, 1–7 (2015). [CrossRef]
35. T. D. Gerke and R. Piestun, “Aperiodic volume optics,” Nat. Photonics 10, 1–6 (2010).
36. M. Dierolf, A. Menzel, P. Thibault, P. Schneider, C. M. Kewish, R. Wepf, O. Bunk, and F. Pfeiffer, “Ptychographic X-ray computed tomography at the nanoscale,” Nature 467, 436–439 (2010). [CrossRef]
37. A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29, 1606–1614 (2012). [CrossRef]
38. T. M. Godden, R. Suman, M. J. Humphry, J. M. Rodenburg, and A. M. Maiden, “Ptychographic microscope for three-dimensional imaging,” Opt. Express 22, 12513–12523 (2014). [CrossRef]
39. C. T. Putkunz, M. A. Pfeifer, A. G. Peele, G. J. Williams, H. M. Quiney, B. Abbey, K. A. Nugent, and I. McNulty, “Fresnel coherent diffraction tomography,” Opt. Express 18, 11746–11753 (2010). [CrossRef]
40. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013). [CrossRef]
41. J. V. Roey, J. V. Donk, and P. E. Lagasse, “Beam-propagation method: analysis and assessment,” J. Opt. Soc. Am. 71, 803–810 (1981). [CrossRef]
42. D. M. Paganin, Coherent X-Ray Optics (Oxford University, 2006).
43. B. Chen and J. J. Stamnes, “Validity of diffraction tomography based on the first Born and the first Rytov approximations,” Appl. Opt. 37, 2996–3006 (1998). [CrossRef]
44. K. Jaganathan, Y. C. Eldar, and B. Hassibi, “Phase retrieval: an overview of recent developments,” arXiv:1510.07713v1 (2015).
45. K. Nugent, “Coherent methods in the X-ray sciences,” Adv. Phys. 59, 1–99 (2010). [CrossRef]
46. Y. M. Wang and W. C. Chew, “An iterative solution of the two-dimensional electromagnetic inverse scattering problem,” Int. J. Imaging Syst. Technol. 1, 100–108 (1989). [CrossRef]
47. G. A. Tsihrintzis and A. J. Devaney, “Higher-order diffraction tomography: reconstruction algorithms and computer simulation,” IEEE Trans. Image Process. 9, 1560–1572 (2000). [CrossRef]
48. O. Bunk, M. Dierolf, S. Kynde, I. Johnson, O. Marti, and F. Pfeiffer, “Influence of the overlap parameter on the convergence of the ptychographical iterative engine,” Ultramicroscopy 108, 481–487 (2008). [CrossRef]
49. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express 23, 3472–3491 (2015). [CrossRef]
50. S. Marchesini, “A unified evaluation of iterative projection algorithms for phase retrieval,” Rev. Sci. Instrum. 78, 011301 (2007). [CrossRef]
51. K. C. Tam and V. Perezmendez, “Tomographical imaging with limited-angle input,” J. Opt. Soc. Am. 71, 582–592 (1981). [CrossRef]
52. G. Hamanaka, M. Matsumoto, M. Imoto, and H. Kaneko, “Mesenchyme cells can function to induce epithelial cell proliferation in starfish embryos,” Dev. Dyn. 239, 818–827 (2010). [CrossRef]
53. A. Agassiz, Embryology of the Starfish, in Vol. 5 of L. Agassiz Contributions to the Natural History of the United States (Cambridge, 1864), 76 p.
54. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Learning approach to optical tomography,” Optica 2, 517–522 (2015). [CrossRef]
55. R. Horstmeyer, R. C. Chen, X. Ou, B. Ames, J. A. Tropp, and C. Yang, “Solving ptychography with a convex relaxation,” New J. Phys. 17, 053044 (2015). [CrossRef]
56. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Wirtinger flow optimization,” Opt. Express 23, 4856–4866 (2015). [CrossRef]
57. L. H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23, 33214–33240 (2015). [CrossRef]