High-content biological microscopy targets high-resolution imaging across large fields-of-view, often achieved by computational imaging approaches. Previously, we demonstrated 2D multimodal high-content microscopy via structured illumination microscopy (SIM) with resolution beyond the diffraction limit, using speckle illumination from Scotch tape. In this work, we extend the method to 3D by leveraging the fact that the speckle illumination is in fact a 3D structured pattern. We use both a coherent and an incoherent imaging model to develop algorithms for joint retrieval of the 3D super-resolved fluorescent and complex-field distributions of the sample. Our reconstructed images resolve features beyond the physical diffraction limit set by the system's objective and demonstrate 3D multimodal imaging with ~0.6 μm lateral and ~6 μm axial resolution over a ~314 μm × 500 μm × 24 μm volume.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
High-content optical microscopy is a driving force for large-scale biological study in fields such as drug discovery and systems biology. With fast imaging speeds over large fields-of-view (FOV) and high spatial resolutions [1–8], one can visualize rare cell phenotypes and dynamics. The traditional solution for 2D high-content microscopy is to mechanically scan samples through the limited FOV of a high-NA (i.e. high resolution) imaging objective and then digitally stitch the images together. However, this scheme is limited in imaging speed due to the large-distance translations of the sample, as well as the need for auto-refocusing at each position. These issues are further compounded when extending this high-content imaging strategy to 3D.
Recently, computational imaging has demonstrated efficient strategies for high-content 2D microscopy. In contrast with slide scanning, these strategies often employ a low-NA imaging objective to acquire low-resolution (large-FOV) measurements, then use computational techniques like synthetic aperture [10–12] and super-resolution (SR) [13–18] to digitally reconstruct a high-resolution image. This eliminates the requirement for large-distance mechanical scanning in high-content imaging, which results in faster acquisition and more cost-effective optical setups, while also relaxing the sample's auto-refocusing requirements due to the low-NA objective's longer depth-of-field (DOF) [19–36]. Examples of such approaches include lensless microscopy [19–21] and Fourier ptychography [22–28] for coherent absorption and quantitative phase imaging. For incoherent fluorescent imaging, micro-lenslet arrays [29–32], Talbot plane scanning [33–35], diffuse media, or meta-surfaces have also been demonstrated. Among these examples, 3D high-content imaging capability has only been demonstrated in the coherent imaging context (quantitative phase and absorption) by Fourier ptychography [25, 27].
Our previous work demonstrated multimodal coherent (quantitative phase) and incoherent (fluorescence) imaging for high-content 2D microscopy. Multimodal imaging is important for biological studies requiring cross-correlative analysis [39–43]. Structured illumination microscopy (SIM) [10, 16, 17, 44] with speckle illumination [36, 45–53] was used to encode 2D SR quantitative phase and fluorescence. However, because propagating speckle contains 3D features, it also encodes 3D information. Considering speckle patterns as random interference of multiple angled plane waves, the scattered light from interactions with the sample carries 3D phase (coherent) information, similar to the case of non-random angled illumination in diffraction tomography [54–57] and 3D Fourier ptychography [25, 27]. Simultaneously, the fluorescent (incoherent) light excited by the 3D speckle pattern encodes 3D SR fluorescence information as in the case of 3D SIM. Combining these, we propose a method for 3D SR quantitative phase and fluorescence microscopy using speckle illumination.
Experimentally, we position a Scotch tape patterning element just before the sample, mounted on a translation stage to generate a translating speckle field that illuminates the sample (Fig. 1). Because the speckle grain size is smaller than the PSF of the low-NA imaging objective (which provides large-FOV), the coherent scattered light from the speckle-sample interaction encodes 3D SR quantitative phase information. In addition to lateral scanning of the Scotch tape, axial sample scanning is necessary to efficiently capture 3D SR fluorescence information. Nonlinear optimization methods based on the 3D coherent beam propagation model [25, 59–61] and the 3D incoherent imaging model were formulated to reconstruct the 3D speckle field and imaging system aberrations, which are subsequently used to reconstruct the sample's 3D SR quantitative phase and fluorescence distributions. Since the Scotch tape is directly before the sample, the illumination NA is not limited by the objective lens, allowing for lateral resolution gain across the entire FOV. This framework enables us to achieve 3D imaging at sub-micron lateral resolution and micron axial resolution across a half-millimeter FOV.
We start from the concept of 3D coherent and incoherent transfer functions (TFs), using the Born (weak scattering) assumption, to analyze the information encoding process. We then lay out our 3D coherent and incoherent imaging models and derive the corresponding inverse problems to extract SR quantitative phase and fluorescence from the measurements.
First, we introduce linear space-invariant relationships between raw measurements and 3D coherent scattering and incoherent fluorescence [54, 58, 62, 63], by invoking the Born (weak scattering) approximation. These relationships enable us to define TFs for the coherent and incoherent imaging processes. The supports of these TFs in 3D Fourier space determine how much spatial frequency content of the sample can be passed through the system (i.e. the 3D diffraction-limited resolution).
In a coherent imaging system with on-axis plane-wave illumination, the TF describes the relationship between the sample's scattering potential and the measured 3D scattered field, taking the shape of a spherical cap in 3D Fourier space (Fig. 2(a)). In an incoherent imaging system, the TF is the autocorrelation of the coherent system's TF, relating the sample's fluorescence distribution to the 3D measured intensity. It takes the shape of a torus (Fig. 2(b)). The spatial frequency bandwidths of these TFs are summarized in Table 1, where the lateral resolution of the system is proportional to the lateral bandwidth of the TF. The incoherent TF has 2× greater lateral bandwidth than the coherent TF. Axial bandwidth generally depends on the lateral spatial frequency, so axial resolution is specified in terms of the best case. Note that the axial bandwidth of the coherent TF is zero, which means there is zero axial resolution for coherent imaging; hence the poor depth sectioning ability in 3D holographic imaging [41, 56, 64].
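The TF bandwidths above translate directly into the native resolutions quoted later in the experimental section. A minimal numerical sketch, assuming the paper's 0.4 NA objective and 532 nm excitation, with an assumed emission wavelength of ~605 nm for the red-emitting fluorophores:

```python
# Diffraction-limited lateral resolutions implied by the coherent and
# incoherent TF bandwidths. NA and excitation wavelength follow the setup
# described later; the ~605 nm emission wavelength is an assumption.
NA = 0.4            # detection NA of the 20x objective
lam_ex = 0.532      # coherent (excitation) wavelength, um
lam_em = 0.605      # assumed fluorescence emission wavelength, um

# Coherent TF (spherical cap): lateral bandwidth NA/lambda -> resolution lambda/NA
res_coherent = lam_ex / NA
# Incoherent TF (autocorrelation of coherent TF): 2x the lateral bandwidth
res_incoherent = lam_em / (2 * NA)

print(f"coherent lateral resolution:   {res_coherent:.2f} um")    # ~1.33 um
print(f"incoherent lateral resolution: {res_incoherent:.2f} um")  # ~0.76 um
```

These values match the 1.33 μm and 760 nm native resolutions stated in the experimental results.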
SIM enhances resolution by creating beat patterns. When a 3D structured pattern modulates the sample, the sample's sub-diffraction features create lower-frequency beat patterns which can be directly measured and used to reconstruct a SR image of the sample via post-processing [17, 58]. This process is generally applicable to both coherent and incoherent imaging [40–43], enabling 3D SR multimodal imaging. Mathematically, a modulation between the sample contrast and the illumination pattern in real space can be interpreted as a convolution in Fourier space. This convolution result is then passed through the 3D TF defined in Fig. 2(a,b). The effective support of information going into the measurements can be estimated by conducting cross-correlations between the 3D TFs and the Fourier content of the illumination patterns, as shown in Fig. 2(c,e) and 2(d,f) for coherent and incoherent systems, respectively. The lateral and axial spatial frequency bandwidths of both the illumination and the 3D SIM Fourier supports for coherent and incoherent imaging are summarized in Table 1. Assuming approximately equal excitation and emission wavelengths, the achievable lateral resolution gain of 3D SIM (the ratio between the lateral bandwidths of 3D SIM and the 3D TF) is $(NA_{\mathrm{det}} + NA_{\mathrm{illum}})/NA_{\mathrm{det}}$ for both coherent and incoherent imaging. Axially, coherent SIM builds up spatial frequency bandwidth where the coherent TF had none, and incoherent SIM likewise achieves an axial resolution gain.
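The frequency-mixing mechanism behind SIM can be seen in a toy 1D sketch: a sample frequency that lies outside the detection passband is downshifted to the beat frequency by multiplication with the illumination pattern. All frequencies here are illustrative, not the paper's values.

```python
import numpy as np

# 1D SIM frequency-mixing demo: multiplying a sample tone at f_s by an
# illumination tone at f_i creates components at f_s - f_i and f_s + f_i,
# so sub-diffraction content (f_s > f_cut) appears inside the passband.
n, dx = 1024, 0.05                 # samples, spacing (um); range n*dx = 51.2 um
x = np.arange(n) * dx
f_s, f_i = 3.125, 2.5              # sample / illumination freqs (cycles/um)
f_cut = 1.5                        # detection cutoff (cycles/um)

product = np.cos(2 * np.pi * f_s * x) * np.cos(2 * np.pi * f_i * x)
spectrum = np.abs(np.fft.rfft(product))
freqs = np.fft.rfftfreq(n, dx)

# The strongest in-band component sits at the beat frequency |f_s - f_i|
inside = freqs < f_cut
f_beat = freqs[inside][np.argmax(spectrum[inside])]
print(f"beat frequency in passband: {f_beat:.3f} cycles/um")  # 0.625 = f_s - f_i
```

Post-processing in SIM amounts to moving this downshifted content back to its true frequency, which is possible here because the illumination (and hence the shift) is known after self-calibration.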
In this work, because the Scotch tape does not pass through an objective, it is able to create high-resolution speckle illumination whose illumination NA can exceed the detection NA, enabling lateral resolution gain without sacrificing FOV. From the TF analysis, we also see that information beyond the diffraction limit in the axial dimension is obtainable. The next sections outline our computational scheme for 3D SR phase and fluorescence reconstruction. To provide higher-quality reconstructions and more robust operation, our algorithm jointly estimates the illumination speckle field, the system pupil function (aberrations), the sample's 3D transmittance function, and the sample's 3D fluorescence distribution.
2.1. 3D super-resolution phase imaging
We adopt a multi-slice coherent scattering model to describe the 3D multiple-scattering process [25, 59–61] and solve for 3D SR quantitative phase. Our system captures intensity at two focus planes (one in focus and one defocused) for every speckle-scanned point. With these measurements and the multi-slice model, we are able to reconstruct the sample's 3D SR complex field and the scattered field inside the 3D sample, which is used in the fluorescence inverse problem.
2.1.1. Forward model for 3D coherent imaging
Figure 3(a) illustrates the 3D multi-slice coherent imaging model. Plane-wave illumination of the Scotch tape, positioned at the $j$-th scanned point, creates a speckle field $p(\mathbf{r} - \mathbf{r}_j)$, where $\mathbf{r}$ is the lateral spatial coordinate and $\mathbf{r}_j$ the scan position. This speckle field propagates a distance $z_s$ to the sample, so the field interacting with the first layer of the sample is described as

$$f_{1,j}(\mathbf{r}) = \mathrm{Crop}\left\{\mathcal{F}^{-1}\left[e^{i2\pi \eta(\mathbf{u}) z_s}\, \mathcal{F}\left\{p(\mathbf{r} - \mathbf{r}_j)\right\}\right]\right\},$$

where $\mathcal{F}$ is the 2D Fourier transform, $\mathbf{u}$ is the spatial frequency coordinate, $\eta(\mathbf{u}) = \sqrt{1/\lambda^2 - \|\mathbf{u}\|^2}$ is the corresponding axial spatial frequency, and $\mathrm{Crop}\{\cdot\}$ is a cropping operator that selects the part of the speckle field that illuminates the sample. To model scattering and propagation inside the sample, the multi-slice model treats the 3D sample as multiple slices of complex transmittance function, $t_m(\mathbf{r})$ ($m = 1, \dots, M$), where $m$ is the slice index number. As the field propagates through each slice, it first multiplies with the 2D transmittance function at that slice, then propagates to the next slice. The spacing between slices is modeled as uniform media of thickness $\Delta z$. Hence, at each layer we have

$$f_{m+1,j}(\mathbf{r}) = \mathcal{F}^{-1}\left[e^{i2\pi \eta(\mathbf{u}) \Delta z}\, \mathcal{F}\left\{t_m(\mathbf{r})\, f_{m,j}(\mathbf{r})\right\}\right].$$
After passing through all $M$ slices, the output scattered field, $f_{M+1,j}(\mathbf{r})$, propagates to the focal plane and gets imaged onto the sensor through the system pupil $P(\mathbf{u})$ (with defocus $z$), forming our measured intensity

$$I_{j,z}(\mathbf{r}) = \left|\mathcal{F}^{-1}\left[P(\mathbf{u})\, e^{i2\pi \eta(\mathbf{u}) z}\, \mathcal{F}\left\{f_{M+1,j}(\mathbf{r})\right\}\right]\right|^2.$$
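The multi-slice forward model above can be sketched numerically with angular-spectrum propagation. This is a minimal illustration, not the paper's implementation; the function names and parameters are our own, and evanescent components are simply dropped.

```python
import numpy as np

def propagate(field, dz, lam, dx):
    """Angular-spectrum propagation of a square 2D complex field over dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / lam**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)          # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multislice_intensity(speckle, slices, dz, lam, dx, na, defocus=0.0):
    """Multi-slice sketch: at each slice, multiply by its complex
    transmittance and propagate by dz; then low-pass the exit field with
    the objective pupil (radius NA/lambda), optionally defocus, and record
    intensity."""
    field = speckle
    for t in slices:
        field = propagate(field * t, dz, lam, dx)
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    pupil = (FX**2 + FY**2) <= (na / lam)**2
    field = np.fft.ifft2(np.fft.fft2(field) * pupil)
    if defocus:
        field = propagate(field, defocus, lam, dx)
    return np.abs(field)**2
```

A quick sanity check: a unit plane wave through fully transparent slices returns unit intensity everywhere, since only the field's phase evolves.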
2.1.2. Inverse problem for 3D coherent imaging
We take the intensity measurements from both coherent cameras and the registered scanning trajectory (calculated via standard rigid-body 2D registration [38, 66]) as inputs to jointly estimate the sample's 3D SR transmittance function, as well as the illumination complex field and the system's coherent PSF, including aberrations.
Based on the forward model in the previous section, we formulate the inverse problem as

$$\min_{\{t_m\},\, p,\, P}\; e_c = \sum_{j,z} \left\|\sqrt{I_{j,z}(\mathbf{r})} - \left|\hat{f}_{j,z}(\mathbf{r})\right|\right\|_2^2,$$

where $\hat{f}_{j,z}$ is the field at the sensor predicted by the forward model.
Here we adopt an amplitude-based cost function, $e_c$, which minimizes the difference between the measured and estimated coherent amplitudes in the presence of noise. In order to solve this optimization problem, we use a sequential gradient descent algorithm [67, 68]. The gradient from each single measurement is calculated and used to update the sample's transmittance function, the illumination speckle field, and the coherent PSF. A whole iteration of variable updates is complete after running through all the measurements. In Appendix A we provide a detailed derivation of the gradients, and in Appendix B we lay out our reconstruction algorithm.
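The structure of the sequential solver can be illustrated on a toy problem, where a random complex matrix stands in for the full multi-slice physical model. This is a sketch under that simplification, not the paper's solver: it loops over measurements one at a time, updating the estimate with the gradient of the amplitude-based cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sequential gradient descent with an amplitude-based cost:
#   e_c = || sqrt(I_meas) - |G x| ||^2,
# whose gradient w.r.t. x is  G^H [ (|G x| - sqrt(I_meas)) * exp(i*angle(G x)) ].
n, n_meas = 32, 8
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Gs = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
      for _ in range(n_meas)]
amps = [np.abs(G @ x_true) for G in Gs]        # sqrt of the intensity data

def cost(x):
    return sum(np.sum((np.abs(G @ x) - a) ** 2) for G, a in zip(Gs, amps))

x = np.zeros(n, complex)
step = 1e-3
for sweep in range(200):                        # one sweep = all measurements
    for G, a in zip(Gs, amps):
        g = G @ x
        resid = (np.abs(g) - a) * np.exp(1j * np.angle(g))
        x = x - step * (G.conj().T @ resid)     # sequential (per-measurement) update
```

The per-measurement update is what makes the algorithm "sequential": each raw image contributes its own gradient step before the next image is visited, as described above for the transmittance, speckle, and PSF variables.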
2.2. 3D super-resolution fluorescence imaging
Reconstruction of 3D SR images for the fluorescence channel involves an incoherent multi-slice forward model (Fig. 3(b)) and a joint inverse problem solver. The coherent result provides a good starting estimate of the 3D speckle intensity throughout the sample, which, together with the fluorescent channel's raw data, is used to reconstruct the sample's 3D SR fluorescence distribution and the system's aberrations at the emission wavelength.
2.2.1. Forward model for 3D fluorescence imaging
The 3D fluorescence distribution is also modeled by multiple slices of 2D distributions, $o_m(\mathbf{r})$ ($m = 1, \dots, M$), as shown in Fig. 3(b). Each layer is illuminated by the $m$-th layer's excitation intensity, $|f_{m,j}(\mathbf{r})|^2$, for Scotch tape position $\mathbf{r}_j$. The excited fluorescent light is mapped onto the sensor through 2D convolutions with the incoherent PSF at the corresponding defocus distance, $h_m(\mathbf{r})$. The sum of contributions from the different layers forms the measured fluorescence intensity

$$I^{(f)}_{j}(\mathbf{r}) = \sum_{m=1}^{M} \left[o_m(\mathbf{r})\, |f_{m,j}(\mathbf{r})|^2\right] \otimes h_m(\mathbf{r}),$$

where $\otimes$ denotes 2D convolution.
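The layer-summed incoherent model can be sketched in a few lines. This is an illustration with our own function name and circular FFT-based convolution, not the paper's implementation.

```python
import numpy as np

def fluorescence_forward(fluo_layers, excitation_layers, psf_layers):
    """Incoherent multi-slice sketch: each layer's fluorescence, excited by
    that layer's speckle intensity, is blurred by the incoherent PSF at that
    layer's defocus, and all layers sum on the sensor. Inputs are lists of
    same-shaped 2D arrays; convolution is circular (fine for a sketch)."""
    out = np.zeros_like(np.asarray(fluo_layers[0], dtype=float))
    for o, s, h in zip(fluo_layers, excitation_layers, psf_layers):
        emission = o * s                         # excited fluorescence, layer m
        H = np.fft.fft2(np.fft.ifftshift(h))     # PSF recentered at the origin
        out += np.real(np.fft.ifft2(np.fft.fft2(emission) * H))
    return out
```

With a single layer and a delta-function PSF the model reduces to the pointwise product of fluorescence and excitation, which is a convenient correctness check.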
2.2.2. Inverse problem for 3D fluorescence imaging
The fluorescence inverse problem takes as input the raw fluorescence intensity measurements, the registered scanning trajectory, and the 3D estimates from the coherent model, in order to estimate the sample's 3D SR fluorescence distribution and the aberrations at the emission wavelength. We also refine the speckle field estimate using the fluorescence measurements.
Based on the incoherent forward model, our 3D SR fluorescence inverse problem minimizes the difference between the measured and predicted fluorescence intensities over the fluorescence distribution, the speckle field, and the pupil function at the emission wavelength. The corresponding gradient derivations and reconstruction algorithm are provided in Appendix A and Appendix B, respectively.
3. Experimental results
Figure 1 shows the experimental setup. A green laser beam (BeamQ, 532 nm, 200 mW) is collimated through a single lens and illuminates the layered Scotch tape element, creating a speckle pattern at the sample. The number of layers of Scotch tape sets the degree of scattering; we use 16 layers here. The layered Scotch tape and the sample are mounted on a 3-axis closed-loop piezo-stage (Thorlabs, MAX311D) and a 1-axis open-loop piezo-stage (Thorlabs, NFL5DP20), respectively, to enable lateral speckle scanning and axial sample scanning. The separation between the tape and the sample is approximately 1 mm, which is the minimal distance we can achieve for high-angle and high-power illumination without physically touching the sample. The transmitted diffracted and fluorescent light from the sample then travels through the subsequent 4f system formed by the objective lens (Nikon, CFI Achro 20×, NA=0.4) and a tube lens. The coherent and fluorescent light have different wavelengths and are optically separated by a dichroic mirror (Thorlabs, DMLP550R), after which the fluorescence is further spectrally filtered before being imaged onto Sensor-F (PCO.edge 5.5). The coherent light is ND-filtered and then split by a beam-splitter onto two sensors (FLIR, BFS-U3-200S6M-C). Sensor-C1 is in focus, while Sensor-C2 is defocused by 3 mm, enabling efficient phase retrieval across a broad swath of spatial frequencies, according to the phase transfer function.
Successful reconstruction relies on appropriate choices for the scanning range and step size. Generally, the translation step size should be 2-3× smaller than the targeted resolution, and the total translation range should be larger than the diffraction-limited spot size of the original system. Our system has a detection NA of 0.4 and a targeted resolution of 500 nm, so a 36 × 36 Cartesian scanning path with a step size of 180 nm is appropriate for 2D SR reconstruction. For coherent imaging, since there is zero axial bandwidth in the coherent TF (Fig. 2(a)), the sample's complete diffraction information is projected axially and encoded in the measurement. This enables SR reconstruction of the sample's 3D quantitative phase from just the translating speckle. Incoherent imaging, however, has optical sectioning due to its torus-shaped TF (Fig. 2(b)); hence, fluorescent light that is outside the DOF of the objective will have weak contrast. In order to reconstruct 3D fluorescence with high fidelity, we add axial scanning to our acquisition scheme.
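The scan-design rules above can be checked against the experiment's own numbers (a simple sanity-check sketch):

```python
# Step should be 2-3x smaller than the target resolution; total range should
# exceed the original diffraction-limited spot size.
target_res = 0.5                      # um, targeted resolution
step = 0.18                           # um, chosen translation step (180 nm)
n_steps = 36                          # 36 x 36 Cartesian scan
spot = 1.33                           # um, diffraction-limited spot (0.4 NA, 532 nm)

scan_range = step * n_steps           # 6.48 um total translation range
assert target_res / 3 <= step <= target_res / 2
assert scan_range > spot
print(f"{scan_range:.2f} um range covers the {spot:.2f} um spot")
```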
A direct combination of lateral xy-scanning of the speckle and axial z-scanning of the sample would result in $N_{xy} \times N_z$ measurements for both channels, where $N_{xy}$ is the number of lateral scan positions and $N_z$ is the number of axial scan positions. Fortunately, there is a high degree of redundancy in this data. As previously stated, the 3D coherent information does not require axial scanning, and the speckle pattern recovered from the coherent channel is used to initialize the fluorescent reconstruction. Thus, only minor refinements are needed for faithful fluorescent reconstruction.
To save acquisition time, we use an interleaved scanning scheme, alternating between axial sample scanning and lateral speckle scanning (Fig. 1). We laterally scan the speckle pattern through 36 × 36 positions, while incrementing the z position for each patch of 12 × 12 positions: the 36 × 36 Cartesian speckle scanning path is divided into 9 blocks of 12 × 12 sub-scanning paths, and each sub-scanning path is associated with one z-scan position. This means the distance from the incident speckle field to the sample varies with the z-scan position, which must be accounted for in the forward model.
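The interleaved schedule described above can be written out explicitly. The block ordering here is an assumption; only the grouping of 12 × 12 lateral patches per z plane follows the text.

```python
# Sketch of the interleaved scan schedule: the 36x36 lateral speckle scan is
# split into a 3x3 grid of 12x12 blocks, each acquired at its own sample z
# position (9 z planes total).
def interleaved_schedule(n_xy=36, block=12):
    nb = n_xy // block                          # blocks per axis (3)
    schedule = []
    z = 0
    for bi in range(nb):
        for bj in range(nb):
            for i in range(block):
                for j in range(block):
                    schedule.append((bi * block + i, bj * block + j, z))
            z += 1                              # next block -> next z plane
    return schedule

sched = interleaved_schedule()
print(len(sched), len({z for _, _, z in sched}))   # 1296 scan points, 9 z planes
```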
3.1. 3D super-resolution demonstration
With a 0.4 NA objective, our system's native lateral resolution is 1.33 μm for coherent imaging and 760 nm for fluorescence (Table 1). The intrinsic DOF is infinite for coherent imaging and 7.3 μm for fluorescence imaging. In order to characterize the resolution capability of our method, we begin by imaging a sample with features below both diffraction limits: a monolayer of fluorescent polystyrene microspheres with diameter 700 nm. We use a z-scan step size of 1 μm across an 8 μm range, fully covering the thickness of the sample. 15 axial layers are assigned to the transmittance function, separated by 1.7 μm based on Nyquist sampling of the expected axial resolution for our 3D reconstruction, resulting in an overall reconstructed axial range that spans the sum of the axial scanning range and the two axial ranges of the effective 3D PSF.
Figure 4 shows that our 3D reconstructions (400 × 400 × 15 voxels) clearly resolve the sub-diffraction individual microspheres and demonstrate better sectioning ability in both the coherent and fluorescent channels compared to standard widefield imaging (without deconvolution). In the reconstruction, the average lateral peak-to-peak distance of these microspheres is around 670 nm, which is smaller than the nominal size of each microsphere; this is likely due to vertically staggered stacking of the microspheres. Given that our lateral resolution is at least 670 nm, we do break the lateral diffraction limit for both coherent and fluorescent channels, and the coherent channel achieves a ≈2× lateral resolution improvement. Axially, we demonstrate 6 μm resolution for both channels, which is beyond the axial diffraction limit for both channels. The coherent channel improves the axial resolution from no sectioning ability to 6 μm. Given the 670 nm lateral resolution in the coherent channel, we can deduce the illumination NA of this speckle to be >0.4, which suggests the speckle intensity grain size is smaller than 670 nm.
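The illumination-NA deduction above is a one-line calculation: a resolved coherent feature spacing of ~670 nm implies λ/(NA_det + NA_illum) ≤ 0.67 μm. Since 670 nm is only an upper bound on the achieved spacing, this yields a lower bound on the speckle's illumination NA, roughly matching the detection NA:

```python
# Back-of-the-envelope bound on the speckle illumination NA from the
# measured coherent resolution.
lam, na_det, spacing = 0.532, 0.4, 0.67   # um, detection NA, resolved spacing (um)
na_illum_min = lam / spacing - na_det
print(f"NA_illum >= {na_illum_min:.2f}")  # -> NA_illum >= 0.39
```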
3.2. 3D large-FOV multimodal demonstration
Next we use the same setup to demonstrate 3D multimodal imaging for our full sensor area (FOV ∼314μm × 500μm). As shown previously, our method achieves ∼0.6×0.6×6μm resolution for both fluorescence and phase imaging over an axial range of ∼24 μm. This corresponds to ∼14 Mega-voxels of information. Our experiments are only a prototype; this technique is scalable to the Gigavoxel range with a higher-throughput objective and higher illumination NA.
Figure 5 shows the full-sensor 3D quantitative phase and fluorescence reconstructions of a multi-size sample (mixed 2 μm and 4 μm fluorescent and 3 μm non-fluorescent polystyrene microspheres). We adopt the same z-scan step size and number of slices as in Fig. 4. Zoom-ins on 2 regions of interest (ROIs) display 4 axial layers each. The arrows highlight 2 μm fluorescent microspheres, which defocus more quickly than the larger ones. The locations of the fluorescent microspheres match well in both channels. However, there are some locations in the fluorescence reconstruction where 4 μm microspheres collapse, because the immersing media dissolves the beads over time.
Finally, we demonstrate our technique on human colorectal adenocarcinoma (HT-29) cells fluorescently tagged with Alexa Fluor 546 phalloidin, which labels F-actin filaments (sample preparation details in Appendix C). We use a z-scan step size of 1.6 μm across a 12.8 μm range and reconstruct 19 axial layers, separated by 1.7 μm. Figure 6 shows the full-sensor 3D quantitative phase and fluorescence reconstructions, with zoom-ins on 2 ROIs. The sample's morphological features, as visualized with quantitative phase, match well with the F-actin visualization of the fluorescent channel. This is expected, since F-actin filaments are generally known to encapsulate the cell body.
Unlike traditional 3D SIM or 3D quantitative phase methods, which use expensive spatial light modulators (SLMs) [70, 71] or galvanometer/MEMS mirrors [57, 72, 73], our technique is relatively simple and inexpensive. Layered Scotch tape efficiently creates speckle patterns whose illumination NA is not limited by the objective, which is hard to achieve with traditional patterning approaches to high-content imaging (e.g. lenslet arrays or grating masks [29–35]). Furthermore, the random structured illumination conveniently multiplexes both phase and fluorescence information into the system's aperture, enabling us to achieve multimodal 3D SR.
One limitation of our technique is that the fluorescent reconstruction relies on the recovered 3D speckle from the coherent imaging channel, so mismatch between the two channels can result in artifacts that degrade resolution. Indeed, the SR gain we achieve experimentally in the fluorescent channel does not match that achieved in the coherent channel. We attribute this mainly to mismatch in axial alignment between the coherent and fluorescent cameras, since the long DOF of the objective made it difficult to axially align the cameras to within the axial resolution limit of the high-resolution speckle pattern. In addition, our 3D coherent reconstruction suffers from coherent noise due to system instabilities during the acquisition process. Specifically, 3D phase information is encoded into the speckle-like (high dynamic range) features within the measurements, which are affected by Poisson noise. These factors reduce performance in both the 3D phase and fluorescence reconstructions.
Another limitation is the relatively long acquisition time: the 1296 translations of the Scotch tape result in an acquisition time of many seconds (without hardware optimization). The number of acquisitions could potentially be reduced with further investigation of the redundancy in the data, which would also reduce the computational processing time for the reconstruction; this currently takes ∼6 hours on an NVIDIA TITAN Xp GPU with MATLAB for each reconstructed patch. Cloud computing could also parallelize the reconstruction by patches.
We have presented a 3D SIM multimodal (phase and fluorescence) technique using Scotch tape as the patterning element. The Scotch tape efficiently generates high-resolution 3D speckle patterns over a large volume, which multiplexes 3D super-resolution phase and fluorescence information into our low-NA imaging system. A computational optimization algorithm based on 3D coherent and incoherent imaging models is developed to both solve the inverse problem and self-calibrate the unknown 3D random speckle illumination and the system's aberrations. The result is 3D sub-diffraction fluorescence reconstruction and 3D sub-diffraction phase reconstruction with ≈2× lateral resolution enhancement. The method is potentially scalable to Gigavoxel imaging.
Appendix A Gradient derivation
A.1. Vectorial notation
In order to derive the gradients for the multivariate optimization problems in Eqs. (4) and (6), it is convenient to represent our 3D coherent and fluorescent models in linear-algebra vectorial notation in the following sections. The fluorescent model takes the same form as Eq. (9), with the TF vector replaced by the pupil vector.
Next we use this vectorial model to represent the coherent and fluorescent cost functions for a single intensity measurement as
A.2. Gradient derivation
A.2.1. Gradient derivation for 3D coherent imaging
To optimize Eq. (4) for the transmittance function, speckle field, and pupil function, we need to take the derivative of the coherent cost function with respect to each. We first express the gradients of all the transmittance function vectors as
As for the gradient of the pupil function, we have
A.2.2. Gradient derivation for 3D fluorescence imaging
To optimize Eq. (6) for the fluorescence distribution, speckle field, and pupil function, we need to take the derivative of the fluorescent cost function with respect to each. First, we express the gradients of the fluorescence distribution vectors from the different layers as
Then, we derive the gradient of the speckle field as
As for the gradient of the pupil function at the fluorescent wavelength, we can express it as
Appendix B Reconstruction algorithm
B.1. Initialization of the variables
Since we use a gradient-based algorithm, we must initialize each output variable, ideally as close as possible to the solution, based on prior knowledge.
For 3D coherent reconstructions, the targeted variables are the transmittance function, the incident speckle field, and the pupil function. We have no prior knowledge of the transmittance function or pupil function, so we set the transmittance to unity at every slice and the pupil to a circle function with radius defined by the detection NA. This initializes with a completely transparent sample and a non-aberrated system. If the sample is mostly transparent, the amplitude of our incident speckle field is well approximated by the overlay of all the in-focus shifted coherent intensities:
For 3D fluorescence reconstruction, the targeted variables are the sample's fluorescence distribution, the incident field, and the pupil function at the emission wavelength. We have no prior knowledge of the system's aberrations, so we set the pupil to a circle function with radius defined by the detection NA at the emission wavelength. For the incident speckle field, we use the estimated speckle field from the coherent reconstruction as our initialization. The key to a successful 3D fluorescence reconstruction with this dataset is an initialization of the sample's 3D fluorescence distribution using a correlation-based SIM solver [53, 74–78] that gives an approximate result to start with. We adapt the correlation-based solver for rough 3D SR fluorescence estimation. The basic idea is to use the knowledge of the illumination speckle intensity from the coherent reconstruction to compute the correlation between the speckle intensity and our fluorescence measurements. This correlation is stronger where the speckle intensity lines up with the fluorescent light it excites in the measurement. Each layer of the estimated speckle intensity gates out out-of-focus fluorescent light in the measurement, so we can get a rough estimate of the 3D fluorescent sample. Mathematically, we express this correlation as
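The correlation-based initialization can be illustrated with a single-layer toy. For simplicity this sketch uses a fresh random speckle pattern per scan position rather than a translated one, and a small box-blur PSF; all sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# At every pixel, correlate the mean-subtracted fluorescence measurements
# with the known speckle intensity across scan positions. Where a fluorophore
# sits, the measurement fluctuates in step with the local speckle; elsewhere
# the correlation averages to zero.
n, n_scan = 32, 2000
fluo = np.zeros((n, n))
fluo[10, 20] = 1.0                      # two point emitters
fluo[25, 5] = 1.0

# Incoherent PSF: 3x3 box blur, applied by circular FFT convolution
h = np.zeros((n, n))
h[n // 2 - 1:n // 2 + 2, n // 2 - 1:n // 2 + 2] = 1.0 / 9
H = np.fft.fft2(np.fft.ifftshift(h))

speckles = np.empty((n_scan, n, n))
meas = np.empty((n_scan, n, n))
for k in range(n_scan):
    s = rng.random((n, n))              # stand-in speckle intensity pattern
    speckles[k] = s
    meas[k] = np.real(np.fft.ifft2(np.fft.fft2(fluo * s) * H))

corr = ((meas - meas.mean(0)) * (speckles - speckles.mean(0))).mean(0)
peaks = np.argsort(corr.ravel())[-2:]   # two strongest pixels = the emitters
```

The two largest correlation values land on the emitter pixels, i.e. the correlation map is already a rough (and sharper-than-widefield) estimate of the fluorescence distribution, which is exactly what the initialization needs.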
To understand why this correlation gives a good estimate of the 3D fluorescent sample, we go through a more detailed derivation, using the short-hand notation Δ, and examine one component of Eq. (22):
B.2. Reconstruction algorithm
3D coherent reconstruction takes about 40 iterations, while the 3D fluorescence reconstruction takes around 25 iterations to reach convergence.
Appendix C Sample preparation
The sample shown in Fig. 4 is a monolayer of 700 nm diameter polystyrene microspheres (ThermoFisher, R700), prepared by placing microsphere dilutions (60 μL stock solution/500 μL isopropyl alcohol) onto #1.5 coverslips and allowing them to air dry. Water is subsequently placed on the coverslip to reduce the index mismatch of the microspheres to the air. An adhesive spacer followed by another #1.5 coverslip was placed on top of the original coverslip to ensure a uniform sample layer for imaging.
The sample used in Fig. 5 is a mixture of 2 μm (ThermoFisher, F8826) and 4 μm (ThermoFisher, F8858) fluorescently-tagged and 3 μm non-fluorescent (Sigma-Aldrich, LB30) polystyrene microspheres. We follow a similar procedure as before, except that the dilution is composed of 60 μL stock solution of each type of microsphere and 500 μL isopropyl alcohol. Since these microspheres are larger, we adopt high-index oil for sample immersion.
Figure 6 uses a sample of HT-29 cells grown in DMEM with 10% FBS, trypsinized with 1× trypsin, passaged twice a week into 100 mm dishes at 1/5, 1/6, or 1/8 dilutions, and stored in a 37 °C, 5% CO2 incubator. For imaging, HT-29 cells were grown on glass coverslips (12 mm diameter, No. 1 thickness; Carolina Biological Supply Co.) and fixed with 3% paraformaldehyde for 20 min. Fixed cells were blocked and permeabilized in phosphate-buffered saline (PBS; Corning Cellgro) with 5% donkey serum (D9663, Sigma-Aldrich) and 0.3% Triton X-100 (Fisher Scientific) for 30 minutes. Cells were incubated with Alexa Fluor 546 Phalloidin (A22283, ThermoFisher Scientific) for 1 hour, washed 3 times with PBS, mounted onto a second glass coverslip (24 × 50 mm, No. 1.5 thickness; Fisher Scientific), and immobilized with sealant (Cytoseal 60; Thermo Scientific).
STROBE: A National Science Foundation Science & Technology Center (DMR 1548924); Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative (GBMF4562); Chan Zuckerberg Biohub; Ruth L. Kirschstein National Research Service Award (F32GM129966).
The authors declare that there are no conflicts of interest related to this article.
2. M. H. Kim, Y. Park, D. Seo, Y. J. Lim, D.-I. Kim, C. W. Kim, and W. H. Kim, “Virtual microscopy as a practical alternative to conventional microscopyin pathology education,” Basic Appl. Pathol. 1, 46–48 (2008). [CrossRef]
3. F. R. Dee, “Virtual microscopy in pathology education,” Human Pathol 40, 1112–1121 (2009). [CrossRef]
5. J. C. Yarrow, G. Totsukawa, G. T. Charras, and T. J. Mitchison, “Screening for cell migration inhibitors via automated microscopy reveals a Rho-kinase Inhibitor,” Chem. Biol. 12, 385–395 (2005). [CrossRef] [PubMed]
6. V. Laketa, J. C. Simpson, S. Bechtel, S. Wiemann, and R. Pepperkok, “High-content microscopy identifies new neurite outgrowth regulators,” Mol. Biol. Cell 18, 242–252 (2007). [CrossRef]
8. U. S. Eggert, A. A. Kiger, C. Richter, Z. E. Perlman, N. Perrimon, T. J. Mitchison, and C. M. Field, “Parallel chemical genetic and genome-wide RNAi screens identify cytokinesis inhibitors and targets,” PLoS Biol. 2, e379 (2004). [CrossRef] [PubMed]
10. W. Lukosz, “Optical systems with resolving powers exceeding the classical limit. II,” J. Opt. Soc. Am. 57, 932–941 (1967). [CrossRef]
13. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19, 780–782 (1994). [CrossRef] [PubMed]
14. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006). [CrossRef] [PubMed]
16. R. Heintzmann and C. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” Proc. SPIE 3568, 185–196 (1999). [CrossRef]
21. A. Greenbaum, W. Luo, B. Khademhosseinieh, T.-W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Scientific reports 3: 1717 (2013). [CrossRef]
22. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photon. 7, 739–745 (2013). [CrossRef]
23. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). [CrossRef] [PubMed]
24. L. Tian, Z. Liu, L. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2, 904–911 (2015). [CrossRef]
25. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015). [CrossRef]
26. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3, 827–835 (2016). [CrossRef]
28. A. Pan, Y. Zhang, K. Wen, M. Zhou, J. Min, M. Lei, and B. Yao, “Subwavelength resolution Fourier ptychography with hemispherical digital condensers,” Opt. Express 26, 23119–23131 (2018). [CrossRef] [PubMed]
32. A. Orth, M. J. Tomaszewski, R. N. Ghosh, and E. Schonbrun, “Gigapixel multispectral microscopy,” Optica 2, 654–662 (2015). [CrossRef]
34. S. Pang, C. Han, J. Erath, A. Rodriguez, and C. Yang, “Wide field-of-view Talbot grid-based microscopy for multicolor fluorescence imaging,” Opt. Express 21, 14555–14565 (2013). [CrossRef] [PubMed]
35. S. Chowdhury, J. Chen, and J. Izatt, “Structured illumination fluorescence microscopy using Talbot self-imaging effect for high-throughput visualization,” arXiv 1801.03540 (2018).
36. K. Guo, Z. Zhang, S. Jiang, J. Liao, J. Zhong, Y. C. Eldar, and G. Zheng, “13-fold resolution gain through turbid layer via translated unknown speckle illumination,” Biomed. Opt. Express 9, 260–274 (2018). [CrossRef] [PubMed]
37. M. Jang, Y. Horie, A. Shibukawa, J. Brake, Y. Liu, S. M. Kamali, A. Arbabi, H. Ruan, A. Faraon, and C. Yang, “Wavefront shaping with disorder-engineered metasurfaces,” Nat. Photon. 12, 84–90 (2018). [CrossRef]
40. S. Chowdhury, W. J. Eldridge, A. Wax, and J. A. Izatt, “Structured illumination multimodal 3D-resolved quantitative phase and fluorescence sub-diffraction microscopy,” Biomed. Opt. Express 8, 2496–2518 (2017). [CrossRef] [PubMed]
41. S. Chowdhury, W. J. Eldridge, A. Wax, and J. A. Izatt, “Structured illumination microscopy for dual-modality 3D sub-diffraction resolution fluorescence and refractive-index reconstruction,” Biomed. Opt. Express 8, 5776–5793 (2017). [CrossRef]
42. M. Schürmann, G. Cojoc, S. Girardo, E. Ulbricht, J. Guck, and P. Müller, “Three-dimensional correlative single-cell imaging utilizing fluorescence and refractive index tomography,” J. Biophoton. 2017, e201700145 (2017).
43. S. Shin, D. Kim, K. Kim, and Y. Park, “Super-resolution three-dimensional fluorescence and optical diffraction tomography of live cells using structured illumination generated by a digital micromirror device,” arXiv 1801.00854 (2018).
44. D. Li, L. Shao, B.-C. Chen, X. Zhang, M. Zhang, B. Moses, D. E. Milkie, J. R. Beach, J. A. Hammer, M. Pasham, T. Kirchhausen, M. A. Baird, M. W. Davidson, P. Xu, and E. Betzig, “Extended-resolution structured illumination imaging of endocytic and cytoskeletal dynamics,” Science 349, aab3500 (2015). [CrossRef]
45. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. L. Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photon. 6, 312–315 (2012). [CrossRef]
46. R. Ayuk, H. Giovannini, A. Jost, E. Mudry, J. Girard, T. Mangeat, N. Sandeau, R. Heintzmann, K. Wicker, K. Belkebir, and A. Sentenac, “Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm,” Opt. Lett. 38, 4723–4726 (2013). [CrossRef] [PubMed]
47. J. Min, J. Jang, D. Keum, S.-W. Ryu, C. Choi, K.-H. Jeong, and J. C. Ye, “Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery,” Scientific Reports 3, 2075:1–6 (2013). [CrossRef]
48. S. Dong, P. Nanda, R. Shiradkar, K. Guo, and G. Zheng, “High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography,” Opt. Express 22, 20856–20870 (2014). [CrossRef] [PubMed]
49. H. Yilmaz, E. G. van Putten, J. Bertolotti, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Speckle correlation resolution enhancement of wide-field fluorescence imaging,” Optica 2, 424–429 (2015). [CrossRef]
50. A. Jost, E. Tolstik, P. Feldmann, K. Wicker, A. Sentenac, and R. Heintzmann, “Optical sectioning and high resolution in single-slice structured illumination microscopy by thick slice blind-SIM reconstruction,” PLoS ONE 10, e0132174 (2015). [CrossRef] [PubMed]
51. A. Negash, S. Labouesse, N. Sandeau, M. Allain, H. Giovannini, J. Idier, R. Heintzmann, P. C. Chaumet, K. Belkebir, and A. Sentenac, “Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations,” J. Opt. Soc. Am. A 33, 1089–1094 (2016). [CrossRef]
52. S. Labouesse, M. Allain, J. Idier, S. Bourguignon, A. Negash, P. Liu, and A. Sentenac, “Joint reconstruction strategy for structured illumination microscopy with unknown illuminations,” arXiv 1607.01980 (2016).
54. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1, 153–156 (1969). [CrossRef]
55. V. Lauer, “New approach to optical diffraction tomography yielding a vector equation of diffraction tomography and a novel tomographic microscope,” J. Microsc. 205, 165–176 (2002). [CrossRef]
56. M. Debailleul, B. Simon, V. Georges, O. Haeberlé, and V. Lauer, “Holographic microscopy and diffractive microtomography of transparent samples,” Meas. Sci. Technol. 19, 074009 (2008). [CrossRef]
57. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17, 266–277 (2009). [CrossRef] [PubMed]
58. M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94, 4957–4970 (2008). [CrossRef] [PubMed]
59. J. M. Cowley and A. F. Moodie, “The scattering of electrons by atoms and crystals. I. A new theoretical approach,” Acta Crystallographica 10, 609–619 (1957). [CrossRef]
60. A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29, 1606–1614 (2012). [CrossRef]
62. C. J. R. Sheppard, Y. Kawata, S. Kawata, and M. Gu, “Three-dimensional transfer functions for high-aperture systems,” J. Opt. Soc. Am. A 11, 593–598 (1994). [CrossRef]
63. M. Gu, Advanced Optical Imaging Theory (Springer, 2000). [CrossRef]
64. M. Debailleul, V. Georges, B. Simon, R. Morin, and O. Haeberlé, “High-resolution three-dimensional tomographic diffractive microscopy of transparent inorganic and biological samples,” Opt. Lett. 34, 79–81 (2009). [CrossRef]
65. J. W. Goodman, Introduction to Fourier Optics (Roberts & Co., 2005).
67. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23, 33213–33238 (2015). [CrossRef]
68. L. Bottou, “Large-scale machine learning with stochastic gradient descent,” International Conference on Computational Statistics, pp. 177–187 (2010).
69. Z. Jingshan, R. A. Claus, J. Dauwels, L. Tian, and L. Waller, “Transport of intensity phase imaging by intensity spectrum fitting of exponentially spaced defocus planes,” Opt. Express 22, 10661–10674 (2014). [CrossRef] [PubMed]
70. R. Förster, H.-W. Lu-Walther, A. Jost, M. Kielhorn, K. Wicker, and R. Heintzmann, “Simple structured illumination microscope setup with high acquisition speed by using a spatial light modulator,” Opt. Express 22, 20663–20677 (2014). [CrossRef] [PubMed]
71. S. Chowdhury, W. J. Eldridge, A. Wax, and J. Izatt, “Refractive index tomography with structured illumination,” Optica 4, 537–545 (2017). [CrossRef]
72. D. Dan, M. Lei, B. Yao, W. Wang, M. Winterhalder, A. Zumbusch, Y. Qi, L. Xia, S. Yan, Y. Yang, P. Gao, T. Ye, and W. Zhao, “DMD-based LED-illumination super-resolution and optical sectioning microscopy,” Scientific Reports 3, 1116 (2013). [CrossRef] [PubMed]
74. T. Tanaami, S. Otsuki, N. Tomosada, Y. Kosugi, M. Shimizu, and H. Ishida, “High-speed 1-frame/ms scanning confocal microscope with a microlens and Nipkow disks,” Appl. Opt. 41, 4704–4708 (2002). [CrossRef]
75. J. G. Walker, “Non-scanning confocal fluorescence microscopy using speckle illumination,” Opt. Commun. 189, 221–226 (2001). [CrossRef]
76. S.-H. Jiang and J. G. Walker, “Experimental confirmation of non-scanning fluorescence confocal microscopy using speckle illumination,” Opt. Commun. 238, 1–12 (2004). [CrossRef]
77. J. García, Z. Zalevsky, and D. Fixler, “Synthetic aperture superresolution by speckle pattern projection,” Opt. Express 13, 6075–6078 (2005). [CrossRef]