Optica Publishing Group

Computational structured illumination for high-content fluorescence and phase microscopy

Open Access

Abstract

High-content biological microscopy targets high-resolution imaging across large fields-of-view (FOVs). Recent works have demonstrated that computational imaging can provide efficient solutions for high-content microscopy. Here, we use speckle structured illumination microscopy (SIM) as a robust and cost-effective solution for high-content fluorescence microscopy with simultaneous high-content quantitative phase (QP). This multi-modal compatibility is essential for studies requiring cross-correlative biological analysis. Our method uses laterally-translated Scotch tape to generate high-resolution speckle illumination patterns across a large FOV. Custom optimization algorithms then jointly reconstruct the sample’s super-resolution fluorescent (incoherent) and QP (coherent) distributions, while digitally correcting for system imperfections such as unknown speckle illumination patterns, system aberrations and pattern translations. Beyond previous linear SIM works, we achieve resolution gains of 4× the objective’s diffraction-limited native resolution, resulting in 700 nm fluorescence and 1.2 μm QP resolution, across a FOV of 2 × 2.7 mm², giving a space-bandwidth product (SBP) of 60 megapixels.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The space-bandwidth product (SBP) metric characterizes information content transmitted through an optical system; it can be thought of as the number of resolvable points in an image (i.e. the system’s field-of-view (FOV) divided by the size of its point spread function (PSF) [1, 2]). Typical microscopes collect images with SBPs of <20 megapixels, a practical limit set by the systems’ optical design and camera pixel count. For large-scale biological studies in systems biology and drug discovery, fast high-SBP imaging is desired [3–10]. The traditional solution for increasing SBP is to use an automated translation stage to scan the sample laterally, then stitch together high-content images. However, such capabilities are costly, have long acquisition times and require careful auto-focusing, due to small depth-of-field (DOF) and axial drift of the sample over large scan ranges [11].

Instead of using high-resolution optics and mechanically scanning the FOV, new approaches for high-content imaging use a low-NA objective (with a large FOV) and build up higher resolution by computationally combining a sequence of low-resolution measurements [12–25]. Such approaches typically illuminate the sample with customized patterns that encode high-resolution sample information into low-resolution features, which can then be measured. These methods reconstruct features smaller than the diffraction limit of the objective, using concepts from synthetic aperture [26–28] and super-resolution (SR) [29–34]. Though the original intent was to maximize resolution, it is important to note that by increasing resolution, SR techniques also increase SBP, and therefore have application in high-content microscopy. Eliminating the requirement for long-distance mechanical scanning means that acquisition is faster and less expensive, while focus requirements are also relaxed by the larger DOF of low-NA objectives.

Existing high-content methods generally use either an incoherent imaging model to reconstruct fluorescence [18–25], or a coherent model to reconstruct absorption and quantitative phase (QP) [12–17]. Both have achieved gigapixel-scale SBP (milli-/centi- meter scale FOV with sub-micron resolution). However, none have demonstrated cross-compatibility with both coherent (phase) and incoherent (fluorescence) imaging. Here, we demonstrate multi-modal high-content imaging via a computational imaging framework that allows super-resolution fluorescence and QP. Our method is based on structured illumination microscopy (SIM), which is compatible with both incoherent [26, 32, 33, 36] and coherent [37–42] sources of contrast [35, 43–45].

Though most SIM implementations have focused on super-resolution, some previous works have recognized its suitability for high-content imaging [18–24]. However, these predominantly relied on fluorescence imaging with calibrated illumination patterns, which are difficult to realize in practice because lens-based illumination has finite SBP. Here, we use random speckle illumination, generated by scattering through Scotch tape, in order to achieve both high-NA and large-FOV illumination. Our method is related to blind SIM [46]; however, instead of using many random speckle patterns (which restricts resolution gain to ∼1.8×), we translate the speckle laterally, enabling resolution gains beyond those of previous methods [46–52] (see Appendix D). Previous works also use high-cost spatial light modulators (SLMs) [53] or galvanometer/MEMS mirrors [41, 54] for precise illumination, as well as expensive objective lenses for aberration correction. We eliminate both of these requirements by performing computational self-calibration, solving for the translation trajectory and the field-dependent aberrations of the system.

Our proposed framework enables three key advantages over existing methods:

  • resolution gains of 4× the native resolution of the objective (linear SIM is usually restricted to 2×) [46–52, 55, 56],
  • synergistic use of both the fluorescent (incoherent) and quantitative-phase (coherent) signal from the sample to enable multi-modal imaging,
  • algorithmic self-calibration to significantly relax hardware requirements, enabling low-cost and robust imaging.

In our experimental setup, the Scotch tape is placed just before the sample and mounted on a translation stage (Fig. 1). This generates disordered speckles at the sample that are much smaller than the PSF of the imaging optics, encoding SR information. Nonlinear optimization methods are then used to jointly reconstruct multiple calibration quantities: the unknown speckle illumination pattern, the translation trajectory of the pattern, and the field-dependent system aberrations (on a patch-by-patch basis). These are subsequently used to decode the SR information of both fluorescence and phase. Compared to traditional SIM systems that use high-NA objective lenses, our system utilizes a low-NA low-cost lens to ensure large FOV. The Scotch tape generated speckle illumination is not resolution-bound by any imaging lens; this is what allows us to achieve 4× resolution gains. The result is high-content imaging at sub-micron resolutions across millimeter scale regions. Various previous works have achieved cost-effectiveness, high-content (large SBP), or multiple modalities, but we believe this to be the first to simultaneously encompass all three.


Fig. 1 Structured illumination microscopy (SIM) with laterally-translated Scotch tape as the patterning element, achieving 4× resolution gain. Our imaging system has both an incoherent arm, where Sensor-F captures raw fluorescence images (at the emission wavelength, λem=605 nm) for fluorescence super-resolution, and a coherent arm, where Sensor-C1 and Sensor-C2 capture images with different defocus (at the laser illumination wavelength, λex=532 nm) for both super-resolution phase reconstruction and speckle trajectory calibration. OBJ: objective, AP: adjustable iris-aperture, DM: dichroic mirror, SF: spectral filter, ND-F: neutral-density filter.


2. Theory

SIM generally achieves super-resolution by illuminating the sample with a high spatial-frequency pattern that mixes with the sample’s information content to form low-resolution "beat" patterns (i.e. moiré fringes). Measurements of these "beat" patterns allow elucidation of sample features beyond the diffraction-limited resolution of the imaging system. The maximum achievable resolution in SIM is set by the sum of the numerical apertures (NAs) of the illumination pattern, NAillum, and the imaging system, NAsys. Thus, SIM enables a resolution gain factor (over the system’s native resolution) of (NAillum+NAsys)/NAsys [33]. The minimum resolvable feature size is inversely related to this bound, d ∝ 1/(NAillum+NAsys).
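To make the bound concrete, the resolution-gain formula can be evaluated with this work's approximate parameters. The sketch below is illustrative (not from the paper's code); the function name and the factor-of-two convention for incoherent versus coherent imaging follow the definitions in Secs. 2.1 and 2.2.

```python
# Hedged sketch: SIM resolution bound d = lambda / (k (NAsys + NAillum)),
# with k = 2 for incoherent (fluorescence) and k = 1 for coherent (phase) imaging.
def sim_resolution_nm(wavelength_nm, na_sys, na_illum, coherent=False):
    k = 1.0 if coherent else 2.0
    return wavelength_nm / (k * (na_sys + na_illum))

# Numbers from this work: NAsys = 0.1 objective, speckle NAillum ~ 0.3.
gain = (0.3 + 0.1) / 0.1                                    # resolution gain factor, 4x
d_fluor = sim_resolution_nm(605, 0.1, 0.3)                  # fluorescence channel (nm)
d_phase = sim_resolution_nm(532, 0.1, 0.3, coherent=True)   # QP channel (nm)
```

With the slightly larger measured speckle content (NAillum ≈ 0.35, Sec. 3.1.1), the same formula gives roughly the 700 nm fluorescence and 1.2 μm phase resolutions quoted in the abstract.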

Linear SIM typically maximizes resolution by using either: 1) a high-NA objective in epi-illumination configuration, or 2) two identical high-NA objectives in transmission geometry [33, 35]. Both result in a maximum of 2× resolution gain because NAillum=NAsys, which corresponds to an SBP increase by a factor of 4×. Given the relatively low native SBP of high-NA imaging lenses, such increases are not sufficient to qualify as high-content imaging. Though nonlinear SIM techniques can enable higher resolution gains [34], they require either fluorophore photo-switching or saturation capabilities, which can cause photobleaching and suffer from low SNR, and they are not compatible with coherent QP techniques.

In this work, we aim for >2× resolution gain; hence, we need the illumination NA to be larger than the detection NA, without using a high-resolution illumination lens (which would restrict the illumination FOV). To achieve this, we place a wide-area, high-angle scattering element (layered Scotch tape) on the illumination side of the sample (Fig. 1). Multiple scattering within the tape creates a speckle pattern with finer features than the PSF of the imaging system, i.e. NAillum>NAsys. This means that spatial frequencies beyond 2× the objective’s cutoff are mixed into the measurements, making resolution gains greater than 2× achievable.

The following sections outline the algorithm that we use to reconstruct large SBP fluorescence and QP images from low-resolution acquisitions of a sample illuminated by a laterally-translating speckle pattern. Unlike conventional SIM reconstruction methods that use analytic linear inversion, our strategy relies instead on joint-variable iterative optimization, where both the sample and illumination speckle (which is unknown) are reconstructed [25, 55, 56].

2.1. Super-resolution fluorescence imaging

Fluorescence imaging requires an incoherent imaging model. The intensity at the sensor is a low-resolution image of the sample’s fluorescent distribution, obeying the system’s incoherent resolution limit, dsys = λem/2NAsys, where λem is the emission wavelength. The speckle pattern generated through the Scotch tape excites the fluorescent sample with features of minimum size dillum = λex/2NAillum, where λex is the excitation wavelength and NAillum is set by the scattering angles exiting the Scotch tape. Approximating the excitation and emission wavelengths as similar (λ = λex ≈ λem), the resolution limit of the SIM reconstruction is dSIM ≈ λ/2(NAsys+NAillum), with a resolution gain factor of dsys/dSIM. This factor is mathematically unbounded; however, it is practically limited by the illumination NA and SNR (see Appendix D).

2.1.1. Incoherent forward model for fluorescence imaging

Plane-wave illumination of the Scotch tape, positioned at the l-th scan point, rl, creates a speckle illumination pattern, pf(r − rl), at the plane of the fluorescent sample, of(r), where subscript f identifies variables in the fluorescence channel. The fluorescent signal is imaged through the system to give an intensity image at the camera plane:

$$I_{f,l}(\mathbf{r}) = \left[ o_f(\mathbf{r}) \cdot C\{p_f(\mathbf{r}-\mathbf{r}_l)\} \right] * h_f(\mathbf{r}), \qquad l = 1,\ldots,N_{\mathrm{img}},$$
where $\mathbf{r}$ is the 2D spatial coordinate $(x,y)$, $*$ denotes convolution, $h_f(\mathbf{r})$ is the system PSF, and $N_{\mathrm{img}}$ is the total number of images captured. The subscript $l$ denotes the acquisition index.

In this formulation, of(r), hf(r), and If,l(r) are 2D M×M-pixel distributions. To accurately model different regions of the pattern translating into the object’s M×M FOV with incrementing rl, we initialize pf(r) as an N×N-pixel 2D distribution, with N>M, and introduce a cropping operator C to select the M×M region of the scanning pattern that illuminates the sample.
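This forward model can be sketched in a few lines of NumPy. The names below (`crop`, `forward_fluorescence`) are hypothetical, and the convolution with hf is carried out in Fourier space via the OTF:

```python
import numpy as np

# Minimal sketch of Eq. (1) (illustrative names, not the paper's code):
# crop the N x N speckle at scan offset r_l, multiply by the object, and
# blur with the incoherent PSF by multiplying with the OTF in Fourier space.
def crop(pattern, shift, M):
    """C{p_f(r - r_l)}: M x M window of the pattern at integer offset `shift`."""
    y, x = shift
    return pattern[y:y + M, x:x + M]

def forward_fluorescence(obj, pattern, otf, shift):
    """I_{f,l} = [o_f . C{p_f(r - r_l)}] * h_f, with * computed via FFTs."""
    excited = obj * crop(pattern, shift, obj.shape[0])   # speckle-excited sample
    return np.real(np.fft.ifft2(np.fft.fft2(excited) * otf))

# Toy check with a flat (all-pass) OTF, i.e. a delta-function PSF.
rng = np.random.default_rng(0)
obj = rng.random((8, 8))
pattern = rng.random((16, 16))
img = forward_fluorescence(obj, pattern, np.ones((8, 8)), (3, 5))
```

In the toy check, the all-ones OTF makes the output equal the speckle-excited sample, which makes the cropping and multiplication steps easy to verify before a realistic OTF is substituted.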

2.1.2. Inverse problem for fluorescence imaging

We next formulate a joint-variable optimization problem to extract SR estimates of the sample, of(r), and illumination distribution, pf(r), from the raw fluorescence measurements, If,l(r), as well as refine the estimates of the system’s PSF [25] (aberrations) and speckle translation trajectory, rl. We start with a crude initialization from raw observations of the speckle made using the coherent imaging arm (more details in Sec. 2.3). Defining ff(of, pf, hf, r1, …, rNimg) as a joint-variable cost function that measures the difference between the raw intensity acquisitions and the expected intensities from the estimated variables via the forward model, we have:

$$\min_{o_f,\, p_f,\, h_f,\, \mathbf{r}_1,\ldots,\mathbf{r}_{N_{\mathrm{img}}}} f_f(o_f, p_f, h_f, \mathbf{r}_1,\ldots,\mathbf{r}_{N_{\mathrm{img}}}) = \sum_{l=1}^{N_{\mathrm{img}}} f_{f,l}(o_f, p_f, h_f, \mathbf{r}_l),$$
$$\text{where}\quad f_{f,l}(o_f, p_f, h_f, \mathbf{r}_l) = \sum_{\mathbf{r}} \left| I_{f,l}(\mathbf{r}) - \left[ o_f(\mathbf{r}) \cdot C\{p_f(\mathbf{r}-\mathbf{r}_l)\} \right] * h_f(\mathbf{r}) \right|^2.$$

To solve, a sequential gradient descent [57, 58] algorithm is used, where the gradient is updated once for each measurement. The sample, speckle pattern, system PSF and scanning positions are updated by sequentially running through the Nimg measurements within one iteration. After the sequential update, an extra Nesterov accelerated update [59] is included for both the sample and pattern estimates, to speed up convergence. Appendix A contains a detailed derivation of the gradients with respect to the sample, structured pattern, system PSF and scanning position based on linear-algebra vectorial notation. The algorithm is described in Appendix B.
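The update loop described above can be sketched schematically as follows. This toy version (all names illustrative, not the paper's code) updates only the object, with the pattern and positions held known, and assumes a delta-function PSF so the per-measurement gradient reduces to a pixel-wise product:

```python
import numpy as np

# Schematic of sequential gradient descent with Nesterov acceleration
# (Sec. 2.1.2), reduced to a toy: only the object is updated; the pattern
# and positions are known; the PSF is a delta function.
rng = np.random.default_rng(1)
M = 16
true_obj = rng.random((M, M))
patterns = [rng.random((M, M)) + 0.5 for _ in range(20)]  # stand-ins for C{p_f(r - r_l)}
measurements = [true_obj * p for p in patterns]           # forward model with h_f = delta

obj = np.ones((M, M))      # current object estimate
obj_prev = obj.copy()      # previous estimate, kept for the momentum term
step = 0.05
for it in range(200):
    v = obj + (it / (it + 3)) * (obj - obj_prev)          # Nesterov extrapolation
    obj_prev = obj.copy()
    for I_l, p_l in zip(measurements, patterns):          # one sequential sweep
        residual = I_l - v * p_l                          # data mismatch for image l
        v = v + step * p_l * residual                     # gradient step, one image
    obj = v
```

In the full algorithm, the same sequential sweep also carries analogous gradient steps for the pattern, OTF and scan positions (Appendix A).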

2.2. Super-resolution quantitative-phase imaging

In this section, we present our coherent model for SR quantitative-phase (QP) imaging. A key difference between the QP and fluorescence imaging processes is that the detected intensity at the image plane for coherent imaging is nonlinearly related to the sample’s QP [1, 38]. Thus, solving for a sample’s QP from a single intensity measurement is a nonlinear and ill-posed problem. To circumvent this, we use intensity measurements from two planes, one in-focus and one out-of-focus, to introduce a complex-valued operator that couples QP variations into measurable intensity fluctuations, making the reconstruction well-posed [60, 61]. The defocused measurements are denoted by a new subscript variable z. Figure 1 shows our implementation, where two defocused sensors are positioned at z0 and z1 in the coherent imaging arm.

Generally, the resolution for coherent imaging is roughly half that of its incoherent counterpart [1]. For our QP reconstruction, the resolution limit is dSIM = λex/(NAsys+NAillum), where the coherent resolutions of the native system and the speckle are dsys = λex/NAsys and dillum = λex/NAillum, respectively.

2.2.1. Coherent forward model for phase imaging

Assuming an object with 2D complex transmittance function oc(r) is illuminated by a speckle field, pc(r), where subscript c refers to the coherent imaging channel, positioned at the l-th scanning position rl, we can represent the intensity image formed via coherent diffraction as:

$$I_{c,l}^{z}(\mathbf{r}) = \left| \left[ o_c(\mathbf{r}) \cdot C\{p_c(\mathbf{r}-\mathbf{r}_l)\} \right] * h_{c,z}(\mathbf{r}) \right|^{2} = \left| g_{c,l}^{z}(\mathbf{r}) \right|^{2}, \qquad l = 1,\ldots,N_{\mathrm{img}}, \quad z = z_0, z_1,$$
where gc,lz(r) and hc,z(r) are the complex electric-field at the imaging plane and the system’s coherent PSF at defocus distance z, respectively. The comma in the subscript separates the channel index, c or f, from the scanning-position and acquisition-number indices, l and z. Nimg here indicates the total number of translations of the Scotch tape. The defocused PSF can be further decomposed as hc,z(r) = hc(r) ∗ hz(r), where hc(r) is the in-focus coherent PSF and hz(r) is the defocus kernel. Similar to Section 2.1.1, Ic,lz(r), oc(r), and hc,z(r) are 2D distributions with dimensions of M×M pixels, while pc(r) is of size N×N pixels (N>M). C is a cropping operator that selects the sub-region of the pattern that interacts with the sample. The sample’s QP distribution is simply the phase of the object’s complex transmittance, oc(r).
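A minimal NumPy sketch of this coherent forward model follows (hypothetical function names; the pupil and defocus factor are given as Fourier-space masks):

```python
import numpy as np

# Minimal sketch of Eq. (3) for one scan position and one defocus plane
# (illustrative names): the complex field is filtered by the in-focus pupil
# times a defocus factor in Fourier space, and the sensor records |g|^2.
def coherent_forward(obj_c, pattern_c, pupil, defocus, shift):
    M = obj_c.shape[0]
    y, x = shift
    field = obj_c * pattern_c[y:y + M, x:x + M]       # speckle-illuminated field
    g = np.fft.ifft2(np.fft.fft2(field) * pupil * defocus)
    return np.abs(g) ** 2                             # measured intensity

M = 8
rng = np.random.default_rng(2)
obj_c = np.exp(1j * rng.random((M, M)))               # pure-phase object, |o_c| = 1
pattern_c = rng.random((16, 16)) * np.exp(1j * rng.random((16, 16)))
in_focus = coherent_forward(obj_c, pattern_c, np.ones((M, M)), np.ones((M, M)), (2, 4))
```

Note that with an ideal all-pass pupil and no defocus, the pure-phase object leaves no trace in the intensity (only the speckle amplitude appears), which is exactly the ill-posedness that the second, defocused plane resolves.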

2.2.2. Inverse problem for phase imaging

We now take the raw coherent intensity measurements, Ic,lz(r), and the registered trajectory, rlz, from both of the defocused coherent sensors (more details in Sec. 2.3) as input to jointly estimate the sample’s SR complex-transmittance function, oc(r), and illumination complex-field, pc(r), as well as the aberrations inherent in the system’s PSF, hc(r). The optimization also further refines the scanning trajectory, rlz. Based on the forward model, we formulate the joint inverse problem:

$$\min_{o_c,\, p_c,\, h_c,\, \mathbf{r}_1^{z_0}, \mathbf{r}_1^{z_1},\ldots,\, \mathbf{r}_{N_{\mathrm{img}}}^{z_0}, \mathbf{r}_{N_{\mathrm{img}}}^{z_1}} f_c\!\left(o_c, p_c, h_c, \mathbf{r}_1^{z_0}, \mathbf{r}_1^{z_1},\ldots, \mathbf{r}_{N_{\mathrm{img}}}^{z_0}, \mathbf{r}_{N_{\mathrm{img}}}^{z_1}\right) = \sum_{l,z} f_{c,l}^{z}(o_c, p_c, h_c, \mathbf{r}_l^{z}),$$
$$\text{where}\quad f_{c,l}^{z}(o_c, p_c, h_c, \mathbf{r}_l^{z}) = \sum_{\mathbf{r}} \left| \sqrt{I_{c,l}^{z}(\mathbf{r})} - \left| \left[ o_c(\mathbf{r}) \cdot C\{p_c(\mathbf{r}-\mathbf{r}_l^{z})\} \right] * h_{c,z}(\mathbf{r}) \right| \right|^{2}.$$

Here, we adopt an amplitude-based cost function, fc, which robustly minimizes the distance between the estimated and measured amplitudes in the presence of noise [57, 61, 62]. We optimize the pattern trajectories, rlz0 and rlz1, separately for each coherent sensor, in order to account for any residual misalignment or timing mismatch (see Sec. 2.3). As in the fluorescence case, sequential gradient descent [57, 58] is used to solve this inverse problem.

2.3. Registration of coherent images

Knowledge of the Scotch tape scanning position, rl, reduces the complexity of the joint sample and pattern estimation problem and is necessary to achieve SR reconstructions with greater than 2× resolution gain. Because our fluorescent sample is mostly transparent, the main scattering component in the acquired raw data originates from the Scotch tape. Thus, using a sub-pixel registration algorithm [63] between successive coherent-camera acquisitions, which are dominated by the scattered speckle signal, is sufficient to initialize the scanning trajectory of the Scotch tape,

$$\mathbf{r}_l^{z} = R\!\left[ I_{c,1}^{z}(\mathbf{r}),\, I_{c,l}^{z}(\mathbf{r}) \right],$$
where R is the registration operator. These initial estimates of rlz are then updated, alongside of(r), oc(r), pf(r), and pc(r) using the inverse models described in Sec. 2.1.2 and 2.2.2. In the fluorescence problem described in Sec. 2.1.2, we only use the trajectory from the in-focus coherent sensor at z = 0 for initialization, so we omit the subscript z in rlz.
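The registration operator R can be illustrated with a basic integer-pixel phase correlation. The paper uses the efficient sub-pixel algorithm of [63]; the sketch below (illustrative names) conveys only the coarse, integer-pixel idea:

```python
import numpy as np

def register_shift(ref, moved):
    """Integer-pixel phase-correlation estimate of the circular shift s such
    that moved(r) = ref(r - s). A stand-in for the sub-pixel method of [63]."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    R /= np.abs(R) + 1e-12                      # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(R))             # correlation peaks at the shift s
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    size = np.array(corr.shape, dtype=float)
    peak[peak > size / 2] -= size[peak > size / 2]   # wrap to signed shifts
    return peak

# Toy check: circularly shift a speckle-like image and recover the shift.
rng = np.random.default_rng(3)
speckle = rng.random((64, 64))
moved = np.roll(speckle, (5, -3), axis=(0, 1))
dy, dx = register_shift(speckle, moved)
```

Because the raw coherent images are dominated by the translated speckle, registering each frame against the first frame, as above, initializes the trajectory for the joint optimization.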

3. Experimental results

Figure 1 shows our experimental setup. A green laser beam (BeamQ, 532 nm, 200 mW) is collimated through a single lens. The resulting plane wave illuminates the layered Scotch tape (4 layers of 3M 810 Scotch Tape, S-9783), creating a speckle pattern at the sample. The Scotch tape is mounted on a 3-axis piezo-stage (Thorlabs, MAX311D) to enable lateral speckle scanning. The transmitted light from the sample then travels through a 4f system formed by the objective lens (OBJ) and a single lens. In order to control the NA of our detection system (necessary for our verification experiment), an extra 4f system with an adjustable iris-aperture (AP) in the Fourier space is added. Then, the coherent and fluorescent light are optically separated by a dichroic mirror (DM, Thorlabs, DMLP550R), since they have different wavelengths. The fluorescence is further spectrally filtered (SF) before imaging onto Sensor-F (PCO.edge 5.5). The (much brighter) coherent light is ND-filtered and then split by another beam-splitter before falling on the two defocused coherent sensors, Sensor-C1 and Sensor-C2 (FLIR, BFS-U3-200S6M-C). Sensor-C1 is focused on the sample, while Sensor-C2 is defocused by 0.8 mm.

For our initial verification experiments, we use a 40× objective (Nikon, CFI Achro 40×) with NA=0.65 as our system’s microscope objective (OBJ). Later high-content experimental demonstrations switch to a 4× objective (Nikon, CFI Plan Achro 4×) with NA=0.1.

3.1. Super-resolution verification

3.1.1. Fluorescence super-resolution verification

We start with a proof-of-concept experiment to verify that our method accurately reconstructs a fluorescent sample at resolutions greater than twice the imaging system’s diffraction-limit. To do so, we use the higher-resolution objective (40×, NA 0.65) and a tunable Fourier-space iris-aperture (AP) that allows us to artificially reduce the system’s NA (NAsys), and therefore, resolution. With the aperture mostly closed (to NAsys=0.1), we acquire a low-resolution SIM dataset, which is then used to computationally reconstruct a super-resolved image of the sample with resolution corresponding to an effective NA = 0.4. This reconstruction is then compared to the widefield image of the sample acquired with the aperture open to NAsys=0.4, for validation.


Fig. 2 Verification of fluorescence super-resolution with 4× resolution gain. Widefield images, for comparison, were acquired at (a) 0.1 NA and (e) 0.4 NA by adjusting the aperture size. (b) The Scotch tape speckle pattern creates much higher spatial frequencies (∼0.35 NA) than the 0.1 NA detection system can measure. (c) Using the 0.1 NA aperture, we acquire low-resolution fluorescence images for different lateral positions of the Scotch tape. (d) The reconstructed SIM image contains spatial frequencies up to ∼0.4 NA and is in agreement with (e) the deconvolved widefield image with the system operating at 0.4 NA.


Results are shown in Fig. 2, comparing our method against widefield fluorescence images at NAs of 0.1 and 0.4, with no Scotch tape in place. The sample is a monolayer of 1 μm diameter microspheres, with center emission wavelength λem=605 nm. At 0.1 NA, the expected resolution is λem/2NA ≈ 3.0 μm and the microspheres are completely unresolvable. At 0.4 NA, the expected resolution is λem/2NA ≈ 0.76 μm and the microspheres are well-resolved. With Scotch tape and 0.1 NA, we acquire a set of measurements as we translate the speckle pattern in 267 nm increments on a 26 × 26 rectangular grid (Nimg = 676 acquisitions total; details in Sec. 4).

Figure 2(d) shows the final SR reconstruction of the fluorescent sample in real space, along with the amplitude of its Fourier spectrum. Individual microspheres can be clearly resolved, and results match well with the 0.4 NA deconvolved widefield image (Fig. 2(e)). Fourier-space analysis confirms our resolution improvement factor to be 4×, which suggests that the Scotch tape produces NAillum ≈ 0.3. To verify, we fully open the aperture and observe that the speckle pattern contains spatial frequencies up to NAillum ≈ 0.35 (Fig. 2(b)).

3.1.2. Coherent super-resolution verification

To quantify super-resolution in the coherent imaging channel, we use the low-resolution objective (4×, NA 0.1) to image a USAF1951 resolution chart (Benchmark Technologies). This phase target provides different feature sizes with known phase values, so is a suitable calibration target to quantify both the coherent resolution and the phase sensitivity of our technique.

Results are shown in Fig. 3. The coherent intensity image (Fig. 3(a)) acquired with 0.1 NA (no tape) has low resolution (5.32 μm), so hardly any features can be resolved. In Fig. 3(b), we show the “ground truth” QP distribution at 0.4 NA, as provided by the manufacturer.


Fig. 3 Verification of coherent quantitative phase (QP) super-resolution with 4× resolution gain. (a) Low-resolution intensity image and (b) “ground truth” phase at NA=0.4, for comparison. (c) Raw acquisitions of the speckle-illuminated sample intensity from two focus planes, collected with 0.1 NA. (d) Reconstructed SR amplitude and QP, demonstrating 4× resolution gain.


After inserting the Scotch tape, it was translated in 400 nm increments on a 36 × 36 rectangular grid, giving Nimg = 1296 total acquisitions (details in Sec. 4) at each of the two defocused coherent sensors (Fig. 3(c)). Figure 3(d) shows the SR reconstruction for the amplitude and phase of this sample, resolving features up to group 9, element 5 (1.23 μm separation). Thus, our coherent reconstruction has a 4× resolution gain compared to the brightfield intensity image.

3.2. High-content multi-modal microscopy

Of course, artificially reducing resolution in order to validate our method required using a moderate-NA objective, which precluded imaging over the large FOVs allowed by low-NA objectives. In this section, we demonstrate high-content fluorescence imaging with the low-resolution, large FOV objective (4×, NA 0.1) to visualize a 2.7 × 3.3 mm² FOV (see Fig. 4(a)). We note that this FOV is more than 100× larger than that allowed by the 40× objective used in the verification experiments, so is suitable for large SBP imaging.

Within the imaged FOV for our 1 μm diameter microsphere monolayer sample, we zoom in to four regions-of-interest (ROI), labeled ①, ②, ③, and ④. Widefield fluorescence imaging cannot resolve individual microspheres, as expected. Using our method, however, gives a factor 4× resolution gain across the whole FOV and enables resolution of individual microspheres. Thus, the SBP of the system, natively ∼5.3 megapixels of content, was increased to ∼85 megapixels, a factor of 4² = 16×. Though this is still not in the gigapixel range, this technique is scalable and could reach that range with a higher-SBP objective and sensors.

We next include the QP imaging channel to demonstrate high-content multimodal imaging, as shown in Fig. 5. The multimodal FOV is smaller (2 × 2.7 mm² FOV) than that presented in Fig. 4 because our coherent detection sensors have a lower pixel-count than our fluorescence detection sensor. Figure 5 includes zoom-ins of three ROIs to visualize the multimodal SR.

As expected, the widefield fluorescence image and the on-axis coherent intensity image do not allow resolution of individual 2 μm microspheres, since the theoretical resolution for fluorescence imaging is λem/2NAsys ≈ 3 μm and for QP imaging is λex/NAsys ≈ 5 μm. However, our SIM reconstruction with 4× resolution gain enables clear separation of the microspheres in both channels. Our fluorescence and QP reconstructions match well, which is expected since the fluorescent and QP signal originate from identical physical structures in this particular sample.

The full-FOV reconstructions (Fig. 4 and 5) are obtained by dividing the FOV into small patches, reconstructing each patch, then stitching together the high-content images. Patch-wise reconstruction is computationally favorable because of its low-memory requirement, but also allows us to correct field-dependent aberrations. Since we process each patch separately using our self-calibration algorithm, we solve for each patch’s PSF independently and correct the local aberrations digitally. The reconstruction takes approximately 15 minutes for each channel on a high-end GPU (NVIDIA, TITAN Xp) for a patch with FOV of 110 × 110 μm².
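The patch-wise pipeline can be sketched as follows. Here `process` stands in for the per-patch SIM reconstruction (which, in the real pipeline, carries its own self-calibrated PSF), and uniform-weight blending in the overlaps is a simplification of the actual stitching:

```python
import numpy as np

def stitch_patches(image, patch, step, process):
    """Sketch of Sec. 3.2 patch-wise processing: run `process` on each
    overlapping patch and blend results with a uniform-weight accumulator."""
    H, W = image.shape
    out = np.zeros((H, W))
    weight = np.zeros((H, W))
    for y in range(0, H - patch + 1, step):
        for x in range(0, W - patch + 1, step):
            out[y:y + patch, x:x + patch] += process(image[y:y + patch, x:x + patch])
            weight[y:y + patch, x:x + patch] += 1.0
    return out / np.maximum(weight, 1.0)   # average where patches overlap

# Toy check with an identity "reconstruction": stitching must return the input.
img = np.arange(64.0).reshape(8, 8)
restored = stitch_patches(img, patch=4, step=2, process=lambda p: p)
```

Processing patches independently is what makes the field-dependent aberration correction possible: each call to `process` can estimate its own local PSF.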


Fig. 4 Reconstructed super-resolution fluorescence with 4× resolution gain across the full FOV (See Visualization 1). Four zoom-ins of regions-of-interest (ROIs) are compared to their widefield counterparts.



Fig. 5 Reconstructed multimodal (fluorescence and quantitative phase) high-content imaging (See Visualization 2 and Visualization 3). Zoom-ins for three ROIs compare the widefield, super-resolved fluorescence, coherent intensity, and super-resolved phase reconstructions.


4. Discussion

Unlike many existing high-content imaging techniques, one benefit of our method is its easy compatibility with simultaneous QP and fluorescence imaging. This arises from SIM’s unique ability to multiplex both coherent and incoherent signals into the system aperture [35]. Furthermore, existing high-content fluorescence imaging techniques that use micro-lens arrays [18–23] are resolution-limited by the physical size of the lenslets, which typically have NAillum < 0.3. Recent work [24] has introduced a framework in which gratings with sub-diffraction slits allow sub-micron resolution across large FOVs; however, this approach is heavily limited by SNR, due to the primarily opaque grating, and by tight axial-alignment requirements. Though the Scotch tape used in our proof-of-concept prototype also induced illumination angles within a similar range as micro-lens arrays (NAillum ≈ 0.35), we could in future use a stronger scattering medium to achieve NAillum ≈ 1.0, enabling further SR and thus larger SBP.

The main drawback of our technique is that we use around 1200 translations of the Scotch tape for each reconstruction, which results in long acquisition times (∼180 seconds for shifting, pausing, and capturing) and higher photon requirements. Heuristically, for both fluorescence and QP imaging, we found that a sufficiently large scanning range (larger than 2 low-NA diffraction-limited spot sizes) and finer scan steps (smaller than the targeted resolution) reduce distortions in the reconstruction. Tuning these parameters to minimize the number of acquisitions without degrading reconstruction quality is thus an important subject for future work.

5. Conclusion

We have presented a large-FOV multimodal SIM fluorescence and QP imaging technique. We use Scotch tape to efficiently generate high-resolution features over a large FOV, which can then be measured with both fluorescent and coherent contrast using a low-NA objective. A computational optimization-based self-calibration algorithm corrected for experimental uncertainties (scanning-position, aberrations, and random speckle pattern) and enabled super-resolution fluorescence and quantitative phase reconstruction with factor 4× resolution gain.

Appendix A: Gradient derivation

A.1. Vectorial notation

A.1.1. Fluorescence imaging vectorial model

In order to solve the multivariate optimization problems in Eqs. (2) and (4) and derive the gradients of the cost functions, it is convenient to consider a linear-algebra vectorial notation of the forward models. The fluorescence SIM forward model in Eq. (1) can alternatively be expressed as

$$\mathbf{I}_{f,l} = \mathbf{H}_f\,\mathrm{diag}(\mathbf{S}(\mathbf{r}_l)\mathbf{p}_f)\,\mathbf{o}_f,$$
where I_f,l, H_f, S(r_l), p_f, and o_f designate the raw fluorescent intensity vector, the diffraction-limited low-pass filtering operation, the pattern translation/cropping operation, the N²×1 speckle pattern vector, and the M²×1 fluorescent sample distribution vector, respectively. The 2D-array variables described in (1) are all reshaped into column vectors here. H_f and S(r_l) can be further factored into their individual components:
$$\mathbf{H}_f = \mathbf{F}_M^{-1}\,\mathrm{diag}(\tilde{\mathbf{h}}_f)\,\mathbf{F}_M, \qquad \mathbf{S}(\mathbf{r}_l) = \mathbf{Q}\,\mathbf{F}_N^{-1}\,\mathrm{diag}(\mathbf{e}(\mathbf{r}_l))\,\mathbf{F}_N,$$
where h̃_f is the OTF vector and e(r_l) is the vectorization of the exp(j2πu·r_l) function, with u the spatial frequency. The notation diag(a) turns an n×1 vector, a, into an n×n diagonal matrix whose diagonal entries come from the vector entries. F_N and F_M denote the N×N-point and M×M-point 2D discrete Fourier transform matrices, respectively, and Q is the M²×N² cropping matrix.
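The factorization of H_f can be verified numerically; the check below (illustrative, shown in 1D for brevity) confirms that the explicit matrix reproduces FFT-domain filtering:

```python
import numpy as np

# Numerical check of H = F^{-1} diag(h~) F on a small 1-D example:
# applying the explicit matrix must match FFT-domain filtering.
n = 8
F = np.fft.fft(np.eye(n))                  # n-point DFT matrix
F_inv = np.conj(F).T / n                   # its inverse
rng = np.random.default_rng(4)
otf = rng.random(n) + 1j * rng.random(n)   # h~, the OTF vector
H = F_inv @ np.diag(otf) @ F               # explicit filtering matrix

o = rng.random(n)
matrix_result = H @ o
fft_result = np.fft.ifft(otf * np.fft.fft(o))
```

In practice the matrices are never formed explicitly; the FFT side of the identity is what an implementation would use, since H_f would be M²×M² for an M×M image.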

With this vectorial notation, the cost function for a single fluorescence measurement is

$$f_{f,l}(\mathbf{o}_f, \mathbf{p}_f, \tilde{\mathbf{h}}_f, \mathbf{r}_l) = \mathbf{f}_{f,l}^{T}\mathbf{f}_{f,l} = \left\| \mathbf{I}_{f,l} - \mathbf{H}_f\,\mathrm{diag}(\mathbf{S}(\mathbf{r}_l)\mathbf{p}_f)\,\mathbf{o}_f \right\|_{2}^{2},$$
where $\mathbf{f}_{f,l} = \mathbf{I}_{f,l} - \mathbf{H}_f\,\mathrm{diag}(\mathbf{S}(\mathbf{r}_l)\mathbf{p}_f)\,\mathbf{o}_f$ is the cost vector and $T$ denotes the transpose operation.

A.1.2. Coherent imaging vectorial model

As with the fluorescence vectorial model, we can rewrite Eq. (3) using vectorial notation:

$$\mathbf{I}_{c,l}^{z} = \left| \mathbf{g}_{c,l}^{z} \right|^{2},$$
where
$$\mathbf{g}_{c,l}^{z} = \mathbf{H}_{c,z}\,\mathrm{diag}(\mathbf{S}(\mathbf{r}_l^{z})\mathbf{p}_c)\,\mathbf{o}_c, \qquad \mathbf{H}_{c,z} = \mathbf{F}_{M}^{-1}\,\mathrm{diag}(\tilde{\mathbf{h}}_c)\,\mathrm{diag}(\tilde{\mathbf{h}}_z)\,\mathbf{F}_{M}.$$

o_c and p_c are the M²×1 sample transmittance-function vector and the N²×1 structured-field vector, respectively. h̃_c and h̃_z are the system pupil function and the deliberate-defocus pupil function, respectively. With this vectorial notation, we can then express the cost function for a single coherent intensity measurement as

$$f_{c,l}^{z}(\mathbf{o}_c, \mathbf{p}_c, \tilde{\mathbf{h}}_c, \mathbf{r}_l^{z}) = \mathbf{f}_{c,l}^{z\,T}\mathbf{f}_{c,l}^{z} = \left\| \sqrt{\mathbf{I}_{c,l}^{z}} - \left| \mathbf{g}_{c,l}^{z} \right| \right\|_{2}^{2},$$
where $\mathbf{f}_{c,l}^{z} = \sqrt{\mathbf{I}_{c,l}^{z}} - |\mathbf{g}_{c,l}^{z}|$ is the cost vector for the coherent intensity measurement.

A.2. Gradient derivation

A.2.1. Gradient derivation for fluorescence imaging

To optimize Eq. (2) for the variables o_f, p_f, h̃_f and r_l, we first derive the necessary gradients of the fluorescence cost function. Taking the gradient of f_f,l with respect to o_f, we can represent the 1×M² gradient row vector as

$$\frac{\partial f_{f,l}}{\partial \mathbf{o}_f} = \frac{\partial f_{f,l}}{\partial \mathbf{f}_{f,l}}\,\frac{\partial \mathbf{f}_{f,l}}{\partial \mathbf{o}_f} = \left(2\mathbf{f}_{f,l}^{T}\right)\left(-\mathbf{H}_f\,\mathrm{diag}(\mathbf{S}(\mathbf{r}_l)\mathbf{p}_f)\right).$$

Turning the row gradient vector into a M2×1 column vector in order to update the object vector in the right dimension, we the final gradient becomes

offf,l=(ff,lof)T=2diag(S(rl)pf)HfTff,l.
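A gradient of this form is easy to verify numerically against finite differences. The sketch below uses a tiny 1D analogue with explicit DFT matrices (illustrative sizes and data, not the authors' code):

```python
import numpy as np

# 1D analogue of f_{f,l} = ||I - H diag(S(r)p) o||^2 with grad_o = -2 diag(S(r)p) H^T f.
n = 16
rng = np.random.default_rng(1)
F = np.fft.fft(np.eye(n))             # DFT matrix F_n
Finv = np.conj(F).T / n               # inverse DFT matrix
h = rng.random(n)                     # real PSF
H = (Finv @ np.diag(F @ h) @ F).real  # real circulant convolution matrix
sp = rng.random(n)                    # the shifted pattern S(r) p_f
o = rng.random(n)
I = rng.random(n)                     # "measured" intensities

def cost(o):
    res = I - H @ (sp * o)
    return res @ res

# The analytic column-vector gradient derived above.
grad = -2 * sp * (H.T @ (I - H @ (sp * o)))

# Central finite difference along one coordinate agrees with the analytic gradient.
eps = 1e-6
e3 = np.zeros(n); e3[3] = 1.0
fd = (cost(o + eps * e3) - cost(o - eps * e3)) / (2 * eps)
```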

To compute the gradient with respect to $\mathbf{p}_f$, we first rewrite the cost vector $\mathbf{f}_{f,l}$ as

$$\mathbf{f}_{f,l} = \mathbf{I}_{f,l} - H_f\,\mathrm{diag}(\mathbf{o}_f)\,S(\mathbf{r}_l)\,\mathbf{p}_f.$$

Now, we can write the gradient of the cost function with respect to the pattern vector in row and column vector form as

$$\frac{\partial f_{f,l}}{\partial \mathbf{p}_f} = \left(\frac{\partial f_{f,l}}{\partial \mathbf{f}_{f,l}}\right)\left(\frac{\partial \mathbf{f}_{f,l}}{\partial \mathbf{p}_f}\right) = (2\mathbf{f}_{f,l}^T)\left(-H_f\,\mathrm{diag}(\mathbf{o}_f)\,S(\mathbf{r}_l)\right),$$
$$\nabla_{\mathbf{p}_f} f_{f,l} = \left(\frac{\partial f_{f,l}}{\partial \mathbf{p}_f}\right)^T = -2\,S(\mathbf{r}_l)^T\,\mathrm{diag}(\mathbf{o}_f)\,H_f^T\,\mathbf{f}_{f,l}.$$

To derive the gradient with respect to the OTF, similar to the derivation of the pattern gradient, it is easier to work with a rewritten form of the cost vector, expressed as

$$\mathbf{f}_{f,l} = \mathbf{I}_{f,l} - F_M^{-1}\,\mathrm{diag}\!\left(F_M\,\mathrm{diag}(S(\mathbf{r}_l)\mathbf{p}_f)\,\mathbf{o}_f\right)\tilde{\mathbf{h}}_f.$$

The gradient of the cost function with respect to the OTF vector, in row and column vector form respectively, is

$$\frac{\partial f_{f,l}}{\partial \tilde{\mathbf{h}}_f} = \left(\frac{\partial f_{f,l}}{\partial \mathbf{f}_{f,l}}\right)\left(\frac{\partial \mathbf{f}_{f,l}}{\partial \tilde{\mathbf{h}}_f}\right) = (2\mathbf{f}_{f,l}^T)\left(-F_M^{-1}\,\mathrm{diag}\!\left(F_M\,\mathrm{diag}(S(\mathbf{r}_l)\mathbf{p}_f)\,\mathbf{o}_f\right)\right),$$
$$\nabla_{\tilde{\mathbf{h}}_f} f_{f,l} = \left(\frac{\partial f_{f,l}}{\partial \tilde{\mathbf{h}}_f}\right)^\dagger = -2\,\mathrm{diag}\!\left(\overline{F_M\,\mathrm{diag}(S(\mathbf{r}_l)\mathbf{p}_f)\,\mathbf{o}_f}\right)F_M\,\mathbf{f}_{f,l},$$

where $\bar{\mathbf{a}}$ denotes the entry-wise complex conjugate of a general vector $\mathbf{a}$. One difference from the previous gradients is that the variable to solve for, $\tilde{\mathbf{h}}_f$, is now a complex vector. When turning the gradient row vector of a complex variable into a column vector, we must take a Hermitian (conjugate-transpose) operation, $(\cdot)^\dagger$, on the row vector, following the conventions in [64]. We will encounter more complex variables in the coherent model gradient derivation.
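The defining property of such a complex (CR-calculus) gradient is that a small step along its negative direction decreases the real-valued cost. The 1D toy below checks this for the OTF gradient; sizes, data, and the explicit $1/n$ DFT normalization factor are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy check of the complex OTF gradient:
# cost(h~) = ||I - F^{-1} diag(F{o.p}) h~||^2,  grad = -2 conj(F{o.p}) * F{residual} / n.
n = 32
rng = np.random.default_rng(2)
op = rng.random(n)                               # o(r).C{p(r - r_l)}, 1D stand-in
I = rng.random(n)
h_tilde = rng.random(n) + 1j * rng.random(n)     # complex OTF estimate
g_hat = np.fft.fft(op)

def cost(ht):
    res = I - np.fft.ifft(g_hat * ht)
    return np.vdot(res, res).real

res = I - np.fft.ifft(g_hat * h_tilde)
grad = -2 * np.conj(g_hat) * np.fft.fft(res) / n  # 1/n accounts for the unnormalized DFT

# Descending along the conjugate gradient must reduce the cost for a small step.
alpha = 1e-4
h_new = h_tilde - alpha * grad
```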

To take the gradient with respect to the scanning position, we again rewrite the cost vector $\mathbf{f}_{f,l}$:

$$\mathbf{f}_{f,l} = \mathbf{I}_{f,l} - H_f\,\mathrm{diag}(\mathbf{o}_f)\,Q\,F_N^{-1}\,\mathrm{diag}(F_N\mathbf{p}_f)\,\mathbf{e}(\mathbf{r}_l).$$

We can then write the gradient of the cost function with respect to the scanning position as

$$\frac{\partial f_{f,l}}{\partial q_l} = \left(\frac{\partial f_{f,l}}{\partial \mathbf{f}_{f,l}}\right)\left(\frac{\partial \mathbf{f}_{f,l}}{\partial \mathbf{e}(\mathbf{r}_l)}\right)\left(\frac{\partial \mathbf{e}(\mathbf{r}_l)}{\partial q_l}\right) = (2\mathbf{f}_{f,l}^T)\left(-H_f\,\mathrm{diag}(\mathbf{o}_f)\,Q\,F_N^{-1}\,\mathrm{diag}(F_N\mathbf{p}_f)\right)\left(\mathrm{diag}(j2\pi\mathbf{u}_q)\,\mathbf{e}(\mathbf{r}_l)\right),$$

where $q$ is either the $x$ or $y$ spatial coordinate component of $\mathbf{r}_l$, and $\mathbf{u}_q$ is the $N^2\times 1$ vectorization of the spatial frequency function in the $q$ direction.

To numerically evaluate these gradients, we represent them in functional form:

$$\begin{aligned}
\nabla_{o_f} f_{f,l}(o_f, p_f, h_f, \mathbf{r}_l) &= -2\, C\{p_f(\mathbf{r}-\mathbf{r}_l)\}\cdot\Big[h_f^*(-\mathbf{r}) \otimes \big(I_{f,l}(\mathbf{r}) - [o_f(\mathbf{r})\cdot C\{p_f(\mathbf{r}-\mathbf{r}_l)\}]\otimes h_f(\mathbf{r})\big)\Big],\\
\nabla_{p_f} f_{f,l}(o_f, p_f, h_f, \mathbf{r}_l) &= -2\,\delta(\mathbf{r}+\mathbf{r}_l) \otimes P\Big\{o_f(\mathbf{r})\cdot\Big[h_f^*(-\mathbf{r}) \otimes \big(I_{f,l}(\mathbf{r}) - [o_f(\mathbf{r})\cdot C\{p_f(\mathbf{r}-\mathbf{r}_l)\}]\otimes h_f(\mathbf{r})\big)\Big]\Big\},\\
\nabla_{\tilde h_f} f_{f,l}(o_f, p_f, h_f, \mathbf{r}_l) &= -2\,\big(\mathcal{F}\{o_f(\mathbf{r})\cdot C\{p_f(\mathbf{r}-\mathbf{r}_l)\}\}\big)^*\cdot \mathcal{F}\big\{I_{f,l}(\mathbf{r}) - [o_f(\mathbf{r})\cdot C\{p_f(\mathbf{r}-\mathbf{r}_l)\}]\otimes h_f(\mathbf{r})\big\},\\
\nabla_{q_l} f_{f,l}(o_f, p_f, h_f, \mathbf{r}_l) &= -2\sum_{\mathbf{r}} \Big(I_{f,l}(\mathbf{r}) - [o_f(\mathbf{r})\cdot C\{p_f(\mathbf{r}-\mathbf{r}_l)\}]\otimes h_f(\mathbf{r})\Big)\cdot\Big(h_f(\mathbf{r}) \otimes \Big[o_f(\mathbf{r})\cdot C\Big\{\frac{\partial p_f(\mathbf{r}-\mathbf{r}_l)}{\partial q_l}\Big\}\Big]\Big),
\end{aligned}$$

where $a^*$ stands for the complex conjugate of a general function $a$, $\otimes$ denotes 2D convolution, $\mathcal{F}$ is the Fourier transform operator, and $P$ is a zero-padding operator that pads an $M\times M$ image to $N\times N$ pixels. In this form, $I_{f,l}(\mathbf{r})$, $o_f(\mathbf{r})$, and $h_f(\mathbf{r})$ are 2D $M\times M$ images, while $p_f(\mathbf{r})$ is an $N\times N$ image. The gradients for the sample and the structured pattern have the same sizes as $o_f(\mathbf{r})$ and $p_f(\mathbf{r})$, respectively. Ideally, the gradient of the scanning position in each direction is a real number; however, due to the imperfect implementation of the discrete differentiation in each direction, it acquires a small imaginary part, which we drop when updating the scanning position.

A.2.2. Gradient derivation for coherent imaging

For the coherent imaging case, we derive the gradients of the cost function in Eq. (11) with respect to the sample transmittance function $\mathbf{o}_c$, the speckle field $\mathbf{p}_c$, the pupil function $\tilde{\mathbf{h}}_c$, and the scanning position $\mathbf{r}_l^z$. First, taking the gradient of $f_{c,l}^z$ with respect to $\mathbf{o}_c$, we have, in row and column vector form,

$$\frac{\partial f_{c,l}^z}{\partial \mathbf{o}_c} = \left(\frac{\partial f_{c,l}^z}{\partial \mathbf{f}_{c,l}^z}\right)\left(\frac{\partial \mathbf{f}_{c,l}^z}{\partial \mathbf{g}_{c,l}^z}\right)\left(\frac{\partial \mathbf{g}_{c,l}^z}{\partial \mathbf{o}_c}\right) = (2\mathbf{f}_{c,l}^{z\,T})\left(-\frac{1}{2}\,\mathrm{diag}\!\left(\frac{\overline{\mathbf{g}_{c,l}^z}}{|\mathbf{g}_{c,l}^z|}\right)\right)\left(H_{c,z}\,\mathrm{diag}(S(\mathbf{r}_l^z)\mathbf{p}_c)\right),$$
$$\nabla_{\mathbf{o}_c} f_{c,l}^z = \left(\frac{\partial f_{c,l}^z}{\partial \mathbf{o}_c}\right)^\dagger = -\,\mathrm{diag}\!\left(\overline{S(\mathbf{r}_l^z)\mathbf{p}_c}\right)H_{c,z}^\dagger\,\mathrm{diag}\!\left(\frac{\mathbf{g}_{c,l}^z}{|\mathbf{g}_{c,l}^z|}\right)\mathbf{f}_{c,l}^z,$$

where $\frac{\mathbf{g}_{c,l}^z}{|\mathbf{g}_{c,l}^z|}$ denotes entry-wise division between the two vectors $\mathbf{g}_{c,l}^z$ and $|\mathbf{g}_{c,l}^z|$. The detailed calculation of $\frac{\partial \mathbf{f}_{c,l}^z}{\partial \mathbf{g}_{c,l}^z}$ can be found in the Appendix of [57].
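This amplitude-based gradient can also be checked with a toy descent step. The 1D sketch below (illustrative sizes and data, not the authors' code) builds a phase-only transfer function, evaluates the gradient above, and verifies that a small step decreases the cost:

```python
import numpy as np

# 1D toy of the amplitude-based coherent cost f = ||sqrt(I) - |H diag(S(r)p_c) o_c| ||^2
# and its gradient with respect to o_c.
n = 32
rng = np.random.default_rng(3)
F = np.fft.fft(np.eye(n)); Finv = np.conj(F).T / n
h_tilde = np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # phase-only "pupil"
H = Finv @ np.diag(h_tilde) @ F
s = rng.random(n) + 1j * rng.random(n)               # shifted speckle field S(r) p_c
o = rng.random(n) + 1j * rng.random(n)               # complex transmittance o_c
sqrtI = rng.random(n) + 0.5                          # measured amplitudes sqrt(I)

def cost(o):
    r = sqrtI - np.abs(H @ (s * o))
    return r @ r

g = H @ (s * o)
res = sqrtI - np.abs(g)
# grad = -diag(conj(S(r)p_c)) H^dagger diag(g/|g|) f  (column-vector form above)
grad_o = -np.conj(s) * (np.conj(H).T @ ((g / np.abs(g)) * res))

alpha = 1e-4
o_new = o - alpha * grad_o
```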

Next, we take the gradient with respect to the pattern field vector $\mathbf{p}_c$, writing the corresponding row and column vectors as

$$\frac{\partial f_{c,l}^z}{\partial \mathbf{p}_c} = \left(\frac{\partial f_{c,l}^z}{\partial \mathbf{f}_{c,l}^z}\right)\left(\frac{\partial \mathbf{f}_{c,l}^z}{\partial \mathbf{g}_{c,l}^z}\right)\left(\frac{\partial \mathbf{g}_{c,l}^z}{\partial \mathbf{p}_c}\right) = (2\mathbf{f}_{c,l}^{z\,T})\left(-\frac{1}{2}\,\mathrm{diag}\!\left(\frac{\overline{\mathbf{g}_{c,l}^z}}{|\mathbf{g}_{c,l}^z|}\right)\right)\left(H_{c,z}\,\mathrm{diag}(\mathbf{o}_c)\,S(\mathbf{r}_l^z)\right),$$
$$\nabla_{\mathbf{p}_c} f_{c,l}^z = \left(\frac{\partial f_{c,l}^z}{\partial \mathbf{p}_c}\right)^\dagger = -\,S(\mathbf{r}_l^z)^\dagger\,\mathrm{diag}(\overline{\mathbf{o}_c})\,H_{c,z}^\dagger\,\mathrm{diag}\!\left(\frac{\mathbf{g}_{c,l}^z}{|\mathbf{g}_{c,l}^z|}\right)\mathbf{f}_{c,l}^z.$$

To calculate $\frac{\partial \mathbf{g}_{c,l}^z}{\partial \mathbf{p}_c}$, we reorder the entry-wise multiplication of $\mathbf{o}_c$ and $S(\mathbf{r}_l^z)\mathbf{p}_c$, as we did when deriving the pattern gradient for fluorescence imaging.

To perform aberration correction, we need to estimate the system pupil function, $\tilde{\mathbf{h}}_c$. The gradient with respect to the pupil function can be derived as

$$\frac{\partial f_{c,l}^z}{\partial \tilde{\mathbf{h}}_c} = \left(\frac{\partial f_{c,l}^z}{\partial \mathbf{f}_{c,l}^z}\right)\left(\frac{\partial \mathbf{f}_{c,l}^z}{\partial \mathbf{g}_{c,l}^z}\right)\left(\frac{\partial \mathbf{g}_{c,l}^z}{\partial \tilde{\mathbf{h}}_c}\right) = (2\mathbf{f}_{c,l}^{z\,T})\left(-\frac{1}{2}\,\mathrm{diag}\!\left(\frac{\overline{\mathbf{g}_{c,l}^z}}{|\mathbf{g}_{c,l}^z|}\right)\right)\left(F_M^{-1}\,\mathrm{diag}\!\left[F_M\,\mathrm{diag}(S(\mathbf{r}_l^z)\mathbf{p}_c)\,\mathbf{o}_c\right]\mathrm{diag}(\tilde{\mathbf{h}}_z)\right),$$
$$\nabla_{\tilde{\mathbf{h}}_c} f_{c,l}^z = \left(\frac{\partial f_{c,l}^z}{\partial \tilde{\mathbf{h}}_c}\right)^\dagger = -\,\mathrm{diag}(\overline{\tilde{\mathbf{h}}_z})\,\mathrm{diag}\!\left[\overline{F_M\,\mathrm{diag}(S(\mathbf{r}_l^z)\mathbf{p}_c)\,\mathbf{o}_c}\right]F_M\,\mathrm{diag}\!\left(\frac{\mathbf{g}_{c,l}^z}{|\mathbf{g}_{c,l}^z|}\right)\mathbf{f}_{c,l}^z.$$

Finally, the gradient of the scanning position for refinement can be derived as

$$\begin{aligned}
\frac{\partial f_{c,l}^z}{\partial q_l^z} &= \left(\frac{\partial f_{c,l}^z}{\partial \mathbf{f}_{c,l}^z}\right)\left[\left(\frac{\partial \mathbf{f}_{c,l}^z}{\partial \mathbf{g}_{c,l}^z}\right)\left(\frac{\partial \mathbf{g}_{c,l}^z}{\partial \mathbf{e}(\mathbf{r}_l^z)}\right)\left(\frac{\partial \mathbf{e}(\mathbf{r}_l^z)}{\partial q_l}\right) + \left(\frac{\partial \mathbf{f}_{c,l}^z}{\partial \overline{\mathbf{g}_{c,l}^z}}\right)\left(\frac{\partial \overline{\mathbf{g}_{c,l}^z}}{\partial \overline{\mathbf{e}(\mathbf{r}_l^z)}}\right)\left(\frac{\partial \overline{\mathbf{e}(\mathbf{r}_l^z)}}{\partial q_l}\right)\right]\\
&= 2\,(2\mathbf{f}_{c,l}^{z\,T})\,\mathrm{Re}\left\{\left(-\frac{1}{2}\,\mathrm{diag}\!\left(\frac{\overline{\mathbf{g}_{c,l}^z}}{|\mathbf{g}_{c,l}^z|}\right)\right)\left(H_{c,z}\,\mathrm{diag}(\mathbf{o}_c)\,Q\,F_N^{-1}\,\mathrm{diag}(F_N\mathbf{p}_c)\right)\left(\mathrm{diag}(j2\pi\mathbf{u}_q)\,\mathbf{e}(\mathbf{r}_l^z)\right)\right\}\\
&= -2\,\mathrm{Re}\left\{\mathbf{f}_{c,l}^{z\,T}\,\mathrm{diag}\!\left(\frac{\overline{\mathbf{g}_{c,l}^z}}{|\mathbf{g}_{c,l}^z|}\right)H_{c,z}\,\mathrm{diag}(\mathbf{o}_c)\,Q\,F_N^{-1}\,\mathrm{diag}(F_N\mathbf{p}_c)\,\mathrm{diag}(j2\pi\mathbf{u}_q)\,\mathbf{e}(\mathbf{r}_l^z)\right\},
\end{aligned}$$

where $q$ is either the $x$ or $y$ spatial coordinate component of $\mathbf{r}_l^z$.

To numerically evaluate these gradients, we represent them, as we did for the fluorescence model, in functional form:

$$\begin{aligned}
\nabla_{o_c} f_{c,l}^z(o_c, p_c, h_c, \mathbf{r}_l^z) &= -\,C\{p_c^*(\mathbf{r}-\mathbf{r}_l^z)\}\cdot\left[h_{c,z}^*(-\mathbf{r}) \otimes \left(\left(\frac{\sqrt{I_{c,l}^z(\mathbf{r})}}{|g_{c,l}^z(\mathbf{r})|} - 1\right)g_{c,l}^z(\mathbf{r})\right)\right],\\
\nabla_{p_c} f_{c,l}^z(o_c, p_c, h_c, \mathbf{r}_l^z) &= -\,\delta(\mathbf{r}+\mathbf{r}_l^z) \otimes P\left\{o_c^*(\mathbf{r})\cdot\left[h_{c,z}^*(-\mathbf{r}) \otimes \left(\left(\frac{\sqrt{I_{c,l}^z(\mathbf{r})}}{|g_{c,l}^z(\mathbf{r})|} - 1\right)g_{c,l}^z(\mathbf{r})\right)\right]\right\},\\
\nabla_{\tilde h_c} f_{c,l}^z(o_c, p_c, h_c, \mathbf{r}_l^z) &= -\,\tilde h_z^*(\mathbf{u})\cdot\left(\mathcal{F}\{o_c(\mathbf{r})\cdot C\{p_c(\mathbf{r}-\mathbf{r}_l^z)\}\}\right)^*\cdot\mathcal{F}\left\{\left(\frac{\sqrt{I_{c,l}^z(\mathbf{r})}}{|g_{c,l}^z(\mathbf{r})|} - 1\right)g_{c,l}^z(\mathbf{r})\right\},\\
\nabla_{q_l^z} f_{c,l}^z(o_c, p_c, h_c, \mathbf{r}_l^z) &= -2\,\mathrm{Re}\left\{\sum_{\mathbf{r}}\left[\left(\frac{\sqrt{I_{c,l}^z(\mathbf{r})}}{|g_{c,l}^z(\mathbf{r})|} - 1\right)g_{c,l}^{z\,*}(\mathbf{r})\right]\cdot\left[h_{c,z}(\mathbf{r}) \otimes \left(o_c(\mathbf{r})\cdot C\left\{\frac{\partial p_c(\mathbf{r}-\mathbf{r}_l^z)}{\partial q_l^z}\right\}\right)\right]\right\}.
\end{aligned}$$

Appendix B: Reconstruction algorithm

With the gradients derived in Appendix A, we summarize here the reconstruction algorithms for fluorescence imaging and coherent imaging.

B.1. Algorithm for fluorescence imaging

First, we initialize the sample, $o_f(\mathbf{r})$, with the mean of all the structured-illumination images, $I_{f,l}(\mathbf{r})$, which approximates a widefield diffraction-limited image. The structured pattern, $p_f(\mathbf{r})$, is initialized with an all-ones image. The initial OTF, $\tilde h_f(\mathbf{u})$, is set to a non-aberrated incoherent OTF. The initial scanning positions are obtained by registering the in-focus coherent speckle images, $I_{c,l}^z(\mathbf{r})$ ($z = 0$).
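The registration step can be sketched with an FFT-based cross-correlation: the peak of the cross-correlation between frame $l$ and the first frame gives the integer-pixel shift (subpixel refinement, as in [63], is a direct extension). Sizes and the test shift below are illustrative, not the authors' code:

```python
import numpy as np

# Estimate the shift between two speckle frames from the peak of their FFT-based
# cross-correlation (integer-pixel sketch).
n = 64
rng = np.random.default_rng(5)
ref = rng.random((n, n))                        # in-focus speckle image, frame 1
shift_true = (5, -3)
moving = np.roll(ref, shift_true, axis=(0, 1))  # frame l, laterally translated

xcorr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)).real
peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
r_l = tuple((p + n // 2) % n - n // 2 for p in peak)  # unwrap to a signed shift
```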

In the algorithm, $K_f$ is the total number of iterations ($K_f = 100$ is generally enough for convergence). At every iteration, we sequentially update the sample, the structured pattern, the system OTF, and the scanning positions using each single frame from $l=1$ to $l=N_{\mathrm{img}}$. A Nesterov acceleration step is applied to the sample and the structured pattern at the end of each iteration. The detailed procedure is summarized in Algorithm 1.
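The structure of this loop can be illustrated with a simplified 1D toy problem in which the patterns and PSF are known and only the object is updated (illustrative sizes, step size, and iteration count; not the authors' code):

```python
import numpy as np

# Toy version of the Algorithm-1 loop: per-frame model I_l = (s_l . o) conv h,
# sequential gradient updates on the object o, and a Nesterov acceleration step
# after each pass over the frames.
n, n_img, K, step = 64, 8, 50, 0.1
rng = np.random.default_rng(4)
x = np.arange(n)
h = np.exp(-0.5 * (x - n // 2) ** 2 / 4.0); h /= h.sum()   # Gaussian PSF
h_hat = np.fft.fft(np.fft.ifftshift(h))
s = rng.uniform(0.1, 1.1, (n_img, n))                      # known "shifted" patterns
o_true = rng.random(n)
I = np.array([np.fft.ifft(h_hat * np.fft.fft(s[l] * o_true)).real
              for l in range(n_img)])

def total_cost(o):
    return sum(np.sum((I[l] - np.fft.ifft(h_hat * np.fft.fft(s[l] * o)).real) ** 2)
               for l in range(n_img))

o = I.mean(axis=0)          # initialize with the mean raw image
o_prev = o.copy()
t = 1.0
c0 = total_cost(o)
for k in range(K):
    for l in range(n_img):  # sequential per-frame updates, l = 1..N_img
        res = I[l] - np.fft.ifft(h_hat * np.fft.fft(s[l] * o)).real
        grad = -2 * s[l] * np.fft.ifft(np.conj(h_hat) * np.fft.fft(res)).real
        o = o - step * grad
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2              # Nesterov momentum schedule
    o, o_prev, t = o + (t - 1) / t_next * (o - o_prev), o.copy(), t_next
c_final = total_cost(o)
```

The full algorithm additionally interleaves updates of the pattern, the OTF, and the scan positions within the same per-frame loop.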


Algorithm 1. Fluorescence imaging reconstruction

B.2. Algorithm for coherent imaging

For coherent imaging, we initialize $o_c(\mathbf{r})$ with all ones. The pattern, $p_c(\mathbf{r})$, is initialized with the mean of the square root of the registered in-focus coherent intensity stack. The pupil function is initialized with a circ function (a 2D function equal to one within a defined radius and zero outside), with the radius set by the objective NA. Finally, we initialize the scanning positions, $\mathbf{r}_l^z$, from the registration of the intensity stacks, $I_{c,l}^z$, for the respective focal planes.
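The circ-function initialization amounts to thresholding the pupil-plane frequency grid at the cutoff $\mathrm{NA}/\lambda$. In the sketch below, the grid size, pixel pitch, NA, and wavelength are assumed values chosen to resemble the paper's 0.1 NA, 532 nm configuration:

```python
import numpy as np

# Initial pupil as a circ function with cutoff NA/lambda on the pupil-plane
# frequency grid (grid size, pixel pitch, NA, and wavelength are assumptions).
n_pix, dx = 256, 0.5e-6       # samples and effective pixel pitch [m]
NA, lam = 0.1, 532e-9         # objective NA, illumination wavelength [m]
u = np.fft.fftfreq(n_pix, d=dx)
uu, vv = np.meshgrid(u, u, indexing="ij")
pupil = (np.hypot(uu, vv) <= NA / lam).astype(float)  # 1 inside the NA circle, 0 outside
```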

For the coherent imaging reconstruction, a total of $K_c \approx 30$ iterations suffices for convergence. We sequentially update $o_c(\mathbf{r})$, $p_c(\mathbf{r})$, $h_c(\mathbf{r})$, and $\mathbf{r}_l^z$ $(l=1,\dots,N_{\mathrm{img}})$ for each defocused plane (the total number of defocused planes is $N_z$) per iteration. Unlike the fluorescence reconstruction, we do not use the extra Nesterov acceleration step in the QP reconstruction.


Algorithm 2. Coherent imaging reconstruction

Appendix C: Sample preparation

Results presented in this work targeted super-resolution of 1 μm and 2 μm diameter polystyrene microspheres (Thermo Fisher) that were fluorescently tagged to emit at a center wavelength of λem = 605 nm. Monolayer samples of these microspheres were prepared by placing microsphere dilutions (60 μL stock solution / 500 μL isopropyl alcohol) onto #1.5 coverslips and allowing them to air-dry. High-index oil (n(λ) = 1.52 at λ = 532 nm) was subsequently placed on the coverslip to index-match the microspheres. An adhesive spacer followed by another #1.5 coverslip was placed on top of the original coverslip to ensure a uniform sample layer for imaging.

Appendix D: Posedness of the problem

In this paper, we illuminate the sample with an unknown speckle pattern to encode both large-FOV and high-resolution information into our measurements. To decode the high-resolution information, we must jointly estimate the speckle pattern and the sample. This framework shares characteristics with the blind SIM work first introduced in [46], where completely random speckle patterns were sequentially illuminated onto the sample. Unfortunately, the reconstruction formulation proposed in that work is especially ill-posed due to the randomness between the illumination patterns; i.e., if Nimg raw images are taken, there are Nimg + 1 unknown variables to solve for (Nimg illumination patterns and 1 sample distribution). To better condition this problem, priors based on speckle statistics [46–52] and sample sparsity [48, 51] can be introduced, pushing blind SIM to 2× resolution gain. However, to implement high-content microscopy using SIM, we desire a resolution gain of >2×. Even with priors, we found this degree of resolution gain experimentally unachievable with uncorrelated, random speckle illuminations, because the reconstruction formulation is so ill-posed.

In this work, we improve the posedness of the problem by illuminating with a translating speckle pattern rather than randomly changing speckle patterns. Because each illumination pattern at the sample is a laterally shifted version of every other illumination pattern, the conditioning of the reconstruction framework improves dramatically. Previous works [25, 55, 56] have also demonstrated this concept to effectively achieve beyond 2× resolution gain.

Appendix E: Self-calibration analysis

In Secs. 2.1.2 and 2.2.2, we presented the inverse-problem formulations for super-resolution fluorescence and QP. Those formulations also include terms to self-calibrate for unknowns in the system's experimental OTF and the illumination pattern's scan positions. Here we demonstrate how these calibrations affect our reconstruction quality.

Fig. 6 Algorithmic self-calibration significantly improves fluorescence super-resolution reconstructions. Here, we compare the reconstructed fluorescence image, speckle intensity, and OTF with no correction, OTF correction, and both OTF correction and scanning-position correction. The right panel shows the overlay of the uncorrected and corrected scanning-position trajectories.

To demonstrate the improvement in our fluorescence imaging reconstruction due to the self-calibration algorithm, we select a region of interest from the dataset presented in Fig. 4. Figure 6 compares the SR reconstruction with and without self-calibration. The SR reconstruction with no self-calibration contains severe artifacts in both the reconstructed speckle illumination pattern and the sample's fluorescent distribution. With OTF correction, dramatic improvements in the fluorescence SR image are evident. OTF correction is especially important when imaging across a large FOV (Figs. 4 and 5) due to spatially varying aberrations.

Further self-calibration to correct errors in the initial estimate of the illumination pattern's trajectory enables additional refinement of the SR reconstruction. The illumination trajectory is visibly smoother after self-calibration. We expect this calibration step to be especially important when the physical translation stage is less stable or its incremental translations are less accurate.

Fig. 7 Algorithmic self-calibration significantly improves coherent super-resolution reconstructions. We compare the reconstructed amplitude, phase, speckle amplitude, and pupil-function phase with no correction, pupil correction, and both pupil correction and scanning-position correction. The right panel shows the overlay of the scanning-position trajectories for the in-focus and defocused cameras before and after correction.

We also test how self-calibration affects our phase reconstruction, using the same dataset as in Fig. 3. Similar to the fluorescence case, pupil (coherent OTF) correction plays an important role in reducing SR reconstruction artifacts, as shown in Fig. 7. The reconstructed pupil phase suggests that our system aberration is dominated by astigmatism. Further refinement of the illumination pattern's trajectory improves the resolution, resolving one more element (group 9, element 6) of the USAF chart. Comparing the uncorrected and corrected illumination trajectories, we find that self-calibration tends to align the trajectories from the two coherent cameras. We also notice that the trajectory from the quantitative-phase channels jitters more than that of the fluorescence channel; we hypothesize that this is due to the longer exposure time of each fluorescence acquisition, which averages out the jitter.

Funding

Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative (GBMF4562); Ruth L. Kirschstein National Research Service Award (F32GM129966).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. J. Goodman, Introduction to Fourier Optics (Roberts & Co., 2005).

2. A. W. Lohmann, R. G. Dorsch, D. Mendlovic, Z. Zalevsky, and C. Ferreira, "Space-bandwidth product of optical signals and systems," J. Opt. Soc. Am. A 13, 470–473 (1996).

3. B. McCullough, X. Ying, T. Monticello, and M. Bonnefoi, "Digital microscopy imaging and new approaches in toxicologic pathology," Toxicol. Pathol. 32(Suppl 2), 49–58 (2004).

4. M. H. Kim, Y. Park, D. Seo, Y. J. Lim, D.-I. Kim, C. W. Kim, and W. H. Kim, "Virtual microscopy as a practical alternative to conventional microscopy in pathology education," Basic Appl. Pathol. 1, 46–48 (2008).

5. F. R. Dee, "Virtual microscopy in pathology education," Hum. Pathol. 40, 1112–1121 (2009).

6. R. Pepperkok and J. Ellenberg, "High-throughput fluorescence microscopy for systems biology," Nat. Rev. Mol. Cell Biol. 7, 690–696 (2006).

7. J. C. Yarrow, G. Totsukawa, G. T. Charras, and T. J. Mitchison, "Screening for cell migration inhibitors via automated microscopy reveals a Rho-kinase inhibitor," Chem. Biol. 12, 385–395 (2005).

8. V. Laketa, J. C. Simpson, S. Bechtel, S. Wiemann, and R. Pepperkok, "High-content microscopy identifies new neurite outgrowth regulators," Mol. Biol. Cell 18, 242–252 (2007).

9. A. Trounson, "The production and directed differentiation of human embryonic stem cells," Endocr. Rev. 27(2), 208–219 (2006).

10. U. S. Eggert, A. A. Kiger, C. Richter, Z. E. Perlman, N. Perrimon, T. J. Mitchison, and C. M. Field, "Parallel chemical genetic and genome-wide RNAi screens identify cytokinesis inhibitors and targets," PLoS Biol. 2, e379 (2004).

11. V. Starkuviene and R. Pepperkok, "The potential of high-content high-throughput microscopy in drug discovery," Br. J. Pharmacol. 152, 62–71 (2007).

12. W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, "Digital in-line holography for biological applications," PNAS 98, 11301–11305 (2001).

13. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, "Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution," Opt. Express 18, 11181–11191 (2010).

14. A. Greenbaum, W. Luo, B. Khademhosseinieh, T.-W. Su, A. F. Coskun, and A. Ozcan, "Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy," Sci. Rep. 3, 1717 (2013).

15. G. Zheng, R. Horstmeyer, and C. Yang, "Wide-field, high-resolution Fourier ptychographic microscopy," Nat. Photon. 7, 739–745 (2013).

16. L. Tian, X. Li, K. Ramchandran, and L. Waller, "Multiplexed coded illumination for Fourier ptychography with an LED array microscope," Biomed. Opt. Express 5, 2376–2389 (2014).

17. L. Tian, Z. Liu, L. Yeh, M. Chen, J. Zhong, and L. Waller, "Computational illumination for high-speed in vitro Fourier ptychographic microscopy," Optica 2, 904–911 (2015).

18. S. Pang, C. Han, M. Kato, P. W. Sternberg, and C. Yang, "Wide and scalable field-of-view Talbot-grid-based fluorescence microscopy," Opt. Lett. 37, 5018–5020 (2012).

19. A. Orth and K. Crozier, "Microscopy with microlens arrays: high throughput, high resolution and light-field imaging," Opt. Express 20, 13522–13531 (2012).

20. A. Orth and K. Crozier, "Gigapixel fluorescence microscopy with a water immersion microlens array," Opt. Express 21, 2361–2368 (2013).

21. S. Pang, C. Han, J. Erath, A. Rodriguez, and C. Yang, "Wide field-of-view Talbot grid-based microscopy for multicolor fluorescence imaging," Opt. Express 21, 14555–14565 (2013).

22. A. Orth and K. B. Crozier, "High throughput multichannel fluorescence microscopy with microlens arrays," Opt. Express 22, 18101–18112 (2014).

23. A. Orth, M. J. Tomaszewski, R. N. Ghosh, and E. Schonbrun, "Gigapixel multispectral microscopy," Optica 2, 654–662 (2015).

24. S. Chowdhury, J. Chen, and J. Izatt, "Structured illumination fluorescence microscopy using Talbot self-imaging effect for high-throughput visualization," arXiv:1801.03540 (2018).

25. K. Guo, Z. Zhang, S. Jiang, J. Liao, J. Zhong, Y. C. Eldar, and G. Zheng, "13-fold resolution gain through turbid layer via translated unknown speckle illumination," Biomed. Opt. Express 9, 260–274 (2018).

26. W. Lukosz, "Optical systems with resolving powers exceeding the classical limit. II," J. Opt. Soc. Am. 57, 932–941 (1967).

27. C. J. Schwarz, Y. Kuznetsova, and S. R. J. Brueck, "Imaging interferometric microscopy," Opt. Lett. 28, 1424–1426 (2003).

28. M. Kim, Y. Choi, C. Fang-Yen, Y. Sung, R. R. Dasari, M. S. Feld, and W. Choi, "High-speed synthetic aperture microscopy for live cell imaging," Opt. Lett. 36, 148–150 (2011).

29. S. W. Hell and J. Wichmann, "Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy," Opt. Lett. 19, 780–782 (1994).

30. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, "Imaging intracellular fluorescent proteins at nanometer resolution," Science 313, 1642–1645 (2006).

31. M. J. Rust, M. Bates, and X. Zhuang, "Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)," Nat. Methods 3, 793–795 (2006).

32. R. Heintzmann and C. Cremer, "Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating," Proc. SPIE 3568, 185–196 (1999).

33. M. G. Gustafsson, "Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy," J. Microsc. 198, 82–87 (2000).

34. M. G. L. Gustafsson, "Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution," PNAS 102, 13081–13086 (2005).

35. S. Chowdhury, W. J. Eldridge, A. Wax, and J. A. Izatt, "Structured illumination multimodal 3D-resolved quantitative phase and fluorescence sub-diffraction microscopy," Biomed. Opt. Express 8, 2496–2518 (2017).

36. D. Li, L. Shao, B.-C. Chen, X. Zhang, M. Zhang, B. Moses, D. E. Milkie, J. R. Beach, J. A. Hammer, M. Pasham, T. Kirchhausen, M. A. Baird, M. W. Davidson, P. Xu, and E. Betzig, "Extended-resolution structured illumination imaging of endocytic and cytoskeletal dynamics," Science 349, aab3500 (2015).

37. P. von Olshausen and A. Rohrbach, "Coherent total internal reflection dark-field microscopy: label-free imaging beyond the diffraction limit," Opt. Lett. 38, 4066–4069 (2013).

38. S. Chowdhury, A.-H. Dhalla, and J. Izatt, "Structured oblique illumination microscopy for enhanced resolution imaging of non-fluorescent, coherently scattering samples," Biomed. Opt. Express 3, 1841–1854 (2012).

39. S. Chowdhury and J. A. Izatt, "Structured illumination quantitative phase microscopy for enhanced resolution amplitude and phase imaging," Biomed. Opt. Express 4, 1795–1805 (2013).

40. P. Gao, G. Pedrini, and W. Osten, "Structured illumination for resolution enhancement and autofocusing in digital holographic microscopy," Opt. Lett. 38, 1328–1330 (2013).

41. K. Lee, K. Kim, G. Kim, S. Shin, and Y. Park, "Time-multiplexed structured illumination using a DMD for optical diffraction tomography," Opt. Lett. 42, 999–1002 (2017).

42. S. Chowdhury, W. J. Eldridge, A. Wax, and J. Izatt, "Refractive index tomography with structured illumination," Optica 4, 537–545 (2017).

43. S. Chowdhury, W. J. Eldridge, A. Wax, and J. A. Izatt, "Structured illumination microscopy for dual-modality 3D sub-diffraction resolution fluorescence and refractive-index reconstruction," Biomed. Opt. Express 8, 5776–5793 (2017).

44. M. Schürmann, G. Cojoc, S. Girardo, E. Ulbricht, J. Guck, and P. Müller, "Three-dimensional correlative single-cell imaging utilizing fluorescence and refractive index tomography," J. Biophoton. e201700145 (2017).

45. S. Shin, D. Kim, K. Kim, and Y. Park, "Super-resolution three-dimensional fluorescence and optical diffraction tomography of live cells using structured illumination generated by a digital micromirror device," arXiv:1801.00854 (2018).

46. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. L. Moal, C. Nicoletti, M. Allain, and A. Sentenac, "Structured illumination microscopy using unknown speckle patterns," Nat. Photon. 6, 312–315 (2012).

47. R. Ayuk, H. Giovannini, A. Jost, E. Mudry, J. Girard, T. Mangeat, N. Sandeau, R. Heintzmann, K. Wicker, K. Belkebir, and A. Sentenac, "Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm," Opt. Lett. 38, 4723–4726 (2013).

48. J. Min, J. Jang, D. Keum, S.-W. Ryu, C. Choi, K.-H. Jeong, and J. C. Ye, "Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery," Sci. Rep. 3, 2075 (2013).

49. A. Jost, E. Tolstik, P. Feldmann, K. Wicker, A. Sentenac, and R. Heintzmann, "Optical sectioning and high resolution in single-slice structured illumination microscopy by thick slice blind-SIM reconstruction," PLoS ONE 10, e0132174 (2015).

50. A. Negash, S. Labouesse, N. Sandeau, M. Allain, H. Giovannini, J. Idier, R. Heintzmann, P. C. Chaumet, K. Belkebir, and A. Sentenac, "Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations," J. Opt. Soc. Am. A 33, 1089–1094 (2016).

51. S. Labouesse, M. Allain, J. Idier, S. Bourguignon, A. Negash, P. Liu, and A. Sentenac, "Joint reconstruction strategy for structured illumination microscopy with unknown illuminations," arXiv:1607.01980 (2016).

52. L.-H. Yeh, L. Tian, and L. Waller, "Structured illumination microscopy with unknown patterns and a statistical prior," Biomed. Opt. Express 8, 695–711 (2017).

53. R. Förster, H.-W. Lu-Walther, A. Jost, M. Kielhorn, K. Wicker, and R. Heintzmann, "Simple structured illumination microscope setup with high acquisition speed by using a spatial light modulator," Opt. Express 22, 20663–20677 (2014).

54. D. Dan, M. Lei, B. Yao, W. Wang, M. Winterhalder, A. Zumbusch, Y. Qi, L. Xia, S. Yan, Y. Yang, P. Gao, T. Ye, and W. Zhao, "DMD-based LED-illumination super-resolution and optical sectioning microscopy," Sci. Rep. 3, 1116 (2013).

55. S. Dong, P. Nanda, R. Shiradkar, K. Guo, and G. Zheng, "High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography," Opt. Express 22, 20856–20870 (2014).

56. H. Yilmaz, E. G. van Putten, J. Bertolotti, A. Lagendijk, W. L. Vos, and A. P. Mosk, "Speckle correlation resolution enhancement of wide-field fluorescence imaging," Optica 2, 424–429 (2015).

57. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, "Experimental robustness of Fourier ptychography phase retrieval algorithms," Opt. Express 23, 33213–33238 (2015).

58. L. Bottou, "Large-scale machine learning with stochastic gradient descent," in International Conference on Computational Statistics, pp. 177–187 (2010).

59. Y. Nesterov, "A method for solving the convex programming problem with convergence rate O(1/k²)," Dokl. Akad. Nauk SSSR 269, 543–547 (1983).

60. N. Streibl, "Phase imaging by the transport equation of intensity," Opt. Commun. 49, 6–10 (1984).

61. J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. 21, 2758–2769 (1982).

62. R. W. Gerchberg and W. O. Saxton, "Phase determination for image and diffraction plane pictures in the electron microscope," Optik 34, 275–284 (1971).

63. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, "Efficient subpixel image registration algorithms," Opt. Lett. 33, 156–158 (2008).

64. K. Kreutz-Delgado, "The complex gradient operator and the CR-calculus," arXiv:0906.4835 (2009).

Supplementary Material (3)

Visualization 1: 2D structured illumination microscopy fluorescence reconstruction of 1 μm beads using random speckle illumination. The full field of view is 2.7×3.3 mm² and the resolution is around 700 nm.
Visualization 2: Full field-of-view fluorescence-channel reconstruction of the computational structured illumination microscope with 2 μm beads using random speckle illumination. The field of view is around 2×2.7 mm² and the resolution is around 700 nm.
Visualization 3: Full field-of-view phase reconstruction of the computational structured illumination microscope with 2 μm beads using random speckle illumination. The field of view is around 2×2.7 mm² and the resolution is around 1.23 μm.

Figures (7)

Fig. 1 Structured illumination microscopy (SIM) with laterally-translated Scotch tape as the patterning element, achieving 4× resolution gain. Our imaging system has both an incoherent arm, where Sensor-F captures raw fluorescence images (at the emission wavelength, λem = 605 nm) for fluorescence super-resolution, and a coherent arm, where Sensor-C1 and Sensor-C2 capture images with different defocus (at the laser illumination wavelength, λex = 532 nm) for both super-resolution phase reconstruction and speckle trajectory calibration. OBJ: objective, AP: adjustable iris-aperture, DM: dichroic mirror, SF: spectral filter, ND-F: neutral-density filter.

Fig. 2 Verification of fluorescence super-resolution with 4× resolution gain. Widefield images, for comparison, were acquired at (a) 0.1 NA and (e) 0.4 NA by adjusting the aperture size. (b) The Scotch tape speckle pattern creates much higher spatial frequencies (∼0.35 NA) than the 0.1 NA detection system can measure. (c) Using the 0.1 NA aperture, we acquire low-resolution fluorescence images for different lateral positions of the Scotch tape. (d) The reconstructed SIM image contains spatial frequencies up to ∼0.4 NA and is in agreement with (e) the deconvolved widefield image with the system operating at 0.4 NA.

Fig. 3 Verification of coherent quantitative phase (QP) super-resolution with 4× resolution gain. (a) Low-resolution intensity image and (b) "ground truth" phase at NA=0.4, for comparison. (c) Raw acquisitions of the speckle-illuminated sample intensity from two focus planes, collected with 0.1 NA. (d) Reconstructed SR amplitude and QP, demonstrating 4× resolution gain.

Fig. 4 Reconstructed super-resolution fluorescence with 4× resolution gain across the full FOV (see Visualization 1). Four zoom-ins of regions-of-interest (ROIs) are compared to their widefield counterparts.

Fig. 5 Reconstructed multimodal (fluorescence and quantitative phase) high-content imaging (see Visualization 2 and Visualization 3). Zoom-ins for three ROIs compare the widefield, super-resolved fluorescence, coherent intensity, and super-resolved phase reconstructions.

Fig. 6 Algorithmic self-calibration significantly improves fluorescence super-resolution reconstructions. Here, we compare the reconstructed fluorescence image, speckle intensity, and OTF with no correction, OTF correction, and both OTF correction and scanning-position correction. The right panel shows the overlay of the uncorrected and corrected scanning-position trajectories.

Fig. 7 Algorithmic self-calibration significantly improves coherent super-resolution reconstructions. We compare the reconstructed amplitude, phase, speckle amplitude, and pupil-function phase with no correction, pupil correction, and both pupil correction and scanning-position correction. The right panel shows the overlay of the scanning-position trajectories for the in-focus and defocused cameras before and after correction.

Tables (2)

Algorithm 1 Fluorescence imaging reconstruction
Algorithm 2 Coherent imaging reconstruction

Equations (25)

Equations on this page are rendered with MathJax. Learn more.

I f , l ( r ) = [ o f ( r ) C { p f ( r r l ) } ] h f ( r ) , l = 1 , , N img ,
min o f , p f , h f , r 1 , , r N img f f ( o f , p f , h f , r 1 , , r N img ) = l = 1 N img f f , l ( o f , p f , h f , r l ) , where    f f , l ( o f , p f , h f , r l ) = r | I f , l ( r ) [ o f ( r ) C { p f ( r r l ) } ] h f ( r ) | 2 .
I c , l z ( r ) = | [ o c ( r ) C { p c ( r r l ) } ] h c , z ( r ) | 2 = | g c , l z ( r ) | 2 , l = 1 , , N img , ; z = z 0 , z 1 ,
minimize o c , p c , h c , r 1 z 0 , r 1 z 1 , , r N img z 0 , r N img z 1 f c ( o c , p c , h c , r 1 z 0 , r 1 z 1 , , r N img z 0 , r N img z 1 ) = l , z f c , l z ( o c , p c , h c , r l z ) , where    f c , l z ( o c , p c , h c , r l z ) = r | I c , l z ( r ) | [ o c ( r ) C { p c ( r r l z ) } ] h c , z ( r ) | | 2 .
r l z = R [ I c , 1 z ( r ) , I c , l z ( r ) ] ,
I f , l = H f diag ( S ( r l ) p f ) o f ,
H f = F M 1 diag ( h ˜ f ) F M , S ( r l ) = Q F N 1 diag ( e ( r l ) ) F N ,
f f , l ( o f , p f , h ˜ f , r l ) = f f , l T f f , l = I f , l H f diag ( S ( r l ) p f ) o f 2 2 ,
I c , l z = | g c , l z | 2 ,
g c , l z = H c , z diag ( S ( r l z ) p c ) o c H c , z = F M 1 diag ( h ˜ c ) diag ( h ˜ z ) F M .
f c , l z ( o c , p c , h ˜ c , r l z ) = f c , l z T f c , l z = I c , l z | g c , l z | 2 2 ,
f f , l o f = ( f f , l f f , l ) ( f f , l o f ) = ( 2 f f , l T ) f t ( H f diag ( S ( r l ) p f ) .
o f f f , l = ( f f , l o f ) T = 2 diag ( S ( r l ) p f ) H f T f f , l .
$$\mathbf{f}_{f,l} = I_{f,l} - H_f\,\mathrm{diag}(o_f)\,S(\mathbf{r}_l)\,p_f. \tag{14}$$
$$\frac{\partial f_{f,l}}{\partial p_f} = \frac{\partial f_{f,l}}{\partial \mathbf{f}_{f,l}} \frac{\partial \mathbf{f}_{f,l}}{\partial p_f} = \left(2\,\mathbf{f}_{f,l}^{T}\right)\left(-H_f\,\mathrm{diag}(o_f)\,S(\mathbf{r}_l)\right), \qquad \nabla_{p_f} f_{f,l} = \left(\frac{\partial f_{f,l}}{\partial p_f}\right)^{T} = -2\,S(\mathbf{r}_l)^{T}\,\mathrm{diag}(o_f)\,H_f^{T}\,\mathbf{f}_{f,l}. \tag{15}$$
$$\mathbf{f}_{f,l} = I_{f,l} - F_M^{-1}\,\mathrm{diag}\!\left(F_M\,\mathrm{diag}\!\left(S(\mathbf{r}_l)\,p_f\right) o_f\right)\tilde{h}_f. \tag{16}$$
$$\frac{\partial f_{f,l}}{\partial \tilde{h}_f} = \frac{\partial f_{f,l}}{\partial \mathbf{f}_{f,l}} \frac{\partial \mathbf{f}_{f,l}}{\partial \tilde{h}_f} = \left(2\,\mathbf{f}_{f,l}^{T}\right)\left(-F_M^{-1}\,\mathrm{diag}\!\left(F_M\,\mathrm{diag}\!\left(S(\mathbf{r}_l)\,p_f\right) o_f\right)\right), \qquad \nabla_{\tilde{h}_f} f_{f,l} = \left(\frac{\partial f_{f,l}}{\partial \tilde{h}_f}\right)^{H} = -2\,\mathrm{diag}\!\left(\overline{F_M\,\mathrm{diag}\!\left(S(\mathbf{r}_l)\,p_f\right) o_f}\right) F_M\,\mathbf{f}_{f,l}, \tag{17}$$
$$\mathbf{f}_{f,l} = I_{f,l} - H_f\,\mathrm{diag}(o_f)\,Q\,F_N^{-1}\,\mathrm{diag}\!\left(F_N\,p_f\right) e(\mathbf{r}_l). \tag{18}$$
$$\frac{\partial f_{f,l}}{\partial q_l} = \frac{\partial f_{f,l}}{\partial \mathbf{f}_{f,l}} \frac{\partial \mathbf{f}_{f,l}}{\partial e(\mathbf{r}_l)} \frac{\partial e(\mathbf{r}_l)}{\partial q_l} = \left(2\,\mathbf{f}_{f,l}^{T}\right)\left(-H_f\,\mathrm{diag}(o_f)\,Q\,F_N^{-1}\,\mathrm{diag}\!\left(F_N\,p_f\right)\right)\left(\mathrm{diag}(-j2\pi u_q)\,e(\mathbf{r}_l)\right), \tag{19}$$
$$\begin{aligned}
\nabla_{o_f} f_{f,l}(o_f,p_f,h_f,\mathbf{r}_l) &= -2\,p_f(\mathbf{r}-\mathbf{r}_l) \cdot \left[ h_f^{*}(-\mathbf{r}) \otimes \left( I_{f,l}(\mathbf{r}) - \left[o_f(\mathbf{r})\cdot C\{p_f(\mathbf{r}-\mathbf{r}_l)\}\right] \otimes h_f(\mathbf{r}) \right) \right], \\
\nabla_{p_f} f_{f,l}(o_f,p_f,h_f,\mathbf{r}_l) &= -2\,\delta(\mathbf{r}+\mathbf{r}_l) \otimes P\!\left\{ o_f(\mathbf{r}) \cdot \left[ h_f^{*}(-\mathbf{r}) \otimes \left( I_{f,l}(\mathbf{r}) - \left[o_f(\mathbf{r})\cdot C\{p_f(\mathbf{r}-\mathbf{r}_l)\}\right] \otimes h_f(\mathbf{r}) \right) \right] \right\}, \\
\nabla_{\tilde{h}_f} f_{f,l}(o_f,p_f,h_f,\mathbf{r}_l) &= -2\left( \mathcal{F}\!\left\{ o_f(\mathbf{r})\cdot C\{p_f(\mathbf{r}-\mathbf{r}_l)\} \right\} \right)^{*} \cdot \mathcal{F}\!\left\{ I_{f,l}(\mathbf{r}) - \left[o_f(\mathbf{r})\cdot C\{p_f(\mathbf{r}-\mathbf{r}_l)\}\right] \otimes h_f(\mathbf{r}) \right\}, \\
\nabla_{q_l} f_{f,l}(o_f,p_f,h_f,\mathbf{r}_l) &= -2 \sum_{\mathbf{r}} \left( I_{f,l}(\mathbf{r}) - \left[o_f(\mathbf{r})\cdot C\{p_f(\mathbf{r}-\mathbf{r}_l)\}\right] \otimes h_f(\mathbf{r}) \right) \cdot \left( h_f(\mathbf{r}) \otimes \left[ o_f(\mathbf{r}) \cdot C\!\left\{ \frac{\partial p_f(\mathbf{r}-\mathbf{r}_l)}{\partial q_l} \right\} \right] \right),
\end{aligned} \tag{20}$$
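The first gradient above, with respect to the fluorescent object, is a speckle-masked correlation of the residual with the PSF. A minimal NumPy sketch, easily verified by finite differences; the function name and the pre-shifted, pre-cropped pattern argument are simplifying assumptions, not the authors' interface:

```python
import numpy as np

def grad_o_f(I_meas, o_f, p_sc, h_f):
    """grad_{o_f} f_{f,l} = -2 p_f(r - r_l) . [ h_f*(-r) conv residual ],
    where p_sc is the already shifted-and-cropped speckle pattern and the
    correlation with h_f is applied as an FFT filter with conj(H)."""
    H = np.fft.fft2(h_f)
    forward = np.real(np.fft.ifft2(np.fft.fft2(o_f * p_sc) * H))
    residual = I_meas - forward
    backprop = np.real(np.fft.ifft2(np.fft.fft2(residual) * np.conj(H)))
    return -2.0 * p_sc * backprop
```

A plain descent update would be `o_f -= step * grad_o_f(...)`; the reconstruction in the paper uses gradients like this one while jointly updating the pattern, PSF, and positions.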
$$\frac{\partial f_{c,l}^{z}}{\partial o_c} = \frac{\partial f_{c,l}^{z}}{\partial \mathbf{f}_{c,l}^{z}} \frac{\partial \mathbf{f}_{c,l}^{z}}{\partial \mathbf{g}_{c,l}^{z}} \frac{\partial \mathbf{g}_{c,l}^{z}}{\partial o_c} = \left(2\,\mathbf{f}_{c,l}^{z\,T}\right)\left(-\tfrac{1}{2}\,\mathrm{diag}\!\left(\frac{\overline{\mathbf{g}_{c,l}^{z}}}{|\mathbf{g}_{c,l}^{z}|}\right)\right)\left(H_{c,z}\,\mathrm{diag}\!\left(S(\mathbf{r}_l^{z})\,p_c\right)\right),$$
$$\nabla_{o_c} f_{c,l}^{z} = \left(\frac{\partial f_{c,l}^{z}}{\partial o_c}\right)^{H} = -\,\mathrm{diag}\!\left(\overline{S(\mathbf{r}_l^{z})\,p_c}\right) H_{c,z}^{H}\,\mathrm{diag}\!\left(\frac{\mathbf{g}_{c,l}^{z}}{|\mathbf{g}_{c,l}^{z}|}\right)\mathbf{f}_{c,l}^{z}, \tag{21}$$
$$\frac{\partial f_{c,l}^{z}}{\partial p_c} = \frac{\partial f_{c,l}^{z}}{\partial \mathbf{f}_{c,l}^{z}} \frac{\partial \mathbf{f}_{c,l}^{z}}{\partial \mathbf{g}_{c,l}^{z}} \frac{\partial \mathbf{g}_{c,l}^{z}}{\partial p_c} = \left(2\,\mathbf{f}_{c,l}^{z\,T}\right)\left(-\tfrac{1}{2}\,\mathrm{diag}\!\left(\frac{\overline{\mathbf{g}_{c,l}^{z}}}{|\mathbf{g}_{c,l}^{z}|}\right)\right)\left(H_{c,z}\,\mathrm{diag}(o_c)\,S(\mathbf{r}_l^{z})\right),$$
$$\nabla_{p_c} f_{c,l}^{z} = \left(\frac{\partial f_{c,l}^{z}}{\partial p_c}\right)^{H} = -\,S(\mathbf{r}_l^{z})^{H}\,\mathrm{diag}\!\left(\overline{o_c}\right) H_{c,z}^{H}\,\mathrm{diag}\!\left(\frac{\mathbf{g}_{c,l}^{z}}{|\mathbf{g}_{c,l}^{z}|}\right)\mathbf{f}_{c,l}^{z}. \tag{22}$$
$$\frac{\partial f_{c,l}^{z}}{\partial \tilde{h}_c} = \frac{\partial f_{c,l}^{z}}{\partial \mathbf{f}_{c,l}^{z}} \frac{\partial \mathbf{f}_{c,l}^{z}}{\partial \mathbf{g}_{c,l}^{z}} \frac{\partial \mathbf{g}_{c,l}^{z}}{\partial \tilde{h}_c} = \left(2\,\mathbf{f}_{c,l}^{z\,T}\right)\left(-\tfrac{1}{2}\,\mathrm{diag}\!\left(\frac{\overline{\mathbf{g}_{c,l}^{z}}}{|\mathbf{g}_{c,l}^{z}|}\right)\right)\left(F_M^{-1}\,\mathrm{diag}\!\left[F_M\,\mathrm{diag}\!\left(S(\mathbf{r}_l^{z})\,p_c\right) o_c\right]\mathrm{diag}(\tilde{h}_z)\right),$$
$$\nabla_{\tilde{h}_c} f_{c,l}^{z} = \left(\frac{\partial f_{c,l}^{z}}{\partial \tilde{h}_c}\right)^{H} = -\,\mathrm{diag}\!\left(\overline{\tilde{h}_z}\right)\mathrm{diag}\!\left[\overline{F_M\,\mathrm{diag}\!\left(S(\mathbf{r}_l^{z})\,p_c\right) o_c}\right] F_M\,\mathrm{diag}\!\left(\frac{\mathbf{g}_{c,l}^{z}}{|\mathbf{g}_{c,l}^{z}|}\right)\mathbf{f}_{c,l}^{z}. \tag{23}$$
$$\begin{aligned}
\frac{\partial f_{c,l}^{z}}{\partial q_l^{z}} &= \frac{\partial f_{c,l}^{z}}{\partial \mathbf{f}_{c,l}^{z}} \left[ \frac{\partial \mathbf{f}_{c,l}^{z}}{\partial \mathbf{g}_{c,l}^{z}} \frac{\partial \mathbf{g}_{c,l}^{z}}{\partial e(\mathbf{r}_l^{z})} \frac{\partial e(\mathbf{r}_l^{z})}{\partial q_l} + \frac{\partial \mathbf{f}_{c,l}^{z}}{\partial \overline{\mathbf{g}_{c,l}^{z}}} \frac{\partial \overline{\mathbf{g}_{c,l}^{z}}}{\partial \overline{e(\mathbf{r}_l^{z})}} \frac{\partial \overline{e(\mathbf{r}_l^{z})}}{\partial q_l} \right] = 2\,\frac{\partial f_{c,l}^{z}}{\partial \mathbf{f}_{c,l}^{z}}\,\mathrm{Re}\left\{ \frac{\partial \mathbf{f}_{c,l}^{z}}{\partial \mathbf{g}_{c,l}^{z}} \frac{\partial \mathbf{g}_{c,l}^{z}}{\partial e(\mathbf{r}_l^{z})} \frac{\partial e(\mathbf{r}_l^{z})}{\partial q_l} \right\} \\
&= 2\left(2\,\mathbf{f}_{c,l}^{z\,T}\right)\mathrm{Re}\left\{ \left(-\tfrac{1}{2}\,\mathrm{diag}\!\left(\frac{\overline{\mathbf{g}_{c,l}^{z}}}{|\mathbf{g}_{c,l}^{z}|}\right)\right)\left(H_{c,z}\,\mathrm{diag}(o_c)\,Q\,F_N^{-1}\,\mathrm{diag}\!\left(F_N\,p_c\right)\right)\left(\mathrm{diag}(-j2\pi u_q)\,e(\mathbf{r}_l^{z})\right) \right\} \\
&= 2\,\mathrm{Re}\left\{ \mathbf{f}_{c,l}^{z\,T}\,\mathrm{diag}\!\left(\frac{\overline{\mathbf{g}_{c,l}^{z}}}{|\mathbf{g}_{c,l}^{z}|}\right) H_{c,z}\,\mathrm{diag}(o_c)\,Q\,F_N^{-1}\,\mathrm{diag}\!\left(F_N\,p_c\right)\mathrm{diag}(j2\pi u_q)\,e(\mathbf{r}_l^{z}) \right\}, \tag{24}
\end{aligned}$$
$$\begin{aligned}
\nabla_{o_c} f_{c,l}^{z}(o_c,p_c,h_c,\mathbf{r}_l^{z}) &= -\,p_c^{*}(\mathbf{r}-\mathbf{r}_l^{z}) \cdot \left[ h_{c,z}^{*}(-\mathbf{r}) \otimes \left( \left( \sqrt{I_{c,l}^{z}(\mathbf{r})}\,\left|g_{c,l}^{z}(\mathbf{r})\right|^{-1} - 1 \right) g_{c,l}^{z}(\mathbf{r}) \right) \right], \\
\nabla_{p_c} f_{c,l}^{z}(o_c,p_c,h_c,\mathbf{r}_l^{z}) &= -\,\delta(\mathbf{r}+\mathbf{r}_l^{z}) \otimes P\!\left\{ o_c^{*}(\mathbf{r}) \cdot \left[ h_{c,z}^{*}(-\mathbf{r}) \otimes \left( \left( \sqrt{I_{c,l}^{z}(\mathbf{r})}\,\left|g_{c,l}^{z}(\mathbf{r})\right|^{-1} - 1 \right) g_{c,l}^{z}(\mathbf{r}) \right) \right] \right\}, \\
\nabla_{\tilde{h}_c} f_{c,l}^{z}(o_c,p_c,h_c,\mathbf{r}_l^{z}) &= -\,\tilde{h}_z^{*}(\mathbf{u}) \cdot \left( \mathcal{F}\!\left\{ p_c(\mathbf{r}-\mathbf{r}_l^{z}) \cdot o_c(\mathbf{r}) \right\} \right)^{*} \cdot \mathcal{F}\!\left\{ \left( \sqrt{I_{c,l}^{z}(\mathbf{r})}\,\left|g_{c,l}^{z}(\mathbf{r})\right|^{-1} - 1 \right) g_{c,l}^{z}(\mathbf{r}) \right\}, \\
\nabla_{q_l^{z}} f_{c,l}^{z}(o_c,p_c,h_c,\mathbf{r}_l^{z}) &= -2\,\mathrm{Re}\left\{ \sum_{\mathbf{r}} \left[ \left( \sqrt{I_{c,l}^{z}(\mathbf{r})}\,\left|g_{c,l}^{z}(\mathbf{r})\right|^{-1} - 1 \right) g_{c,l}^{z\,*}(\mathbf{r}) \right] \cdot \left[ h_{c,z}(\mathbf{r}) \otimes \left( o_c(\mathbf{r}) \cdot C\!\left\{ \frac{\partial p_c(\mathbf{r}-\mathbf{r}_l^{z})}{\partial q_l^{z}} \right\} \right) \right] \right\}.
\end{aligned} \tag{25}$$
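The corresponding object gradient in the coherent case (first line above) replaces the real residual with the phase-projected amplitude residual $(\sqrt{I}/|g| - 1)\,g$. A Wirtinger-style sketch under the same simplifying assumptions as before (pre-shifted pattern, FFT transfer function, illustrative names):

```python
import numpy as np

def grad_o_c(I_meas, o_c, p_s, h_tilde):
    """grad_{o_c} f_{c,l}^z = -conj(p_c(r - r_l^z)) .
    [ adjoint of H_{c,z} applied to (sqrt(I)/|g| - 1) g ]."""
    g = np.fft.ifft2(np.fft.fft2(o_c * p_s) * h_tilde)
    weighted = (np.sqrt(I_meas) / np.abs(g) - 1.0) * g   # diag(g/|g|)(sqrt(I) - |g|)
    back = np.fft.ifft2(np.fft.fft2(weighted) * np.conj(h_tilde))
    return -np.conj(p_s) * back
```

At a self-consistent measurement ($I = |g|^2$) the gradient vanishes, which is an easy correctness check.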