
3D high-density localization microscopy using hybrid astigmatic/biplane imaging and sparse image reconstruction

Open Access

Abstract

Localization microscopy achieves nanoscale spatial resolution by iteratively localizing sparsely activated molecules, which generally leads to long acquisition times. By implementing advanced algorithms that treat overlapping point spread functions (PSFs), imaging of densely activated molecules can improve the limited temporal resolution, as has been well demonstrated in two-dimensional imaging. However, three-dimensional (3D) localization of high-density data remains challenging, since PSFs are far more similar along the axial dimension than along the lateral dimensions. Here, we present a new high-density 3D imaging system and algorithm. The hybrid system is implemented by combining astigmatic and biplane imaging. The proposed 3D reconstruction algorithm is extended from our state-of-the-art 2D high-density localization algorithm. Using mutual coherence analysis of model PSFs, we validated that the hybrid system is more suitable than astigmatic or biplane imaging alone for 3D localization of high-density data. The efficacy of the proposed method was confirmed using both simulated data and real data of microtubules. Furthermore, we successfully demonstrated fluorescent-protein-based live-cell 3D localization microscopy with a temporal resolution of just 3 seconds, capturing the fast dynamics of the endoplasmic reticulum.

© 2014 Optical Society of America

1. Introduction

Localization-based super-resolution microscopy (STORM/(f)PALM [1–3]) sacrifices temporal resolution for increased spatial resolution. Specifically, the overlapping fluorescence emission of single molecules is separated in time using controlled blinking. In return for an order-of-magnitude increase in spatial resolution, early localization microscopy studies were limited to a time resolution of several minutes [4]. This is more than an order of magnitude slower than widefield diffraction-limited fluorescence microscopy (10–100 ms), and too slow to image dynamics in biological systems.

In conventional localization microscopy, only a sparse subset of probes is active during the exposure time of a single camera frame; this ensures a low probability of PSF overlap. Under the assumption that the PSFs are well isolated, each candidate emitter is fitted to a single model PSF, yielding a localization precision on the order of tens of nanometers. A super-resolution (SR) image is then reconstructed by acquiring a large number of frames of raw data (typically > 1,000) and iteratively localizing numerous probes.

The temporal resolution can be significantly improved using high-density imaging [5–9]. Specifically, this technique records data at a higher molecular density, meaning that fewer frames are required to reconstruct a single SR image. Since this increases the degree of overlap among PSFs in an image, more sophisticated image analysis is required to successfully localize overlapping PSFs. In 2D localization, several approaches have been proposed. Multi-emitter fitting methods [5,6] simultaneously fit multiple model PSFs to the data. Alternatively, localization can be performed by image deconvolution using a sparse recovery (or compressed sensing) algorithm [7–9]. This approach introduces sparsity-inducing priors on a sub-pixel discrete grid to recover a deconvolved image containing only a sparse number of non-zero sub-pixels. The latter approach has been demonstrated to recover more localizations with higher precision, but at the cost of higher computational complexity. By coupling high-density imaging with the increased acquisition speed of sCMOS cameras, it is now possible to obtain 2D super-resolution microscopy images in as little as 0.5 seconds using fluorescent proteins, and 30 ms using organic dyes [10].

However, high-density localization in three dimensions remains challenging. 3D super-resolution can be achieved by introducing axial information into the PSF, for example by astigmatic [11] or biplane imaging [12]. Although the conventional PSF models used in astigmatic or biplane imaging have sufficient axial asymmetry to extract the axial position at low density, PSFs are far more similar along the axial dimension than along the lateral dimensions. This makes the extraction of axial information from overlapping PSFs much harder than in the 2D case. Babcock et al. [13] extended 2D DAOSTORM [5] to astigmatic imaging, and showed mild improvements on a fixed, sectioned sample of 100 nm thickness. Compressed sensing algorithms have recently been applied to both biplane imaging [14] and a system using the double-helix PSF [15]. As in the 2D case [7], both studies applied a discrete formulation with sparsity priors. However, these algorithms were experimentally demonstrated only on fixed samples labelled with organic dyes (and thus with very high photon counts), rather than the more difficult case of live-cell super-resolution imaging, which usually requires labeling with fluorescent proteins that yield significantly lower photon counts.

In this paper, to improve 3D localization performance in high-density imaging, we increased the axial asymmetry of the PSF by utilizing a “hybrid” combination of astigmatic and biplane imaging. The hybrid approach can be easily implemented on a standard fluorescence microscope using commercially available components. In addition, by extending the state-of-the-art high-density algorithm [9] developed by our group, we designed a new algorithm, FALCON3D, for 3D localization of high-density data. The algorithm is based on a continuous formulation that has already been demonstrated to reduce spatial bias compared to other grid-based reconstruction formulations [7, 14]. By performing mutual coherence analysis of PSFs, we show that the proposed system is better suited for 3D localization than the astigmatic or biplane imaging system alone. We also confirm the efficacy of the proposed algorithm using both simulated and experimental data sets. In addition, we demonstrate 3D live-cell super-resolution imaging of the endoplasmic reticulum at a temporal resolution of 3 seconds, which clearly exhibits fast dynamics.

2. Methods

2.1. Hybrid optical setup

High-density 3D STORM/PALM imaging was performed by combining two different approaches for 3D single-molecule imaging: astigmatic [11] and biplane [12] imaging (Fig. 1). Fluorophores were excited by a 641 nm laser (Coherent Cube), filtered with a Chroma ET620/60 bandpass filter. The sample was illuminated in widefield mode by reflecting the laser off a dichroic mirror (Chroma T660LPXR) and through a 1.40 NA oil immersion objective (Olympus) mounted on an inverted microscope (Olympus IX-71). Emitted fluorescence was filtered using a Chroma ET700/75M bandpass filter. The hybrid imaging geometry was achieved by coupling the emitted fluorescence into a commercial multi-channel beamsplitter (Cairn Optosplit). Fluorescence was split into two channels using a 50/50 beamsplitter (Chroma). An f = −1000 mm cylindrical concave lens (Thorlabs LK1002RM-A) was placed in one channel, and an f = 1000 mm spherical convex lens (Thorlabs LB1409-A) was placed in the other. The two channels were imaged side-by-side on an EMCCD camera (Andor iXon DU-897). An open-hardware TIRF-based autofocus system was used to maintain focus during acquisition (“pgFocus”, Biomedical Imaging Group, University of Massachusetts Medical School, http://big.umassmed.edu/). Hardware control was performed using the MicroManager software [16].


Fig. 1 Hybrid imaging system implemented by inserting a cylindrical lens in one channel of a biplane imaging system. Experimentally measured PSF model on the right side. Obj: objective lens, DM: dichroic mirror, EM: emission filter, CL: cylindrical lens, BS: beam splitter.


2.2. Conventional approaches

The K molecules that are activated in the sample at a given instant can be described as

$$ S(\rho) = \sum_{k=1}^{K} c_k\, \delta(\rho - \rho_k), $$
where $c_k$ is the number of photons emitted by the $k$-th source and $\rho_k = (x_k, y_k, z_k)$ denotes the location of the $k$-th source. The fluorescence signal from each molecule arrives at the camera after blurring by the PSF of the optical system, and background fluorescence $b$ and inherent camera noise $e$ are added. Accordingly, the resulting measurements for each channel of the hybrid system are given by
$$ f_1[n] = \left(\sum_{k=1}^{K} c_k\, h_1(n - \rho_k)\right) + b_1(n) + e_1(n), \qquad f_2[n] = \left(\sum_{k=1}^{K} c_k\, h_2(n - \rho_k)\right) + b_2(n) + e_2(n). $$
Here, the camera images $f_1$ and $f_2$ correspond to the two channels, respectively, and $n$ represents the $n$-th pixel position. The background auto-fluorescence $b$ is assumed to be locally uniform, and the random noise contribution $e$ includes Poisson shot noise, Gaussian read noise, and quantization errors. Under the assumptions that the camera images are obtained at low activation density and that the number of simultaneous emitters $K$ is known, each activated molecule can be localized independently by least-squares fitting with the given PSFs of the two channels, $h_1$ and $h_2$. In general, the localization is performed by alternating the minimization of the following least-squares cost function with respect to $c_k$, $\rho_k$ and $b$, under a non-negativity constraint on $c_k$:
$$ \sum_{n}\left[\left(f_1[n] - b_1[n] - \sum_{k=1}^{K} c_k\, h_1(n - \rho_k)\right)^2 + \left(f_2[n] - b_2[n] - \sum_{k=1}^{K} c_k\, h_2(n - \rho_k)\right)^2\right]. $$
For high-density imaging, however, such nonlinear least-squares fitting is not effective, since $K$ is unknown due to PSF overlap. Therefore, rather than using least-squares fitting, a penalized least-squares formulation with a sparsity-promoting prior has been used to reconstruct high-density data [7, 8, 14, 15]. Specifically,
$$ \mathbf{f} = \mathbf{H}\mathbf{c} + \mathbf{b}, $$
where $\mathbf{f} = [\mathbf{f}_1^T\; \mathbf{f}_2^T]^T \in \mathbb{R}^{2N\times 1}$, $\mathbf{H} = [\mathbf{H}_1^T\; \mathbf{H}_2^T]^T \in \mathbb{R}^{2N\times M}$, $\mathbf{b} = [\mathbf{b}_1^T\; \mathbf{b}_2^T]^T \in \mathbb{R}^{2N\times 1}$, $\mathbf{f}_i = [f_i[n_1], \cdots, f_i[n_N]]^T$, $\mathbf{b}_i = [b_i[n_1], \cdots, b_i[n_N]]^T$, and $\mathbf{c} = [c[m_1], \cdots, c[m_M]]^T$ is a voxel grid. Here, $\mathbf{H}_1$ and $\mathbf{H}_2$ are convolution matrices constructed from the PSFs $h_1$ and $h_2$, respectively, $N$ is the number of camera pixels in each channel, and $M$ is the number of voxels in $\mathbf{c}$. Although the molecules are densely activated, producing overlapping PSFs, the spatial occupancy of the probes is still sparse relative to the total number of voxels. This sparseness can be exploited to reconstruct a high-density dataset by minimizing a sparsity-penalized least-squares cost functional with respect to $\mathbf{c}$:
$$ \min_{\mathbf{c}\,\geq\, 0}\; \|\mathbf{f} - \mathbf{H}\mathbf{c} - \mathbf{b}\|_2^2 + \phi(\mathbf{c}). $$
Here, $\phi(\mathbf{c})$ is a sparsity penalty term such as $\lambda\|\cdot\|_1$ [7], promoting $\mathbf{c}$ to have only a few nonzero coefficients. However, the discrete formulation is computationally expensive, especially in the case of 3D localization, because the voxel size must be made small enough to provide super-resolution, which results in millions of voxels.
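To make the discrete formulation concrete, the following is a minimal Python sketch of the sparsity-penalized deconvolution above, using an ℓ1 penalty and the iterative shrinkage-thresholding algorithm (ISTA). For brevity it operates on a single 2D image with a single PSF, whereas the formulation above maps a 3D voxel grid to two camera channels; the FFT-based convolution operator, the penalty weight and the iteration count are illustrative assumptions, not values from the paper.

    import numpy as np

    def ista_deconvolve(f, psf, b, lam=1.0, n_iter=200):
        """Sketch of min_{c >= 0} ||f - H c - b||_2^2 + lam * ||c||_1.

        f   : observed camera image (2D array, photons)
        psf : PSF kernel, same shape as f, centered in the array
        b   : locally uniform background estimate (scalar or 2D array)
        lam : sparsity weight (illustrative; must be tuned to the noise level)
        """
        # FFT-based circular convolution stands in for the matrix H;
        # its adjoint H^T is correlation, i.e. convolution with the flipped PSF.
        psf_f = np.fft.rfft2(np.fft.ifftshift(psf))
        H = lambda c: np.fft.irfft2(np.fft.rfft2(c) * psf_f, s=f.shape)
        Ht = lambda r: np.fft.irfft2(np.fft.rfft2(r) * np.conj(psf_f), s=f.shape)

        # Step size 1/L, with L an upper bound on the Lipschitz constant of the gradient.
        L = 2.0 * np.max(np.abs(psf_f)) ** 2
        c = np.zeros_like(f)
        for _ in range(n_iter):
            grad = 2.0 * Ht(H(c) + b - f)                     # gradient of the data-fidelity term
            c = np.maximum(c - grad / L - lam / L, 0.0)       # soft-threshold + non-negativity
        return c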

2.3. FALCON3D

The aforementioned limitations of the discrete formulation can be mitigated by using a continuous formulation [9]. Here, a probe position $\rho_k$ is represented as the sum of the closest voxel $m_k$ and a small continuous displacement $\varepsilon_k$ from that voxel:

$$ \rho_k = m_k + \varepsilon_k. $$
Then, a PSF shifted away from a voxel can be approximated by keeping only the linear term of its Taylor series expansion:
$$ h(x - \rho_k) \approx h(x - m_k) - \varepsilon_k^T\, \nabla h(x - m_k). $$
Based on this approximation, the least-squares cost (Eq. (3)) can be written as
$$ J(\mathbf{c}, \varepsilon) = \sum_{n}\left(f_1[n] - b_1 - \sum_{k=1}^{K} c[m_k]\left[h_1(n - m_k) - \varepsilon_k^T \nabla h_1(n - m_k)\right]\right)^2 + \sum_{n}\left(f_2[n] - b_2 - \sum_{k=1}^{K} c[m_k]\left[h_2(n - m_k) - \varepsilon_k^T \nabla h_2(n - m_k)\right]\right)^2. $$

This functional is quadratic with respect to either $\mathbf{c}$ or $\varepsilon$ taken separately, and it preserves the continuity of the reconstruction space. High-density data are then reconstructed by introducing this continuous formulation into the sparsity-promoting penalized least-squares formulation (Eq. (4)).
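As a quick numerical check of the first-order approximation above, the following sketch compares a truly shifted PSF with its Taylor-linearized version for a toy one-dimensional Gaussian PSF; the Gaussian width, grid spacing and displacement are hypothetical stand-ins for the measured PSF models and the sub-voxel shifts handled by the algorithm.

    import numpy as np

    x = np.arange(-10, 11) * 100.0                    # coarse-grid coordinates, 100 nm spacing
    sigma = 150.0                                     # hypothetical 1D Gaussian PSF width (nm)

    h = lambda u: np.exp(-u**2 / (2 * sigma**2))      # toy PSF model
    dh = lambda u: -u / sigma**2 * h(u)               # its analytic derivative

    eps = 20.0                                        # sub-voxel displacement (nm)
    exact = h(x - eps)                                # truly shifted PSF
    approx = h(x) - eps * dh(x)                       # first-order Taylor approximation
    print(np.max(np.abs(exact - approx)))             # ~1e-2 of the unit peak when eps << sigma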

FALCON3D

  1. Deconvolve with the sparsity prior $\phi(\mathbf{c})$ on a 100×100×100 nm voxel grid:
     $$ \min_{\mathbf{c}\,>\,0}\; \|\mathbf{f} - \mathbf{H}\mathbf{c} - \mathbf{b}\|_2^2 + \phi(\mathbf{c}) $$
  2. Take local maxima of the deconvolved volume as initial localizations.
  3. Refine the initial localizations by alternately updating $\varepsilon$ and $\mathbf{c}$ on a finer 20×20×20 nm voxel grid:
     $$ \min_{\mathbf{c}\,>\,0,\; |\varepsilon|\,\leq\,\Delta}\; J(\mathbf{c}, \varepsilon). $$

The proposed algorithm, FALCON3D (FAst 3D Localization algorithm based on a CONtinuous-space formulation), consists of the following three steps. High-density data are first deconvolved on a coarse voxel grid of 100×100×100 nm. This deconvolution is performed in the same manner as in 2D FALCON [9]. The background $\mathbf{b}$ is also iteratively estimated during the deconvolution step by using only the lowest-frequency wavelets of the residual images $(\mathbf{f} - \mathbf{H}\mathbf{c})$. Then, an initial localization is obtained by applying a center-of-mass calculation to the local maxima of the deconvolved volume. Finally, the initial values are refined by alternately updating the displacements $\varepsilon$ and photon counts $\mathbf{c}$ on a finer voxel grid of 20×20×20 nm. If an updated $\varepsilon_k$ becomes so large that the Taylor approximation of the PSF is inaccurate, $\varepsilon[m_k]$ and $c[m_k]$ are reassigned to the nearest voxel. By utilizing the continuous formulation, the deconvolution step can be performed much more cheaply than with the discrete formulation, since a relatively coarse grid is sufficient to obtain the initial localization. In addition, localization accuracy is enhanced by minimizing the bias introduced by the discrete formulation.
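The alternating refinement of step 3 can be illustrated with the following toy sketch for a single isolated emitter in one dimension: with the photon count fixed, the Taylor-linearized model is linear in the displacement, so each sub-problem has a closed-form least-squares solution. The grid, PSF model and noise-free measurement are hypothetical, and the actual algorithm refines many overlapping emitters jointly on a 20 nm grid under the constraints shown above.

    import numpy as np

    x = np.arange(-10, 11) * 100.0                     # coarse 100 nm grid (nm)
    sigma, b = 150.0, 2.0                              # toy PSF width and flat background
    h = lambda u: np.exp(-u**2 / (2 * sigma**2))
    dh = lambda u: -u / sigma**2 * h(u)

    true_pos, true_c = 35.0, 1000.0                    # ground truth: 35 nm off the voxel centre
    f = true_c * h(x - true_pos) + b                   # noise-free measurement

    m, eps, c = 0.0, 0.0, f.max() - b                  # candidate voxel and crude initial photons
    for _ in range(10):
        # photon update: linear least squares against the current linearized PSF
        basis = h(x - m) - eps * dh(x - m)
        c = max(np.dot(f - b, basis) / np.dot(basis, basis), 0.0)
        # displacement update: the residual is ~ -c * eps * h'(x - m), also linear in eps
        r = f - b - c * h(x - m)
        g = c * dh(x - m)
        eps = -np.dot(r, g) / np.dot(g, g)
    print(c, eps)                                      # eps approaches ~35 nm (up to linearization error)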

2.4. Alignment between channels and PSF calibration

Before reconstructing data, images from the two channels must be aligned as a preprocessing step. Two-channel alignment was performed using a “NanoGrid” calibration slide (Miraloma Tech, USA), which is a metal film containing a square array of 200 nm holes at 2000 nm spacing. The NanoGrid was illuminated in transmission mode using brightfield illumination. We laterally shifted the grid sample and took several snapshots to cover a larger portion of the field of view. The alignment was performed with a locally weighted mean (LWM) transform [17], which can accommodate a non-linear mapping between the channels. Each hole was localized in each channel using least-squares fitting, which enabled us to accurately pair grid-hole localizations from the different channels. Specifically, the astigmatic channel was aligned to the defocused channel by applying the obtained LWM transform via a MATLAB built-in function (imtransform). Since the non-astigmatic channel has an isotropic magnification, aligning to it avoids the small image distortion that can be caused by the anisotropic magnification of the astigmatic channel. We validated that registration errors were small compared to the localization accuracy (Fig. 2).
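The registration-error check of Fig. 2 can be sketched as follows. The paper fits a locally weighted mean transform in MATLAB; here, as a simplified stand-in, a single global affine transform is fitted by least squares to paired NanoGrid localizations, and the residuals e = ρ1 − T(ρ2) give the registration error. The paired coordinates, the simulated distortion and the 5 nm localization noise are all hypothetical.

    import numpy as np

    def fit_affine(rho2, rho1):
        """Least-squares affine map T with T(rho2) ~ rho1; inputs are (N, 2) arrays in nm."""
        A = np.hstack([rho2, np.ones((rho2.shape[0], 1))])        # [x  y  1] design matrix
        M, *_ = np.linalg.lstsq(A, rho1, rcond=None)              # (3, 2) affine parameters
        return lambda p: np.hstack([p, np.ones((p.shape[0], 1))]) @ M

    rng = np.random.default_rng(0)
    rho2 = rng.uniform(0, 20000, size=(100, 2))                   # hole positions, channel 2
    rho1 = rho2 @ np.array([[1.001, 0.002], [-0.002, 0.999]]) \
           + np.array([50.0, -30.0]) + rng.normal(0, 5, size=(100, 2))  # distorted + 5 nm noise

    T = fit_affine(rho2, rho1)
    e = rho1 - T(rho2)                                            # registration residuals
    print(np.sqrt(np.mean(np.sum(e**2, axis=1))))                 # RMS error, near the noise level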


Fig. 2 Channel alignment analysis: an LWM transform T is constructed after collecting paired points (ρ1, ρ2) from the two channels, where ρi = (xi, yi). Registration errors were calculated as e = ρ1 − T(ρ2).


3D PSF models were experimentally measured. We prepared a sample of 100 nm beads (Invitrogen TetraSpeck) and acquired several z-stacks of beads by shifting the sample along the axial direction with a step size of 20 nm. After applying the channel alignment to the stacks, small sub-stacks corresponding to the PSFs of 30 beads were cropped. These sub-stacks were 20× up-sampled by cubic spline interpolation. Then, we averaged them after aligning the positions of their maximum values.
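A minimal sketch of this PSF-averaging procedure is shown below: each cropped bead z-stack is up-sampled by cubic-spline interpolation, shifted so that its brightest voxel sits at a common reference position, and the aligned stacks are averaged. The array shapes, the synthetic Gaussian “beads” and the reduced up-sampling factor in the example are illustrative only.

    import numpy as np
    from scipy import ndimage

    def average_psf(bead_stacks, upsample=20):
        aligned = []
        for stack in bead_stacks:                         # stack: (z, y, x) array for one bead
            up = ndimage.zoom(stack, upsample, order=3)   # cubic-spline up-sampling
            peak = np.unravel_index(np.argmax(up), up.shape)
            centre = tuple(s // 2 for s in up.shape)
            shift = tuple(c - p for c, p in zip(centre, peak))
            aligned.append(np.roll(up, shift, axis=(0, 1, 2)))   # align peaks (small shifts only)
        psf = np.mean(aligned, axis=0)
        return psf / psf.sum()                            # normalize to unit total intensity

    # Example with two synthetic 3D Gaussian "beads" standing in for measured z-stacks:
    z, y, x = np.mgrid[-5:6, -7:8, -7:8].astype(float)
    beads = [np.exp(-((z - dz)**2 + (y - dy)**2 + (x - dx)**2) / 8.0)
             for dz, dy, dx in [(0.2, -0.3, 0.1), (-0.1, 0.2, -0.2)]]
    psf_model = average_psf(beads, upsample=4)            # small factor keeps the demo fast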

3. Results

3.1. Mutual coherence analysis of PSF models

We examined the theoretical performance of our hybrid approach, compared with astigmatism alone or biplane imaging alone, by comparing the mutual coherence of the model PSFs for each system. In compressed sensing, successful recovery of a signal depends strongly on the properties of the system response function, which in localization microscopy corresponds to the PSF. A less spatially coherent PSF essentially means that it is easier to resolve adjacent localizations. This maximizes the recovery rate of a sparsity-based algorithm, improving its high-density performance. Therefore, the key task in optimizing our experimental system for high-density 3D localization is to obtain a system PSF with as low a coherence as possible.

Mutual coherence analysis was performed in 3D space using the three experimentally measured PSF models (Fig. 3). Specifically, mutual-correlation values between PSF1(0,0,z1) and PSF2(dx,dy,z2) were calculated over the ranges −1 μm < dx, dy < 1 μm and −600 nm < z1, z2 < 600 nm. Here, lower values indicate lower coherence of the PSF model. All three modalities have relatively similar mutual-correlation values when the two PSFs are placed at the same z position; the hybrid PSF is slightly less coherent in off-focal planes. However, there are significant differences in the mutual-correlation values along the axial direction at the same lateral position. The PSF from astigmatic imaging is highly coherent in the axial direction, showing the highest values over the whole range. Although the PSF of biplane imaging has lower mutual-correlation values than that of astigmatic imaging, the hybrid system shows further improvement, especially in the ranges highlighted by arrows in Fig. 3. For example, when two molecules are located at (x, y, −300 nm) and (x, y, +300 nm), respectively, the hybrid system is more likely to resolve them. Thus, the hybrid system has better axial coverage than the other modalities, which makes it better suited to reconstructing high-density data.
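A toy version of this mutual-correlation computation is given below: the normalized inner product between the camera-plane images of two emitters at axial positions z1 and z2 (same lateral position) is computed for a simple astigmatic Gaussian model. The defocus curves, pixel size and widths are hypothetical stand-ins for the experimentally measured PSFs, and for the hybrid system the two channel images would be concatenated before taking the inner product.

    import numpy as np

    yy, xx = np.mgrid[-15:16, -15:16] * 100.0             # camera grid, 100 nm pixels (nm)

    def astig_psf(z, s0=150.0, d=400.0):
        """Toy astigmatic Gaussian: x-width grows above focus, y-width below."""
        sx = s0 * np.sqrt(1.0 + ((z - d / 2) / d) ** 2)
        sy = s0 * np.sqrt(1.0 + ((z + d / 2) / d) ** 2)
        img = np.exp(-xx**2 / (2 * sx**2) - yy**2 / (2 * sy**2))
        return img / np.linalg.norm(img)                   # unit l2 norm

    def mutual_correlation(z1, z2, psf=astig_psf):
        return float(np.sum(psf(z1) * psf(z2)))            # cosine similarity of the two images

    print(mutual_correlation(-300.0, 300.0))                # values near 1 mean hard to separate axially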


Fig. 3 PSF analysis: mutual correlation values between PSF1(0,0,z1) and PSF2(dx,dy,z2) over the ranges −1 μm < dx, dy < 1 μm and −600 nm < z1, z2 < 600 nm. The left three columns correspond to the lateral analysis of mutual coherence, and the rightmost column represents the axial analysis.


We note that a previous study [18] investigated the theoretical minimum errors of multi-source localization for astigmatic, biplane and hybrid PSF models in terms of the Cramér-Rao lower bound, and found only a small difference in maximum localization precision. In that study, it was assumed that the number of activated sources in each image is known; however, correctly estimating the number of activated sources is key to accurate high-density localization. In localization-based analyses, the advantage of the hybrid method over astigmatism or biplane imaging alone is that the lower coherence of its PSF makes it easier to estimate the number and locations of activated sources.

3.2. Simulations

For quantitative analysis, we generated sets of high-density data over a wide range of activation densities (0.1–7 μm−2) and compared our high-density algorithm (FALCON3D) with the least-squares fitting method. In the simulation, N molecules were randomly distributed in a 10 μm × 10 μm × 800 nm volume. Using an experimentally measured PSF model, the molecules were imaged through the two channels, producing two images of 100 × 100 pixels with a constant added background. We generated 50 frames of simulated measurements at each imaging density. Assuming an EMCCD (electron-multiplying charge-coupled device) camera, Poisson shot noise, which is dominant, was generated; this noise was then multiplied by an additional EM (electron-multiplying) gain factor of 1.4. In addition, Gaussian readout noise with unit variance was added. We considered three photon-emission rates. In the high-SNR (signal-to-noise ratio) simulation, emitters in each channel yielded 2,500 photons per molecule per frame on average (standard deviation 1,000), with a uniform background of 40 photons per pixel. The mid-SNR and low-SNR simulations had the following photon parameters in each channel: (mean, standard deviation) = (500, 200) with a uniform background of 10 photons per pixel, and (250, 100) with a uniform background of 5 photons per pixel, respectively. The photon-emission statistics follow log-normal distributions.
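The following sketch shows one plausible reading of the camera noise model described above: expected photon counts are corrupted by Poisson shot noise, the zero-mean shot-noise fluctuation is scaled by the stated factor of 1.4 to account for EM amplification, and unit-variance Gaussian readout noise is added. The function name, the flat test frame and this particular way of applying the EM factor are illustrative assumptions.

    import numpy as np

    def simulate_frame(expected, em_factor=1.4, read_sigma=1.0, rng=None):
        """expected: noiseless image in photons/pixel (PSF-blurred emitters + uniform background)."""
        rng = rng or np.random.default_rng()
        shot = rng.poisson(expected).astype(float) - expected       # zero-mean Poisson fluctuation
        frame = expected + em_factor * shot                         # EM-amplified shot noise
        frame += rng.normal(0.0, read_sigma, size=expected.shape)   # Gaussian readout noise
        return frame

    bg = np.full((100, 100), 40.0)              # flat 40-photon background (high-SNR case)
    noisy = simulate_frame(bg)
    print(noisy.mean(), noisy.std())            # std ~ 1.4 * sqrt(40) ~ 8.9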

The images were reconstructed using FALCON3D and the least-squares fitting method. The least-squares method used a 7×7 window to fit local peaks in each channel. If two local peaks from the two channels were closer than 2 pixels, they were counted as one molecule; otherwise, they were counted as two molecules. We are aware that various implementations of least-squares fitting exist that are optimized for localization precision at the expense of recall rate. For example, local maxima from slightly overlapping PSFs can be discarded, or localization results can be filtered based on their goodness of fit. However, despite their improved accuracy, the poor recall rate of these rejection schemes makes them unsuitable for high-density imaging. Thus, these rejection schemes were not considered in our experiments.

Performance was evaluated based on localization accuracy and recall rate. Specifically, each localization was matched to the closest ground-truth molecule, and only matches with a Euclidean error distance below 400 nm were included in the analysis. Localization accuracy is reported as the root mean square (RMS) of the localization errors in the lateral and axial directions, respectively. For the highest photon-emission simulation (Fig. 4(a–c)), the proposed algorithm achieved up to 3.5 times higher recall rates and provided noticeably improved localization accuracy compared to the least-squares fitting method. Improvements in recall rate were also significant in the mid-SNR (Fig. 4(d–f)) and low-SNR cases (Fig. 4(g–i)).
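The evaluation just described can be sketched as follows: each localization is matched to its nearest ground-truth molecule, matches farther than 400 nm are discarded, and the recall and the lateral/axial RMS errors are computed. The nearest-neighbour matching rule, the synthetic test positions and the noise levels are illustrative assumptions, since the paper specifies only the 400 nm threshold.

    import numpy as np
    from scipy.spatial import cKDTree

    def evaluate(locs, ground_truth, max_dist=400.0):
        """locs, ground_truth: (N, 3) arrays of (x, y, z) positions in nm."""
        dist, idx = cKDTree(ground_truth).query(locs)    # nearest ground-truth for each localization
        ok = dist < max_dist
        recall = np.unique(idx[ok]).size / ground_truth.shape[0]   # each molecule counted once
        err = locs[ok] - ground_truth[idx[ok]]
        rms_lateral = np.sqrt(np.mean(err[:, 0]**2 + err[:, 1]**2))
        rms_axial = np.sqrt(np.mean(err[:, 2]**2))
        return recall, rms_lateral, rms_axial

    rng = np.random.default_rng(1)
    gt = np.column_stack([rng.uniform(0, 10000, (200, 2)), rng.uniform(0, 800, 200)])
    locs = gt[:150] + rng.normal(0, [30.0, 30.0, 60.0], size=(150, 3))   # 150 of 200 recovered
    print(evaluate(locs, gt))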


Fig. 4 Simulation analysis in comparison with the least-squares fitting method at molecular activation densities from 0.1 to 7 μm−2. Analysis results for the highest photon-emission rates (a–c), mid photon-emission rates (d–f) and the lowest photon-emission rates (g–i): lateral localization accuracy (a,d,g), axial localization accuracy (b,e,h) and recall rates (c,f,i). At each imaging density, the analysis was repeated 50 times, and the error bars denote standard deviation.


3.3. Experimental results

The proposed method was applied to both STORM and PALM data. For quantitative evaluation, we labeled the alpha-tubulin subunit of microtubules (MTs) with Alexa 647 in fixed cells. Briefly, African green monkey kidney cells (COS-7) were cultured in DMEM supplemented with 10% FBS (Sigma-Aldrich) in a cell culture incubator (37°C and 5% CO2). Cells were plated at a confluency of 200,000 cells/well ∼24 hr before fixation on sterile, clean 25 mm cover glasses (Menzel). Cells were rinsed with PBS (37°C), pre-extracted by rinsing for 15 s with a solution (37°C) containing 0.5% Triton-X in BRB80 (80 mM PIPES, 1 mM MgCl2, 1 mM EGTA, adjusted to pH 6.8 with KOH) supplemented with 4 mM EGTA. Cells were then fixed with methanol (−20°C) for 10 min, rinsed with PBS and blocked with 5% BSA (in PBS) for 30 min. Finally, cells were incubated with a mouse anti-alpha-tubulin primary antibody (Sigma, T5168) for 90 min, rinsed again with PBS and then incubated with a goat anti-mouse Alexa 647 F(ab′)2 secondary antibody fragment (Life Technologies, A-21237) for 45 min. Both antibodies were diluted 1:1000 in a solution containing 1% BSA and 0.1% Triton-X in PBS. For high-density and bright photoswitching, we used a cyclooctatetraene (COT)-based buffer as previously described [19]. Briefly, the buffer consisted of 10 mM PBS-Tris pH 7.5, 10 mM mercaptoethylamine (MEA) (Sigma-Aldrich 30070), 50 mM β-mercaptoethanol (BME) (Sigma-Aldrich M6250), 2 mM COT (Sigma-Aldrich 138924), 2.5 mM protocatechuic acid (PCA) (Sigma-Aldrich 37580) and 50 nM protocatechuate dioxygenase (PCD) (Sigma-Aldrich P8279). High-density frames were acquired with a 30 ms exposure time; the density was controlled with 405 nm illumination. At this density, even PSFs from a single tubule usually overlap, as shown in Fig. 5(a).


Fig. 5 3D super-resolution reconstruction of fixed microtubule data. Alpha-tubulin subunits of microtubules (MTs) labeled with Alexa 647 in COS-7 cells were imaged with a 30 ms exposure time. (a) A single raw high-density image. (b) A conventional wide-field image generated from 150 raw frames. (c) An SR image from LS fitting and (d) an SR image from FALCON3D, each using 150 frames of raw high-density data. (e–g) Close-up images of the regions in the solid white boxes in (b–d), respectively. (h–j) Close-up images of the regions in the dotted white boxes in (b–d), respectively. (k) Lateral and axial line profiles of the microtubule highlighted by the yellow arrows in (c–d). (l) Lateral and axial line profiles measured from (g). (m) Lateral and axial line profiles measured from (j). The colors in (l) and (m) correspond to the profiles of the microtubules highlighted by the arrows of the same colors in (g) and (j), respectively. The widths of the microtubules are given as full width at half maximum. Scale bars are 2 μm in (a–d) and 500 nm in (e–j).


We reconstructed SR images using 150 raw frames (Fig. 5). FALCON3D localized about 4 times more molecules than least-squares fitting, consistent with the simulation results. FALCON3D also reconstructed the tubular structures better than least-squares fitting, as validated from the line profiles. For the line profiles of a single tubule highlighted by the yellow marker (Fig. 5(k)), our method yielded full widths at half maximum (FWHM) of 29 nm and 61 nm in the lateral and axial directions, respectively. The least-squares fitting method gave an artificially narrow lateral FWHM of 21 nm, which is smaller than the true diameter of a microtubule (25 nm) further broadened by the antibodies. In addition, the axial distribution obtained by LS fitting was biased by about 50 nm. The differences were more distinct in dense and complex MT regions with higher imaging densities. As shown in Fig. 5(e–m), several MTs at different axial positions were well resolved with reliable spatial profiles by the proposed method (Fig. 5(l) and (m)), whereas the least-squares fitting method failed to reconstruct these tubules, missing many activated molecules in the central regions. Moreover, the least-squares fitting method frequently failed to resolve closely spaced MTs.

We also performed high-density localization of the endoplasmic reticulum (ER) in live COS-7 cells (Fig. 6, Media 1). For this imaging, cells were plated 24 hr prior to transfection at a confluency of 150,000 cells/well as described above. Cells were transfected with Rtn4a-tdEos using Fugene-6 and imaged 24 hr post-transfection. SR movies were reconstructed in which each SR frame was generated from 100 raw frames acquired over 3 seconds. Fast-changing morphological features were visible in the SR images produced by the proposed method, whereas the least-squares fitting method failed to process the high-density data, as shown in Fig. 6(a–c). Specifically, fast remodeling of the complex ER network is observed, as highlighted in Fig. 6(d–e). In addition, dynamic motions of tubule junctions, such as the emergence or shrinkage of tubules, were well recovered. We also measured ER tubule profiles highlighted by the yellow arrow in Fig. 6. The profiles showed diameters of 50 nm and 110 nm in the lateral and axial dimensions, respectively, consistent with reported sizes [20, 21]. A corresponding analysis of the data with the least-squares algorithm (Media 2) shows much lower recall and resolution.


Fig. 6 3D super-resolution reconstruction of live ER data. The endoplasmic reticulum protein reticulon-4, fused to tdEos, in a COS-7 cell was imaged with a 30 ms exposure time. (a) Conventional wide-field images generated by averaging 100 frames of raw high-density data from the non-astigmatic channel for each image, (b) SR images from LS fitting (see Media 2), (c) SR images from FALCON3D (see Media 1), with a temporal resolution of 3 seconds. (d–f) Close-up images corresponding to the regions of the white boxes (d), green boxes (e), and yellow boxes (f), respectively. Scale bars in (a–c) are 2 μm, and scale bars in (d–f) are 500 nm.


4. Discussion

We developed a hybrid optical system for high-density 3D localization microscopy, and proposed a new sparse recovery algorithm using a continuous-domain formulation for the analysis of these data. The mutual coherence of the hybrid system PSF revealed the following: “regular” biplane imaging alone already shows better performance than astigmatism for high-density localization, consistent with previous results [13–15], but the hybrid combination of astigmatism and biplane imaging improves performance even further. Compared with alternative approaches based on phase modulation techniques [22], our method has the advantage of being easy to implement on a standard inverted microscope with commercially available components.

Using simulations, we showed that our method achieves recall rates of > 70% in the limits of both high and low photon counts at high imaging density (3–5 molecules/μm2), while maintaining high localization precision (Fig. 4). We tested the experimental performance of our method on Alexa 647-labelled microtubules in fixed cells, and demonstrated that the combination of hybrid imaging and FALCON3D significantly improves recall and localization precision in all three dimensions compared with astigmatism and sparse-emitter fitting alone [13].

Finally, we demonstrated that our method facilitates high-speed, time-resolved 3D PALM by imaging the dynamics of the endoplasmic reticulum with nanoscale resolution (Fig. 6). For a sample expressing reticulon 4a fused to tdEos (Rtn4a-tdEos), we demonstrated 3D localization microscopy of fluorescent-protein-labelled structures with a time resolution of just 3 seconds per PALM image, using just 100 raw frames. High-speed 3D localization microscopy with a time resolution of 1 s has previously been demonstrated in live cells for clathrin-coated pits labelled with organic dyes [23]. However, these small structures intrinsically require only a few localizations for reconstruction, naturally resulting in faster temporal resolution. Furthermore, the challenges of specific live-cell labeling and control of dye photophysics currently limit dye-based super-resolution imaging mainly to membrane-bound structures. Live-cell super-resolution imaging of intracellular structures usually requires imaging of fluorescent protein fusions due to their high labeling specificity; to our knowledge, the temporal resolution of 3 seconds reported here represents the fastest 3D localization microscopy achieved with fluorescent proteins.

There are several ways in which the FALCON3D algorithm could be extended. Although the Taylor approximation of the PSF reduces complexity by several orders of magnitude compared to the discrete sparsity-promoting formulation, the computational complexity and run time remain significant. For example, our MATLAB-based implementation takes 40 seconds to process a 10 μm × 10 μm × 1 μm volume from a single raw camera frame at high activation density (5 μm−2) on a GPU (Nvidia GTX Titan). Since over 80% of the computational cost is attributed to the deconvolution of the 3D volume, a faster optimization algorithm or implementation could reduce the computation time. A parallelized implementation could also be realized by block-wise reconstruction [7, 14], at the expense of some localization performance. Finally, the localization performance of FALCON3D could be improved by applying more detailed noise models; using maximum-likelihood estimation with a more advanced noise model could improve performance in the limit of low photon counts.

5. Conclusion

We developed a new method for fast, live-cell 3D localization microscopy based on a hybrid optical system combining astigmatic and biplane imaging, and proposed a new 3D localization algorithm using a sparsity-driven continuous-domain formulation, called FALCON3D. The proposed approach was demonstrated to have robust localization performance on both simulated data and real experimental high-density STORM/PALM data. Furthermore, we demonstrated fast 3D super-resolution imaging of fluorescent proteins in living cells, resolving the nanoscale reorganization of the endoplasmic reticulum with a time resolution of 3 seconds. To the best of our knowledge, this is the fastest reported fluorescent-protein-based 3D localization microscopy to date. This work will facilitate three-dimensional imaging studies of nanoscale dynamics in living cells.

Appendix A: Supplementary Figures


Fig. 7 Experimentally measured PSFs for astigmatic imaging (Astigmatism) and biplane imaging (Biplane)


Acknowledgments

We thank Kyle Douglass (EPFL) and Karl Bellve (University of Massachusetts) for assistance with the pgFocus system. This work was supported by European Research Council Starting Grant 243016 (to S.J.H. and S.M.), Marie Curie Intra-European Fellowship PIEF-GA-2011-297918 (to S.J.H.), the NCCR Chemical Biology (to L.C. and S.M.), and NRF-2013M3A9B2076548 from the Korean Government (to J.M. and J.C.Y.).

References and links

1. M. J. Rust, M. Bates, and X. Zhuang, "Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)," Nat. Methods 3, 793–796 (2006).

2. S. T. Hess, T. P. Girirajan, and M. D. Mason, "Ultra-high resolution imaging by fluorescence photoactivation localization microscopy," Biophys. J. 91, 4258 (2006).

3. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, "Imaging intracellular fluorescent proteins at nanometer resolution," Science 313, 1642–1645 (2006).

4. H. Shroff, C. G. Galbraith, J. A. Galbraith, and E. Betzig, "Live-cell photoactivated localization microscopy of nanoscale adhesion dynamics," Nat. Methods 5, 417–423 (2008).

5. S. J. Holden, S. Uphoff, and A. N. Kapanidis, "DAOSTORM: an algorithm for high-density super-resolution microscopy," Nat. Methods 8, 279–280 (2011).

6. F. Huang, S. L. Schwartz, J. M. Byars, and K. A. Lidke, "Simultaneous multiple-emitter fitting for single molecule super-resolution imaging," Biomed. Opt. Express 2, 1377–1393 (2011).

7. L. Zhu, W. Zhang, D. Elnatan, and B. Huang, "Faster STORM using compressed sensing," Nat. Methods 9, 721–723 (2012).

8. E. A. Mukamel, H. Babcock, and X. Zhuang, "Statistical deconvolution for superresolution fluorescence microscopy," Biophys. J. 102, 2391–2400 (2012).

9. J. Min, C. Vonesch, H. Kirshner, L. Carlini, N. Olivier, S. Holden, S. Manley, J. C. Ye, and M. Unser, "FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data," Sci. Rep. 4, 4577 (2014).

10. F. Huang, T. M. P. Hartwich, F. E. Rivera-Molina, Y. Lin, W. C. Duim, J. J. Long, P. D. Uchil, J. R. Myers, M. A. Baird, W. Mothes, M. W. Davidson, D. Toomre, and J. Bewersdorf, "Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms," Nat. Methods 10, 653–658 (2013).

11. B. Huang, W. Wang, M. Bates, and X. Zhuang, "Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy," Science 319, 810–813 (2008).

12. M. F. Juette, T. J. Gould, M. D. Lessard, M. J. Mlodzianoski, B. S. Nagpure, B. T. Bennett, S. T. Hess, and J. Bewersdorf, "Three-dimensional sub-100 nm resolution fluorescence microscopy of thick samples," Nat. Methods 5, 527–529 (2008).

13. H. Babcock, Y. M. Sigal, and X. Zhuang, "A high-density 3D localization algorithm for stochastic optical reconstruction microscopy," Optical Nanoscopy 1, 1–10 (2012).

14. L. Gu, Y. Sheng, Y. Chen, H. Chang, Y. Zhang, P. Lv, W. Ji, and T. Xu, "High-density 3D single molecular analysis based on compressed sensing," Biophys. J. 106, 2443–2449 (2014).

15. A. Barsic, G. Grover, and R. Piestun, "Three-dimensional super-resolution and localization of dense clusters of single molecules," Sci. Rep. 4, 5388 (2014).

16. A. Edelstein, N. Amodaj, K. Hoover, R. Vale, and N. Stuurman, Computer Control of Microscopes Using μManager (John Wiley & Sons, Inc., 2010).

17. L. S. Churchman, Z. Ökten, R. S. Rock, J. F. Dawson, and J. A. Spudich, "Single molecule high-resolution colocalization of Cy3 and Cy5 attached to macromolecules measures intramolecular distances through time," Proc. Natl. Acad. Sci. USA 102, 1419–1423 (2005).

18. S. Liu and K. A. Lidke, "A multiemitter localization comparison of 3D superresolution imaging modalities," ChemPhysChem 15, 696–704 (2014).

19. N. Olivier, D. Keller, P. Gönczy, and S. Manley, "Resolution doubling in 3D-STORM imaging through improved buffers," PLoS ONE 8, e69004 (2013).

20. J. Hu, W. A. Prinz, and T. A. Rapoport, "Weaving the web of ER tubules," Cell 147, 1226–1231 (2011).

21. E. L. Snapp, "ER biogenesis: proliferation and differentiation," in The Biogenesis of Cellular Organelles (Landes Bioscience and Kluwer Academic/Plenum Publishers, 2004).

22. M. D. Lew, S. F. Lee, M. Badieirostami, and W. E. Moerner, "Corkscrew point spread function for far-field three-dimensional nanoscale localization of pointlike objects," Opt. Lett. 36, 202–204 (2011).

23. S. A. Jones, S.-H. Shim, J. He, and X. Zhuang, "Fast, three-dimensional super-resolution imaging of live cells," Nat. Methods 8, 499–505 (2011).

Supplementary Material (2)

Media 1: AVI (781 KB)     
Media 2: AVI (506 KB)     



