
Single molecule light field microscopy

Open Access

Abstract

We introduce single molecule light field microscopy (SMLFM), a new class of three-dimensional (3D) single molecule localization microscopy. By segmenting the back focal plane of a microscope objective with an array of microlenses to generate multiple 2D perspective views, the same single fluorophore can be imaged from different angles. These views, in combination with a bespoke fitting algorithm, enable the 3D positions of single fluorophores to be determined from parallax. SMLFM achieves up to 20 nm localization precision throughout an extended 6 µm depth of field. The capabilities of SMLFM are showcased by imaging membranes of fixed eukaryotic cells and DNA nanostructures below the optical diffraction limit.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. INTRODUCTION

Single molecule localization microscopy (SMLM) has emerged as one of the most popular approaches to superresolution fluorescence imaging, in part due to the relative simplicity of its experimental implementation [1]. SMLM achieves superresolution by localizing emitter positions with subdiffraction-limited precision in sparsely fluorescing specimens. In three-dimensional (3D) SMLM, axial information is usually obtained to the detriment of both precision and resolution, resulting from a reduction in photon throughput (attributable to additional optical elements); intrinsically higher background from out-of-focus emitters; and extended point spread functions (PSFs) that produce overlapping images of different emitters and ultimately degrade precision [2]. We introduce single molecule light field microscopy (SMLFM), a simple and highly efficient 3D superresolution imaging technique that combines the complementary strengths of SMLM and light field detection to achieve superresolution imaging throughout a continuous 3D volume.

Numerous approaches to extend the depth of field (DOF) of SMLM have been developed [3–6]. The most successful examples perform single-shot 3D imaging by modifying the shape of the intensity PSF to encode axial information. Astigmatic or rotating double-helix PSFs have DOFs ranging from 0.5 to 4 µm [7,8]. Self-bending PSFs such as Airy beams have also been used to encode the axial position in SMLM [9]. Recently, asymmetric Airy beam approaches have extended the technique's axial range to 7 µm [10]. Large axial ranges have been achieved using other wavefront engineering approaches [11], such as Tetrapod (Saddle Point) [12] and secondary astigmatism [13]. However, in these cases, extracting the super-resolved positions of single molecules is generally more challenging than in 2D SMLM because the engineered PSFs (with the exception of the Airy beam) cannot accurately be approximated by 2D Gaussian functions. As a result, computationally expensive maximum-likelihood approaches are required, which rely on phase retrieval [14,15] or spline fitting [16] to generate finely tuned templates that account for system- and sample-induced aberrations. Furthermore, the large spatial footprint of these sculpted PSFs decreases the maximum achievable localization density, significantly decreasing throughput. Another approach called multifocal plane microscopy (MPM) images a discrete number of axial planes to different lateral positions on one or more detectors [17–20], and can be combined with the aforementioned PSF engineering approaches. However, since an emitter located on a particular axial plane is in focus in only a subset of these images, while contributing background to the others, only a fraction of the total number of detected photons contributes to the precision of each localization.

Light field microscopes based on refractive microlens arrays (MLAs) have PSFs composed of an array of spots, each of which resembles a 2D Gaussian function, as is also the case for other pupil-bisecting methods [21,22]. Each spot remains compact throughout the DOF, which is extended with respect to the corresponding widefield microscope since each microlens subtends a small fraction of the full aperture of the microscope objective. Existing optimized algorithms [23,24] can be used to estimate the center of each focus with a precision much finer than its diffraction-limited width. It is well established that information captured by different microlenses can be combined to localize objects in 3D [25,26]. We demonstrate that the temporal sparsity of SMLM, which limits the probability of overlap between diffraction-limited images of distinct emitters, makes it an extremely attractive technique to combine with light field microscopy. We compare two SMLFM configurations tuned to different DOFs by characterizing their 3D precision using fluorescent beads, in photon regimes typical of bioimaging with popular labeling protocols. We demonstrate that SMLFM can accurately resolve features below the diffraction limit using nanorulers with 80 nm separation between binding sites. Finally, efficient detection and localization of single molecules in densely blinking specimens is demonstrated by imaging the membrane of fixed T-cells using both SMLFM configurations, achieving up to 25 3D localizations per frame.


Fig. 1. (a) Optical layout of a light field microscope with a microlens array positioned in a conjugate pupil plane. (b) The microlens array samples spatial and angular information from the wavefront, which exhibits asymmetric curvature about the primary image plane. Hence, two emitters located at $(x_i, y_i, z_i)$ (red) and $(x_i, y_i, -z_i)$ (blue) are imaged to different positions in each perspective view. (c) Simulated point spread functions for two different light field microscope configurations, with different magnifications of the back focal plane and hence different effective microlens NA (depth of field).


2. LIGHT FIELD MICROSCOPY

Light field microscopy (LFM) offers single-shot 3D imaging by simultaneously collecting light from a large, continuous DOF [27]. The emitter location is discriminated through wavefront sampling, generally using a refractive MLA. The MLA partitions the 2D detector into an array of 2D measurements, such that each pixel can be mapped to a 4D space, known as the light field ${\cal L}(x,y,u,v)$. Light field measurements encode both the spatial location and arrival direction of incident photons, where $(x,y)$ and $(u,v)$ denote spatial and angular coordinates, respectively. The MLA location in the detection path varies, but it is generally positioned in either a conjugate image plane or a conjugate pupil plane (also known as a back focal or Fourier plane). The MLA location dictates the sampling rates of the spatial and angular coordinates of the light field. When the MLA is placed in an image plane, the microlenses themselves sample the spatial domain, generally resulting in a large pixel size. Since this configuration results in a loss of spatial resolution (observed most acutely at the image plane), a spatially varying 5D PSF, and an axially dependent partition of photons between foci [28], the MLA is optimally located in a plane conjugate to the pupil of the microscope objective for SMLM. This configuration is known as Fourier light field microscopy (FLFM) [29–32]. FLFM provides more homogeneous resolution and signal-to-noise ratio (SNR) than traditional LFM configurations, albeit at the cost of field of view (FOV) and DOF; however, as demonstrated in this work, the FOV and DOF required to image whole cells can be readily achieved with off-the-shelf components.
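As a concrete illustration of this coordinate assignment, the sketch below (Python/NumPy) maps a 2D localization on the detector to the microlens, and hence the perspective view, through which it was imaged, expressed in normalized pupil coordinates with $\rho = 1$ at the pupil edge. The function and its arguments (pupil center, lenslet pitch, and pupil radius in detector pixels) are hypothetical names for quantities that would come from a calibration image; they are not part of the published software.

```python
import numpy as np

def pixel_to_view(x_px, y_px, pupil_centre_px, lenslet_pitch_px, pupil_radius_px):
    """Assign a detector-plane localization to the microlens (perspective view)
    behind which it was imaged, and return that lens center in normalized
    pupil coordinates (u, v), with sqrt(u**2 + v**2) = 1 at the pupil edge."""
    # index of the lenslet the spot falls behind (MLA assumed centered on the pupil)
    i = np.round((x_px - pupil_centre_px[0]) / lenslet_pitch_px)
    j = np.round((y_px - pupil_centre_px[1]) / lenslet_pitch_px)
    # center of that lenslet, normalized to the pupil radius
    u = i * lenslet_pitch_px / pupil_radius_px
    v = j * lenslet_pitch_px / pupil_radius_px
    return u, v
```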

In FLFM, each microlens locally apertures the wavefront and generates a focused image, displaced in the direction of, and at a distance proportional to, the average gradient of the apertured wavefront. Hence, as illustrated in Fig. 1, axially displaced emitters are imaged to different positions in each perspective view [33]. The full optical model used to estimate the 3D emitter position is described in detail in Section 1 of Supplement 1, where Eqs. (1) and (2) are derived (cf. Eqs. S1–S8). According to this optical model, which calculates the phase in the Fourier plane due to point source displacements from the focal point, the location of the foci in each subaperture, $(x_{uv}, y_{uv})$, is related to the 3D emitter position $(x_i, y_i, z_i)$ according to

$$\begin{pmatrix} x_{uv} \\ y_{uv} \end{pmatrix} = \begin{pmatrix} 1 & 0 & u\,\alpha(u,v) \\ 0 & 1 & v\,\alpha(u,v) \end{pmatrix} \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix}, \tag{1}$$
where $\alpha (u,v)$ is defined as
$$\alpha(u,v) = \frac{\mathrm{NA}}{n_s \sqrt{1 - \left( \frac{\mathrm{NA}\,\rho}{n_s} \right)^2}}, \tag{2}$$
where $\rho^2 = u^2 + v^2$ (equal to 1 at the pupil edge and 0 on the optical axis), $k$ is the free space wavenumber, $n_s$ is the sample refractive index, and $\mathrm{NA}$ is the numerical aperture of the microscope objective. The terms $u\alpha(u,v)z_i$ and $v\alpha(u,v)z_i$ describe the disparity (parallax) between images in different perspective views. However, since each microlens bounds a finite area and $\alpha(u,v)$ varies nonlinearly with the pupil position, the disparity between 2D localizations in different perspective views is a function of the average wavefront gradient across a given microlens. Full details of how the average wavefront gradient is calculated are provided in Supplement 1 (cf. Eqs. S1–S7).
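To make the forward model concrete, the minimal sketch below evaluates Eqs. (1) and (2) in Python/NumPy. For simplicity it evaluates $\alpha$ at the microlens center rather than averaging the wavefront gradient over the microlens aperture, and the default NA and $n_s$ are illustrative values rather than the parameters of either configuration; note that Eq. (2) is only defined where $\mathrm{NA}\,\rho < n_s$.

```python
import numpy as np

def alpha(u, v, NA=1.4, n_s=1.518):
    """Axial sensitivity factor of Eq. (2) at normalized pupil coordinates
    (u, v); only valid where NA * rho < n_s, with rho = sqrt(u**2 + v**2)."""
    rho_sq = u**2 + v**2
    return NA / (n_s * np.sqrt(1.0 - (NA / n_s)**2 * rho_sq))

def project(emitter, u, v, NA=1.4, n_s=1.518):
    """Forward model of Eq. (1): predicted 2D spot position (x_uv, y_uv)
    for an emitter at (x_i, y_i, z_i), seen through the microlens whose
    center sits at pupil coordinates (u, v)."""
    x_i, y_i, z_i = emitter
    a = alpha(u, v, NA, n_s)
    return np.array([x_i + u * a * z_i, y_i + v * a * z_i])
```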

Fig. 2. Summary of the algorithm used to estimate 3D emitter position in SMLFM. (a–b) Images of point emitters are detected and localized by Gaussian fitting using traditional 2D SMLM algorithms. Each localization is indexed by the view it appeared in (illustrated here by different shades of gray). Scale bar in (a) represents 15 µm. (a) (inset) Example of an image of a single molecule in a perspective view. (c) Localizations in different views corresponding to the same emitter are identified by applying the constraints of the optical model and removed from the pool. The process is iterated until no more 2D localizations fit the optical model. (d) The ordinary least-squares solution is calculated to give each 3D localization. (e) The 3D localizations are plotted to yield a super-resolved image.


Given a sufficient number of photons, the center of each focus can be estimated with a precision much finer than its width by fitting a 2D Gaussian profile [34–36]. A convenient feature of SMLFM is that existing algorithms and software packages [24] designed and optimized for traditional 2D SMLM can be applied to raw SMLFM data to yield a set of $n$ localizations $\{(x_{uv}, y_{uv})\}$. ThunderSTORM was used throughout this work [23]. Given this set of localizations, the 3D position of a point emitter, ${\textbf{x}} = (x_i, y_i, z_i)$, can be estimated as the solution to a linear set of equations of the form $A{\textbf{x}} = {\textbf{b}}$ (cf. Eqs. S8 and S10 in Supplement 1), where ${\textbf{b}}$ is a vector representing the set of 2D localizations for a single emitter and $A$ describes the disparity between localizations in different perspective views. Since the 2D localizations are normally distributed, ordinary least squares can be used to solve $A{\textbf{x}} = {\textbf{b}}$ for the 3D emitter position. The accuracy of the optical model used for light field fitting was not found to significantly affect 3D localization precision, which is remarkably robust due to the circular symmetry of optical models of defocus. Localizations in diametrically opposed microlenses exhibit the same, radially flipped, inaccuracy, which results in errors in the estimated axial position with respect to the ground truth. This is demonstrated in Fig. S1 of Supplement 1 for the particular case of inadequately sampling the wavefront gradient. To ensure accurate axial position estimation in SMLFM, calibration scans should be used to verify results. Increasing the accuracy of the optical models reduces the fit error to values approaching the true precision, as determined by repeated experimental and simulated measurements. The fit error is calculated according to the conventional definition for ordinary least squares and is discussed more comprehensively in Section 1 of Supplement 1 (cf. Eqs. S12–S15).
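The least-squares step can be sketched as follows (Python/NumPy, reusing the alpha() helper from the sketch above). The data structure and the use of the RMS residual as a stand-in for the fit error are simplifications made for illustration; the exact fit-error definition used in this work is given in Eqs. S12–S15 of Supplement 1.

```python
import numpy as np

def fit_3d(localizations, NA=1.4, n_s=1.518):
    """Ordinary least-squares estimate of (x_i, y_i, z_i) from 2D localizations
    of the same emitter seen in several perspective views.

    `localizations`: iterable of (u, v, x_uv, y_uv) tuples, with (u, v) the
    normalized pupil coordinates of the microlens and (x_uv, y_uv) the 2D
    localization expressed in sample-plane units."""
    A, b = [], []
    for u, v, x_uv, y_uv in localizations:
        a = alpha(u, v, NA, n_s)       # from the forward-model sketch above
        A.append([1.0, 0.0, u * a]); b.append(x_uv)
        A.append([0.0, 1.0, v * a]); b.append(y_uv)
    A, b = np.asarray(A), np.asarray(b)
    xyz, residuals, *_ = np.linalg.lstsq(A, b, rcond=None)
    # RMS residual as a simple proxy for the fit error discussed in the text
    rms = np.sqrt(residuals[0] / len(b)) if residuals.size else 0.0
    return xyz, rms
```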

In most SMLM experiments, it is necessary to detect several hundred thousand localizations to generate high-resolution datasets and achieve Nyquist sampling of the underlying structure [37]. Hence, any viable 3D SMLM approach must be able to detect and localize multiple emitters in each frame. As demonstrated in Fig. 2(c), in SMLFM this is achieved by using Eqs. (1) and (2) to identify the most likely subset of all 2D localizations in $\{(x_{uv}, y_{uv})\}$ that correspond to a single emitter. Briefly, the set of localizations $\{(x_{uv}, y_{uv})\}$ is ordered by decreasing photon number and increasing radial coordinate. Taking each member of this ordered set as a “seed” localization, candidate localizations in other perspective views (within a permitted disparity range set according to the DOF of the configuration) are identified and grouped. An ordinary least squares solution to $A{\textbf{x}} = {\textbf{b}}$ is calculated to yield ${\textbf{x}} = (x_i, y_i, z_i)$ for the largest and, hence, most likely group of 2D localizations. If the 3D SMLFM localization is successful (in terms of having a fit error below the threshold), the corresponding group of 2D localizations is removed from the available pool. The process is repeated until no more localizations can be grouped and fitted.
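A simplified sketch of this iterative grouping and fitting loop is given below (Python, reusing fit_3d() from the previous sketch). The data structures and the nearest-candidate-per-view rule are assumptions made for illustration; the published procedure applies additional consistency checks, so this should be read as a sketch of the idea rather than the authors' implementation.

```python
import numpy as np

def group_and_fit(locs_2d, max_disparity, error_threshold):
    """Greedy grouping of 2D localizations into 3D emitters (cf. Fig. 2(c)-(d)).

    `locs_2d`: list of dicts with keys 'u', 'v', 'x', 'y', 'photons'.
    `max_disparity`: largest permitted spot shift between views, set by the DOF.
    `error_threshold`: reject 3D fits whose fit error exceeds this value."""
    # Seed ordering: decreasing photon count, then increasing radial coordinate.
    order = sorted(range(len(locs_2d)),
                   key=lambda i: (-locs_2d[i]['photons'],
                                  locs_2d[i]['x']**2 + locs_2d[i]['y']**2))
    available = set(order)
    emitters = []
    for seed_idx in order:
        if seed_idx not in available:
            continue
        seed = locs_2d[seed_idx]
        # For every other perspective view, keep the nearest candidate (if any)
        # lying within the permitted disparity range of the seed.
        best = {}
        for j in available:
            cand = locs_2d[j]
            view = (cand['u'], cand['v'])
            if j == seed_idx or view == (seed['u'], seed['v']):
                continue
            d = np.hypot(cand['x'] - seed['x'], cand['y'] - seed['y'])
            if d < max_disparity and d < best.get(view, (np.inf, None))[0]:
                best[view] = (d, j)
        group_idx = [seed_idx] + [j for _, j in best.values()]
        if len(group_idx) < 2:
            continue                      # a single view cannot constrain z
        group = [locs_2d[j] for j in group_idx]
        xyz, err = fit_3d([(g['u'], g['v'], g['x'], g['y']) for g in group])
        if err < error_threshold:
            emitters.append(xyz)
            available -= set(group_idx)   # successful groups leave the pool
    return emitters
```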

Due to sample and system aberrations, the phase in the pupil plane cannot be entirely accounted for by point source displacements. Since intensity and angular information is captured in SMLFM, it is possible to directly measure aberrations using the 2D localizations themselves, similar to the method used in [38]. For all experimental data presented in this work, these aberrations were estimated by measuring the average residual disparity across the FOV, for emitters located within 0.5 µm of the focal plane. For each perspective view, the residual disparity was estimated as the vector between the measured 2D localizations and the projected localizations predicted by the forward optical model [Eq. (1)] for the fitted 3D position, $({x_i},{y_i},{z_i}\!)$. The residual disparity was subsequently subtracted from all localizations and the light field fitting algorithm rerun to recover the 3D position of point sources. The 3D SMLFM fitting procedure is summarized in Fig. 2. For further details regarding aberration correction, along with a summary of the parameters used for both 2D localization and light field fitting, refer to Section 1 and Table S1 in Supplement 1.
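A minimal sketch of that per-view residual-disparity estimate is given below (Python/NumPy, reusing project() from the forward-model sketch; the 0.5 µm focal window follows the text, while the data structures are assumptions of this sketch). The returned offsets would be subtracted from all 2D localizations in the corresponding view before rerunning the light field fit.

```python
import numpy as np

def estimate_view_offsets(fits, focal_window_um=0.5):
    """Estimate the average residual disparity of each perspective view from
    emitters fitted within `focal_window_um` of the focal plane.

    `fits`: list of (xyz, group) pairs, where `group` contains the 2D
    localizations used for that 3D fit as (u, v, x_uv, y_uv) tuples.
    Returns a dict mapping each view (u, v) to its mean residual vector."""
    residuals = {}
    for xyz, group in fits:
        if abs(xyz[2]) > focal_window_um:
            continue                      # only use near-focus emitters
        for u, v, x_uv, y_uv in group:
            predicted = project(xyz, u, v)            # forward model, Eq. (1)
            residuals.setdefault((u, v), []).append(
                np.array([x_uv, y_uv]) - predicted)
    return {view: np.mean(r, axis=0) for view, r in residuals.items()}
```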


Fig. 3. (a–b) Lateral and axial Cramér–Rao lower bounds for configurations 1 and 2. The localization precision, calculated as the standard deviation of 10 repeated estimates, is also plotted. The partially illuminated exterior microlenses are not included in the CRLB or localization precision estimates. (c–d) Comparison of the lateral and axial Cramér–Rao lower bounds for configurations 1 and 2 (calculated using all microlenses) with double helix and tetrapod point spread function engineering approaches for 6000 detected photons.



Fig. 4. (a) Images of fluorescent beads in different views for configuration 1 and configuration 2. The red circles illustrate the wavefront diameter in the BFP. The white lines indicate the microlens edges. In both configurations, the corner microlenses were partially illuminated and were not used for localization. Scale bars represent 1 µm. (b–c) Lateral and axial localization precision (circular markers) and fit error $({{\Delta _x},{\Delta _z}})$ as a function of the axial position ($z$) of an emitter for (b) configuration 1 and (c) configuration 2. (d) $1/\sqrt N$ fit of lateral and axial precision is plotted as a function of number of photons for configurations 1 and 2. Circular markers represent the average value of precision (for a bin width of 500 photons). The full dataset is plotted in Fig. S8 of Supplement 1. (e–f) 50 nm steps can be resolved using both configurations. (g–i) Histograms (12 nm bin width) of single molecule localizations for three representative nanorulers. Gaussian fits are also plotted, with estimates of average distance between emitters indicated for each nanoruler. (h) Inset: Image of a single nanoruler (scale bar represents 25 nm), ${\sigma _x}$ corresponds to the average standard deviation of the Gaussian fit.


3. SMLFM OPTICAL DESIGN

A standard widefield microscope can be converted to a Fourier light field microscope by adding two components: a relay lens $({\rm L_3})$ and an MLA. ${\rm L_3}$ is placed in a $4\!f$ configuration with the tube lens, ${\rm L_2}$, which images the back focal (Fourier) plane onto the MLA. An sCMOS sensor is located at the focal plane of the MLA. The performance of SMLFM is primarily determined by the MLA properties. Following well-established guidance on magnification and sampling for optimal 2D localization precision, the effective pixel size in SMLFM should be approximately equal to the standard deviation of each PSF [35]; this necessitates small-NA (long focal length) microlenses to achieve a suitable pixel size and FOV for imaging single cells. The DOF of SMLFM is dictated by the effective numerical aperture of each subaperture, defined as the portion of the objective NA spanned by each microlens. The axial sensitivity of SMLFM also depends on the microlens pitch, since the disparity between 2D localizations in different perspective views is proportional to the average gradient of the apertured wavefront. Furthermore, one of the most crucial considerations in SMLFM is the division of emitted photons between perspective views, which affects the 2D localization precision. The SNR depends on the total number of microlenses that partition the pupil. To explore the interplay between the microlens pitch and the performance of SMLFM, two different configurations of the light field microscope were built and tested (hereafter referred to as configuration 1 and configuration 2).
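The scaling of these design trade-offs can be illustrated with the rough calculation below (Python; the numerical values are placeholders chosen only for illustration, not the parameters of configurations 1 or 2, which are listed in Table S1 of Supplement 1). It applies the classical $n\lambda/\mathrm{NA}^2$ depth-of-field scaling to the effective NA of a single microlens.

```python
def subaperture_properties(NA=1.4, n_s=1.518, wavelength_um=0.68,
                           lenses_across_pupil=3.0):
    """Rough per-microlens (subaperture) properties that set the SMLFM DOF.

    `lenses_across_pupil` is the number of microlenses spanning the magnified
    pupil diameter; all defaults are illustrative placeholder values."""
    na_eff = NA / lenses_across_pupil          # portion of the objective NA per microlens
    dof_um = n_s * wavelength_um / na_eff**2   # classical n*lambda/NA^2 scaling
    return na_eff, dof_um

# Example: ~3 microlenses across the pupil gives an effective NA of ~0.47 and a
# subaperture depth of field of roughly 5 µm, i.e. the few-micrometer range
# discussed in the text.
```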

Since the robustness of SMLFM to aberrations reduces the requirement for refractive index matching, an oil-immersion objective lens was used to maximize collection efficiency. The lens used also had a pupil diameter that was easily magnified by the tube lens and off-the-shelf achromatic lenses ${\rm L_3} = 75\;{\rm mm}$ (configuration 1) and ${\rm L_3} = 100\;{\rm mm}$ (configuration 2) to a diameter approximately equal to an integer number of microlenses. The same square-lattice MLA (SUSS MicroOptics, 18-00672), with a microlens pitch of 1015 µm and a focal length of 26.1 mm, was used in both prototypes. These two configurations have differing numbers of illuminated microlenses and magnifications. For precise details of both configurations, refer to Table S1 in Supplement 1. The square lattice of the MLA used in our experiments results in partially illuminated microlenses, as illustrated in Figs. 1 and 4. Since this results in distorted PSFs, all data from these microlenses were excluded from analysis in this proof-of-principle work. As a result, the maximum photon throughput was reduced to 65% (configuration 1) and 87% (configuration 2). In future iterations, a hexagonal array of microlenses could be used to provide better tessellation of the pupil and a higher photon throughput or, alternatively, data in the exterior microlenses could be deconvolved prior to the 2D localization step.

To evaluate the performance of the SMLFM localization algorithm in both configurations the Cramér–Rao lower bound (CRLB) [36] was computed for simulated images with an average of 6000 detected photons. Details of the simulations used to generate the data are presented in Section 2 of Supplement 1.

Figure 3 demonstrates that both configurations exhibit a theoretical precision below 20 nm throughout a 5 µm DOF. Furthermore, the precision achieved using the SMLFM localization algorithm outlined in Section 2 approaches the CRLB at the focal plane. While the CRLB does not predict isotropic precision, the axial and lateral localization precision estimates exhibit a close relationship. Discrepancies between the achieved localization precision and the CRLB increase with axial distance from the focal plane, but lateral and axial precision remain below 20 nm and 40 nm, respectively, throughout the axial range. This is primarily due to the fact that, for the same number of photons detected at the sensor, the average number of photons per light field fit decreases with increasing axial emitter position. In this exploration of the feasibility of SMLFM, 3D point position estimation is based solely on the geometry of the constellation of 2D localizations in different perspective views for each emitter. It is anticipated that improvements in localization precision could be achieved by extending the optical model beyond the current analysis to include a greater number of parameters. In particular, changes in the lateral profile of each focus with axial position could be incorporated. Furthermore, different perspective views could be weighted according to their information content, and the simple least-squares fitting algorithm could be replaced with a maximum likelihood approach.

As expected, the two SMLFM configurations differ in terms of maximum precision and DOF. Configuration 1, having fewer lenses (and therefore a larger effective NA), achieves a slightly better lateral precision at the focal plane at the expense of DOF. However, both configurations achieve sub-5 nm lateral precision at the focal plane. The similarity of the curves plotted in Figs. 3(a) and 3(b) suggests that the configuration with the largest DOF is preferable. A caveat is that decreasing the photon flux per microlens increases the noise floor below which 2D detection and localization become difficult (refer to Figs. S8 and S9 in Supplement 1). As such, the choice of MLA in SMLFM must balance the achievable SNR against the required depth of field.

At photon fluxes routinely achievable using common labeling protocols for single molecule imaging, the theoretical performance of SMLFM is comparable to leading 3D SMLM techniques such as double helix and saddle point (tetrapod), as summarized in Figs. 3(c) and 3(d). Furthermore, the detection and localization procedures used in double helix and saddle point SMLM are generally more complex than the 2D localization and least-squares light field fitting algorithm implemented for SMLFM, since additional steps such as template matching are necessary. 3D localization in SMLFM is as straightforward as in astigmatism-based imaging, which is one of the most widely used 3D SMLM approaches due to its relative simplicity. The ease of implementation of in situ aberration correction in SMLFM is another key advantage with respect to other 3D SMLM techniques, where it is generally necessary to estimate the experimental pupil function (for instance, using phase retrieval) and incorporate it into the optical model to improve experimental localization precision.


Fig. 5. (a) Background subtracted camera frame of Alexa-647 captured using SMLFM (configuration 1). Scale bar represents 15 µm. (b) Insets: Images captured in each perspective view. (c) Histogram showing the distribution of the number of photons emitted per event. A median of 5569 photons were collected per event. (d) Photobleaching curves of Alexa-647 imaged with light field microscopy. The normalized, integrated intensity in each perspective view is plotted as a function of time. Traces from different perspective views are distinguished by color (Visualization 1 and Visualization 2).


4. RESULTS AND DISCUSSION

To benchmark the performance of SMLFM, a 2D sample comprising 100 nm fluorescent beads (TetraSpeck Fluorescent Microspheres Kit (T14792), ThermoFisher) immobilized on a coverslip ($\# 1.5$ thickness) was imaged. Data were acquired as the microscope stage was translated in fixed steps of 50 nm along the optical axis. For both configurations, 4000 net photons were detected on average across the entire axial range. The localization precision was calculated as the standard deviation of the fitted 3D position at each 50 nm step, with 10 repeats at an exposure of 10 ms. Figure 4 presents a summary of the results. For configuration 1, isotropic lateral and axial localization precision was measured, remaining below 20 nm throughout a 3 µm imaging depth (and below 50 nm over a 4 µm axial range). Similar to the simulations, configuration 2 exhibited a larger DOF, with the isotropic lateral and axial precision remaining below 20 nm over an extended 5 µm range. At an average photon flux of 4000 per event, this value is competitive with other 3D localization techniques [11,12,39]. The data presented in Figs. 4(b) and 4(c) demonstrate that the fit error is a robust upper bound for the precision across all depths and can be used to evaluate the quality of each fit.
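In this calibration, the per-step precision is simply the spread of repeated fits at each stage position, which can be computed as in the short sketch below (Python/NumPy; the dictionary layout is an assumption of this sketch).

```python
import numpy as np

def precision_per_step(positions_by_step):
    """Localization precision at each 50 nm stage step, computed as the
    standard deviation of repeated 3D fits at that step.

    `positions_by_step`: dict mapping stage z (nm) to an (N, 3) array of the
    N repeated fitted (x, y, z) positions. Returns per-axis precision."""
    return {z: np.std(np.asarray(p), axis=0, ddof=1)
            for z, p in positions_by_step.items()}
```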

The relationship between SMLFM localization precision and the number of photons was measured by varying the laser power to explore a range of net detected photons. The 10 ms exposure time was kept constant. These data were acquired over an axial range of 4 µm (configuration 1) and 7 µm (configuration 2). The localization precision was again calculated as the standard deviation of the fitted 3D position across 20 repeats (10 repeats for configuration 2). A summary of results is presented in Fig. 4(d), where a $1/\sqrt N$ fit of lateral and axial precision is plotted as a function of the number of photons for configurations 1 and 2. Circular markers representing the average value of precision (bin width of 500 photons) have also been plotted. The full dataset is plotted in Fig. S8 of Supplement 1. At low photon numbers, configuration 1 exhibits better performance than configuration 2 due to the higher number of photons per microlens, resulting in higher SNR and better 2D localization precision. The $1/\sqrt N$ fit does not capture the behavior exhibited by configuration 2 at low photon numbers, where 2D detection and localization become difficult due to the higher noise floor, reached at around 1000 photons. However, at sufficiently high photon numbers, the performances of the two configurations become comparable. The precision floor of both configurations is approximately isotropic, with values of 8 nm (configuration 1) and 10 nm (configuration 2), respectively. Furthermore, a linear, monotonic relationship between the fitted axial position and the stage position was observed for both configuration 1 and configuration 2, as shown in Figs. 4(e) and 4(f). Crucially, clear contrast can be observed in the 50 nm axial steps. To conclusively demonstrate that SMLFM can accurately resolve features below the diffraction limit, DNA origami nanorulers (GATTA-PAINT 80 nm, GATTAQuant GmbH) were imaged. 1D histograms of three representative nanorulers are plotted in Figs. 4(g)–4(i), which show an average spacing of $82.8 \pm 1.2\;{\rm nm}$ and a standard deviation of $21.4 \pm 0.3\;{\rm nm}$. Altogether, the experimental results presented in Fig. 4 confirm the viability of single molecule light field microscopy.
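The $1/\sqrt N$ fit referred to above can be reproduced with a standard nonlinear least-squares call, as sketched below (Python/SciPy; the single-parameter $\sigma(N) = a/\sqrt{N}$ model is an assumption of this sketch and deliberately ignores the precision floor discussed in the text).

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_inverse_sqrt(photons, precision_nm):
    """Fit sigma(N) = a / sqrt(N) to (photon count, measured precision) pairs
    and return the scale factor a (in nm * sqrt(photons))."""
    model = lambda N, a: a / np.sqrt(N)
    (a,), _ = curve_fit(model, np.asarray(photons, float),
                        np.asarray(precision_nm, float), p0=(500.0,))
    return a
```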


Fig. 6. Super-resolved images of Jurkat T cells captured with single molecule light field microscopy. (a) (i) Horizontal and (ii) vertical cross-sections through data acquired with configuration 1. Scale bar represents 2 µm. Inset: Line profiles plotted for $xy$ and $zy$ projections through a microvillus demonstrate that features can be resolved beyond the diffraction limit in SMLFM. (b) Kernel density plots summarizing the characteristics of the acquired data. Data with fit error below 80 nm, indicated by the red dashed line in (ii), were included in the final visualizations. (i) Number of photons per light field fit for the filtered localizations. (ii) 3D fit error and axial position for all light field localizations. (iii) Axial locations of filtered localizations. (c) (i) Horizontal and (ii) vertical cross-sections through data acquired with configuration 2.


The single molecule sensitivity of SMLFM was conclusively demonstrated by imaging Alexa-647 dispersed on a coverslip ($\# 1.5$ thickness) with a 70 ms exposure time. Fluorescent traces exhibiting discrete signal levels, characteristic of single molecule photobleaching events, were observed [40]. Figure 5 shows an example of single-step photobleaching of a fluorophore at the tail end of the distribution of typical localized molecules acquired in this experiment. Traces of the integrated intensity from images of the emitter in each perspective view are plotted. As expected, spatio-temporal correlations are observed between the measurements from different views.

To examine the superresolution structural imaging capabilities of SMLFM, we imaged the membrane of fixed Jurkat T-cells using point accumulation for imaging of nanoscale topography (PAINT), based on the stochastic binding of fluorescent wheat germ agglutinin [41]. Cells were imaged using HILO illumination to reduce background. Datasets comprising 45,000 to 150,000 frames were acquired over 1 to 3 h. Refer to Supplement 1 for full details of the experimental parameters. Typical frames, which capture information throughout the depth of field, contained 45 2D localizations, corresponding to, on average, 13 3D light field localizations. After filtering by fit error (using an upper limit of 80 nm), experiments achieved averages between three and nine light field localizations per frame. The density of localizations practically achieved in our SMLFM experiments is approximately 600,000 localizations (≥300,000 post-filtering) over 45,000 frames with a FOV of 15 µm × 15 µm. This high density is enabled by the ability to fit multiple molecules with the same lateral position but differing axial positions and by the relatively small extent of the SMLFM PSF. This localization density is competitive with other large-DOF 3D SMLM approaches that have been practically demonstrated on whole cells to generate data over a DOF greater than 2 µm. For instance, saddle point, double helix PSF, and MPM have been used successfully in such experiments to generate up to 33 total localizations per frame and up to three filtered localizations per frame [8,11,17]. In comparison, our results demonstrate 13 total light field localizations per frame and nine filtered light field localizations per frame. Visualizations of these filtered SMLFM localizations are shown in Fig. 6 for two such experiments, one for each SMLFM configuration. Horizontal and vertical projections through each cell demonstrate that the resolution of SMLFM is sufficient to resolve the 3D membrane contour and microvilli.

5. CONCLUSION

We have demonstrated the viability of SMLFM for scanless 3D superresolution imaging. Our results show that SMLFM can localize single molecules with a near-isotropic precision of 20 nm using only a few thousand emitted photons, a performance comparable to other 3D imaging techniques [3–6]. We have also demonstrated detection and 3D localization of single molecules in densely blinking specimens, achieving up to 25 light field localizations per frame in data sets of 45,000 to 150,000 frames. The mechanism that enables SMLFM, disparity between perspective views, is one that reveals the underlying wavefront structure and amplitude of the field in the pupil. Such data enable post-acquisition aberration correction without requiring phase retrieval or z-dependent calibration scans. This rich information, coupled with the simple PSF footprint and the optical properties of MLAs, gives SMLFM the potential to offer highly accurate and precise multicolor 3D nanoscopy over whole eukaryotic cell volumes.

Funding

Engineering and Physical Sciences Research Council (EP/G037256/1, EP/L015455/1, EP/M003663/1, EP/R025398/1); Cambridge Commonwealth, European and International Trust; Higher Education Commission, Pakistan; Wellcome Trust (212936/Z/18/Z); Royal Society (RGF\EA\181021, uf120277, URF\R\180029).

Acknowledgment

The authors would like to thank Dr. Michael Shaw for useful discussions.

Kevin O’Holleran conceptualized the project. O’Holleran and Steven F. Lee supervised and administered the project. Ruth R. Sims, Sohaib Abdul Rehman, Leila Muresan, and O’Holleran developed the methodology. Sims, Abdul Rehman, Adam Clark, Martin O. Lenz, and O’Holleran conducted the experimental investigation. Edward W. Sanders and Aleks Ponjavic provided the cell samples and labeling methodology. Sims, Abdul Rehman, Sarah I. Benaissa, Ezra Bruggeman, Muresan, and O’Holleran performed formal analysis of data and developed software. Sims was responsible for visualization of results in the paper. Sims, Abdul Rehman, and O’Holleran wrote the paper. All authors reviewed and edited the final paper.

Disclosures

The authors declare no conflicts of interest.

 

See Supplement 1 for supporting content.

REFERENCES

1. J. Vangindertael, R. Camacho, W. Sempels, H. Mizuno, P. Dedecker, and K. P. F. Janssen, “An introduction to optical super-resolution microscopy for the adventurous biologist,” Methods Appl. Fluoresc. 6, 022003 (2018). [CrossRef]  

2. M. Lakadamyali, H. Babcock, M. Bates, X. Zhuang, and J. Lichtman, “3D multicolor super-resolution imaging offers improved accuracy in neuron tracing,” PLoS ONE 7, e30826 (2012). [CrossRef]  

3. S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. USA 106, 2995–2999 (2009). [CrossRef]  

4. P. Bon, J. Linarès-Loyez, M. Feyeux, K. Alessandri, B. Lounis, P. Nassoy, and L. Cognet, “Self-interference 3D super-resolution microscopy for deep tissue investigations,” Nat. Methods 15, 449–454 (2018). [CrossRef]  

5. Y. Shechtman, L. E. Weiss, A. S. Backer, S. J. Sahl, and W. E. Moerner, “Precise three-dimensional scan-free multiple-particle tracking over large axial ranges with tetrapod point spread functions,” Nano Lett. 15, 4194–4199 (2015). [CrossRef]  

6. A. von Diezmann, Y. Shechtman, and W. E. Moerner, “Three-dimensional localization of single molecules for super-resolution imaging and single-particle tracking,” Chem. Rev. 117, 7244–7275 (2017). [CrossRef]  

7. B. Huang, W. Wang, M. Bates, and X. Zhuang, “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science 319, 810–813 (2008). [CrossRef]  

8. A. R. Carr, A. Ponjavic, S. Basu, J. McColl, A. M. Santos, S. Davis, E. D. Laue, D. Klenerman, and S. F. Lee, “Three-dimensional super-resolution in eukaryotic cells using the double-helix point spread function,” Biophys. J. 112, 1444–1454 (2017). [CrossRef]  

9. J. Shu, J. Vaughan, and X. Zhuang, “Isotropic 3D super-resolution imaging with a self-bending point spread function,” Nat. Photonics 8, 302–306 (2014). [CrossRef]  

10. Y. Zhou, P. Zammit, V. Zickus, J. M. Taylor, and A. R. Harvey, “Twin-Airy point-spread function for extended-volume particle localization,” Phys. Rev. Lett. 124, 198104 (2020). [CrossRef]  

11. A. Aristov, B. Lelandais, E. Rensen, and C. Zimmer, “ZOLA-3D allows flexible 3D localization microscopy over an adjustable axial range,” Nat. Commun. 9, 2409 (2018). [CrossRef]  

12. Y. Shechtman, S. J. Sahl, A. S. Backer, and W. E. Moerner, “Optimal point spread function design for 3D imaging,” Phys. Rev. Lett. 113, 133902 (2014). [CrossRef]  

13. Y. Zhou and G. Carles, “Precise 3D particle localization over large axial ranges using secondary astigmatism,” Opt. Lett. 45, 2466–2469 (2020). [CrossRef]  

14. R. McGorty, J. Schnitzbauer, W. Zhang, and B. Huang, “Correction of depth-dependent aberrations in 3D single-molecule localization and super-resolution microscopy,” Opt. Lett. 39, 275–278 (2014). [CrossRef]  

15. P. N. Petrov, Y. Shechtman, and W. E. Moerner, “Measurement-based estimation of global pupil functions in 3D localization microscopy,” Opt. Express 25, 7945–7959 (2017). [CrossRef]  

16. H. P. Babcock and X. Zhuang, “Analyzing single molecule localization microscopy data using cubic splines,” Sci. Rep. 7, 552 (2017). [CrossRef]  

17. B. Hajj, J. Wisniewski, M. El Beheiry, J. Chen, A. Revyakin, C. Wu, and M. Dahan, “Whole-cell, multicolor superresolution imaging using volumetric multifocus microscopy,” Proc. Natl. Acad. Sci. USA 111, 17480–17485 (2014). [CrossRef]  

18. M. F. Juette, T. J. Gould, M. D. Lessard, M. J. Mlodzianoski, B. S. Nagpure, B. T. Bennett, S. T. Hess, and J. Bewersdorf, “Three-dimensional sub–100 nm resolution fluorescence microscopy of thick samples,” Nat. Methods 5, 527–529 (2008). [CrossRef]  

19. A. Descloux, K. S. Grußmayer, E. Bostan, T. Lukes, A. Bouwens, A. Sharipov, S. Geissbuehler, A.-L. Mahul-Mellier, H. A. Lashuel, M. Leutenegger, and T. Lasser, “Combined multi-plane phase retrieval and super-resolution optical fluctuation imaging for 4D cell microscopy,” Nat. Photonics 12, 165–172 (2018). [CrossRef]  

20. S. Ram, P. Prabhat, J. Chao, E. Sally Ward, and R. J. Ober, “High accuracy 3D quantum dot tracking with multifocal plane microscopy for the study of fast intracellular dynamics in live cells,” Biophys. J. 95, 6025–6043 (2008). [CrossRef]  

21. Y. Sun, J. D. McKenna, J. M. Murray, E. M. Ostap, and Y. E. Goldman, “Parallax: high accuracy three-dimensional single molecule tracking using split images,” Nano Lett. 9, 2676–2682 (2009). [CrossRef]  

22. D. Baddeley, M. B. Cannell, and C. Soeller, “Three-dimensional sub-100 nm super-resolution imaging of biological samples using a phase ramp in the objective pupil,” Nano Res. 4, 589–598 (2011). [CrossRef]  

23. M. Ovesný, P. Křížek, J. Borkovec, Z. Švindrych, and G. M. Hagen, “ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging,” Bioinformatics 30, 2389–2390 (2014). [CrossRef]  

24. D. Sage, T.-A. Pham, H. Babcock, T. Lukes, T. Pengo, J. Chao, R. Velmurugan, A. Herbert, A. Agrawal, S. Colabrese, A. Wheeler, A. Archetti, B. Rieger, R. Ober, G. M. Hagen, J.-B. Sibarita, J. Ries, R. Henriques, M. Unser, and S. Holden, “Super-resolution fight club: assessment of 2D and 3D single-molecule localization microscopy software,” Nat. Methods 16, 387–395 (2019). [CrossRef]  

25. E. Sanchez-Ortiga, G. Scrofani, G. Saavedra, and M. Martinez-Corral, “Optical sectioning microscopy through single-shot lightfield protocol,” IEEE Access 8, 14944–14952 (2020).

26. Y. Frauel and B. Javidi, “Digital three-dimensional image correlation by use of computer-reconstructed integral imaging,” Appl. Opt. 41, 5488–5496 (2002). [CrossRef]  

27. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25, 924–934 (2006). [CrossRef]  

28. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013). [CrossRef]  

29. A. Llavador, J. Sola-Pikabea, G. Saavedra, B. Javidi, and M. Martínez-Corral, “Resolution improvements in integral microscopy with Fourier plane recording,” Opt. Express 24, 20792–20798 (2016). [CrossRef]  

30. G. Scrofani, J. Sola-Pikabea, A. Llavador, E. Sanchez-Ortiga, J. C. Barreiro, G. Saavedra, J. Garcia-Sucerquia, and M. Martínez-Corral, “FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples,” Biomed. Opt. Express 9, 335–346 (2018). [CrossRef]  

31. C. Guo, W. Liu, X. Hua, H. Li, and S. Jia, “Fourier light-field microscopy,” Opt. Express 27, 25573–25594 (2019). [CrossRef]  

32. L. Cong, Z. Wang, Y. Chai, W. Hang, C. Shang, W. Yang, L. Bai, J. Du, K. Wang, and Q. Wen, “Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio),” eLife 6, e28158 (2017). [CrossRef]  

33. E. Adelson and J. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992). [CrossRef]  

34. N. Bobroff, “Position measurement with a resolution and noise-limited instrument,” Rev. Sci. Instrum. 57, 1152–1157 (1986). [CrossRef]  

35. R. E. Thompson, D. R. Larson, and W. W. Webb, “Precise nanometer localization analysis for individual fluorescent probes,” Biophys. J. 82, 2775–2783 (2002). [CrossRef]  

36. R. J. Ober, S. Ram, and E. S. Ward, “Localization accuracy in single-molecule microscopy,” Biophys. J. 86, 1185–1200 (2004). [CrossRef]  

37. H. Shroff, C. G. Galbraith, J. A. Galbraith, and E. Betzig, “Live-cell photoactivated localization microscopy of nanoscale adhesion dynamics,” Nat. Methods 5, 417–423 (2008). [CrossRef]  

38. F. Xu, D. Ma, K. P. MacPherson, S. Liu, Y. Bu, Y. Wang, Y. Tang, C. Bi, T. Kwok, A. A. Chubykin, P. Yin, S. Calve, G. E. Landreth, and F. Huang, “Three-dimensional nanoscopy of whole cells and tissues with in situ point spread function retrieval,” Nat. Methods 17, 531–540 (2020). [CrossRef]  

39. G. T. Dempsey, J. C. Vaughan, K. H. Chen, M. Bates, and X. Zhuang, “Evaluation of fluorophores for optimal performance in localization-based super-resolution imaging,” Nat. Methods 8, 1027–1036 (2011). [CrossRef]  

40. C. Liesche, K. S. Grußmayer, M. Ludwig, S. Wörz, K. Rohr, D.-P. Herten, J. Beaudouin, and R. Eils, “Automated analysis of single-molecule photobleaching data by statistical modeling of spot populations,” Biophys. J. 109, 2352–2362 (2015). [CrossRef]  

41. W. R. Legant, L. Shao, J. B. Grimm, T. A. Brown, D. E. Milkie, B. B. Avants, L. D. Lavis, and E. Betzig, “High-density three-dimensional localization microscopy across large volumes,” Nat. Methods 13, 359–365 (2016). [CrossRef]  

Supplementary Material (3)

Supplement 1: Supplementary information.
Visualization 1: 1000 frames (background subtracted) of a 45,000 frame data set. Fixed Jurkat T-cell imaged in a Single Molecule Light Field Microscope using point accumulation for imaging of nanoscale topography (PAINT).
Visualization 2: 1000 frames (background subtracted) of a 150,000 frame data set. Fixed Jurkat T-cell imaged in a Single Molecule Light Field Microscope using point accumulation for imaging of nanoscale topography (PAINT).
