Optica Publishing Group

ZIMFLUX: Single molecule localization microscopy with patterned illumination in 3D

Open Access

Abstract

Three-dimensional modulation-enhanced single-molecule localization techniques, such as ModLoc, improve axial localization precision across the entire field of view and axial capture range by phase-shifting the illumination pattern. However, this improvement is limited by the usable pitch of the illumination pattern, and the approach requires registration between separate regions of the camera. To overcome these limitations, we present ZIMFLUX, a method that combines astigmatic point-spread-function (PSF) engineering with a structured illumination pattern in all three spatial dimensions. To achieve this, we address challenges such as optical aberrations, refractive index mismatch, supercritical angle fluorescence (SAF), and imaging at varying depths within a sample by implementing a vectorial PSF model. In scenarios involving refractive index mismatch between the sample and immersion medium, the astigmatic PSF loses its ellipticity at greater imaging depths, leading to a deterioration in axial localization precision. In contrast, our simulations demonstrate that ZIMFLUX maintains high axial localization precision even when imaging deeper into the sample. Experimental results show unbiased localization of 3D 80 nm DNA-origami nanostructures in SAF conditions, with a 1.5-fold improvement in axial localization precision compared to conventional SMLM methods that rely solely on astigmatic PSF engineering.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Single-molecule localization microscopy (SMLM) has emerged as a powerful imaging technique, providing nanoscale resolution for visualizing subcellular structures [1–3]. SMLM encompasses all microscopic techniques that achieve super-resolution by isolating individual emitters and fitting their images using the point spread function (PSF). This technique has revolutionized our understanding of cellular structures and facilitated the discovery of previously unobserved subcellular features [4–6]. However, the inherent limitations of light microscopy and traditional SMLM constrain the resolving power in the axial dimension, hindering comprehensive visualization of 3D structures.

To overcome these limitations, various techniques have been developed to manipulate the PSF and enhance axial resolution in 3D SMLM. These methods include the double-helix PSF [7,8], TetraPod [9], self-bending PSF [10], phase ramp PSF [11], corkscrew PSF [12], saddle-point PSF [9], and the commonly used approach involving a cylindrical lens for astigmatic z-encoding. Each of these techniques has been applied in 3D SMLM [13–15]. Other techniques, such as interferometric photoactivated localization microscopy (iPALM) [16,17] and 4Pi single-molecule switching nanoscopy [18,19], offer superior resolution based on the coherence of a single emitter’s fluorescence but are challenging to construct, align, and maintain. Another approach to improve the axial resolution is Supercritical Angle Fluorescence (SAF), which, however, is limited to emitters near the coverslip [20,21].

Alternatively, modulation-enhanced single-molecule localization can be employed to enhance precision in SMLM [22]. For example, MINFLUX uses a scanning doughnut illumination spot [23], which has been extended to 3D imaging [24]. However, MINFLUX’s limitation of localizing one molecule at a time hinders throughput. To address this drawback, sinusoidal patterns are used to obtain enhancement over the entire FOV simultaneously in 2D [25,26] and 3D [27,28]. For 3D imaging, ModLoc offers a significant advantage by employing a single objective, thereby reducing the complexity of the optical setup and sample mounting. On the other hand, ROSE-Z requires two objectives to create the illumination pattern, increasing the overall complexity, but enabling a smaller pattern pitch in the axial direction, and thus a better localization precision.

As an alternative, we introduce ZIMFLUX, a method that combines an astigmatic PSF with a sinusoidal illumination pattern while employing a single objective. Unlike ModLoc, ZIMFLUX employs a 3D sinusoidal illumination pattern and captures the different phases on a single camera region, resulting in a larger field of view (FOV) and eliminating the need for registration. The inclusion of an astigmatic PSF alongside the sinusoidal illumination pattern provides a 3D position estimate without the use of the illumination patterns. This circumvents the problem of phase wrapping and enables the use of smaller pitches: we use a pitch of 480 nm versus ModLoc's 1 μm, leading to an approximately 1.5-fold improvement in axial localization precision. Moreover, astigmatism enables the estimation of the applied illumination pattern directly from the acquired image data itself, avoiding the need for a calibration sample and mitigating biased localization due to sample-specific factors, such as refractive index mismatches, non-flat coverslips (in custom samples), and misalignments over time in the optical setup.

The optical layout of ZIMFLUX utilizes a digital micromirror device (DMD) to generate two excitation beams that intersect at the sample plane, resulting in the interference pattern (Fig. 1(a)). Rapidly shifting the pattern on the DMD causes the illumination pattern to move, leading to variations in the emitted photon count from the fluorophore. To achieve more accurate position estimations, we employ a vectorial PSF model [29–31], effectively handling aberrations, depth-varying PSFs, and SAF conditions. The PSF and illumination pattern information are then integrated into a maximum likelihood estimation (MLE) to achieve superior axial precision in emitter localization (Fig. 1(b)).


Fig. 1. a, An interference pattern is generated by two beams entering the sample, illuminating the emitter located at (x0, y0, z0). b, ZIMFLUX operates by recording three images with shifted patterns, and using the information of the illumination pattern, PSF and photon count for a maximum likelihood estimation. This method results in improved axial localization precision compared to conventional single molecule localization microscopy (SMLM) using an astigmatic PSF, where only the PSF obtained from the sum of the frames is used for localization of the emitter.


To summarize, in this paper we introduce ZIMFLUX, an advancement in modulation-enhanced single-molecule localization microscopy that alleviates some of the current limitations of existing methods. By integrating astigmatic PSF engineering with sinusoidal illumination, ZIMFLUX achieves enhanced axial resolution, expanding its potential applications in 3D imaging of nanostructures and biological samples.

2. Theory

2.1 Imaging model

The imaging model combines the illumination patterns and the PSF model to describe the expected photon count of a pixel, due to the presence of an emitter. The imaging model is given by

$$\mu_j^{lk} = N P\big(\phi_{lk}(\vec{r_0})\big) H(\vec{r}_j - \vec{r}_0) + \frac{b}{LK}$$
where $N$ is the photon count of the emitter, $b$ represents the background photon count and $\vec {r}_0$ and $\vec {r}_j$ are the center positional vectors of the emitter and pixel $j$, respectively. $H$ is the PSF model, and $P$ is the illumination pattern. $l=1, 2, \dots, L$ and $k=1, 2, \dots, K$ denote the indices representing the direction and phase step of the illumination pattern. $P(\phi _{lk}(\vec {r_0}))$ is the relative intensity of the phase $\phi _{lk}$ at position $\vec {r}_0$. For a more comprehensive understanding of the subject, the following sections will provide detailed descriptions of the illumination and PSF models.

2.1.1 Illumination model

The sample undergoes excitation with a three-dimensional sinusoidal patterned electric field $\vec {E}_{\text {i}}$ characterized by a lateral pitch of $p_\text {lat}$ and an axial pitch of $p_\text {ax}$, given by

$$p_\text{lat} =\frac{\lambda_0}{\lvert n_0(\sin\alpha_1+\sin\beta_1)\rvert},$$
$$p_\text{ax} =\frac{\lambda_0}{\lvert n_0(\cos{\alpha_1}-\cos\beta_1)\rvert}.$$
where $\lambda _0$ is the wavelength of the monochromatic light source, $n_0$ is the refractive index of the mounting medium, $\alpha _1$ and $\beta _1$ are the angles of the two excitation beams $\vec {E}_1$ and $\vec {E}_2$, with respect to the optical axis in the mounting medium, as shown in Fig. 2. As shown in Supplement 1, the intensity of the sinusoidal interfering electric field ($\vec {E}_i$) pattern can be modeled as
$$P(\phi_{lk}(\vec{r})) = \eta_{lk}(1+m_{lk} \cos(\phi_{lk}(\vec{r}))),$$
where $\eta _{lk}$ is the relative intensity per pattern, satisfying $\sum _{lk}^{LK} \eta _{lk} = 1$, $m_{lk}$ is the modulation depth, and $\phi _{lk}$ is the phase at location $\vec {r}$, which is defined as
$$\phi_{lk}(\vec{r}) = 2\pi \vec{q}_l \cdot \vec{r} - \psi_{lk},$$
here $\psi _{lk}$ is the phase offset and $\vec {q}_l$ is the spatial frequency vector defined by
$$\vec{q}_l = \{\frac{ \cos(\gamma_l)}{p_{\text{lat}}}, \frac{ \sin(\gamma_l)}{p_{\text{lat}}} , \frac{1}{p_{\text{ax}}} \}.$$
where $\gamma _l$ is the azimuthal angle in the $x,y$ plane.
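The pattern geometry above can be sketched numerically as follows; this is a minimal illustration of Eqs. (2)–(5), and the angles, refractive index, and pattern parameters used below are illustrative values, not calibration results from the setup:

```python
import numpy as np

def pattern_pitches(wavelength_nm, n0, alpha1_deg, beta1_deg):
    """Lateral and axial pitch of the interference pattern, Eqs. (2)-(3)."""
    a, b = np.deg2rad(alpha1_deg), np.deg2rad(beta1_deg)
    p_lat = wavelength_nm / abs(n0 * (np.sin(a) + np.sin(b)))
    p_ax = wavelength_nm / abs(n0 * (np.cos(a) - np.cos(b)))
    return p_lat, p_ax

def pattern_intensity(r, q, psi, eta, m):
    """Relative intensity P(phi(r)) of Eq. (4), with phi = 2*pi*q.r - psi (Eq. 5)."""
    phi = 2 * np.pi * np.dot(q, r) - psi
    return eta * (1 + m * np.cos(phi))

# Illustrative values: one beam on-axis, the other at 60 degrees in the medium.
p_lat, p_ax = pattern_pitches(642.0, 1.33, 0.0, 60.0)
print(f"p_lat = {p_lat:.0f} nm, p_ax = {p_ax:.0f} nm")
```

Note how the axial pitch diverges as the two beam angles approach each other, which is why a large angular separation is needed for a small axial pitch.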


Fig. 2. The figure illustrates the interference pattern $\vec {E}_i$ that is formed by two plane waves $\vec {E}_1$ and $\vec {E}_2$ originating from the objective, with incidence angles of $\alpha _0$ and $\beta _0$, and with an azimuthal angle of $\gamma$. The light passes through the immersion media and cover glass, with refractive indices of $n_2$ and $n_1$. Due to the refractive index mismatch, the angles of the beams change to $\alpha _1$ and $\beta _1$. The fluorophores are embedded in a medium with a refractive index of $n_0$. The imaging depth $z_\text {d}$ corresponds to the distance from the cover glass to the focal plane, while $z_{0}$ represents the emitter’s distance from the focal plane. $z_\text {stage}$ is defined as the distance the stage has moved from the point at which the top of the cover glass is aligned with the focal plane.


Following a similar approach as described in previous research [26], we phase-shift the pattern equidistantly in $k = 1,2,\dots, K$ illuminations for various orientations $l=1,2,\dots, L$, adhering to the following condition:

$$\sum^L_{l=1}\sum^K_{k=1}P(\phi_{lk}(\vec{r})) = 1.$$
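A quick numerical check of the normalization condition in Eq. (7), assuming equal relative intensities $\eta_{lk} = 1/(LK)$ and equidistant phase steps; under these assumptions the condition holds for any modulation depth:

```python
import numpy as np

# Equidistant phase steps psi_k = 2*pi*k/K with equal weights eta = 1/(L*K)
# make the shifted patterns sum to one at every position (Eq. 7),
# independent of the modulation depth m: the K shifted cosines cancel.
L, K, m = 1, 3, 0.95
eta = 1.0 / (L * K)
phi = 2 * np.pi * np.linspace(0, 1, 257)  # arbitrary positions along the pattern

total = np.zeros_like(phi)
for k in range(K):
    psi_k = 2 * np.pi * k / K
    total += eta * (1 + m * np.cos(phi - psi_k))

print(np.allclose(total, 1.0))  # True
```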

2.1.2 Point spread function model

The freely-rotating dipole vectorial PSF model is employed in accordance with previous studies [29–33]. The emitted light from the dipole ($E_\text{dipole}$) propagates through the media with different refractive indices, which are the mounting media ($n_{0}$), cover-glass ($n_{1}$) and immersion medium ($n_{2}$). For an oil objective, the refractive index of the immersion oil and the coverglass are similar, hence we assume that $n_1 = n_2$. The objective collects the light and transfers it to the pupil plane ($E_{\text{pupil}}$). Subsequently, the electric field is focused on the detector by the tube lens. The electric field component $p = x, y$ in the pupil plane is proportional to the emission dipole component $q = x, y, z$ as follows:

$$E_{\text{pupil},pq}(W,\vec{\rho},z_\text{d},z_\text{stage}) = A(\vec{\rho}) q_{pq}(\vec{\rho})\exp\left[iW(\vec{\rho}) + i \left(z_\text{d} k_{\text{z},0}(\vec{\rho}) - z_\text{stage}k_{\text{z},2}(\vec{\rho})\right) \right],$$
where $\vec {\rho }$ are the normalized pupil coordinates, $A(\vec {\rho })$ is the amplitude, including the well-known aplanatic correction factor [34,35], $q_{pq} (\vec {\rho })$ are polarization vector components defined elsewhere [29], $W(\vec{\rho})$ is the aberration function, $z_\text {d}$ is the imaging depth, which is the distance between the focal plane and the coverglass, $z_{\text {stage}}$ is the corresponding position of the stage, and the $z$-component of the wave-vector in the $i$-th medium $k_{\text {z},i}$ is
$$k_{\text{z},i}(\vec{\rho}) = \frac{2\pi}{\lambda}\sqrt{n_i^2 - NA^2\lVert \vec{\rho} \rVert_2^2},$$
with $n_{i}$, $\lambda$, and $NA$ as the refractive index, wavelength, and numerical aperture respectively.

Finally, to calculate the incoherent PSF from the freely-rotating dipole, the six Fourier transforms of the electric field components in the pupil are quadratically added

$$\begin{aligned} H(\vec{r}_j-\vec{r}_0) &= \frac{N}{3 w_{\text{n}}} \sum_{p=x,y}\sum_{q=x,y,z} \bigg\lvert \int_{\xi_j} \int_{\lvert \vec{\rho}\rvert <1} E_{\text{pupil},pq}(W,\vec{\rho},z_\text{d},z_\text{stage})\\ &\quad \times \exp\left[{-}i\vec{k}(\vec{\rho}) \cdot (\vec{r}_j-\vec{r}_0) \right]d^2\rho d^2\xi_j \bigg \rvert^2, \end{aligned}$$
where $\xi _j$ is the $j$-th pixel, $\vec {k}(\vec {\rho })= \left (k_\text {x}(\vec {\rho }), k_\text {y}(\vec {\rho }), k_{\text {z},0}(\vec {\rho }) \right )$, $N$ is the photon count of the signal, and $w_{\text {n}}$ is a normalizing factor defined elsewhere [36]. The lateral wavevectors are defined as $k_\text {x}(\vec {\rho }) = 2\pi NA \rho _{\text {x}}/\lambda$ and $k_\text {y}(\vec {\rho }) = 2\pi NA \rho _{\text {y}}/\lambda$. The aberration function $W(\vec {\rho })$ is expressed as a sum of the Zernike polynomials.
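As an illustration of the pupil-to-PSF Fourier step in Eq. (10), the sketch below uses a scalar approximation with a single astigmatic aberration term in place of the six vectorial components $E_{\text{pupil},pq}$; it is not the full freely-rotating-dipole model, and the grid size and aberration amplitude are illustrative:

```python
import numpy as np

# Scalar-approximation sketch: a circular pupil with an astigmatic phase
# W ~ rho^2 * cos(2*theta) is Fourier-transformed to the focal plane, and the
# squared modulus of the field gives the intensity PSF.
n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
pupil_support = (X**2 + Y**2) <= 1.0

W = 0.09 * (X**2 - Y**2)  # ~90 mlambda-scale astigmatism (illustrative)
pupil = pupil_support * np.exp(2j * np.pi * W)

field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(field) ** 2
psf /= psf.sum()  # normalize to unit total intensity
```

Propagating through focus by adding a defocus phase $z\,k_z(\vec{\rho})$ to the exponent reproduces the characteristic elliptical blur that flips orientation on either side of focus, which is what encodes $z$ in the astigmatic PSF.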

2.2 Supercritical angle fluorescence

When the dipole is positioned close to the cover glass, the emitted evanescent wave extends into the cover glass and becomes a propagating wave that can be captured by the objective. This phenomenon is referred to as supercritical angle fluorescence (SAF) and only occurs if $NA>n_0$ and $n_2>n_0$ [37,38]. SAF influences the PSF model and increases the effective $NA$. For pupil coordinates that satisfy $\lVert \vec{\rho} \rVert_2 > n_0/NA$, Eq. (9) and Eq. (8) become

$$k_{\text{z},0}(\vec{\rho}) = \frac{2\pi}{\lambda}i\sqrt{NA^2\lVert \vec{\rho} \rVert_2^2 - n_0^2},$$
$$E_{\text{pupil},pq}(W,\vec{\rho},z_\text{d},z_\text{stage}) = A(\vec{\rho}) q_{pq}(\vec{\rho})\exp\left[ - \delta\frac{ z_\text{d}}{\lambda}+ i\left(W(\vec{\rho}) - z_\text{stage}k_{\text{z},2}(\vec{\rho})\right) \right],$$
where $\delta = 2\pi \sqrt{NA^2\lVert \vec{\rho} \rVert_2^2 - n_0^2}$ is the attenuation constant. The SAF term decays exponentially with $z_\text{d}/\lambda$, so the effect vanishes for $z_\text{d}\gg \lambda$.
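A short numerical check of the behavior of Eq. (11): below the critical pupil radius $k_{\text{z},0}$ is real, at the critical radius it is zero, and beyond it the wave vector becomes imaginary so the SAF contribution is attenuated with depth. The NA and emission wavelength follow section 3.4; the sampled pupil radii are illustrative:

```python
import numpy as np

NA, n0, wavelength = 1.49, 1.33, 715.0  # nm; values quoted for the setup

def kz0(rho_norm):
    """z-component of the wave vector in the mounting medium (Eqs. 9/11), complex."""
    arg = n0**2 - NA**2 * rho_norm**2
    return 2 * np.pi / wavelength * np.sqrt(arg.astype(complex))

rho = np.array([0.5, n0 / NA, 1.0])
kz = kz0(rho)
# Sub-critical: real; at the critical radius: zero; super-critical: imaginary.
print(kz.real[0] > 0, np.isclose(kz[1], 0), kz.imag[2] > 0)  # True True True

# Attenuation of the SAF term over depth for the edge of the pupil (|rho| = 1):
delta = 2 * np.pi * np.sqrt(NA**2 - n0**2)
z_d = np.array([0.0, wavelength, 5 * wavelength])
att = np.exp(-delta * z_d / wavelength)  # decays from 1 toward zero
```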

2.3 Maximum likelihood estimation for ZIMFLUX

A maximum likelihood estimation is used to fit the parameters of the imaging model [39]. The parameters to be estimated are $\theta = \{x_0,y_0,z_0,N,b\}$. Other parameters that define the imaging model, such as $z_\text{d}$, $z_\text{stage}$, and $W$, are assumed to be known and measured a priori through calibration experiments (for details see section 3.). The measured photon counts by state-of-the-art EMCCD and sCMOS cameras can accurately be modeled as realizations of a Poisson process [39–41]. The log-likelihood for a Poisson process is computed as

$$\log \mathcal{L} = \sum^L_{l=1}\sum^K_{k=1}\sum^J_{j=1} \big[n_j^{lk} \log(\mu_j^{lk})- \mu_j^{lk} \big],$$
where we recall that $K$ and $L$ are the number of pattern shifts and orientations, respectively, $J$ is the total number of pixels in the ROI, and $n_{j}^{lk}$ is the photon count for pixel $j$. The maximum likelihood estimate is computed using the Levenberg-Marquardt algorithm, with the log-likelihood ($\log \mathcal {L}$) and its derivatives guiding the optimization process
$$\frac{\partial \log\mathcal{L}}{\partial \theta_i} = \sum^L_{l=1}\sum^K_{k=1}\sum^J_{j=1} \frac{n_j^{lk}-\mu_j^{lk}}{\mu_j^{lk}} \frac{\partial \mu_j^{lk}}{\partial\theta_i}.$$

The derivatives of $\mu _j^{lk}$ with respect to the relevant parameters are

$$\frac{\partial \mu_j^{lk}}{\partial \vec{r}_0} = N P(\phi_{lk}(\vec{r}_0)) \frac{\partial H(\vec{r}_j - \vec{r}_0)}{\partial \vec{r}_0} + N H(\vec{r}_j - \vec{r}_0)\frac{\partial P(\phi_{lk}(\vec{r}_0))}{\partial \vec{r}_0},$$
$$\frac{\partial \mu_j^{lk}}{\partial N} = P(\phi_{lk}(\vec{r_0})) H(\vec{r}_j - \vec{r}_0),$$
$$\frac{\partial \mu_j^{lk}}{\partial b} = \frac{1}{LK}.$$
where $\vec {r}_0 = \{x_0,y_0,z_0\}$ is the position of the emitter and the lateral component of $\vec {r}_j$ is the center position of pixel $j$, while the axial component is the focal plane. The derivative of the illumination field is given by
$$\frac{\partial P(\phi_{lk}(\vec{r_0}))}{\partial \vec{r}_0} ={-}2\pi \vec{q}_l \, \eta_{lk}m_{lk}\sin(\phi_{lk}(\vec{r_0})).$$

The positional derivatives of the PSF are computed as follows [42]

$$\frac{{\partial H(\vec{r}_j - \vec{r}_0)}}{{\partial \vec{r}_0}} = \frac{{N}}{{3w_\text{n}}} \sum_{p=x,y} \sum_{q=x,y,z} \frac{{\partial }}{{\partial \vec{r}_0}} \left( U_{pq}(\vec{r}_j-\vec{r}_0) \, U^*_{pq}(\vec{r}_j-\vec{r}_0) \right)$$
$$= \frac{{2N}}{{3w_\text{n}}} \sum_{p=x,y} \sum_{q=x,y,z} \operatorname{Re}\left\{ U^*_{pq}(\vec{r}_j-\vec{r}_0) \, \frac{{\partial U_{pq}(\vec{r}_j-\vec{r}_0)}}{{\partial \vec{r}_0}}\right\} ,$$
where we have introduced the following shorthand notation
$$U_{pq}(\vec{r}_j-\vec{r}_0) = \int_{\xi_j} \int_{\lvert \vec{\rho}\rvert <1} E_{\text{pupil},pq}(W,\vec{\rho},z_\text{d},z_\text{stage}) \exp\left[{-}i\vec{k}(\vec{\rho}) \cdot (\vec{r}_j-\vec{r}_0) \right]d^2\rho d^2\xi_j ,$$
so that the positional derivatives are
$$\begin{aligned}\frac{\partial U_{pq}(\vec{r}_j-\vec{r}_0)}{\partial x} &={-}i \int_{\xi_j} \int_{\lvert \vec{\rho}\rvert <1} E_{\text{pupil},pq}(W,\vec{\rho},z_\text{d},z_\text{stage})k_\text{x}(\vec{\rho})\\ &\quad \times \exp \left[{-}i\vec{k}(\vec{\rho}) \cdot (\vec{r}_j-\vec{r}_0) \right] d^2\rho d^2\xi_j, \end{aligned}$$
in which $x$ can be interchanged with $y$ and $z$ by substituting $k_\text {y}$ and $k_{\text {z},0}$ for $k_\text {x}$, respectively. The Fisher matrix can be calculated as follows:
$$F_{rs} = \sum^L_{l=1}\sum^K_{k=1}\sum^J_{j=1} \frac{1}{\mu^{lk}_j}\frac{\partial\mu^{lk}_j}{\partial\theta_r}\frac{\partial\mu^{lk}_j}{\partial\theta_s}.$$

The Cramér-Rao lower bound (CRLB) on the variance of each estimated parameter is given by the corresponding diagonal element of the inverse of the Fisher matrix.
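As a concrete illustration of Eq. (23), the sketch below computes the Fisher matrix and CRLB numerically for a simplified one-dimensional toy model: a Gaussian stand-in for the PSF, one pattern direction, and hypothetical parameter values. Only the bookkeeping of Eqs. (14)–(18) and (23) is shown; the actual method uses the vectorial PSF of section 2.1.2.

```python
import numpy as np

def mu_and_derivs(theta, x_pix, q, psi, K, sigma=130.0):
    """Toy 1D model mu_j^k = N*P_k(x0)*H(x_j - x0) + b/K and its derivatives."""
    x0, N, b = theta
    mus, dmus = [], []
    for k in range(K):
        phi = 2 * np.pi * q * x0 - psi[k]
        P = (1 + 0.9 * np.cos(phi)) / K           # eta = 1/K, m = 0.9
        dP = -2 * np.pi * q * 0.9 * np.sin(phi) / K
        H = np.exp(-(x_pix - x0) ** 2 / (2 * sigma**2))
        dH = H * (x_pix - x0) / sigma**2          # chain rule w.r.t. x0
        mu = N * P * H + b / K
        dmu = np.stack([N * (P * dH + dP * H),    # d(mu)/d(x0), cf. Eq. (15)
                        P * H,                    # d(mu)/d(N),  cf. Eq. (16)
                        np.full_like(x_pix, 1 / K)])  # d(mu)/d(b), cf. Eq. (17)
        mus.append(mu); dmus.append(dmu)
    return mus, dmus

x_pix = np.arange(-10, 11) * 65.0                 # 65 nm pixels
theta = (12.0, 2000.0, 30.0)                      # x0 [nm], photons, background
mus, dmus = mu_and_derivs(theta, x_pix, q=1 / 480.0,
                          psi=2 * np.pi * np.arange(3) / 3, K=3)

F = np.zeros((3, 3))
for mu, dmu in zip(mus, dmus):
    F += np.einsum('rj,sj->rs', dmu / mu, dmu)    # Eq. (23), per pattern
crlb = np.sqrt(np.diag(np.linalg.inv(F)))         # precision bound per parameter
print(f"x0 precision bound: {crlb[0]:.1f} nm")
```

The pattern term $\partial P/\partial x_0$ adds information on top of the PSF term, which is the mechanism behind the precision gain of the modulation-enhanced estimate.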

2.3.1 Maximum likelihood estimation using only the illumination pattern or PSF

The position of the emitter can also be estimated using only the illumination pattern or only the PSF model. This is used to refine the model for the illumination pattern in situ (see section 3.5.3). To obtain an imaging model that describes the data but only relies on the illumination pattern, we sum over all the pixels in the ROI as

$$\mu^{lk} = \sum_{j=1}^J \left[ N P\big(\phi_{lk}(\vec{r}_\text{ill,0})\big) H(\vec{r}_j - \vec{r}_\text{ill,0}) + \frac{b}{LK} \right],$$
$$\mu^{lk} = N P(\phi_{lk}(\vec{r}_\text{ill,0})) + \frac{bJ}{LK},$$
where $J$ is the total number of pixels in the ROI and $\vec {r}_\text {ill,0}$ is the emitter position estimated from the illumination pattern alone. In analogy with Eq. (13), the log-likelihood is
$$\log \mathcal{L} = \sum^L_{l=1}\sum^K_{k=1} \big[n^{lk}\log(\mu^{lk})- \mu^{lk} \big].$$
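A minimal sketch of how the illumination-only model constrains position: with $K=3$ equidistant phase steps, the first discrete Fourier coefficient of the summed counts over $k$ recovers the pattern phase at the emitter, and hence its position along $\vec{q}$ modulo one pitch (the phase-wrapping ambiguity mentioned in the introduction, which the astigmatic PSF estimate resolves). All numbers are illustrative.

```python
import numpy as np

K, eta, m = 3, 1.0 / 3.0, 0.95
q = 1.0 / 480.0                      # pattern frequency along one axis [1/nm]
x0, N, b, J = 137.0, 5000.0, 20.0, 12 * 12

# Noiseless summed ROI counts mu^k of Eq. (25), L = 1:
psi = 2 * np.pi * np.arange(K) / K
mu = N * eta * (1 + m * np.cos(2 * np.pi * q * x0 - psi)) + b / K * J

# First discrete Fourier coefficient over k isolates the modulated term;
# its angle is the pattern phase 2*pi*q*x0 (mod 2*pi).
c1 = np.sum(mu * np.exp(1j * psi)) / K
phase = np.angle(c1)
x_est = (phase % (2 * np.pi)) / (2 * np.pi * q)
print(f"x0 mod pitch: {x_est:.1f} nm")  # 137.0, up to multiples of 480 nm
```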

Similarly, to obtain an imaging model that only relies on the PSF we sum over all the illumination patterns

$$\mu_j = \sum_{l=1}^L\sum_{k=1}^K \left[ N P\big(\phi_{lk}(\vec{r}_\text{psf,0})\big) H(\vec{r}_j - \vec{r}_\text{psf,0}) + \frac{b}{LK} \right],$$
$$\mu_j = N H(\vec{r}_j - \vec{r}_\text{psf,0}) + b,$$
where $\vec {r}_\text {psf,0}$ is the emitter position estimated from the PSF alone. The maximum likelihood estimate and CRLB can be computed analogously to the previous section.

3. Experimental setup and methods

3.1 Experimental setup

A customized setup was constructed to generate the interference pattern, illuminate, and image the sample, enabling ZIMFLUX localization (see Fig. 3(a)). The setup utilizes a 642 nm diode laser (MPB Communications, F-04306-107) to produce a monochromatic beam. The beam is modulated using an acousto-optic modulator (AOM, G&H, 3080-125) to adjust the power, which is set to 25% of the maximum power (250 mW), resulting in a total laser power of 3 mW at the sample plane over an area with a diameter of approximately 50 μm. Assuming a Gaussian laser profile, this translates to an energy density of $\sim \;{300}\;\textrm{W/cm}^{2}$ over the FOV (16.4 μm x 16.4 μm). A Glan-Taylor polarization prism (PP, Thorlabs, GT10-A) polarizes the light, and a voltage-controlled electro-optic modulator (EOM, Leysop, EM400K) regulates the polarization angle. A half-wave plate (HWP, Thorlabs, WPH05M-633) and quarter-wave plate (QWP, Thorlabs, WPQ05M-633) are used to align the polarization and correct for elliptical polarization induced by the reflective elements further along the setup. A polarization-maintaining optical fiber (Thorlabs, P1-488PM-FC-1) and a 0.13 NA objective (L1, Olympus, UplanFL N, 4x/0.13) collimate the beam.


Fig. 3. a, A schematic of the custom-built ZIMFLUX setup. Additional information is available in the main text. b, A simplified schematic of the excitation path. Incoming laser light is diffracted by a digital micromirror device (DMD) on which a binary block wave is projected. The spatial filter (SF) permits only the zeroth and one first-order beam, with a spacing of $u$, to pass through. The spacing is then magnified to $u'$ at the back focal plane, resulting in non-parallel beams that generate an interference pattern at the overlap in the sample plane. All abbreviations used in the figure are defined in the main text.


Two mirrors (M1/M2, Thorlabs, BB1-E02) direct the beam through an excitation filter (ExF, Chroma, ET640/20m), and another quarter wave plate onto the digital micromirror device (DMD, Texas Instruments, DLP7000BFLP). The DMD is controlled by a high-speed DLP subsystem (VIALUX GmbH, V4100 0.7 VIS + ALP-4.2). A telecentric relay lens system is established between the DMD and the objective to filter the diffraction pattern and control the spacing of the beams in the back focal plane of the objective (see Fig. 3(b)).

The first lens (L2, Thorlabs, AC254-150-A-ML, f=150 mm) and a mirror (M3, Thorlabs, BB2-E02) guide the light through the custom spatial filter (SF) positioned in the Fourier plane, which filters out everything but the zeroth and first-order diffraction patterns from the DMD. The next lens (L3, Thorlabs, AC508-080-AB, f=80 mm), half-wave plate (HWP, Thorlabs, AHWP10M-600), and another lens (L4, Thorlabs, AC508-180-A-ML, f=180 mm) align the polarization and magnify the spacing between the two orders. Another mirror (M4) and a long pass dichroic mirror (DC, Semrock, Di03-R660-t1-25.2x35.6) reflect the beams to the objective (Nikon, CFI Apo 1.49 total internal reflection (TIRF) 100XC Oil). The immersion oil (Nikon, immersion oil type F) used has a refractive index of 1.518 at 23 °C.

The stage (Physik Instrumente, Q-545 Q-Motion) is driven by E873 PIShift controllers and controlled by the PIMikroMove software. The emission light from the sample passes through the dichroic mirror. An emission filter (EF, Chroma, ET690/50) filters the light, and a mirror (M4, Thorlabs, BB1-E02) directs it through the objective tube lens (L5, Thorlabs, TTL200). The light is then guided through another 4F system with two lenses (L6/L7, Thorlabs, AC254-100-A). In the Fourier plane, a deformable mirror (DM, Boston Micromachines, Multi-3.5) is positioned to add an astigmatic aberration of approximately 90 mλ to the PSF. The emission light is finally imaged with a CMOS camera (Teledyne Kinetix Scientific CMOS, 01-KINETIX-M-C) with a pixel size of 6.5 µm x 6.5 µm in the sensor plane, resulting in a pixel size of 65 nm in the sample plane.

A simplified view of the excitation path is shown in Fig. 3(b). In practice, the DMD is rotated by 45° because the micromirror’s hinge is along the pixel’s diagonal. The DMD is mounted so that its base is perpendicular to the incident beam. As shown in Supplement 1, the placement and angle of the DMD influence the energy distribution over the different diffraction orders. With our alignment, assuming perfect polarization, the best achievable modulation depth is 0.96. The DMD pixel pitch is 13.68 µm, and the repeating pattern of three pixels on and off results in a DMD pitch $p_\text {DMD}= {82.08}$ μm.

The distance $u$, following the grating equation, between the zeroth order and the first order (Fig. 3(b)) is

$$u = \frac{\lambda_0 f}{p_\text{DMD}} = {1.17}\;\textrm{mm}$$
in which the excitation wavelength $\lambda _0$ is 642 nm, and $f = {150}\;\textrm{mm}$ corresponds to the focal length of L2. In the back focal plane of the objective, $u$ is magnified to $u' = {2.63}\;\textrm{mm}$ by a factor of $f_4/f_3$. Considering that the effective focal length (EFL) of the objective is 2 mm and assuming that $\alpha_0 = 0^\circ$, then $\beta_0 = 60.6^\circ$.
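The excitation-path numbers above can be reproduced with a few lines; the sine-condition relation $n\sin\beta_0 = u'/\text{EFL}$ used for the beam angle is an assumption of this sketch, and it gives a value close to the quoted $60.6^\circ$:

```python
import numpy as np

wavelength = 642e-9                 # m, excitation wavelength
p_dmd = 82.08e-6                    # m, 3-on/3-off DMD pattern pitch
f2, f3, f4 = 0.150, 0.080, 0.180    # m, focal lengths of L2, L3, L4
efl = 2e-3                          # m, objective effective focal length
n_imm = 1.518                       # immersion oil refractive index

u = wavelength * f2 / p_dmd         # grating equation, Eq. (29): ~1.17 mm
u_prime = u * f4 / f3               # magnified spacing in the BFP: ~2.63 mm
beta0 = np.degrees(np.arcsin(u_prime / (n_imm * efl)))  # assumed sine condition
print(f"u = {u*1e3:.2f} mm, u' = {u_prime*1e3:.2f} mm, beta0 ~ {beta0:.1f} deg")
```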

3.2 Samples

Three different types of samples were used to calibrate the PSF model, validate the illumination pattern, and demonstrate the proof of concept. Gatta-beads (GATTAquant, ATTO 647N) with a diameter of 23 nm, embedded in a medium with a refractive index of 1.46 [43], were used for PSF calibration. A bead sample (Invitrogen FluoSpheres Carboxylate-Modified Microspheres, ex/em: 660/680 nm, F8783) with a diameter of 20 nm, mounted on the coverslip and embedded in a 1% w/v agarose solution, was imaged for pattern validation. For the proof of concept, the GATTA-PAINT 3D HiRes 80R Expert Line (GATTAquant, ATTO655) sample was employed. This sample consists of DNA origami nanopillars attached to the coverslip. The nanopillars are randomly oriented in all three dimensions, and both ends have a binding site for fluorescent probes with a spacing of 81±21 nm [44].

3.3 Image acquisition and camera calibration

The samples are imaged using one pattern direction ($L$=1) with three phase shifts ($K$=3), and each frame has an exposure time of 10 ms for a total of $40\times10^3$ frames. The observed camera counts, represented as Analog-to-Digital Units (ADU), are converted into photons by measuring the gain and camera offset. This calibration process involves acquiring 2000 bright and 200 dark calibration images [45]. Performing this calibration is essential for achieving the theoretical maximum possible localization precision [39].
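The gain and offset calibration can be sketched with the standard photon-transfer (mean-variance) relation for Poisson-distributed counts; the synthetic frames and the gain, offset, and flux values below are illustrative stand-ins for the measured calibration stacks, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
true_gain, true_offset, flux = 0.8, 100.0, 500.0  # ADU/photon, ADU, photons/pixel

# Synthetic calibration stacks: 200 dark and 2000 bright frames with read noise.
dark = true_offset + rng.normal(0.0, 1.5, size=(200, 32, 32))
bright = (true_offset + true_gain * rng.poisson(flux, size=(2000, 32, 32))
          + rng.normal(0.0, 1.5, size=(2000, 32, 32)))

offset = dark.mean(axis=0)                         # per-pixel offset [ADU]
# Poisson statistics: var - var_read = gain^2 * flux, mean - offset = gain * flux,
# so their ratio yields the gain in ADU/photon.
gain = (bright.var(axis=0) - dark.var(axis=0)) / (bright.mean(axis=0) - offset)

photons = (bright[0] - offset) / gain              # ADU -> photon conversion
print(f"estimated gain ~ {gain.mean():.2f} ADU/photon")
```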

3.4 PSF calibration

We defined a fully vectorial PSF model for the optical system. The model’s parameters were derived from the optical system’s specifications, which included a numerical aperture (NA) of 1.49, immersion oil and coverslip with a refractive index of 1.515, and a mounting medium with a refractive index of 1.33 (unless stated otherwise). The upper limit of the emission filter, a wavelength of 715 nm, was set as the emission wavelength for the PSF. With our optical setup, we capture a portion of emission arising from SAF for emitters close to the coverslip. To achieve an unbiased estimation, it is necessary to consider this in the PSF model.

3.4.1 Aberration estimation

The Gatta-beads (section 3.2) are used to make a through-focus image stack of 40 slices, shifted by increments of 20 nm. The camera gain and offset are determined for the imaging conditions (section 3.3), which resulted in an average signal from the beads of $2 \times 10^4$ photons. Using the through-focus image stack, an MLE is employed to determine the aberration coefficients of the Zernike polynomials. In total 12 Zernike polynomials (Noll indices 5–16) are used to construct the aberrated wavefront $W$ [46]. More details on the MLE framework can be found in the protocol by Siemons et al. [33].

3.4.2 SAF calibration

Before the MLE (section 2.3) can be performed, the imaging depth $z_\text {d}$ and stage position $z_\text {stage}$ need to be determined. Using the found aberrations $W$ from the previous section, $z_\text {d}$ can be computed from $z_\text {stage}$ by maximizing the relative Strehl ratio, as proposed in earlier work [30]. The following metric function is maximized to find $z_\text {d}$ for a given $z_\text {stage}$

$$z_\text{d}^* = \underset{z_\mathrm{d}}{\arg \max} ~\left(\frac{\displaystyle\sum_{p=x,y}\displaystyle\sum_{q=x,y,z} \left|\int_{\lvert\vec{\rho}\rvert <1} E_{\text{pupil},pq}(W_1,\vec{\rho},z_\text{d},z_\text{stage}) d^2\rho\right|^2}{ \displaystyle\sum_{p=x,y}\displaystyle\sum_{q=x,y,z}\left|\int_{\lvert\vec{\rho}\rvert <1} E_{\text{pupil},pq}(W_2,\vec{\rho},z_\text{d},z_\text{stage})d^2\rho \right|^2} \right).$$
where $W_1(\vec {\rho }) =W(\vec {\rho })$, $W_2(\vec {\rho }) = W(\vec {\rho }) + z_\text {d} k_{\text {z},0}(\vec {\rho }) - z_\text {stage} k_{\text {z},2}(\vec {\rho })$, and we follow the same notation as in section 2.1.2. The relation between $z_\text {d}$ and $z_\text {stage}$ for our imaging conditions is shown in Fig. 4.


Fig. 4. The relation between the imaging depth and the stage position in SAF conditions for the setup used in this research, which is obtained from Eq. (30) for multiple values of $z_\text {stage}$.


The stage position ($z_\text {stage}$) needs to be known in order to use the right distance from the cover glass to the focal plane ($z_\text {d}$). Before the ZIMFLUX data of the DNA PAINT nanorulers is acquired, the stage position is estimated using the following procedure:

  • 1. The focus plane is initially set by hand, using the stage, on the emitters attached to the coverslip, which is considered as $z_\text {d}=0$. Using $z_\text {d}=0$, $z_\text {stage,0}$ can be computed by maximizing the relative Strehl ratio for $z_\text {stage}$, analogous to Eq. (30).
  • 2. The stage is moved by a desired amount $\Delta z_{\text {stage}}$ from $z_\text {d}=0$ in order to move the focal plane towards the binding sites of the DNA PAINT nanorulers, that are not attached to the coverglass.
  • 3. Then $z_\text {stage} = z_\text {stage,0} + \Delta z_{\text {stage}}$ and $z_\text {d}$ can be computed using Eq. (30).

3.5 Pattern estimation

3.5.1 Initial pattern estimation

The illumination pattern is estimated from the recorded data to avoid potential systematic errors in the illumination pattern parameters, thereby minimizing localization errors. First, the positions $\vec{r}_{\text{psf},v}$ of all emitters are estimated while ignoring the pattern information, by summing over the different phases and directions ($L \times K$ applied patterns, see sections 2.3.1 and 3.6.1), implying that the total number of acquired frames is $L\times K\times V$. Then, for each individual frame, the photon count per emitter $N^{lk}$ is estimated while keeping the previously estimated positions fixed. For each $l$ and $k$, an SMLM reconstruction $S^{lk}$ is made, with a pixel size 6 times smaller than the original image. Here 2D Gaussian spots are rendered with a width of 20 nm and an intensity scaled to $N^{lk}$. The lateral components of the spatial frequency vector $\vec{q}_{lk}$ are detected by finding the peak in the Fourier domain of $S^{lk}$ [26]. The spatial frequency vector for each direction $\vec{q}_{l}$ is then calculated as the average of $\vec{q}_{lk}$ over all phase steps. To reduce the effect of axial spread among the emitters, only emitters within the 200 nm $z$-range in which most emitters are found are used to estimate the lateral components of $\vec{q}_{l}$. The axial component of $\vec{q}_{l}$ is initially set based on the setup: the angle of the off-center beam (Eq. (29)) is calculated to compute the axial pitch $p_\text{ax}$ (Eq. (3)), which finally determines the axial component of $\vec{q}_{l}$ (Eq. (6)).
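The Fourier-peak step for the lateral components of $\vec{q}_l$ can be sketched as follows. This is a simplified stand-in: emitters are rendered as single pixels rather than 20 nm Gaussian spots, and the emitter count, modulation depth, and pattern pitch are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n, px = 512, 65.0 / 6.0                # grid size and 6x-upsampled pixel [nm]
q_true = 1.0 / 480.0                   # pattern frequency along x [1/nm]

# Emitters at random positions; per-frame photon counts modulated by the pattern.
pos = rng.uniform(0, n * px, size=(3000, 2))
weights = 1 + 0.9 * np.cos(2 * np.pi * q_true * pos[:, 0])
S = np.zeros((n, n))
ix = (pos / px).astype(int)
np.add.at(S, (ix[:, 1], ix[:, 0]), weights)   # intensity-scaled rendering S^{lk}

# The pattern frequency appears as an off-center peak in the power spectrum.
spec = np.abs(np.fft.fftshift(np.fft.fft2(S - S.mean()))) ** 2
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=px))
spec[n // 2, n // 2] = 0.0                    # suppress any residual DC
ky, kx = np.unravel_index(spec.argmax(), spec.shape)
q_est = np.hypot(freqs[kx], freqs[ky])
print(f"pitch estimate: {1 / q_est:.0f} nm")  # close to the 480 nm pattern
```

The frequency resolution is set by the reconstruction size (here ~5.5 μm), so the raw peak position is only accurate to one Fourier bin; sub-bin refinement (or the phase fit of section 3.5.2) sharpens the estimate.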

3.5.2 Phase, modulation depth, and relative intensity estimation

The found spatial frequency vectors $\vec {q}_l$, the emitter positions $\vec {r}_{\text {psf},v}$, photon counts per frame $N_v^{lk}$, and photon counts of the summed frames $N_v$ from the previous section are used to estimate other parameters of the illumination pattern: the phase $\psi _{lk}$, modulation depth $m_{lk}$ and relative intensity $\eta _{lk}$ (section 2.1.1). These parameters are estimated by minimizing the error metric $\Omega _{lk}$, which is defined as the difference between the measured photon count and the expected photon count based on the pattern, phase, and position [26]:

$$\Omega_{lk} = \sum_{v}^V \big\lvert N_v^{lk} - \eta_{lk}\frac{N_v}{LK}\left(1+m_{lk}\cos(\phi_{lk}(\vec{r}_{\text{psf},v}))\right) \big\rvert^2.$$

The error metric is minimized by setting the derivative with respect to the zeroth and first-order Fourier coefficients $\{\eta _{lk}, \eta _{lk}m_{lk}\cos (\psi _{lk}), \eta _{lk}m_{lk}\sin (\psi _{lk})\}$ to zero. This results in the following set of equations:

$$\begin{aligned} \left( \begin{matrix} \displaystyle\sum_v \frac{N_v^2}{K^2} & \displaystyle\sum_v \frac{N_v^2}{K^2} \cos{(2\pi\vec{q}_l\cdot\vec{r}_{\text{psf},v})}\\ \displaystyle\sum_v \frac{N_v^2}{K^2} \cos{(2\pi\vec{q}_l\cdot\vec{r}_{\text{psf},v})} & \displaystyle\sum_v \frac{N_v^2}{K^2} \cos{(2\pi\vec{q}_l\cdot\vec{r}_{\text{psf},v})}^2\\ \displaystyle\sum_v \frac{N_v^2}{K^2} \sin{(2\pi\vec{q}_l\cdot\vec{r}_{\text{psf},v})} & \displaystyle\sum_v \frac{N_v^2}{K^2} \cos{(2\pi\vec{q}_l\cdot\vec{r}_{\text{psf},v})}\sin{(2\pi\vec{q}_l\cdot\vec{r}_{\text{psf},v})} \end{matrix} \right.\qquad\qquad\\ \left.\begin{matrix} \displaystyle\sum_v \frac{N_v^2}{K^2}\sin{(2\pi\vec{q}_l\cdot\vec{r}_{\text{psf},v})} \\ \displaystyle\sum_v \frac{N_v^2}{K^2} \cos{(2\pi\vec{q}_l\cdot\vec{r}_{\text{psf},v})}\sin{(2\pi\vec{q}_l\cdot\vec{r}_{\text{psf},v})}\\ \displaystyle\sum_v \frac{N_v^2}{K^2}\sin{(2\pi\vec{q}_l\cdot\vec{r}_{\text{psf},v})}^2 \end{matrix}\right) \left(\begin{matrix} \eta_{lk} \\ \eta_{lk}m_{lk}\cos{(\psi_{lk})} \\ \eta_{lk}m_{lk}\sin{(\psi_{lk})} \end{matrix}\right) = \left(\begin{matrix} \displaystyle \sum_v \frac{N_vN_v^{lk}}{K} \\ \displaystyle \sum_v \frac{N_vN_v^{lk}}{K}\cos{(2\pi\vec{q}_l\cdot\vec{r}_{\text{psf},v})} \\ \displaystyle\sum_v \frac{N_vN_v^{lk}}{K}\sin{(2\pi\vec{q}_l\cdot\vec{r}_{\text{psf},v})} \end{matrix}\right) \end{aligned}$$
which can be solved for $\eta _{lk}$, $m_{lk}$, and $\psi _{lk}$ using the data from all emitters. To check for and correct any variations over time, we divide the data into 10 time bins (each containing approximately 4000 frames) and estimate these parameters per bin.
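Solving this linear system for one $(l,k)$ pair can be sketched as follows, assuming $L=1$ so that the $N_v/(LK)$ weighting matches the matrix above; the function and variable names are our own, not the published code.

```python
import numpy as np

def fit_pattern_parameters(N_sum, N_lk, phase_arg, LK):
    """Least-squares estimate of the relative intensity eta, modulation
    depth m, and pattern phase psi for one (l, k) frame, following the
    normal equations of the error metric Omega_lk (hypothetical helper).

    N_sum     : total photon count per emitter over all L*K frames
    N_lk      : photon count per emitter in this frame
    phase_arg : 2*pi * q_l . r_psf for each emitter
    """
    w = N_sum / LK                      # expected mean photons in this frame
    B = np.stack([np.ones_like(phase_arg),
                  np.cos(phase_arg),
                  np.sin(phase_arg)], axis=1)
    # weighted normal equations: model N_lk ~ w * B @ c
    Bw = w[:, None] * B
    c = np.linalg.solve(Bw.T @ Bw, Bw.T @ N_lk)
    # c = [eta, eta*m*cos(psi), eta*m*sin(psi)]
    eta = c[0]
    m = np.hypot(c[1], c[2]) / eta
    psi = np.arctan2(c[2], c[1])
    return eta, m, psi
```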

3.5.3 Pattern refinement

An error in the spatial frequencies $\vec {q}_l$ and the phase $\psi _{lk}$ of the illumination pattern will lead to a bias in the ZIMFLUX position estimates. In order to mitigate this error, we minimize the mean square difference between the position estimates obtained with only the PSF, $\vec {r}_{\text {psf},v}$, and with only the illumination pattern, $\vec {r}_{\text {ill},v}$ (see section 2.3.1). The phase at position $\vec {r}_v$ is defined as $\phi _{lk}(\vec {r}_v) = 2\pi \vec {q}_l \cdot \vec {r}_v - \psi _{lk}$. We determine the systematic phase error for each direction from

$$\Delta \vec{r}_v = \vec{r}_{\text{psf},v} - \vec{r}_{\text{ill},v},$$
$$\Delta \phi_l(\vec{r}_{\text{psf},v}, \vec{r}_{\text{ill},v}) = 2 \pi \vec{q}_l\cdot \Delta \vec{r}_v + 2\pi\Delta\vec{q}_l \cdot \vec{r}_{\text{psf},v} - \Delta \psi_l,$$
in which $\Delta \vec {q}_l$ and $\Delta \psi _l$ are the unknown errors in the spatial frequency and phase, respectively. The phase error is minimized via least-squares estimation of the cost function
$$R_l = \sum^V_{v=1} \lvert\Delta\phi_l(\vec{r}_{\text{psf},v}, \vec{r}_{\text{ill},v})\rvert^2,$$
which is minimized to yield
$$\sum_v 2\pi(\Delta \vec{q}_l \cdot \vec{r}_{\text{psf},v}) - \sum_v\Delta\psi_l ={-}\sum_v 2\pi(\vec{q}_l \cdot \Delta \vec{r}_{v}),$$
which can be solved for $\Delta \vec {q}_l$ and $\Delta \psi _l$ by fitting a line through $\Delta \vec {r}_v$ versus $\vec {r}_{\text {psf},v}$ for each dimension and all $v$. From the fitted slope and offset, $\Delta \vec {q}_l$ and $\Delta \psi _l$ are updated. We found that the phase estimation described earlier already achieves $\frac {\Delta \psi _l}{\psi _l}< 10^{-4}$, so the phase does not need to be refined. Starting from the initial estimate of the spatial frequency vector of the pattern, the following iterative procedure is used for pattern refinement:
  1. Phase estimation (section 3.5.2).
  2. Estimate $\vec {r}_{\text {ill},v}$ and compute $\Delta \vec {r}_v$.
  3. Update $\vec {q}_l' =\vec {q}_l+ \Delta \vec {q}_l$.

The procedure is stopped when the relative update $\lVert \Delta \vec {q}_l\rVert /\lVert \vec {q}_l\rVert$ is smaller than $10^{-4}$.
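A single refinement step can be sketched as a linear regression of the phase residual on the PSF-based positions. In the full pipeline this step is iterated, with $\vec {r}_{\text {ill},v}$ re-estimated after every update of $\vec {q}_l$; the sketch below covers only one step, and all names are our own.

```python
import numpy as np

def refine_step(q_l, r_psf, r_ill):
    """One pattern-refinement step (illustrative sketch): regress the phase
    residual against the PSF-based positions to estimate the errors dq and
    dpsi in the pattern spatial frequency and phase. In the full pipeline
    this is iterated, re-estimating r_ill after each update of q_l, until
    |dq|/|q_l| < 1e-4."""
    # phase residual per emitter, with Delta_r = r_psf - r_ill
    dphi = -2 * np.pi * (r_psf - r_ill) @ q_l
    # linear model: dphi ~= 2*pi*(r_psf @ dq) - dpsi
    X = np.hstack([2 * np.pi * r_psf, -np.ones((len(r_psf), 1))])
    coef, *_ = np.linalg.lstsq(X, dphi, rcond=None)
    return coef[:-1], coef[-1]          # dq (vector), dpsi (scalar)
```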

3.6 Single-molecule localization microscopy

3.6.1 Astigmatic PSF estimation

After acquiring the patterned illuminated frames ($L \times K \times V$), we generate uniformly illuminated frames by summing over the applied patterns ($L\times K$), i.e., over all phase steps and directions. For ZIMFLUX, $L=1$ and $K=3$. Regions of interest (ROIs) measuring $16\times 16$ pixels, which contain candidate emitters, are detected using a thresholding algorithm based on local maxima in a feature-enhancing Difference-of-Gaussians filtered image [47]. The position ($x$, $y$, and $z$), signal photon count $N$, and background photon count per pixel $b$ of the candidate emitters in the identified ROIs are estimated using MLE (section 2.3.1), without considering the illumination field information. Estimations that do not converge to a position within the ROI, excluding a 2-pixel-wide border, are filtered out. The results obtained from the summed frames are henceforth called the astigmatic PSF estimation.
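The candidate detection can be illustrated with a Difference-of-Gaussians filter followed by local-maximum selection. The filter widths and threshold below are placeholder values, not the published settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_candidates(frame, sigma_small=1.0, sigma_large=2.0,
                      threshold=20.0, roi_half=8):
    """Candidate-emitter detection (illustrative sketch): enhance spots
    with a Difference-of-Gaussians filter, then keep local maxima above a
    threshold whose full 16x16 ROI fits inside the frame."""
    dog = gaussian_filter(frame, sigma_small) - gaussian_filter(frame, sigma_large)
    local_max = (dog == maximum_filter(dog, size=5)) & (dog > threshold)
    ys, xs = np.nonzero(local_max)
    keep = ((ys >= roi_half) & (ys < frame.shape[0] - roi_half) &
            (xs >= roi_half) & (xs < frame.shape[1] - roi_half))
    # each (y, x) is an ROI center; top-left corner is (y - 8, x - 8)
    return list(zip(ys[keep], xs[keep]))
```

A constant background cancels in the DoG image, so the threshold acts on spot contrast rather than absolute intensity.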

3.6.2 ZIMFLUX estimation

Using the knowledge about the illumination patterns obtained in the previous step, the ZIMFLUX localization is performed. If an emitter was not in its 'on' state during all phase shifts, the accuracy and precision of the localization are affected. To mitigate this effect, all spots with a maximum modulation error $\epsilon _\text {mod}$ larger than a user-defined threshold are filtered out. The maximum modulation error is

$$\epsilon_\text{mod} = \mathrm{max} \left(\frac{N^{lk}_\text{exp}-N^{lk}_\text{est}}{N^{lk}_\text{exp}} \right) \qquad \forall \quad 0 \leq l < L \quad \textrm{and} \quad 0 \leq k < K,$$
where $N^{lk}_\text {exp}$ is the expected signal of the emitter based on the illumination pattern and $N^{lk}_\text {est}$ is the estimated signal of the emitter in that frame. We have found that a threshold of $\epsilon _\text {mod}<0.3$ ensures an accurate ZIMFLUX estimation.
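The filtering step amounts to a one-line reduction over the per-frame photon counts; a minimal sketch with assumed array shapes:

```python
import numpy as np

def modulation_filter(N_exp, N_est, threshold=0.3):
    """Keep only localizations whose photon counts are consistent with the
    expected modulation (sketch). N_exp and N_est have shape
    (n_spots, L*K): expected and estimated photons per pattern frame."""
    eps_mod = np.max((N_exp - N_est) / N_exp, axis=1)
    return eps_mod < threshold          # boolean mask of spots to keep
```

A spot that blinks off during one phase step shows a large positive deviation in that frame and is rejected.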

3.6.3 Drift correction

Finally, drift correction is performed based on the astigmatic PSF localizations to maximize the information. The drift correction is based on entropy minimization as described previously [48]. The found drift correction is applied to both the astigmatic PSF and ZIMFLUX estimates.

4. Results

Firstly, we investigate the impact of astigmatism on the precision of ZIMFLUX in comparison to using only the astigmatic PSF through simulations. We analyze how the precision varies depending on the emitter’s position relative to the focal plane under SAF and undercritical angle fluorescence (UAF) conditions, using the CRLB. Secondly, we examine the influence of the axial pitch and modulation depth of the illumination pattern on the CRLB of ZIMFLUX, once again under UAF and SAF conditions. Thirdly, we present the experimental PSF obtained from the optical setup and demonstrate the impact of SAF on the electric field in the pupil plane. Fourthly, we image the illumination pattern using a fluorescent bead sample and confirm that it corresponds to the expected pattern from the optical setup. Lastly, we provide a proof of concept for ZIMFLUX by utilizing 3D nanorulers, demonstrating its effectiveness and potential applications.

4.1 Simulation results

We conducted simulations assuming perfect alignment of the center beam with the optical axis and no optical aberrations except for vertical astigmatism $Z^2_2$. Our goal is to assess the effects of astigmatism and SAF on localization using solely the astigmatic PSF and using ZIMFLUX, by simulating emitters at various depths, as illustrated in Fig. 5. We used Zernike coefficients of −30 mλ, −60 mλ, and −90 mλ to represent varying degrees of astigmatism, and a signal of 2800 photons with a background of 8 photons per pixel. For the ZIMFLUX simulation, the phase for each emitter is randomly assigned, and the illumination pattern has an axial pitch of 492 nm and a lateral pitch of 452 nm, with a modulation depth of 0.85.

Fig. 5. a, PSF at various $z$ distances from the focal plane for different values of the astigmatic Zernike coefficient $Z_2^2$ (−30 mλ, −60 mλ, −90 mλ) at an imaging depth of 300 nm. The parameters used are similar to the experimental conditions (an oil immersion objective, refractive index of the mounting medium of 1.33, and signal and background photon counts of 2800 and 8). b, c, The CRLB of the astigmatic PSF and ZIMFLUX axial localization are shown as a function of the distance from the focal plane at an imaging depth of $z_\text {d} = {300}\;\textrm{nm}$. SAF occurs due to refractive index mismatch, which lowers the CRLB near the coverslip and decreases with increasing distance from the coverslip. The illumination pattern has a modulation depth of $0.85$ and an axial and lateral pitch of 492 nm and 452 nm, respectively. The CRLB of ZIMFLUX is nearly independent of the level of astigmatism and is approximately half of the CRLB of the astigmatic PSF. d-f, Similar to a-c but at $z_\text {d} = {1300}\;\textrm{nm}$, where the effect of SAF is negligible. The refractive index mismatch significantly affects the axial localization precision of the astigmatic PSF (e), but ZIMFLUX enables a higher precision when imaging deeper into the sample, including a 10-fold improvement for negative $z$ values, which goes down to approximately 2-fold for positive $z$ values (f). The y-axes of the plots are scaled such that the CRLB of the astigmatic PSF and ZIMFLUX can be compared easily; for a scale that is suitable for the individual plots, see Fig. S7.


Firstly, we simulated emitters at an imaging depth of $z_\text {d} = {300}\;\textrm{nm}$, where SAF is present. The astigmatic PSF is non-symmetric due to the SAF conditions, and the PSF changes significantly with the $z$ position (Fig. 5(a)). This results in a low CRLB for the $z$-position estimation with the astigmatic PSF and shows that stronger astigmatism is favourable for higher precision (Fig. 5(b)). For ZIMFLUX, the effect of astigmatism on the CRLB of the $z$-position estimation is much smaller than for the astigmatic PSF estimation (Fig. 5(c)). Comparing Fig. 5(b) and Fig. 5(c), the improvement in the axial localization of ZIMFLUX over the astigmatic PSF is roughly a factor of 2.

Secondly, we simulated emitters at $z_\text {d} = {1300}\;\textrm{nm}$, where SAF has a negligible effect. At this depth, the PSF barely changes, particularly for negative $z$ values (Fig. 5(d)), leading to a high CRLB for the axial localization with the astigmatic PSF (Fig. 5(e)). This effect has been covered more extensively in previous work [49]. However, ZIMFLUX demonstrates a significantly lower CRLB (Fig. 5(f)) than the astigmatic PSF (Fig. 5(e)). Since the PSF barely changes for negative $z$ values at $z_\text {d} = {1300}\;\textrm{nm}$, the improvement factor in the CRLB of ZIMFLUX over the astigmatic PSF can reach roughly 10, while for positive $z$ values it goes down to approximately 2. This indicates that ZIMFLUX maintains superior axial localization precision at greater imaging depths compared to SMLM using an astigmatic PSF.

To investigate how the pattern and SAF affect the performance of ZIMFLUX, emitters were randomly simulated over an axial distance of 600 nm at imaging depths of 300 nm and 1300 nm. Generally, a lower axial pattern pitch improves the CRLB of the z-position estimation, as depicted in Fig. 6. The simulation area of the emitters, where SAF is expected, is illustrated in Fig. 6(a). The CRLB of the $z$-estimation of ZIMFLUX is slightly affected by the modulation depth of the pattern (Fig. 6(b)). The impact of different levels of astigmatism on the ZIMFLUX CRLB was examined with a modulation depth of 0.85. The results suggest that the level of astigmatism does not noticeably enhance the precision and may even impair it when the axial pitch is less than 1000 nm (Fig. 6(c)). At an imaging depth of 300 nm, the improvement factor in the CRLB of ZIMFLUX over the astigmatic PSF ranges from 1.5 to 3.5, depending on the level of astigmatism and axial pitch (Fig. 6(d)). At an imaging depth of 1300 nm (Fig. 6(e-h)), the trends are similar, but the improvement factor of ZIMFLUX over the astigmatic PSF reaches 3 to 8, depending on the astigmatism and axial pitch.

Fig. 6. a, The schematic shows that the CRLB in b-d is computed for emitters randomly generated between −300 nm and 300 nm away from the focal plane at an imaging depth of 300 nm. b, The CRLB for estimating the $z$-position as a function of the axial pitch for different modulation depths $m$, assuming the center beam is perfectly parallel to the optical axis and the astigmatism is set with the Zernike mode $Z_2^2$ of −60 mλ. c, The effect of different levels of astigmatism on the CRLB, with a modulation depth of 0.85. It can be observed that the degree of astigmatism does not significantly improve the precision and can even deteriorate it for an axial pitch lower than 1000 nm. d, The improvement factor of the CRLB in $z$-estimation for ZIMFLUX compared to the astigmatic PSF is shown for different levels of astigmatism and axial pitches. e, The CRLB in f-h is computed similarly as for b-d, but at an imaging depth of 1300 nm, at which the effect of SAF is negligible. f-h, Similar to b-d, but with an imaging depth of 1300 nm. The improvement factor of the CRLB for ZIMFLUX over the astigmatic PSF $z$ estimation is about 2 times higher (roughly 3-8 overall) at this depth compared to a 300 nm imaging depth.


4.2 Calibration of PSF model

In order to successfully use the vectorial PSF model under experimental conditions, it is necessary to determine the aberrations of the system. A through-focus PSF scan of fluorescent beads embedded in a medium with a refractive index of 1.46 was performed to retrieve the aberrations. The fluorescent beads were assumed to be attached to the coverslip and to act as point sources, because their diameter (23 nm) is much smaller than the diffraction limit. The 12 Zernike coefficients (Noll indices 5-16) defining the aberrated wavefront were obtained from 11 beads with an average precision of 2 mλ. The Zernike coefficient of the vertical astigmatism induced by the deformable mirror in the emission path was −87±3 mλ, while the other coefficients were around 0 mλ (Fig. S8c). The calibration PSF, the Zernike modes, the aberrated wavefront, and the effect of SAF are shown in Fig. S8.

4.3 Illumination pattern measurement

The illumination pattern is imaged by analyzing a bead sample embedded in agarose as described in section 3.2. The pattern phase is kept constant while the stage is moved. From the intensity of the emitters, based on the stage position, the lateral and axial pattern pitches are estimated.

To find the axial pattern pitch, the stage is moved in the axial direction, and the lateral position of the PSF is fixed at the center of the ROI, which is found from a maximum intensity projection. The other parameters, $z$, photons $N$, and background photons $b$, are then estimated for each ROI. As the stage is shifted in steps of 30 nm in the axial direction, the intensity of the emitter varies according to the pitch of the pattern in the immersion oil, $p'_z$, as depicted in Fig. 7(a) (see Supplement 1 for the derivation). A sinusoidal function with a linear offset (to incorporate bleaching) is fitted to the estimated PSF signal as a function of the stage shift to determine $p'_{z}$. This process is repeated for 99 beads, yielding $p'_{z}$ = 835±12 nm (SEM).
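The pitch extraction can be sketched as a nonlinear fit of a sinusoid plus a linear bleaching term. The parameterization and initial guesses below are our assumptions, not the published settings.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_pitch(stage_z, signal, pitch_guess):
    """Fit a sinusoid with a linear offset (to absorb bleaching) to the
    estimated photon count versus stage position, and return the pattern
    pitch (illustrative sketch; parameter names are assumptions)."""
    def model(z, amp, pitch, phase, offset, slope):
        return offset + slope * z + amp * np.cos(2 * np.pi * z / pitch + phase)
    # crude but serviceable initial guesses
    p0 = [np.ptp(signal) / 2, pitch_guess, 0.0, np.mean(signal), 0.0]
    popt, _ = curve_fit(model, stage_z, signal, p0=p0)
    return abs(popt[1])
```

The initial pitch guess must be within roughly half an oscillation over the scan range, otherwise the fit can lock onto a neighboring local minimum.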

Fig. 7. a, By shifting the stage and measuring the signal of a single emitter while keeping the pattern constant, the lateral pitches $p_x$ and $p_y$ in a sample can be determined. Due to a refractive index mismatch between the sample and coverslip, there is a difference in the pattern below and above the coverslip. Thus, when the stage is moved in the $z$-direction, the intensity profile of the emitter represents the pitch in $z$ within the immersion oil, as explained in the Theory section and depicted in the schematic. For the lateral directions, the found pitch corresponds to the pitch of the pattern within the sample. b, The axial pitch $p_z'$ and standard error of the mean (SEM) obtained from the stage shift is 835±12 nm (SEM, n=99) (bin width 50 nm). c, d, Histograms of the estimated lateral pitches of the sinusoidal function reveal values of 646±3 nm (SEM, n=111) and 663±3 nm (SEM, n=110) for $p_x$ and $p_y$, respectively (bin width 30 nm).


For estimating the lateral illumination pattern, the stage is shifted in both the $x$ and $y$ directions by one pixel (65 nm) for 25 steps. Unlike the axial stage shift, the PSF remains unchanged, so the ROIs containing a bead are summed after shifting, and the position of the bead within the ROI is determined. Using the estimated position for each individual ROI, the signal is estimated, and the intensity profiles are fitted to determine the pitches of the pattern in the $x$ and $y$ directions, denoted by $p_x$ and $p_y$. The pitch of the sinusoidal fit is estimated for 111 different beads shifted in the $x$-direction and 110 beads in the $y$-direction, with values of $p_x = 646{\pm}3\;\textrm{nm}$ (SEM) and $p_y = 663{\pm}3\;\textrm{nm}$ (SEM), as shown in Fig. 7(c,d). Computing the lateral wave vector using $p_x$ and $p_y$ gives a lateral pitch $p_\text {lat}$ of 462±3 nm (SEM).
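Since the per-axis pitches correspond to the components of the lateral wave vector, they combine in quadrature, $1/p_\text {lat}^2 = 1/p_x^2 + 1/p_y^2$; a one-line helper:

```python
import math

def lateral_pitch(p_x, p_y):
    """Combine the pitches measured along x and y into the lateral pattern
    pitch: the wave-vector components add in quadrature, so
    1/p_lat^2 = 1/p_x^2 + 1/p_y^2."""
    return 1.0 / math.hypot(1.0 / p_x, 1.0 / p_y)
```

With the measured $p_x$ = 646 nm and $p_y$ = 663 nm this evaluates to approximately 463 nm, consistent with the reported 462±3 nm (SEM).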

Equation (2) and Eq. S22 in Supplement 1 are solved using the found pitches $p'_{z}$ and $p_\text {lat}$ to determine the two unknown beam angles and their uncertainties. This yields $\alpha _0 = 3{\pm}2^{\circ}$ and $\beta _0 = 60{\pm}1^{\circ}$; the spacing $u'$ in the back focal plane between the two beams arising from these angles is 2.8±0.3 mm, which is within the expected range based on the optical setup and the pitch of the DMD. In the case of perfect alignment, the expected $u'$ is 2.63 mm (section 3.1).

4.4 Proof of principle

We conducted an experiment to demonstrate the feasibility of our method by imaging randomly oriented 3D DNA-origami nanorulers with a length $l$ of 81±21 nm, as specified by the manufacturer (details in section 3.2). The field of view (FOV) was 16.6 µm$\times$16.6 µm, as depicted in Fig. 8(a). Before image acquisition, we displaced the stage position $z_\text {stage}$ such that the imaging depth $z_\text {d}$ is approximately 60 nm (Fig. 4), which is used in the astigmatic PSF model. Because of the relatively small axial spread of the emitters (80 nm), the lateral pitch could be estimated using 2D Fourier-domain peak finding; the spatial spectrum and its peak (Fig. S3(a) in Supplement 1) correspond to a lateral pitch $p_\text {lat}$ of 454 nm with an azimuthal pattern angle of 42°. The initial axial pitch $p_\text {ax}$ was set to 500 nm based on the pattern validation measurement. We further refined $p_\text {ax}$ as described in section 3.5.3, which converged within 4 iterations to $p_\text {ax} = {481}\;\textrm{nm}$ (Fig. S3(b)). With the found pattern, no systematic localization bias between using only the astigmatic PSF and only the illumination pattern was found in any of the three dimensions (Fig. S3(c-f)).

Fig. 8. a, The field of view (FOV) is captured from a sample containing randomly oriented 3D DNA origami nanorulers that are attached to the coverslip. The polar angle, $\theta$, denotes the angle between the nanoruler of length $l$ and the coverslip, as illustrated in the upper left corner. b, The colored boxes in a are zoomed in to display the results of the astigmatic PSF and ZIMFLUX on the same underlying data. To aid visualization, the localizations are convolved with a Gaussian kernel of size 10 nm and the color corresponds to the $z$ position. The histograms show the estimated z-positions of the individual localizations. The two binding sites are identified by K-means clustering, and the fit of each cluster is shown in the histograms with the full width half maximum (FWHM) noted. c, The positions of the binding sites are calculated as the mean of all localizations in each cluster. The Euclidean distance between the binding sites, $l$, as a function of the polar angle $\theta$ is plotted. d, The histogram of length $l$ has mean values of 83±7 nm and 83±8 nm for the astigmatic PSF and ZIMFLUX, respectively. The bin width of the histograms in b and d is 5 nm. e, The median values of the found precision of the clusters are 18.8 nm and 12.6 nm for the astigmatic PSF and ZIMFLUX and 11.7 nm and 6.0 nm for the CRLB. The CRLB is computed with the mean values of the estimated parameters of each cluster.


Subsequently, the phases, modulation depths, and relative illumination intensities were estimated (Fig. S4(a-c)). The estimated intensities relative to the expected pattern intensity for the emitters are provided in Fig. S4(e-f), and localizations with $\epsilon _\text {mod}>0.3$ are filtered out.

In Fig. 8(b), a zoomed-in view of the boxes in Fig. 8(a) is presented, in which the localizations are convolved with a Gaussian kernel of size 10 nm. The histograms of the $z$-projection are also depicted for the individual nanorulers, and the enhancement in precision can be observed. To verify the consistency of the astigmatic PSF and ZIMFLUX estimation for different $z$-positions, the Euclidean distance $l$ between the binding sites versus the polar angle of the nanoruler with respect to the coverslip is shown in Fig. 8(c); only minor deviations from a horizontal line can be observed, indicating that the PSF model is accurate. The lengths of the nanorulers are shown as histograms in Fig. 8(d), with mean values of 83±7 nm and 83±8 nm for the astigmatic PSF estimation and ZIMFLUX, respectively. The experimentally obtained precisions and the corresponding CRLBs are shown in Fig. 8(e): the median precision over the clusters is 18.8 nm for the astigmatic PSF estimation and 12.6 nm for the ZIMFLUX estimation, with corresponding CRLB values of 11.7 nm and 6.0 nm, respectively.

The experimentally observed precision $\sigma _\text {exp}$ is worse than the theoretical CRLB $\sigma _\text {crlb}$ by a margin $\sigma _\text {e}$. Applying the relationship $\sigma _\text {exp}^2 = \sigma _\text {crlb}^2 + \sigma _\text {e}^2$ gives $\sigma _\text {e} = {14.7}\;\textrm{nm}$ for the astigmatic PSF estimation and 11.1 nm for the ZIMFLUX estimation. This deviation may be attributed in part to imprecision in the drift correction, and to the fact that a single PSF model is employed for the entire acquisition, even though any tilt of the cover glass or axial drift can alter the imaging depth and influence the PSF, giving a model mismatch; this is particularly relevant given the significant variability of the SAF effect over small distances. Because the lateral pitch of 452 nm is relatively large compared to the spot width, no improvement is visible in the lateral directions. The experimentally observed precision is worse than the theoretical CRLB for the lateral localization by a similar margin as for the axial localization (Supplement 1, Fig. S5).
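The quadrature decomposition used here is straightforward to reproduce:

```python
import math

def excess_error(sigma_exp, sigma_crlb):
    """Localization error not explained by shot noise, assuming the
    experimental variance is the CRLB variance plus an independent error
    term: sigma_exp^2 = sigma_crlb^2 + sigma_e^2."""
    return math.sqrt(sigma_exp**2 - sigma_crlb**2)
```

With the reported medians, `excess_error(18.8, 11.7)` gives approximately 14.7 nm for the astigmatic PSF estimation and `excess_error(12.6, 6.0)` approximately 11.1 nm for ZIMFLUX.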

5. Discussion

The presented approach offers an effective way to improve the axial localization precision in standard SMLM experimental settings by combining the vectorial PSF model with a sinusoidal illumination pattern. Although a similar technique using two-beam excitation, with a different type of setup, has been previously demonstrated in ModLoc [27], the addition of PSF information and astigmatism allows the use of smaller axial pitches and eliminates phase-wrapping issues. This removes the lower limit on the axial pitch, which otherwise must be at least the depth of focus of the imaged sample. Our simulations show that using a pitch of 500 nm improves the axial localization precision by a factor of $\sim 1.5$ compared to the 1 μm pitch used in ModLoc. Additionally, incorporating the astigmatic PSF model provides better control over the estimation of the pattern parameters and results in unbiased estimations: the imaging setup can develop misalignments over time, which affect the pattern, and the pattern can also deviate per sample due to a changing coverslip angle or refractive index of the mounting medium.

Our experimental results show that the vectorial PSF model can effectively handle SAF conditions. In previous work, it has been shown that solely using an astigmatic PSF model in the presence of SAF leads to relative axial localization errors between 30% and 50% over a range of several hundreds of nanometers [30]. Therefore, the vectorial PSF model can provide more accurate and reliable results in such experimental conditions.

5.1 Imaging depth

The maximum achievable imaging depth with ZIMFLUX varies depending on the specific optical system. As the imaging depth increases, the axial localization precision $\sigma _{\text {ax}}$ of the astigmatic PSF deteriorates because of depth-dependent aberrations [49,50]. If $\sigma _{\text {ax}}$ is larger than the axial pitch of the illumination pattern $p_\text {ax}$, it becomes infeasible to reliably estimate the illumination pattern. Considering the Nyquist-Shannon sampling theorem, which dictates that the sampling frequency should be at least twice the signal frequency, one could argue that ZIMFLUX is applicable up to imaging depths where $\sigma _{\text {ax}} < p_\text {ax}/2$.

5.2 Computation time

Estimating parameters with the vectorial PSF model requires significantly more computation time than using only an astigmatic PSF, making it less practical. The MLE for both models has a computational time complexity that is linear in the number of iterations, and the MLE for the vectorial PSF model is $\sim 135$ times slower than using solely an astigmatic PSF on a standard commercial graphics processing unit (NVIDIA GeForce RTX 3060), as shown in Supplement 1, Fig. S9. Running the whole pipeline takes approximately 2.5 hours for 180,000 spots.

5.3 Point spread function model mismatch

The determination of the imaging depth $z_\text {d}$ and stage position $z_\text {stage}$ is not very precise: it is difficult to determine exactly when the spots are in focus, and the method described in this work requires a sample with emitters attached to the coverslip. Additionally, axial sample drift has a significant effect on the PSF, especially in SAF conditions. In this study, a single PSF model was used for the entire data acquisition, resulting in a model mismatch. In future research, a more adaptive vectorial PSF model could be preferable.

5.4 Lateral localization precision

With the setup used in ZIMFLUX and perfect alignment of the center beam, a minimum lateral pitch $p_\text {lat}$ of approximately 480 nm can be achieved; otherwise, the off-center beam enters the TIRF regime. Theoretically, assuming zero background and neglecting the dependence on the global phase, the improvement factor for lateral precision using a sinusoidal illumination pattern, compared to conventional SMLM, is given by $\sqrt {1+2\pi ^2 \left (\frac {m^2}{1+\sqrt {1-m^2}}\cdot \frac {\sigma ^2}{p_\text {lat}^2}\right )}$, where $m$ is the modulation depth and $\sigma \approx \lambda /4\mathrm {NA}$ is the spot width if astigmatism is neglected [26]. In our setup, with $m=0.85$, the sinusoidal illumination pattern would result in a 1.35-fold improvement in lateral precision compared to using only the PSF. For an astigmatic PSF, $\sigma$ increases with the $z$ position, so a higher improvement factor for the lateral position would be expected in ZIMFLUX. However, this has not been observed under experimental conditions, as shown in Fig. S5.
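The quoted expression can be evaluated directly. The wavelength and NA below are assumptions based on the setup description, and the result is sensitive to the effective spot width $\sigma$, which the simple $\lambda /4\mathrm {NA}$ approximation may underestimate for an astigmatic PSF.

```python
import math

def lateral_improvement(m, p_lat, wavelength, NA):
    """Theoretical lateral-precision improvement factor of a sinusoidal
    illumination pattern over conventional SMLM (zero background, global
    phase neglected), with sigma approximated as lambda / (4 * NA)."""
    sigma = wavelength / (4.0 * NA)
    return math.sqrt(1.0 + 2.0 * math.pi**2
                     * (m**2 / (1.0 + math.sqrt(1.0 - m**2)))
                     * (sigma**2 / p_lat**2))
```

The factor approaches 1 as the modulation depth vanishes or the pitch grows large compared to the spot width, consistent with the observation that a 452 nm pitch yields little lateral gain.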

5.5 Outlook

In our study, we have demonstrated the effectiveness of a rapidly shifting illumination pattern in combination with a high-NA oil objective (NA = 1.49) and an aberrated PSF. For future research, it could be worthwhile to explore alternative illumination patterns to improve the resolution of SMLM, such as the counter-propagating beams proposed in ROSE-Z [28], which yield an axial pattern pitch of approximately 240 nm at an excitation wavelength of 640 nm. Including an astigmatic PSF will prevent phase wrapping and allows imaging samples with a broader axial range (>240 nm) of emitters. This could be combined with two off-center beams with opposing angles, which could reduce the lateral pitch of the illumination pattern to 240 nm as well. This combined approach holds the potential to achieve enhanced precision in all dimensions.

Funding

Nederlandse Organisatie voor Toegepast Natuurwetenschappelijk Onderzoek (NWO START-UP, project no. 740.018.015). P.V.V. and D.G. were supported by the National Science Foundation under Grant No. 1917206.

Disclosures

The authors declare no conflicts of interest.

Data availability

Code and example data are available in Ref. [51]. A stand-alone implementation of a vectorial PSF fitting tool, including a graphical user interface, is available in Ref. [52].

Supplemental document

See Supplement 1 for supporting content.

References

1. M. Lelek, M. T. Gyparaki, G. Beliu, et al., “Single-molecule localization microscopy,” Nat. Rev. Methods Primers 1(1), 39 (2021). [CrossRef]  

2. S. W. Hell, “Microscopy and its focal switch,” Nat. Methods 6(1), 24–32 (2009). [CrossRef]  

3. B. Huang, H. Babcock, and X. Zhuang, “Breaking the diffraction barrier: super-resolution imaging of cells,” Cell 143(7), 1047–1058 (2010). [CrossRef]  

4. K. Xu, G. Zhong, and X. Zhuang, “Actin, spectrin, and associated proteins form a periodic cytoskeletal structure in axons,” Science 339(6118), 452–456 (2013). [CrossRef]  

5. Y. Doksani, J. Y. Wu, T. de Lange, et al., “Super-resolution fluorescence imaging of telomeres reveals TRF2-dependent t-loop formation,” Cell 155(2), 345–356 (2013). [CrossRef]  

6. A. Dani, B. Huang, J. Bergan, et al., “Superresolution imaging of chemical synapses in the brain,” Neuron 68(5), 843–856 (2010). [CrossRef]  

7. M. P. Backlund, M. D. Lew, A. S. Backer, et al., “The double-helix point spread function enables precise and accurate measurement of 3d single-molecule localization and orientation,” Proc. SPIE 8590, 85900L (2013). [CrossRef]  

8. S. Prasad, “Rotating point spread function via pupil-phase engineering,” Opt. Lett. 38(4), 585–587 (2013). [CrossRef]  

9. Y. Shechtman, S. J. Sahl, A. S. Backer, et al., “Optimal point spread function design for 3d imaging,” Phys. Rev. Lett. 113(13), 133902 (2014). [CrossRef]  

10. S. Jia, J. Vaughan, and X. Zhuang, “Isotropic three-dimensional super-resolution imaging with a self-bending point spread function,” Nat. Photonics (2014).

11. D. Baddeley, M. B. Cannell, and C. Soeller, “Three-dimensional sub-100 nm super-resolution imaging of biological samples using a phase ramp in the objective pupil,” Nano Res. 4(6), 589–598 (2011). [CrossRef]  

12. M. D. Lew, S. F. Lee, M. Badieirostami, et al., “Corkscrew point spread function for far-field three-dimensional nanoscale localization of pointlike objects,” Opt. Lett. 36(2), 202–204 (2011). [CrossRef]  

13. A. Aristov, B. Lelandais, E. Rensen, et al., “ZOLA-3D allows flexible 3D localization microscopy over an adjustable axial range,” Nat. Commun. 9(1), 2409 (2018). [CrossRef]  

14. Y. Li, M. Mund, P. Hoess, et al., “Real-time 3d single-molecule localization using experimental point spread functions,” Nat. Methods 15(5), 367–369 (2018). [CrossRef]  

15. B. Huang, W. Wang, M. Bates, et al., “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science 319(5864), 810–813 (2008). [CrossRef]  

16. G. Shtengel, J. A. Galbraith, C. G. Galbraith, et al., “Interferometric fluorescent super-resolution microscopy resolves 3d cellular ultrastructure,” Proc. Natl. Acad. Sci. 106(9), 3125–3130 (2009). [CrossRef]  

17. G. Wang, J. Hauver, Z. Thomas, et al., “Single-molecule real-time 3d imaging of the transcription cycle by modulation interferometry,” Cell 167(7), 1839–1852.e21 (2016). [CrossRef]  

18. D. Aquino, A. Schönle, C. Geisler, et al., “Two-color nanoscopy of three-dimensional volumes by 4Pi detection of stochastically switched fluorophores,” Nat. Methods 8(4), 353–359 (2011). [CrossRef]  

19. F. Huang, G. Sirinakis, E. S. Allgeyer, et al., “Ultra-high resolution 3d imaging of whole cells,” Cell 166(4), 1028–1040 (2016). [CrossRef]  

20. A. Dasgupta, J. Deschamps, U. Matti, et al., “Direct supercritical angle localization microscopy for nanometer 3d superresolution,” Nat. Commun. 12(1), 1180 (2021). [CrossRef]  

21. M. Oheim, A. Salomon, and M. Brunstein, “Supercritical angle fluorescence microscopy and spectroscopy,” Biophys. J. 118(10), 2339–2348 (2020). [CrossRef]  

22. D. Kalisvaart, J. Cnossen, S.-T. Hung, et al., “Precision in iterative modulation enhanced single-molecule localization microscopy,” Biophys. J. 121(12), 2279–2289 (2022). [CrossRef]  

23. F. Balzarotti, Y. Eilers, K. C. Gwosch, et al., “Nanometer resolution imaging and tracking of fluorescent molecules with minimal photon fluxes,” Science 355(6325), 606–612 (2017). [CrossRef]  

24. K. C. Gwosch, J. K. Pape, F. Balzarotti, et al., “MINFLUX nanoscopy delivers 3d multicolor nanometer resolution in cells,” Nat. Methods 17(2), 217–224 (2020). [CrossRef]  

25. L. Reymond, J. Ziegler, C. Knapp, et al., “SIMPLE: structured illumination based point localization estimator with enhanced precision,” Opt. Express 27(17), 24578–24590 (2019). [CrossRef]  

26. J. Cnossen, T. Hinsdale, R. Ø. Thorsen, et al., “Localization microscopy at doubled precision with patterned illumination,” Nat. Methods 17(1), 59–63 (2020). [CrossRef]  

27. P. Jouchet, C. Cabriel, N. Bourg, et al., “Nanometric axial localization of single fluorescent molecules with modulated excitation,” Nat. Photonics 15(4), 297–304 (2021). [CrossRef]  

28. L. Gu, Y. Li, S. Zhang, et al., “Molecular-scale axial localization by repetitive optical selective exposure,” Nat. Methods 18(4), 369–373 (2021). [CrossRef]  

29. S. Stallinga and B. Rieger, “Accuracy of the Gaussian point spread function model in 2d localization microscopy,” Opt. Express 18(24), 24461–24476 (2010). [CrossRef]  

30. M. E. Siemons, L. C. Kapitein, and S. Stallinga, “Axial accuracy in localization microscopy with 3d point spread function engineering,” Opt. Express 30(16), 28290–28300 (2022). [CrossRef]  

31. C. Smith, M. Huisman, M. Siemons, et al., “Simultaneous measurement of emission color and 3d position of single molecules,” Opt. Express 24(5), 4996–5013 (2016). [CrossRef]  

32. C. N. Hulleman, R. Ø. Thorsen, E. Kim, et al., “Simultaneous orientation and 3d localization microscopy with a vortex point spread function,” Nat. Commun. 12(1), 5934 (2021). [CrossRef]  

33. M. Siemons, C. Hulleman, R. Thorsen, et al., “High precision wavefront control in point spread function engineering for single emitter localization,” Opt. Express 26(7), 8397–8416 (2018). [CrossRef]  

34. T. Wilson, R. Juškaitis, and P. Higdon, “The imaging of dielectric point scatterers in conventional and confocal polarisation microscopes,” Opt. Commun. 141(5-6), 298–313 (1997). [CrossRef]  

35. P. Török, P. D. Higdon, and T. Wilson, “Theory for confocal and conventional microscopes imaging small dielectric scatterers,” J. Mod. Opt. 45(8), 1681–1698 (1998). [CrossRef]  

36. T. Liaudat, J.-L. Starck, M. Kilbinger, et al., “Rethinking data-driven point spread function modeling with a differentiable optical model,” Inverse Probl. 39(3), 035008 (2023). [CrossRef]  

37. S. Liu, E. B. Kromann, W. D. Krueger, et al., “Three dimensional single molecule localization using a phase retrieved pupil function,” Opt. Express 21(24), 29462–29487 (2013). [CrossRef]  

38. J. Enderlein, T. Ruckstuhl, and S. Seeger, “Highly efficient optical detection of surface-generated fluorescence,” Appl. Opt. 38(4), 724–732 (1999). [CrossRef]  

39. C. S. Smith, N. Joseph, B. Rieger, et al., “Fast, single-molecule localization that achieves theoretically minimum uncertainty,” Nat. Methods 7(5), 373–375 (2010). [CrossRef]  

40. C. S. Smith, S. Stallinga, K. A. Lidke, et al., “Probability-based particle detection that enables threshold-free and robust in vivo single-molecule tracking,” Mol. Biol. Cell 26(22), 4057–4062 (2015). [CrossRef]  

41. F. Huang, T. M. Hartwich, F. E. Rivera-Molina, et al., “Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms,” Nat. Methods 10(7), 653–658 (2013). [CrossRef]  

42. C. Smith, R. Marinică, A. Den Dekker, et al., “Iterative linear focal-plane wavefront correction,” J. Opt. Soc. Am. A 30(10), 2002–2011 (2013). [CrossRef]  

43. GATTAquant, “Technical FAQ,” https://www.gattaquant.com/faq (2023). [Online; accessed 29-March-2023].

44. J. J. Schmied, C. Forthmann, E. Pibiri, et al., “DNA origami nanopillars as standards for three-dimensional superresolution microscopy,” Nano Lett. 13(2), 781–785 (2013). [CrossRef]  

45. J. C. Mullikin, L. J. van Vliet, H. Netten, et al., “Methods for CCD camera characterization,” in Image Acquisition and Scientific Imaging Systems, vol. 2173 (SPIE, 1994), pp. 73–84.

46. R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. 66(3), 207–211 (1976). [CrossRef]  

47. F. Huang, S. L. Schwartz, J. M. Byars, et al., “Simultaneous multiple-emitter fitting for single molecule super-resolution imaging,” Biomed. Opt. Express 2(5), 1377–1393 (2011). [CrossRef]  

48. J. Cnossen, T. J. Cui, C. Joo, et al., “Drift correction in localization microscopy using entropy minimization,” Opt. Express 29(18), 27961–27974 (2021). [CrossRef]  

49. M. Siemons, B. M. Cloin, D. M. Salas, et al., “Comparing strategies for deep astigmatism-based single-molecule localization microscopy,” Biomed. Opt. Express 11(2), 735–751 (2020). [CrossRef]  

50. S.-T. Hung, A. Llobet Rosell, D. Jurriens, et al., “Adaptive optics in single objective inclined light sheet microscopy enables three-dimensional localization microscopy in adult Drosophila brains,” Front. Neurosci. 16, 954949 (2022). [CrossRef]  

51. P. van Velde, “Zimflux,” GitHub (2023) [accessed Nov. 30, 2023], https://github.com/qnano/zimflux.

52. P. van Velde, “VectorialPSF,” GitHub (2023) [accessed Nov. 30, 2023], https://github.com/qnano/VectorialPSF.

Supplementary Material (1)

Supplement 1: Supplemental Document

Data availability

Code and example data are available in Ref. [51]. A stand-alone implementation of a vectorial PSF fitting tool, including a graphical user interface, is available in Ref. [52].




Figures (8)

Fig. 1. a, An interference pattern is generated by two beams entering the sample, illuminating the emitter located at $(x_0, y_0, z_0)$. b, ZIMFLUX operates by recording three images with shifted patterns, and using the information of the illumination pattern, PSF, and photon count for a maximum likelihood estimation. This method results in improved axial localization precision compared to conventional single molecule localization microscopy (SMLM) using an astigmatic PSF, where only the PSF obtained from the sum of the frames is used for localization of the emitter.
Fig. 2. The figure illustrates the interference pattern $\vec {E}_i$ that is formed by two plane waves $\vec {E}_1$ and $\vec {E}_2$ originating from the objective, with incidence angles of $\alpha _0$ and $\beta _0$, and with an azimuthal angle of $\gamma$. The light passes through the immersion medium and cover glass, with refractive indices of $n_2$ and $n_1$. Due to the refractive index mismatch, the angles of the beams change to $\alpha _1$ and $\beta _1$. The fluorophores are embedded in a medium with a refractive index of $n_0$. The imaging depth $z_\text {d}$ corresponds to the distance from the cover glass to the focal plane, while $z_{0}$ represents the emitter’s distance from the focal plane. $z_\text {stage}$ is defined as the distance the stage has moved from the point at which the top of the cover glass is aligned with the focal plane.
Fig. 3. a, A schematic of the custom-built ZIMFLUX setup. Additional information is available in the main text. b, A simplified schematic of the excitation path. Incoming laser light is diffracted by a digital micromirror device (DMD) on which a binary block wave is projected. The spatial filter (SF) permits only the zeroth and one first-order beam, with a spacing of $u$, to pass through. The spacing is then magnified to $u'$ at the back focal plane, resulting in non-parallel beams that generate an interference pattern at the overlap in the sample plane. All abbreviations used in the figure are defined in the main text.
Fig. 4. The relation between the imaging depth and the stage position in SAF conditions for the setup used in this research, which is obtained from Eq. (30) for multiple values of $z_\text {stage}$.
Fig. 5. a, PSF at various $z$ distances from the focal plane for different values of the astigmatic Zernike coefficient $Z_2^2$ (−30 mλ, −60 mλ, −90 mλ) at an imaging depth of 300 nm. The parameters used are similar to the experimental conditions (an oil immersion objective, refractive index of the mounting medium of 1.33, and signal and background photon counts of 2800 and 8). b, c, The CRLB of the astigmatic PSF and ZIMFLUX axial localization are shown as a function of different distances from the focal plane at an imaging depth of $z_\text {d} = {300}\;\textrm{nm}$. SAF occurs due to refractive index mismatch, which lowers the CRLB near the coverslip; this effect decreases with increasing distance from the coverslip. The illumination pattern has a modulation depth of $0.85$ and an axial and lateral pitch of 492 nm and 452 nm, respectively. The CRLB of ZIMFLUX is nearly independent of the level of astigmatism and is approximately half of the CRLB of the astigmatic PSF. d-f, Similar to a-c but at $z_\text {d} = {1300}\;\textrm{nm}$, where the effect of SAF is negligible. The refractive index mismatch significantly affects the axial localization precision of the astigmatic PSF (e), but ZIMFLUX enables a higher precision when imaging deeper into the sample, including a 10-fold improvement for negative $z$ values, which decreases to approximately 2-fold for positive $z$ values (f). The y-axes of the plots are scaled such that the CRLB of the astigmatic PSF and ZIMFLUX can be compared easily; for a scale that is suitable for the individual plots, see Fig. S7.
Fig. 6. a, The schematic shows that the CRLB in b-d is computed for emitters randomly generated between −300 nm and 300 nm away from the focal plane at an imaging depth of 300 nm. b, The CRLB for estimating the z-position as a function of the axial pitch for different modulation depths $m$, assuming the center beam is perfectly parallel to the optical axis and the astigmatism is set with the Zernike mode $Z_2^2$ of −60 mλ. c, The effect of different levels of astigmatism on the CRLB, with a modulation depth of 0.85. It can be observed that the degree of astigmatism does not significantly improve the precision and can even deteriorate it for an axial pitch lower than 1000 nm. d, The improvement factor of the CRLB in $z$-estimation for ZIMFLUX compared to the astigmatic PSF is shown for different levels of astigmatism and axial pitches. e, The CRLB in f-h is computed similarly as for b-d, but at an imaging depth of 1300 nm, at which the effect of SAF is negligible. f-h, Similar to b-d, but with an imaging depth of 1300 nm. The improvement factor of the CRLB for ZIMFLUX over the astigmatic PSF $z$-estimation is roughly 3-8 at this depth, about 2 times higher than at a 300 nm imaging depth.
Fig. 7. a, By shifting the stage and measuring the signal of a single emitter while keeping the pattern constant, the lateral pitches $p_x$ and $p_y$ in a sample can be determined. Due to a refractive index mismatch between the sample and coverslip, there is a difference in the pattern below and above the coverslip. Thus, when the stage is moved in the $z$-direction, the intensity profile of the emitter represents the pitch in $z$ within the immersion oil, as explained in the Theory section and depicted in the schematic. For the lateral directions, the found pitch corresponds to the pitch of the pattern within the sample. b, The axial pitch $p_z'$ and standard error of the mean (SEM) obtained from the stage shift is 835±12 nm (SEM, n=99) (bin width 50 nm). c, d, Histograms of the estimated lateral pitches of the sinusoidal function reveal values of 646±3 nm (SEM, n=111) and 663±3 nm (SEM, n=110) for $p_x$ and $p_y$, respectively (bin width 30 nm).
Fig. 8. a, The field of view (FOV) is captured from a sample containing randomly oriented 3D DNA origami nanorulers that are attached to the coverslip. The polar angle, $\theta$, denotes the angle between the nanoruler of length $l$ and the coverslip, as illustrated in the upper left corner. b, The colored boxes in a are zoomed in to display the results of the astigmatic PSF and ZIMFLUX on the same underlying data. To aid visualization, the localizations are convolved with a Gaussian kernel of size 10 nm and the color corresponds to the $z$ position. The histograms show the estimated z-positions of the individual localizations. The two binding sites are identified by K-means clustering, and the fit of each cluster is shown in the histograms with the full width half maximum (FWHM) noted. c, The positions of the binding sites are calculated as the mean of all localizations in each cluster. The Euclidean distance between the binding sites, $l$, as a function of the polar angle $\theta$ is plotted. d, The histogram of length $l$ has mean values of 83±7 nm and 83±8 nm for the astigmatic PSF and ZIMFLUX, respectively. The bin width of the histograms in b and d is 5 nm. e, The median values of the found precision of the clusters are 18.8 nm and 12.6 nm for the astigmatic PSF and ZIMFLUX, and 11.7 nm and 6.0 nm for the CRLB. The CRLB is computed with the mean values of the estimated parameters of each cluster.

Equations (37)


$$\mu_{jlk} = N P(\phi_{lk}(\vec{r}_0))\, H(\vec{r}_j - \vec{r}_0) + \frac{b}{LK}$$
$$p_\text{lat} = \frac{\lambda_0}{\left| n_0 \left( \sin\alpha_1 + \sin\beta_1 \right) \right|},$$
$$p_\text{ax} = \frac{\lambda_0}{\left| n_0 \left( \cos\alpha_1 - \cos\beta_1 \right) \right|}.$$
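As a quick numeric check, the two pitch expressions above can be evaluated directly. The sketch below is illustrative only: the wavelength, refractive index, and beam angles are made-up values, not the experimental parameters of the paper.

```python
import numpy as np

def pattern_pitches(wavelength_nm, n0, alpha1, beta1):
    """Lateral and axial pitch of the two-beam interference pattern:
    p_lat = lambda0 / |n0 (sin a1 + sin b1)|,
    p_ax  = lambda0 / |n0 (cos a1 - cos b1)|."""
    p_lat = wavelength_nm / abs(n0 * (np.sin(alpha1) + np.sin(beta1)))
    p_ax = wavelength_nm / abs(n0 * (np.cos(alpha1) - np.cos(beta1)))
    return p_lat, p_ax

# Illustrative values: 640 nm excitation in water (n0 = 1.33),
# one steep and one near-axial beam.
p_lat, p_ax = pattern_pitches(640.0, 1.33, np.deg2rad(60.0), np.deg2rad(5.0))
```

Note that the axial pitch diverges as the two beam angles approach each other, which is why a steep angle difference is needed for useful axial modulation.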
$$P(\phi_{lk}(\vec{r})) = \eta_{lk} \left( 1 + m_{lk} \cos(\phi_{lk}(\vec{r})) \right),$$
$$\phi_{lk}(\vec{r}) = 2\pi \vec{q}_l \cdot \vec{r} - \psi_{lk},$$
$$\vec{q}_l = \left\{ \frac{\cos(\gamma_l)}{p_\text{lat}},\ \frac{\sin(\gamma_l)}{p_\text{lat}},\ \frac{1}{p_\text{ax}} \right\}.$$
$$\sum_{l=1}^{L} \sum_{k=1}^{K} P(\phi_{lk}(\vec{r})) = 1.$$
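A minimal sketch of the sinusoidal illumination model above. It also checks the normalization constraint, which holds automatically when the $K$ phase steps are equally spaced and $\eta_{lk} = 1/(LK)$; the pitch and position values are illustrative, not from the paper.

```python
import numpy as np

def pattern_intensity(r, q, psi, eta, m):
    """Relative illumination intensity P(phi) = eta * (1 + m*cos(2*pi*q.r - psi))."""
    phi = 2.0 * np.pi * np.dot(q, r) - psi
    return eta * (1.0 + m * np.cos(phi))

L, K = 1, 3                                # one pattern orientation, 3 phase steps
q = np.array([1/450.0, 0.0, 1/490.0])      # {cos(g)/p_lat, sin(g)/p_lat, 1/p_ax}, g = 0
r = np.array([120.0, 40.0, -60.0])         # emitter position (nm)
eta, m = 1.0 / (L * K), 0.85

# Sum over equally spaced phase steps: the K cosines cancel, so the sum is 1
# for any position r, satisfying the normalization constraint.
total = sum(pattern_intensity(r, q, 2.0*np.pi*k/K, eta, m) for k in range(K))
```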
$$E_{\text{pupil},pq}(W, \vec{\rho}, z_\text{d}, z_\text{stage}) = A(\vec{\rho})\, q_{pq}(\vec{\rho}) \exp\left[ i W(\vec{\rho}) + i \left( z_\text{d}\, k_{z,0}(\vec{\rho}) - z_\text{stage}\, k_{z,2}(\vec{\rho}) \right) \right],$$
$$k_{z,i}(\vec{\rho}) = \frac{2\pi}{\lambda} \sqrt{n_i^2 - \mathrm{NA}^2 \|\vec{\rho}\|^2},$$
$$H(\vec{r}_j - \vec{r}_0) = \frac{N_3}{w_n} \sum_{p=x,y} \sum_{q=x,y,z} \left| \int_{\xi_j} \int_{|\vec{\rho}|<1} E_{\text{pupil},pq}(W, \vec{\rho}, z_\text{d}, z_\text{stage}) \exp\left[ i \vec{k}(\vec{\rho}) \cdot (\vec{r}_j - \vec{r}_0) \right] d^2\rho\, d^2\xi_j \right|^2,$$
$$k_{z,0}(\vec{\rho}) = \frac{2\pi}{\lambda}\, i \sqrt{\mathrm{NA}^2 \|\vec{\rho}\|^2 - n_0^2},$$
$$E_{\text{pupil},pq}(W, \vec{\rho}, z_\text{d}, z_\text{stage}) = A(\vec{\rho})\, q_{pq}(\vec{\rho}) \exp\left[ -\frac{\delta z_\text{d}}{\lambda} + i \left( W(\vec{\rho}) - z_\text{stage}\, k_{z,2}(\vec{\rho}) \right) \right],$$
$$\log L = \sum_{l=1}^{L} \sum_{k=1}^{K} \sum_{j=1}^{J} \left[ n_{jlk} \log(\mu_{jlk}) - \mu_{jlk} \right],$$
$$\frac{\partial \log L}{\partial \theta_i} = \sum_{l=1}^{L} \sum_{k=1}^{K} \sum_{j=1}^{J} \frac{n_{jlk} - \mu_{jlk}}{\mu_{jlk}} \frac{\partial \mu_{jlk}}{\partial \theta_i}.$$
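The Poisson log-likelihood and its gradient translate directly to code. The following is a hedged sketch: the model `mu` and the derivative array are generic placeholders, not the vectorial PSF model of the paper.

```python
import numpy as np

def poisson_loglik(n, mu):
    """log L = sum_j [ n_j log(mu_j) - mu_j ], dropping the constant log(n_j!)."""
    return float(np.sum(n * np.log(mu) - mu))

def poisson_loglik_grad(n, mu, dmu):
    """d log L / d theta_i = sum_j (n_j - mu_j)/mu_j * dmu_j/dtheta_i,
    with dmu of shape (n_params, n_pixels)."""
    return dmu @ ((n - mu) / mu)

# At n = mu the gradient vanishes, as expected at the maximum of a perfect fit.
mu = np.array([4.0, 9.0, 2.5])
dmu = np.array([[1.0, 0.5, 0.2],
                [0.0, 1.0, 2.0]])
ll = poisson_loglik(mu, mu)
g = poisson_loglik_grad(mu, mu, dmu)
```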
$$\frac{\partial \mu_{jlk}}{\partial \vec{r}_0} = N P(\phi_{lk}(\vec{r}_0)) \frac{\partial H(\vec{r}_j - \vec{r}_0)}{\partial \vec{r}_0} + N H(\vec{r}_j - \vec{r}_0) \frac{\partial P(\phi_{lk}(\vec{r}_0))}{\partial \vec{r}_0},$$
$$\frac{\partial \mu_{jlk}}{\partial N} = P(\phi_{lk}(\vec{r}_0))\, H(\vec{r}_j - \vec{r}_0),$$
$$\frac{\partial \mu_{jlk}}{\partial b} = \frac{1}{LK}.$$
$$\frac{\partial P(\phi_{lk}(\vec{r}_0))}{\partial \vec{r}_0} = -2\pi \vec{q}_l\, \eta_{lk} m_{lk} \sin(\phi_{lk}(\vec{r}_0)),$$
$$\frac{\partial H(\vec{r}_j - \vec{r}_0)}{\partial \vec{r}_0} = \frac{N_3}{w_n} \sum_{p=x,y} \sum_{q=x,y,z} \frac{\partial}{\partial \vec{r}_0}\, U_{pq}^*(\vec{r}_j - \vec{r}_0)\, U_{pq}(\vec{r}_j - \vec{r}_0) = 2 \frac{N_3}{w_n} \sum_{p=x,y} \sum_{q=x,y,z} \mathrm{Re} \left\{ U_{pq}^*(\vec{r}_j - \vec{r}_0) \frac{\partial U_{pq}(\vec{r}_j - \vec{r}_0)}{\partial \vec{r}_0} \right\},$$
$$U_{pq}(\vec{r}_j - \vec{r}_0) = \int_{\xi_j} \int_{|\vec{\rho}|<1} E_{\text{pupil},pq}(W, \vec{\rho}, z_\text{d}, z_\text{stage}) \exp\left[ i \vec{k}(\vec{\rho}) \cdot (\vec{r}_j - \vec{r}_0) \right] d^2\rho\, d^2\xi_j,$$
$$\frac{\partial U_{pq}(\vec{r}_j - \vec{r}_0)}{\partial x_0} = -i \int_{\xi_j} \int_{|\vec{\rho}|<1} E_{\text{pupil},pq}(W, \vec{\rho}, z_\text{d}, z_\text{stage})\, k_x(\vec{\rho}) \exp\left[ i \vec{k}(\vec{\rho}) \cdot (\vec{r}_j - \vec{r}_0) \right] d^2\rho\, d^2\xi_j,$$
$$F_{rs} = \sum_{l=1}^{L} \sum_{k=1}^{K} \sum_{j=1}^{J} \frac{1}{\mu_{jlk}} \frac{\partial \mu_{jlk}}{\partial \theta_r} \frac{\partial \mu_{jlk}}{\partial \theta_s}.$$
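The Fisher matrix yields the CRLB as the diagonal of its inverse. Below is a sketch for a generic Poisson imaging model with numerical derivatives; the two-parameter toy model (`mu = N*h + b`) is a hypothetical stand-in for the full vectorial PSF model.

```python
import numpy as np

def fisher_matrix(mu_fn, theta, eps=1e-6):
    """F_rs = sum_j (1/mu_j) * dmu_j/dtheta_r * dmu_j/dtheta_s,
    with derivatives taken by central differences."""
    theta = np.asarray(theta, dtype=float)
    mu = mu_fn(theta)
    grads = []
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        grads.append((mu_fn(theta + d) - mu_fn(theta - d)) / (2.0 * eps))
    G = np.array(grads)           # shape (n_params, n_pixels)
    return (G / mu) @ G.T

# Toy model: mu_j = N * h_j + b (h is a fixed, normalized-ish PSF sample).
h = np.array([0.1, 0.6, 0.3])
mu_fn = lambda t: t[0] * h + t[1]
F = fisher_matrix(mu_fn, [2000.0, 10.0])
crlb = np.diag(np.linalg.inv(F))  # variance lower bounds for N and b
```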
$$\mu_{lk} = \sum_{j=1}^{J} \left[ N P(\phi_{lk}(\vec{r}_\text{ill,0}))\, H(\vec{r}_j - \vec{r}_\text{ill,0}) + \frac{b}{LK} \right],$$
$$\mu_{lk} = N P(\phi_{lk}(\vec{r}_\text{ill,0})) + \frac{bJ}{LK}.$$
$$\log L = \sum_{l=1}^{L} \sum_{k=1}^{K} \left[ n_{lk} \log(\mu_{lk}) - \mu_{lk} \right].$$
$$\mu_j = \sum_{l=1}^{L} \sum_{k=1}^{K} \left[ N P(\phi_{lk}(\vec{r}_\text{psf,0}))\, H(\vec{r}_j - \vec{r}_\text{psf,0}) + \frac{b}{LK} \right],$$
$$\mu_j = N H(\vec{r}_j - \vec{r}_\text{psf,0}) + b,$$
$$u = \frac{\lambda_0 f}{p_\text{DMD}} = 1.17\ \text{mm}$$
$$z_\text{d} = \underset{z_\text{d}}{\arg\max} \left( \sum_{p=x,y} \sum_{q=x,y,z} \left| \int_{|\vec{\rho}|<1} E_{\text{pupil},pq}(W_1, \vec{\rho}, z_\text{d}, z_\text{stage})\, d^2\rho \right|^2 - \sum_{p=x,y} \sum_{q=x,y,z} \left| \int_{|\vec{\rho}|<1} E_{\text{pupil},pq}(W_2, \vec{\rho}, z_\text{d}, z_\text{stage})\, d^2\rho \right|^2 \right).$$
$$\Omega_{lk} = \sum_{v \in V} \left| N_{vlk} - \frac{\eta_{lk} N_v}{LK} \left( 1 + m_{lk} \cos(\phi_{lk}(\vec{r}_{\text{psf},v})) \right) \right|^2.$$
$$\begin{pmatrix}
\sum_v \frac{N_v^2}{K^2} & \sum_v \frac{N_v^2}{K^2} \cos(2\pi \vec{q}_l \cdot \vec{r}_{\text{psf},v}) & \sum_v \frac{N_v^2}{K^2} \sin(2\pi \vec{q}_l \cdot \vec{r}_{\text{psf},v}) \\
\sum_v \frac{N_v^2}{K^2} \cos(2\pi \vec{q}_l \cdot \vec{r}_{\text{psf},v}) & \sum_v \frac{N_v^2}{K^2} \cos^2(2\pi \vec{q}_l \cdot \vec{r}_{\text{psf},v}) & \sum_v \frac{N_v^2}{K^2} \cos(2\pi \vec{q}_l \cdot \vec{r}_{\text{psf},v}) \sin(2\pi \vec{q}_l \cdot \vec{r}_{\text{psf},v}) \\
\sum_v \frac{N_v^2}{K^2} \sin(2\pi \vec{q}_l \cdot \vec{r}_{\text{psf},v}) & \sum_v \frac{N_v^2}{K^2} \cos(2\pi \vec{q}_l \cdot \vec{r}_{\text{psf},v}) \sin(2\pi \vec{q}_l \cdot \vec{r}_{\text{psf},v}) & \sum_v \frac{N_v^2}{K^2} \sin^2(2\pi \vec{q}_l \cdot \vec{r}_{\text{psf},v})
\end{pmatrix}
\begin{pmatrix} \eta_{lk} \\ \eta_{lk} m_{lk} \cos(\psi_{lk}) \\ \eta_{lk} m_{lk} \sin(\psi_{lk}) \end{pmatrix}
=
\begin{pmatrix} \sum_v \frac{N_v N_{vlk}}{K} \\ \sum_v \frac{N_v N_{vlk}}{K} \cos(2\pi \vec{q}_l \cdot \vec{r}_{\text{psf},v}) \\ \sum_v \frac{N_v N_{vlk}}{K} \sin(2\pi \vec{q}_l \cdot \vec{r}_{\text{psf},v}) \end{pmatrix}$$
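This 3×3 normal-equation system can be assembled and solved per pattern with a standard linear solve. The sketch below uses synthetic noiseless counts (all numerical values illustrative, with L = 1 orientation) and recovers the modulation depth and phase exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 3                                        # phase steps per pattern
q = np.array([1/450.0, 0.0, 1/490.0])        # pattern vector q_l (1/nm), illustrative
r = rng.uniform(0.0, 2000.0, size=(200, 3))  # localizations r_psf,v (nm)
N_v = rng.uniform(1000.0, 3000.0, size=200)  # total photon count per emitter

eta_t, m_t, psi_t = 1.0 / K, 0.85, 0.7       # ground-truth pattern parameters
phase = 2.0 * np.pi * r @ q
N_vlk = N_v * eta_t * (1.0 + m_t * np.cos(phase - psi_t))  # noiseless counts

# Assemble the symmetric normal-equation matrix and right-hand side.
c, s = np.cos(phase), np.sin(phase)
w = N_v**2 / K**2
A = np.array([[w.sum(),     (w*c).sum(),   (w*s).sum()],
              [(w*c).sum(), (w*c*c).sum(), (w*c*s).sum()],
              [(w*s).sum(), (w*c*s).sum(), (w*s*s).sum()]])
b = np.array([(N_v*N_vlk/K).sum(),
              (N_v*N_vlk/K*c).sum(),
              (N_v*N_vlk/K*s).sum()])

# Unknowns are (x0, x0*m*cos(psi), x0*m*sin(psi)); m and psi follow by ratio.
x0, x1, x2 = np.linalg.solve(A, b)
m_est = np.hypot(x1, x2) / x0
psi_est = np.arctan2(x2, x1)
```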
$$\Delta \vec{r}_v = \vec{r}_{\text{psf},v} - \vec{r}_{\text{ill},v},$$
$$\Delta \phi_l(\vec{r}_{\text{psf},v}, \vec{r}_{\text{ill},v}) = 2\pi \vec{q}_l \cdot \Delta \vec{r}_v + 2\pi \Delta \vec{q}_l \cdot \vec{r}_{\text{psf},v} - \Delta \psi_l,$$
$$R_l = \sum_{v=1}^{V} \left| \Delta \phi_l(\vec{r}_{\text{psf},v}, \vec{r}_{\text{ill},v}) \right|^2,$$
$$\sum_v 2\pi \left( \Delta \vec{q}_l \cdot \vec{r}_{\text{psf},v} \right) - \sum_v \Delta \psi_l = -\sum_v 2\pi \left( \vec{q}_l \cdot \Delta \vec{r}_v \right).$$
$$\epsilon_\text{mod} = \max_{\substack{0 \le l \le L \\ 0 \le k \le K}} \left( \frac{N_\text{exp}^{lk} - N_\text{est}^{lk}}{N_\text{exp}^{lk}} \right)$$
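The modulation-error criterion above can be used to filter out localizations whose measured per-frame photon counts deviate too much from the fitted illumination pattern. A minimal sketch (the example counts and any rejection threshold are illustrative):

```python
import numpy as np

def modulation_error(N_exp, N_est):
    """eps_mod = max over all (l, k) of (N_exp - N_est) / N_exp."""
    N_exp = np.asarray(N_exp, dtype=float)
    N_est = np.asarray(N_est, dtype=float)
    return float(np.max((N_exp - N_est) / N_exp))

# Example: expected vs. estimated counts over L*K = 3 pattern frames.
eps = modulation_error([1000.0, 1400.0, 600.0], [950.0, 1450.0, 590.0])
# A localization would be rejected if eps exceeds a chosen threshold.
```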