Hybrid multifocal structured illumination microscopy with enhanced lateral resolution and axial localization capability

Open Access

Abstract

Super-resolution (SR) fluorescence microscopy that breaks through the diffraction barrier has drawn great interest in biomedical research. However, obtaining a high-precision three-dimensional distribution of a specimen in a short time remains challenging for existing techniques. In this paper, we propose a super-resolution fluorescence microscope with axial localization capability by combining multifocal structured illumination microscopy with a hybrid detection PSF composed of a Gaussian PSF and a double-helix PSF. A modified reconstruction scheme is presented to accommodate the new hybrid PSF. This method can not only recover the lateral super-resolution image of the specimen but also retrieve the specimen’s depth map within a range of 600 nm with an axial localization precision of 20.8 nm. The performance of this approach is verified by imaging fluorescent beads and tubulin in 293-cells. The developed microscope is well suited for observing the precise 3D distribution of thin specimens.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Super-resolution (SR) techniques that overcome the diffraction limit in optical microscopy have attracted increasing interest in biomedical imaging [1–3]. Compared with other super-resolution techniques such as single-molecule localization microscopy (SMLM) [4,5] and stimulated emission depletion (STED) microscopy [6], structured illumination microscopy (SIM) [7] provides a more modest lateral resolution (about 100 nm), but in return offers higher imaging speed, lower phototoxicity, and fewer demands on the photon budget and special labels, which makes it an ideal tool for long-term observation of live specimens.

Owing to the “missing cone” [8], SR-SIM, like other optical microscopy techniques, suffers from an axial resolution that is degraded relative to the lateral one. Even though the axial resolution can be enhanced (∼300 nm) by generating three-dimensional structured illumination and filling in the “missing cone” of spatial frequencies [9], it is still inferior to the lateral resolution (∼100 nm). Another approach to improve the axial resolution combines SR-SIM with I5M [10], which produces near-isotropic three-dimensional resolution (∼100 nm). Nevertheless, it requires two opposing objective lenses, which greatly increases the difficulty of system calibration and sample holding, limiting its widespread application in biomedical imaging. In addition, SR-SIM has also been implemented with diffraction-limited excitation foci, including image scanning microscopy (ISM) [11,12], multifocal structured illumination microscopy (MSIM) [13], re-scan confocal microscopy (RCM) [14], and optical photon reassignment (OPRA) microscopy [15]. With the aid of pixel reassignment or photon reassignment, these methods can achieve an axial resolution of 360 nm while producing a lateral resolution of 120 nm [16]. To avoid axial scanning, ISM and MSIM variants based on PSF engineering (termed refocusing after scanning using helical phase engineering, RESCH) [17–20] were also developed by employing post-acquisition refocusing algorithms [17,21], enabling faster acquisition of super-resolution images in 3D with a more modest spatial resolution.

So far, to obtain the precise 3D distribution of a specimen, most SR-SIM techniques focus on improving the axial resolution of the system, from which optically sectioned images can be obtained via axial scanning or post-acquisition refocusing. These methods work well for specimens in the micron-thickness regime, but for specimens thinner than the depth-of-field (DOF, ∼500 nm), little useful 3D information can be obtained. In contrast to the above techniques based on optical sectioning, single-molecule localization microscopy can realize axial localization with high precision by employing PSF engineering [22,23]. For instance, Pavani [22] and Jia [23] respectively achieved 20 nm and 15 nm axial resolution in single-molecule localization microscopy by using a double-helix (DH) PSF or a cubic phase PSF. In our previous work [24], a single-shot 3D microscope was developed by extending the axial localization concept to 3D wide-field microscopy, which enabled us to obtain both the extended-DOF image and the depth map of the specimen from a single snapshot. This method can reach a camera-framerate-limited imaging speed, but the lateral resolution remains unchanged compared with wide-field microscopy.

In this paper, a novel super-resolution 3D fluorescence microscope is developed by introducing a hybrid PSF in the detection path of MSIM, termed hybrid multifocal structured illumination microscopy (HMSIM). The hybrid detection PSF is composed of a Gaussian PSF and a double-helix PSF, which are respectively utilized to reconstruct the lateral SR image and the depth information. To generate the hybrid detection PSF with minimal modification of the light path, a simple and economical detection scheme is developed. A modified reconstruction scheme is also presented to accommodate the new hybrid PSF, which can recover the lateral SR image and the depth map of the specimen from a single image sequence. Finally, the lateral resolution and the axial localization precision of the setup are measured to be 149 nm and 20.8 nm, respectively, by experimental observations of fluorescent beads and 293-cells.

2. Methods

2.1 Depth estimation with double-helix PSF

The double-helix PSF has drawn great attention in various microscopy techniques due to its high performance in depth estimation. The mainstream computer-generated holograms (CGHs) used to generate double-helix PSFs are derived from either the Gaussian-Laguerre (GL) modes [25] or spiral phases [26,27]. In Ref. [24], we demonstrated the flexibility of the spiral-phase-based DH-PSF in tuning the extended range and rotation rate, and found that the inter-lobe distance increases monotonically with the number of Fresnel zones. The simplified form of the CGH used to generate the double-helix PSF is described by the following piecewise function:

$${P_N}(\rho ,\varphi ) = \left\{ {\begin{array}{cc} {\exp [i(2n - 1)\varphi ],}&{R\sqrt {\frac{{n - 1}}{N}} < \rho \le R\sqrt {\frac{n}{N}} ,n = 1,2,\ldots ,N}\\ {0,}&{\rho > R} \end{array}} \right.$$
where PN(ρ,φ) denotes the phase modulation at the polar coordinates (ρ,φ), R denotes the maximal radius of the selected aperture, and N is the total number of Fresnel zones. The phase mask is composed of a radial sampling of the spiral phase into N equal-area Fresnel zones. It is worth mentioning that the number of Fresnel zones is an important parameter of the spiral-phase-based CGH, for it directly determines the extended range, rotation rate, inter-lobe distance, and main-lobe size of the double-helix PSF [24].
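
As a concrete illustration, the following Python sketch constructs the pupil function of Eq. (1) on a square grid; the grid size and the normalized aperture radius are illustrative assumptions, while N = 6 matches the value used in the experiments below.

import numpy as np

def dh_cgh(size=512, N=6, R=1.0):
    """Return the complex pupil function P_N(rho, phi) of Eq. (1) on a size x size grid."""
    x = np.linspace(-1.0, 1.0, size)
    xx, yy = np.meshgrid(x, x)
    rho = np.hypot(xx, yy)
    phi = np.arctan2(yy, xx)
    pupil = np.zeros((size, size), dtype=complex)
    for n in range(1, N + 1):
        # Equal-area annular Fresnel zone: R*sqrt((n-1)/N) < rho <= R*sqrt(n/N)
        zone = (rho > R * np.sqrt((n - 1) / N)) & (rho <= R * np.sqrt(n / N))
        pupil[zone] = np.exp(1j * (2 * n - 1) * phi[zone])
    return pupil  # zero outside rho > R by construction

phase_mask = np.angle(dh_cgh())  # phase pattern to be displayed on the SLM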

For every excitation focus, the detected image can be described as follows:

$$d(x,y) = \int_{ - \infty }^{ + \infty } {[o(x,y;z)\cdot {h_{\textrm{ex}}}(x,y;z)] \otimes {h_{\det }}(x,y;z)dz}, $$
where hex(x,y;z) and hdet(x,y;z) are respectively the excitation PSF and the detection PSF at depth z, and o(x,y;z) denotes the 3D distribution of the specimen. For MSIM, both hex(x,y;z) and hdet(x,y;z) can be approximated as Gaussian PSFs. Therefore, the effective PSFs within the depth-of-field remain the same, i.e., we are not able to distinguish depth discrepancies of the specimen from the images acquired in conventional MSIM.
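
A minimal numerical sketch of the image-formation model of Eq. (2), discretized over z, may help fix ideas; the function and array names are our own, and the object and PSF stacks are assumed to be sampled on a common lateral grid.

import numpy as np
from scipy.signal import fftconvolve

def acquire(obj_stack, h_ex_stack, h_det_stack):
    """Eq. (2): sum over depth of (object slice x excitation PSF) convolved with
    the z-dependent detection PSF. All inputs have shape (nz, ny, nx)."""
    d = np.zeros(obj_stack.shape[1:])
    for o_z, hex_z, hdet_z in zip(obj_stack, h_ex_stack, h_det_stack):
        d += fftconvolve(o_z * hex_z, hdet_z, mode="same")
    return d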

If the detection PSF of the system is modified into a double-helix PSF, specimens at different depths will be convolved with different detection PSFs, which can be expressed as follows [24]:

$${h_{\rm{det}}}(x,y;z) = {\mathop{\rm {rot}}\nolimits} [{h_0}(x,y + \Delta y/2) + {h_0}(x,y - \Delta y/2),{k_{\theta (z)}}z], $$
where h0(x,y) represents one of the main lobes of the PSF at the focal plane, kθ(z) describes the linear dependency between the axial position z and the rotation angle θ, Δy is the distance between the two main lobes, and rot(h,θ) rotates the PSF h by an angle θ around its center. This is an empirical description of the DH-PSF based on the assumptions that the main lobes of the double-helix PSF at the focal plane can be approximated as elliptical Gaussian PSFs, and that its rotation rate and inter-lobe distance remain invariant within the depth-of-field [24].

If we define ${\delta ^ \pm }(z)$ and $h_0^r(x,y;z)$ as below

$$\left\{ {\begin{array}{c} {{\delta^ \pm }(z) = \delta \left( { \pm \frac{{\Delta y}}{2}\cos ({{k_{\theta (z)}}z} ), \pm \frac{{\Delta y}}{2}\sin ({{k_{\theta (z)}}z} )} \right)}\\ {h_0^r(x,y;z) = \textrm{rot}({h_0}(x,y),{k_{\theta (z)}}z)} \end{array}} \right., $$
the double-helix PSF can be approximated by
$${h_{\det }}(x,y;z) = {h_\textrm{DH}}(x,y;z) = h_0^r(x,y;z) \otimes [{{\delta^ + }(z) + {\delta^ - }(z)} ]$$
Further, the acquired image in the camera can be written as
$$d(x,y) = \int_{ - \infty }^{ + \infty } {[o(x,y;z)\cdot {h_{\textrm{ex}}}(x,y;z)] \otimes h_0^r(x,y;z) \otimes [{{\delta^ + }(z) + {\delta^ - }(z)} ]dz}. $$
Therefore, the image obtained with the double-helix PSF is essentially the superposition of twin images. The distance between the twin images is fixed, while their orientation varies with the axial position of the specimen. In this way, the depth information of the specimen can be decoded by calculating the azimuth of the twin images.
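
The empirical model of Eqs. (3)–(5) can be sketched numerically as below; the lobe widths, inter-lobe distance and rotation rate are illustrative values rather than calibrated system parameters, and the rotation sign convention follows scipy rather than any particular setup.

import numpy as np
from scipy.ndimage import rotate

def dh_psf(z_nm, size=65, dy_px=14.0, k_theta_deg_per_nm=0.15, sigma=(2.0, 3.5)):
    """Empirical DH-PSF of Eqs. (3)-(5): two elliptical Gaussian lobes offset by
    +/- dy/2 along y, rotated about the center by k_theta * z."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    lobe = lambda y0: np.exp(-x**2 / (2 * sigma[0]**2) - (y - y0)**2 / (2 * sigma[1]**2))
    psf = lobe(+dy_px / 2) + lobe(-dy_px / 2)                 # twin lobes at the focal plane
    psf = rotate(psf, k_theta_deg_per_nm * z_nm, reshape=False, order=1)
    return psf / psf.sum()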

For conventional MSIM with a Gaussian detection PSF, the super-resolution image can be obtained via the multifocal-excited, pinholed, scaled and summed (MPSS) algorithm [13]. However, for a double-helix detection PSF, the MPSS algorithm is no longer applicable. To obtain the SR image, the simplest method is to restore the Gaussian image by deblurring the raw image of each excitation focus with the corresponding DH-PSF and then reconstruct the SR image with the MPSS algorithm. However, the computational burden of the reconstruction greatly increases owing to the deconvolution of each excitation focus. Worse still, the extra deconvolution step introduces abundant artifacts in the reconstructed image, especially when the aberrations of the system are not properly corrected. In addition, the double-helix PSF itself leads to a resolution loss compared with the Gaussian PSF. Thus, to obtain the SR image without sacrificing image quality, one would have to acquire an additional image sequence with the detection PSF switched to a Gaussian PSF and recover the SR image from that second sequence. In this case, however, the acquisition speed is decreased and the raw dataset is doubled. In the following, we propose a simple and flexible solution to this problem.

2.2 Generation of the hybrid PSF

Besides the above method, it is also possible to simultaneously detect images with the Gaussian PSF and the double-helix PSF by splitting the detection light into two paths with a polarizing beam splitter. One path is modulated by the SLM and imaged onto a camera, and the other path is directed onto another camera. The use of two cameras dramatically increases the cost of constructing the system. To make it more economical, the two detection paths can also be recombined with another beam splitter, which requires only a single camera for detection. However, this scheme needs elaborate calibration of the detection path and precise registration of the two image sequences. To circumvent these problems, we design another detection scheme that exploits the polarization selectivity of the liquid crystal spatial light modulator (LC-SLM), as shown in Fig. 1. The LC-SLM modulates only the horizontally polarized component of the incident light while leaving the vertically polarized component unchanged. Thus, we achieve a natural separation and superposition of the Gaussian-PSF and DH-PSF components: the horizontally polarized component is modulated by the SLM to produce the DH-PSF image, while the unmodulated vertically polarized component generates the Gaussian-PSF image. The linear polarizer behind the objective can be rotated to arbitrarily adjust the energy ratio between the Gaussian-PSF image and the DH-PSF image. Compared with the other detection schemes, this approach requires neither additional devices to generate an extra detection path nor registration of two image sequences, providing a simple but flexible configuration.

Fig. 1. Optical arrangement to generate the Gaussian PSF and the double-helix PSF simultaneously.

Overall, the detection PSF of the system is composed of a Gaussian PSF and a double-helix PSF. The former is employed to recover the super-resolution image, while the latter is adopted for obtaining the depth information. The proportion of the two components can be easily adjusted by rotating the polarizer.

2.3 Reconstruction algorithm

To recover the super-resolution (SR) image and the depth map of the specimen, a modified reconstruction workflow is developed based on the multifocal-excited, pinholed, scaled and summed (MPSS) algorithm of MSIM [13], as shown in Fig. 2. First, routine preprocessing (background removal, denoising, etc.) is carried out on the raw image sequence to reduce the impact of background noise. Then, the illumination vectors, including the basis vectors, shift vectors and offset vector, are obtained with the automatic lattice detection method of Ref. [13]. With the calibrated illumination vectors, the expected location of each illumination spot in each raw image can be easily reconstructed. Afterwards, the subimage corresponding to each excitation spot is extracted with interpolation.
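
A compact sketch of this step is given below; the variable names (offset, basis and shift vectors) are ours, and the subimage is cut out with nearest-pixel cropping instead of the interpolation used in the actual implementation.

import numpy as np

def spot_positions(offset, basis1, basis2, shift, frame_idx, n_rows, n_cols):
    """Expected spot locations (pixels) in one raw frame, built from the lattice vectors."""
    i, j = np.mgrid[0:n_rows, 0:n_cols]
    pos = (np.asarray(offset) + frame_idx * np.asarray(shift)
           + i[..., None] * np.asarray(basis1)
           + j[..., None] * np.asarray(basis2))
    return pos.reshape(-1, 2)

def extract_subimage(raw, center, half=16):
    """Crop a (2*half+1)^2 subimage around one expected excitation spot."""
    cy, cx = np.round(center).astype(int)
    return raw[cy - half:cy + half + 1, cx - half:cx + half + 1]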

Fig. 2. Flowchart of the image recovery process.

Each subimage is a superposition of a Gaussian-PSF component and a DH-PSF component, as shown in Fig. 3. On the one hand, the rotation angle of the DH-PSF component is obtained with the angle estimator discussed below, and the depth of the specimen is calculated from the premeasured look-up table relating rotation angle to depth. On the other hand, to remove the background and the DH-PSF component, the subimage is multiplied by a Gaussian synthetic pinhole (see Fig. 3). The pinholed subimage is then shrunk to half size around its center and added onto the processed new image. The standard deviation σ of the Gaussian synthetic pinhole is set to 2 pixels (130 nm), so that the DH-PSF component is completely removed from the image. Finally, the depth map and the SR image of the entire field-of-view are obtained by summing up all the estimated depths and processed subimages. Owing to the independence of the processing of each subimage, the reconstruction algorithm allows parallel processing, by which the whole reconstruction can be sped up 5–7 times on an 8-core processor such as an Intel Core i7-9700K.
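
The per-spot postprocessing can be summarized by the following sketch; the boundary handling and accumulation indexing are simplified assumptions, not the exact implementation.

import numpy as np
from scipy.ndimage import zoom

def process_subimage(sub, sr_image, center_px, sigma_px=2.0):
    """Pinhole the subimage, shrink it by 2x around its center, and accumulate it."""
    size = sub.shape[0]
    y, x = np.mgrid[0:size, 0:size] - size // 2
    pinhole = np.exp(-(x**2 + y**2) / (2 * sigma_px**2))   # suppresses the off-center DH lobes
    shrunk = zoom(sub * pinhole, 0.5, order=1)              # scale by 1/2 about the center
    h = shrunk.shape[0]
    cy, cx = np.round(center_px).astype(int)
    sr_image[cy - h // 2:cy - h // 2 + h, cx - h // 2:cx - h // 2 + h] += shrunk
    return sr_image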

Fig. 3. Postprocessing of each rodlike subimage.

2.4 Design of the angle estimator

To estimate the rotation angle of the DH-PSF component in each subimage, an angle estimator is required in the reconstruction workflow. In single-molecule localization microscopy based on the DH-PSF, several angle estimators, including the centroid estimator [28], Gaussian fitting estimator [29] and maximum likelihood estimator [30], have been developed to recover the depth of the specimen. However, they are specially designed for point-like emitters and do not account for the Gaussian-PSF component in the subimage, which makes them inapplicable here.

In this paper, we develop a new angle estimator for the rodlike subimages, which takes the Gaussian-PSF component into account and does not require that the image be formed by a point emitter. As mentioned above, each subimage in this work is composed of a Gaussian-PSF image and a DH-PSF image, which together form a rodlike image as shown in Fig. 3. The orientation of the short rod in the subimage reflects the depth of the specimen. Therefore, angle detection of the DH-PSF component is equivalent to estimating the angle of a line segment. Assuming that the line segment contains m pixels, the distance from the ith pixel to a line y = ax + b can be described as

$${D_i} = \frac{{|{{y_i} - ({a{x_i} + b} )} |}}{{\sqrt {1 + {a^2}} }}$$
The quadratic sum of the distances from the m feature points to the line is
$$S({a,b} )= \sum\limits_{i = 1}^m {\frac{{{{[{{y_i} - ({a{x_i} + b} )} ]}^2}}}{{1 + {a^2}}}} = \frac{1}{{1 + {a^2}}}\sum\limits_{i = 1}^m {{{[{{y_i} - ({a{x_i} + b} )} ]}^2}}. $$
If (ak,bk) satisfies $S({{a_k},{b_k}} )= \mathop {\textrm{min}}\limits_{a,b} S({a,b} )$, the line described by (ak,bk) can be considered the minimum-distance estimate of the line segment. The azimuth angle of this line, that is, the rotation angle of the DH-PSF component, is then calculated as $\theta = \textrm{arctan}({{a_k}} )$, and the depth of the specimen is obtained from the premeasured look-up table. Because the above equations involve only simple vector operations, this estimator provides a high computing speed.
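
Minimizing Eq. (8) over lines through the centroid of the rod pixels has a closed-form solution, θ = ½·atan2(2Sxy, Sxx − Syy), with Sxx, Syy, Sxy the second central moments of those pixels. The sketch below implements this closed form; the threshold value and the intensity weighting are our assumptions (the derivation above uses unweighted feature pixels), and the look-up table call is hypothetical.

import numpy as np

def rotation_angle(sub, threshold=0.2):
    """Azimuth (rad) of the rodlike subimage from the closed-form minimizer of Eq. (8)."""
    mask = sub > threshold * sub.max()           # keep the bright rod pixels
    y, x = np.nonzero(mask)
    w = sub[mask]
    xc, yc = np.average(x, weights=w), np.average(y, weights=w)
    dx, dy = x - xc, y - yc
    Sxx, Syy, Sxy = np.sum(w * dx**2), np.sum(w * dy**2), np.sum(w * dx * dy)
    return 0.5 * np.arctan2(2 * Sxy, Sxx - Syy)  # theta = arctan(a_k)

# depth = lookup_table(rotation_angle(subimage))  # hypothetical premeasured angle-to-depth LUT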

3. Results and discussions

3.1 Simulation results

To verify the validity of the developed algorithm, we simulate the image formation and reconstruction process for a synthetic fluorescent object composed of eight regions extracted from a standard resolution test chart. Different regions are located at different depths, with an axial spacing of 60 nm between adjacent blocks, as shown in Fig. 4(a). Three imaging modalities are simulated: wide-field illumination with a Gaussian detection PSF, multifocal excitation with a Gaussian detection PSF (MSIM), and multifocal excitation with the hybrid detection PSF (HMSIM). The raw images are generated with Eq. (2) by inserting the corresponding excitation and detection PSFs. Figures 4(b) and 4(c) are respectively the wide-field image and the super-resolution image recovered with the MPSS algorithm. Figures 4(d) and 4(e) are the super-resolution image and the depth map recovered with the developed algorithm. As shown in Figs. 4(c) and 4(d), the recovered SR image of HMSIM is almost identical to the reconstruction result of MSIM. In addition, the depth of each block in the depth map is consistent with the preset depths. In other words, our approach not only obtains the super-resolution image of the specimen but also retains its depth information.

Fig. 4. Simulation results of image formation and restoration. (a) Simulated fluorescent object. (b) Wide-field image. (c) Super-resolution image of MSIM. (d) and (e) The super-resolution image and the depth map, respectively, recovered with the developed HMSIM method.

3.2 Experimental results

To demonstrate the performance of the developed method, we constructed a DMD-based MSIM system with a hybrid detection PSF, whose light path is shown in Fig. 5. The illumination beam from a solid-state laser (λ = 491 nm, Calypso 491, Cobolt AB, Sweden) is expanded by a factor of 10 and directed into a total internal reflection prism (TIR prism, US Patent 6185047B1). The collimated beam is then reflected onto the DMD chip (1024×768 pixels, V-7001, ViALUX GmbH, Germany), with which the illumination patterns are generated. The illumination pattern is demagnified by a factor of 1/90 and projected onto the specimen at the focal plane by a projection system consisting of a collimating lens (f = 180 mm) and an objective lens (100×/NA 1.49, Apo TIRF, Nikon, Japan). The fluorescence emitted from the specimen is collected by the same objective and relayed onto the panel of the LC-SLM (1920×1080 pixels, Pluto II, HoloEye Photonics AG, Germany) through a 1:1 4f system. A specially designed triangular reflector [31] is employed to reflect the collimated input beam onto the SLM and then guide the retro-reflected, modulated beam out of the SLM. The modulated fluorescence is filtered by a long-pass filter (λc = 505 nm) and collected by a zoom lens, and the specimen is eventually imaged onto a scientific CMOS camera (Orca Flash 4.0, 2048×2048 pixels, 16 bits, Hamamatsu, Japan). The multifocal excitation spots and the double-helix PSF are respectively achieved by loading specific patterns on the DMD and the SLM.

Fig. 5. Schematic of the hybrid multifocal structured illumination microscope (HMSIM). The structured illumination is generated by the DMD-based projection system, while the hybrid PSF in the detection path is produced with the SLM. The proportion of the double-helix PSF and the Gaussian PSF can be easily controlled by adjusting the orientation of the polarizer. CGH: computer-generated hologram; DM: dichroic mirror.

As mentioned above, the detection PSF of the system is composed of a Gaussian PSF and a double-helix PSF, and the proportion of the two components is adjusted by rotating the polarizer. In practice, the choice of the polarizer angle involves a balance between the final lateral resolution and the axial localization precision of the system. Neither component should be too strong, so as to avoid severe crosstalk between them; the maximal intensities of the Gaussian PSF and the DH-PSF should therefore be comparable, with the Gaussian PSF slightly stronger to maintain the lateral resolution. In our experiment, the rotation angle of the polarizer is set to 30 degrees, resulting in an energy ratio EGauss/EDH of about 1:3. With this setting, the maximal intensity of the Gaussian component is measured to be slightly higher than that of the double-helix component.
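
A quick check with Malus's law reproduces the quoted ratio, under the assumption that the polarizer axis lies 30 degrees from the SLM-modulated (horizontal) polarization direction:

import numpy as np

theta = np.deg2rad(30)
E_dh = np.cos(theta)**2      # horizontally polarized, SLM-modulated (DH) path
E_gauss = np.sin(theta)**2   # vertically polarized, unmodulated (Gaussian) path
print(E_gauss / E_dh)        # ~0.33, i.e. E_Gauss : E_DH ≈ 1 : 3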

For multifocal illumination, the illumination pattern is translated in two dimensions until the entire field-of-view is uniformly illuminated. To minimize image overlap among different excitation spots, an equilateral-triangular arrangement of the spots is adopted, as shown in Fig. 6(a). The side length of the equilateral triangle determines the spacing of the spots and the step size of the translation. For example, the equilateral triangle with a side length of 16 pixels has a base of 16 pixels and a height of 13 pixels, which respectively correspond to the number of horizontal and vertical shift steps, resulting in a total of 16×13 = 208 shifts. The translation path is shown in Fig. 6(b), in which the horizontal and vertical directions serve as the slow and fast shift axes, respectively. During acquisition, the computer-generated hologram (CGH) loaded on the SLM stays unchanged, while the DMD and the sCMOS camera continuously refresh the illumination pattern and acquire images.
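
The scan sequence can be sketched as follows, assuming one-DMD-pixel steps along each axis; the pattern-generation details of the real system are not reproduced here.

import numpy as np

def shift_sequence(n_slow=16, n_fast=13):
    """Yield the 16 x 13 = 208 (dx, dy) pixel shifts: horizontal slow, vertical fast."""
    for dx in range(n_slow):
        for dy in range(n_fast):
            yield dx, dy

def shifted_pattern(base_pattern, dx, dy):
    """Translate the binary multifocal DMD pattern by (dx, dy) pixels."""
    return np.roll(np.roll(base_pattern, dy, axis=0), dx, axis=1)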

Fig. 6. Illumination pattern and its shift path. (a) Part of the first illumination pattern. (b) Shift path of the illumination patterns. Blue grids represent the on-state pixels of the DMD, while white grids mark the off-state pixels.

Before testing biological specimens, we first calibrate the lateral resolution of the system by observing fluorescent beads with a diameter of 40 nm (505/515, Thermo Fisher Scientific, USA). To minimize the spherical aberration caused by refractive index mismatch, the beads are first dried on the coverslip and then submerged in immersion oil. Figure 7 shows the measurement results for wide-field imaging, MSIM and HMSIM. To quantify the spatial resolution, 30 isolated fluorescent beads are selected and their full widths at half maximum (FWHM) are measured, as shown in Fig. 7(g). The averaged FWHM values for wide-field, MSIM and HMSIM are 232 ± 12 nm, 143 ± 5 nm and 149 ± 6 nm, respectively. The lateral resolutions achieved by MSIM and HMSIM are thus nearly the same, and both show an improvement of ∼60% over the wide-field value. In addition, the two adjacent beads in Fig. 7(d) are too close to be resolved, whereas in the results of MSIM and HMSIM (Figs. 7(e) and 7(f)) they are distinctly separated, as shown more clearly by the intensity profiles in Fig. 7(h).

Fig. 7. Imaging results of fluorescent beads in different modalities. (a) Wide-field image. (b) Recovered super-resolution (SR) image with MSIM. (c) Recovered SR image with HMSIM. (d)-(f) Magnified views of the dash-boxed regions in (a)-(c), respectively. (g) and (h) are respectively the intensity profiles along the two solid lines in (d)-(f).

Next, we experimentally tested a specimen of 293-cells. The tubulin in the 293-cells is labeled with DyLight488, which emits green fluorescence when excited by the 491 nm laser. The fluorescence is collected by the detection path and imaged onto the sCMOS camera. The wide-field image is shown in Fig. 8(a), in which only the fuzzy outline of the tubulin can be seen. Figures 8(c) and 8(e) are respectively the super-resolution images reconstructed by MSIM and HMSIM. Compared with the wide-field image, the contrast and resolution of the super-resolution images are greatly improved, and details of the tubulin that cannot be seen clearly in the wide-field image show up. The two super-resolution images are almost identical, which implies that the developed method has a resolution similar to that of MSIM. Figure 8(f) shows the depth map of the tubulin, from which the height difference of the specimen at different lateral positions can be distinguished. Such depth information is difficult to acquire with routine SR-SIM techniques.

Fig. 8. Imaging results of tubulin in 293-cells. (a) Wide-field image. (b) Recovered super-resolution (SR) image of MSIM without deconvolution. (c) Recovered SR image of MSIM with deconvolution. (d) Recovered SR image of HMSIM without deconvolution. (e) Recovered SR image of HMSIM with deconvolution. (f) Recovered depth map with HMSIM. (g)-(l) Magnified views of the dash-boxed regions in (a)-(f), respectively. (m) Measured calibration curve of the rotation angle as a function of z-position for the DH-PSF with N=6. (n) Intensity profiles along the yellow solid lines in (g)-(k).

Careful determination of the system parameters is important for obtaining high-quality images. As discussed in Section 2.3, the Gaussian-PSF component is extracted from the subimage by multiplying with a synthetic pinhole; obtaining a high-quality super-resolution image therefore requires a thorough separation of the Gaussian component and the double-helix component. If the main-lobe distance of the double-helix PSF is too small, the image components formed by the DH-PSF and the Gaussian PSF will overlap, and the residual DH-PSF component left in the subimage after pinholing will significantly degrade the final image. On the other hand, an overlarge main-lobe distance not only reduces the energy efficiency and the localization precision, but also requires a larger lattice period to avoid crosstalk between adjacent excitation foci, thus sacrificing acquisition speed. Therefore, we select the double-helix PSF with N = 6 to generate the hybrid PSF, because its inter-lobe distance (∼1 µm) happens to be slightly larger than the full widths of the images formed by the Gaussian PSF and the double-helix PSF. In addition, the lattice period of the illumination pattern is set to 16 pixels, which generates illumination foci spaced by 16×13.7 µm/90 ≈ 2.44 µm (the DMD pixel size is 13.7 µm). This spacing is more than twice the above inter-lobe distance, avoiding crosstalk between adjacent excitation foci. Finally, both the MSIM image and the HMSIM image are produced from 16×13 raw images (each acquired in 10 ms), resulting in a total acquisition time of approximately 2 seconds.
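
For reference, the two quoted numbers follow directly from the listed parameters:

lattice_period_px = 16            # illumination lattice period on the DMD
dmd_pixel_um = 13.7               # DMD pixel pitch
demag = 90                        # demagnification of the projection system
print(lattice_period_px * dmd_pixel_um / demag)   # ≈ 2.44 µm focus spacing at the sample

n_frames, exposure_s = 16 * 13, 0.010
print(n_frames * exposure_s)                      # ≈ 2 s total acquisition time (208 × 10 ms)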

In single-molecule localization microscopy, the axial localization precision depends mostly on the signal-to-noise ratio (SNR), the angle estimator and the employed detection PSF [22,23]. For the microscope developed here, the precision analysis is more complicated, because the imaging model in Eq. (2) involves not only the detection PSF but also the specimen distribution and the excitation PSF. To simplify the effect of the specimen, we use fluorescent beads (40 nm in diameter) to roughly calibrate the axial localization precision. To this end, we repeated the localization of the same fluorescent bead 100 times with the above system. Before acquiring the images, the laser power was adjusted so that the images were acquired at an SNR level similar to that of the tubulin. As shown in Fig. 9, the standard deviation of the estimated axial positions of the bead is 20.8 nm; in other words, the axial localization precision of this imaging method for point emitters reaches 20.8 nm. In practice, disturbed by the specimen structure, the actual localization precision should be slightly lower than this calibrated value.

Fig. 9. Statistics of the axial positions obtained from repeated localizations of the same fluorescent bead.

It is worth noting that the effective range of depth estimation is set to 600 nm, as annotated in Fig. 4 and Fig. 8. This can be understood from the image-formation process described by Eq. (2): the image generated by a single focus is codetermined by the specimen distribution, the detection PSF and the excitation PSF. The brightness of the double-helix detection PSF remains almost invariant over a range of nearly 2 µm, whereas the Gaussian excitation PSF only remains bright over a range of about 600 nm. Thus, for specimens away from this depth range, the generated image fades rapidly, giving an extremely low SNR for depth estimation. Therefore, to guarantee the fidelity of the localization data, we restrict the depth range of our method to the axial extent of the excitation PSF, which is nearly 600 nm.

In addition, the depth estimation in the developed algorithm is based on the assumption that the sample is sparse in the z direction. More precisely, the specimen should be distributed within a single layer along z and be as thin as possible. For thick or multi-layer samples, the estimated depth is a weighted-average depth of the specimen at the corresponding lateral position, i.e., the measured depth information cannot depict the real 3D distribution of the specimen. Therefore, we only recommend the proposed method for the observation of thin and sparsely labeled specimens such as tubulin, mitochondria and the endoplasmic reticulum.

4. Conclusion

In summary, we have developed a super-resolution fluorescence microscope with axial localization capability by introducing a hybrid PSF in the detection path of multifocal structured illumination microscopy. The hybrid detection PSF is composed of a Gaussian PSF and a double-helix PSF and is generated by exploiting the polarization selectivity of the LC-SLM, which greatly simplifies the optical configuration. A modified reconstruction algorithm is designed to accommodate the hybrid detection PSF. Through theoretical simulation and experimental verification, we demonstrate that the system can obtain not only a super-resolution image with a lateral resolution of 149 nm, but also a depth map with an axial localization precision of 20.8 nm. This imaging method is anticipated to provide a tool for observing the precise 3D spatial distribution of thin specimens.

Funding

National Key Research and Development Program of China (2017YFC0110100); National Natural Science Foundation of China (61522511, 81427802, 91750106).

Disclosures

The authors declare no conflicts of interest related to this article.

References

1. S. J. Sahl, S. W. Hell, and S. Jakobs, “Fluorescence nanoscopy in cell biology,” Nat. Rev. Mol. Cell Biol. 18(11), 685–701 (2017). [CrossRef]  

2. Y. M. Sigal, R. Zhou, and X. Zhuang, “Visualizing and discovering cellular structures with super-resolution microscopy,” Science 361(6405), 880–887 (2018). [CrossRef]  

3. R. Gao, S. M. Asano, S. Upadhyayula, I. Pisarev, D. E. Milkie, T.-L. Liu, V. Singh, A. Graves, G. H. Huynh, and Y. Zhao, “Cortical column and whole-brain imaging with molecular contrast and nanoscale resolution,” Science 363(6424), eaau8302 (2019). [CrossRef]  

4. B. Huang, W. Wang, M. Bates, and X. Zhuang, “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science 319(5864), 810–813 (2008). [CrossRef]  

5. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]  

6. T. A. Klar and S. W. Hell, “Subdiffraction resolution in far-field fluorescence microscopy,” Opt. Lett. 24(14), 954–956 (1999). [CrossRef]  

7. M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]  

8. F. Macias-Garza, A. C. Bovik, K. R. Diller, S. J. Aggarwal, and J. Aggarwal, “The missing cone problem and low-pass distortion in optical serial sectioning microscopy,” in ICASSP-88., International Conference on Acoustics, Speech, and Signal Processing, (IEEE, 1988), 890–893.

9. M. G. Gustafsson, L. Shao, P. M. Carlton, C. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94(12), 4957–4970 (2008). [CrossRef]  

10. L. Shao, B. Isaac, S. Uzawa, D. A. Agard, J. W. Sedat, and M. G. Gustafsson, “I5S: wide-field light microscopy with 100-nm-scale resolution in three dimensions,” Biophys. J. 94(12), 4971–4983 (2008). [CrossRef]  

11. C. B. Müller and J. Enderlein, “Image scanning microscopy,” Phys. Rev. Lett. 104(19), 198101 (2010). [CrossRef]  

12. C. J. Sheppard, S. B. Mehta, and R. Heintzmann, “Superresolution by image scanning microscopy using pixel reassignment,” Opt. Lett. 38(15), 2889 (2013). [CrossRef]  

13. A. G. York, S. H. Parekh, D. Dalle Nogare, R. S. Fischer, K. Temprine, M. Mione, A. B. Chitnis, C. A. Combs, and H. Shroff, “Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy,” Nat. Methods 9(7), 749–754 (2012). [CrossRef]  

14. G. M. De Luca, R. M. Breedijk, R. A. Brandt, C. H. Zeelenberg, B. E. de Jong, W. Timmermans, L. N. Azar, R. A. Hoebe, S. Stallinga, and E. M. Manders, “Re-scan confocal microscopy: scanning twice for better resolution,” Biomed. Opt. Express 4(11), 2644–2656 (2013). [CrossRef]  

15. S. Roth, C. J. Sheppard, K. Wicker, and R. Heintzmann, “Optical photon reassignment microscopy (OPRA),” Opt. Nanoscopy 2(1), 5 (2013). [CrossRef]  

16. Y. Wu and H. Shroff, “Faster, sharper, and deeper: structured illumination microscopy for biological imaging,” Nat. Methods 15(12), 1011–1019 (2018). [CrossRef]  

17. A. Jesacher, M. Ritsch-Marte, and R. Piestun, “Three-dimensional information from two-dimensional scans: a scanning microscope with postacquisition refocusing capability,” Optica 2(3), 210 (2015). [CrossRef]  

18. C. Roider, R. Piestun, and A. Jesacher, “3D image scanning microscopy with engineered excitation and detection,” Optica 4(11), 1373–1381 (2017). [CrossRef]  

19. S. Li, J. Wu, H. Li, D. Lin, B. Yu, and J. Qu, “Rapid 3D image scanning microscopy with multi-spot excitation and double-helix point spread function detection,” Opt. Express 26(18), 23585–23593 (2018). [CrossRef]  

20. O. Tzang, D. Feldkhun, A. Agrawal, A. Jesacher, and R. Piestun, “Two-photon PSF-engineered image scanning microscopy,” Opt. Lett. 44(4), 895–898 (2019). [CrossRef]  

21. C. Roider, R. Heintzmann, R. Piestun, and A. Jesacher, “Deconvolution approach for 3D scanning microscopy with helical phase engineering,” Opt. Express 24(14), 15456–15467 (2016). [CrossRef]  

22. S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. 106(9), 2995–2999 (2009). [CrossRef]  

23. S. Jia, J. C. Vaughan, and X. Zhuang, “Isotropic three-dimensional super-resolution imaging with a self-bending point spread function,” Nat. Photonics 8(4), 302–306 (2014). [CrossRef]  

24. Z. Wang, Y. Cai, Y. Liang, X. Zhou, S. Yan, D. Dan, P. R. Bianco, M. Lei, and B. Yao, “Single shot, three-dimensional fluorescence microscopy with a spatially rotating point spread function,” Biomed. Opt. Express 8(12), 5493 (2017). [CrossRef]  

25. S. R. P. Pavani and R. Piestun, “High-efficiency rotating point spread functions,” Opt. Express 16(5), 3484–3489 (2008). [CrossRef]  

26. S. Prasad, “Rotating point spread function via pupil-phase engineering,” Opt. Lett. 38(4), 585–587 (2013). [CrossRef]  

27. M. Baránek and Z. Bouchal, “Optimizing the rotating point spread function by SLM aided spiral phase modulation,” in XIX Polish-Slovak-Czech Optical Conference on Wave and Quantum Aspects of Contemporary Optics, (International Society for Optics and Photonics, 2014), 94410N.

28. S. R. P. Pavani, J. G. DeLuca, and R. Piestun, “Polarization sensitive, three-dimensional, single-molecule imaging of cells with a double-helix system,” Opt. Express 17(22), 19644–19655 (2009). [CrossRef]  

29. M. A. Thompson, M. D. Lew, M. Badieirostami, and W. Moerner, “Localizing and tracking single nanoscale emitters in three dimensions with high spatiotemporal resolution using a double-helix point spread function,” Nano Lett. 10(1), 211–218 (2010). [CrossRef]  

30. S. Quirin, S. R. P. Pavani, and R. Piestun, “Optimal 3D single-molecule localization for superresolution microscopy with aberrations and engineered point spread functions,” Proc. Natl. Acad. Sci. 109(3), 675–679 (2012). [CrossRef]  

31. Y. Liang, M. Lei, S. Yan, M. Li, Y. Cai, Z. Wang, X. Yu, and B. Yao, “Rotating of low-refractive-index microparticles with a quasi-perfect optical vortex,” Appl. Opt. 57(1), 79–84 (2018). [CrossRef]  
