Optica Publishing Group

Divide and conquer: real-time maximum likelihood fitting of multiple emitters for super-resolution localization microscopy

Open Access

Abstract

Multi-emitter localization has great potential for maximizing the imaging speed of super-resolution localization microscopy. However, the slow image analysis speed of reported multi-emitter localization algorithms limits their use mostly to off-line processing of images with small sizes. Here we adopt the well-known divide and conquer strategy in computer science and present a fitting-based method called QC-STORM for fast multi-emitter localization. Using simulated and experimental data, we verify that QC-STORM is capable of providing real-time full image processing on raw images with a 100 µm × 100 µm field of view and 10 ms exposure time, with spatial resolution comparable to that of the popular fitting-based ThunderSTORM and the up-to-date non-iterative WindSTORM. This study advances the development and practical use of super-resolution localization microscopy in high-throughput or high-content imaging of cell-to-cell differences or in discovering rare events in a large cell population.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Super-resolution localization microscopy (SRLM) provides a remarkable improvement in the spatial resolution of far-field fluorescence microscopy with a relatively simple setup and user-friendly experimental procedures, and has thus become an important tool in various biomedical research fields [1]. The resolution improvement of SRLM is usually achieved through sparse excitation and precise localization of a large number of fluorescent emitters. Therefore, thousands or even tens of thousands of raw images are necessary for obtaining a full list of molecule localizations, which is further used to reconstruct a final super-resolution image with a sufficient sampling frequency [2–4].

It is widely accepted that the imaging speed and/or imaging throughput of SRLM can be significantly improved by allowing multiple emitters, rather than only one emitter, to be localized within a region-of-interest (ROI). However, reported multi-emitter localization algorithms normally run at a very slow speed, typically requiring several hours for a super-resolution image with a small field of view (FOV) [5]. Recently, Ma et al. presented a non-iterative multi-emitter localization algorithm called WindSTORM [6], which runs at a very fast speed. In experiments, WindSTORM is capable of localizing up to 0.7 million bright emitters per second without compromising localization accuracy, thus enabling real-time image processing for an FOV of 40 µm × 40 µm with a spatial resolution of 53 nm (measured by the width of microtubules in a reconstructed image). However, WindSTORM is currently not able to process 3D raw images.

Instead of producing a super-resolution image from a full list of molecule localizations, researchers are now exploring the use of deep learning to provide a super-resolution image directly from raw images or from a much smaller number of molecule localizations [7]. For example, Deep-STORM uses a deep convolutional neural network to create a super-resolved image from raw images directly, and thus achieves up to three orders of magnitude faster speed than conventional localization-based methods [8]. However, the size of the neural network and the available memory of commercial graphics processing units (GPUs) limit the maximum size of raw images that can be processed by Deep-STORM. Another deep learning based method, called ANNA-PALM, uses machine learning to generate a super-resolution image from sparse localizations or even diffraction-limited input images, and thus massively accelerates the imaging speed [9]. However, ANNA-PALM requires a long post-processing time. It is also reported that the performance of ANNA-PALM is affected by the amount and variety of training data, and that an effective algorithm is needed to identify and reduce image artifacts [9]. For applications where the training data are not easy to obtain or are even unavailable (for example, rare clinical samples with unknown structures), the traditional localization-based approaches would be more suitable and/or more reliable due to the lower risk of image artifacts.

Taking the above discussions into consideration, we conclude that new multi-emitter localization algorithms that enhance the imaging speed without compromising the spatial resolution or the user-friendly features of SRLM are still in high demand. The image analysis speed should preferably be real-time, so that the acquisition parameters can be optimized during SRLM experiments [10–12]. Note that high imaging throughput and real-time acquisition optimization are both required for applying SRLM to high-throughput or high-content imaging of large cell populations [5,10,13,14], which are currently used to reduce selection bias, reveal cell-to-cell differences, and discover rare phenotypes or events [15,16]. Of course, it would be interesting to investigate how to combine localization algorithms with deep learning to further improve the imaging speed and/or imaging throughput of SRLM without compromising the other requirements of high-throughput localization microscopy.

In this paper, we propose a new method, called QC-STORM, for fast maximum likelihood estimation (MLE) of the locations of multiple emitters. Using the well-known divide and conquer strategy in computer science, QC-STORM first extracts ROIs from raw images, and then sends them to a series of MLE-based localization algorithms that are designed to provide fast image processing on ROIs containing a fixed number of emitters. Using simulated and experimental datasets, we compare the localization performance of QC-STORM, ThunderSTORM and WindSTORM. We verify experimentally that QC-STORM enables real-time data processing for an FOV of 100 µm × 100 µm and an exposure time of 10 ms, with similar or even better spatial resolution than the popular fitting-based ThunderSTORM and the non-iterative WindSTORM. We further combine QC-STORM with ANNA-PALM to triple the imaging speed. The open-source code and the QC-STORM plug-in for ImageJ and Micro-Manager are available at: https://github.com/SRMLabHUST/QC-STORM.

2. Methods

2.1 MLEbfgs for fast localization of sparse emitters

Among the numerous reported molecule localization algorithms, it is widely accepted that MLE-based algorithms are able to achieve the highest localization precision [17]. However, MLE-based algorithms usually suffer from high computational intensity [17], and thus are normally combined with GPU computation for a faster localization speed. Here we make several optimizations to the mathematical model and the GPU computation of a conventional MLE-based algorithm, so that the localization speed of this algorithm can be further improved. Similar to previous works [18,19], we use a Gaussian function to model the point spread function (PSF) of our imaging system. For astigmatism 3D imaging, the observed signal model is:

$${I_{i,j}} = A{e^{ - \left( {\frac{{{{({i - {x_0}} )}^2}}}{{2\sigma_x^2}} + \frac{{{{({j - {y_0}} )}^2}}}{{2\sigma_y^2}}} \right)}} + B$$
here, Ii,j is the theoretical signal intensity at pixel (i, j), A and B are the peak signal and background intensities, respectively, (x0, y0) is the center position of the fluorescent molecule, and σx and σy are the standard deviation widths of the Gaussian PSF along the horizontal and vertical directions, respectively. For the 2D PSF model, the two widths are merged into a single width.
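For reference, Eq. (1) can be evaluated over an ROI as in the following NumPy sketch; the grid convention and parameter names are illustrative, not taken from the released code:

```python
import numpy as np

def psf_model(N, A, B, x0, y0, sigma_x, sigma_y):
    """Evaluate Eq. (1): elliptical Gaussian PSF plus constant background
    on an N x N pixel grid (i = column index, j = row index)."""
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="xy")
    return A * np.exp(-((i - x0) ** 2 / (2 * sigma_x ** 2)
                        + (j - y0) ** 2 / (2 * sigma_y ** 2))) + B

# Toy astigmatic ROI: sigma_x != sigma_y encodes the axial position
roi = psf_model(N=11, A=1000.0, B=100.0, x0=5.0, y0=5.0,
                sigma_x=1.3, sigma_y=1.8)
```

At the molecule center the model returns A + B, and it decays toward the background B at the ROI edges.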

Practically, since the observed signal is usually contaminated by noise (mainly photon shot noise), the MLE optimization problem for finding the molecule center, derived by maximizing the probability of the observed signal [19], reduces to minimizing the loss:

$$\min L = \sum\limits_{1 \le i,j \le N} {({{I_{i,j}} - {q_{i,j}}\ln {I_{i,j}}} )}$$
here, qi,j is the observed signal intensity at pixel (i, j), and N is the size of the extracted ROI. Given typical fluorescence signal intensities, the optimization target L is a negative number with a large absolute value (typically above 10^5). If L is represented by a single-precision floating-point number, the resolution of its decimal part is coarser than 10^−2, making it impossible to obtain sufficiently accurate gradients for the parameter optimization. Therefore, L must be represented by a double-precision floating-point number, which on typical graphics cards results in an order of magnitude lower computation speed than single precision [20].
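The precision argument can be checked directly: near a magnitude of 10^5, the spacing between adjacent single-precision floats is on the order of 10^−2, so gradient-sized increments to the loss are silently lost (a quick NumPy check; the 10^−3 increment is an illustrative value):

```python
import numpy as np

# Gap between adjacent representable values near |L| ~ 1e5
spacing32 = np.spacing(np.float32(1e5))   # ~7.8e-3
spacing64 = np.spacing(np.float64(1e5))   # ~1.5e-11

# A small update (e.g. 1e-3) to the loss is lost in float32 ...
lost = np.float32(1e5) + np.float32(1e-3) == np.float32(1e5)
# ... but preserved in float64
kept = np.float64(1e5) + np.float64(1e-3) != np.float64(1e5)
```

This is why the unmodified loss forces double-precision arithmetic, and motivates the shift introduced next.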

To increase the computation speed, we apply a key modification to this MLE optimization model to convert the computing from double-precision floating-point number into single-precision floating-point number:

$$\min L = \sum\limits_{1 \le i,j \le N} {{F_{i,j}}} = \sum\limits_{1 \le i,j \le N} {({{I_{i,j}} - {q_{i,j}} - {q_{i,j}}({\ln {I_{i,j}} - \ln {q_{i,j}}} )} )}$$
where Fi,j is introduced for brevity. By subtracting two data-dependent constants from the loss, the absolute value of L is theoretically reduced toward zero, thus enabling accurate parameter optimization using single-precision floating-point numbers. Moreover, this treatment does not influence the gradient calculation or the subsequent optimization. The modification is simple and can potentially be used to accelerate existing localization algorithms.
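Since the subtracted terms q and q ln q depend only on the observed data, Eq. (3) differs from Eq. (2) by a constant with respect to the fitting parameters; the sketch below verifies numerically that the two losses change by exactly the same amount under a parameter update, while the shifted loss itself stays near zero (the pixel counts are toy values):

```python
import numpy as np

def loss_orig(I, q):
    """Eq. (2): original MLE loss (large negative magnitude)."""
    return np.sum(I - q * np.log(I))

def loss_shifted(I, q):
    """Eq. (3): shifted loss; equals zero when the model I matches q."""
    return np.sum(I - q - q * (np.log(I) - np.log(q)))

q = np.array([120.0, 900.0, 4500.0])    # observed counts (toy values)
I1 = np.array([110.0, 950.0, 4400.0])   # model under parameter set 1
I2 = np.array([125.0, 880.0, 4550.0])   # model under parameter set 2

# Identical loss differences imply identical gradients
d_orig = loss_orig(I1, q) - loss_orig(I2, q)
d_shift = loss_shifted(I1, q) - loss_shifted(I2, q)
```

The shifted loss therefore preserves the optimization landscape while keeping |L| small enough for single-precision arithmetic.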

Then, we rewrite the PSF model as:

$${I_{i,j}} = 128 \times A{e^{ - 0.1 \times \sigma ({{{({i - {x_0}} )}^2} + {{({j - {y_0}} )}^2}} )}} + 64 \times B$$
The coefficients in Eq. (4) are empirical scaling factors that keep all fitting parameters within a similar dynamic range, thus improving the algorithm robustness [21]. Additionally, the division operation in Eq. (1) is replaced by the multiplication in Eq. (4) to accelerate computation.

Finally, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm [21] is introduced to solve Eq. (3). The BFGS algorithm is an iterative optimization algorithm for solving nonlinear optimization problems. Compared with the widely used Levenberg–Marquardt algorithm [22,23], the BFGS algorithm can be computed more efficiently [24]. Moreover, the computational workflow of the BFGS algorithm is more straightforward and contains no conditional judgments that would cause ROIs to be fitted along divergent branches; thus, the BFGS algorithm is better suited to GPU acceleration. The modified MLE-based algorithm is called MLEbfgs. Moreover, the algorithm is accelerated by GPU computation that simultaneously fits thousands of independent molecules. The source code implementing the MLEbfgs algorithm is available at: https://github.com/SRMLabHUST/QC-STORM.
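To illustrate the overall fitting step (this is not the authors' CUDA implementation), the shifted loss of Eq. (3) can be minimized with an off-the-shelf BFGS solver; the 2D single-width model, starting guesses, and parameter layout below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def model(params, N):
    """Isotropic 2D Gaussian PSF plus background (single-width variant of Eq. (1))."""
    A, x0, y0, sigma, B = params
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="xy")
    return A * np.exp(-((i - x0) ** 2 + (j - y0) ** 2) / (2 * sigma ** 2)) + B

def shifted_loss(params, q, N):
    """Eq. (3), guarded against non-physical negatives during line search."""
    I = np.maximum(model(params, N), 1e-6)
    return np.sum(I - q - q * (np.log(I) - np.log(q)))

N = 7
truth = np.array([1000.0, 3.3, 2.8, 1.3, 100.0])   # A, x0, y0, sigma, B
q = model(truth, N)                                 # noiseless toy ROI

# Crude starting guess from the ROI statistics, then BFGS refinement
start = np.array([q.max() - q.min(), 3.0, 3.0, 1.5, q.min()])
fit = minimize(shifted_loss, start, args=(q, N), method="BFGS")
A_f, x_f, y_f, s_f, B_f = fit.x
```

On this noiseless example the solver recovers the sub-pixel center to well within a tenth of a pixel; the GPU version follows the same mathematical structure with a fixed, small iteration count per ROI.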

2.2 Weighted MLE for sparse emitters with signal contamination

The most important task in SRLM is to precisely localize the center positions of the molecules in an extracted ROI. Due to the random activation nature of molecules, the extracted ROI may contain incomplete fluorescence emission signal from adjacent molecules (here called signal contamination). If the PSF model for molecule localization considers only one molecule, the molecule center cannot be localized precisely (Fig. 1(a)). In the past, this problem was mainly solved by using multi-emitter localization algorithms [25]. However, most reported multi-emitter localization algorithms usually require many fitting parameters and a large number of iterations, and thus run much more slowly than molecule localization algorithms for sparse emitters.


Fig. 1. Principle of MLEwt for sparse emitter localization with signal contamination. (a) Comparison of the signals recovered by conventional MLE (MLEbfgs, upper) and weighted MLE (MLEwt, lower). A contaminated ROI (11 × 11 pixels) is extracted from a raw image taken from astigmatism 3D imaging, and is processed using either MLEbfgs or MLEwt. The signal is recovered using the fitted parameters. Note that this ROI could also be fitted using a multi-emitter model, at the expense of a slower data processing speed. (b) MLE loss maps from an ROI with signal contamination. In both maps, the fitting parameters other than the x and y positions were fixed and adopted from the fitted result of MLEwt. (c) Averaged horizontal intensity profile of the extracted ROI in (a). The σleft and σright denote the estimated standard deviation widths (in pixels) for the Gaussian weight function on the left and right sides, respectively.


Weighted maximum likelihood estimation (WLE), which applies larger weights to more reliable data and smaller weights to less reliable data, is an efficient method to suppress signal contamination. Previously, WLE has been used to increase the robustness of MLE-based fitting and to reduce the bias resulting from contaminated signals [26,27]. In SRLM, WLE has also been used to improve localization precision by correcting the PSF distortion in 3D imaging [28]. Typically, different weights should be designed according to the type of application and contamination.

Here we introduce a Gaussian-function-weighted MLE (MLEwt) to suppress the signal contamination from adjacent molecules. As illustrated in Fig. 1(a), for a target molecule that does not overlap severely with an adjacent molecule, a traditional MLE-based algorithm produces clearly incorrect localization results. In contrast, if we first estimate the PSF width of the target molecule, and then apply a weight function, which is a Gaussian function based on the estimated PSF width, to the MLE-based algorithm for sparse emitters, the localization precision can be significantly improved (Fig. 1(a)). From the MLE loss maps (Fig. 1(b)), we found that there are two local minima for MLEbfgs, but only one for MLEwt. Since the goal of MLE is to find the best parameters by minimizing the loss (Eq. (3)), the two local minima (shown in Fig. 1(b)) prevent MLEbfgs from finding the correct center position of the target molecule.

Below is a brief description of MLEwt. First, we apply a Gaussian weight function to the MLE optimization model (see Eq. (3)):

$$\min L = \sum\limits_{1 \le i,j \le N} {{W_{i,j}}{F_{i,j}}}$$
The Gaussian weight function, ${W_{i,j}}$, is calculated using:
$${W_{i,j}} = {e^{ - \left( {\frac{{{{({i - {x_w}} )}^2}}}{{2\sigma_{wx}^2}} + \frac{{{{({j - {y_w}} )}^2}}}{{2\sigma_{wy}^2}}} \right)}}$$
where (xw, yw) is the center position of the Gaussian weight function, and is directly assigned by the center of the extracted ROI since the target molecule is made to be centered in this ROI during the ROI identification and extraction step. σwx and σwy are the estimated widths of the PSF in horizontal and vertical directions, respectively. For 2D imaging, the smaller one of them is considered as the PSF width.

To estimate the PSF width of the target molecule, we first average the signal of the ROI along the horizontal and vertical directions, respectively, to obtain two one-dimensional Gaussian-shaped curves (Fig. 1(c)). Then, the standard deviation widths of the Gaussian-shaped curves on the four sides (left, right, top and bottom) are calculated using the relationship between the area and the peak of a Gaussian curve, given by the Gaussian integral:

$$\int_0^{+\infty} A e^{-\frac{x^2}{2\sigma^2}}\, dx = A\sigma\sqrt{\frac{\pi}{2}}$$
To better calculate the area of the Gaussian shape curves, we apply a 10-fold cubic spline interpolation before the numerical integration of the Gaussian shape curves. Finally, the smaller value of the estimated widths in the left and the right sides is used as the final horizontal PSF width, and the smaller value of the estimated widths in the top and the bottom sides is used as the final vertical PSF width.
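A minimal sketch of this width estimate follows, using Eq. (7) rearranged as σ = area / (A√(π/2)); fine sampling stands in for the 10-fold cubic-spline interpolation, and the profile values are synthetic:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal-rule area under y(x)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Synthetic one-dimensional Gaussian-shaped profile (background removed)
sigma_true = 1.4                               # pixels
x = np.arange(-5.0, 5.0 + 1e-9, 0.1)           # fine grid
profile = np.exp(-x ** 2 / (2 * sigma_true ** 2))

peak_idx = int(np.argmax(profile))
peak = profile[peak_idx]

# Eq. (7): the area of each half-curve equals A * sigma * sqrt(pi/2)
area_left = trapezoid(profile[:peak_idx + 1], x[:peak_idx + 1])
area_right = trapezoid(profile[peak_idx:], x[peak_idx:])
sigma_left = area_left / (peak * np.sqrt(np.pi / 2))
sigma_right = area_right / (peak * np.sqrt(np.pi / 2))
```

For an uncontaminated molecule the left and right estimates agree; contamination from a neighbor inflates one side, which is exactly the asymmetry exploited by the classification rule below.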

An extracted ROI is determined to be contaminated by adjacent molecules if the estimated width of the corresponding Gaussian-shaped curve on one side is 40% larger than that on the opposite side. For contaminated ROIs, the estimated PSF widths of the Gaussian-shaped curves divided by 1.2 are used as the widths of the final Gaussian weight function. For uncontaminated ROIs, the same unit weights are used for all pixels. These parameters were empirically determined to avoid incorrect classification of uncontaminated molecules. Since the center of the weight function does not exactly match the ground-truth position of the target molecule, applying MLEwt to uncontaminated molecules may reduce localization precision. Moreover, for truly contaminated molecules, a narrower weight function, whose appropriate width is set by the distance to the nearest neighboring molecule, helps improve localization precision.
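The contamination test and the weight-width choice described above can be summarized in a small helper (the 1.4 ratio and 1.2 divisor come from the text; the function itself and the use of the smaller per-axis width are an illustrative reading):

```python
def weight_widths(sigma_left, sigma_right, sigma_top, sigma_bottom):
    """Classify an ROI as contaminated if one side's estimated width is
    40% larger than the opposite side's; return the Gaussian weight
    widths (sigma_wx, sigma_wy), or None for unit weights."""
    contaminated = (
        max(sigma_left, sigma_right) > 1.4 * min(sigma_left, sigma_right)
        or max(sigma_top, sigma_bottom) > 1.4 * min(sigma_top, sigma_bottom)
    )
    if not contaminated:
        return None  # uncontaminated: unit weights, i.e. plain MLEbfgs
    # Narrower weight (width / 1.2) suppresses the contaminating signal
    sigma_wx = min(sigma_left, sigma_right) / 1.2
    sigma_wy = min(sigma_top, sigma_bottom) / 1.2
    return sigma_wx, sigma_wy

symmetric = weight_widths(1.0, 1.0, 1.0, 1.0)        # -> None
wx, wy = weight_widths(1.0, 1.5, 1.0, 1.0)           # contaminated case
```
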

Finally, Eq. (5) is solved using the same optimization method as that for Eq. (3). Because the Gaussian weight function is calculated before the iterative optimization process for solving Eq. (5), MLEwt runs at a similar speed as the unweighted MLE-based algorithm, MLEbfgs.

2.3 MLE for two or three emitters

The MLE-based algorithm for sparse emitter localization can be extended to analyze ROIs with multiple emitters if we add multiple PSFs to the observed signal model [29]. For astigmatism 3D imaging, the signal model is:

$${I_{i,j}} = \sum\limits_{k = 1}^M {{A_k}{e^{ - \left( {\frac{{{{({i - {x_0}_k} )}^2}}}{{2\sigma_{xk}^2}} + \frac{{{{({j - {y_0}_k} )}^2}}}{{2\sigma_{yk}^2}}} \right)}}} + B$$
here M is the maximum number of fitted molecules. For 2D imaging, the two widths are merged into a single width. The main difference between multi-emitter MLE and sparse-emitter MLE is that the former has more fitting parameters, and thus requires a larger iteration number. Practically, the fitting parameters and iteration numbers used in this study are: 5 parameters and 8 iterations for 2D sparse localization, 6 parameters and 11 iterations for 3D sparse localization, 8 parameters and 13 iterations for 2D two-emitter localization, and 11 parameters and 16 iterations for both 2D three-emitter localization and astigmatism 3D two-emitter localization. The numbers of fitted parameters are not adjustable, since they are determined by the PSF model and the number of emitters to be fitted. The iteration numbers were determined by increasing the iterations until the improvement in localization precision fell below 0.5 nm. The MLE-based algorithms for two- or three-emitter localization are solved using the same optimization method as MLEbfgs.
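Eq. (8) simply sums M Gaussian components over a shared background; a direct NumPy evaluation is shown below (the parameter layout per emitter is an illustrative choice):

```python
import numpy as np

def multi_emitter_model(N, emitters, B):
    """Eq. (8): sum of M elliptical Gaussians plus shared background B.
    `emitters` is a list of (A, x0, y0, sigma_x, sigma_y) tuples."""
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="xy")
    I = np.full((N, N), float(B))
    for A, x0, y0, sx, sy in emitters:
        I += A * np.exp(-((i - x0) ** 2 / (2 * sx ** 2)
                          + (j - y0) ** 2 / (2 * sy ** 2)))
    return I

# Two overlapping emitters in a single 7 x 7 ROI (toy values)
roi = multi_emitter_model(
    N=7, B=100.0,
    emitters=[(800.0, 2.0, 3.0, 1.2, 1.2), (600.0, 4.5, 3.5, 1.2, 1.2)])
```

The fitting step is identical to the sparse case except that the parameter vector concatenates the per-emitter parameters, which is why the iteration counts above grow with M.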

2.4 ROI identification and extraction

The ROI identification and extraction procedure is modified from a reported algorithm called MaLiang [19]. The modifications mainly include: 1) full GPU computation is used to avoid frequent memory copies between CPU and GPU; 2) a larger annular filter (9 × 9) is used for non-uniform background removal, which is necessary for astigmatism 3D imaging; and 3) a different method is developed to determine the threshold for ROI identification in raw images with high emitter density, since directly calculating the standard deviation of background-removed images would overestimate the background noise intensity. The workflow is shown in Fig. 2. For the last modification, we equally split the background-removed image into four small images of the same size, and then generate a noise intensity image whose pixel values equal the smallest pixel values at the same positions of the four images. This treatment rejects fluorescent signal from the noise estimate owing to the random activation of fluorescent emitters during experiments (that is, it is almost impossible to have emitters at the same position in all four images). The threshold for ROI identification is calculated as 8.5 times the standard deviation of the noise intensity image. After the smoothed image is enhanced by a 3 × 3 high-pass filter, where the center pixel is set to 2 and the other pixels are set to −0.125, the targeted molecules are identified based on this threshold and local maxima. The high-pass filter is not used for processing astigmatism 3D raw images. Finally, ROIs centered on the targeted molecules are extracted from the raw images. In this study, the ROI sizes are set to 7 × 7 for 2D and 11 × 11 for astigmatism 3D raw images, respectively.
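The four-way-minimum noise estimate can be sketched as follows (assuming an even-sized background-removed image split into 2 × 2 quadrants; the 8.5 factor is from the text):

```python
import numpy as np

def detection_threshold(img):
    """Estimate the noise level of a background-removed image by taking
    the pixel-wise minimum of its four equal quadrants. Sparse emitters
    rarely occupy the same position in all four quadrants, so their
    signal is rejected from the noise estimate."""
    h, w = img.shape
    h2, w2 = h // 2, w // 2
    quads = (img[:h2, :w2], img[:h2, w2:2 * w2],
             img[h2:2 * h2, :w2], img[h2:2 * h2, w2:2 * w2])
    noise = np.minimum.reduce(quads)
    return 8.5 * float(np.std(noise))

# Pure-noise image (sigma = 2): the threshold tracks the noise level
rng = np.random.default_rng(0)
thr = detection_threshold(rng.normal(0.0, 2.0, size=(256, 256)))
```

Note that the minimum of four samples has a smaller spread than a single sample, which is acceptable here since the 8.5 multiplier was tuned for this statistic.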


Fig. 2. The workflow for ROI identification and extraction.


2.5 QC-STORM plug-in

We developed a Java-based user-friendly plug-in, called QC-STORM, that includes all the image processing steps for fast MLE fitting of sparse and multiple emitters, as well as super-resolution image reconstruction and some image statistics. The image reconstruction is based on the popular Gaussian rendering method, where the width of the Gaussian kernel is determined by the localization precision. This plug-in can be easily integrated into the widely used ImageJ software for off-line image processing or the Micro-Manager software for online image processing. Note that the performance of QC-STORM mainly depends on the floating-point operation speed of the GPU and the memory bandwidth of the CPU. By combining a Dell Precision T7610 workstation with a good graphics card (see Section 2.7 for details), QC-STORM is capable of providing real-time processing for raw images with a size of 1024 × 1024 pixels and an acquisition time of 10 ms. QC-STORM also provides statistical information, mainly including emitter brightness, background, signal-to-noise ratio (SNR), localization precision and PSF widths, for real-time acquisition optimization during SRLM experiments. Here QC stands for quality control, since we aim to achieve quality control of the image acquisition process with the assistance of this plug-in.

2.6 Sample preparation and super-resolution fluorescence imaging

SRLM experiments were performed on an Olympus IX73 inverted optical microscope. A 640 nm and a 405 nm laser were combined and coupled into a multimode optical fiber. The fiber output was collimated and zoomed by relay lenses, and finally focused onto the biological sample using an oil-immersion objective (100×, NA 1.4, Olympus) to provide homogeneous flat-field illumination [30]. Alexa 647 or CF 680 labeled U-2 OS cells were imaged with the 640 nm laser at an illumination intensity of 7 kW/cm2 while immersed in standard STORM buffer (50 mM Tris, pH 8.0, 10 mM NaCl, 10% glucose, 100 mM mercaptoethylamine, 500 µg/mL glucose oxidase, 40 µg/mL catalase) [31]. Images were acquired with a popular sCMOS camera (Flash 4.0 V3, Hamamatsu Photonics) at an exposure time of 10 ms and a pixel size of 108 nm. The FOV was 110 µm × 110 µm.

2.7 Image simulation and algorithm evaluation

We simulated 2D and astigmatism 3D raw images with various activation densities for algorithm evaluation. For 2D images, a Gaussian PSF with a full width at half maximum (FWHM) of 300 nm was used. For astigmatism 3D images, the molecules were randomly distributed within a depth range of 600 nm, and the PSF widths at different depths were calculated from an open dataset of real astigmatism 3D microtubule images [32]. For both 2D and astigmatism 3D images, the total photon count was simulated to follow a log-normal distribution with a mean of 5000 photons and a standard deviation of 2000 photons, and the background was set to 100 photons per pixel [33]. At each activation density, we simulated 200 images with 256 × 256 pixels and a 100 nm pixel size. The molecules were randomly distributed in the images, and the number of molecules was calculated by multiplying the image area by the activation density. Moreover, the simulated images considered shot noise and camera specifications (0.77 quantum efficiency, 1.6 e− read noise).
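The brightness and camera model described above can be sketched as follows; the log-normal shape parameters are derived from the stated mean (5000) and standard deviation (2000), and the camera constants are taken from the text:

```python
import numpy as np

def lognormal_params(mean, std):
    """Underlying (mu, sigma) of a log-normal with the given mean/std."""
    sigma2 = np.log(1.0 + (std / mean) ** 2)
    return np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2)

rng = np.random.default_rng(1)
mu, sig = lognormal_params(5000.0, 2000.0)
photons = rng.lognormal(mu, sig, size=100_000)   # total photons per emitter

# Camera model: Poisson shot noise, 0.77 quantum efficiency,
# 1.6 e- Gaussian read noise
electrons = rng.poisson(0.77 * photons) + rng.normal(0.0, 1.6, photons.shape)
```
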

We compared the localization performance of our QC-STORM with ThunderSTORM [29] and WindSTORM [6] using the same simulated image datasets. ThunderSTORM is a widely used and comprehensive ImageJ plug-in for SRLM, while WindSTORM is a newly developed non-iterative algorithm for fast localization of high-density emitters. ThunderSTORM provides fitting modes for sparse emitters and multiple emitters, respectively. We calculated the Fourier ring correlation (FRC) resolution [34] for experimental raw images. Before calculating the FRC resolution, localizations of molecules emitting fluorescence in consecutive frames were detected and merged into a single localization using a distance threshold of 50 nm. All evaluations were performed on a Dell Precision T7610 workstation, which has two Intel Xeon E5-2630 V2 CPUs working at 2.6 GHz with 56 GB memory, and a Gigabyte GeForce RTX 2080 Ti Gaming OC graphics card with 11 GB memory.

3. Results and discussion

3.1 The Divide and Conquer strategy

After ROI identification and extraction using the method described in Section 2.4, we apply the divide and conquer strategy to classify and process the ROIs. The classification is based on two parameters: the distance between the nearest neighboring ROI centers, and the estimated PSF widths. As seen in Fig. 3(a), we first need to determine whether an ROI contains only one emitter. For this purpose, we consider two criteria: 1) Whether the distance between the centers of this ROI and its nearest neighboring ROI is larger than a threshold (that is, ROI width / 2 + 1, or 4.5 pixels for the 2D images in this study); and 2) Whether the averaged PSF width estimated by MLEwt along the four directions (see Section 2.2 for details) is smaller than a threshold (that is, ROI width / 2 / 2.35, or 1.49 pixels for the 2D images in this study). The ROI is considered to contain only one emitter if both answers are YES. Otherwise, the ROI is classified to have multiple emitters.
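With the ROI width W in pixels, the two criteria translate into simple thresholds; the helper below is a sketch, and W = 7 reproduces the 4.5-pixel and 1.49-pixel values quoted for 2D:

```python
def is_single_emitter(nn_distance, mean_psf_width, roi_width=7):
    """Criterion 1: nearest-neighbor ROI center distance above W/2 + 1.
    Criterion 2: mean estimated PSF width below (W/2) / 2.35, i.e. the
    FWHM of the estimated PSF fits within half the ROI."""
    dist_thr = roi_width / 2 + 1        # 4.5 px for W = 7
    width_thr = roi_width / 2 / 2.35    # ~1.49 px for W = 7
    return nn_distance > dist_thr and mean_psf_width < width_thr
```

An ROI failing either test is routed to the multi-emitter fitters described below.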


Fig. 3. MLE fitting using the divide and conquer strategy. (a) The workflow. (b-c) The percentage of ROIs containing different numbers of emitters under different activation densities. Here (b) is for 2D, and (c) is for astigmatism 3D raw images.


For an ROI considered to have only one emitter, a further judgment related to signal contamination is made, using the criterion described in Section 2.2. Then, the ROI is processed with either MLEwt or MLEbfgs, depending on whether the signal is contaminated or not. Since signal contamination commonly results in a larger estimated PSF width, a third judgment related to the fitting quality is made. If the estimated PSF width is larger than a threshold (that is, ROI width / 2 / 2.35, or 1.49 pixels for the 2D images in this study), the fitting is not considered a good fitting, and the ROI is further processed using the two-emitter and three-emitter localization algorithms. The same criterion is used for determining the quality of two-emitter fitting. For the emitters localized from a single ROI, the emitter at the center of the ROI is added to the localization list, while the other localized emitters in this ROI are compared with the emitters identified in the ROI identification step (see Section 2.4) to avoid duplicate localizations. Only those localized emitters that were not identified previously are added to the localization list.

To reduce program complexity and ensure real-time image processing under our current hardware configuration, the MLE-based ROI processing in this study is terminated after three-emitter fitting for 2D images, and after two-emitter fitting for astigmatism 3D images. Of course, the divide and conquer strategy shown in Fig. 3(a) could be extended to include more emitters for higher activation densities, given more powerful computational resources. Currently, based on simulated raw images, we found that real-time image processing is possible for raw images with a size of 1024 × 1024 pixels and an acquisition time of 10 ms. In this case, the activation density can be up to 3.5 molecule/µm2 for 2D (Fig. 3(b)) and 1.5 molecule/µm2 for astigmatism 3D raw images (Fig. 3(c)), respectively, while the rejection rate of ROIs remains below 10%.

3.2 Localization of sparse emitters using simulated datasets

We compared the localization performance of our MLE-based localization algorithms (MLEbfgs and MLEwt) with the popular fitting-based ThunderSTORM using simulated datasets. Following Sage et al. [35], we used the root-mean-square error (RMSE) to quantify the localization precision, and the Jaccard index to quantify the detection rate. We performed Jaccard index calculations based on the pairing between the found localizations and the ground truth with a 100 nm threshold. For 2D simulated datasets, MLEbfgs achieves similar and/or slightly better localization precision (Fig. 4(a)) and detection rate (Fig. 4(b)) than ThunderSTORM working in the sparse emitter mode, while a combination of MLEwt with MLEbfgs significantly improves the localization precision and the detection rate at all activation densities. At a low activation density (such as 0.1 molecule/µm2), all methods achieve a localization precision that approaches the Cramér–Rao lower bound (CRLB) of 4.8 nm [23] (Fig. 4(a)). For astigmatism 3D simulation, the superior localization performance of the combination of MLEwt with MLEbfgs over ThunderSTORM is also observed (Figs. 5(a)–5(c)). Moreover, it is worth pointing out that, for both 2D and astigmatism 3D simulations, the combination of MLEwt with MLEbfgs roughly doubles the maximum usable activation density compared with ThunderSTORM, when 0.5 is used as the Jaccard index threshold. These findings demonstrate the usefulness of the weighted MLE localization algorithm (MLEwt) for processing raw images with contaminated signal.
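For reference, the two evaluation metrics can be computed as in the sketch below, using a greedy nearest-neighbor pairing within the 100 nm threshold (the exact pairing procedure of the benchmark may differ; coordinates in the demo are toy values in nm):

```python
import numpy as np

def pair_and_score(found, truth, radius=100.0):
    """Greedily pair found localizations with ground-truth positions
    within `radius` (nm); return RMSE over matched pairs and the
    Jaccard index TP / (TP + FP + FN)."""
    found, truth = np.asarray(found), np.asarray(truth)
    unused = list(range(len(truth)))
    errors = []
    for f in found:
        if not unused:
            break
        d = [float(np.hypot(*(f - truth[t]))) for t in unused]
        k = int(np.argmin(d))
        if d[k] <= radius:          # true positive within pairing radius
            errors.append(d[k])
            unused.pop(k)
    tp = len(errors)
    fp = len(found) - tp            # found but unmatched
    fn = len(truth) - tp            # ground truth missed
    rmse = float(np.sqrt(np.mean(np.square(errors)))) if errors else float("nan")
    return rmse, tp / (tp + fp + fn)

# Demo: two of three molecules detected, both within 100 nm
rmse, jaccard = pair_and_score(
    found=[(10.0, 0.0), (505.0, 490.0)],
    truth=[(0.0, 0.0), (500.0, 500.0), (1000.0, 0.0)])
```
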


Fig. 4. Comparison of the localization performance among MLEbfgs, MLEwt and ThunderSTORM for simulated 2D raw images. (a) The dependence of RMSE on activation density. (b) The dependence of Jaccard index on activation density. (c) The dependence of run time on activation density. Here the run time accounts for the full data processing routine, including ROI extraction, molecule localization, and super-resolution image rendering, for a single raw image. The simulated images contain 1024 × 1024 pixels with a pixel size of 100 nm. Both QC-STORM and ThunderSTORM were set to work in the sparse emitter mode when the run time was evaluated. Both the MLEbfgs and MLEwt algorithms were used when QC-STORM worked in the sparse emitter mode. The letter S denotes the sparse emitter mode.


It is interesting to evaluate the data processing speed of our MLE-based algorithms. For 2D imaging, where the ROI size is 7 × 7 pixels, we found that the localization speed is 8.7 × 10^6 ROIs per second for MLEbfgs and 8.2 × 10^6 ROIs per second for MLEwt, respectively. Compared with a MATLAB implementation of MLEbfgs, which achieves a localization speed of only 150 ROIs per second, the speed is increased by 5.8 × 10^4 times after applying GPU computation. Meanwhile, for astigmatism 3D imaging, where the ROI size is 11 × 11 pixels, we achieve a localization speed of 2.4 × 10^6 ROIs per second when MLEwt is used.

We further compared the full data processing routine, rather than the localization speed alone, between our QC-STORM and the popular ThunderSTORM, using simulated raw images with a size of 1024 × 1024 pixels (corresponding to an FOV of 102 µm × 102 µm at a pixel size of 100 nm). We found that, when both plug-ins work in the sparse emitter mode, QC-STORM is about two orders of magnitude faster than ThunderSTORM for both 2D and astigmatism 3D raw images (Fig. 4(c) and Fig. 5(d)). These data also show that QC-STORM is capable of achieving real-time processing within a 10 ms exposure time for an activation density of up to 3 molecules/µm2 for 2D imaging (Figs. 4(b)–4(c)) and 1 molecule/µm2 for astigmatism 3D imaging (Figs. 5(c)–5(d)), while the detection rate remains acceptable (that is, the Jaccard index is better than 0.5). Interestingly, although Sage et al. mentioned that the reported algorithms for 2D localization of sparse emitters are near optimal [32], here we show an improved localization precision and/or detection rate after combining weighted MLE (MLEwt) with conventional MLE (MLEbfgs), with only a slight speed degradation when the weighted MLE is used.


Fig. 5. Comparison of the localization performance among MLEbfgs, MLEwt and ThunderSTORM for simulated astigmatism 3D raw images. (a) The dependence of lateral RMSE on activation density. (b) The dependence of axial RMSE on activation density. (c) The dependence of Jaccard index on activation density. (d) The dependence of run time on activation density. Here the run time accounts for the full data processing routine for a single raw image. Other settings are similar to those in Fig. 4.


3.3 Localization of multiple emitters using simulated datasets

As seen in Fig. 3(a), we combined the divide and conquer strategy with a series of MLE-based localization algorithms to process raw images with high activation density. Therefore, it is more useful to assess the overall performance of QC-STORM working in the multi-emitter mode (corresponding to a combination of MLEbfgs, MLEwt, MLE2e and MLE3e), rather than the individual multi-emitter algorithms (that is, MLE2e and MLE3e). Using simulated 2D and astigmatism 3D images, we compared the localization performance of QC-STORM with that of ThunderSTORM and found that ThunderSTORM exhibits slightly better performance in both localization precision (Fig. 6(a), Figs. 7(a)–7(b)) and detection rate (Fig. 6(b), Fig. 7(c)), but runs three orders of magnitude slower than QC-STORM (Fig. 6(c), Fig. 7(d)). In the comparison between QC-STORM and WindSTORM using 2D simulated images, WindSTORM presents better localization precision and detection rate than QC-STORM (Figs. 6(a)–6(b)). Besides the Jaccard index, which jointly evaluates the true positive and false negative rates, these two rates can also be evaluated independently [35]. For both simulated 2D and astigmatism 3D images, QC-STORM exhibits both a higher true positive rate and a higher false negative rate than ThunderSTORM. For simulated 2D images, QC-STORM presents a lower true positive rate and a higher false negative rate than WindSTORM.
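The divide and conquer idea above, i.e. routing each ROI to the cheapest fitter that matches its estimated emitter count, can be sketched as follows. The function names and grouping logic are illustrative assumptions, not the actual QC-STORM implementation, which batches ROIs of the same class so each batch can run as one GPU kernel:

```python
def classify_roi(estimated_emitters):
    """Map an emitter-count estimate to one of the MLE variants."""
    if estimated_emitters <= 1:
        return "MLEbfgs/MLEwt"   # sparse-emitter fitters
    elif estimated_emitters == 2:
        return "MLE2e"           # two-emitter fitter
    else:
        return "MLE3e"           # three-emitter fitter (2D upper limit)

def dispatch(rois):
    """Group (roi, estimated_emitters) pairs by the fitter to be used."""
    batches = {"MLEbfgs/MLEwt": [], "MLE2e": [], "MLE3e": []}
    for roi, n in rois:
        batches[classify_roi(n)].append(roi)
    return batches

batches = dispatch([("roi_a", 1), ("roi_b", 2), ("roi_c", 3), ("roi_d", 1)])
assert [len(batches[k]) for k in ("MLEbfgs/MLEwt", "MLE2e", "MLE3e")] == [2, 1, 1]
```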


Fig. 6. Comparison of the localization performance of three software packages for 2D raw images. (a) The dependence of RMSE on activation density. (b) The dependence of Jaccard index on activation density. (c) The dependence of run time on activation density. Here the run time accounts for the full data processing routine for a single raw image. Other settings are similar to those in Fig. 4. The letter M denotes multi-emitter mode, and WindSTORM has only a multi-emitter mode. We do not show the run time of WindSTORM, since its GPU version does not run correctly under our hardware configuration.



Fig. 7. Comparison of the localization performance of ThunderSTORM and QC-STORM for astigmatism 3D raw images. (a-b) The dependence of lateral and axial RMSE on activation density. (c) The dependence of Jaccard index on activation density. (d) The dependence of run time on activation density. Here the run time accounts for the full data processing time for a single image of 1024 × 1024 pixels. The letter M denotes multi-emitter mode.


Because the current GPU version of WindSTORM is not compatible with our Dell Precision T7610 workstation and GeForce RTX 2080 Ti graphics card, we estimated the run time of WindSTORM from the previously reported value, after accounting for the difference in image size. The run time of the GPU-based WindSTORM was previously measured with a GTX 1080 graphics card, and could possibly be reduced by 50% when an RTX 2080 Ti graphics card is used. Therefore, although the run time of WindSTORM may not depend linearly on the performance (or single-precision floating-point throughput) of the GPU, we can still roughly estimate that WindSTORM runs slightly slower than QC-STORM, after using GPUs to accelerate both software packages. Note that, in this study, the localization precision and detection rate of WindSTORM were measured using its MATLAB version, and that WindSTORM is currently not capable of processing 3D images.
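The rescaling above can be written as a back-of-the-envelope estimate. The helper below is ours, for illustration only; the assumed ~2× GPU speedup (the 50% reduction mentioned) is an optimistic upper bound, since run time rarely scales linearly with GPU throughput:

```python
def scaled_run_time(measured_time_s, old_pixels, new_pixels, gpu_speedup=2.0):
    """Rescale a published run time to a different image size and a
    faster GPU, assuming (optimistically) linear scaling in both."""
    return measured_time_s * (new_pixels / old_pixels) / gpu_speedup

# E.g. a 6.5 s measurement on 512 x 512 frames, rescaled to 1024 x 1024
# frames on a GPU assumed to be twice as fast.
assert scaled_run_time(6.5, 512 * 512, 1024 * 1024) == 13.0
```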

Considering that QC-STORM provides advantageous run-time performance, we further investigated the localization speed of our MLE-based localization algorithms for two emitters (MLE2e) and three emitters (MLE3e). For 2D imaging with an ROI size of 7 × 7 pixels, we achieve localization speeds of 3.7 × 10⁶ and 1.9 × 10⁶ ROIs per second for MLE2e and MLE3e, respectively. For astigmatism 3D imaging with an ROI size of 11 × 11 pixels, we achieve a localization speed of 7.6 × 10⁵ ROIs per second for two-emitter localization. These findings indicate that our MLE-based algorithms for multiple emitters run only 2–4 times slower than the fast MLE-based algorithms for sparse emitters (that is, MLEbfgs and MLEwt).

We also compared the localization performance of QC-STORM with ThunderSTORM and 3D-DAOSTORM using open datasets [32]. The open datasets contain 2D and astigmatism 3D images at both low and high activation density, and at both low and high SNR. The results are shown in Fig. 8. For both 2D and astigmatism 3D images, the localization precision and detection rate of QC-STORM are comparable to those of ThunderSTORM, and are not as good as those of 3D-DAOSTORM [36]. It should be noted that the reported localization performance of QC-STORM [32] was tested using only MLEbfgs on a relatively low-end graphics card (compared with the RTX 2080 Ti in this study). The updated results have recently been uploaded to the website under the name QC-STORM-2019 [37].


Fig. 8. Comparison of the localization performance of three software packages using open datasets. (a) Evaluation of RMSE and Jaccard index on four 2D imaging datasets. (b) Evaluation of lateral and axial RMSE, and Jaccard index on four astigmatism 3D imaging datasets. The acronyms LD and HD denote low density and high density, respectively. MT1 and MT2 are high SNR datasets. MT3, MT4, ER1 and ER2 are low SNR datasets.


Regarding computation speed, we failed to run 3D-DAOSTORM on our simulated data. However, 3D-DAOSTORM has been shown to be 2–3 orders of magnitude slower than the GPU-accelerated WindSTORM [6], while the latter runs at a similar speed as our QC-STORM.

3.4 Testing QC-STORM using experimental data

We performed 2D STORM imaging of the microtubules in U-2 OS cells, and captured a total of 1000 raw images with an image size of 1024 × 1024 pixels and a pixel size of 108.3 nm (corresponding to an FOV of 110 µm × 110 µm) under high activation density (Fig. 9(a)). We found that QC-STORM localizes a total of 6.6 × 10³ molecules per raw image. By dividing the localization number by the underlying structure area [38,39], the localization density was estimated to be 3.3 molecules/µm². The percentage of sparse emitter fitting in processing the raw images is only about 30%, indicating that the raw images were acquired under high activation density. QC-STORM spends 6 seconds to finish the full processing of the 1000 raw images (including ROI extraction, molecule localization and super-resolution image reconstruction). Therefore, the total image processing time per raw image is 6 ms, which is less than the exposure time of 10 ms. For WindSTORM, the reported time for processing 2000 images with an image size of 512 × 512 pixels is 6.5 seconds [6], corresponding to 13 seconds for processing 1000 images with an image size of 1024 × 1024 pixels. This time could be roughly shortened to 6.5 seconds after replacing the previous GTX 1080 graphics card with the current RTX 2080 Ti graphics card. For ThunderSTORM, we estimated that a total of 22 hours is required to process 1000 raw images with 1024 × 1024 pixels in each image. Therefore, the full processing speed of QC-STORM is 1.32 × 10⁴ times that of ThunderSTORM, and is slightly faster than WindSTORM. The total image processing times for the three software packages are shown in Fig. 9(a).
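The real-time arithmetic above (1000 frames processed in 6 s against a 10 ms exposure) can be checked with a one-line helper; the function name is ours:

```python
def per_frame_time_ms(total_time_s, n_frames):
    """Mean full-pipeline processing time per raw image, in milliseconds."""
    return 1000.0 * total_time_s / n_frames

exposure_ms = 10.0
# QC-STORM: 1000 frames in 6 s -> 6 ms/frame, below the exposure time,
# so the analysis keeps pace with acquisition.
assert per_frame_time_ms(6.0, 1000) == 6.0
assert per_frame_time_ms(6.0, 1000) < exposure_ms
```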


Fig. 9. Evaluation of the localization performance among QC-STORM, ThunderSTORM and WindSTORM using experimental 2D images. (a) A super-resolution image reconstructed from a total of 1000 raw images with 1024 × 1024 pixels. The images were processed by QC-STORM. The estimated full data processing times for the different software packages are shown in the top-right corner. (b) Enlarged super-resolution images of the boxed region in (a). A representative raw image shows a non-uniform fluorescence background. The three software packages were used to process the raw images independently. The FRC resolution and the total number of localizations are shown in the lower-right corners. (c) Enlarged super-resolution images of the two ROIs indicated in (b). The upper row shows an area with high fluorescence background, and the lower row shows an area with low fluorescence background. The localization number and the FRC resolution of these ROIs are also shown in the images. (d) Spatial resolution characterized by the intensity profiles at the marked positions in (c).


We calculated the total number of localized molecules and the FRC resolution of a zoomed-in image with 256 × 256 pixels (Fig. 9(b)). We found that QC-STORM localizes a total of 6.1 × 10⁵ molecules, which is less than that from ThunderSTORM (7.2 × 10⁵) or WindSTORM (6.9 × 10⁵). However, the FRC resolution of QC-STORM (64 nm) is the same as that from ThunderSTORM (64 nm), and is significantly better than that from WindSTORM (75 nm). Note that WindSTORM achieves a slightly higher localization precision than QC-STORM on simulated data, but a worse FRC resolution on this experimental data. This is probably because: 1) the SNR of the experimental data is much more complex than that of the simulated data, and thus results in localizations with different localization precision; and 2) QC-STORM filters out localizations with low precision (see the source code of QC-STORM for more details: https://github.com/SRMLabHUST/QC-STORM). In fact, for an ROI with high fluorescence background (Fig. 9(c)), which usually suffers from low localization precision, the super-resolution image reconstructed by QC-STORM shows fewer scattered localizations than those from ThunderSTORM and WindSTORM. As a result, the FRC resolution achieved by QC-STORM is better than those from both ThunderSTORM and WindSTORM (Fig. 9(c)). On the other hand, for an ROI with low fluorescence background (Fig. 9(c)), QC-STORM exhibits the same FRC resolution as ThunderSTORM, and a slightly better FRC resolution than WindSTORM. These results are also verified by the intensity profile analysis of two cross-linked microtubules (Fig. 9(d)).
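The precision filtering mentioned in point 2) can be sketched as below. The 30 nm cut-off and the record layout are illustrative assumptions; the actual criterion lives in the QC-STORM source code:

```python
def filter_localizations(locs, max_precision_nm=30.0):
    """Keep only localizations whose estimated precision is better
    (i.e. a smaller value in nm) than the cut-off, so that imprecise
    fits do not scatter across the rendered image."""
    return [loc for loc in locs if loc["precision_nm"] <= max_precision_nm]

locs = [{"x": 1.0, "y": 2.0, "precision_nm": 12.0},   # kept
        {"x": 3.0, "y": 4.0, "precision_nm": 45.0}]   # dropped
assert len(filter_localizations(locs)) == 1
```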

We also evaluated the localization performance of QC-STORM using an experimental open dataset with sparse emitters from astigmatism 3D imaging (Fig. 10(a)) [32]. QC-STORM requires 34 s to process the 112683 images with 324 × 271 pixels in each image, corresponding to a full data processing time of 0.3 ms per image. For the same open dataset, ThunderSTORM requires 4.6 hours to finish the full image processing, a much slower 146 ms per image. Moreover, QC-STORM achieves an FRC resolution of 36 nm, which is better than that from ThunderSTORM (44 nm). The better resolution of QC-STORM over ThunderSTORM is also verified by the line profile analysis of microtubules (Figs. 10(b)–10(c), upper row), where the two microtubules are clearly resolved by QC-STORM, but are blurred by ThunderSTORM. The reason is that MLEwt in the QC-STORM plug-in provides more reliable localizations for contaminated molecules, which occur with high probability at the cross-linked regions of microtubules. Additionally, we observe that the calculated depths of the microtubules are slightly different between QC-STORM and ThunderSTORM (Fig. 10(b)). This difference probably results from the different methods for calibration curve measurement. The calibration curve in our QC-STORM is measured between z depth and (σx² − σy²) [22], while ThunderSTORM adopts the calculation method from Huang et al. [18]. In fact, the same open dataset has been analyzed by several software packages [37], and the reported depths differ. However, it is hard to determine which result is more accurate, since no ground truth is provided for this experimental open dataset.
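As a sketch of the (σx² − σy²) calibration lookup mentioned above, the toy table below linearly interpolates depth from the fitted PSF widths. The calibration samples are made up for illustration; a real curve is measured from beads and is generally not linear:

```python
import bisect

# Hypothetical calibration samples: (sigma_x^2 - sigma_y^2, z in nm),
# sorted by the first component.
CALIBRATION = [(-2.0, -400.0), (-1.0, -200.0), (0.0, 0.0),
               (1.0, 200.0), (2.0, 400.0)]

def z_from_sigma(sx, sy):
    """Linearly interpolate the axial depth from the fitted PSF widths."""
    d = sx * sx - sy * sy
    keys = [k for k, _ in CALIBRATION]
    i = bisect.bisect_left(keys, d)
    i = max(1, min(i, len(CALIBRATION) - 1))      # clamp to table range
    (d0, z0), (d1, z1) = CALIBRATION[i - 1], CALIBRATION[i]
    return z0 + (z1 - z0) * (d - d0) / (d1 - d0)

assert z_from_sigma(1.0, 1.0) == 0.0              # symmetric PSF -> focal plane
assert abs(z_from_sigma(1.5, 1.0) - 250.0) < 1e-9 # d = 1.25 -> z = 250 nm
```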


Fig. 10. Evaluation of the localization performance between QC-STORM and ThunderSTORM using an experimental 3D open dataset. (a) A super-resolution image reconstructed by QC-STORM. (b) Enlarged super-resolution images reconstructed by the two software packages. The images correspond to the marked area in (a). The upper images were processed from the original open dataset with low activation density, while the lower images were processed from a modified open dataset with high activation density. The FRC resolution and the total localization number are shown in the upper-left corners. (c) Spatial resolution characterized by the intensity profiles of two microtubules at the marked positions in (b).


To evaluate the performance of QC-STORM in processing 3D raw images with high activation density, we modified the experimental open dataset by merging every 10 images into one image, obtaining 11268 raw images. We found that QC-STORM takes 8 s to process the modified dataset (or 0.7 ms per image), while ThunderSTORM spends 26 hours on the same dataset (or 8 s per image). Moreover, the FRC resolution achieved by QC-STORM is 50 nm, which is slightly better than that by ThunderSTORM (53 nm). The intensity profile analysis of two closely spaced microtubules also confirms this finding (Figs. 10(b)–10(c)).
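The dataset modification above (summing every 10 consecutive frames to raise the apparent activation density, dropping any leftover frames) can be sketched as follows. Frames are modeled as flat per-pixel photon lists for brevity; a real implementation would use numpy arrays and correct for the camera offset accumulated by summing:

```python
def merge_frames(frames, group=10):
    """Sum each run of `group` consecutive frames pixel-wise; frames
    that do not fill a complete group are discarded."""
    merged = []
    for i in range(0, len(frames) - len(frames) % group, group):
        chunk = frames[i:i + group]
        merged.append([sum(px) for px in zip(*chunk)])
    return merged

# Toy example with 2-pixel frames and groups of 2.
frames = [[1, 2], [3, 4], [5, 6], [7, 8]]
assert merge_frames(frames, group=2) == [[4, 6], [12, 14]]
```

Note that 112683 frames merged in groups of 10 yields 11268 merged frames, consistent with the count reported above.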

3.5 QC-STORM for real-time image acquisition feedback

Benefiting from the fast data processing speed, QC-STORM can provide real-time statistics of many parameters (for example, fluorescence signal intensity, photon background intensity, SNR, PSF width, localization precision). These statistics are useful for enabling online data acquisition optimization [1012]. Here we demonstrate that SNR can be used for active axial drift correction, and the percentage of sparse emitters can be used for optimizing activation laser intensity.

The axial drift correction is based on real-time maximization of the SNR during image acquisition. Compared with drift correction methods using fiducial markers or bright-field images [40–42], our method does not require extra cost or effort in sample preparation or imaging system modification, and thus is beneficial for cost-efficient localization microscopy [43,44]. Our drift correction method is described below. After acquiring every 500 images (or another user-defined number, depending on the exposure time of each raw image and the stability of the imaging system), QC-STORM drives a step motor (SS1704A20A with MD-2322 controller, Samsr motor, Japan) connected to the focusing knob of the microscope. Raw images are captured from 5 successive focal planes after moving the focusing knob in 30 nm steps, and are then analyzed by QC-STORM to find the focal plane where the SNR is maximal. This process can be repeated several times to correct larger focal drift or when moving to a new FOV. Moreover, the drift correction is fully automatic, and the raw images used for the drift correction are not mixed with the raw images for SRLM experiments. That is to say, the axial drift correction from QC-STORM is almost transparent to SRLM experiments. We assessed the validity of this axial drift correction method by comparing 2D imaging results with and without axial drift correction. A total of 8000 raw images were acquired and analyzed from the microtubules in U-2 OS cells. The results show that both the SNR and the total photon number remain stable during the whole image acquisition process when the axial drift correction is applied, while these two parameters decrease notably (15% – 30%) when the axial drift correction is turned off (Figs. 11(a)–11(b)).
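The focus search above can be sketched as a small loop. Here `measure_snr_at` stands in for the hardware step of moving the motor, grabbing a frame, and reading the SNR reported by QC-STORM, and is an assumption of this sketch:

```python
def find_best_focus(measure_snr_at, current_z_nm, step_nm=30.0, n_planes=5):
    """Probe n_planes focal positions centered on the current one,
    spaced step_nm apart, and return the position with the highest SNR."""
    half = n_planes // 2
    candidates = [current_z_nm + step_nm * (i - half) for i in range(n_planes)]
    return max(candidates, key=measure_snr_at)

# Toy SNR landscape peaking 30 nm above the current focal position.
snr = lambda z: -abs(z - 30.0)
assert find_best_focus(snr, current_z_nm=0.0) == 30.0
```

Repeating the call with the returned position as the new `current_z_nm` mirrors the iterative correction of larger drifts described above.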

We can also use QC-STORM to provide feedback for controlling the intensity of the activation laser, thus stabilizing the activation density of molecules. First, we estimate the activation density using the percentage of single-emitter localizations (see Fig. 3(b)). Then, we calculate the error between the observed activation density and the user-expected activation density. The error is used as the input to a proportional–integral–derivative (PID) controller, whose output adjusts the activation laser intensity via a digital-to-analog converter. For simplicity, the derivative coefficient is set to 0, and the proportional and integral coefficients are obtained from preliminary experiments to enable a stable and quick response. Through experimental 2D imaging of the microtubules in U-2 OS cells, we verify that both the number of activated molecules (Fig. 11(c)) and the percentage of single-molecule localizations (Fig. 11(d)) in each raw image can be stabilized during the whole image acquisition process, after enabling the activation density feedback. In contrast, when the activation density feedback is not applied, the number of activated molecules per frame decreases by 50% by the end of the experiment (Fig. 11(c)). Note that the jump in the number of activated molecules is due to overcompensation from the axial drift correction.
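With the derivative term set to zero as stated, the feedback loop above reduces to a PI controller. The gains and setpoint below are placeholders, not the experimentally tuned values:

```python
class ActivationController:
    """PI controller for the activation laser, following the scheme
    described above. A positive output means 'raise the intensity'."""

    def __init__(self, kp, ki, setpoint):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0

    def update(self, observed_density):
        # Error between the user-expected and the observed density,
        # estimated from the percentage of single-emitter localizations.
        error = self.setpoint - observed_density
        self.integral += error
        return self.kp * error + self.ki * self.integral

pid = ActivationController(kp=0.5, ki=0.1, setpoint=2.0)
# Observed density below target -> positive output (increase intensity).
assert pid.update(1.0) > 0.0
```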


Fig. 11. Real-time feedback control of image SNR and activation density. (a-b) The dependence of the normalized SNR and the total signal photons on the number of raw image frames. The axial drift correction is turned ON (with correction) or OFF (no correction). (c-d) The dependence of the number of localized molecules and the percentage of sparse emitter localizations on the number of raw image frames. The activation density control is turned ON (with feedback) or OFF (no feedback).


3.6 Combining QC-STORM with ANNA-PALM for higher imaging speed

QC-STORM enables real-time image processing for ROIs containing multiple emitters, thus significantly improving the imaging speed of SRLM for a given resolution [25,45]. On the other hand, a deep-learning-based method called ANNA-PALM is able to generate a super-resolution image from a significantly smaller number of localizations, thus massively increasing the imaging speed compared with traditional methods using sparse emitter localization. Considering that ANNA-PALM still requires a sufficient number of localizations from conventional localization algorithms, here we combine QC-STORM with ANNA-PALM to provide a further improvement in the imaging speed. Using images with an activation density of 3.5 × 10³ molecules per image (Fig. 12), we investigated the relationship between the quality of the reconstructed super-resolution images and the number of raw images. We performed intensity profile analysis of two cross-linked microtubules (Figs. 12(a)–12(b)), and found that QC-STORM requires a total of 800 images to resolve the two microtubules, while the combination of ANNA-PALM with QC-STORM can resolve the structure using only 300 images (Fig. 12(c)). However, when combining ANNA-PALM with sparse emitter localization (as demonstrated in the original paper [9]), a total of 1000 raw images would be required to reconstruct a super-resolution image with similar resolving power. That is to say, the imaging speed of ANNA-PALM can be further improved (3 times in this study) by combining ANNA-PALM with a suitable multi-emitter localization algorithm (QC-STORM in this study). It is also worth pointing out that the combined approach yields a larger separation distance for the two microtubules (Fig. 12(c), 105 nm from QC-STORM and 118 nm from the combination), indicating a further need to investigate the benefits and drawbacks of the combined approach. Moreover, ANNA-PALM requires 5 minutes to process the super-resolution image shown in Fig. 12(a) (80 µm × 86 µm, 20 nm pixel size), which is apparently too slow to be used in real-time image acquisition optimization during SRLM experiments.


Fig. 12. The improvement of imaging speed after combining ANNA-PALM with QC-STORM. (a) A super-resolution image reconstructed by QC-STORM. Here a total of 800 raw images was used, and the FOV equals 80 µm × 86 µm. (b) Super-resolution images reconstructed from different numbers of raw images. These images originate from the rectangular area marked in (a). QC-STORM or a combination of QC-STORM and ANNA-PALM was used to process the raw images. (c) Line profiles of the marked areas in (b).


4. Conclusion

We adopt the divide and conquer strategy and present a user-friendly Java-based plug-in called QC-STORM that is capable of fast MLE fitting of multiple emitters. QC-STORM consists of a series of MLE-based localization algorithms, including MLEbfgs, MLEwt, MLE2e and MLE3e, and is capable of processing simulated and experimental 2D and 3D ROIs at a speed 3–4 orders of magnitude faster than the popular fitting-based ThunderSTORM, with only slightly worse localization precision. Compared with the newly reported non-iterative WindSTORM, QC-STORM exhibits a similar image processing speed, with a reduction in localization precision and detection rate. Note that WindSTORM for 3D imaging is not currently available, probably because the complicated pre-processing procedures in the ROI identification result in distorted emission patterns for 3D localization.

We demonstrate that QC-STORM could provide real-time feedback for active axial drift correction and activation density optimization. We also show that combining QC-STORM with ANNA-PALM could further increase the imaging speed. We provide user-friendly plug-ins of QC-STORM for ImageJ and Micro-Manager software.

Finally, we point out that the performance of the current QC-STORM is still limited by the preset maximum number of emitters to be localized (that is, 3 emitters for 2D imaging and 2 emitters for 3D imaging), because we currently aim to provide real-time data processing and image acquisition optimization for raw images with 1024 × 1024 pixels and 10 ms exposure time (which is commonly used in SRLM with sCMOS cameras). Further improvement of the performance of QC-STORM could be made possible by using multiple GPUs, or a better pre-processing method for ROI identification.

Funding

National Natural Science Foundation of China (NSFC) (81427801, 81827901); National Basic Research Program of China (2015CB352003); Science Fund for Creative Research Groups (61721092); Fundamental Research Funds for the Central Universities (2018KFYXKJC039); Director Fund of WNLO.

Acknowledgments

We thank Daniel Sage for evaluating QC-STORM with open datasets.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. Y. M. Sigal, R. Zhou, and X. Zhuang, “Visualizing and discovering cellular structures with super-resolution microscopy,” Science 361(6405), 880–887 (2018). [CrossRef]  

2. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]  

3. S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91(11), 4258–4272 (2006). [CrossRef]  

4. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]  

5. P. Almada, S. Culley, and R. Henriques, “PALM and STORM: Into large fields and high-throughput microscopy with sCMOS detectors,” Methods 88, 109–121 (2015). [CrossRef]  

6. H. Ma, J. Xu, and Y. Liu, “WindSTORM: Robust online image processing for high-throughput nanoscopy,” Sci. Adv. 5(4), eaaw0683 (2019). [CrossRef]  

7. R. Strack, “Deep learning advances super-resolution imaging,” Nat. Methods 15(6), 403 (2018). [CrossRef]  

8. E. Nehme, L. E. Weiss, T. Michaeli, and Y. Shechtman, “Deep-STORM: super-resolution single-molecule microscopy by deep learning,” Optica 5(4), 458–464 (2018). [CrossRef]  

9. W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36(5), 460–468 (2018). [CrossRef]  

10. S. J. Holden, T. Pengo, K. L. Meibom, C. F. Fernandez, J. Collier, and S. Manley, “High throughput 3D super-resolution microscopy reveals Caulobacter crescentus in vivo Z-ring organization,” Proc. Natl. Acad. Sci. U.S.A. 111(12), 4566–4571 (2014). [CrossRef]

11. A. Kechkar, D. Nair, M. Heilemann, D. Choquet, and J.-B. Sibarita, “Real-Time Analysis and Visualization for Single-Molecule Based Super-Resolution Microscopy,” PLoS One 8(4), e62918 (2013). [CrossRef]  

12. M. Štefko, B. Ottino, K. M. Douglass, and S. Manley, “Autonomous illumination control for localization microscopy,” Opt. Express 26(23), 30882–30900 (2018). [CrossRef]  

13. A. E. S. Barentine, Y. Lin, M. Liu, P. Kidd, L. Balduf, M. R. Grace, S. Wang, J. Bewersdorf, and D. Baddeley, “3D Multicolor Nanoscopy at 10,000 Cells a Day,” bioRxiv, 606954 (2019).

14. A. Beghin, A. Kechkar, C. Butler, F. Levet, M. Cabillic, O. Rossier, G. Giannone, R. Galland, D. Choquet, and J.-B. Sibarita, “Localization-based super-resolution imaging meets high-content screening,” Nat. Methods 14(12), 1184–1190 (2017). [CrossRef]  

15. M. Shannon and D. M. Owen, “Bridging the Nanoscopy-Immunology Gap,” Front. Phys. 6, 157 (2019). [CrossRef]  

16. M. Mattiazzi Usaj, E. B. Styles, A. J. Verster, H. Friesen, C. Boone, and B. J. Andrews, “High-Content Screening for Quantitative Cell Biology,” Trends Cell Biol. 26(8), 598–611 (2016). [CrossRef]  

17. A. Small and S. Stahlheber, “Fluorophore localization algorithms for super-resolution microscopy,” Nat. Methods 11(3), 267–279 (2014). [CrossRef]  

18. B. Huang, W. Wang, M. Bates, and X. Zhuang, “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science 319(5864), 810–813 (2008). [CrossRef]  

19. T. Quan, P. Li, F. Long, S. Zeng, Q. Luo, P. N. Hedde, G. U. Nienhaus, and Z.-L. Huang, “Ultra-fast, high-precision image analysis for localization-based super resolution microscopy,” Opt. Express 18(11), 11867–11876 (2010). [CrossRef]  

20. “GeForce 20 series,” https://en.wikipedia.org/wiki/GeForce_20_series.

21. J. Nocedal and S. J. Wright, Numerical optimization (Springer, 2006).

22. Y. Li, M. Mund, P. Hoess, J. Deschamps, U. Matti, B. Nijmeijer, V. J. Sabinina, J. Ellenberg, I. Schoen, and J. Ries, “Real-time 3D single-molecule localization using experimental point spread functions,” Nat. Methods 15(5), 367–369 (2018). [CrossRef]  

23. C. S. Smith, N. Joseph, B. Rieger, and K. A. Lidke, “Fast, single-molecule localization that achieves theoretically minimum uncertainty,” Nat. Methods 7(5), 373–375 (2010). [CrossRef]  

24. I. Mukherjee and S. Routroy, “Comparing the performance of neural networks developed by using Levenberg–Marquardt and Quasi-Newton with the gradient descent algorithm for modelling a multiple response grinding process,” Expert Syst. Appl. 39(3), 2397–2407 (2012). [CrossRef]  

25. F. Huang, S. L. Schwartz, J. M. Byars, and K. A. Lidke, “Simultaneous multiple-emitter fitting for single molecule super-resolution imaging,” Biomed. Opt. Express 2(5), 1377–1393 (2011). [CrossRef]  

26. E. S. Ahmed, A. I. Volodin, and A. A. Hussein, “Robust weighted likelihood estimation of exponential parameters,” IEEE Trans. Reliab. 54(3), 389–395 (2005). [CrossRef]  

27. C. Field and B. Smith, “Robust Estimation: A Weighted Maximum Likelihood Approach,” Int. Stat. Rev. 62(3), 405–424 (1994). [CrossRef]  

28. C. H. Bohrer, X. Yang, Z. Lyu, S.-C. Wang, and J. Xiao, “Improved single-molecule localization precision in astigmatism-based 3D superresolution imaging using weighted likelihood estimation,” bioRxiv, 304816 (2018).

29. M. Ovesný, P. Křížek, J. Borkovec, Z. Švindrych, and G. M. Hagen, “ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging,” Bioinformatics 30(16), 2389–2390 (2014). [CrossRef]  

30. Z. Zhao, B. Xin, L. Li, and Z.-L. Huang, “High-power homogeneous illumination for super-resolution localization microscopy with large field-of-view,” Opt. Express 25(12), 13382–13395 (2017). [CrossRef]  

31. S. van de Linde, A. Löschberger, T. Klein, M. Heidbreder, S. Wolter, M. Heilemann, and M. Sauer, “Direct stochastic optical reconstruction microscopy with standard fluorescent probes,” Nat. Protoc. 6(7), 991–1009 (2011). [CrossRef]  

32. D. Sage, T.-A. Pham, H. Babcock, T. Lukes, T. Pengo, J. Chao, R. Velmurugan, A. Herbert, A. Agrawal, S. Colabrese, A. Wheeler, A. Archetti, B. Rieger, R. Ober, G. M. Hagen, J.-B. Sibarita, J. Ries, R. Henriques, M. Unser, and S. Holden, “Super-resolution fight club: assessment of 2D and 3D single-molecule localization microscopy software,” Nat. Methods 16(5), 387–395 (2019). [CrossRef]  

33. J. Min, C. Vonesch, H. Kirshner, L. Carlini, N. Olivier, S. Holden, S. Manley, J. C. Ye, and M. Unser, “FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data,” Sci. Rep. 4(1), 4577 (2015). [CrossRef]  

34. R. P. J. Nieuwenhuizen, K. A. Lidke, M. Bates, D. L. Puig, D. Grünwald, S. Stallinga, and B. Rieger, “Measuring image resolution in optical nanoscopy,” Nat. Methods 10(6), 557–562 (2013). [CrossRef]  

35. D. Sage, H. Kirshner, T. Pengo, N. Stuurman, J. Min, S. Manley, and M. Unser, “Quantitative evaluation of software packages for single-molecule localization microscopy,” Nat. Methods 12(8), 717–724 (2015). [CrossRef]  

36. H. Babcock, Y. M. Sigal, and X. Zhuang, “A high-density 3D localization algorithm for stochastic optical reconstruction microscopy,” Biomed. Opt. Phase Microsc. Nanosc. 1(1), 6 (2012). [CrossRef]  

37. “Single-Molecule Localization Microscopy • Software Benchmarking,” http://bigwww.epfl.ch/smlm/challenge2016/index.html.

38. F. Huang, T. M. P. Hartwich, F. E. Rivera-Molina, Y. Lin, W. C. Duim, J. J. Long, P. D. Uchil, J. R. Myers, M. A. Baird, W. Mothes, M. W. Davidson, D. Toomre, and J. Bewersdorf, “Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms,” Nat. Methods 10(7), 653–658 (2013). [CrossRef]  

39. S. A. Jones, S.-H. Shim, J. He, and X. Zhuang, “Fast, three-dimensional super-resolution imaging of live cells,” Nat. Methods 8(6), 499–505 (2011). [CrossRef]  

40. R. McGorty, D. Kamiyama, and B. Huang, “Active microscope stabilization in three dimensions using image correlation,” Biomed. Opt. Phase Microsc. Nanosc. 2(1), 3 (2013). [CrossRef]  

41. P. Bon, N. Bourg, S. Lécart, S. Monneret, E. Fort, J. Wenger, and S. Lévêque-Fort, “Three-dimensional nanometre localization of nanoparticles to enhance super-resolution microscopy,” Nat. Commun. 6(1), 7764 (2015). [CrossRef]  

42. S. H. Lee, M. Baday, M. Tjioe, P. D. Simonson, R. Zhang, E. Cai, and P. R. Selvin, “Using fixed fiduciary markers for stage drift correction,” Opt. Express 20(11), 12177–12183 (2012). [CrossRef]  

43. T. Holm, T. Klein, A. Löschberger, T. Klamp, G. Wiebusch, S. van de Linde, and M. Sauer, “A Blueprint for Cost-Efficient Localization Microscopy,” ChemPhysChem 15(4), 651–654 (2014). [CrossRef]  

44. H. Ma, R. Fu, J. Xu, and Y. Liu, “A simple and cost-effective setup for super-resolution localization microscopy,” Sci. Rep. 7(1), 1542 (2017). [CrossRef]  

45. S. J. Holden, S. Uphoff, and A. N. Kapanidis, “DAOSTORM: an algorithm for high-density super-resolution microscopy,” Nat. Methods 8(4), 279–280 (2011). [CrossRef]  



Figures (12)

Fig. 1.
Fig. 1. Principle of MLEwt for sparse emitter localization with signal contamination. (a) Comparison of the recovered signal from conventional MLE (MLEbfgs, upper) and weighted MLE (MLEwt, lower). A contaminated ROI (11 × 11 pixels) is extracted from a raw image taken from astigmatism 3D imaging and processed using either MLEbfgs or MLEwt. The signal is recovered using the fitted parameters. Note that this ROI could also be fitted using a multi-emitter model, at the expense of a slower data processing speed. (b) MLE loss maps from an ROI with signal contamination. In both maps, the fitting parameters other than the x and y positions were fixed and adopted from the fitted result of MLEwt. (c) Averaged horizontal intensity profile of the extracted ROI in (a). The σleft and σright denote the estimated standard deviation widths (in pixels) of the Gaussian weight function on the left and right sides, respectively.
Fig. 2.
Fig. 2. The workflow for ROI identification and extraction.
Fig. 3.
Fig. 3. MLE fitting using the divide and conquer strategy. (a) The workflow. (b-c) The percentage of ROIs containing different numbers of emitters under different activation densities. Here (b) is for 2D raw images, and (c) is for astigmatism 3D raw images.
Fig. 4.
Fig. 4. Comparison of the localization performance among MLEbfgs, MLEwt and ThunderSTORM for simulated 2D raw images. (a) The dependence of RMSE on activation density. (b) The dependence of Jaccard index on activation density. (c) The dependence of run time on activation density. Here the run time accounts for the entire data processing routine for a single raw image, including ROI extraction, molecule localization, and super-resolution image rendering. The simulated images contain 1024 × 1024 pixels with a pixel size of 100 nm. Both QC-STORM and ThunderSTORM were set to work in the sparse emitter mode when the run time was evaluated. Both the MLEbfgs and the MLEwt algorithms were used when QC-STORM was working in the sparse emitter mode. The letter S denotes sparse emitter mode.
Fig. 5.
Fig. 5. Comparison of the localization performance among MLEbfgs, MLEwt and ThunderSTORM for simulated astigmatism 3D raw images. (a) The dependence of lateral RMSE on activation density. (b) The dependence of axial RMSE on activation density. (c) The dependence of Jaccard index on activation density. (d) The dependence of run time on activation density. Here the run time accounts for the entire data processing routine for a single raw image. Other settings are similar to those in Fig. 4.
Fig. 6.
Fig. 6. Comparison of the localization performance of three software packages for 2D raw images. (a) The dependence of RMSE on activation density. (b) The dependence of Jaccard index on activation density. (c) The dependence of run time on activation density. Here the run time accounts for the entire data processing routine for a single raw image. Other settings are similar to those in Fig. 4. The letter M denotes multi-emitter mode; WindSTORM has only a multi-emitter mode. We do not show the run time of WindSTORM, since the GPU version of WindSTORM does not run correctly under our hardware configuration.
Fig. 7.
Fig. 7. Comparison of the localization performance of ThunderSTORM and QC-STORM for astigmatism 3D raw images. (a-b) The dependence of lateral and axial RMSE on activation density. (c) The dependence of Jaccard index on activation density. (d) The dependence of run time on activation density. Here the run time accounts for the entire data processing time for a single image of 1024 × 1024 pixels. The letter M denotes multi-emitter mode.
Fig. 8.
Fig. 8. Comparison of the localization performance of three software packages using open datasets. (a) Evaluation of RMSE and Jaccard index on four 2D imaging datasets. (b) Evaluation of lateral and axial RMSE, and Jaccard index, on four astigmatism 3D imaging datasets. The acronyms LD and HD denote low density and high density, respectively. MT1 and MT2 are high-SNR datasets. MT3, MT4, ER1 and ER2 are low-SNR datasets.
Fig. 9.
Fig. 9. Evaluation of the localization performance among QC-STORM, ThunderSTORM and WindSTORM using experimental 2D images. (a) A super-resolution image reconstructed from a total of 1000 raw images with 1024 × 1024 pixels. The images were processed by QC-STORM. The estimated full data processing times for the different software packages are shown in the top-right corner. (b) Enlarged super-resolution images of the boxed region in (a). A representative raw image shows a non-uniform fluorescence background. The three software packages were used to process the raw images independently. The FRC resolution and the total number of localizations are shown in the lower-right corners. (c) Enlarged super-resolution images of the two ROIs indicated in (b). The upper row shows an area with high fluorescence background, and the lower row shows an area with low fluorescence background. The localization number and the FRC resolution of these ROIs are also shown in the images. (d) Spatial resolution characterized by the intensity profiles at the marked positions in (c).
Fig. 10.
Fig. 10. Evaluation of the localization performance of QC-STORM and ThunderSTORM using an experimental 3D open dataset. (a) A super-resolution image reconstructed with QC-STORM. (b) Enlarged super-resolution images reconstructed with the two software packages. The images correspond to the marked area in (a). The upper images were processed from the original open dataset with low activation density, while the lower images were processed from a modified open dataset with high activation density. The FRC resolution and the total localization number are shown in the upper-left corners. (c) Spatial resolution characterized by the intensity profiles of two microtubules at the marked positions in (b).
Fig. 11.
Fig. 11. Real-time feedback control of image SNR and activation density. (a-b) The dependence of normalized SNR and total signal photons on the number of raw image frames. The axial drift correction is turned ON (with correction) or OFF (no correction). (c-d) The dependence of the number of localized molecules and the percentage of sparse emitter localizations on the number of raw image frames. The activation density control is turned ON (with feedback) or OFF (no feedback).
Fig. 12.
Fig. 12. The improvement in imaging speed after combining ANNA-PALM with QC-STORM. (a) A super-resolution image reconstructed with QC-STORM. Here a total of 800 raw images was used, and the FOV is 80 µm × 86 µm. (b) Super-resolution images reconstructed from different numbers of raw images. These images originated from the rectangular area marked in (a). QC-STORM or a combination of QC-STORM and ANNA-PALM was used to process the raw images. (c) Line profiles of the marked areas in (b).

Equations (8)


$$I_{i,j} = A\, e^{-\left(\frac{(i-x_0)^2}{2\sigma_x^2} + \frac{(j-y_0)^2}{2\sigma_y^2}\right)} + B$$

$$\min L = \sum_{1 \le i,j \le N} \left( I_{i,j} - q_{i,j} \ln I_{i,j} \right)$$

$$\min L = \sum_{1 \le i,j \le N} F_{i,j} = \sum_{1 \le i,j \le N} \left( I_{i,j} - q_{i,j} - q_{i,j} \left( \ln I_{i,j} - \ln q_{i,j} \right) \right)$$

$$I_{i,j} = 128 \times A\, e^{-0.1 \times \sigma \left( (i-x_0)^2 + (j-y_0)^2 \right)} + 64 \times B$$

$$\min L = \sum_{1 \le i,j \le N} W_{i,j} F_{i,j}$$

$$W_{i,j} = e^{-\left(\frac{(i-x_w)^2}{2\sigma_{wx}^2} + \frac{(j-y_w)^2}{2\sigma_{wy}^2}\right)}$$

$$\int_0^{+\infty} A\, e^{-\frac{x^2}{2\sigma^2}}\, dx = A \sigma \sqrt{\frac{\pi}{2}}$$

$$I_{i,j} = \sum_{k=1}^{M} A_k\, e^{-\left(\frac{(i-x_{0k})^2}{2\sigma_{xk}^2} + \frac{(j-y_{0k})^2}{2\sigma_{yk}^2}\right)} + B$$
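To make the loss functions above concrete, the following is a minimal numerical sketch (not the authors' GPU implementation) that evaluates the per-pixel MLE residual $F_{i,j}$ and the Gaussian-weighted loss $L = \sum W_{i,j} F_{i,j}$ for the single-emitter Gaussian PSF model. The function names and the NumPy-based structure are illustrative assumptions.

```python
import numpy as np

def gaussian_psf(size, A, x0, y0, sx, sy, B):
    """Elliptical Gaussian PSF model I_ij (first equation above)."""
    i, j = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    return A * np.exp(-((i - x0) ** 2 / (2 * sx ** 2)
                        + (j - y0) ** 2 / (2 * sy ** 2))) + B

def mle_residual(I, q):
    """Per-pixel residual F_ij = I - q - q*(ln I - ln q).

    F_ij >= 0 for all positive I, q, and F_ij = 0 only when the
    model I matches the measured photon count q.
    """
    return I - q - q * (np.log(I) - np.log(q))

def gaussian_weight(size, xw, yw, swx, swy):
    """Gaussian weight W_ij centered on the emitter of interest."""
    i, j = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    return np.exp(-((i - xw) ** 2 / (2 * swx ** 2)
                    + (j - yw) ** 2 / (2 * swy ** 2)))

def weighted_mle_loss(I, q, W):
    """Weighted MLE loss L = sum_ij W_ij * F_ij."""
    return float(np.sum(W * mle_residual(I, q)))
```

Because $F_{i,j}$ vanishes only where the model matches the data, the loss is minimized at the true parameters, and the weight $W_{i,j}$ suppresses the contribution of pixels contaminated by a neighboring emitter, which is the core idea behind MLEwt in Fig. 1.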