
Live-SIMBA: an ImageJ plug-in for the universal and accelerated single molecule-guided Bayesian localization super resolution microscopy (SIMBA) method

Open Access

Abstract

Live-cell super-resolution fluorescence microscopy techniques allow biologists to observe subcellular structures, interactions and dynamics at the nanoscale. Among them, single molecule-guided Bayesian localization super resolution microscopy (SIMBA) and its derivatives achieve a spatial resolution of about 50 nm and a temporal resolution of 0.1–2 s in living cells with simple off-the-shelf total internal reflection fluorescence (TIRF) equipment. However, SIMBA and its derivatives are limited by the requirement for a dual-channel dataset or a specially designed single-channel dataset, by time-consuming computation over extended fields of view, and by the lack of a real-time visualization tool. Here, we propose a universal and accelerated SIMBA ImageJ plug-in, Live-SIMBA, for time-series analysis in living cells. Live-SIMBA circumvents the requirement for a dual-channel dataset with an intensity-based sampling algorithm and improves computing speed with multi-core parallel computing. Live-SIMBA also better resolves weak signals inside specimens through adjustable background estimation and a distance-threshold filter. With improved fidelity of reconstructed structures, greatly accelerated computation, and real-time visualization, Live-SIMBA demonstrates extended capabilities in live-cell super-resolution imaging.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In the past few decades, super resolution (SR) microscopy techniques have overcome the diffraction limit of light and offered unprecedented abilities to observe subcellular structures, interactions and dynamics at the molecular scale [1]. Among them, single-molecule localization microscopy (SMLM) [2–4] utilizes photoswitchable or photoconvertible fluorophores to isolate molecules so that only a sparse set of single molecules appears in each camera frame. By sequentially localizing each isolated molecule, the underlying structures can be resolved with a precision below 5 nm [5]. To achieve such high spatial resolution, SMLM requires long acquisition times, typically tens of thousands of frames. This "time-for-space" strategy therefore limits the achievable temporal resolution in live-cell imaging [6].

To achieve live-cell SR imaging, one solution is to recognize the overlapping fluorophores in each frame using multiple-emitter fitting [7–12]. However, the localization accuracy of these methods decreases dramatically as molecular density increases. To further improve temporal resolution, methods that analyze the fluctuation of the image sequence in the time domain have been proposed, such as Bayesian analysis of the blinking and bleaching (3B) [13], super-resolution optical fluctuation imaging (SOFI) [14], balanced super-resolution optical fluctuation imaging (bSOFI) [15] and super-resolution radial fluctuations (SRRF) [16]. By adding this extra temporal dimension to the calculation, ultra-high-density fluorophores can be precisely localized, greatly improving both temporal and spatial resolution. However, these methods either struggle to build accurate models, are sensitive to fluorescence intensity or the number of frames in the image sequence, or are time consuming. Significant improvements in building accurate models, reducing artificial structures, realizing higher temporal resolution and accelerating runtime have been achieved by single molecule-guided Bayesian localization microscopy (SIMBA) and its derivatives [17–19]. By combining Bayesian inference with SMLM analysis or deep learning, SIMBA and its derivatives achieve a spatial resolution of about 50 nm and a temporal resolution of 0.1–2 s in living cells with simple off-the-shelf total internal reflection fluorescence (TIRF) equipment. However, several problems still limit the extended capabilities of SIMBA: (i) the requirement for a dual-channel dataset or a specially designed single-channel dataset limits universal application; (ii) time- and memory-consuming calculations limit extended field-of-view (FOV), long-term SR imaging in living cells; (iii) the lack of a real-time visualization tool and the complex parameters inherent in Bayesian inference limit accessibility for biological scientists.

Here, we propose a universal and accelerated method, Live-SIMBA, for large-FOV long-term SR reconstruction in live cells. Live-SIMBA eliminates the need for dual-channel imaging, or single-channel imaging with special design, by using an intensity-based sampling algorithm, which greatly expands the universality of the SIMBA method. Live-SIMBA also better resolves weak signals inside specimens through adjustable background estimation and a distance-threshold filter. To further accelerate the calculation, Live-SIMBA reduces the memory demand of the computation and applies multi-core parallel computing. Moreover, an interactive and modular ImageJ plug-in is provided to simplify usage and give real-time visualization of the reconstruction. Comprehensive experiments demonstrate that the structures resolved by Live-SIMBA have higher fidelity than current state-of-the-art methods, including Bayesian inference (3B, SIMBA) and higher-order statistical analysis (SOFI, bSOFI and SRRF). Moreover, Live-SIMBA achieves a thousands-fold acceleration over 3B and a 25-fold acceleration over SIMBA on a moderate FOV (60×60 pixels, 1 pixel = 100 nm) on a personal computer. With improved reconstruction fidelity, greatly accelerated computation, and real-time visualization, Live-SIMBA demonstrates extended capabilities in long-term live-cell SR imaging.

2. Methods

Live-SIMBA contains three main components as shown in Fig. 1: (1) Dual-channel alignment to eliminate shift and deformation in dual-channel dataset; (2) High density reconstruction (i.e. Bayesian inference) to realize universal and accelerated SR reconstruction; (3) Data visualization to provide a real-time interactive tool and output final reconstructed SR images.


Fig. 1. Data analysis pipeline of Live-SIMBA. The three main components include dual-channel alignment, high density reconstruction and data visualization. The detailed implementations of high density reconstruction are illustrated in Fig. 2.


2.1 Dual-channel Alignment

By introducing dual-channel imaging, SIMBA combines Bayesian inference with SMLM analysis to greatly reduce artificial structures and resolve more of the underlying SR structures. This module is provided in Live-SIMBA as well. Typically, photo-convertible fluorescent proteins (PCFPs) (e.g., mEos3.2 [20] or pcStar [18]), which can be converted from green to red emission by a 405-nm laser, are used to collect the dual-channel dataset. The fluorescent signals in the two channels are split by a dichroic mirror and form diffraction-limited images with different modalities (Fig. 1(a)). In the red channel, isolated fluorophores (named candidate spots) can be precisely localized by SMLM analysis. In the green channel, the high-density fluorophores exhibit pronounced on/off blinking and bleaching behavior, which provides the temporal fluctuation information of the image sequence. Since both channels image the same specimen under different lasers, the candidate spots from the red channel are used to guide the Bayesian inference in the green channel.

During image acquisition, the dichroic mirror introduces shift and deformation between the two channels, preventing their correct alignment; the resulting inaccurate correspondence induces artifacts in the Bayesian inference. To eliminate this misalignment, Live-SIMBA provides a dual-channel alignment module for operation in dual-channel mode. Here we adopt an affine transformation model [21] to perform the dual-channel alignment, which is more accurate than a simple translation model, since the affine transformation can solve the multi-scale shift problem using pairs of fluorescent molecules in corresponding channels. An affine transformation is a linear transformation plus a translation in spatial geometry [22], as shown in Eq. (1).

$$\left[\begin{matrix}u\\v\\\end{matrix}\right]=\left[\begin{matrix}a_2 & a_1 & a_0\\b_2 & b_1 & b_0\\\end{matrix}\right]\left[\begin{matrix}x\\y\\1\\\end{matrix}\right]$$
where $(x, y)$ is a coordinate in the red channel, $(u, v)$ is the aligned coordinate in the green channel, and ($a_2$, $a_1$, $a_0$, $b_2$, $b_1$, $b_0$) are the affine transformation parameters. The submatrix $\left [\begin {matrix}a_2 & a_1\\b_2 & b_1\\\end {matrix}\right ]$ represents the scale, shear, and rotation operations, and ($a_0$, $b_0$) represents the translation. Note that the affine transformation requires matched pairs of fluorophores from the two channels.
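As a concrete illustration, the six parameters of Eq. (1) can be estimated from matched bead pairs by linear least squares. The sketch below is our own (function names are not from the plug-in) and assumes NumPy:

```python
import numpy as np

def fit_affine(red_xy, green_uv):
    """Fit the 2x3 affine matrix of Eq. (1) by least squares.

    red_xy, green_uv: (N, 2) arrays of matched bead centers
    localized in the red and green channels (N >= 3).
    """
    red_xy = np.asarray(red_xy, float)
    green_uv = np.asarray(green_uv, float)
    # Augment red coordinates with a 1 for the translation term.
    A = np.hstack([red_xy, np.ones((len(red_xy), 1))])   # (N, 3)
    # Solve A @ M.T ~= green_uv for the 2x3 parameter matrix M.
    M, *_ = np.linalg.lstsq(A, green_uv, rcond=None)
    return M.T  # rows: (a2, a1, a0) and (b2, b1, b0)

def apply_affine(M, xy):
    """Map red-channel coordinates into the green channel."""
    xy = np.asarray(xy, float)
    return xy @ M[:, :2].T + M[:, 2]
```

With more than three bead pairs, the least-squares fit averages out localization noise, which is why several beads spread across the field are preferable to a minimal set.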

2.2 High-density reconstruction

The key idea of SIMBA and its derivatives is to build a factorial hidden Markov model (FHMM) [23] for the entire dataset based on the bleaching and blinking events of fluorophores and obtain the final SR reconstruction from it. The detailed workflow of the SIMBA algorithm is as follows:

Step 1. Candidate spots extraction

An appropriate number of single molecules is extracted from the red channel. After drift correction, the molecules are mapped to the green channel as candidate spots.

Step 2. Model selection

Candidate spots are randomly selected to construct an initial model; the selected molecules are then removed from the candidate pool. Once all candidate spots have been selected, the expansion spots are assigned as the new candidate spots.

Step 3. Model optimization

All molecules in the initial model are clustered by k-means algorithm [24], and for each cluster:

  • (1) The samples of the initial model are calculated using hybrid Markov chain Monte Carlo (MCMC) sampling [25] and the limited forward algorithm [17].
  • (2) For each molecule, four optimized pending positions are calculated using the limited forward algorithm and the modified conjugate gradient method [17]. Whether to add the pending positions as expansion spots is determined by distance and intensity thresholds.
  • (3) The samples of the new expansion spots are calculated and added to the initial model by expansion sampling.

Step 4. SR image generation

By repeatedly executing Step 2 and Step 3, a final SR image is generated. The algorithm terminates when adjacent reconstructed images no longer differ significantly.
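The stopping rule in Step 4 can be made concrete with a simple similarity check between successive reconstructions. The sketch below is our own; the tolerance value is illustrative, not the threshold used in SIMBA:

```python
import numpy as np

def converged(prev_img, curr_img, tol=1e-3):
    """Stop iterating when adjacent reconstructions no longer
    differ significantly (normalized mean absolute difference)."""
    prev = prev_img / max(prev_img.max(), 1e-12)
    curr = curr_img / max(curr_img.max(), 1e-12)
    return float(np.mean(np.abs(prev - curr))) < tol
```

Normalizing both images first makes the criterion insensitive to the overall intensity scale, which grows as more molecules are added to the model.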

Here, we eliminate the need for dual-channel imaging using an intensity-based sampling algorithm and further accelerate the calculation using multi-core parallel computing. Moreover, an adjustable background estimation and a distance-threshold filter are applied to better resolve weak signals inside the specimens, as shown in Fig. 2.


Fig. 2. The overall workflow of Live-SIMBA algorithm. The single molecules (from red channel or generated using intensity-based sampling algorithm) and expanded high-probability molecules will guide the image sequence analysis in green channel by using Bayesian inference. The algorithm is accelerated using parallel computing.


2.2.1 Live-SIMBA with random candidates in single-channel mode

SIMBA and its derivatives combine Bayesian inference with SMLM analysis or deep learning; therefore a dual-channel dataset, or a single-channel dataset with a specially designed neural network model, is required. To provide a more universal algorithm, we propose an intensity-based sampling algorithm that extracts candidate spots directly from the structural intensity of the specimen in a single channel [26]. The key techniques comprise two parts: an intensity distribution calculation that yields the probability distribution of fluorophores, and an intensity-based sampling that extracts the positions of high-probability potential fluorophores.

The fluorophores used in Live-SIMBA have three possible states: emitting, not emitting, and bleached. When a fluorophore is in the emitting state it produces fluorescence, giving the corresponding region a higher intensity than the background. Here, we first calculate the intensity distribution of the specimen as a probability map. This map is then used to guide the selection of the initial fluorophore positions. To increase the signal-to-noise ratio and account for fluorophores appearing throughout the image sequence, the estimated intensity at location $x$, $I(x)$, is calculated as the sum of the fluorescence intensity at location $x$ over all $K$ frames.

$$I(x)=\sum_{i=1}^{K}{\gamma_i(x)}$$
where $K$ is the total number of image frames and $\gamma _i(x)$ is the fluorescence intensity at location $x$ in frame $i$. Next, $I(x)$ is normalized to obtain the probability map $P(x)$ (Eq. (3)). The probability ranges from 0 to 1, where a larger value indicates a higher probability of being selected as a candidate spot at that position.
$${P(x)=\frac{I(x)}{\max_x{I(x)}}}$$
Then, based on this probability map, the positions of potential fluorophores are extracted as candidate spots using the acceptance-rejection method [27].
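Under our reading of Eqs. (2)–(3), the intensity-based sampling can be sketched as follows. This is a minimal NumPy version of our own; function and parameter names are not from the plug-in:

```python
import numpy as np

def sample_candidates(stack, n_spots, rng=None):
    """Draw candidate spot positions by acceptance-rejection
    sampling from the normalized summed-intensity map.

    stack: (K, H, W) image sequence; n_spots: number of candidates.
    """
    rng = np.random.default_rng(rng)
    I = stack.sum(axis=0)      # Eq. (2): sum intensity over K frames
    P = I / I.max()            # Eq. (3): normalize to [0, 1]
    H, W = P.shape
    spots = []
    while len(spots) < n_spots:
        y, x = rng.integers(0, H), rng.integers(0, W)
        # Accept a uniformly proposed position with probability P(y, x).
        if rng.random() < P[y, x]:
            spots.append((y, x))
    return np.array(spots)
```

Because acceptance is proportional to $P(x)$, candidates concentrate where the summed intensity is high, which is exactly the bias the background estimation and distance filter described below are designed to counteract.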

However, the candidate spots generated by the intensity-based sampling algorithm tend to concentrate in high-intensity areas, causing weak signals to be missed and uneven-brightness artifacts in the reconstructions. Here, an adjustable background estimation and a distance-threshold filter are proposed to solve these problems.

First, in SIMBA and its derivatives, the raw image sequence is preprocessed by high-pass filtering with a Gaussian kernel to remove the background offset and then normalized with Z-score normalization (zero mean, unit standard deviation). Since the background level is high and most image areas contain only large amounts of out-of-focus light, the image noise can be modeled as a Gaussian distribution with zero mean and unit standard deviation. However, this hypothesis over-estimates the background: if the image is mostly foreground, the noise variance is mis-estimated. In this situation weak signals are treated as background, so candidate spots in those regions cannot be localized and the corresponding structures have less chance of being reconstructed. We therefore propose an adjustable background estimation: instead of estimating the background from the whole image, we estimate it from the darkest image region. By adjusting the percentage of the image counted as "darkest", the reconstruction can retain more structures with weak signals.

Second, candidate spots are randomly selected to construct the initial model, and during model optimization the pending positions around each candidate spot are calculated and added to the model. The newly calculated positions therefore concentrate in high-intensity areas around the candidate spots, causing uneven brightness in the reconstruction. Here, a distance-threshold filter is used to redirect the selected candidate spots when sampling the initial model: candidate spots within a certain distance of an already selected spot are not selected, keeping the initial model as dispersed as possible so that potential fluorophores in darker areas have more opportunities to be optimized.
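Both filters admit simple implementations. The sketch below is our own, with illustrative defaults: the first function estimates the noise statistics from only the darkest fraction of pixels, and the second greedily disperses the selected candidates:

```python
import numpy as np

def background_stats(img, darkest_fraction=0.9):
    """Estimate background mean/std from only the darkest
    fraction of pixels instead of the whole image."""
    flat = np.sort(img.ravel())
    n = max(1, int(len(flat) * darkest_fraction))
    dark = flat[:n]
    return dark.mean(), dark.std()

def distance_filter(candidates, min_dist):
    """Greedily keep candidates that are at least min_dist apart,
    dispersing the initial model away from bright clusters."""
    kept = []
    for c in candidates:
        if all(np.hypot(c[0] - k[0], c[1] - k[1]) >= min_dist for k in kept):
            kept.append(c)
    return np.array(kept)
```

Lowering `darkest_fraction` corresponds to the adjustable "background level" in the plug-in: a smaller value excludes more bright foreground from the noise estimate, so weaker signals survive thresholding.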

2.2.2 Reduce time and memory consuming

Bayesian inference is time- and memory-consuming for large-FOV high-density reconstruction. For example, reconstructing a 200-frame image sequence over a region of 200 × 300 pixels (1 pixel = 160 nm) requires roughly 11 hours (for 200 iterations) and 3 GB of memory with SIMBA on a personal computer (Intel Core i7-8700 CPU at 3.20 GHz, 8 GB memory). For large-FOV long-term SR imaging in live cells, reconstructions over multiple time periods are required. One way to accelerate the calculation is parallel computing; however, due to the memory limits of the computing system, all reconstructions cannot be performed simultaneously, which limits the achievable speedup.

Live-SIMBA solves this problem in two ways. To reduce time consumption, we adopted Open Multi-Processing (OpenMP) [28], a shared-memory multi-core parallel computing technique, to improve performance. As shown in Fig. 3, after model selection in Live-SIMBA, a k-means algorithm groups all molecules in the initial model into clusters according to their positions. Since model optimization in each cluster is a highly independent task, all clusters can be processed in parallel. In this scheme, the samples of the partial model from each cluster are calculated using hybrid MCMC and the limited forward algorithm. The new expansion spots are then calculated using the limited forward algorithm and the modified conjugate gradient method, forming the expansion samples. After that, all expansion samples from the individual tasks are fused into a new model for the next iteration. Because large-FOV live-cell reconstruction is memory intensive, the relatively abundant CPU memory (compared with a graphics processing unit, GPU) allows OpenMP to carry out model optimization in all clusters simultaneously, greatly accelerating the computation.
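Live-SIMBA implements this with OpenMP in native code; the same divide-by-cluster pattern can be sketched in Python, with a thread pool standing in for OpenMP's shared-memory threads and a trivial stand-in for the per-cluster MCMC optimization (all names here are ours):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def kmeans_clusters(positions, k, iters=20, seed=0):
    """Group molecule positions into k spatial clusters (plain k-means)."""
    positions = np.asarray(positions, float)
    rng = np.random.default_rng(seed)
    centers = positions[rng.choice(len(positions), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(positions[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = positions[labels == j].mean(axis=0)
    return [positions[labels == j] for j in range(k)]

def optimize_cluster(cluster):
    # Stand-in for the per-cluster hybrid MCMC + limited forward step.
    return cluster.mean(axis=0)

def parallel_optimize(positions, k=6, workers=6):
    """Optimize each cluster independently, then fuse the results."""
    clusters = kmeans_clusters(positions, k)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(optimize_cluster, clusters))
    return results  # fused into a new model for the next iteration
```

The key property being illustrated is that the per-cluster tasks share no mutable state, so they can run concurrently and their outputs can simply be concatenated afterward.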


Fig. 3. Flow chart of the Live-SIMBA algorithm with parallel computing. Arrows indicate the direction of the workflow. The process of model optimization is shown in red dashed box, which is a highly independent task and can be accelerated by using parallel computing technique, as shown in blue dashed box. OpenMP: Open Multi-Processing.


In the model optimization shown in Fig. 3, SIMBA and its derivatives use hybrid MCMC and the limited forward algorithm to obtain samples of the initial model made up of single molecules, each sample representing a plausible match between the model and the raw data. To reduce memory consumption, instead of averaging the samples over the high-probability molecule distribution, Live-SIMBA keeps only the sample that best matches the raw image data. This makes the large-scale sample easier to cache in memory and also reduces reconstruction artifacts, since the first several samples of the averaged set introduce uncertainty into the calculation. Furthermore, samples for each cluster can be calculated separately within a restricted region rather than over the whole image, further reducing memory and time consumption. Combining these solutions, the memory demand of SIMBA is reduced from 3 GB to 448 MB (for six CPU cores, about 75 MB per core), and the computation time for the dataset mentioned above is reduced from 11 hours to 33 minutes.

2.3 Data visualization

To extend the accessibility of Live-SIMBA for biological scientists, Live-SIMBA provides an interactive ImageJ plug-in with three modules for time-series analysis in living cells, as shown in Code 1 [29]: (1) a dual-channel alignment module that generates calibrated positions of single molecules from the red channel to the green channel and visualizes the calibrated single molecules (designed for dual-channel mode); (2) a high-density reconstruction module that provides universal and accelerated SR reconstruction for both single-channel and dual-channel modes, with all algorithm parameters adjustable in a graphical user interface; (3) a data-visualization module that gives real-time visual feedback on the reconstruction during the iterative computation. Live-SIMBA uses Bayesian inference to iteratively build the probability maps during model selection and optimization. In each iteration, the algorithm expands the high-probability potential molecules and adds them to the 'model pool'; the SR reconstruction is then generated from the probability maps made up of this pool. In addition, the plug-in provides a recovery mechanism that loads the previous reconstruction in case the program is interrupted.

The visualized SR reconstructed image is generated in the following steps (Fig. 1(c)): First, a magnified empty image is created based on the size of the diffraction-limited image and a given magnification. Then, all calculated positions from the probability maps are assigned to the nearest pixel in the magnified image. The resulting high-resolution image is blurred with a Gaussian kernel with a standard deviation of about 2 pixels. This operation can be treated as a two-dimensional discrete convolution,

$$\left( {I * K} \right)\left( {x,y} \right) = \sum_{u ={-} \infty }^{ + \infty } {\sum_{v ={-} \infty }^{ + \infty } {I\left( {u,v} \right)K\left( {x - u,y - v} \right)} }$$
where $I$ is the magnified image, $K$ is the Gaussian kernel with given size and $*$ is the convolution operation.
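As a sketch of this rendering step (our own minimal version; the plug-in's magnification and kernel size are configurable), positions are binned into a magnified grid and blurred with a separable Gaussian, which is equivalent to the two-dimensional convolution above:

```python
import numpy as np

def render(positions, shape, mag=8, sigma=2.0):
    """Rasterize localized positions into a magnified image and
    blur with a Gaussian kernel (the convolution above)."""
    H, W = shape[0] * mag, shape[1] * mag
    img = np.zeros((H, W))
    for y, x in positions:
        iy, ix = int(round(y * mag)), int(round(x * mag))
        if 0 <= iy < H and 0 <= ix < W:
            img[iy, ix] += 1.0
    # Separable Gaussian blur: 1-D convolution along each axis.
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    img = np.apply_along_axis(np.convolve, 0, img, g, mode="same")
    img = np.apply_along_axis(np.convolve, 1, img, g, mode="same")
    return img
```

Splitting the 2-D Gaussian into two 1-D passes gives the same result as the full 2-D convolution at a fraction of the cost, which matters when the rendering is refreshed at every iteration for real-time feedback.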

3. Results

3.1 Data description

One simulated dataset and four experimental datasets were used to evaluate the performance of Live-SIMBA.

3.1.1 Simulation

The ground truth of the simulated dataset was obtained from the single-molecule localization microscopy (SMLM) challenge (http://bigwww.epfl.ch/smlm/datasets/) [30], corresponding to the Tubulin ConjAL647 (Tub) dataset. The single-molecule positions were downloaded and used to generate a high-density image sequence according to the characteristics of the photo-convertible fluorescent protein mEos3.2 [20] and its associated point spread function and state transitions [19]. For convenience of calculation, the large-field structure of the Tub dataset was cropped into four separate areas, each of 60×60 pixels (1 pixel = 100 nm). Finally, 200 frames of diffraction-limited images were generated for SR reconstruction.

3.1.2 Cell culture, transfection and fixation

U2OS cells were cultured in McCoy's 5A Medium Modified (MCMM) (Life Technologies), and COS-7 cells were cultured in Dulbecco's Modified Eagle Medium (DMEM) supplemented with glucose (Life Technologies), 10% fetal bovine serum (Life Technologies) and penicillin/streptomycin (Hyclone). The cell lines were grown at 37 $^\circ$C with 5% CO2. Transient transfections were conducted with Lipofectamine 2000 Transfection Reagent (Life Technologies) in 12-well plates at 80% cell confluence following the manufacturer's protocol. The medium was exchanged 5 h after transfection. Twelve hours post transfection, cells were digested with trypsin and transferred to a slide pre-treated with fibronectin for another 24 h. Fixation was performed with PBS buffer (pH 7.4) containing 4% paraformaldehyde and 0.2% glutaraldehyde for 15 min at 37 $^\circ$C, followed by three washes with PBS. During live imaging, cells were incubated in phenol red-free DMEM. For dual-channel alignment, 1 $\mu$l of TetraSpeck Microspheres was well mixed with 500 $\mu$l of medium or buffer and added to the sample chamber just before use.

3.1.3 Imaging system and acquisition

A homemade TIRF microscopy system with an Olympus IX71 body (Olympus) and high-NA oil objectives was used for image acquisition. For imaging ER labelled with pCalnexin-pcStar in fixed COS-7 cells (Fig. 4), a 100$\times$, 1.49 NA oil objective (Olympus PLAN APO) was used, with an image pixel size of 68.75 nm. For imaging the actin network labelled with lifeact-pcStar in fixed U2OS cells (Fig. 6), used in [17], and ER labelled with pCalnexin-pcStar in fixed COS-7 cells (Fig. 8), a 100$\times$, 1.49 NA oil objective (Olympus PLAN APO) was used together with a 1.6$\times$ intermediate magnification, giving an image pixel size of 100 nm. For imaging the actin network labelled with lifeact-mEos3.2 in fixed U2OS cells (Fig. 7), used in [18], a 100$\times$, 1.49 NA oil objective (Olympus PLAN APO) was used with an image pixel size of 160 nm.


Fig. 4. Quantification of dual-channel alignment in Live-SIMBA. (a) Overlaid images of fluorescent beads in the two channels. (b) Overlaid images after translation alignment. (c) Overlaid images after affine transformation. (d) RMSE (root-mean-square error) of localized beads between the two channels for the different methods: RG indicates no calibration between the red and green channels, RG_translation indicates calibration using translation, and RG_affine indicates calibration using affine transformation (here, 1 pixel = 68.75 nm). About 15 beads from three small regions were chosen for these transformations; the RMSE of the affine calibration is about 46 and 13.4 times lower than with no calibration and with translation calibration, respectively. Scale bars, 3.5 $\mu$m.


During data acquisition, the maximum power intensity near the back pupil of the objective was 2.5 $mW/\textrm {cm}^2$ for the 405-nm laser (LASOS), 2.5 $W/\textrm {cm}^2$ for the 488-nm laser (Coherent) and 1.5 $kW/\textrm {cm}^2$ for the 561-nm laser (Coherent). For PALM imaging, the 561-nm laser was used to record single molecules while a simultaneous 405-nm laser pulse activated the fluorescent proteins; the 405-nm intensity was kept low so that only a few molecules were activated in each frame. For fixed actin imaging, we acquired continuous time-lapse series at 20-ms (Fig. 6, FOVs: 24×37 and 87×84 pixels) and 50-ms (Fig. 7, FOV: 45×60 pixels) exposures, with each 200 frames used to reconstruct one SR image. For live ER imaging, 4000 sequential frames were taken at an exposure time of 20 ms (Fig. 8, FOV: 26×24 pixels), with each 50 frames used to reconstruct one SR image.

We compared our method with the current state of the art in live-cell SR imaging, including the Bayesian inference based methods 3B and SIMBA and the high-order statistical analysis methods SOFI, bSOFI and SRRF. All experiments were performed on a personal computer with an Intel Core i7-8700 CPU at 3.20 GHz and 8 GB of memory.

3.2 Dual-channel alignment

To verify the performance of dual-channel alignment, we imaged four-color fluorescent beads, which emit light in both the red and green channels and can therefore serve as calibration references, together with ER labeled with pCalnexin-pcStar in COS-7 cells. Since the fluorescent beads are very bright during imaging and each exhibits a Gaussian profile, paired correspondences between the green and red channels can be obtained with high precision by localizing the beads' center coordinates with Gaussian fitting. As shown in Fig. 4, the overlaid dual-channel bead images show that different offsets exist in different regions (Fig. 4(a)). This mismatch cannot be corrected by a translation transformation, which overlaps the center region but leaves the margins mismatched (Fig. 4(b)). In contrast, calibration with the affine transformation makes the dual-channel bead images overlap completely, as shown in Fig. 4(c).

To further quantify the alignment performance, we calculated the root-mean-square error (RMSE) of the localized beads between the red and green channels. Comparing no calibration with calibration by translation and by affine transformation, the RMSE of the affine transformation reached 13 nm (here, 1 pixel = 68.75 nm), about 46 and 13.4 times lower than with no calibration and with translation calibration, respectively.
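The RMSE reported here is, in our understanding, the root of the mean squared Euclidean distance between matched bead positions; a minimal version:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between matched bead coordinates
    localized in the two channels (same units, e.g. nm)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))
```

Evaluating this metric before and after applying the fitted transformation gives the kind of comparison shown in Fig. 4(d).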

3.3 Performance qualification on simulated datasets

To quantify the performance of the Bayesian inference based methods on simulated datasets from the SMLM challenge [30], we compared the ground-truth high-resolution images with the reconstructions of 3B, SIMBA, and Live-SIMBA in single-channel mode (Live-SIMBA SC, randomly sampled candidates guiding the Bayesian inference) and dual-channel mode (Live-SIMBA DC, localized single molecules guiding the Bayesian inference) (Fig. 5). Here we show three representative areas from the dataset; the fourth area and the whole dataset are given in Fig. S1.


Fig. 5. Performance qualification of Bayesian inference based methods on simulated datasets from the SMLM challenge. (a-c) Ground-truth high-resolution images, (d-f) selected diffraction-limited images from the image sequences, (g-r) the reconstructions using 3B (g-i), SIMBA (j-l), Live-SIMBA in single-channel mode (Live-SIMBA SC) (m-o) and Live-SIMBA in dual-channel mode (Live-SIMBA DC) (p-r) on three representative areas of the simulated dataset, respectively. (i-iii) Comparison of normalized intensities taken from the red lines for the different methods.


3.3.1 Comparison of reconstructions

As shown in Fig. 5, the ground-truth high-resolution images have clear intersections between line structures, while the representative diffraction-limited images are blurry and noisy. To resolve the ultrastructures from the diffraction-limited images, we ran 3B, SIMBA and Live-SIMBA for 200 iterations. The reconstructions of both SIMBA and Live-SIMBA are similar to the ground-truth structures in terms of smoothness, continuity, and thickness, whereas the 3B reconstructions consist of interrupted lines and isolated points with relatively thin artifacts. Moreover, Live-SIMBA (Fig. 5(m-o, p-r)) recovered considerably more latent structure than 3B and SIMBA (Fig. 5(j-l)). For example, in the bottom part of Fig. 5(a), three lines interact with each other and form a bifurcation at the tail. Live-SIMBA in both single-channel and dual-channel mode recovered the interacting structures, whereas 3B recovered only one line and SIMBA reconstructed two lines with the third missing. Another region, in the top part of Fig. 5(c), shows two parallel lines. Both SIMBA (Fig. 5(l)) and Live-SIMBA (Fig. 5(p), Fig. 5(r)) reconstructed the proper thickness of this structure, whereas 3B recovered only a thin line. Moreover, Live-SIMBA was able to recover the overlapping structure while SIMBA could not. The normalized line profiles in Fig. 5(i-iii) further show that Live-SIMBA in both single-channel and dual-channel mode achieved similar spatial resolution, consistent with the ground truth, demonstrating that the candidate spots from the intensity-based sampling algorithm in a single channel provide highly confident guidance for the Bayesian inference.

3.3.2 Running time analysis

The running times for resolving the simulated dataset, comprising the four separate Tub patches, are shown in Table 1. Here we used six CPU cores of a personal computer for parallel computing in Live-SIMBA. Benefiting from parallel computing and the improved sampling algorithms, Live-SIMBA achieved about 25-fold acceleration over SIMBA and a thousands-fold acceleration over 3B with the 360 candidates suggested by 3B. Live-SIMBA in single-channel mode was about as efficient as in dual-channel mode.


Table 1. Runtime analysis of 3B, SIMBA and Live-SIMBA on the simulated dataset.

3.4 Performance qualification on experimental datasets

To evaluate the performance on experimental datasets, we compared the current state of the art, including Bayesian inference based methods and high-order statistics methods, with Live-SIMBA in single-channel (Live-SIMBA SC) and dual-channel (Live-SIMBA DC) mode, as shown in Fig. 6 and Fig. 7.


Fig. 6. Performance comparison between 3B and Live-SIMBA on the actin network in a fixed U2OS cell. (a) Superimposed image from 200 diffraction-limited frames. (b-d) The reconstructions from the red box in (a) using 3B (b), Live-SIMBA in single-channel mode (Live-SIMBA SC) (c) and Live-SIMBA in dual-channel mode (Live-SIMBA DC) (d). (e) Comparison of the running times for the above methods: 19 h, 1.25 min and 1.23 min, respectively, a more than 900-fold speedup of Live-SIMBA over 3B in either mode. (f-h) The reconstructions from the green box in (a) using Live-SIMBA with adjustable background estimation at background levels 1, 0.9 and 0.8. Scale bars are 4 $\mu$m (a), 0.5 $\mu$m (b-d) and 1 $\mu$m (f-h).



Fig. 7. Performance comparison of actin network in a fixed U2OS cell by using PALM, SOFI, bSOFI, SRRF and Live-SIMBA. (a) Superimposed fluorescence data from 200 frames showing the diffraction-limited resolution. (b-g) Diffraction-limited (DL) image (b), the reconstructions of PALM analysis with 20,000 frames (c), SOFI, bSOFI and SRRF with 1000 frames (d-f) and Live-SIMBA in single-channel mode (Live-SIMBA SC) with 200 frames (g) from red box region in (a). (h-i) Quantitative mapping results by using SQUIRREL for SRRF and Live-SIMBA in single-channel mode. The RSP indicates resolution-scaled Pearson coefficient and RSE indicates resolution-scaled error. The scale bars are 3 $\mu$m (a, h-i) and 1 $\mu$m (b-g).


3.4.1 Comparison with Bayesian inference based methods

To validate the performance of Live-SIMBA in resolving the underlying SR structures, we imaged the actin network labeled by lifeact-pcStar in U2OS cells. Reconstructions of two local regions from 200 frames were used to evaluate the performance of 3B and Live-SIMBA in both single- and dual-channel modes. In a relatively small region (red box in Fig. 6(a)), the reconstruction of 3B captured the main features of the structures but also generated thin and discontinuous artifacts, indicated by the arrow in Fig. 6(b). In contrast, these artifacts were absent in the Live-SIMBA reconstructions in both single- and dual-channel modes, demonstrating Live-SIMBA's ability to recover latent ultrastructures. Moreover, Live-SIMBA in single-channel mode achieved performance similar to the dual-channel mode. We further compared the running time in this region: 3B took 19 hours, whereas Live-SIMBA took 1.25 min in single-channel mode and 1.23 min in dual-channel mode. Live-SIMBA thus achieves a more than 900-fold acceleration over 3B even for this relatively small FOV.

To resolve the weak signals inside the specimens, we compared reconstructions of the other region (green box in Fig. 6(a)) by Live-SIMBA in single-channel mode using adjustable background estimation with different background levels. The background was estimated from the darkest 100%, 90% and 80% of the image region, respectively. The results (Fig. 6(f-h)) show that more weak signals can be resolved by lowering the background level.
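The adjustable background estimation above can be sketched as follows; the estimator (the mean of the darkest fraction of pixels) and the function name are illustrative assumptions rather than the plug-in's exact implementation:

```python
import numpy as np

def estimate_background(image, level=1.0):
    """Estimate the background from the darkest `level` fraction of
    pixels (level=1.0 uses the whole region; level=0.9 and 0.8
    correspond to the settings compared in the text)."""
    pixels = np.sort(image.ravel())
    n = max(1, int(round(level * pixels.size)))
    return float(pixels[:n].mean())

# Lowering the level excludes the brightest (signal) pixels, so the
# background estimate drops and weak signals rise above it.
img = np.concatenate([np.full(90, 10.0), np.full(10, 100.0)]).reshape(10, 10)
print(estimate_background(img, 1.0))  # 19.0: bright pixels inflate the estimate
print(estimate_background(img, 0.9))  # 10.0: the darkest 90% only
```

The same dim structure that falls below the level-1.0 estimate (19.0) clears the level-0.9 estimate (10.0), which is why the lower levels in Fig. 6(g-h) reveal more weak signals.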


Fig. 8. Live-SIMBA imaging of ER structures in live COS-7 cells. (a) Superimposed fluorescence data from the first 50 frames showing the diffraction-limited resolution. (b) Reconstructed images using Live-SIMBA at selected time points from the green box in (a). (c) The spatial resolution estimated by decorrelation analysis from the reconstructed images in (b). Scale bars are 4 $\mu$m (a) and 1 $\mu$m (b).


3.4.2 Comparison with high-order statistical analysis

Encouraged by the performance of Live-SIMBA in single-channel mode, we further compared it with high-order statistical analysis methods, including the commonly used SOFI, bSOFI and SRRF. We imaged the actin network labeled by lifeact-mEos3.2 in U2OS cells and evaluated reconstructions for both a local region (red box in Fig. 7) and a large FOV (Fig. S2). We reconstructed the actin structures with Live-SIMBA (Live-SIMBA SC) using only 200 raw frames and compared the result with PALM using 20,000 frames and with SOFI, bSOFI and SRRF using 1000 frames. The actin ruffles produced by SRRF and Live-SIMBA resembled the PALM data more closely (Fig. 7(c-g)), whereas SOFI and bSOFI failed to reconstruct clear structures even with the 1000-frame high-density dataset. SRRF captured the main structure but with discontinuities; in contrast, Live-SIMBA provided higher fidelity in the resolved structures. We analyzed the same region with SOFI, bSOFI and SRRF using 200 and 500 frames (Fig. S2), which gave similar results.
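For context on what the compared methods compute, second-order SOFI reduces to a pixel-wise temporal cumulant of the intensity fluctuations. The sketch below, assuming a NumPy stack of shape (frames, height, width), illustrates that principle only and is not the implementation evaluated here:

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI: the pixel-wise second cumulant (variance)
    of the temporal intensity fluctuations.

    stack: ndarray of shape (n_frames, height, width).
    Blinking emitters produce large temporal variance, while static
    background cancels out, sharpening the image for order 2.
    """
    fluctuations = stack - stack.mean(axis=0, keepdims=True)
    return (fluctuations ** 2).mean(axis=0)

# A pixel with a blinking emitter has high temporal variance; a
# constant background pixel gives exactly zero.
rng = np.random.default_rng(0)
stack = np.ones((1000, 4, 4)) * 5.0                 # constant background
stack[:, 1, 1] += rng.integers(0, 2, 1000) * 10.0   # blinking emitter
img = sofi2(stack)
print(img[1, 1] > img[0, 0])  # True
```

This also hints at why SOFI struggles on the 200-frame datasets above: the cumulant estimate needs many frames of independent blinking to converge.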

We further assessed the performance of SRRF and Live-SIMBA in single-channel mode using SQUIRREL (super-resolution quantitative image rating and reporting of error locations), a recent SR quality assessment method [31]. SQUIRREL is based on the assumption that an SR image should be a high-fidelity representation of the underlying nanoscale positions of the imaged fluorophores and their photon emission. By computing the similarity between the diffraction-limited image and the resolution-degraded SR image, it generates a quantitative error map and two metrics: the resolution-scaled Pearson coefficient (RSP) and the resolution-scaled error (RSE). Higher RSP and lower RSE values indicate higher image quality. For a local region (Fig. 7(h-i)), the Live-SIMBA reconstruction shows fewer mismatches with the diffraction-limited image as well as better quantitative metrics, supporting our previous observation.
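In essence, SQUIRREL blurs the SR image back to the diffraction-limited scale and compares it with the measured wide-field image. A minimal sketch of the two global metrics, assuming a Gaussian PSF (the published SQUIRREL pipeline additionally estimates the PSF and an intensity scaling, which this sketch omits):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rsp_rse(sr_image, dl_image, psf_sigma):
    """Resolution-scaled Pearson coefficient (RSP) and error (RSE).

    The SR image is degraded back to the diffraction-limited scale
    (here with a Gaussian PSF of width psf_sigma, an assumption) and
    compared pixel-wise with the measured diffraction-limited image.
    """
    degraded = gaussian_filter(np.asarray(sr_image, dtype=float), psf_sigma)
    a = degraded.ravel()
    b = np.asarray(dl_image, dtype=float).ravel()
    rsp = float(np.corrcoef(a, b)[0, 1])         # higher is better
    rse = float(np.sqrt(np.mean((a - b) ** 2)))  # lower is better
    return rsp, rse

# Sanity check: an SR image that, once blurred, exactly reproduces the
# diffraction-limited image scores RSP = 1 and RSE = 0.
rng = np.random.default_rng(1)
sr = rng.random((64, 64))
dl = gaussian_filter(sr, 2.0)
print(rsp_rse(sr, dl, 2.0))  # (1.0, 0.0)
```

Reconstruction artifacts that have no counterpart in the wide-field data survive the blurring step and show up as local error, which is what the maps in Fig. 7(h-i) visualize.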

3.5 Performance qualification on live-cell datasets

Finally, we performed SR imaging of the ER over a large FOV in live cells using Live-SIMBA in single-channel mode. pCalnexin-pcStar was used to label the ER in COS-7 cells. We acquired a series of time-lapse images with a 20-ms exposure and a 12-s interval, where every 50 frames were used to reconstruct one SR image. Live-SIMBA imaging resolved the dense tubular matrices with their dynamic changes, interactions and rearrangements (Fig. 8(b)). In contrast, the sites forming the tubular structures were relatively stable, showing only small displacements.

We further measured the spatial resolution of selected regions at different time points using decorrelation analysis [32]. Live-SIMBA achieved a spatial resolution in the range of 40 nm to 43 nm and a temporal resolution of 1 s (Fig. 8(c)). The spatial resolution of the entire image, measured by Fourier ring correlation [33] and decorrelation analysis, is shown in Fig. S4(a) and Fig. S4(b). This demonstrates that Live-SIMBA enables large-FOV, long-term SR imaging and makes it possible to observe the dynamic changes of ultrastructures in living cells.
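Fourier ring correlation measures resolution by correlating two independent reconstructions ring by ring in frequency space and reading off where the curve drops below a threshold (commonly 1/7). A compact sketch of the ring-wise correlation, not the cited implementation [33]:

```python
import numpy as np

def frc(img1, img2, n_rings=32):
    """Fourier ring correlation between two independently acquired
    (or split) images of the same structure.

    Returns (spatial frequencies as a fraction of the Nyquist limit,
    correlation per ring); resolution is read off where the curve
    drops below a threshold such as 1/7.
    """
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    h, w = img1.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)  # distance from the DC component
    r_max = min(h, w) // 2
    edges = np.linspace(0.0, r_max, n_rings + 1)
    freqs, corr = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = (r >= lo) & (r < hi)
        num = np.sum(f1[ring] * np.conj(f2[ring])).real
        den = np.sqrt(np.sum(np.abs(f1[ring]) ** 2) * np.sum(np.abs(f2[ring]) ** 2))
        freqs.append((lo + hi) / (2.0 * r_max))
        corr.append(num / den if den > 0 else 0.0)
    return np.array(freqs), np.array(corr)
```

Two identical images give a correlation of 1 in every ring, while independent noise decorrelates at high frequency; the 1/7 crossing converts the curve into a resolution estimate such as the 40-43 nm reported above.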

4. Conclusion

In this paper, we propose an ImageJ plug-in, Live-SIMBA, for large-FOV, long-term time-series analysis in living cells. Live-SIMBA circumvents the requirement for a dual-channel dataset by using an intensity-based sampling algorithm and can therefore operate in both single- and dual-channel modes. Live-SIMBA greatly accelerates the calculation using multi-core parallel computing and improves the structural fidelity of reconstructions and the recovery of weak signals with adjustable background estimation and a distance-threshold filter. The plug-in integrates the whole process into a user-friendly GUI and greatly reduces the complexity of the Bayesian inference parameters, making the software accessible to researchers in the SR field. Live-SIMBA makes it possible to observe the dynamic changes of ultrastructures from minimal frames at second-level temporal resolution. We therefore expect Live-SIMBA to be a useful tool for SR live-cell imaging in cell biology research.

Funding

National Key Research and Development Program of China (2017YFA0504702, 2017YFA0505300, 2017YFE0103900); National Natural Science Foundation of China (21778069, 31421002, 61672493, 61932018, U1611261, U1611263); Beijing Municipal Natural Science Foundation Grant (L182053); Project of the National Laboratory of Biomacromolecules.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

See Supplement 1 for supporting content.

Data Availability

The Live-SIMBA ImageJ plugin for live-cell super-resolution reconstruction is available as Supplementary Software [29]. The software package features an easy-to-use interface covering all steps of Live-SIMBA and runs on the Ubuntu operating system. Further updates will be made freely available at http://ear.ict.ac.cn.

References

1. S. W. Hell, “Far-field optical nanoscopy,” Science 316(5828), 1153–1158 (2007). [CrossRef]  

2. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]  

3. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (storm),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]  

4. S. T. Hess, T. P. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91(11), 4258–4272 (2006). [CrossRef]  

5. Y. M. Sigal, R. Zhou, and X. Zhuang, “Visualizing and discovering cellular structures with super-resolution microscopy,” Science 361(6405), 880–887 (2018). [CrossRef]  

6. S. Cox, “Super-resolution imaging in live cells,” Dev. Biol. 401(1), 175–181 (2015). [CrossRef]  

7. L. Zhu, W. Zhang, D. Elnatan, and B. Huang, “Faster storm using compressed sensing,” Nat. Methods 9(7), 721–723 (2012). [CrossRef]  

8. S. J. Holden, S. Uphoff, and A. N. Kapanidis, “Daostorm: an algorithm for high-density super-resolution microscopy,” Nat. Methods 8(4), 279–280 (2011). [CrossRef]  

9. E. A. Mukamel, H. Babcock, and X. Zhuang, “Statistical deconvolution for superresolution fluorescence microscopy,” Biophys. J. 102(10), 2391–2400 (2012). [CrossRef]  

10. F. Huang, S. L. Schwartz, J. M. Byars, and K. A. Lidke, “Simultaneous multiple-emitter fitting for single molecule super-resolution imaging,” Biomed. Opt. Express 2(5), 1377–1393 (2011). [CrossRef]  

11. T. Quan, H. Zhu, X. Liu, Y. Liu, J. Ding, S. Zeng, and Z.-L. Huang, “High-density localization of active molecules using structured sparse model and Bayesian information criterion,” Opt. Express 19(18), 16963–16974 (2011). [CrossRef]  

12. M. Fazel, M. J. Wester, H. Mazloom-Farsibaf, M. B. Meddens, A. S. Eklund, T. Schlichthaerle, F. Schueder, R. Jungmann, and K. A. Lidke, “Bayesian multiple emitter fitting using reversible jump Markov chain Monte Carlo,” Sci. Rep. 9(1), 13791 (2019). [CrossRef]  

13. S. Cox, E. Rosten, J. Monypenny, T. Jovanovic-Talisman, D. T. Burnette, J. Lippincott-Schwartz, G. E. Jones, and R. Heintzmann, “Bayesian localization microscopy reveals nanoscale podosome dynamics,” Nat. Methods 9(2), 195–200 (2012). [CrossRef]  

14. T. Dertinger, R. Colyer, G. Iyer, S. Weiss, and J. Enderlein, “Fast, background-free, 3d super-resolution optical fluctuation imaging (sofi),” Proc. Natl. Acad. Sci. 106(52), 22287–22292 (2009). [CrossRef]  

15. S. Geissbühler, N. Bocchio, C. Dellagiacoma, M. Geissbühler, C. Berclaz, M. Leutenegger, and T. Lasser, “Balanced super-resolution optical fluctuation imaging (bsofi),” in 18th International Workshop on Single Molecule Spectroscopy and Ultra Sensitive Analysis in the Life Sciences (2012), POST_TALK.

16. N. Gustafsson, S. Culley, G. Ashdown, D. M. Owen, P. M. Pereira, and R. Henriques, “Fast live-cell conventional fluorophore nanoscopy with imagej through super-resolution radial fluctuations,” Nat. Commun. 7(1), 12471 (2016). [CrossRef]  

17. F. Xu, M. Zhang, W. He, R. Han, F. Xue, Z. Liu, F. Zhang, J. Lippincott-Schwartz, and P. Xu, “Live cell single molecule-guided Bayesian localization super resolution microscopy,” Cell Res. 27(5), 713–716 (2017). [CrossRef]  

18. M. Zhang, Z. Fu, C. Li, A. Liu, D. Peng, F. Xue, W. He, S. Gao, F. Xu, and D. Xu, “Fast super-resolution imaging technique and immediate early nanostructure capturing by a photoconvertible fluorescent protein,” Nano Lett. 20(4), 2197–2208 (2020). [CrossRef]  

19. Y. Li, F. Xu, F. Zhang, P. Xu, M. Zhang, M. Fan, L. Li, X. Gao, and R. Han, “Dlbi: deep learning guided Bayesian inference for structure reconstruction of super-resolution fluorescence microscopy,” Bioinformatics 34(13), i284–i294 (2018). [CrossRef]  

20. M. Zhang, H. Chang, Y. Zhang, J. Yu, L. Wu, W. Ji, J. Chen, B. Liu, J. Lu, and Y. Liu, “Rational design of true monomeric and bright photoactivatable fluorescent proteins,” Nat. Methods 9(7), 727–729 (2012). [CrossRef]  

21. W. K. Chen, The Electrical Engineering Handbook (Elsevier, 2004).

22. M. Berger, Geometry (vols. 1-2) (1987).

23. Z. Ghahramani and M. I. Jordan, “Factorial hidden Markov models,” in Advances in Neural Information Processing Systems, (1996), pp. 472–478.

24. J. A. Hartigan, “A k-means clustering algorithm: Algorithm as 136,” Appl. Stat. 28(1), 100–130 (1979). [CrossRef]  

25. J. C. Spall, “Estimation via Markov chain Monte Carlo,” IEEE Control. Syst. Mag. 23(2), 34–45 (2003). [CrossRef]  

26. F. Xu, M. Zhang, Z. Liu, P. Xu, and F. Zhang, “Bayesian localization microscopy based on intensity distribution of fluorophores,” Protein Cell 6(3), 211–220 (2015). [CrossRef]  

27. L. Martino and J. Míguez, “Generalized rejection sampling schemes and applications in signal processing,” Signal Process. 90(11), 2981–2995 (2010). [CrossRef]  

28. R. Chandra, L. Dagum, D. Kohr, R. Menon, D. Maydan, and J. McDonald, Parallel Programming in OpenMP (Morgan Kaufmann, 2001).

29. H. Li, “Live-SIMBA software,” figshare (2020) [retrieved 19 September 2020], https://doi.org/10.6084/m9.figshare.12979535.v3.

30. D. Sage, H. Kirshner, T. Pengo, N. Stuurman, J. Min, S. Manley, and M. Unser, “Quantitative evaluation of software packages for single-molecule localization microscopy,” Nat. Methods 12(8), 717–724 (2015). [CrossRef]  

31. S. Culley, D. Albrecht, C. Jacobs, P. M. Pereira, C. Leterrier, J. Mercer, and R. Henriques, “Quantitative mapping and minimization of super-resolution optical imaging artifacts,” Nat. Methods 15(4), 263–266 (2018). [CrossRef]  

32. A. C. Descloux, K. S. Grussmayer, and A. Radenovic, “Parameter-free image resolution estimation based on decorrelation analysis,” Nat. Methods 16(9), 918–924 (2019). [CrossRef]  

33. R. P. Nieuwenhuizen, K. A. Lidke, M. Bates, D. L. Puig, D. Grünwald, S. Stallinga, and B. Rieger, “Measuring image resolution in optical nanoscopy,” Nat. Methods 10(6), 557–562 (2013). [CrossRef]  

Supplementary Material (2)

Code 1: The ImageJ plugin, source code, test data and manual for Live-SIMBA
Supplement 1: The supplementary note for Live-SIMBA


Figures (8)

Fig. 1.
Fig. 1. Data analysis pipeline of Live-SIMBA. The three main components include dual-channel alignment, high density reconstruction and data visualization. The detailed implementations of high density reconstruction are illustrated in Fig. 2.
Fig. 2.
Fig. 2. The overall workflow of Live-SIMBA algorithm. The single molecules (from red channel or generated using intensity-based sampling algorithm) and expanded high-probability molecules will guide the image sequence analysis in green channel by using Bayesian inference. The algorithm is accelerated using parallel computing.
Fig. 3.
Fig. 3. Flow chart of the Live-SIMBA algorithm with parallel computing. Arrows indicate the direction of the workflow. The process of model optimization is shown in red dashed box, which is a highly independent task and can be accelerated by using parallel computing technique, as shown in blue dashed box. OpenMP: Open Multi-Processing.
Fig. 4.
Fig. 4. The quantification of dual-channel alignment in Live-SIMBA. (a) The overlaid images of fluorescent beads in dual-channel. (b) The overlaid images after translation alignment. (c) The overlaid images after affine transformation. (d) The RMSE (root-mean-square error) of localized beads in dual-channel using different methods. RG indicates no calibration between the red channel and the green channel, RG_translation indicates calibration using translation, and RG_affine indicates calibration using affine transformation (here, 1 pixel = 68.75 nm). About 15 beads from three small regions were chosen for these transformations; the RMSE of the affine calibration is about 46 and 13.4 times lower than that of no calibration and of translation calibration, respectively. Scale bars are 3.5 $\mu$m.
Fig. 5.
Fig. 5. Performance qualification of Bayesian inference based methods on simulated datasets from the SMLM challenge. (a-c) Ground-truth high-resolution images, (d-f) selected diffraction-limited images from the image sequences, (g-r) the reconstructions using 3B (g-i), SIMBA (j-l), Live-SIMBA in single-channel mode (Live-SIMBA SC) (m-o) and Live-SIMBA in dual-channel mode (Live-SIMBA DC) (p-r) on three representative areas of the simulated dataset, respectively. (i-iii) The comparison of normalized intensities taken from the red lines for the different methods.


Equations (4)


$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} a_2 & a_1 & a_0 \\ b_2 & b_1 & b_0 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{1}$$
$$I(x) = \sum_{i=1}^{K} \gamma_i(x) \tag{2}$$
$$P(x) = \frac{I(x)}{\max_{x} I(x)} \tag{3}$$
$$(I \ast K)(x, y) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} I(u, v)\, K(x-u, y-v)\, du\, dv \tag{4}$$
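Equations (2) and (3) define the intensity-based sampling used in single-channel mode: the diffraction-limited intensity, modeled as a sum of K molecule contributions, is normalized into a probability map from which candidate molecule positions are drawn, so brighter pixels are proposed more often. A minimal sketch of that sampling step (drawing from the normalized map directly is distribution-equivalent to rejection sampling with P(x); the plug-in's rejection-sampling details are omitted and the function name is illustrative):

```python
import numpy as np

def sample_candidates(intensity, n_candidates, seed=None):
    """Draw candidate molecule positions with probability proportional
    to P(x) = I(x) / max_x I(x) (Eqs. (2)-(3)): brighter pixels are
    proposed more often."""
    rng = np.random.default_rng(seed)
    p = np.asarray(intensity, dtype=float).ravel()
    p = p / p.sum()  # normalize into a probability distribution
    idx = rng.choice(p.size, size=n_candidates, p=p)
    # Convert flat indices back to (row, col) pixel coordinates.
    return np.column_stack(np.unravel_index(idx, intensity.shape))

# With all probability mass on one pixel, every candidate lands there.
img = np.zeros((8, 8))
img[3, 5] = 1.0
print(sample_candidates(img, 4, seed=0))  # four rows of [3 5]
```

These sampled candidates play the role that single-molecule localizations from the red channel play in dual-channel mode, seeding the Bayesian inference without a second acquisition.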