In super-resolution imaging techniques based on single-molecule switching and localization, the time to acquire a super-resolution image is limited by the maximum density of fluorescent emitters that can be accurately localized per imaging frame. In order to increase the imaging rate, several methods have been recently developed to analyze images with higher emitter densities. One powerful approach uses methods based on compressed sensing to increase the analyzable emitter density per imaging frame by several-fold compared to other reported approaches. However, the computational cost of this approach, which uses interior point methods, is high, and analysis of a typical 40 µm × 40 µm field-of-view super-resolution movie requires thousands of hours on a high-end desktop personal computer. Here, we demonstrate an alternative compressed-sensing algorithm, L1-Homotopy (L1H), which can generate super-resolution image reconstructions that are essentially identical to those derived using interior point methods in one to two orders of magnitude less time depending on the emitter density. Moreover, for an experimental data set with varying emitter density, L1H analysis is ~300-fold faster than interior point methods. This drastic reduction in computational time should allow the compressed sensing approach to be routinely applied to super-resolution image analysis.
© 2013 Optical Society of America
Stochastic Optical Reconstruction Microscopy (STORM) and related techniques use stochastic switching and high-precision localization of single molecules to achieve image resolution beyond the diffraction limit [1–3]. In this imaging modality, sparse subsets of fluorescent emitters are activated stochastically, localized to sub-diffraction precision, and then inactivated. This process is repeated, and a final image is constructed from the individual localizations of emitters. Thus, the resolution of this image is no longer limited by diffraction but determined by the accuracy with which each emitter can be localized and the density of emitters [4–7].
However, because individual emitters are activated stochastically and observed as diffraction limited spots, the maximum density of resolvable emitters per imaging frame is limited. This emitter density limit in turn sets the number of camera frames that are required to construct a super-resolution image and, hence, the temporal resolution of STORM imaging. Thus, a lower permissible emitter density per frame may restrict the ability to follow processes in dynamic environments such as live cells [4,6,8]. In parallel, the inability to resolve overlapping fluorophores may also lead to underestimates of the number of emitters from regions of higher density, effectively limiting the dynamic range of the STORM image. For these reasons, better methods for handling overlapping emitters could substantially improve the performance of STORM imaging.
Recently, several methods have been introduced to resolve partially overlapping emitter images [9–15]. Among these methods, one powerful approach allows the highest density of emitters to be localized to date by adopting techniques from the rapidly advancing field of compressed sensing. In the basic compressed sensing problem, a sparse signal vector, i.e. a signal in which only a few basis elements are non-zero, is reconstructed from a noisy, under-determined measurement in a basis in which the signal is not sparse. In the context of STORM, the sparse basis is a high resolution grid on which fluorophore locations are represented—effectively an up-sampled, reconstructed image—while the noisy measurement basis is the lower resolution camera pixels, on which fluorescence signals are detected experimentally. In this framework, the optimal reconstructed image is the one that contains the fewest fluorophores but reproduces the measured image on the camera to a given accuracy when convolved with the optical point-spread-function (PSF) (Fig. 1). Remarkably, this technique can accurately process emitter densities 4–6 fold higher than what can be handled by a sparse-emitter fitting algorithm, which can only analyze non-overlapping images of emitters, and approximately 2–3 fold higher than what can be analyzed by other overlapping-emitter-fitting algorithms. However, the interior point method used in the reported compressed sensing approach is computationally intensive and, by our estimate, reconstruction of a typical STORM image of a 40 µm × 40 µm field of view using this approach would take a few months on a typical research computer.
Here, we introduce an alternative algorithm for reconstructing STORM images using the compressed sensing approach. This algorithm, referred to as L1-Homotopy (L1H) [16–18], is substantially faster than the previous approach which employed interior point methods, producing equivalent reconstructed STORM images in one to two orders of magnitude less time depending on the emitter density. Hence, the L1H algorithm should now allow the compressed sensing analysis of STORM data to become a routine practice.
In the compressed sensing approach to STORM image reconstruction, the reconstructed image is a high resolution grid of fluorophore locations that, when convolved with the optical PSF, reproduces the measured low resolution image on the camera to an accuracy set by the expected noise in the image. Mathematically, this reconstruction can be achieved by solving the following constrained-minimization problem:

minimize ||x||₁ subject to ||Ax − b||₂ ≤ ε and x ≥ 0,   (1)

where x is the vector of fluorophore intensities on the N up-sampled pixels, b is the measured image on the M camera pixels, A is the M × N matrix that maps the up-sampled grid onto the camera pixels via convolution with the PSF, and ε is the noise parameter that sets the allowed residual error.
Equation (1) was previously solved using interior point methods—iterative algorithms broadly applicable to a large class of minimization problems. Unfortunately, at each iteration such methods must invert an N × N matrix—an operation with complexity that scales as O(N³). As N is typically of order 10³ – 10⁴, 10⁹ – 10¹² floating point operations are needed for each iteration. In practice, reconstructing a ~40 µm × 40 µm STORM image with ~20 nm resolution using Eq. (1) and interior point methods takes about 30 minutes per camera frame on a high-end desktop PC. Since the typical movie used to construct a single STORM image contains thousands to tens of thousands of frames, it would take thousands of hours to reconstruct a typical STORM image. The long computational time has thus limited the use of compressed sensing in the reconstruction of STORM images to small fields of view, for example ~3 µm × 3 µm.
However, interior point methods are not necessarily the most efficient way to solve this minimization problem, and there is a rapidly growing collection of alternative methods [19, 20]. Many of these alternative algorithms solve the following minimization problem:

minimize (1/2)||Ax − b||₂² + λ||x||₁,   (2)

where the regularization parameter λ balances the sparsity of the solution against the residual error. Equation (2) is equivalent to Eq. (1) because for every value of the noise parameter ε there exists an equivalent value of λ for which the solutions are identical. In practice, solving Eq. (2) is often computationally simpler than solving Eq. (1), which is why many alternative algorithms have been reported to be faster than interior point methods [19, 20].
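To build intuition for Eq. (2), consider the special case in which the measurement matrix A is the identity: the problem then decouples pixel-by-pixel and has the closed-form soft-thresholding solution. This is a textbook result used here purely for illustration, not part of the reported algorithm:

```python
import numpy as np

def soft_threshold(b, lam):
    """Element-wise closed-form solution of
    min_x 0.5*(x - b)**2 + lam*|x| (i.e. Eq. (2) with A = I):
    shrink each measurement toward zero by lam."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

b = np.array([3.0, -0.5, 1.2])
x = soft_threshold(b, 1.0)
assert np.allclose(x, [2.0, 0.0, 0.2])  # small entries are zeroed out
```

The thresholding behavior makes the role of λ explicit: pixels whose signal does not exceed λ are set exactly to zero, which is how the L1 penalty enforces sparsity.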
However, Eq. (2) presents a challenge in the analysis of STORM data. Calculation of the appropriate noise parameter ε is straightforward: it can be done directly from the image data itself (see Methods). However, identification of the value of λ for which Eq. (2) produces a solution equivalent to that of Eq. (1) is often not trivial, since there is no known analytical connection between the noise parameter ε and λ for the general problem. Thus, many of the alternative compressed sensing algorithms must first be modified to search for the appropriate value of λ before they can be applied to the analysis of STORM data.
Among the alternative algorithms that solve Eq. (2), we identified one algorithm for which the search for the appropriate λ may be particularly efficient for STORM imaging data: the L1-Homotopy (L1H) algorithm. This algorithm exploits a simple geometric property of the solution space of Eq. (2) to trace solutions from one value of λ to another. In particular, as the value of λ is decreased, the balance shifts from favoring the sparsity of emitters to favoring the accuracy of the optimal solution, and this solution changes in a continuous, piece-wise-linear fashion as a function of λ. Importantly, the slope of this solution changes only at well-defined ‘break-points’, where individual emitters are either added to, removed from, or moved to a different location within the solution, and the value of λ at which each break-point occurs can be calculated from the solution at adjacent break-points. These properties are illustrated in Fig. 2 for a simple one-dimensional problem. The L1H algorithm begins at a large value of λ, where the solution is trivially empty (no emitters), and steps downward in λ from break-point to break-point until it reaches the desired value of λ.
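The piecewise-linear λ-path can be made concrete with a toy scalar version of Eq. (2). For a single non-negative unknown x measured through a scalar gain a, the optimum is linear in λ below a single break-point at λ = a·b and exactly zero above it. This is an illustrative sketch of the geometry, not the paper's implementation:

```python
def scalar_path(a, b, lam):
    """Minimizer of 0.5*(a*x - b)**2 + lam*x over x >= 0.
    Setting the derivative a*(a*x - b) + lam to zero gives
    x = (a*b - lam) / a**2, clipped at zero; the break-point
    where the emitter enters the solution is lam = a*b."""
    return max((a * b - lam) / a ** 2, 0.0)

a, b = 2.0, 3.0
break_lam = a * b                       # lam = 6.0
assert scalar_path(a, b, 7.0) == 0.0    # above break-point: no emitter
assert scalar_path(a, b, 2.0) == 1.0    # below: x = (6 - 2) / 4, linear in lam
assert scalar_path(a, b, 0.0) == 1.5    # lam -> 0 recovers the exact fit b / a
```

Tracing x as λ decreases from above a·b to the target value is a one-dimensional picture of what L1H does: the solution stays on linear segments and only changes slope at break-points.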
While matrix inversions are needed at each break-point to calculate the position of the next break-point, these inversions typically involve a smaller matrix than those used by interior point methods. In particular, the required matrix contains a number of elements equal to the number of non-zero up-sampled pixels. If this number—which scales as the number of reconstructed emitters, K—is small enough, we find that, in practice, the computational complexity of L1H is no longer dominated by the inversion of this matrix but rather by other matrix multiplications whose complexity scales roughly as O(N × K). By contrast, the matrix inversion typically dominates the cost of interior point methods, and its complexity scales as O(N³). Thus, when the solution vector is sparse, as it is in STORM images, K will be much less than N and the computational cost of L1H should be dramatically lower than that of interior point methods.
One advantage of L1H over other alternative compressed sensing algorithms is that it naturally searches the space of possible λ values as part of its normal iteration, and it can be modified to identify the λ that corresponds to the appropriate value of ε. To accomplish this task, we introduced a simple stopping criterion into the L1H algorithm. After the identification of each break-point, our algorithm computes the residual error between the reconstructed image at that break-point and the measured image, i.e. the L2 norm in Eq. (1). If that residual error is larger than the target error set by ε, then the algorithm continues on to the next break-point. If the residual error is less than ε, then iteration stops, and the algorithm uses mid-point bisection to find the value of λ and the intensities of the localized emitters for which the residual error is equal to ε. This solution represents the final reconstructed image, and the final value of λ corresponds to the value for which the solutions of Eqs. (1) and (2) are equal. Because calculation of the residual error involves only K × M operations, it does not significantly increase the computational cost of the L1H algorithm.
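The final bisection step can be sketched generically: between the two break-points whose residual errors straddle ε, the residual varies monotonically with λ (larger λ gives a sparser solution and a larger residual), so a mid-point bisection converges to the λ at which the residual equals ε. The stand-in residual function below is hypothetical; the actual algorithm evaluates the true L2 residual of the reconstruction:

```python
def bisect_lambda(residual, lam_hi, lam_lo, eps, tol=1e-9):
    """Find lam in [lam_lo, lam_hi] where residual(lam) == eps.
    Assumes residual is continuous and increases with lam."""
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_hi + lam_lo)
        if residual(mid) > eps:
            lam_hi = mid  # residual too large: move toward smaller lam
        else:
            lam_lo = mid  # residual below target: move toward larger lam
    return 0.5 * (lam_hi + lam_lo)

# Stand-in residual growing linearly with lam on this segment.
lam = bisect_lambda(lambda l: 2.0 * l, lam_hi=5.0, lam_lo=0.0, eps=4.0)
assert abs(lam - 2.0) < 1e-6
```

Because the solution is linear in λ between break-points, each residual evaluation during the bisection is cheap, which is why this step adds little to the total cost.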
Of the existing algorithms that solve Eq. (2), L1H is the only one for which identification of the correct λ is a byproduct of its natural iteration. Thus, we reasoned that L1H should be the optimal approach for STORM image reconstruction. However, it is possible that, when combined with an efficient search algorithm for λ, other alternative compressed sensing algorithms might be comparable to L1H in terms of computational complexity. To test this possibility, we considered one of the more commonly used efficient compressed sensing algorithms, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). This algorithm also requires on the order of N × M calculations per iteration and, thus, has a computational cost per iteration that is similar to that of L1H. However, each iteration of FISTA does not produce an exact solution to Eq. (2), as is the case for L1H, but rather serves only to improve the accuracy of an approximate solution. Thus, in order for FISTA to be competitive with L1H, it must produce an approximation of sufficient accuracy in a relatively small number of iterations.
We tested the number of iterations of FISTA required to produce a reasonable approximation to the solution of Eq. (2). Even when provided with the correct value of λ as identified by L1H, FISTA required 100 – 1000 times more iterations to converge to an approximate solution of a 7 pixel × 7 pixel data set than is required for L1H to solve the problem exactly. We expect similar limitations to arise for other iterative approaches such as gradient projection, proximal gradient, and augmented Lagrange multiplier methods [19, 20] which would also require an iterative search over λ in addition to their natural iterative improvement of the solution to Eq. (2). Thus, L1H appears to be particularly well suited for the analysis of STORM data using compressed sensing.
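For reference, the FISTA iteration we compared against can be sketched as follows. This is the standard textbook form of the algorithm for Eq. (2) (Beck & Teboulle), not the exact C implementation used in this work:

```python
import numpy as np

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - grad / L                   # gradient step on the smooth term
        # shrinkage (soft-thresholding) step enforcing sparsity
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x

# Sanity check: with A = I the solution is the soft-threshold of b.
x = fista(np.eye(3), np.array([3.0, -0.5, 1.2]), lam=1.0)
assert np.allclose(x, [2.0, 0.0, 0.2])
```

Note that each pass improves an approximation rather than landing on an exact solution, which is the convergence behavior discussed above.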
Algorithm and Image Analysis. The L1H algorithm used here was written in C with a Python interface and is based on the approach described by Donoho and Tsaig, with two modifications. First, because negative photon numbers are unphysical, we restrict the algorithm so that it only considers non-negative values for the solution vector. Second, we modified the algorithm so that it stops when the appropriate value of λ has been identified, as discussed above. The FISTA algorithm used here was also written in C with a Python interface. Source code and executables of the L1H and FISTA algorithms are available at http://zhuang.harvard.edu/software.html. For comparisons, we used the previously reported Matlab interior-point-method implementation of the compressed sensing algorithm. The core of this code is the optimized interior point method provided in the freeware package CVX. All reported data were calculated with the default precision settings of this package; however, we observed little difference in the required computation time when these settings were optimized for speed. All analysis was conducted on a single core of an i7-3770 3.4 GHz CPU.
To limit the size of the measurement matrix, we divided the fluorescence images from the camera into sub-regions and analyzed these regions separately, as was done previously with interior point methods. Specifically, we divided the measured images into multiple, partially overlapping 7 pixel × 7 pixel regions. To analyze each of these regions, we allowed emitters to be placed within an 8 pixel × 8 pixel space, which corresponds to a 1/2 pixel extension in each direction from the measured region, in order to account for the fluorescence signal contributed by emitters outside of the measured region due to the finite PSF size, as shown in Fig. 3. To further limit errors due to edge effects, we only retained emitters located within a central 5 pixel × 5 pixel region. These 5 × 5-camera-pixel up-sampled images, with neighboring images touching but not overlapping with each other, were then stitched together to form the final STORM image. In addition, following the previous approach, we included a background term in the matrix that allows for a non-zero uniform background correction. As the analyzed regions are small (7 × 7 pixels), this background correction makes the solution fairly robust to the slowly varying background typically encountered in fluorescence images.
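The tiling scheme described above can be sketched as follows: the retained 5 × 5 cores step across the image without overlap, and each analyzed region extends one camera pixel beyond its core, giving the 7-pixel analysis window. The clamping behavior at the image border is an assumption of this sketch, not specified in the text:

```python
def tiles(image_size, core=5, pad=1):
    """Return (analysis_start, analysis_stop, core_start, core_stop)
    tuples: retained cores tile the image without overlap, while
    analysis windows extend `pad` pixels past each core edge."""
    out = []
    for core_start in range(0, image_size, core):
        core_stop = min(core_start + core, image_size)
        a_start = max(core_start - pad, 0)       # clamp at image border
        a_stop = min(core_stop + pad, image_size)
        out.append((a_start, a_stop, core_start, core_stop))
    return out

t = tiles(256)
assert t[1] == (4, 11, 5, 10)   # interior tile: 7-pixel analysis window
assert len(t) == 52             # ceil(256 / 5) retained cores
```

Stitching the non-overlapping cores back together then reproduces the full field of view with no double-counted emitters.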
For a noise-free detector, the noise parameter would theoretically be set by the Poisson noise of the detected photons, i.e. ε would equal the square root of the total number of photons measured in the analyzed region. However, it was previously shown that a slightly larger noise parameter produced higher quality reconstructions. For simulated data, the optimized noise parameter was shown to be 1.5 times the theoretical noise parameter. For real data collected with an electron-multiplying CCD camera, the optimized noise parameter was shown to be 2.1 times the theoretical noise parameter. We also used these optimized values in this work.
Simulated Data. For all simulated data, we randomly distributed emitters on an 8-fold up-sampled grid and generated the number of photons for each emitter from a lognormal distribution with a mean and standard deviation of 3000 and 1700 photons, respectively. A Gaussian PSF of width equal to one camera pixel was used to convolve the signal from individual emitters, and the photons from all up-sampled pixels within a given low-resolution camera pixel were summed. Finally, a uniform background of 70 photons was added to each camera pixel, and the image was corrupted with Poisson distributed noise with a mean set by the number of photons in each pixel. In all simulations, the camera pixel is assumed to be 167 nm in size. These values were chosen to match our typical experimental data. Using this approach, 7 × 7 camera pixel images were generated, and these simulated images were analyzed by both the L1H and CVX methods to compare the accuracy of the reconstructions. We also generated 256 × 256 camera pixel images to compare the computation time of these two methods. Reported computation times are the average of three independent trials.
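The frame simulation described above can be sketched in a few lines. The lognormal parameterization from the stated mean and standard deviation is our assumption, and sampling the Gaussian PSF at camera-pixel centers (rather than integrating over up-sampled pixels) is a simplification:

```python
import numpy as np

def simulate_frame(n_pixels=7, density_per_px=0.1, up=8, psf_sigma=1.0,
                   bg=70, rng=None):
    """One simulated camera frame: emitters snapped to an `up`-fold
    up-sampled grid, lognormal photon counts (mean 3000, sd 1700),
    Gaussian PSF of width one camera pixel, uniform background,
    Poisson shot noise."""
    rng = rng or np.random.default_rng(0)
    n = rng.poisson(density_per_px * n_pixels ** 2)
    # emitter positions on the up-sampled grid, in camera-pixel units
    pos = (rng.integers(0, n_pixels * up, size=(n, 2)) + 0.5) / up
    mean, sd = 3000.0, 1700.0
    mu = np.log(mean ** 2 / np.sqrt(mean ** 2 + sd ** 2))
    sigma = np.sqrt(np.log(1.0 + sd ** 2 / mean ** 2))
    photons = rng.lognormal(mu, sigma, size=n)
    yy, xx = np.mgrid[0:n_pixels, 0:n_pixels] + 0.5   # pixel centers
    ideal = np.full((n_pixels, n_pixels), float(bg))
    for (py, px), ph in zip(pos, photons):
        # Gaussian PSF sampled at pixel centers (normalization approximate)
        g = np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * psf_sigma ** 2))
        ideal += ph * g / (2 * np.pi * psf_sigma ** 2)
    return rng.poisson(ideal)   # shot-noise-corrupted camera image

frame = simulate_frame()
```

Varying `density_per_px` reproduces the emitter-density sweep used in the algorithm comparisons below.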
Algorithm Comparison. To compare the quality of the reconstructions between different algorithms, we used a single-emitter fitting algorithm [24, 25], the multi-emitter fitting algorithm 3d-DAOSTORM, L1H, and CVX to analyze 256 × 256 camera pixel, five-frame data sets simulated with the same parameters as above. We used the 2d option of 3d-DAOSTORM, which produces results similar to those of the original DAOSTORM algorithm. We converted the up-sampled reconstructed images produced by L1H and CVX into molecule lists by grouping non-zero pixels that are 8-connected into single molecules, with each molecule's location determined by the intensity-weighted centroid of its connected pixels, as described previously. Only emitters reconstructed within a central 150 × 150 camera pixel region of interest were included in the analysis to minimize edge effects. The reconstructed density is the number of localized emitters in the region of interest. The XY localization error is defined as the median Cartesian distance between each reconstructed molecule and the nearest real molecule. This distance is related to the standard deviation σ of a Gaussian-distributed position error via the median of the corresponding Rayleigh distribution, r_median = σ√(2 ln 2).
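The conversion from an up-sampled reconstruction to a molecule list can be sketched with standard connected-component tools. Here scipy's 8-connected labeling and intensity-weighted centroid stand in for the code actually used:

```python
import numpy as np
from scipy import ndimage

def image_to_molecules(up_image):
    """Group 8-connected non-zero up-sampled pixels into molecules.
    Each molecule's position is the intensity-weighted centroid of its
    connected pixels; its photon count is their summed intensity."""
    eight = np.ones((3, 3))                              # 8-connectivity
    labels, n = ndimage.label(up_image > 0, structure=eight)
    centroids = ndimage.center_of_mass(up_image, labels, range(1, n + 1))
    photons = ndimage.sum(up_image, labels, range(1, n + 1))
    return list(zip(centroids, photons))

img = np.zeros((8, 8))
img[2, 2], img[3, 3] = 100.0, 300.0      # diagonal neighbors: one molecule
mols = image_to_molecules(img)
assert len(mols) == 1
assert np.allclose(mols[0][0], (2.75, 2.75))   # intensity-weighted centroid
assert mols[0][1] == 400.0
```

The 8-connectivity structure is what merges diagonally adjacent non-zero pixels into a single emitter, matching the grouping rule described above.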
Real Data. To test the L1H algorithm on real data, we collected images of immunostained microtubules. BS-C-1 cells were fixed with 3% paraformaldehyde and 0.1% glutaraldehyde in phosphate buffered saline (PBS) for 10 minutes at room temperature and then treated with 0.1% sodium borohydride for 7 minutes to reduce auto-fluorescence. After blocking with 3% w/v bovine serum albumin (BSA) and 0.1% v/v Triton X-100 in PBS, the microtubules were labeled with a mouse monoclonal anti-beta-tubulin primary antibody (Sigma, T5201) followed by a donkey anti-mouse secondary antibody (Jackson Labs, 715-005-150) conjugated with Alexa Fluor 405 and Alexa Fluor 647. Alexa Fluor 647 is a photoswitchable dye, and Alexa Fluor 405 facilitates the photoactivation of Alexa Fluor 647. STORM images were collected on an Olympus IX-71 inverted optical microscope as described previously [24, 25]. Briefly, a 647 nm laser was used both to image Alexa Fluor 647 and to switch it off, and a 405 nm laser was used to reactivate Alexa Fluor 647 to the fluorescent state. The 405 nm laser was used at low intensities such that only a subset of the Alexa Fluor 647 molecules was activated, allowing the locations of these molecules to be determined using either L1H or CVX.
We find that compressed sensing is sensitive to differences between the assumed PSF and the actual PSF; thus, we used an experimentally determined PSF to analyze the real STORM data with the L1H and CVX algorithms. The PSF of the STORM microscope was determined by analyzing images of well separated single Alexa Fluor 647 molecules. The pixel intensity profile of each of these molecules was fit with a cubic spline. Five hundred splines were aligned to each other at their centroid positions and then averaged to create a single cubic-spline estimate of the PSF. This spline PSF estimate was then used to create the measurement matrix for this data set.
In theory, interior point methods and L1H should produce identical solutions. Nonetheless, as both algorithms are subject to stability issues and round-off errors, we first tested computationally whether the two algorithms produce identical reconstructions of STORM data. We constructed a set of simulated 7 × 7 camera pixel STORM data and compared the results derived from our L1H algorithm and from the interior-point-method implementation (CVX) [15, 23]. Figure 4(a)-4(c) shows that the two algorithms produce reconstructions in which the vast majority of fluorophores are in the same locations. There are occasional differences between the two reconstructions, as depicted in Fig. 4(c). However, as shown in Fig. 4(d)-4(f), these differences are exceedingly rare, and when they do occur, the reconstructed positions of the fluorophores differ by only one up-sampled pixel (1/8 of a camera pixel). Moreover, we find that L1H and CVX produce solutions that differ in the L1 norm and in the residual image error by much less than 0.1%, as shown in Fig. 4(g)-4(h). Thus, we conclude that the two algorithms produce functionally equivalent reconstructions.
To further confirm that L1H produces functionally equivalent reconstructions to CVX, we generated a series of simulated 256 × 256 camera pixel STORM movies across a range of emitter densities and analyzed them with CVX and with L1H. Figure 5 reveals that L1H and CVX produce reconstructions that are effectively identical in the density and localization precision of reconstructed emitters. However, Fig. 6 reveals that L1H produced these reconstructions in roughly 10 – 250 fold less time than CVX. The speed improvement with L1H decreases as the emitter density increases. This density dependence reflects the fact that the computation time of L1H scales with the number of emitters while that of CVX scales primarily with the number of up-sampled pixels. Based on the above results, we conclude that L1H produces the same reconstructions as CVX but in a fraction of the time.
To determine the computational speed improvement for real data, where the emitter density is non-uniform and can vary dramatically within a single image, we analyzed STORM images of immunostained microtubules in mammalian cells. Figure 7(a) shows that, for the real STORM data, we again find that CVX and L1H produce essentially identical solutions. However, using the time it took CVX to analyze the first 10 frames of this 5000 frame movie (256 × 256 camera pixels per frame), we estimate that a complete analysis of all 5000 frames would take roughly two months using a single core of a high-end PC. In contrast, L1H was able to analyze the entire 5000 frame movie in just under 3 hours on the same computer. The analysis of the first 10 frames using L1H was 340-fold faster than using CVX, as shown in Fig. 7(b). The average emitter density in this field of view is approximately 1 μm⁻²; however, the speed enhancement is substantially larger than that observed for the simulated data at a uniform density of 1 μm⁻² because of the large variation in local emitter density in the real STORM data. Since the typical STORM image does not have a uniform emitter density, we anticipate this experimentally derived speed enhancement to be representative of what will be achieved in the analysis of a wide range of real STORM data.
The results in Fig. 7 also confirm the previous conclusion that analysis of relatively high emitter density data by compressed sensing yields superior results compared to analysis by a single-emitter fitting algorithm, in which overlapping emitters are discarded. Because of the relatively high emitter density of this data set, the single-emitter localization algorithm identified only 23% of the localizations that L1H identified. Figure 7(c)-7(f) shows that the single-emitter algorithm produced an image with reduced sharpness, continuity, and contrast compared to the reconstruction produced with the compressed sensing approach using the L1H algorithm. Moreover, the single-emitter fitting algorithm produces an image with a lower dynamic range, which largely evens out the intensity differences between regions of high and low microtubule density, whereas such differences are obvious in the images generated by the L1H algorithm (for example, compare the regions highlighted by the red arrows and arrowheads). Thus, by resolving partially overlapping emitters, the compressed sensing approach to STORM analysis can produce reconstructed images with improved sharpness, contrast, and dynamic range. It is worth noting that the results from compressed-sensing-based analysis are sensitive to differences between the assumed PSF and the actual PSF. In real samples, the PSF of out-of-focus emitters will naturally differ from that of in-focus emitters and hence can cause localization errors when a fixed PSF is used in the analysis. Despite this concern, Fig. 7 reveals that the compressed sensing reconstruction of real data with some natural variation in the focus of the emitters is still superior to that produced by the single-emitter fitting algorithm.
To further compare the performance of the compressed sensing approaches with other STORM image reconstruction algorithms, we also analyzed the 256 × 256 camera pixel simulated data with a single-emitter localization algorithm, which can only handle sparse emitter densities, and with the multi-emitter fitting algorithm, which can handle moderate-to-high emitter densities [9, 13]. The comparison of the performance of these algorithms as a function of emitter density in Figs. 5 and 6 illustrates the benefits and potential limitations of a compressed sensing approach with respect to other algorithms. Above a density of ~0.3 emitters per μm², the single-emitter localization algorithm begins to miss a substantial fraction of emitters, and above an intermediate density of ~3 emitters per μm², the multi-emitter fitting algorithm also begins to miss a substantial number of emitters. However, L1H and CVX both reconstruct the vast majority of the emitters up to a density of 5–6 emitters per μm², as shown in Fig. 5(a). Thus, at the highest emitter densities, a compressed sensing approach is required to reconstruct most emitters.
We also quantified the accuracy with which these emitters are reconstructed, as seen in Fig. 5(b). At low densities, the multi-emitter fitting algorithm has the lowest localization error because it employs maximum likelihood estimation as opposed to the least-squares minimization employed by our single-emitter fitting algorithm. The multi-emitter algorithm continues to outperform compressed sensing in terms of localization error up to an intermediate density of 3–4 emitters per μm². Beyond this density, the compressed sensing approach is superior. These results are consistent with those reported previously. However, it is important to note that this improvement in high-density image reconstruction comes at a cost: compressed sensing requires slightly more analysis time than multi-emitter fitting, even when using the significantly faster L1H algorithm, as seen in Fig. 6(a).
Finally, to qualitatively capture and summarize these benefits, we present the results of several different STORM image analysis methods applied to simulated STORM data with variable emitter density, including a single-emitter fitting algorithm, DAOSTORM (a multi-emitter fitting algorithm), deconSTORM (a recently reported deconvolution analysis method), and L1H. As seen in Fig. 8, in regions of low density all algorithms produce images of similar quality, but in regions of high emitter density, the single- and multi-emitter fitting algorithms miss most of the emitters. deconSTORM and compressed sensing retain these emitters in the high-density regions and, thus, more faithfully reflect the dynamic range of the image. In addition, compressed sensing produces the sharpest reconstructed images within the high-density regions. This ability to accurately process data with a large range of emitter densities is a key advantage of compressed sensing approaches. Thus, while compressed sensing is still a computationally expensive analysis method, with the use of L1H it should now be practical to apply this method routinely. Moreover, compressed sensing is a versatile analysis method, and it should be possible to modify the A matrix to include PSFs of variable size and shape, which would in turn extend compressed sensing to the analysis of three-dimensional data. The significant decrease in computational complexity afforded by L1H will partially offset the substantial increase in computational complexity of 3D implementations of compressed sensing. Finally, it may be possible to further increase computational speed by implementing L1H on GPUs or clusters of CPUs.
In conclusion, the speed improvement enabled by the L1H algorithm makes it practical to apply compressed-sensing-based image analysis to high emitter density data. The high density of fluorescent emitters that can be localized should substantially reduce the number of imaging frames required to reconstruct a super-resolution image, increasing the time resolution of STORM and related super-resolution imaging methods and benefiting the study of ultrastructural dynamics inside cells.
We thank Sang-Hee Shim for assistance with collecting the STORM data of microtubules. This work was supported by the National Institutes of Health (NIH) and the Gatsby Charitable Foundation. J.R.M. was supported by a Helen Hay Whitney Postdoctoral Fellowship. X.Z. is a Howard Hughes Medical Institute Investigator.
References and Links
2. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef] [PubMed]
4. H. Shroff, C. G. Galbraith, J. A. Galbraith, and E. Betzig, “Live-cell photoactivated localization microscopy of nanoscale adhesion dynamics,” Nat. Methods 5(5), 417–423 (2008). [CrossRef] [PubMed]
7. R. P. J. Nieuwenhuizen, K. A. Lidke, M. Bates, D. L. Puig, D. Grünwald, S. Stallinga, and B. Rieger, “Measuring image resolution in optical nanoscopy,” Nat. Methods 10(6), 557–562 (2013). [CrossRef] [PubMed]
8. S.-H. Shim, C. Xia, G. Zhong, H. P. Babcock, J. C. Vaughan, B. Huang, X. Wang, C. Xu, G.-Q. Bi, and X. Zhuang, “Super-resolution fluorescence imaging of organelles in live cells with photoswitchable membrane probes,” Proc. Natl. Acad. Sci. U.S.A. 109(35), 13978–13983 (2012). [CrossRef] [PubMed]
10. F. Huang, S. L. Schwartz, J. M. Byars, and K. A. Lidke, “Simultaneous multiple-emitter fitting for single molecule super-resolution imaging,” Biomed. Opt. Express 2(5), 1377–1393 (2011). [CrossRef] [PubMed]
11. S. Cox, E. Rosten, J. Monypenny, T. Jovanovic-Talisman, D. T. Burnette, J. Lippincott-Schwartz, G. E. Jones, and R. Heintzmann, “Bayesian localization microscopy reveals nanoscale podosome dynamics,” Nat. Methods 9(2), 195–200 (2011). [CrossRef] [PubMed]
12. T. Quan, H. Zhu, X. Liu, Y. Liu, J. Ding, S. Zeng, and Z.-L. Huang, “High-density localization of active molecules using Structured Sparse Model and Bayesian Information Criterion,” Opt. Express 19(18), 16963–16974 (2011). [CrossRef] [PubMed]
13. H. P. Babcock, Y. M. Sigal, and X. Zhuang, “A high-density 3D localization algorithm for stochastic optical reconstruction microscopy,” Optical Nanoscopy 1(1), 6–10 (2012). [CrossRef]
16. M. R. Osborne, B. Presnell, and B. A. Turlach, “A new approach to variable selection in least squares problems,” IMA J. Numer. Anal. 20(3), 389–403 (2000). [CrossRef]
17. D. M. Malioutov, M. Cetin, and A. S. Willsky, “Homotopy continuation for sparse signal representation,” ICASSP 5, 733–736 (2005).
18. D. L. Donoho and Y. Tsaig, “Fast Solution of the L1-norm Minimization Problem When the Solution May be Sparse,” IEEE Trans. Inf. Theory 54(11), 4789–4812 (2008). [CrossRef]
19. A. Y. Yang and S. S. Sastry, “Fast l1-minimization algorithms and an application in robust face recognition: a review,” Proceedings of 2010 IEEE 17th International Conference on Image Processing, 1849–1852 (2010). [CrossRef]
20. A. Y. Yang, Z. Zhou, A. Ganesh, S. S. Shankar, and Y. Ma, “Fast L1-Minimization Algorithms For Robust Face Recognition,” IEEE Trans. Image Process. 22(8), 3234–3246 (2012).
21. E. van den Berg and M. P. Friedlander, “Probing the Pareto frontier for basis pursuit solutions,” SIAM J. Sci. Comput. 31(2), 890–912 (2009). [CrossRef]
22. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences 2(1), 183–202 (2009). [CrossRef]
23. M. C. Grant and S. P. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.0 beta,” http://cvxr.com/cvx (2013).
26. M. Grant, S. Boyd, and Y. Ye, “CVX: Matlab Software for Disciplined Convex Programming, Version 1.0 beta 3,” Recent Advances in Learning and Control, 95–110 (2006).
27. K. I. Mortensen, L. S. Churchman, J. A. Spudich, and H. Flyvbjerg, “Optimized localization analysis for single-molecule tracking and super-resolution microscopy,” Nat. Methods 7(5), 377–381 (2010). [CrossRef] [PubMed]