The ability to precisely localize a single optical emitter is important for particle-tracking applications and super-resolution microscopy. It is known that for a traditional microscope the precision with which such an emitter can be localized is limited by the photon count. Here we analyze the ability to improve this localization by imposing interference fringes on the image. We show that a simple grating interferometer can provide such an improvement in certain circumstances, and we analyze what is required to increase the localization precision further.
© 2017 Optical Society of America
An optical microscope has a resolution that is limited by diffraction. This limit, formulated by Abbe, is d = λ/(2NA), where λ is the wavelength of light and NA is the numerical aperture of the microscope’s objective; it is a fundamental limitation in the imaging of nanostructures. The limitation stems from the fact that an infinitely small single emitter imaged by an optical microscope has an image of finite size: the point-spread function (PSF) of the microscope. However, this limit does not apply to the localization of a single emitter. Since the image of a single emitter has the shape of the PSF, the position of the emitter can be found by fitting the data to the shape of the PSF, usually an Airy disk, which is often approximated by a Gaussian function.
The ability to track the position of a single optical emitter has been used in numerous biological studies, such as motor protein motility [2–4], membrane dynamics [5–7], DNA conformations [8,9], organelle transport, etc. Most commonly, single fluorescent beads or molecules are used; however, quantum dots and metal nanoparticles [8,12] show greater photostability and may also be used as single emitters due to their size (a few nanometers to a few microns). Even large scattering objects can be used for tracking. The localization precision obtained experimentally ranges from a few nanometers down to a single nanometer. While larger objects may not produce a diffraction-limited spot when imaged with a microscope, they can be localized precisely using the same techniques, where the localization precision scales with the size of the spot on the camera.
The theoretical precision of such localization has been investigated repeatedly with a common conclusion: the localization precision for a Gaussian-like PSF cannot be better than s/√N in a background-free environment, where s is the standard deviation of the Gaussian PSF and N is the number of detected photons [1,13–16]. Given this limitation, several methods were developed to improve localization in terms of computational efficiency [17,18]. A different approach was to revise the optical system to obtain a different PSF shape and optimize it according to a certain criterion. In the case of the double-helix PSF, the shaping of the PSF allowed 3D localization [19,20]; in some cases it was shown theoretically that this PSF may be localized with higher precision than a conventional microscope’s PSF, although this was not demonstrated experimentally. The double-helix PSF is limited in axial range, a problem that was solved by optimizing the PSF for axial range, where the tetrapod PSF has a range of 20 µm. While these techniques allowed for better axial localization, they did not show improvement in in-plane localization (i.e. x and y).
In the past, we have shown that by using an interferometer in the Fourier domain, the localization precision can go beyond the s/√N limit. In this paper we analyze the possibility of modulating the PSF using the same interferometer in the real-space domain rather than the Fourier domain. We clarify the conditions required to obtain an improvement over Gaussian-PSF localization, and we analyze our model theoretically, numerically and experimentally.
In section 2 we show the theory behind interference based localization. In section 3 we analyze numerically the ability to obtain higher localization precision using this scheme. Section 4 is devoted to the optical model of the modulated PSF, where we test it experimentally, theoretically and numerically. In section 5 we visualize through simulation the improvement in localization and we conclude the paper in section 6.
A microscope projects an image onto a light-sensitive device (usually a digital camera sensor) that can be represented as a convolution, i(x, y) = o(x, y) ∗ h(x, y), where o(x, y) is the object and h(x, y) is the PSF, the image of an infinitely small point emitter. In a conventional imaging system, h(x, y) has the shape of an Airy disk and can be approximated by a Gaussian function. In some systems, the PSF is engineered; see for example references [19–22]. In our analysis, we use a Gaussian function for h(x, y).
Two interfering plane waves impose a fringe pattern that can be written as f(x) = 1 + V cos(k_f x + φ), where V, k_f and φ are the fringe visibility, the fringe spatial frequency and the phase difference between the two interfering waves. We use this as the function that modulates the PSF in our analysis. The most basic model for a modulated PSF is when the phase of the fringes is a constant, C. In this model, the system is shift invariant and the fringes move with the PSF. This can be described as

PSF(x, y) = h(x − x₀, y − y₀) [1 + V cos(k_f (x − x₀) + C)].    (1)

Figure 1(a) and Fig. 1(b) show examples of such PSFs. In this case, C can be predetermined by a pre-scan of a single emitter. This calibration stage can be performed using a single nanoparticle or a fiducial marker: many frames of the single emitter are captured, the emitter is localized, and the phase value for each frame is extracted; the mean phase value equals C. A second model is when the phase is completely random and does not depend on position, i.e.

PSF(x, y) = h(x − x₀, y − y₀) [1 + V cos(k_f (x − x₀) + φ)],  φ random.    (2)
This case resembles the constant-phase model, but here the phase is not known and requires estimation. The third case to be examined is the one where the phase is linear with position, φ = M x₀. This can be described as

PSF(x, y) = h(x − x₀, y − y₀) [1 + V cos(k_f (x − x₀) + M x₀)].    (3)

Examples are shown in Fig. 1(c), where looking at the two PSFs on the left we see that the phase is different for the two.
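The three phase models above can be summarized in a small sketch. The function below is illustrative (it is not the simulation code used in the paper), and the default values for s, V and k_f are assumptions:

```python
import numpy as np

def modulated_psf(x, x0, s=0.12, V=0.9, kf=60.0, model="constant",
                  C=0.0, M=0.0, phi=0.0):
    """1D fringe-modulated Gaussian PSF (positions in um, kf and M in rad/um).

    model='constant': phi = C, fringes move with the PSF (Eq. (1)).
    model='random'  : phi supplied per frame (Eq. (2)).
    model='linear'  : phi = M * x0 (Eq. (3)); with M == kf the fringes
                      are locked to the camera frame.
    """
    if model == "constant":
        phi = C
    elif model == "linear":
        phi = M * x0
    env = np.exp(-(x - x0)**2 / (2 * s**2))
    return env * (1 + V * np.cos(kf * (x - x0) + phi))

x = np.linspace(-0.4, 0.4, 33)
# With M == kf the fringe factor reduces to 1 + V*cos(kf*x): it is the same
# for different emitter positions, i.e. locked to the camera frame.
a = modulated_psf(x, 0.00, model="linear", M=60.0)
b = modulated_psf(x, 0.05, model="linear", M=60.0)
```

Dividing out the two Gaussian envelopes shows that the fringe factors of `a` and `b` coincide, which is the camera-locked behavior discussed below for M = k_f.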
The PSF fringe modulation can be obtained using a previously reported interferometer, shown in Fig. 2. Here the image coming from a microscope passes through a grating interferometer that imposes a fringe pattern on top of the image. When a point emitter is imaged, we obtain a fringe pattern on top of the PSF.
In the design of the interferometer one has to take into account the fringe spatial frequency, k_f, and the camera pixel size, d. To properly sample the fringe pattern we require that at least two pixels sample each fringe period. Since the fringe period is 2π/k_f, we require that 2π/k_f ≥ 2d, i.e. k_f ≤ π/d. The fringe spatial frequency is determined by the interference angle between the two beams. Using θ as the half angle between the two interfering beams, k_f = (4π/λ) sin θ, where λ is the wavelength of light. The maximum angle of interference should therefore satisfy sin θ ≤ λ/(4d). The interference angle depends on the grating period Λ_g through the first-order diffraction relation sin θ = λ/Λ_g. A smaller angle is easier to obtain due to the larger grating period, which is easier to fabricate; however, this also means a longer interferometer.
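As a rough design check, the sampling condition and the maximum interference half angle can be evaluated numerically. The 16 µm pixel pitch below is an assumed, typical EMCCD value; the text does not state the camera's physical pixel size:

```python
import numpy as np

wavelength = 0.561                      # um, laser line used in the experiment
d = 16.0                                # um, assumed physical camera pixel

kf_max = np.pi / d                      # rad/um: fringe period >= 2 pixels
sin_theta_max = wavelength / (4 * d)    # from kf = (4*pi/lambda)*sin(theta)
theta_max_deg = np.degrees(np.arcsin(sin_theta_max))
print(round(theta_max_deg, 2))          # prints 0.5 (degrees, half angle)
```

With the ×400 magnification, k_f,max ≈ 0.196 rad/µm at the camera corresponds to roughly 79 rad/µm referred to the sample, which is of the same order as the usable fringe-frequency regime identified in Section 3.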
3. Localization error analysis
To analyze the influence of the fringes on the localization of a single emitter we ran 2D Monte-Carlo simulations. The results of the simulations were compared to the Cramer-Rao Lower Bound (CRLB) of the Gaussian PSF. The CRLB is the theoretical limit of localization precision and, for a Gaussian PSF with pixel size a and background noise b, can be written as

⟨(Δx)²⟩ = s²/N + a²/(12N) + 8π s⁴ b² / (a² N²).    (4)
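This bound can be evaluated directly. The sketch below uses the widely quoted pixelated-Gaussian error expression s²/N + a²/(12N) + 8πs⁴b²/(a²N²) (a Thompson-style formula); treat the exact form as an assumption, and the parameter values as illustrative:

```python
import numpy as np

def loc_error(s, N, a, b):
    """Localization error (um) for a pixelated Gaussian PSF with background.

    s: PSF std dev (um), N: detected photons, a: pixel size (um),
    b: background noise per pixel (photons)."""
    var = s**2 / N + a**2 / (12 * N) + 8 * np.pi * s**4 * b**2 / (a**2 * N**2)
    return np.sqrt(var)

# Background-free with very fine pixels: approaches the s/sqrt(N) limit.
print(round(loc_error(0.12, 1000, 0.001, 0.0) * 1000, 2))  # prints 3.79 (nm)
```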
In the Monte-Carlo simulations we generated a single emitter in every frame, localized it, and calculated the localization error from the difference between the estimated and true positions, i.e. the error is √⟨(x̂ − x₀)²⟩, where x̂ is the estimated position and x₀ is the true position. Since the localization error is composed of the variance of the localization and the bias, and since in our case the estimation is unbiased, the localization error (rms) equals the localization precision (standard deviation). In all the simulations the frame size was fixed, and the emitter position was random within one pixel in the middle of the frame. The localization error was calculated separately for the two axes. We assumed a Gaussian model for the PSF with a standard deviation of 120 nm, and generated the pixel values by integrating the signal over the area of each pixel. For each PSF the pixel intensity was sampled from a Poisson distribution with a mean value equal to the value of the PSF plus background, as applicable for each simulation. For the localization, the PSF was fitted to the different models using non-linear Least Squares (LS) estimation with the Levenberg-Marquardt algorithm. 2000 iterations were used for each set of parameters in each simulation. The simulation results were always compared to the theoretical limit of localization of a conventional Gaussian PSF (using the CRLB); therefore, using Maximum Likelihood (ML) for the modulated PSF would only improve its performance relative to the Gaussian case. In all the simulations, except when noted otherwise, the pixel size was 25 nm, the fringe parameters were fixed, and the phase was determined by the chosen model.
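A minimal 1D version of this Monte-Carlo procedure can be sketched as follows. Poisson frames of a known-phase modulated PSF are fitted by least squares; SciPy's `curve_fit` stands in for the full 2D Levenberg-Marquardt pipeline, the frame count is reduced for speed, and the parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
s, V, kf, C = 0.12, 0.9, 60.0, 0.0     # um, -, rad/um, rad (assumed values)
N, pix = 2000, 0.025                   # photons per frame, um
x = (np.arange(64) - 32) * pix         # 1D pixel grid (um)

def model(x, x0, amp):
    # Known-phase modulated PSF, Eq. (1)-style, sampled at pixel centers
    return amp * np.exp(-(x - x0)**2 / (2 * s**2)) \
               * (1 + V * np.cos(kf * (x - x0) + C))

errors = []
for _ in range(200):                           # 200 frames (2000 in the paper)
    x0_true = rng.uniform(-pix / 2, pix / 2)   # random sub-pixel position
    mean = model(x, x0_true, 1.0)
    mean *= N / mean.sum()                     # normalize to N photons
    frame = rng.poisson(mean).astype(float)    # shot noise
    popt, _ = curve_fit(model, x, frame, p0=[0.0, frame.max()])
    errors.append(popt[0] - x0_true)

rms = float(np.sqrt(np.mean(np.square(errors))))  # localization error (um)
```

With these parameters the rms error comes out well below the Gaussian-PSF bound s/√N ≈ 2.7 nm, illustrating the known-phase improvement reported below.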
For the first simulation, we looked at the constant-phase model in Eq. (1), where the phase is assumed to be known. The phase values were 0, π/2, π and 3π/2, the number of detected photons varied between 500 and 2000, and the background photon count was assumed to be zero. The results in Fig. 3(a) show that the localization error is significantly lower for the modulated PSF than for the conventional Gaussian PSF, in agreement with previous work, where we analyzed the CRLB for such a case. The results also show that the localization error does not depend on the phase itself.
When the phase is unknown and needs to be estimated along with the other parameters, the localization error is expected to be the same as in the Gaussian-PSF case (see Appendix A). The results, shown in Fig. 3(b), indicate that with the Least-Squares estimator the localization error is slightly higher than in the Gaussian case (a Maximum-Likelihood estimator could reach the theoretical limit). In this case, no improvement in localization is obtained using the grating interferometer. Here too, the results do not change with the phase value.
For the linear-phase model, see Eq. (3), the phase slope with respect to position can be determined accurately using the same procedure described for the constant-phase case. Since the estimation precision of the phase scales as 1/√N_f, where N_f is the number of recorded frames per position, a very high accuracy in determining the phase slope can be achieved by capturing many frames for each nanoparticle position in the pre-scan stage. For this model, the simulation results in Fig. 3(c) show that when the phase slope equals the fringe spatial frequency (i.e. M = k_f), the localization error is higher than for a Gaussian PSF. In this case, the fringes do not move with the emitter PSF but are locked to the camera frame; therefore, the fringes add no information on the position of the emitter. The localization error drops significantly as the phase slope departs from the fringe spatial frequency: the bigger the difference between them, the better the localization.
Two factors that may influence the localization error are the fringe spatial frequency, k_f, and the fringe visibility, V. We simulated the modulated-PSF localization for fringe spatial frequencies between 20/µm and 120/µm. Since the pixel size may be related to the chosen spatial frequency (we require, according to the Nyquist criterion, that the fringe period span at least two pixels), we also simulated three different pixel sizes: 25 nm, 50 nm and 100 nm. The results of these simulations are shown in Fig. 4(a). We see that when the pixel size is 100 nm, the localization error is higher than in the Gaussian case, as expected from the requirement for adequate sampling of the fringe period. The difference between pixel sizes of 25 nm and 50 nm was not significant, with the localization error increasing at the highest fringe spatial frequencies. This is contrary to the theoretical prediction, in which the localization error should decrease as the fringe spatial frequency increases (see Appendix A). This again can be explained by the sampling of the period, since the theoretical prediction using the CRLB did not take pixelation into account. When the fringe spatial frequency was lower than 90/µm, the localization error was the same for pixel sizes of 25 nm and 50 nm, and was better than in the Gaussian-PSF case.
Looking at the fringe-visibility simulation results in Fig. 4(b), we see that improvement over the Gaussian PSF is obtained when the fringe visibility exceeds 0.3. One should note that the experimentally obtained fringe visibility was above 0.9.
A significant factor in the localization error of single emitters is the background photon count, as is evident from Eq. (4). We simulated the modulated PSF for two cases: known phase and unknown phase. The results, shown in Fig. 4(c), indicate that when the phase is known, the performance is better than that of the Gaussian PSF at low background photon counts. When the phase is unknown and requires estimation, the performance is slightly better than that of the Gaussian PSF at high background photon counts.
We also looked at the localization error for the linear-phase case in the presence of background noise, see Fig. 4(d). When the phase slope differs from the fringe spatial frequency (M ≠ k_f), the localization error is significantly lower than for the Gaussian PSF, as we have seen previously in the absence of background. Moreover, the improvement factor (the ratio between the localization errors of the Gaussian PSF and the modulated PSF) increases with the background photon count; for one of the phase slopes tested, it increases from 1.4 to 1.95 when the background photon count increases from 1 to 10. Looking at the case of M = k_f, in the absence of background we saw no improvement in the localization error, see Fig. 3(c). However, as the background photon count increases, the improvement factor increases and the localization error becomes lower than for the Gaussian PSF.
The lower localization error compared to the Gaussian PSF in the case of linear phase with slope M = k_f, and in the case of estimated phase, can be explained by looking at the model for the added background. The first option is a non-interfering background, as in the simulations in Fig. 4,

I(x, y) = h(x − x₀, y − y₀) [1 + V cos(k_f x + φ)] + b,    (5)

and the second is an interfering background,

I(x, y) = [h(x − x₀, y − y₀) + b] [1 + V cos(k_f x + φ)].    (6)
When the background does not interfere, the interference fringes enable better separation of the signal from the background noise. We calculated the CRLB for both cases. The results in Fig. 4(e) show that when the background does not interfere, the localization error is smaller than when it does. In the case of an interfering background, the localization error is the same as for the Gaussian PSF, and the interference shows no improvement.
4. Optical model
The signal captured by the camera with a simple grating interferometer can be written as I(x, y) = |h(x − x₀, y − y₀)|² [1 + V cos(k_f x + C)]; the derivation is given in Appendix B. This indicates that the fringes are fixed with respect to the camera frame, i.e. they do not move with the PSF, and the linear model (Eq. (3)) with M = k_f is the correct one.
To verify our model and to test the ability to localize single emitters, we built the grating interferometer and tested it experimentally. The experimental system is illustrated in Fig. 5(a), and in more detail in Appendix A.
The system is designed in a scan/de-scan confocal configuration for fluorescence imaging with a single dual-axis scan mirror (Optics In Motion). The system has two detection paths controlled by a programmable flip mirror. One detection path directs the light onto an APD (Micro Photon Devices, PDM), and an image can be constructed as the scanning mirror directs the beam over the sample. The second detection path contains the interferometer, where a lens positioned before the interferometer forms an imaging plane on the second grating. A lens relay system then re-images the plane of the second grating onto an EMCCD camera (Andor iXon), where this image now carries the fringe modulation on top of the emission PSFs. The overall magnification of the system is 400. Within the interferometer, custom-manufactured transmission gratings were used, with a grating period of 5.97 μm and 81% transmission efficiency into the ±1 orders. The gratings are oriented along the x-axis only and were not rotated.
For the experiments, 60 nm gold nanoparticles (GNP) were used. A drop of nanoparticle solution was deposited onto a microscope slide and left to dry. A coverslip was used to cover the nanoparticles, where the space between the slide and coverslip was filled with immersion oil to reduce reflections.
To verify our ability to localize a single emitter, we used a static GNP sample in the microscope, where a predetermined scanning pattern was fed to the computer controlling the scanning mirror. The microscope was focused on the nanoparticle layer on the slide, see Appendix A. The same single nanoparticle was illuminated by the focused beam and a single frame was captured at each mirror position; by moving the mirror, we move the nanoparticle with respect to the illumination beam. For each frame taken in the mirror-scan experiment we found the region of interest through simple thresholding, summed the columns and the rows to obtain the x and y cross-sections (see insets in Fig. 5(b)), and localized the nanoparticle by fitting the y cross-section to a Gaussian function and the x cross-section to the modulated PSF of Eq. (2). The image obtained when a single emitter is illuminated by the system can be seen in Fig. 5(b), where the insets show the x (along the fringes) and y projections, calculated by summing the 2D PSF over the columns and rows, respectively. The results in Fig. 5(c) show good agreement between the predetermined mirror position and the extracted nanoparticle position, where the error (1 nm to 2 nm) was smaller than the positioning precision of the scanning mirror (6 nm).
Using the same system we can also find the phase-vs-position relation of the system. We find the phase at each nanoparticle position by fitting the obtained PSF to Eq. (2); repeating the experiment for many positions lets us conclude which of the models in Eqs. (1)-(3) is correct. We repeated the same experiment as before, but this time captured 30 frames at each mirror position and averaged over them, for the portion of the U-shaped scan pattern that is horizontal, i.e. the bottom arc. We found the position using the model with an arbitrary estimated phase (Eq. (2)), and found the average phase at each position. The plot of phase vs. position is shown in Fig. 5(d). We see that the phase is linear with position, and the fitted phase slope is approximately equal to the mean fringe spatial frequency (M ≈ k_f). Looking at Eq. (3), this indicates that the fringes are not moving, or barely moving, with the PSF.
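The phase-extraction step can be illustrated with synthetic camera-locked fringes: estimating the per-position phase and fitting a line recovers the slope M = k_f. The projection-based phase estimate below is a simplification of the fit to Eq. (2), and all parameter values are illustrative:

```python
import numpy as np

kf, V, s, pix = 60.0, 0.9, 0.12, 0.025   # rad/um, -, um, um (assumed values)
x = (np.arange(64) - 32) * pix

def fitted_phase(x0):
    env = np.exp(-(x - x0)**2 / (2 * s**2))
    frame = env * (1 + V * np.cos(kf * x + 0.3))   # fringes locked to camera
    # Read off phi of Eq. (2) by projecting the fringe term on exp(i*kf*(x-x0))
    z = np.sum((frame - env) * np.exp(-1j * kf * (x - x0)))
    return np.angle(z)

positions = np.linspace(-0.1, 0.1, 21)             # scan positions (um)
phases = np.unwrap([fitted_phase(p) for p in positions])
M = np.polyfit(positions, phases, 1)[0]            # phase slope (rad/um), ~60
```

Because the synthetic fringes are locked to the camera, the recovered slope equals k_f, reproducing the conclusion drawn from the measurement.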
To validate our results, we performed a numerical propagation simulation of the system in Fig. 5(a). For the propagation simulations we used the Fresnel diffraction integral to calculate and propagate the electric field. We assumed a 4F system with a magnification of 400, an NA of 0.7 and a wavelength of 561 nm. From the interference periodicity measured in our experiment we found an interference angle of 0.2 degrees, and this was chosen as the diffraction angle of the grating. In this case, since we can place the two mirrors very close to each other (there is no space limitation as in the experiment), we did not need the second grating to impose such a shallow angle. For every nanoparticle position a camera frame was generated and the signal was fitted to Eq. (2). The phase vs. position was plotted (see Fig. 5(d)) and the phase slope was found by linear fitting. Here too, the calculated phase slope matches the fringe spatial frequency. This indicates, as in our experimental results and theoretical analysis, that the fringes do not move with respect to the camera plane.
Our method is not limited to confocal microscopy. To show that, we constructed a widefield image by converting the confocal microscope into a widefield microscope, see Appendix A. This was performed by inserting an additional lens, before the scan lens. The result is a focused excitation beam in the back-focal plane of the objective lens, thus creating widefield illumination. For this experiment, we used a similar GNP sample, and the results can be seen in Fig. 5(e). In the image, we can see two nanoparticles, with a fringe pattern imposed on top of the PSFs.
5. Resolution target simulations
A different application of single-emitter localization is in super-resolution techniques such as PALM/fPALM/STORM [26–28], where the obtained resolution shows approximately a tenfold improvement over the diffraction limit.
In the case of PALM/fPALM/STORM, the higher resolution is based on stochastically switching sparse single emitters (fluorescent molecules) on and off and localizing them precisely; thus, an improvement in the localization precision of single emitters may improve super-resolution imaging techniques.
To demonstrate the ability to improve super-resolution using the various PSF models, we ran numerical simulations of ring targets with sub-diffraction-limited size. We assumed an inter-emitter distance of 1 nm and that a single emitter is in the bright state in each frame. We localized the single emitter using the same localization algorithm described for the Monte-Carlo simulations and built the super-resolution frame by plotting a Gaussian spot with a standard deviation of 1 nm at each localized position. For these simulations the pixel size was 50 nm and the background photon count was 0, 3, 5 or 10 photons/pixel. The inner radius of the inner ring is 10 nm and the width of both rings is 10 nm. The total target diameter is 80 nm, well below the diffraction limit (~250 nm for this simulation).
The results of the simulations are shown in Fig. 6. We can clearly see that when the phase is constant as in Eq. (1), see Fig. 6(c), the results are better than in the Gaussian-PSF case, see Fig. 6(a). This is also true, as evident from Fig. 6(f), for a linear phase whose slope differs from the fringe spatial frequency, see Eq. (3). No improvement is seen in the other cases, in agreement with the Monte-Carlo simulations. To quantify the results we looked at the obtained contrast for all cases with background photon counts of 0, 3, 5 and 10 photons per pixel. To calculate the contrast, we took the mean value of the reconstructed image over the region where the ring target equals 1, and the mean value over the region where it equals 0, and divided the two, i.e.

Contrast = ⟨I_rec⟩_(target = 1) / ⟨I_rec⟩_(target = 0).
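The contrast metric can be written compactly; the ring geometry below is a toy stand-in for the simulated target, not the exact target used in the paper:

```python
import numpy as np

def contrast(recon, target):
    """Mean reconstructed intensity where the target is 1, divided by the
    mean where the target is 0, as defined in the text."""
    return recon[target == 1].mean() / recon[target == 0].mean()

# Toy check on a hypothetical ring mask
yy, xx = np.mgrid[-20:21, -20:21]
r = np.hypot(xx, yy)
target = ((r >= 8) & (r < 10)).astype(int)
recon = np.where(target == 1, 5.0, 1.0)
print(contrast(recon, target))   # prints 5.0
```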
The results in Fig. 6(g) show that the best contrast is obtained for the constant-phase case and for the case where the phase is linear with a slope different from the fringe spatial frequency.
To obtain such an improvement for super-resolution, the interference fringes would have to be applied to fluorescence signals, which cannot be realized with the current interferometer. The difficulty comes from the incoherence of the fluorescence signal: the individual wavelengths that compose the broadband fluorescence signal interfere at different spatial locations, averaging the interference patterns over all wavelengths. The result is a severely reduced fringe pattern.
Our results show that an improvement in the localization error of a single emitter can be obtained using a grating interferometer. The results showed that when the fringe spatial frequency k_f is close in value to the phase slope M, the localization error is at best equal to the Gaussian-PSF case when the background photon count is 0; a phase-encoding element that changes the phase slope is required in that case to gain an improvement in localization performance. We can also conclude that at a background photon count of 10 photons per pixel, even when the fringes do not move with respect to the camera, the localization error of the modulated PSF is smaller than in the Gaussian case. This is helpful for our current system, since the grating interferometer imposes interference fringes that do not move with respect to the camera plane. As we have seen, this benefit results from background photons that do not interfere, unlike the signal itself (see Eqs. (5) and (6)). An interfering background occurs when the background consists of reflections of the laser, from the optics or from the sample, while a scattering object is tracked. A non-interfering background occurs when the background comes from ambient light or from auto-fluorescence; one example is tracking a scattering object in an auto-fluorescing environment such as cells. A slightly different application is tracking a single emitter on top of a fluorescent signal coming from the sample [29,30]. In terms of the localization error, this fluorescent signal reduces the performance of the system; however, since the fluorescence does not interfere, by tracking a scattering object we can retain the low localization error, since the influence of the background is suppressed.
We have seen that the random-phase case resembles the case of a phase slope that equals the fringe spatial frequency, i.e. M = k_f. This indicates that pre-scanning the stage with a single emitter, capturing multiple frames, and determining the exact phase slope is an unnecessary step in that particular case, since it does not yield higher localization precision than estimating the phase for each localization. This pre-scan stage would be helpful if the phase slope could be changed using a phase-encoding element.
Tracking in a biological environment may be more challenging than phantom-sample imaging, mainly due to a higher background count, but aberrations may also play a role in increasing the localization error. Our system is intended for use at the output of any microscope, thus retaining a lower localization error than is possible with the same microscope using conventional (non-modulated) imaging. Our analysis shows that the localization error will be the same or better at higher background, see Fig. 7, while aberrations will not affect our method more than conventional PSF localization, since the self-interference of the PSF will still yield the interference fringes necessary for the improvement in localization.
We should also note that our experimental system used two gratings instead of one. While theoretically there is no need for the second grating, it helps reduce the interference angle without increasing the total length of the interferometer. This constraint is imposed by optical-table space and fabrication, and is not a fundamental limit of the system.
To obtain a full tracking system capable of a localization error lower than that of a traditional system, two elements are required: a phase element that encodes the signal and allows the fringes to move, at least partially, with the PSF; and a second mirror system in the vertical direction that imposes fringes in the y direction as well. Such a system would be broadly applicable, since it can be fitted to the output of any microscope, either scanning or wide-field, to improve tracking resolution at low cost. The design of such a system is the subject of future research.
The current system enables improvement in localization in 1D (while the second dimension can be localized by Gaussian fitting). To extend the system into 2D modulated signal, the first grating has to be replaced by a 2D grating, and two additional mirrors should be placed in the y direction, see Fig. 8.
Appendix A additional figures
The following supplementary figures are shown in this appendix:
Figure 7 Comparison of localization performance between Gaussian PSF and modulated PSF for background values of 0 to 100.
Figure 8 Schematic for 2D modulation.
Figure 9 A more detailed experimental system schematic.
Figure 10 CRLB for known and unknown phase cases.
Figure 11 CRLB for modulated PSF as a function of fringe spatial frequency.
Figure 12 Full resolution target results.
Appendix B optical model
To calculate the optical model of our system we start with the system illustrated in Fig. 13. This is a 4F imaging system whose lenses have focal lengths f1 and f2. The distance between the sample (a single emitter) and the first lens is the focal length f1, and the distance between the two lenses is f1 + f2. Just after the second lens there is a grating interferometer, composed of a grating that splits the beam in two and two mirrors that reflect the beams back toward the center. The two beams meet at the camera. ΔL denotes both the distance between the grating and the mirrors and the distance between the mirrors and the camera. ΔL is set to keep the image focused by accounting for the diagonal path of the beams within the interferometer; for that, the camera is located slightly before the image plane of the equivalent system without the interferometer, compensating for the extra diagonal path length of the beams.
In order to calculate the signal captured on the camera we first have to find the electric field just before the grating, at plane p1. Up to that point, the system looks exactly like a 4F system, see Fig. 13(b). Therefore, if we find the electric field in the system of Fig. 13(b), we can then propagate it through the grating system and find the signal recorded on the camera.
We know that for a point source at the object plane, the field at the image plane (plane p2 in Fig. 13(b)) is the PSF of the system. We use that, and back-propagate the field to plane p1.
The field at plane p2 can be represented using its angular spectrum,

E_p2(x) = ∫ A(k_x) e^{i k_x x} dk_x,

where A(k_x) is the angular spectrum of the PSF for an emitter at the origin. A spatial shift x₀ in the sample plane can be expressed as a linear phase in the Fourier domain (the magnification is absorbed into x₀), thus:

E_p2(x) = ∫ A(k_x) e^{−i k_x x₀} e^{i k_x x} dk_x.

The distance between plane p2 and plane p1 equals 2ΔL. Thus, back-propagation to plane p1 gives:

E_p1(x) = ∫ A(k_x) e^{−i k_x x₀} e^{−2iΔL k_z} e^{i k_x x} dk_x,   k_z = √(k² − k_x²).

We can now return to the grating interferometer and the system in Fig. 13(a). The grating function is:

g(x) = Σ_n c_n e^{i n K x},   K = 2π/Λ_g,

where Λ_g is the grating period and c_n are the diffraction-order amplitudes. We multiply the signal at p1 by the grating function, keep only the n = ±1 orders (the zero order is assumed blocked), and forward-propagate a distance ΔL to the mirror plane:

E_±(x) = c_{±1} ∫ A(k_x) e^{−i k_x x₀} e^{−2iΔL k_z} e^{iΔL k_z^{(±)}} e^{i (k_x ± K) x} dk_x,   k_z^{(±)} = √(k² − (k_x ± K)²).

The tilt applied by the mirrors can be written as a linear phase, e^{∓ i k_m x}, where the ∓ accounts for the fact that the tilt angles of the two mirrors are equal with opposite signs, to allow the beams to meet at the same x position. After the mirrors we have:

E_±(x) = c_{±1} e^{∓ i k_m x} ∫ A(k_x) e^{−i k_x x₀} e^{−2iΔL k_z} e^{iΔL k_z^{(±)}} e^{i (k_x ± K) x} dk_x.

Since the mirror angle is set so that the two beams overlap at the camera after the same propagation distance ΔL, k_m is chosen such that each order carries a residual transverse wavevector of ±k_f/2, where k_f is the resulting fringe spatial frequency. Propagating the remaining distance ΔL to the camera plane, the positive diffraction order becomes, in the paraxial approximation,

E_+(x) ≈ c_{+1} e^{i (k_f/2) x} h(x − x₀ − δx) e^{iψ_+},

where h is the PSF envelope, δx is a small lateral offset acquired in the interferometer and ψ_+ is a constant phase. For the negative diffraction order we have

E_−(x) ≈ c_{−1} e^{−i (k_f/2) x} h(x − x₀ + δx) e^{iψ_−}.

The detected intensity is

I(x) = |E_+(x) + E_−(x)|² ≈ |h(x − x₀)|² [1 + V cos(k_f x + ψ)],

with V = 2|c_{+1} c_{−1}|/(|c_{+1}|² + |c_{−1}|²) and ψ = ψ_+ − ψ_−, where δx was assumed small compared with the PSF width. This indicates that the fringes, represented by the cosine term, do not depend on the position of the emitter, x₀; i.e. the fringes are fixed with respect to the camera plane.
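The x₀-independence of the fringe phase can be checked numerically with a 1D paraxial angular-spectrum model of an unfolded version of the interferometer. The grating wavevector, mirror tilt and distances below are toy values, not the experimental parameters, and the tilt is chosen so that the two orders steer back toward overlap in this unfolded geometry:

```python
import numpy as np

lam = 0.561                      # um
k = 2 * np.pi / lam
n, dx = 4096, 0.05               # grid samples, sample spacing (um)
x = (np.arange(n) - n // 2) * dx
kx = 2 * np.pi * np.fft.fftfreq(n, dx)

K = 2.0                          # rad/um, grating wavevector (toy value)
kf = 2.0                         # rad/um, resulting fringe frequency
km = K + kf / 2                  # mirror tilt: steers the orders back together
dL = 100.0                       # um, grating-mirror and mirror-camera distance

def prop(E, z):                  # paraxial angular-spectrum propagation
    return np.fft.ifft(np.fft.fft(E) * np.exp(-1j * kx**2 * z / (2 * k)))

def fringe_phase(x0, s=10.0):
    E0 = np.exp(-(x - x0)**2 / (2 * s**2))       # PSF envelope at the grating
    Ep = prop(E0 * np.exp(1j * K * x), dL) * np.exp(-1j * km * x)   # +1 order
    Em = prop(E0 * np.exp(-1j * K * x), dL) * np.exp(1j * km * x)   # -1 order
    I = np.abs(prop(Ep, dL) + prop(Em, dL))**2   # intensity at the camera
    return np.angle(np.sum(I * np.exp(-1j * kf * x)))

d = fringe_phase(0.0) - fringe_phase(5.0)
d = (d + np.pi) % (2 * np.pi) - np.pi            # wrap to (-pi, pi]; ~0
```

Moving the emitter by 5 µm shifts the PSF envelope but leaves the extracted fringe phase essentially unchanged, in line with the conclusion above.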
National Science Foundation (NSF) (DBI-0845193, 10030539, 1309041); National Institutes of Health (NIH) (R01 NS034307); Utah Science Technology and Research (USTAR) initiative.
References and links
2. A. Yildiz, J. N. Forkey, S. A. McKinney, T. Ha, Y. E. Goldman, and P. R. Selvin, “Myosin V walks hand-over-hand: single fluorophore imaging with 1.5-nm localization,” Science 300(5628), 2061–2065 (2003).
6. A. Kusumi, H. Ike, C. Nakada, K. Murase, and T. Fujiwara, “Single-molecule tracking of membrane molecules: plasma membrane compartmentalization and dynamic assembly of raft-philic signaling molecules,” in Seminars in Immunology (Elsevier, 2005), Vol. 17, pp. 3–21.
7. J. Enderlein, “Tracking of fluorescent molecules diffusing within membranes,” Appl. Phys. B 71(5), 773–777 (2000).
8. M. Lindner, G. Nir, S. Medalion, H. R. Dietrich, Y. Rabin, and Y. Garini, “Force-free measurements of the conformations of DNA molecules tethered to a wall,” Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 83(1), 011916 (2011).
9. P. Lebel, A. Basu, F. C. Oberstrass, E. M. Tretter, and Z. Bryant, “Gold rotor bead tracking for high-speed measurements of DNA twist, torque and extension,” Nat. Methods 11(4), 456–462 (2014).
12. L. M. Browning, T. Huang, and X. N. Xu, “Far-field photostable optical nanoscopy (PHOTON) for real-time super-resolution single-molecular imaging of signaling pathways of single live cells,” Nanoscale 4, 2797–2812 (2012).
14. K. I. Mortensen, L. S. Churchman, J. A. Spudich, and H. Flyvbjerg, “Optimized localization analysis for single-molecule tracking and super-resolution microscopy,” Nat. Methods 7(5), 377–381 (2010).
16. D. Mendlovic, G. Shabtay, Z. Zalevsky, and S. G. Lipson, “The optimal system for sub-wavelength point source localization,” Opt. Commun. 198(4-6), 311–315 (2001).
19. S. R. P. Pavani and R. Piestun, “Three dimensional tracking of fluorescent microparticles using a photon-limited double-helix response system,” Opt. Express 16(26), 22048–22057 (2008).
20. S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. U.S.A. 106(9), 2995–2999 (2009).
22. Y. Shechtman, L. E. Weiss, A. S. Backer, S. J. Sahl, and W. E. Moerner, “Precise 3D scan-free multiple-particle tracking over large axial ranges with Tetrapod point spread functions,” Nano Lett. 15, 4194–4199 (2015).
23. C. G. Ebeling, A. Meiri, J. Martineau, Z. Zalevsky, J. M. Gerton, and R. Menon, “Increased localization precision by interference fringe analysis,” Nanoscale 7(23), 10430–10437 (2015).
24. B. J. Thompson and E. Wolf, “Two-beam interference with partially coherent light,” JOSA 47(10), 895–902 (1957).
25. J. Martineau, R. Menon, A. Meiri, Z. Zalevsky, and J. M. Gerton, “Increasing theoretical localization precision using multi-peaked point spread functions,” Submitted.
26. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006).
30. D. Tian, M. Diao, Y. Jiang, L. Sun, Y. Zhang, Z. Chen, S. Huang, and G. Ou, “Anillin regulates neuronal migration and neurite growth by linking RhoG to the actin cytoskeleton,” Curr. Biol. 25(9), 1135–1145 (2015).