## Abstract

The 3D orientation and location of individual molecules is an important marker for the local environment and the state of a molecule. Therefore dipole localization and orientation estimation is important for biological sensing and imaging. Precise dipole localization is also critical for superresolution imaging. We propose and analyze wide field microscope configurations to simultaneously measure these parameters for multiple fixed dipole emitters. Examination of the images of radiating dipoles reveals how information transfer and precise detection can be improved. We use an information theoretic analysis to quantify the performance limits of position and orientation estimation through comparison of the Cramer-Rao lower bounds in a photon limited environment. We show that bi-focal and double-helix polarization-sensitive systems are attractive candidates for simultaneously estimating the 3D dipole location and orientation.

© 2012 OSA

## 1. Introduction

The photo-physical properties of individual fluorophores depend on both the orientation and location of the molecule with respect to its environment. Therefore, direct measurement of these properties is of interest [1, 2] for sampling the local environment, detecting chemical reactions, measuring molecular motions, sensing conformational changes, and realizing optical resolution beyond the diffraction limit [3–7]. Furthermore, using wide-field microscopy for single-molecule detection allows for parallelized information throughput from a three-dimensional volume, potentially containing many events of interest. However, previously reported single-molecule orientation techniques normally operate within a reduced defocus range [8–10] and/or on one molecule at a time [10, 11], significantly limiting their applicability. While some of these techniques could be extended to operate over a larger depth range, our study below shows that they are not optimal or sensitive enough. These limitations restrict the number of available degrees of freedom to analyze a three-dimensional (3D) volume containing a multitude of molecules.

Single molecules that freely rotate can be modeled as point emitters as a result of the rapid and random orientation changes on a time scale much shorter than the integration time of the detection device. The limitations of standard optical microscopes in localizing isotropically emitting molecules in all three dimensions have been overcome with the use of point spread functions (PSF) engineered specifically for 3D localization of isotropic emitters. Techniques that use multiple defocused image planes [12, 13], astigmatic optics [3], and Double-Helix PSFs [4, 14, 15] have been particularly successful in demonstrating that the optical system response can be tailored to enhance 3D localization performance [16]. Efficient estimators have demonstrated experimentally the possibility of reaching the fundamental limit of 3D localization precision provided by the Cramer-Rao lower bound (CRLB) [17, 18]. The use of an accurate system model, proper estimators, and calibration is critical to achieve the localization precision limit and avoid bias [15, 17–19]. However, the application of these techniques to dipole emitters such as fixed single molecules, where the isotropic assumption is not valid, is not straightforward, and if the proper model and estimator are not used, it can lead to orientation-dependent systematic errors [20–23]. If present, this bias can be eliminated by proper system design and matched reconstruction.

This paper addresses the design of optical microscope systems for the specific task of estimating the location and/or 3D orientation of multiple fixed dipoles in a wide-field system. The goal is to create a system (or systems) that can precisely distinguish among different dipole positions and orientations in 3D space. The response of a system to a dipole input for different positions and orientations is the dipole spread function or, more precisely, the Green’s tensor. Thus Green’s tensor engineering for the estimation of dipole location and orientation generalizes PSF engineering, which addresses the case of isotropic emitters (see Fig. 1(a)). The key difference is the a priori assumption about the nature of the emitting particles and its implications for the optical system design. PSF engineering assumes the imaging of point emitters and has demonstrated the possibility of generating information-efficient responses that encode the desired parameters. Similarly, Green’s tensor engineering addresses the possibility of shaping the optical response to fixed dipoles at varying orientations. With the additional degrees of freedom in dipole orientation, the prior PSF designs may no longer provide optimal, information-efficient solutions, hence opening opportunities for novel task-specific designs. In this paper, solutions based on polarization-encoded imaging are presented and shown to overcome the limitations of polarization-insensitive systems currently in use [8–10, 19, 20, 24]. In Section 2, we describe the analytic expressions used to model microscope systems that image the field distributions of fixed dipoles. We present the field distributions for representative dipole orientations and for specific microscope systems. In Section 3, we use the CRLB to compare these systems based on their ultimate capacity to estimate the location and 3D orientation of fixed dipoles. In Section 4, we compare the 3D localization limits of the fixed dipole emitter with those of an isotropic point emitter.

## 2. Optical system model and analysis

The electric field distribution resulting from dipole radiation has known analytic solutions [25]. Given the position $\left({x}_{0},{y}_{0},{z}_{0}\right)$ and orientation $\left(\Theta ,\Phi \right)$ (see Fig. 1(b)) of a dipole immersed in a medium of refractive index ${n}_{1}$, the far-field radiation pattern in the spherical coordinate system is given by

$$\mathbf{E}\left(\theta ,\varphi \right)\propto \left[\mathrm{sin}\Theta \,\mathrm{cos}\theta \,\mathrm{cos}\left(\varphi -\Phi \right)-\mathrm{cos}\Theta \,\mathrm{sin}\theta \right]\widehat{\theta }+\mathrm{sin}\Theta \,\mathrm{sin}\left(\Phi -\varphi \right)\widehat{\varphi }.$$

The objective lens collimates this field, and the spherical components $\left({E}_{\theta },{E}_{\varphi }\right)$ are mapped onto the Cartesian components $\left({E}_{x}^{b},{E}_{y}^{b}\right)$ at the back aperture through the Jones matrix ${\mathbf{J}}_{OE}\left(\theta ,\varphi \right)$ of the aplanatic objective as follows:

$$\left[\begin{array}{c}{E}_{x}^{b}\\ {E}_{y}^{b}\end{array}\right]={\mathbf{J}}_{OE}\left[\begin{array}{c}{E}_{\theta }\\ {E}_{\varphi }\end{array}\right],\qquad {\mathbf{J}}_{OE}=\frac{1}{\sqrt{\mathrm{cos}\theta }}\left[\begin{array}{cc}\mathrm{cos}\varphi & -\mathrm{sin}\varphi \\ \mathrm{sin}\varphi & \mathrm{cos}\varphi \end{array}\right].$$

A phase mask ${P}_{Mask}\left(x,y\right)$ in the Fourier plane can also modify the transfer function as follows: ${E}_{x}^{b\text{'}}={E}_{x}^{b}\cdot {P}_{Mask}\left(x,y\right)$ and ${E}_{y}^{b\text{'}}={E}_{y}^{b}\cdot {P}_{Mask}\left(x,y\right)$. In either case, the field at the pupil plane is then focused on the detector using a tube lens, which performs to a good accuracy a scaled Fourier transform (*FT*) of the field at the pupil plane, i.e. ${E}_{x}={FT\left\{{E}_{x}^{b\text{'}}\right\}|}_{\lambda f}$ and ${E}_{y}={FT\left\{{E}_{y}^{b\text{'}}\right\}|}_{\lambda f}$. The total intensity at the detector is given by

$$I={\left|{E}_{x}\right|}^{2}+{\left|{E}_{y}\right|}^{2}.$$

From the above equations, it is clear that the emission pattern of the dipole, the total intensity $I$, and the intensities of the two linear polarizations ${\left|{E}_{x}\right|}^{2}$ and ${\left|{E}_{y}\right|}^{2}$ depend on the dipole orientation. Figure 2 shows the associated intensity distributions for a dipole located at the focal plane (Fig. 2(a)) and at 0.2 μm from the focal plane (Fig. 2(b)). Each row provides the resulting intensity distributions for a unique dipole orientation, namely, O1: dipole along $\widehat{y}$ (Θ = 90°, Φ = 90°); O2: dipole along $\widehat{z}$ (Θ = 0°, Φ = 0°); O3: dipole along (Θ = 45°, Φ = 45°). Figure 2 also shows different intensity distributions demonstrating the variability when using either the total intensity or two different polarization state decompositions. Here, all systems under consideration have been standardized to use an objective lens with numerical aperture (NA) of 1.4, and the emission wavelength of the emitter is assumed to be λ = 532 nm. The dipole is assumed to be immersed in a medium of refractive index ${n}_{1}$ = 1.52.
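As a concrete illustration of the forward model above, the sketch below computes the two polarization images of a fixed dipole. It assumes an aplanatic collimation with a $1/\sqrt{\mathrm{cos}\theta }$ apodization, defocus entering as a spherical phase across the pupil, and an FFT standing in for the tube-lens Fourier transform; the function name `dipole_image` and the sampling choices are ours, not the paper's.

```python
import numpy as np

def dipole_image(theta_d, phi_d, z0=0.0, NA=1.4, n1=1.52, wavelength=0.532,
                 npix=128):
    """Sketch of the forward model: far-field dipole radiation, aplanatic
    collimation onto the pupil, optional defocus, and a Fourier transform
    by the tube lens. Returns the |Ex|^2 and |Ey|^2 detector images."""
    # Pupil-plane grid; the aperture edge (rho = 1) maps to sin(theta) = NA/n1.
    u = np.linspace(-2.0, 2.0, npix)
    xx, yy = np.meshgrid(u, u)
    rho = np.hypot(xx, yy)
    phi = np.arctan2(yy, xx)
    inside = rho <= 1.0
    sin_t = np.where(inside, rho, 0.0) * NA / n1
    cos_t = np.sqrt(1.0 - sin_t**2)
    # Dipole unit vector and far-field spherical components E_theta, E_phi.
    px, py, pz = (np.sin(theta_d) * np.cos(phi_d),
                  np.sin(theta_d) * np.sin(phi_d),
                  np.cos(theta_d))
    E_t = (px * np.cos(phi) + py * np.sin(phi)) * cos_t - pz * sin_t
    E_p = -px * np.sin(phi) + py * np.cos(phi)
    # Aplanatic mapping to Cartesian pupil components, with a 1/sqrt(cos)
    # apodization (a standard assumption for collection).
    apod = inside / np.sqrt(cos_t)
    Ex_b = apod * (E_t * np.cos(phi) - E_p * np.sin(phi))
    Ey_b = apod * (E_t * np.sin(phi) + E_p * np.cos(phi))
    # Defocus enters as a spherical phase across the pupil.
    k = 2.0 * np.pi * n1 / wavelength
    defocus = np.exp(1j * k * z0 * cos_t)
    ft = lambda E: np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(E * defocus)))
    Ex, Ey = ft(Ex_b), ft(Ey_b)
    return np.abs(Ex)**2, np.abs(Ey)**2
```

For a dipole along $\widehat{y}$ this model puts nearly all the detected energy in the y-polarized channel, and for a dipole along $\widehat{z}$ the two channels are symmetric, in line with Fig. 2.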

For a dipole oriented along $\widehat{y}$, the intensity ${\left|{E}_{x}\right|}^{2}$ is zero and all the energy lies in ${\left|{E}_{y}\right|}^{2}$. This owes to the fact that the electric field of a dipole is linearly polarized along the dipole axis, which holds even when the dipole is defocused, implying that there is no information about the z-position of a dipole along $\widehat{y}$ in ${\left|{E}_{x}\right|}^{2}$. In order to ensure that neither of the two orthogonal polarization states has zero intensity, irrespective of the orientation of the dipole, we propose a set of elliptical polarization images. The elliptical polarizations are obtained by superposing the orthogonal linear polarizations and can be realized by using a quarter-wave plate with its principal axis at 45° to the *x*-axis, followed by polarizers along the *x* and *y* axes,

$${E}_{1}=\frac{1}{\sqrt{2}}\left({E}_{x}-i{E}_{y}\right),\qquad \left(9\right)$$

$${E}_{2}=\frac{1}{\sqrt{2}}\left({E}_{x}+i{E}_{y}\right).\qquad \left(10\right)$$
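The elliptical channels described above can be sketched in Jones calculus: a quarter-wave plate at 45° followed by x- and y-polarizers projects the pupil field onto $\left({E}_{x}\mp i{E}_{y}\right)/\sqrt{2}$, up to overall phase factors. This is our reading of the construction; the helper below is hypothetical.

```python
import numpy as np

def elliptical_channels(Ex, Ey):
    """Intensities behind a quarter-wave plate at 45 deg followed by x- and
    y-polarizers, applied to the pupil fields (overall phase factors dropped)."""
    E1 = (Ex - 1j * Ey) / np.sqrt(2.0)  # channel behind the x polarizer
    E2 = (Ex + 1j * Ey) / np.sqrt(2.0)  # channel behind the y polarizer
    return np.abs(E1)**2, np.abs(E2)**2
```

For a purely y-polarized field both channels receive half the energy, which is precisely the motivation given above: neither channel goes dark for any dipole orientation.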

The Green’s tensor response can also be tailored by using phase masks. For instance, the last two columns of Fig. 2 show a polarization-sensitive (PS) system that uses a double-helix (DH) phase mask [14] in the Fourier plane. The DH phase mask has been extensively used for 3D localization of isotropic emitters over an extended depth range. Here we analyze its use for Green’s tensor engineering applied to fixed dipoles. Owing to the design of the DH mask, it generates two lobes that rotate as the dipole is defocused, but for fixed dipoles the relative strength and shape of the lobes are significantly affected by the dipole orientation.

The simulated images in Fig. 2 reveal that dipole localization/orientation information is carried in the images at the orientations investigated and that the total intensity microscope, the linear polarization microscope (with or without the phase mask), and the elliptical polarization microscope are worth investigating as potential candidate solutions.

In what follows we compare different optical systems which are designed to employ either total intensity images, linear polarization images, or elliptical polarization images (Fig. 3 ) as a means to retrieve information from the system towards localization/orientation estimation. In addition to investigating the utility of polarization modulation, we propose including the bi-focal microscope configuration, i.e. simultaneously capturing the images at two different focal planes. This configuration has already been demonstrated to be useful for axial localization of isotropic emitters [12, 13]. It is noteworthy that a myriad of different systems could be realized. The systems considered here represent an interesting subset and act as a proof of principle of the possibilities available for Green’s tensor engineering. Also, because of the inherent low signal collection in single-molecule imaging, each system is selected so that no photons exiting the objective pupil are lost beyond the neglected minor losses at the passive devices (polarizers, waveplates, lenses, and beam splitters).

The schematic in Fig. 3(a) shows the excitation laser and the microscope objective and represents the signal processing unit as a black box. Seven optical signal processing systems are split into three categories for analysis purposes. Category A (Fig. 3(b), Fig. 3(c), and Fig. 3(d)) collects information from a single focal plane. Category B (Fig. 3(e), Fig. 3(f), and Fig. 3(g)) uses two images located at two different focal depths. As shown, the systems in Fig. 3(b) and Fig. 3(e) image the total intensity without polarization sensitivity. The systems in Fig. 3(c) and Fig. 3(f) use two imaging channels with orthogonal linear polarization states, where the dipole emission is collected by a microscope objective and split by a polarizing beam splitter in the pupil plane. The systems in Fig. 3(d) and Fig. 3(g) use two imaging channels that employ orthogonal elliptical polarizations as described in Eqs. (9) and (10). The emission light goes through a quarter-wave plate with its fast axis aligned at 45° and is then split using a polarizing beam splitter; each channel is imaged separately using a pair of tube lenses. Category C considers the use of a linear polarization system with the addition of phase masks. In particular, Fig. 3(h) shows the PS-DH system [4, 14] with a DH phase mask placed in the Fourier plane.

The intensity distributions in Fig. 2 show that for dipoles oriented along O1:(Θ = 90°, Φ = 90°) and O2:(Θ = 0°, Φ = 0°) some of the systems in Fig. 3 might lack information about the dipole’s position and/or orientation, whereas there is always finite information for dipoles oriented along O3:(Θ = 45°, Φ = 45°). We further consider the results for two intermediate orientations, O4:(Θ = 30°, Φ = 30°) and O5:(Θ = 60°, Φ = 60°). Figure 3(h) shows these five dipole orientations. These orientations were chosen as a representative set of the full 4π steradian solid angle.

## 3. Cramer-Rao lower bound

References [26] and [27] introduced information theoretic analyses for the study of the limits of precision in dipole orientation. In Ref. [26], we analyzed the 5D dipole estimation problem for the polarization system of Fig. 3(c). Meanwhile, Ref. [27] performed Fisher information calculations for orientation estimation using configurations that allow estimation of only one molecule at a time. Therefore, that analysis did not include localization estimation or the effects of defocus. In contrast, here we analyze both the orientation and localization precision limits as functions of defocus and orientation for multiple configurations. All the systems analyzed allow for wide-field imaging and hence the estimation of the location and orientation of multiple dipoles in parallel.

We evaluate the performance of single-molecule localization/orientation estimation by use of Cramer-Rao lower bound (CRLB) analysis. The CRLB is a fundamental quantity associated with the lowest variance achievable by any unbiased estimator of the parameters of interest [28]. The lowest possible standard deviation of an unbiased estimator is found from the square root of the variance (CRLB),

$${\sigma }_{LB}=\sqrt{\text{CRLB}}.\qquad \left(11\right)$$

The standard deviation directly yields the error lower bound in the same units as the measured data. We assume the imaging systems to be shift-invariant in the transverse direction, which is a good approximation in the central region of the field of view. Hence, the CRLB remains constant with transverse shifts. For 3D imaging and localization, we are interested in the minimum localization volume. One measure of this uncertainty volume is

$${\sigma }_{3D}=\frac{4}{3}\pi {\sigma }_{x}{\sigma }_{y}{\sigma }_{z}.\qquad \left(12\right)$$

Here, ${\sigma}_{x}$, ${\sigma}_{y}$, and ${\sigma}_{z}$ represent the lower-bound standard deviations along the three Cartesian coordinates, and ${\sigma}_{3D}$ is the volume of the ellipsoid generated by using these standard deviations as the three semi-principal axes. Similarly, for estimating the orientation of a dipole, we can define the solid angle error as

$${\sigma }_{\Omega }=\mathrm{sin}\Theta \,{\sigma }_{\Theta }{\sigma }_{\Phi }.\qquad \left(13\right)$$

Here, ${\sigma}_{\Theta}$ and ${\sigma}_{\Phi}$ are the lower-bound standard deviations for the polar and azimuthal angles, and ${\sigma}_{\Omega}$ represents the solid angle of the cone generated using these values as the polar and azimuthal angular extents. Fixed dipoles lead to a 5-parameter estimation problem, and defining the quantities in the above equations facilitates the analysis, comparison, and visualization. Appendix A presents further details of the calculation of the CRLB.

#### 3.1 Estimation error bounds as a function of defocus

We compare the CRLB for dipole position and orientation in the shot-noise limit using 5000 photons per image for the systems previously discussed. Figures 4(a) and 4(b) show the average of the standard deviation for 3D position estimation $({\sigma}_{3D})$ and solid angle estimation $({\sigma}_{\Omega})$, respectively, over the five dipole orientations shown in Fig. 3(h). (For solid angle estimation, we average over four dipole orientations because for a dipole along the optical axis (Θ = 0°), sinΘ = 0 and the solid angle error is indeterminate.) These are respectively denoted as $\text{avg}({\sigma}_{3D})$ and $\text{avg}({\sigma}_{\Omega})$. It can be seen from Fig. 4 that, for an in-focus molecule, the $\text{avg}({\sigma}_{3D})$ and $\text{avg}({\sigma}_{\Omega})$ for the single-channel system [TI np: Fig. 3(b)] and the linear polarization system [Lin pol: Fig. 3(c)] increase rapidly, whereas for the elliptical system [Elp pol: Fig. 3(d)] they have relatively smaller values. These high averages are due to the fact that, near focus, these three systems carry either no or very little information about z-position variations of dipoles that lie in the x-y plane (Θ = 90°) and dipoles that are oriented along the optical axis (Θ = 0°). Also, far from focus, the linear and elliptical polarization systems exhibit more precise localization than the total intensity system. On the other hand, the PS-DH system shows a finite $\text{avg}({\sigma}_{3D})$ and $\text{avg}({\sigma}_{\Omega})$ over the complete defocus range. In a smaller defocus range and away from focus, its $\text{avg}({\sigma}_{3D})$ is less precise than that of the clear-aperture polarization-sensitive systems. As for the solid angle error, the PS-DH system has the lowest and most uniform $\text{avg}({\sigma}_{\Omega})$. Thus, if uniform performance is needed over a defocus range of –z to +z, the widely used single-channel system [8–10,19,20,24] will not be the best candidate.

In order to analyze the bi-focal systems, a defocus of 0.4 μm between the two focal planes was chosen by optimizing the average CRLB for the dipole oriented along O3:(Θ = 45°, Φ = 45°). This orientation was chosen since it gives a finite CRLB for all the different systems and for all defocus values. Among the three bi-focal systems, the system that measures the total intensity has a substantially higher $\text{avg}({\sigma}_{3D})$ and $\text{avg}({\sigma}_{\Omega})$ throughout the defocus region compared to the bi-focal systems that employ polarization. The bi-focal systems with linear and elliptical polarization present a more uniform curve in the region of interest, with the linear polarization system showing a lower CRLB than the elliptical one for solid angle estimation. It is noteworthy that the bi-focal linear curve is asymmetric about z_{0} = 0 and has a spike at a defocus of −0.2 μm. Since the radiation of a dipole is linearly polarized, the Ex channel of the bi-focal linear system, for a dipole along $\widehat{y}$ (Θ = 90°, Φ = 90°), carries no information at focus; this, coupled with the Ey channel focused at z_{0} = −0.4 μm, results in a spike in the CRLB curve at z_{0} = −0.2 μm. Thus, for 3D localization, depending on the region of interest, either the bi-focal elliptical system or one of the single-plane linear or elliptical systems would be a suitable candidate. However, for orientation estimation, the PS-DH system shows the lowest CRLB among the systems considered, followed closely by the bi-focal linear polarization system.

#### 3.2 Estimation error bounds as a function of azimuthal and polar angles

Localization and orientation estimation of a dipole are functions of both the dipole’s position and orientation. In Fig. 5, we show the lower bound of the standard deviation for volume localization $({\sigma}_{3D})$ and the orientation solid angle $({\sigma}_{\Omega})$ for a dipole with respect to the azimuthal and polar angles. Figures 5(a) and 5(b) show ${\sigma}_{3D}$ and ${\sigma}_{\Omega}$, respectively, with the top row displaying them as a function of angle Φ for Θ = 90°, and the bottom row displaying them as a function of angle Θ for Φ = 0°. Note that at Θ = 90° an in-focus dipole has a rapidly increasing CRLB; thus these plots were made for a defocus of z_{0} = 0.1 μm to gain qualitative insight. Both ${\sigma}_{3D}$ and ${\sigma}_{\Omega}$ have a nearly constant CRLB for all seven systems as a function of the azimuthal angle Φ. For estimation of the solid angle, the Lin pol, Bf-Lin pol, and PS-DH systems show the lowest CRLB, followed by the Elp pol and TI np systems. Indeed, as the dipole rotates in Φ, the intensity distributions of the (Bf-)TI np and (Bf-)Elp pol systems rotate, thus rotating the major axis of the elliptical pattern of the dipole emission, whereas for the (Bf-)Lin pol and PS-DH systems there is energy exchange between the two channels. As for volume localization, the Elp pol and Lin pol systems that focus at the same plane have better precision, but only over a short range in the axial dimension (see Fig. 4(a)).

The bottom row in Fig. 5 shows the volume and solid angle estimation precision with respect to the polar angle Θ. As shown in Fig. 5(a), for volume localization with respect to Θ, all systems except the non-polarization-sensitive ones provide a fairly uniform and low CRLB, implying a better lower bound on the estimation error. On the other hand, Fig. 5(b) shows that, as a function of Θ, estimation of the solid angle becomes difficult with the non-polarization-sensitive systems, whereas the PS-DH system has the smallest lower bound in estimating the solid angle. Thus, overall, the PS-DH system has the lowest and most uniform ${\sigma}_{\Omega}$ for solid angle estimation, followed closely by the Bf-Lin pol system.

#### 3.3 Estimation error bound (${\sigma}_{LB}$) as a function of defocus and polar angle

For shift invariant systems, ${\sigma}_{3D}$and ${\sigma}_{\Omega}$ are in general functions of Θ, Φ, and z. Therefore, they could be represented in a 3D space for joint optimization. The cross sections presented in Fig. 4 and Fig. 5 are representative of the behavior of the systems and help identify the best systems. To further clarify the power of the CRLB analysis we show, in Fig. 6(a) and Fig. 6(b), surface plots of the lower bounds for the commonly used single channel imaging system [8–10, 19, 20, 24] and the best two-channel systems identified above. These plots show the striking improvement in precision achievable by design via the CRLB metric. Typical improvements are threefold in 3D position estimation and fourfold in orientation estimation. Also from Fig. 4(a) it can be seen that the Bf-Elp pol system has a more uniform CRLB than the Lin pol system, although it performs worse near focus. We compare the volume localization of these two systems in Fig. 6(c). Similarly, for solid angle error (Fig. 5), the PS-DH system and the Bf-Lin pol system are the strongest contenders. In Fig. 6(d) we compare these two systems as functions of defocus and polar angle Θ. This analysis can be extended to include parametric surfaces as a function of specific system parameters, such as number of photons, background noise, etc., which could be used for further system optimization.

## 4. Localization of an isotropic point emitter vs. dipole emitter

A freely and randomly rotating dipole can be modeled as an isotropic point source emitter. The localization of isotropic emitters constitutes a different problem than that of the localization of fixed dipoles because isotropic emitters lead to a three-parameter estimation problem, while fixed dipoles require the estimation of five parameters. Therefore, because the prior knowledge about the object to be localized is different, special care has to be taken in understanding the limitations of a performance comparison.

Here we compare the CRLB of dipole localization with that of a point-source emitter. A point source emits a spherical wave, making the intensity equal in all directions, unlike a dipole, whose intensity varies as sin^{2}θ, where θ is the angle measured from the axis of the dipole. We assume that the total number of photons emitted by the point source and the dipole is equal. Thus, for a dipole oriented perpendicular to the optical axis, we derive the ratio of the number of photons captured by the lens by integrating the respective radiation patterns over the collection cone, where *t _{m}* is the half-angle of the cone captured by the objective lens. For a system with NA = 1.4 and an immersion medium of index n = 1.52, *t _{m}* ≈ 67° and this ratio is ≈1.5. Thus, if the number of detected photons for the fixed dipole oriented perpendicular to the optical axis is 5000, for the isotropic case it will be ≈3333. The CRLB is calculated in a similar way using Poisson noise (see details in Appendix A), and the lower bound of the volume localization error is calculated as in Eq. (12).
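As a quick sanity check of the collection geometry, the snippet below reproduces the ≈67° half-angle and adds the captured photon fraction for the isotropic source (that ~30% figure is our derived addition, not stated in the text; the dipole-case count requires integrating the sin^{2}θ pattern over the same cone, as described above).

```python
import numpy as np

# Collection geometry quoted in the text: NA = 1.4 objective, n = 1.52 medium.
NA, n = 1.4, 1.52
t_m = np.arcsin(NA / n)            # half-angle of the captured cone
# For an isotropic point source, the captured fraction is the fractional
# solid angle of the cone.
f_iso = (1.0 - np.cos(t_m)) / 2.0
print(np.degrees(t_m), f_iso)      # ~67 deg; roughly 30% of emitted photons
```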

Figure 7 shows the volume localization error for a fixed dipole compared with that of the isotropic emitter. The localization of the isotropic emitter is clearly independent of the orientation angles (Θ, Φ). Because we assume the fixed dipole and the isotropic emitter emit the same number of photons, the fixed dipole can be localized more precisely at orientations around the normal to the optical axis, which are the directions of maximum radiation. Similarly, fixed dipoles oriented between 0° and 50° from the optical axis have poorer localization precision. The relative difference is explained by the fact that the number of photons detected for the dipole is larger as long as the dipoles are oriented closer to the transverse plane (Θ = 90°), while the difference in image shape has only a second-order effect.

## 5. Conclusion

In conclusion, the CRLB analysis provides a powerful tool for the design of fixed dipole localization/orientation imaging systems. The main conclusion from this analysis is that when imaging fixed dipoles under shot noise limited conditions, systems that are sensitive to polarization are stronger candidates for estimating the 3D position and 3D orientation of the dipole. In particular we have shown that the commonly used systems that acquire the total intensity of a defocused single image provide the poorest localization and orientation performance among the systems considered here. Clearly, this is primarily due to the fact that the light emitted from a fixed dipole is polarized. Hence, splitting the emitted radiation in orthogonal polarization states helps estimate these parameters more efficiently by making the system more sensitive to changes in position or orientation.

Furthermore, we quantified the performance limits from a set of candidate imaging systems by comparing their CRLB. We also demonstrated the importance of multifocal imaging in terms of the CRLB for localization and orientation estimation. The CRLB analysis establishes that position estimations can be uniformly improved by using a two channel bi-focal polarization sensitive system, while a single focus plane polarization system might provide a lower CRLB for a short range defocus region. On the other hand, the orientation of a dipole is best estimated using a two channel bi-focal linear polarization sensitive system or a polarization-sensitive double-helix system. These results open further possibilities to solve the five-parameter estimation problem for fixed dipoles via Green’s tensor function engineering.

## 6. Appendix A: Cramer-Rao lower bound calculations

In this section we present the details of the Cramer-Rao lower bound (CRLB) calculation for the various systems described in the paper. The CRLB is the inverse of the Fisher information (FI) matrix and is given by [28]

$$\text{CRLB}\left({\psi }_{m}\right)={\left[{\text{FI}}^{-1}\right]}_{m,m},$$

where $\psi$ is the unknown parameter vector to be estimated, which in the case of dipole estimation comprises the position and orientation of the dipole. Thus $\psi =\left[{x}_{0},{y}_{0},{z}_{0},\Theta ,\Phi \right]$ and *m* is 1, 2, 3, 4, or 5. The FI matrix is a 5x5 matrix calculated as follows:

$${\text{FI}}_{m,n}=\sum _{i,j}E\left[\frac{\partial \mathrm{ln}\,{p}_{i,j}\left(k|\psi \right)}{\partial {\psi }_{m}}\cdot \frac{\partial \mathrm{ln}\,{p}_{i,j}\left(k|\psi \right)}{\partial {\psi }_{n}}\right],$$

where ${p}_{i,j}\left(k|\psi \right)$ is the probability density function (PDF) for the pixel in the *i*-th row and *j*-th column, *E* refers to the expectation, ln is the natural logarithm, and the indices *m*, *n* are 1, 2, 3, 4, or 5. FI is additive, and thus the summation denotes the addition of the FI over all the pixels of the detector. For the multiple-channel systems described in the main text, the FI of the system is calculated by adding the FI of each channel. Different noise sources can be chosen by appropriately choosing the PDF.

In order to calculate the CRLB, we first calculate the image at the detector as described in Section 2. A Poisson noise model is then assumed for the images. The derivative of the natural logarithm of these images (the PDF) is taken with respect to each of the five variables. Finally, the expectations from all the pixels are added together to obtain the FI. This procedure is repeated for all channels of the system, and the total FI is obtained by adding them. The CRLB is then obtained by inverting the FI matrix and using the respective diagonal elements for each of the unknown parameters. The standard deviation σ is then calculated by taking the square root of the CRLB. The number of photons captured depends on the orientation of the dipole with respect to the objective lens because the intensity of dipole radiation varies as sin^{2}θ, where θ is the angle from the axis of the dipole. Thus, a dipole that is perpendicular to the optical axis will have more photons detected than any other dipole orientation as long as the half-angle of the captured cone is less than 90°. Among the representative orientations considered, the dipole along $\widehat{y}$ (Θ = 90°, Φ = 90°) will have the most photons captured, and the PDF is normalized with respect to the dipole along $\widehat{y}$ for the CRLB calculations. We use a total of 5000 photons for the dipole along $\widehat{y}$.
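The procedure above can be sketched for a toy model. The code below implements the Poisson-noise Fisher information, FI_{m,n} = Σ (∂μ/∂ψ_m)(∂μ/∂ψ_n)/μ per pixel, with numerical derivatives, and applies it to a hypothetical Gaussian-spot image as a stand-in for the dipole images of Section 2; `gaussian_spot` and all sampling parameters are our assumptions, not the paper's system model.

```python
import numpy as np

def poisson_fisher(model, psi, deltas):
    """Fisher information matrix for a Poisson image model.

    model(psi) -> 2D array of expected photon counts per pixel.
    Derivatives are taken numerically with central differences; for Poisson
    noise the per-pixel contribution reduces to (dmu_m * dmu_n) / mu.
    """
    mu = model(psi)
    grads = []
    for m, d in enumerate(deltas):
        p_plus, p_minus = np.array(psi, float), np.array(psi, float)
        p_plus[m] += d
        p_minus[m] -= d
        grads.append((model(p_plus) - model(p_minus)) / (2.0 * d))
    n = len(psi)
    FI = np.empty((n, n))
    for m in range(n):
        for k in range(n):
            FI[m, k] = np.sum(grads[m] * grads[k] / np.maximum(mu, 1e-12))
    return FI

def gaussian_spot(psi, N=5000, s=1.0, npix=41, pitch=0.25):
    """Toy image: N photons in a sampled 2D Gaussian spot of width s,
    with psi = [x0, y0] (transverse localization only)."""
    x = (np.arange(npix) - npix // 2) * pitch
    X, Y = np.meshgrid(x, x)
    g = np.exp(-((X - psi[0])**2 + (Y - psi[1])**2) / (2.0 * s**2))
    return N * g / g.sum()

FI = poisson_fisher(gaussian_spot, [0.0, 0.0], [1e-4, 1e-4])
crlb = np.linalg.inv(FI)
sigma = np.sqrt(np.diag(crlb))
```

For this toy model the recovered bound approaches the familiar s/√N transverse localization limit, which is a useful check before applying the same machinery to the vectorial dipole images.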

The CRLB for the isotropic point emitter is calculated in a similar way using the Poisson noise model. However, since localization of an isotropic point source is a 3-parameter problem, the unknown parameter vector in this case is given by $\psi =\left[{x}_{0},{y}_{0},{z}_{0}\right]$, leading to a FI matrix of size 3x3.
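The scalar error measures used in the main text can then be assembled from the CRLB diagonal. A minimal sketch, assuming the ellipsoid-volume convention for ${\sigma}_{3D}$ and a sinΘ·${\sigma}_{\Theta}$·${\sigma}_{\Phi}$ solid-angle patch for ${\sigma}_{\Omega}$ (the latter is our reading of the cone description in Section 3):

```python
import numpy as np

def sigma_3d(sx, sy, sz):
    """Uncertainty-volume measure: volume of the ellipsoid with the three
    localization standard deviations as semi-principal axes."""
    return 4.0 / 3.0 * np.pi * sx * sy * sz

def sigma_omega(theta, s_theta, s_phi):
    """Solid-angle error patch around polar angle theta (assumed form
    sin(theta) * s_theta * s_phi); it vanishes formally toward theta = 0,
    where the azimuth, and hence s_phi, is undefined."""
    return np.sin(theta) * s_theta * s_phi
```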

## Acknowledgments

We thankfully acknowledge support from NSF awards DBI-0852885 and DGE-0801680.

## References and links

**1. **W. E. Moerner, “New directions in single-molecule imaging and analysis,” Proc. Natl. Acad. Sci. U.S.A. **104**(31), 12596–12602 (2007).

**2. **E. Toprak and P. R. Selvin, “New fluorescent tools for watching nanometer-scale conformational changes of single molecules,” Annu. Rev. Biophys. Biomol. Struct. **36**(1), 349–369 (2007).

**3. **B. Huang, W. Wang, M. Bates, and X. Zhuang, “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science **319**(5864), 810–813 (2008).

**4. **S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. U.S.A. **106**(9), 2995–2999 (2009).

**5. **E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science **313**(5793), 1642–1645 (2006).

**6. **S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. **91**(11), 4258–4272 (2006).

**7. **M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods **3**(10), 793–796 (2006).

**8. **M. Böhmer and J. Enderlein, “Orientation imaging of single molecules by wide-field epifluorescence microscopy,” J. Opt. Soc. Am. B **20**(3), 554 (2003).

**9. **K. I. Mortensen, L. S. Churchman, J. A. Spudich, and H. Flyvbjerg, “Optimized localization analysis for single-molecule tracking and super-resolution microscopy,” Nat. Methods **7**(5), 377–381 (2010).

**10. **M. A. Lieb, J. M. Zavislan, and L. Novotny, “Single-molecule orientations determined by direct emission pattern imaging,” J. Opt. Soc. Am. B **21**(6), 1210 (2004).

**11. **M. R. Foreman, C. M. Romero, and P. Török, “Determination of the three-dimensional orientation of single molecules,” Opt. Lett. **33**(9), 1020–1022 (2008).

**12. **M. F. Juette, T. J. Gould, M. D. Lessard, M. J. Mlodzianoski, B. S. Nagpure, B. T. Bennett, S. T. Hess, and J. Bewersdorf, “Three-dimensional sub-100 nm resolution fluorescence microscopy of thick samples,” Nat. Methods **5**(6), 527–529 (2008).

**13. **S. Ram, J. Chao, P. Prabhat, E. S. Ward, and R. J. Ober, “A novel approach to determining the three-dimensional location of microscopic objects with applications to 3D particle tracking,” Proc. SPIE **6443**, 64430D (2007).

**14. **S. R. P. Pavani, J. G. DeLuca, and R. Piestun, “Polarization sensitive, three-dimensional, single-molecule imaging of cells with a double-helix system,” Opt. Express **17**(22), 19644–19655 (2009).

**15. **G. Grover, S. Quirin, C. Fiedler, and R. Piestun, “Photon efficient double-helix PSF microscopy with application to 3D photo-activation localization imaging,” Biomed. Opt. Express **2**(11), 3010–3020 (2011).

**16. **G. Grover, S. R. P. Pavani, and R. Piestun, “Performance limits on three-dimensional particle localization in photon-limited microscopy,” Opt. Lett. **35**(19), 3306–3308 (2010).

**17. **F. Aguet, S. Geissbühler, I. Märki, T. Lasser, and M. Unser, “Super-resolution orientation estimation and localization of fluorescent dipoles using 3-D steerable filters,” Opt. Express **17**(8), 6829–6848 (2009).

**18. **S. Quirin, S. R. P. Pavani, and R. Piestun, “Optimal 3D single-molecule localization for superresolution microscopy with aberrations and engineered point spread functions,” Proc. Natl. Acad. Sci. U.S.A. **109**(3), 675–679 (2012).

**19. **D. Patra, I. Gregor, and J. Enderlein, “Image analysis of defocused single-molecule images for three-dimensional molecule orientation studies,” J. Phys. Chem. A **108**(33), 6836–6841 (2004).

**20. **A. P. Bartko and R. M. Dickson, “Imaging three-dimensional single molecule orientations,” J. Phys. Chem. B **103**(51), 11237–11241 (1999).

**21. **J. Engelhardt, J. Keller, P. Hoyer, M. Reuss, T. Staudt, and S. W. Hell, “Molecular orientation affects localization accuracy in superresolution far-field fluorescence microscopy,” Nano Lett. **11**(1), 209–213 (2011).

**22. **J. Enderlein, E. Toprak, and P. R. Selvin, “Polarization effect on position accuracy of fluorophore localization,” Opt. Express **14**(18), 8111–8120 (2006).

**23. **S. Stallinga and B. Rieger, “Accuracy of the Gaussian point spread function model in 2D localization microscopy,” Opt. Express **18**(24), 24461–24476 (2010).

**24. **E. Toprak, J. Enderlein, S. Syed, S. A. McKinney, R. G. Petschek, T. Ha, Y. E. Goldman, and P. R. Selvin, “Defocused orientation and position imaging (DOPI) of myosin V,” Proc. Natl. Acad. Sci. U.S.A. **103**(17), 6495–6499 (2006).

**25. **L. Novotny and B. Hecht, *Principles of Nano-Optics* (Cambridge University Press, 2006), Chap. 10.

**26. **A. Agrawal, S. Quirin, G. Grover, and R. Piestun, “Limits of 3D dipole localization and orientation estimation with application to single-molecule imaging,” in *Computational Optical Sensing and Imaging*, OSA Technical Digest (CD) (Optical Society of America, 2011), paper CWA4.

**27. **M. R. Foreman and P. Török, “Fundamental limits in single-molecule orientation measurements,” New J. Phys. **13**(9), 093013 (2011).

**28. **S. M. Kay, *Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory* (Prentice Hall, 1993), Chap. 3.