Optica Publishing Group

Determining the rotational mobility of a single molecule from a single image: a numerical study

Open Access

Abstract

Measurements of the orientational freedom with which a single molecule may rotate or ‘wobble’ about a fixed axis have provided researchers invaluable clues about the underlying behavior of a variety of biological systems. In this paper, we propose a measurement and data analysis procedure based on a widefield fluorescence microscope image for quantitatively distinguishing individual molecules that exhibit varying degrees of rotational mobility. Our proposed technique is especially applicable to cases in which the molecule undergoes rotational motions on a timescale much faster than the framerate of the camera used to record fluorescence images. Unlike currently available methods, sophisticated hardware for modulating the polarization of light illuminating the sample is not required. Additional polarization optics may be inserted in the microscope’s imaging pathway to achieve superior measurement precision, but are not essential. We present a theoretical analysis, and benchmark our technique with numerical simulations using typical experimental parameters for single-molecule imaging.

© 2015 Optical Society of America

1. Introduction

Ever since the first experimental measurements of single-molecule orientational dynamics [1], polarization optics have played a critical role in determining the rotational mobility of the fluorescent molecules under observation. A fluorescent molecule may be regarded to leading order as an oscillating electric dipole. Hence, the electric field emitted by a molecule has a characteristic polarized far field pattern according to the orientation of that molecule’s emission dipole moment. Furthermore, the efficiency with which a molecule may be excited by a light source will also depend upon the alignment of the polarization of the incident light relative to the molecule’s absorption dipole moment. Changes in fluorescence intensity, as a function of excitation/emission polarization, may thus be related to the orientational behavior of the molecule under observation. A simply measured quantity is the linear dichroism (LD) of a molecule, which may be computed as:

\[ \mathrm{LD} = \frac{I_{0^\circ} - I_{90^\circ}}{I_{0^\circ} + I_{90^\circ}} \]
where I0° is the total intensity measured after the fluorescence has passed through a polarizer, and I90° is the intensity measured through a perpendicularly oriented polarizer. In practice, both I0° and I90° may be acquired simultaneously using a polarizing beamsplitter and two separate photodetectors (or separate regions of a single image sensor). The linear dichroism of a molecule is a useful quantity because it tends toward zero as the molecule becomes more mobile. It may thus be used to establish a bound on the range of orientations visited by the fluorophore over the integration time of the photodetector. Furthermore, by recording multiple LD measurements of the same molecule [2–6] using different excitation polarizations, one may determine the underlying amount of rotational freedom with useful precision. For in-depth comparisons of various polarization configurations, and their relative precisions given limited signal, see [7,8].
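As a concrete illustration, the LD computation and its limiting cases can be sketched in a few lines. (Python/NumPy is used purely for illustration; the intensities below are hypothetical photon counts, not data from the paper.)

```python
import numpy as np

def linear_dichroism(I0, I90):
    """Linear dichroism from two orthogonally polarized intensity measurements."""
    return (I0 - I90) / (I0 + I90)

# An immobile in-plane molecule aligned with the 0-degree analyzer:
assert np.isclose(linear_dichroism(1000.0, 0.0), 1.0)
# Equal intensity in both channels (e.g. a freely rotating molecule): LD -> 0
assert np.isclose(linear_dichroism(500.0, 500.0), 0.0)
```

An LD of zero is consistent with several distinct orientational behaviors, which is precisely the degeneracy discussed next.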

In previous work, LD measurements have played a crucial role in helping researchers quantify the mechanical properties of DNA [2,9], and understand the complex mechanisms governing the movement of motor proteins [10,11]. However, the technique has some notable limitations. For example, consider the following (Fig. 1): LD measurements are acquired for three different molecules. The first molecule is rotationally immobile, and oriented parallel to the optical axis. The second molecule is also immobile, but oriented perpendicular to the optical axis, at 45° with respect to the polarizing beamsplitter placed in the emission pathway. Finally, the third molecule rotates about the optical axis on a timescale faster than the temporal resolution of the photodetectors. All three molecules will yield an LD measurement of zero, even though they exhibit completely different orientational characteristics! In order to break such degeneracies, it is often necessary to introduce polarization modulation optics in the illumination pathway, and/or repeatedly measure the fluorescence emitted from the same molecule under different excitation polarizations. However, in widefield imaging studies, it may only be feasible to record a single LD measurement per molecule, using a single excitation polarization [12,13]. In this case, the potential to misinterpret an LD data set is quite real, since a single LD measurement cannot completely characterize the rotational behavior of a molecule. Reliance on LD data alone may obscure relevant physical phenomena or, as we will demonstrate in a numerical experiment, may cause an experimenter to form patently incorrect conclusions about a specimen under observation.


Fig. 1 Examples of rotational behavior which yield identical linear dichroism measurements. (a) Immobile molecule aligned along the optical axis. Orientation of polarization analyzers indicated with respect to microscope focal plane. (b) Immobile molecule aligned in plane of coverslip, at 45° angle to each of the polarization analyzers. (c) Molecule rotating about the optical axis.


In order to avoid the ambiguities inherent in LD measurements, many researchers have turned to widefield image-based analysis in order to determine the orientation of single molecules [14–22]. The combination of image-based analysis with polarized detection configurations has been considered in [23]. Using slightly defocused images of single dye molecules in order to deduce orientation, researchers have studied the stepping behavior of the myosin V motor protein [24], and have gained insight into the optical biasing of Brownian rotations when molecules are attached to a thin polymer film [25]. Furthermore, defocused imaging has recently been proposed as a means of studying the photophysics of chiral molecules [26], and molecules containing multiple chromophores [27]. Applications of orientation imaging have assumed that a fluorophore is either fixed in orientation, or rotating on a timescale far longer than the integration time of the camera. However, molecules commonly undergo rotational motions on timescales much faster than the ~ms temporal resolution of state-of-the-art image sensors. We address this shortcoming by proposing a method to determine the amount of wobble that a molecule undergoes, in addition to that molecule's mean orientation, using just one camera frame. Unlike alternative image-based approaches for determining rotational mobility [28], ours does not rely upon any specific rotational diffusion model, nor does it require a large library of 'training' single-molecule images of known orientations/mobilities.

This paper is organized as follows: In section 2, we describe our theoretical framework for simulating and characterizing single-molecule widefield fluorescence images arising from molecules of different orientations and rotational mobilities. In section 3, we perform a series of numerical experiments designed to showcase the method’s capacity to finely differentiate molecules exhibiting varying amounts of rotational freedom. We demonstrate that linear dichroism measurements would not provide sufficient information to distinguish such behavior. Furthermore, we examine how our method performs in moderate to low signal-to-noise imaging conditions, and suggest various means of optimizing our method, given a limited photon budget. Finally, in section 4 we discuss the practical hurdles that must be overcome in order to make our proposed method experimentally realizable.

2. Theoretical framework

In this section, we describe a simple, accurate and computationally efficient method for calculating the image of single-molecule fluorescence formed on a typical camera array detector, such as an electron-multiplying charge-coupled device (EMCCD) or an sCMOS sensor. Our method can be applied to simulate images of both rotationally mobile and immobile molecules. In the first part of this section, we show how any single-molecule image can be fully characterized using a 3-by-3 symmetric, positive-semidefinite matrix, which we term M. The relative magnitudes of the eigenvalues, {λ1,λ2,λ3}, of this matrix may be used to quantify the mobility of any given molecule under observation. As an application of the M matrix approach, we show how this formulation may be related to a more commonly employed, but less general, 'constrained rotation within a cone' model of molecular orientational dynamics. In the latter half of this section, we address the problem of precisely determining the entries of the M matrix (a necessary precursor to calculating the eigenvalues!). Importantly, M may be inferred simply by solving a matrix pseudo-inversion problem of modest dimensions. This computational technique for determining M will be indispensable in the later sections of this paper, in which we extract rotational mobility measurements from single noisy images of molecules.

2.1 The M matrix formulation for characterizing images of rotating molecules

High numerical aperture (NA) imaging apparatuses based on oil-immersion or other immersion microscope objectives require special modeling considerations in order to properly account for all of the optical effects that influence the final intensity distribution recorded on a detector [29]. Furthermore, electric dipoles (and therefore fluorescent molecules) are highly anisotropic, emitting a far-field intensity distribution resembling that of an expanding torus [30] aligned perpendicular to the emission dipole moment of a given molecule. After propagation through a microscope composed of an objective/tube lens pair, the electric field present at an image sensor can be calculated as follows:

\[
\begin{bmatrix} E_x^{\mathrm{img}}(\mathbf{r}) \\ E_y^{\mathrm{img}}(\mathbf{r}) \end{bmatrix}
=
\begin{bmatrix}
E_x^{\mu_x}(\mathbf{r}) & E_x^{\mu_y}(\mathbf{r}) & E_x^{\mu_z}(\mathbf{r}) \\
E_y^{\mu_x}(\mathbf{r}) & E_y^{\mu_y}(\mathbf{r}) & E_y^{\mu_z}(\mathbf{r})
\end{bmatrix}
\begin{bmatrix} \mu_x \\ \mu_y \\ \mu_z \end{bmatrix}
= G(\mathbf{r})\,\boldsymbol{\mu}
\]
In Eq. (2), Ex,yimg(r) are the x/y polarized electric fields present at a given point, r, in the microscope’s image plane. Ex,yμx,y,z(r) are the x/y polarized electric fields associated with an immobile dipole oriented along the x, y or z axis of the optical system (see Fig. 2(a)). Subscripts denote the polarization of the field, and superscripts denote the orientation of the dipole. This equation simply states that the field for an arbitrary dipole direction is a superposition of three effective dipoles corresponding to the three Cartesian projections of the dipole moment. Collectively, these fields may be assembled into a 2-by-3 matrix G(r). The vector μ specifies the orientation of the emission dipole moment of a single molecule at a given instant in time as a point on a sphere. The amplitude, A, of the dipole moment can be calculated from μ as:
\[ A = |\boldsymbol{\mu}| = \sqrt{\mu_x^2 + \mu_y^2 + \mu_z^2} \]
Alternatively, one may parameterize the instantaneous orientation of a molecule by the azimuthal and polar inclination, {Φ,Θ}, of the dipole moment with respect to the imaging system’s coordinate frame (Fig. 2(a)):
\[
\frac{\boldsymbol{\mu}}{|\boldsymbol{\mu}|} =
\begin{bmatrix} \sin(\Theta)\cos(\Phi) \\ \sin(\Theta)\sin(\Phi) \\ \cos(\Theta) \end{bmatrix}
\]
In order to calculate the electric fields Ex,yμx,y,z(r), one must, in general, rigorously model all components of the optical system, as well as any inhomogeneities within the sample itself (such as refractive index variations) [31–33]. Furthermore, the electric fields present at the image sensor will vary as a function of microscope defocus d, the distance between the objective lens focal plane and the single molecule being imaged (Fig. 2(b)). In this paper, we assume isotropic media, and calculate the electric fields using the procedure detailed in [22]. For completeness, we briefly summarize how the image plane electric fields are computed in the appendix of this paper. Finally, it ought to be noted that we have neglected to calculate the z-polarized portion of the electric field incident upon the microscope image plane. While a z-polarized field does exist, it is generally many orders of magnitude smaller than the x/y polarized electric fields, and need not be included in our analysis. Equipped with Eq. (2), the intensity, U(r), present at the image plane (at a single instant in time) may be calculated as:
\[
U(\mathbf{r}) = \left(E_x^{\mathrm{img}}(\mathbf{r})\right)^{*} E_x^{\mathrm{img}}(\mathbf{r}) + \left(E_y^{\mathrm{img}}(\mathbf{r})\right)^{*} E_y^{\mathrm{img}}(\mathbf{r})
= \sum_{j=x,y}
\begin{bmatrix} \left(E_j^{\mu_x}(\mathbf{r})\right)^{*} & \left(E_j^{\mu_y}(\mathbf{r})\right)^{*} & \left(E_j^{\mu_z}(\mathbf{r})\right)^{*} \end{bmatrix}
\left(\boldsymbol{\mu}\boldsymbol{\mu}^T\right)
\begin{bmatrix} E_j^{\mu_x}(\mathbf{r}) \\ E_j^{\mu_y}(\mathbf{r}) \\ E_j^{\mu_z}(\mathbf{r}) \end{bmatrix}
\]
Rotation of the emission dipole moment is modeled as follows: If the molecule undergoes a total of N absorption-emission cycles over the integration period of the image sensor, T, we simply replace the outer-product enclosed in parentheses in Eq. (5) with the summation:
\[
M =
\begin{bmatrix} M_{xx} & M_{xy} & M_{xz} \\ M_{xy} & M_{yy} & M_{yz} \\ M_{xz} & M_{yz} & M_{zz} \end{bmatrix}
= \frac{1}{N}\sum_{n=1}^{N} \boldsymbol{\mu}_n \boldsymbol{\mu}_n^T
\]
where μn denotes the orientation of the emission dipole at the time of the n-th emission event. Note that up to this point we have made no assumptions regarding the rotational behavior of the molecule, or the polarization of the excitation illumination. Such modeling considerations must eventually be taken into account in order to prescribe the relative frequencies of the various μn. (In section 2.2, we will adopt a simplified ‘rotation within a cone’ model for this purpose.) However, key insights may be achieved before restricting our analysis. Substituting Eq. (6) into Eq. (5), the time-integrated image associated with a rotating molecule is:
\[
U(\mathbf{r}) = \sum_{j=x,y}
\begin{bmatrix} \left(E_j^{\mu_x}(\mathbf{r})\right)^{*} & \left(E_j^{\mu_y}(\mathbf{r})\right)^{*} & \left(E_j^{\mu_z}(\mathbf{r})\right)^{*} \end{bmatrix}
M
\begin{bmatrix} E_j^{\mu_x}(\mathbf{r}) \\ E_j^{\mu_y}(\mathbf{r}) \\ E_j^{\mu_z}(\mathbf{r}) \end{bmatrix}
\]
Equation (7) is completely general. No matter how erratically a molecule behaves over the camera integration time, the final image visible on the detector can be calculated from the matrix M. From Eq. (6), M is the summation of the outer-products of 3-dimensional vectors μn. Therefore, M must have an eigen-decomposition:
\[ M = \sum_{j=1}^{3} \lambda_j \mathbf{v}_j \mathbf{v}_j^T \]
where λj ≥ 0 are the eigenvalues of M, indexed from largest to smallest in magnitude, and the vj are the corresponding eigenvectors. One may interpret the vj as three stationary dipoles arranged orthogonally, with the square roots of the λj understood to be the corresponding amplitudes of these three dipoles (see Figs. 2(d) and 2(f)). The relative sizes of the eigenvalues λj yield a useful picture of the rotational mobility of the molecule in question.
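To make the construction concrete, the following NumPy sketch (illustrative only; the wobble trajectory is a hypothetical stand-in for a real sequence of emission events, and the paper's own simulations were written in MATLAB) builds M as in Eq. (6) and inspects its eigen-decomposition, Eq. (8):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical orientation trajectory: unit vectors mu_n sampled during one
# camera exposure. Here, a molecule wobbling tightly about the z axis.
mu = np.column_stack([0.1 * rng.standard_normal(1000),
                      0.1 * rng.standard_normal(1000),
                      np.ones(1000)])
mu /= np.linalg.norm(mu, axis=1, keepdims=True)

# Eq. (6): M is the average of the outer products mu_n mu_n^T.
M = (mu.T @ mu) / len(mu)

# Eigen-decomposition, Eq. (8); M is symmetric positive-semidefinite.
lam, V = np.linalg.eigh(M)           # eigh returns eigenvalues in ascending order
lam, V = lam[::-1], V[:, ::-1]       # reorder so lambda_1 >= lambda_2 >= lambda_3

assert np.all(lam >= -1e-12)         # eigenvalues are nonnegative
assert np.isclose(lam.sum(), 1.0)    # trace(M) = 1 when the mu_n are unit vectors
assert lam[0] > 0.9                  # tight constraint: one dominant eigenvalue
```

A rotationally immobile molecule yields a single nonzero eigenvalue; an isotropic rotor yields three equal eigenvalues.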


Fig. 2 Parameterizations of single-molecule orientation and rotational mobility. (a) A rotationally fixed single molecule may be modeled as a fixed dipole with polar orientation Θ and azimuthal orientation Φ. Alternatively, orientation may be described as a unit vector μ, with x, y and z components μx, μy and μz respectively. (b) Experimental schematic: A single molecule is placed a distance d from the focal plane of the objective, and a single widefield image is acquired. (c) Rotation within a cone model: A single molecule undergoes constrained rotation about some mean orientation {Φ0,Θ0}. The molecule may deviate by an angle α from the mean. (d) A molecule rotating in a cone may be alternatively parameterized by three orthogonal dipoles. One dipole will have amplitude equal to the square root of the largest eigenvalue of the M matrix, as defined in the main text. The other two dipoles will have amplitudes equal to the square root of the second largest eigenvalue. (e) In a more general case, a single molecule’s rotation may be confined to an elliptical region of the unit hemisphere, parameterized by two angles α and β. (f) For rotation within an elliptic region, the equivalent eigenvectors provide three different dipoles, each with a distinct amplitude determined from the square roots of the eigenvalues of the M matrix.


2.2 Example: Relationship of M matrix to the ‘rotation within a cone’ model

We now discuss how one may interpret the magnitudes of the λj in order to deduce rotational mobility, by showing how a special case of the M matrix approach is equivalent to the widely used ‘rotation within a cone’ model of rotational diffusion [34]. In many specimens of biological interest, fluorescent labels are neither entirely immobilized, nor entirely free to rotate. A molecule exhibiting some intermediate mobility may be approximated as follows: The molecule has a mean orientation, specified by the angles {Φ0,Θ0}. Furthermore, we assume that the molecule may deviate from its mean orientation by an angle α, which specifies the half-aperture of a cone within which the emission dipole is rotationally constrained (Fig. 2(c)). How is the M matrix calculated for this case? We begin by assuming that the molecule visits each orientation within its constraint cone with uniform frequency. (We have chosen this elementary model in order to streamline the ensuing mathematical analysis; however, our approach may be readily adapted to incorporate more sophisticated treatments of orientational dynamics involving rotational diffusion and fluorescence lifetime considerations. The reader is referred to [35].) This assumption permits us to convert the summation of Eq. (6) into an average over the solid angle circumscribed by α (the red region shown in Fig. 2(c)). In this case:

\[
M = \frac{A^2}{S} \int_{\phi'=0}^{2\pi} \int_{\theta'=0}^{\alpha} V V^T \sin(\theta')\, d\theta'\, d\phi'
\]
where \(S = 2\pi(1-\cos(\alpha))\) is the solid angle subtended by the cone, and:
\[
V = R \begin{bmatrix} \sin(\theta')\cos(\phi') \\ \sin(\theta')\sin(\phi') \\ \cos(\theta') \end{bmatrix}
\]
In Eq. (9), we have chosen to work in a rotated coordinate system {ϕ′,θ′} (Fig. 2(c)). That is, θ′ = 0 when the molecule assumes the orientation {Φ0,Θ0}. This coordinate transformation is effected by the rotation matrix R, and the relationship:
\[
\frac{\boldsymbol{\mu}}{|\boldsymbol{\mu}|} =
\begin{bmatrix} \sin(\Theta)\cos(\Phi) \\ \sin(\Theta)\sin(\Phi) \\ \cos(\Theta) \end{bmatrix}
= R \begin{bmatrix} \sin(\theta')\cos(\phi') \\ \sin(\theta')\sin(\phi') \\ \cos(\theta') \end{bmatrix}
\]
Explicitly, a suitable rotation matrix R can be calculated using the axis/angle method [36]:
\[
R = \begin{bmatrix}
x x C + c & x y C - z s & x z C + y s \\
x y C + z s & y y C + c & y z C - x s \\
x z C - y s & y z C + x s & z z C + c
\end{bmatrix}
\]
where:
\[
x = -\sin(\Phi_0),\quad y = \cos(\Phi_0),\quad z = 0,\quad
c = \cos(\Theta_0),\quad s = \sin(\Theta_0),\quad C = 1 - c
\]
Since R and RT are constant with respect to the variables of integration, they may be placed outside of the integral in Eq. (9). The remaining expression may be integrated analytically:
\[
M = A^2 R
\begin{bmatrix}
\dfrac{(1-\cos(\alpha))(\cos(\alpha)+2)}{6} & 0 & 0 \\
0 & \dfrac{(1-\cos(\alpha))(\cos(\alpha)+2)}{6} & 0 \\
0 & 0 & \dfrac{\cos^3(\alpha)-1}{3\cos(\alpha)-3}
\end{bmatrix}
R^T
\]
R and RT do not affect the magnitudes of the eigenvalues, λj, because R is an orthogonal matrix. Hence,
\[
\lambda_1 = A^2\,\frac{\cos^3(\alpha)-1}{3\cos(\alpha)-3}, \qquad
\lambda_2 = \lambda_3 = A^2\,\frac{(1-\cos(\alpha))(\cos(\alpha)+2)}{6}
\]
Figure 3 shows single-molecule images simulated using the M matrix approach. Five molecules are oriented at {Φ0=45°,Θ0=45°}, each exhibiting a different α. As α increases, the images become increasingly symmetric. In Fig. 4(a), the eigenvalues λj are plotted as a function of cone angle α (we assume that A = 1). Note that two of the eigenvalues, λ2 and λ3, are identical, and increase in magnitude with increasing α. The image of a molecule rotating within a cone will thus be identical to an image of three superimposed dipoles: two of the dipoles will have equal amplitude, and a third dipole will have a distinct amplitude, greater than or equal to that of the other two. When α = 90°, the rotating molecule effectively behaves as an isotropic emitter, as all three eigenvalues are identical and equal to 1/3. When α = 0°, we observe that λ1 = 1, the other eigenvalues are zero, and the molecule behaves as a rotationally fixed dipole. In the intermediate regime, in which α lies between 0° and 90°, the cone angle may be inferred simply by using a single λ1 measurement to read off α from the red curve in Fig. 4(a).
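The closed-form eigenvalues above can be verified against a direct Monte-Carlo construction of M from orientations drawn uniformly within the cone. (An illustrative NumPy sketch, not the paper's MATLAB code; A = 1 throughout.)

```python
import numpy as np

def cone_eigenvalues(alpha):
    """Closed-form eigenvalues of M for uniform rotation in a cone of
    half-angle alpha, with dipole amplitude A = 1."""
    c = np.cos(alpha)
    lam1 = (c**3 - 1) / (3 * c - 3)
    lam23 = (1 - c) * (c + 2) / 6
    return lam1, lam23, lam23

# Monte-Carlo check: draw orientations uniformly over the spherical cap of
# half-angle alpha about the z axis, then average the outer products (Eq. (6)).
rng = np.random.default_rng(1)
alpha = np.deg2rad(30)
cos_t = rng.uniform(np.cos(alpha), 1.0, 500_000)   # uniform in cos(theta')
sin_t = np.sqrt(1.0 - cos_t**2)
phi = rng.uniform(0.0, 2.0 * np.pi, cos_t.size)
mu = np.column_stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
lam = np.sort(np.linalg.eigvalsh((mu.T @ mu) / len(mu)))[::-1]

assert np.allclose(lam, cone_eigenvalues(alpha), atol=5e-3)
# The eigenvalues sum to A^2 = 1, and an immobile molecule (alpha -> 0)
# approaches lambda_1 = 1:
assert np.isclose(sum(cone_eigenvalues(alpha)), 1.0)
assert np.isclose(cone_eigenvalues(1e-3)[0], 1.0, atol=1e-5)
```

Drawing cos(θ′) uniformly is what makes the samples uniform over solid angle, matching the uniform-visitation assumption of the derivation.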


Fig. 3 Images of single molecules simulated with mean orientation {Φ0=45,Θ0=45}, and varying α. For these images, the defocus was set to d = 1.25 μm. All other simulation parameters are specified in section 3.



Fig. 4 Analytical calculation of the eigenvalues of the M matrix for different cone angles α and β. Note that these parameters do not change as a function of mean orientation, {Φ0,Θ0}. Furthermore, they are not affected by experimental variables such as defocus, emission wavelength, or microscope NA. (a)-(c) Eigenvalue calculations, β = α, β = α/2, and β = α/8 respectively.


How does the above analysis relate to more sophisticated single-molecule rotational diffusion models? Our assumption of a uniform emission probability distribution within the rotational constraint cone is only strictly valid under select circumstances. For example, consider the case in which the rotational correlation time [37], τr, of a molecule is much shorter than the molecule’s fluorescence lifetime, τf. That is:

\[ \tau_r \ll \tau_f \ll T \]
In this case, the molecule will have completely explored the constraint cone between the time at which it absorbs a photon and the time at which it emits one, and the emission probability within the constraint cone will therefore be uniform. On the other hand, if τr > τf, then the polarization of the excitation illumination source relative to the mean orientation of the molecule will induce an asymmetric emission probability distribution within the constraint cone [35]. In this situation, the cone angle α cannot be deduced from the eigenvalues of M alone.

2.3 Beyond the rotation within a cone model with the M matrix

The rotation within a cone model may be augmented to allow for the case in which the M matrix has three distinct eigenvalues. Instead of assuming that the molecule’s region of rotational constraint is a circular patch on the unit sphere, the dipole motion may instead be confined to an elliptical region (Fig. 2(e)) parameterized by two angles, α and β. If we again assume that the molecule’s emission probability is uniform throughout this elliptical constraint region, the M matrix may be calculated by changing the bounds of integration of Eq. (9):

\[
M = \frac{A^2}{S'} \int_{\phi'=0}^{2\pi} \int_{\theta'=0}^{\sqrt{\alpha^2\cos^2(\phi') + \beta^2\sin^2(\phi')}} V V^T \sin(\theta')\, d\theta'\, d\phi'
\]
where S′ is the solid angle subtended by the elliptical region parameterized by α and β. The integral over θ′ may be evaluated straightforwardly, simplifying Eq. (17) to:
\[
M = \frac{A^2}{S'} R \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix} R^T
\]
where:
\[
a = \frac{1}{3}\int_{\phi'=0}^{2\pi} \cos^2(\phi')\left(\cos\!\left(\sqrt{\alpha^2\cos^2(\phi')+\beta^2\sin^2(\phi')}\right)-1\right)^{2}\left(\cos\!\left(\sqrt{\alpha^2\cos^2(\phi')+\beta^2\sin^2(\phi')}\right)+2\right) d\phi'
\]
\[
b = \frac{1}{3}\int_{\phi'=0}^{2\pi} \sin^2(\phi')\left(\cos\!\left(\sqrt{\alpha^2\cos^2(\phi')+\beta^2\sin^2(\phi')}\right)-1\right)^{2}\left(\cos\!\left(\sqrt{\alpha^2\cos^2(\phi')+\beta^2\sin^2(\phi')}\right)+2\right) d\phi'
\]
\[
c = \frac{1}{3}\int_{\phi'=0}^{2\pi} 1-\cos^3\!\left(\sqrt{\alpha^2\cos^2(\phi')+\beta^2\sin^2(\phi')}\right) d\phi'
\]
In this case, S' is computed as:
\[
S' = \int_{\phi'=0}^{2\pi} 1-\cos\!\left(\sqrt{\alpha^2\cos^2(\phi')+\beta^2\sin^2(\phi')}\right) d\phi'
\]
From inspection, the eigenvalues of M are:
\[
\lambda_1 = \frac{c\,A^2}{S'}, \qquad \lambda_2 = \frac{a\,A^2}{S'}, \qquad \lambda_3 = \frac{b\,A^2}{S'}
\]
The one-dimensional integrals appearing in Eqs. (19) and (20) may be calculated numerically using a trapezoidal approximation scheme. In Figs. 4(b) and 4(c), we have calculated the three eigenvalues of M for different α and β. Hence, the image formed from a molecule rotating within an elliptic constraint region will be identical to the image formed from three super-imposed dipoles, each of distinct amplitude (Fig. 2(f)).
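As a check on this derivation, the one-dimensional integrals can be evaluated with a trapezoidal rule and compared against the closed-form cone result of section 2.2. (An illustrative NumPy sketch assuming A = 1; the paper's own simulations were written in MATLAB.)

```python
import numpy as np

def _trap(y, x):
    # basic trapezoidal rule, kept explicit to avoid version-specific NumPy APIs
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def elliptic_eigenvalues(alpha, beta, n=4001):
    """Normalized eigenvalues of M for uniform emission within an elliptic
    constraint region (the a, b, c, and S' integrals of section 2.3), A = 1."""
    phi = np.linspace(0.0, 2.0 * np.pi, n)
    gamma = np.sqrt(alpha**2 * np.cos(phi)**2 + beta**2 * np.sin(phi)**2)
    cg = np.cos(gamma)
    a = _trap(np.cos(phi)**2 * (cg - 1)**2 * (cg + 2), phi) / 3
    b = _trap(np.sin(phi)**2 * (cg - 1)**2 * (cg + 2), phi) / 3
    c = _trap(1 - cg**3, phi) / 3
    S = _trap(1 - cg, phi)
    return c / S, a / S, b / S      # lambda_1 >= lambda_2 >= lambda_3

# A circular region (beta = alpha) must reproduce the rotation-in-a-cone result:
alpha = np.deg2rad(30)
co = np.cos(alpha)
lam = elliptic_eigenvalues(alpha, alpha)
assert np.isclose(lam[0], (co**3 - 1) / (3 * co - 3), atol=1e-7)
assert np.isclose(lam[1], (1 - co) * (co + 2) / 6, atol=1e-7)
# A genuinely elliptic region (beta < alpha) yields three distinct eigenvalues:
l1, l2, l3 = elliptic_eigenvalues(np.deg2rad(40), np.deg2rad(10))
assert l1 > l2 > l3
# The normalized eigenvalues always sum to 1:
assert np.isclose(l1 + l2 + l3, 1.0)
```

Because the integrands are smooth and periodic, the trapezoidal rule converges rapidly here.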

2.4 The basis function formulation: determining the entries of the M matrix

Given a single molecule’s corresponding M matrix, it is straightforward to compute the eigenvalues of this matrix, and to deduce the images expected from the molecule. We now address the inverse problem of inferring the entries of the M matrix from raw image data. Here we propose a simple approach based on matrix inversion. Carrying out the matrix multiplication in Eq. (7), and distributing terms yields the following expression for U(r):

\[
U(\mathbf{r}) = \sum_{j=x,y} \Big[\,
|E_j^{\mu_x}(\mathbf{r})|^2 M_{xx} + |E_j^{\mu_y}(\mathbf{r})|^2 M_{yy} + |E_j^{\mu_z}(\mathbf{r})|^2 M_{zz}
+ 2\,\Re\{(E_j^{\mu_x}(\mathbf{r}))^{*} E_j^{\mu_y}(\mathbf{r})\} M_{xy}
+ 2\,\Re\{(E_j^{\mu_x}(\mathbf{r}))^{*} E_j^{\mu_z}(\mathbf{r})\} M_{xz}
+ 2\,\Re\{(E_j^{\mu_y}(\mathbf{r}))^{*} E_j^{\mu_z}(\mathbf{r})\} M_{yz}
\,\Big]
\]
We define the following six ‘basis functions’:
\[
\begin{aligned}
XX(\mathbf{r}) &= \sum_{j=x,y} |E_j^{\mu_x}(\mathbf{r})|^2 &
XY(\mathbf{r}) &= 2\sum_{j=x,y} \Re\{(E_j^{\mu_x}(\mathbf{r}))^{*} E_j^{\mu_y}(\mathbf{r})\} \\
YY(\mathbf{r}) &= \sum_{j=x,y} |E_j^{\mu_y}(\mathbf{r})|^2 &
XZ(\mathbf{r}) &= 2\sum_{j=x,y} \Re\{(E_j^{\mu_x}(\mathbf{r}))^{*} E_j^{\mu_z}(\mathbf{r})\} \\
ZZ(\mathbf{r}) &= \sum_{j=x,y} |E_j^{\mu_z}(\mathbf{r})|^2 &
YZ(\mathbf{r}) &= 2\sum_{j=x,y} \Re\{(E_j^{\mu_y}(\mathbf{r}))^{*} E_j^{\mu_z}(\mathbf{r})\}
\end{aligned}
\]
Substituting Eqs. (23) into Eq. (22) allows us to express U(r) as an inner product:
\[
U(\mathbf{r}) =
\begin{bmatrix} XX(\mathbf{r}) & YY(\mathbf{r}) & ZZ(\mathbf{r}) & XY(\mathbf{r}) & XZ(\mathbf{r}) & YZ(\mathbf{r}) \end{bmatrix}
\begin{bmatrix} M_{xx} \\ M_{yy} \\ M_{zz} \\ M_{xy} \\ M_{xz} \\ M_{yz} \end{bmatrix}
=
\begin{bmatrix} XX(\mathbf{r}) & YY(\mathbf{r}) & ZZ(\mathbf{r}) & XY(\mathbf{r}) & XZ(\mathbf{r}) & YZ(\mathbf{r}) \end{bmatrix} M'
\]
where M' is the vector containing the six unique elements of M. Practically, the data set acquired by an image sensor is not a continuous function U(r), but a set of N discrete pixel intensity readings {U1, U2, …, UN}, which we denote by the vector U. Such a data set may be interpreted as having been generated by the following matrix multiplication:
\[
\mathbf{U} =
\begin{bmatrix}
XX_1 & YY_1 & ZZ_1 & XY_1 & XZ_1 & YZ_1 \\
XX_2 & YY_2 & ZZ_2 & XY_2 & XZ_2 & YZ_2 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
XX_N & YY_N & ZZ_N & XY_N & XZ_N & YZ_N
\end{bmatrix} M' = B M'
\]
where the columns of the matrix B are composed of a discretized sampling of the six basis functions {XX, YY, ZZ, XY, XZ, YZ}, evaluated at the N image sensor pixel locations. So long as the single-molecule image is sampled over more than six pixels, Eq. (25) defines an over-determined linear system, which may be solved by computing the pseudo-inverse of B, denoted B+:
\[ B^{+} = (B^T B)^{-1} B^T \]
We then recover the entries of M' by multiplying by the intensity data:
\[ M' = B^{+}\,\mathbf{U} \]
Evaluating Eq. (27) is equivalent to solving a linear least squares problem. Once the six unique entries of M are found, analysis of the eigenvalues may proceed as detailed in section 2.2. Alternatively, the entries of M may be inferred by solving a maximum likelihood estimation problem, subject to Poisson noise statistics, using an approach similar to [18, 20]. While maximum likelihood estimation theoretically achieves superior measurement precision, the strategy described here is computationally advantageous: maximum likelihood estimation requires iteratively optimizing an objective function, while determining M from Eq. (27) requires only a single matrix multiplication of modest dimensions. (Strategies for handling the more complex MLE problem have been described in detail in [18], and GPUs can be leveraged to accelerate image analysis speed [38].)
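The least-squares recovery of Eqs. (25)–(27) can be sketched as follows. The basis matrix here is filled with random numbers as a stand-in for the true basis functions (computing those requires the full vectorial field model), since only the linear-algebra step is being illustrated; the M' entries are likewise hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in basis: in a real calculation the columns of B hold the six basis
# functions of Eq. (23) sampled at the N pixel locations.
N = 33 * 33                       # number of pixels in the simulated image
B = rng.standard_normal((N, 6))

M_true = np.array([0.5, 0.3, 0.2, 0.05, -0.02, 0.01])   # [Mxx Myy Mzz Mxy Mxz Myz]
U = B @ M_true                    # noiseless pixel data, Eq. (25)

# Pseudo-inverse solution, Eqs. (26)-(27); lstsq solves the same normal
# equations as forming (B^T B)^{-1} B^T, but in a numerically stabler way.
M_est, *_ = np.linalg.lstsq(B, U, rcond=None)
assert np.allclose(M_est, M_true, atol=1e-10)

# Reassemble the symmetric 3x3 M and take its eigenvalues, as in section 2.2.
Mxx, Myy, Mzz, Mxy, Mxz, Myz = M_est
M = np.array([[Mxx, Mxy, Mxz], [Mxy, Myy, Myz], [Mxz, Myz, Mzz]])
lam = np.sort(np.linalg.eigvalsh(M))[::-1]
assert np.isclose(lam.sum(), Mxx + Myy + Mzz)
```

At these dimensions (a thousand pixels by six unknowns), the solve is essentially instantaneous, which is the computational advantage noted above.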

We will also consider the effect of polarization optics, aligned along the x/y axes of our imaging system. The basis function approach to inferring M' may be trivially modified. Polarized images, Ux,y(r), and their pixelated counterparts Ux,y, may be simulated by ignoring the terms corresponding to y/x polarized light in Eq. (22) respectively. The polarized components of the six basis functions, {XXx,y,YYx,y,ZZx,y,XYx,y,XZx,y,YZx,y}, may be found in similar fashion. Assuming that both the x and y polarized images are recorded on different image sensors (or different regions of the same image sensor), the system of equations that is solved in order to determine M' may be augmented:

\[
\begin{bmatrix} \mathbf{U}^{x} \\ \mathbf{U}^{y} \end{bmatrix}
=
\begin{bmatrix}
\mathbf{XX}^{x} & \mathbf{YY}^{x} & \mathbf{ZZ}^{x} & \mathbf{XY}^{x} & \mathbf{XZ}^{x} & \mathbf{YZ}^{x} \\
\mathbf{XX}^{y} & \mathbf{YY}^{y} & \mathbf{ZZ}^{y} & \mathbf{XY}^{y} & \mathbf{XZ}^{y} & \mathbf{YZ}^{y}
\end{bmatrix} M' = B_{\mathrm{pol}} M'
\]
where M' is found by computing the pseudo-inverse of Bpol. In Fig. 5, representative basis functions have been generated using various amounts of objective lens defocus d (see the next section for the other simulation parameters, which were held constant throughout this paper). The six basis functions used to generate unpolarized images are shown alongside their polarized components. Note that as defocus increases, the basis functions become more diffuse, inhabiting a larger region of the image sensor. As will be shown in the next section, this effect will have different consequences, depending upon the signal and background levels.


Fig. 5 The image of any single molecule, fixed or rotationally mobile, may be decomposed into a linear combination of six basis functions. These six basis functions have been calculated at three representative defocus depths, given the simulation parameters presented in the main text. The x/y polarized components of these basis functions are also shown with x/y superscripts. Units of intensity are scaled such that the brightest pixel for a dipole parallel to the focal plane at 0.55 μm defocus has a magnitude of 1.


3. Numerical experiments

In this section, we demonstrate the efficacy of our proposed method using simulated data sets of single-molecule images. In the first trial, we simulate rotationally fixed molecules, while varying the microscope defocus distance, d. We determine the optimal defocus for various signal and background levels by measuring the orientation of each simulated molecule, and comparing our measurement to the molecule’s true orientation. Using a value of d that minimized mean angular measurement error for fixed molecules, we performed two additional numerical experiments using rotationally mobile molecules: First, we demonstrate that quantitative cone angle (α) measurements may be acquired even when signal is modest. Then, we show that two distinct populations of molecules may be differentiated by their rotational mobility. All simulations presented in this section were coded using the MATLAB programming language.

For all numerical trials, we assumed the experimental apparatus depicted in Fig. 6, which simultaneously records two orthogonally polarized images. Alternatively, a single unpolarized image may be captured by removing the polarizing beamsplitter and one of the image sensors. We simulated an objective lens with 100× magnification and a numerical aperture of 1.40. Molecules were embedded in a medium identical to that of the objective’s immersion oil, of refractive index n = 1.518. The emission wavelength of the molecules was λ = 600 nm. We assumed an effective pixel size of 160 nm for our image sensor (which corresponds to the 16 μm pixel size of an Andor iXon Ultra 897 EMCCD detector, after 100× magnification). Images were simulated on a 33-by-33 pixel grid, assuming that the molecule was located at the midpoint of the central pixel. In order to model the effects of photon shot noise upon our measurements, we first normalized simulated (noiseless) images to a desired number of total signal photons, then added a specified amount of uniform background to each pixel. For each pixel, we drew a Poisson-distributed random variable, with mean equal to the calculated (noiseless) photon count, as our ‘measured’ number of detected photons. Specifically, the shot-noise-corrupted data at a given pixel, Ũj, is determined as:

\[ \tilde{U}_j = \mathrm{Pois}\{U_j + b\} \]
where Pois{·} denotes a random variable drawn from a Poisson distribution whose mean is the bracketed quantity. In practice, the noise statistics of an EMCCD detector are a much more complicated function of signal photons, readout noise, and electron-multiplication gain [39]. However, the raw data from an EMCCD can be appropriately scaled such that the observed distribution is closely approximated as Poisson [40]. Hence, we adopt this simplified model, and neglect other noise sources.
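This noise model amounts to a single Poisson draw per pixel. A minimal NumPy sketch (the pixel values here are a random stand-in for a simulated image, not an actual single-molecule point spread function):

```python
import numpy as np

rng = np.random.default_rng(3)

# Noiseless image normalized to a photon budget, plus uniform background b.
signal_photons, b = 3000.0, 10.0
U = rng.random((33, 33))
U *= signal_photons / U.sum()          # normalize to the total signal budget
U_noisy = rng.poisson(U + b)           # shot-noise-corrupted pixel counts

assert U_noisy.shape == U.shape
assert U_noisy.dtype.kind == 'i'       # Poisson draws are integer photon counts
# The mean pixel count should track the mean of (U + b):
assert abs(U_noisy.mean() - (U + b).mean()) < 1.0
```

Note that the background b raises the mean of every pixel; it cannot simply be subtracted without altering the noise statistics.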


Fig. 6 Experimental schematic assumed for all numerical experiments.


An additional modeling consideration arises from the fact that the objective’s collection efficiency is a function of a molecule’s orientation. In isotropic media, far fewer photons will be collected for a molecule with an emission dipole moment parallel to the optical axis than for a molecule oriented perpendicular to the optical axis (parallel to the plane of the coverslip), even if the two molecules emit the same total number of photons. This is because the objective collects only a cone of light, defined by its numerical aperture, emanating from a given molecule [41]. In order to properly account for this effect, the basis components used to simulate (noiseless) single-molecule images must be properly normalized. For example, say a molecule oriented parallel to the coverslip emits a total of P photons. Then the normalized basis components will be:

\[
\widehat{XX} = \frac{P\,XX}{\sum_{j=1}^{N} XX_j},\quad
\widehat{YY} = \frac{P\,YY}{\sum_{j=1}^{N} XX_j},\quad
\widehat{ZZ} = \frac{P\,ZZ}{\sum_{j=1}^{N} XX_j},\quad
\widehat{XY} = \frac{P\,XY}{\sum_{j=1}^{N} XX_j},\quad
\widehat{XZ} = \frac{P\,XZ}{\sum_{j=1}^{N} XX_j},\quad
\widehat{YZ} = \frac{P\,YZ}{\sum_{j=1}^{N} XX_j}
\]
The pixels of a noiseless image may thus be calculated from the normalized matrix B^ as:
\[
\mathbf{U} =
\begin{bmatrix} \widehat{\mathbf{XX}} & \widehat{\mathbf{YY}} & \widehat{\mathbf{ZZ}} & \widehat{\mathbf{XY}} & \widehat{\mathbf{XZ}} & \widehat{\mathbf{YZ}} \end{bmatrix} M' = \hat{B} M'
\]
Normalizing the basis components for polarized images may be performed in the same manner, by multiplying each component by a factor of \(P / \sum_{j=1}^{N} XX_j\). For the remainder of this paper, when signal photons are reported, we are referring to the mean signal for a molecule in the plane of the coverslip; the mean signal detected for molecules inclined towards the optical axis will be substantially less. For simplicity, we assume that molecules of all orientations receive equal excitation intensity. As a final data-processing step, the eigenvalue measurements are normalized such that they sum to 1. This normalization is equivalent to dividing by a factor of A², and permits eigenvalues corresponding to different molecules emitting varying numbers of photons to be compared on the same scale.
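The normalization step can be sketched as follows. (The basis columns here are random stand-ins for the true XX, YY, ZZ, XY, XZ, YZ components; only the scaling logic is illustrated.)

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical unnormalized basis columns sampled over N pixels; column 0
# plays the role of XX, whose total sets the normalization for all six.
N, P = 33 * 33, 3000.0
B = np.abs(rng.standard_normal((N, 6)))
scale = P / B[:, 0].sum()        # the common factor P / sum_j XX_j
B_hat = B * scale                # every basis component gets the same factor

# A molecule in the plane of the coverslip (M' with Mxx = 1, all else 0)
# then yields an image whose pixels sum to exactly P photons:
M_prime = np.array([1.0, 0, 0, 0, 0, 0])
U = B_hat @ M_prime
assert np.isclose(U.sum(), P)
```

Because all six components share one scale factor, relative intensities between orientations are preserved, which is what encodes the orientation-dependent collection efficiency.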

3.1 Numerical experiment #1: Determining optimal microscope defocus

In this experiment, images of 1,000 rotationally immobilized molecules were simulated at orientations drawn uniformly at random from a unit hemisphere. That is, a given molecule’s orientation was selected by drawing Θ and Φ from the following distributions:

$$\Phi = 2\pi\,U\{0,1\} \qquad \Theta = \cos^{-1}\left(U\{0,1\}\right)$$
where $U\{0,1\}$ denotes a random variable drawn from a uniform distribution with support [0,1]. The orientation of each molecule was estimated from unpolarized and polarized image data as the eigenvector corresponding to the largest eigenvalue of M. Since this simulation concerns immobilized molecules, we expect the second and third largest eigenvalues to be zero. However, due to Poisson noise, we found these eigenvalues to have magnitudes ~3-7% that of the largest eigenvalue. The angular error from the true orientation was calculated as:
$$\mathrm{error} = \left|\cos^{-1}\left(\mu_{\mathrm{true}}^{T}\,\mu_{\mathrm{estimated}}\right)\right|$$
where $\mu_{\mathrm{true}}$ and $\mu_{\mathrm{estimated}}$ are unit vectors corresponding to the true and estimated orientations of the emission dipole moment, respectively. For a molecule oriented in the plane of the coverslip, the mean detected signal was 3,000 photons. The simulation was repeated for defocus values of d = 0.3-3.0 μm in 50 nm increments, using backgrounds of b = {0, 5, 10, 15, 20} photons per pixel. Results for polarized and unpolarized data are shown in Figs. 7(a) and 7(b). Inspection of these plots reveals that polarized detection offers significant advantages in terms of reducing measurement error for a given photon budget. These plots also demonstrate that the optimal defocus (minimal mean error) is a function of signal and background. When there is no background, it is advantageous to defocus as much as possible: in this regime, increased defocus produces images that vary strikingly as a function of orientation, aiding image analysis and enhancing measurement precision. However, if a moderate to high amount of background is present, there is a ‘sweet spot’ between d = 0.5 μm and d = 0.6 μm at which the measurement error is minimized. When background is no longer negligible, the helpful effects of defocus must be balanced against the need to concentrate emitter intensity upon a smaller region of the image sensor, in order to maintain a reasonable signal-to-background ratio. From inspection of the results in Fig. 7, we choose a defocus of d = 0.55 μm for the remainder of our numerical experiments. Note that our chosen defocus is much smaller in magnitude than what is generally employed in orientation-imaging applications (d ~1.0 μm in [16, 24, 25]). Before continuing, it ought to be stressed that the optimal defocus changes when one adjusts any of the imaging parameters, such as emission wavelength, objective specifications, or detector pixel size. Hence, when applying this technique to a different experimental system, it is necessary to re-calculate the plots in Fig. 7 accordingly. As an alternative to performing numerical simulations, as we have done here, the tradeoffs between defocus and background can be analyzed using Cramér-Rao lower bound calculations; the reader is referred to [7, 8, 42].
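The orientation sampling and error metric used in this experiment can be made concrete with a short sketch (our own hypothetical helper functions, following the formulas above):

```python
import numpy as np

def random_orientation(rng):
    """Phi = 2*pi*U, Theta = arccos(U), with U ~ Uniform[0, 1]:
    orientations drawn uniformly over the unit hemisphere."""
    return 2.0 * np.pi * rng.random(), np.arccos(rng.random())

def dipole_vector(phi, theta):
    """Unit emission-dipole vector mu for polar angle Theta, azimuth Phi."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def angular_error(mu_true, mu_est):
    """error = |arccos(mu_true^T mu_est)|; the dot product is clipped
    to [-1, 1] to guard against floating-point round-off."""
    return abs(np.arccos(np.clip(mu_true @ mu_est, -1.0, 1.0)))
```

For the 1,000-molecule trial one would loop over `random_orientation`, simulate an image at each orientation, and tabulate `angular_error` between the true dipole vector and the leading-eigenvector estimate.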

Fig. 7 Results of numerical experiment 1. (a) Mean angular error as a function of defocus, d, for unpolarized and (b) polarized image data, with varying numbers of background photons per pixel.

3.2 Numerical experiment #2: Measuring the eigenvalues of M in the presence of noise

To demonstrate that rotational mobility may be ascertained from single-molecule images, we simulate a corpus of molecules with cone angle, α, varied in 5° increments. The mean orientation, {Φ0,Θ0}, of each molecule was drawn randomly. 100 molecules were simulated for each distinct α. The simulation was performed in two different signal regimes: First, we used a mean signal for a molecule in the plane of the coverslip of 10,000 photons, and a background of 0 photons. Next, we switched to a mean of 3,000 photons of signal, and 20 photons of background per pixel. This experiment was repeated for both polarized and unpolarized data. Figures 8(a) and 8(b) show the resulting eigenvalue measurements as a function of α. For comparison, we overlay the theoretically calculated eigenvalues as blue and red curves, as plotted in Fig. 4. We note that the eigenvalues obtained from simulated data cluster around the theoretical values, and that both high signal and polarized data contribute to increased precision of acquired eigenvalue measurements. To demonstrate the effects of noise on the raw data input into our algorithm for determining M and its eigenvalues, in Fig. 8(c), we show representative simulated images of single molecules with different α. The cone angle α of a given molecule may be inferred from the largest eigenvalue, λ1, alone (the red curve). However, measuring all three eigenvalues provides a much more robust means of ascertaining rotational mobility, especially if each eigenvalue has a significantly different magnitude—as this would imply that the rotation ‘within a cone’ model of Fig. 2(c) is invalid, and the more complex elliptic constraint region (Fig. 2(e)) is in force.
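The theoretical curves overlaid in Fig. 8 follow from the closed-form eigenvalues of M for the rotation-within-a-cone model. A minimal sketch, normalized with A = 1 so the three eigenvalues sum to 1:

```python
import numpy as np

def cone_eigenvalues(alpha):
    """Eigenvalues of M for a molecule wobbling within a cone of
    half-angle alpha (radians), normalized so they sum to 1 (A = 1):
      lambda1 = (cos^3(alpha) - 1) / (3 cos(alpha) - 3)
      lambda2 = lambda3 = (1 - cos(alpha)) (cos(alpha) + 2) / 6
    Valid for alpha > 0; as alpha -> 0, lambda1 -> 1 (immobile molecule)."""
    c = np.cos(alpha)
    lam1 = (c**3 - 1.0) / (3.0 * c - 3.0)
    lam23 = (1.0 - c) * (c + 2.0) / 6.0
    return lam1, lam23, lam23
```

At α = 90° the molecule explores the full hemisphere and all three eigenvalues equal 1/3, the fully isotropic limit.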

Fig. 8 Results of numerical experiment 2. (a) and (b) Eigenvalue measurements from single-molecule images for unpolarized data (a) and polarized data (b). Overall standard deviations in eigenvalue measurements for each trial are noted on their respective plots. Error bars are ±σ. (c) Sample raw images of molecules with different α.

3.3 Numerical experiment #3: Differentiating sub-populations of molecules by their rotational mobility

Having demonstrated the ability to extract the eigenvalues of the M matrix for a given single-molecule image, we now turn our attention to gauging whether our method can yield meaningful insight under realistic experimental conditions. We carried out the following trial: 3,000 single-molecule images were simulated, using 3,000 photons mean signal for a molecule parallel to the coverslip, and 20 photons of background per pixel. Of these molecules, 1,500 had a cone angle of α = 55°, and the other 1,500 had a cone angle of α = 65°. As before, the mean orientation of each molecule was drawn randomly. We computed the eigenvalues of the M matrix using both polarized and unpolarized image data. Additionally, in order to benchmark our technique against an established method, we also computed the LD associated with each molecule. In our simulation of LD measurements, we incorporated the photon shot noise resulting from signal detected from a single molecule. However, in order to demonstrate that our image analysis technique outperforms a simple LD measurement even under unfavorable signal-to-noise conditions, we did not include background when simulating noisy LD data. In Fig. 9(a), we histogram the LD measurements associated with the 3,000 simulated molecules. From the LD data alone, it is difficult to infer that two populations of molecules are present: the histogram features a single peak at LD = 0, and the broadness of the distribution could potentially be a result of multiple populations of molecules with distinct rotational mobilities, a single population of molecules with low rotational mobility, or noise associated with the LD measurements. In order to draw any conclusions about this sample, more quantitative analysis, and potentially a more sophisticated experimental setup, would be required. However, in Figs. 9(b) and 9(c), we histogram the largest eigenvalue, λ1, of the M matrix associated with each single-molecule image, computed using unpolarized and polarized data, respectively. When polarized data (Fig. 9(c)) is inspected, a bimodal distribution of λ1 measurements is clearly present, confirming the presence of two populations of molecules with different rotational mobilities. In comparison, two peaks in the eigenvalue histogram are not as readily evident when examining the unpolarized measurement results (Fig. 9(b)). This difference underscores the improved measurement capabilities of a polarized detection system. To better quantify the performance enhancement gained by acquiring polarized images, we fit Gaussian distributions (overlaid on Figs. 9(b) and 9(c)) to the 1,500 eigenvalue measurements taken for the set of molecules that had α = 55° (red curves) and α = 65° (green curves), using both the polarized and unpolarized results. While the means, η, of the Gaussians corresponding to different α are nearly identical when considering either polarized or unpolarized data, the standard deviations, σ, of the measurements differ: ±0.061 in the unpolarized case versus ±0.042 for polarized data. Practically, if one were to infer cone angle by consulting the red curve in Fig. 4(a), such precision in eigenvalue measurement would imply a precision of ±6.3° (unpolarized data) or ±4.3° (polarized data) in the measurement of α. Thus, when it is necessary to make fine distinctions in the rotational mobilities of different molecules, polarized detection is preferred.
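A back-of-the-envelope check makes clear why the α = 55° and α = 65° populations resolve in the polarized histogram but only marginally in the unpolarized one: compare the separation of the theoretical λ1 peaks (cone-model formula, A = 1; our own sketch, not the paper’s analysis code) against the reported standard deviations.

```python
import numpy as np

def lambda1(alpha_deg):
    """Largest normalized eigenvalue of M for the cone model (A = 1)."""
    c = np.cos(np.deg2rad(alpha_deg))
    return (c**3 - 1.0) / (3.0 * c - 3.0)

# Separation between the two population peaks, expressed in units of the
# measurement standard deviations reported in the text.
sep = lambda1(55.0) - lambda1(65.0)  # ~0.10
for label, sigma in (("unpolarized", 0.061), ("polarized", 0.042)):
    print(f"{label}: peak separation = {sep / sigma:.1f} sigma")
# -> unpolarized: ~1.6 sigma; polarized: ~2.4 sigma
```

A ~2.4σ peak separation produces a visibly bimodal histogram, while ~1.6σ does not, consistent with Figs. 9(b) and 9(c).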

Fig. 9 Results of numerical experiment 3. (a) Linear dichroism histogram. From this data alone, the presence of two distinct populations of molecules is not clearly evident. (b) Histogram of largest eigenvalues measured for each single-molecule image using unpolarized data. (c) Histogram of largest eigenvalues measured for each single-molecule image using polarized data.

4. Discussion

We conclude by remarking on some of the challenges that, although beyond the scope of the current work, will need to be addressed in order to realize the M matrix method in actual experiments.

  • Localization of molecules:

    In our current simulation framework, we have assumed that our simulated data are precisely aligned with the basis functions used to infer the M matrix. In experiment, however, we do not know the lateral (x-y) position of a given molecule with respect to the grid of image-sensor pixels, nor do we know the precise defocus of the molecule. In practice, each of these quantities will have to be estimated as a pre-processing step, then an appropriate set of basis functions generated accordingly. As has been shown in previous work [17–19,21], accurate methods for localizing molecules in three dimensions are feasible, even when orientational effects are prominent. Alternatively, one could envision employing maximum likelihood estimation or an expectation-maximization framework [43] to iteratively estimate both the position and the M matrix from a single-molecule image.

  • Optical Aberrations:

    The accuracy of the method hinges upon the ability to determine the true basis functions that are superimposed to form single-molecule images. In our current simulations, we have employed an idealized model for our objective lens, and assumed that there are no aberrations present in our system. In practice, the sample under investigation and the components in the imaging pathway of the microscope will introduce aberrations in the acquired single-molecule images. In order to avoid incurring systematic measurement errors as a result of these aberrations, the simulations used to generate accurate basis functions must be augmented to incorporate any aberrations that may have some impact on experimentally measured images [44–46]. A spatial-light modulator or deformable mirror could additionally be used to mitigate aberrations that may cause discrepancies between theoretical basis function calculations and experiment [47].

Notwithstanding the points noted above, the proposed method provides new insight into the orientational mobility of a molecule from a single image. It removes many of the ambiguities that arise when using more conventional linear dichroism (or bulk polarization anisotropy) measurements to ascertain the orientational dynamics of individual fluorescent molecules. As evidenced by our simulations, it is possible to acquire meaningful rotational mobility data when signal and background are at levels typical of single-molecule imaging experiments (3,000 photons of signal, 20 photons per pixel of background). Our method may be applied to unpolarized data sets; however, polarized detection enhances measurement precision. Future work will explore further augmentations to the optical system that will make our method feasible under circumstances when signal is severely limited, and when aberrations or localization uncertainty may be present.

5. Appendix

In this section, we summarize how we determine the electric fields present at the image plane of a microscope for a rotationally fixed molecule. This calculation serves as a major building block for simulating the image of a molecule undergoing constrained rotation. The first step is to analytically evaluate the electric fields, $\xi_{x,y}^{\mu_{x,y,z}}(\rho)$, present at the microscope’s back focal plane [48,49]:

$$\begin{aligned}
\xi_x^{\mu_x}(\rho) &= e^{i n_1 k d\sqrt{1-\rho^2}}\,\sqrt{\tfrac{n_1}{n_0}}\,(1-\rho^2)^{-1/2}\left(\sin^2(\phi) + \cos^2(\phi)\sqrt{1-\rho^2}\right) \\
\xi_x^{\mu_y}(\rho) &= e^{i n_1 k d\sqrt{1-\rho^2}}\,\sqrt{\tfrac{n_1}{n_0}}\,(1-\rho^2)^{-1/2}\,\sin(2\phi)\left(\sqrt{1-\rho^2}-1\right)/2 \\
\xi_x^{\mu_z}(\rho) &= e^{i n_1 k d\sqrt{1-\rho^2}}\,\sqrt{\tfrac{n_1}{n_0}}\,(1-\rho^2)^{-1/2}\,\rho\cos(\phi) \\
\xi_y^{\mu_x}(\rho) &= e^{i n_1 k d\sqrt{1-\rho^2}}\,\sqrt{\tfrac{n_1}{n_0}}\,(1-\rho^2)^{-1/2}\,\sin(2\phi)\left(\sqrt{1-\rho^2}-1\right)/2 \\
\xi_y^{\mu_y}(\rho) &= e^{i n_1 k d\sqrt{1-\rho^2}}\,\sqrt{\tfrac{n_1}{n_0}}\,(1-\rho^2)^{-1/2}\left(\cos^2(\phi) + \sin^2(\phi)\sqrt{1-\rho^2}\right) \\
\xi_y^{\mu_z}(\rho) &= e^{i n_1 k d\sqrt{1-\rho^2}}\,\sqrt{\tfrac{n_1}{n_0}}\,(1-\rho^2)^{-1/2}\,\rho\sin(\phi)
\end{aligned}$$
In Eqs. (34), the polar coordinates $\rho := \{\rho \in [0, \mathrm{NA}/n_1),\ \phi \in [0, 2\pi)\}$ specify a point within the plane located one focal length behind the microscope’s objective lens. These formulas are used to calculate electric fields within the circular pupil specified by $\rho < \mathrm{NA}/n_1$, where NA is the numerical aperture of the objective, and $n_1$ is the refractive index of the objective’s immersion medium. Outside of this circle, the electric fields are zero. Furthermore, the fluorescence wavelength is λ, the wavenumber is $k = 2\pi/\lambda$, the defocus distance of the emitter from the (front) focal plane is d (where d < 0 specifies an emitter between the objective lens and its focal plane), and the refractive index of the medium surrounding the image sensor is $n_0$ (usually assumed to be $n_0 = 1$). To calculate the image plane electric fields $E_{x,y}^{\mu_{x,y,z}}(r)$, the Fourier transforms of the back focal plane fields are computed numerically, which can be performed efficiently using the fast Fourier transform algorithm:
$$E_{x,y}^{\mu_{x,y,z}}(r) = \mathcal{F}\left\{\xi_{x,y}^{\mu_{x,y,z}}(\rho)\right\}$$
In Eq. (35), $\mathcal{F}\{\cdot\}$ denotes the Fourier transform operation, and $r := \{r \in [0, \infty),\ \varphi \in [0, 2\pi)\}$ specifies a point on the image plane. When numerically simulating single-molecule images using Eqs. (34) and (35), the units of length at the image plane will depend upon the density of sampling within the back focal plane and the fluorescence wavelength, λ. For example, our simulations evaluate $\xi_{x,y}^{\mu_{x,y,z}}(\rho)$ on a 512-by-512 grid, with a sample spacing of $\Delta\rho = \frac{1}{160\ \mathrm{nm}}\,\frac{\lambda}{n_1 N}$. After applying the fast Fourier transform, the resulting simulated images are sampled at Δr = 160 nm. If one were also to account for the magnification, M = 100×, of our optical system, this sampling would correspond to 16 μm, which matches the pixel size of the image sensor referred to in our numerical experiments.
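The appendix calculation translates directly into a short numerical routine. The sketch below is our own illustration: the NA, refractive indices, wavelength, and defocus values are representative assumptions rather than the paper’s exact settings, and the prefactor follows the $\sqrt{n_1/n_0}\,(1-\rho^2)^{-1/2}$ apodization of Eq. (34).

```python
import numpy as np

def bfp_fields(N=512, NA=1.4, n1=1.518, n0=1.0, wav=580e-9, d=0.55e-6):
    """Evaluate the six back-focal-plane fields of Eq. (34) on an N x N
    grid of normalized pupil coordinates; fields vanish outside the
    pupil rho < NA/n1."""
    k = 2.0 * np.pi / wav
    lim = NA / n1
    coords = np.linspace(-lim, lim, N)
    xx, yy = np.meshgrid(coords, coords)
    rho = np.hypot(xx, yy)
    phi = np.arctan2(yy, xx)
    mask = rho < lim
    s = np.where(mask, np.sqrt(np.clip(1.0 - rho**2, 1e-12, None)), 1.0)
    pref = np.where(mask,
                    np.exp(1j * n1 * k * d * s) * np.sqrt(n1 / n0) / s,
                    0.0)
    return {
        ("x", "x"): pref * (np.sin(phi)**2 + np.cos(phi)**2 * s),
        ("x", "y"): pref * np.sin(2 * phi) * (s - 1.0) / 2.0,
        ("x", "z"): pref * rho * np.cos(phi),
        ("y", "x"): pref * np.sin(2 * phi) * (s - 1.0) / 2.0,
        ("y", "y"): pref * (np.cos(phi)**2 + np.sin(phi)**2 * s),
        ("y", "z"): pref * rho * np.sin(phi),
    }

def image_fields(xi):
    """Image-plane fields via Eq. (35): a centered 2-D FFT of each
    back-focal-plane component."""
    return {key: np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(f)))
            for key, f in xi.items()}
```

Intensity basis images such as XX(r) then follow from sums of squared moduli of these fields, per the definitions earlier in the paper.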

Acknowledgments

The authors wish to thank Mikael Backlund, Matthew Lew, Steffen Sahl and Yoav Shechtman for helpful discussions. A. S. B. acknowledges support from a Simons Graduate Research Assistantship. This work was supported by National Institutes of Health, National Institute of General Medical Sciences Grant R01GM085437.

References and links

1. T. Ha, T. Enderle, D. S. Chemla, P. R. Selvin, and S. Weiss, “Single molecule dynamics studied by polarization modulation,” Phys. Rev. Lett. 77(19), 3979–3982 (1996).

2. T. Ha, J. Glass, T. Enderle, D. S. Chemla, and S. Weiss, “Hindered rotational diffusion and rotational jumps of single molecules,” Phys. Rev. Lett. 80(10), 2093–2096 (1998).

3. H. Sosa, E. J. G. Peterman, W. E. Moerner, and L. S. B. Goldstein, “ADP-induced rocking of the kinesin motor domain revealed by single-molecule fluorescence polarization microscopy,” Nat. Struct. Biol. 8(6), 540–544 (2001).

4. E. J. G. Peterman, H. Sosa, L. S. B. Goldstein, and W. E. Moerner, “Polarized fluorescence microscopy of individual and many kinesin motors bound to axonemal microtubules,” Biophys. J. 81(5), 2851–2863 (2001).

5. J. N. Forkey, M. E. Quinlan, and Y. E. Goldman, “Measurement of single macromolecule orientation by total internal reflection fluorescence polarization microscopy,” Biophys. J. 89(2), 1261–1271 (2005).

6. S. A. Rosenberg, M. E. Quinlan, J. N. Forkey, and Y. E. Goldman, “Rotational motions of macro-molecules by single-molecule fluorescence microscopy,” Acc. Chem. Res. 38(7), 583–593 (2005).

7. M. R. Foreman and P. Török, “Fundamental limits in single-molecule orientation measurements,” New J. Phys. 13(9), 093013 (2011).

8. A. Agrawal, S. Quirin, G. Grover, and R. Piestun, “Limits of 3D dipole localization and orientation estimation for single-molecule imaging: towards Green’s tensor engineering,” Opt. Express 20(24), 26667–26680 (2012).

9. C. Phelps, W. Lee, D. Jose, P. H. von Hippel, and A. H. Marcus, “Single-molecule FRET and linear dichroism studies of DNA breathing and helicase binding at replication fork junctions,” Proc. Natl. Acad. Sci. U.S.A. 110(43), 17320–17325 (2013).

10. M. E. Quinlan, J. N. Forkey, and Y. E. Goldman, “Orientation of the myosin light chain region by single molecule total internal reflection fluorescence polarization microscopy,” Biophys. J. 89(2), 1132–1142 (2005).

11. J. F. Beausang, D. Y. Shroder, P. C. Nelson, and Y. E. Goldman, “Tilting and wobble of myosin V by high-speed single-molecule polarized fluorescence microscopy,” Biophys. J. 104(6), 1263–1273 (2013).

12. T. J. Gould, M. S. Gunewardene, M. V. Gudheti, V. V. Verkhusha, S. R. Yin, J. A. Gosse, and S. T. Hess, “Nanoscale imaging of molecular positions and anisotropies,” Nat. Methods 5(12), 1027–1030 (2008).

13. I. Testa, A. Schönle, C. von Middendorff, C. Geisler, R. Medda, C. A. Wurm, A. C. Stiel, S. Jakobs, M. Bossi, C. Eggeling, S. W. Hell, and A. Egner, “Nanoscale separation of molecular species based on their rotational mobility,” Opt. Express 16(25), 21093–21104 (2008).

14. R. M. Dickson, D. J. Norris, and W. E. Moerner, “Simultaneous imaging of individual molecules aligned both parallel and perpendicular to the optic axis,” Phys. Rev. Lett. 81(24), 5322–5325 (1998).

15. A. P. Bartko and R. M. Dickson, “Imaging three-dimensional single molecule orientations,” J. Phys. Chem. B 103(51), 11237–11241 (1999).

16. D. Patra, I. Gregor, and J. Enderlein, “Image analysis of defocused single-molecule images for three-dimensional molecule orientation studies,” J. Phys. Chem. A 108(33), 6836–6841 (2004).

17. F. Aguet, S. Geissbühler, I. Märki, T. Lasser, and M. Unser, “Super-resolution orientation estimation and localization of fluorescent dipoles using 3-D steerable filters,” Opt. Express 17(8), 6829–6848 (2009).

18. K. I. Mortensen, L. S. Churchman, J. A. Spudich, and H. Flyvbjerg, “Optimized localization analysis for single-molecule tracking and super-resolution microscopy,” Nat. Methods 7(5), 377–381 (2010).

19. M. P. Backlund, M. D. Lew, A. S. Backer, S. J. Sahl, G. Grover, A. Agrawal, R. Piestun, and W. E. Moerner, “Simultaneous, accurate measurement of the 3D position and orientation of single molecules,” Proc. Natl. Acad. Sci. U.S.A. 109(47), 19087–19092 (2012).

20. A. S. Backer, M. P. Backlund, M. D. Lew, and W. E. Moerner, “Single-molecule orientation measurements with a quadrated pupil,” Opt. Lett. 38(9), 1521–1523 (2013).

21. A. S. Backer, M. P. Backlund, A. R. Diezmann, S. J. Sahl, and W. E. Moerner, “A bisected pupil for studying single-molecule orientational dynamics and its application to 3D super-resolution microscopy,” Appl. Phys. Lett. 104, 193701 (2014).

22. A. S. Backer and W. E. Moerner, “Extending single-molecule microscopy using optical Fourier processing,” J. Phys. Chem. B 118(28), 8313–8329 (2014).

23. S. Stallinga and B. Rieger, “Position and orientation estimation of fixed dipole emitters using an effective Hermite point spread function model,” Opt. Express 20(6), 5896–5921 (2012).

24. E. Toprak, J. Enderlein, S. Syed, S. A. McKinney, R. G. Petschek, T. Ha, Y. E. Goldman, and P. R. Selvin, “Defocused orientation and position imaging (DOPI) of myosin V,” Proc. Natl. Acad. Sci. U.S.A. 103(17), 6495–6499 (2006).

25. J. A. Hutchison, H. Uji-i, A. Deres, T. Vosch, S. Rocha, S. Müller, A. A. Bastian, J. Enderlein, H. Nourouzi, C. Li, A. Herrmann, K. Müllen, F. De Schryver, and J. Hofkens, “A surface-bound molecule that undergoes optically biased Brownian rotation,” Nat. Nanotechnol. 9(2), 131–136 (2014).

26. A. Cyphersmith, A. Maksov, R. Hassey-Paradise, K. D. McCarthy, and M. D. Barnes, “Defocused emission patterns from chiral fluorophores: application to chiral axis orientation determination,” J. Phys. Chem. Lett. 2(6), 661–665 (2011).

27. S. Ham, J. Yang, F. Schlosser, F. Wurthner, and D. Kim, “Reconstruction of the molecular structure of a multichromophoric system using single-molecule defocused wide-field imaging,” J. Phys. Chem. Lett. 5(16), 2830–2835 (2014).

28. Y. Zhang, L. Gu, H. Chang, W. Ji, Y. Chen, M. Zhang, L. Yang, B. Liu, L. Chen, and T. Xu, “Ultrafast, accurate, and robust localization of anisotropic dipoles,” Protein Cell 4(8), 598–606 (2013).

29. B. Richards and E. Wolf, “Electromagnetic diffraction in optical systems. II. Structure of the image field in an aplanatic system,” Proc. R. Soc. Lond. A Math. Phys. Sci. 253(1274), 358–379 (1959).

30. J. D. Jackson, Classical Electrodynamics (Wiley, 1962).

31. E. H. Hellen and D. Axelrod, “Fluorescence emission at dielectric and metal-film interfaces,” J. Opt. Soc. Am. B 4(3), 337–350 (1987).

32. M. Böhmer and J. Enderlein, “Orientation imaging of single molecules by wide-field epifluorescence microscopy,” J. Opt. Soc. Am. B 20(3), 554–559 (2003).

33. L. Novotny and B. Hecht, Principles of Nano-Optics (Cambridge University, 2007).

34. K. Kinosita, Jr., S. Kawato, and A. Ikegami, “A theory of fluorescence polarization decay in membranes,” Biophys. J. 20(3), 289–305 (1977).

35. M. D. Lew, M. P. Backlund, and W. E. Moerner, “Rotational mobility of single molecules affects localization accuracy in super-resolution fluorescence microscopy,” Nano Lett. 13(9), 3967–3972 (2013).

36. J. Vince, Geometric Algebra for Computer Graphics (Springer, 2008).

37. J. R. Lakowicz, Principles of Fluorescence Spectroscopy (Kluwer Academic, 1999).

38. C. S. Smith, N. Joseph, B. Rieger, and K. A. Lidke, “Fast, single-molecule localization that achieves theoretically minimum uncertainty,” Nat. Methods 7(5), 373–375 (2010).

39. M. Hirsch, R. J. Wareham, M. L. Martin-Fernandez, M. P. Hobson, and D. J. Rolfe, “A stochastic model for electron multiplication charge-coupled devices – from theory to practice,” PLoS ONE 8(1), e53671 (2013).

40. F. Huang, T. M. P. Hartwich, F. E. Rivera-Molina, Y. Lin, W. C. Duim, J. J. Long, P. D. Uchil, J. R. Myers, M. A. Baird, W. Mothes, M. W. Davidson, D. Toomre, and J. Bewersdorf, “Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms,” Nat. Methods 10(7), 653–658 (2013).

41. D. Axelrod, “Fluorescence polarization microscopy,” Methods Cell Biol. 30, 333–352 (1989).

42. R. J. Ober, S. Ram, and E. S. Ward, “Localization accuracy in single-molecule microscopy,” Biophys. J. 86(2), 1185–1200 (2004).

43. C. Bishop, Pattern Recognition and Machine Learning (Springer, 2006).

44. S. Quirin, S. R. P. Pavani, and R. Piestun, “Optimal 3D single-molecule localization for superresolution microscopy with aberrations and engineered point spread functions,” Proc. Natl. Acad. Sci. U.S.A. 109(3), 675–679 (2012).

45. S. Liu, E. B. Kromann, W. D. Krueger, J. Bewersdorf, and K. A. Lidke, “Three dimensional single molecule localization using a phase retrieved pupil function,” Opt. Express 21(24), 29462–29487 (2013).

46. B. Huang, T. D. Perroud, and R. N. Zare, “Photon counting histogram: one-photon excitation,” ChemPhysChem 5(10), 1523–1531 (2004).

47. I. Izeddin, M. El Beheiry, J. Andilla, D. Ciepielewski, X. Darzacq, and M. Dahan, “PSF shaping using adaptive optics for three-dimensional single-molecule super-resolution imaging and tracking,” Opt. Express 20(5), 4957–4967 (2012).

48. M. A. Lieb, J. M. Zavislan, and L. Novotny, “Single-molecule orientations determined by direct emission pattern imaging,” J. Opt. Soc. Am. B 21(6), 1210–1215 (2004).

49. D. Axelrod, “Fluorescence excitation and imaging of single molecules near dielectric-coated and bare surfaces: a theoretical study,” J. Microsc. 247(2), 147–160 (2012).
