## Abstract

We show that the position of single molecules in all three spatial dimensions can be estimated alongside their emission color by diffractive optics based design of the Point Spread Function (PSF). The phase in a plane conjugate to the aperture stop of the objective lens is modified by a diffractive structure that splits the spot on the camera into closely spaced diffraction orders. The distance between and the size of these sub-spots are a measure of the emission color. Estimation of the axial position is enabled by imprinting aberrations such as astigmatism and defocus onto the orders. The overall spot shape is fitted with a fully vectorial PSF model. Proof-of-principle experiments on quantum dots indicate that a spectral precision of 10 to 20 nm, an axial localization precision of 25 to 50 nm, and a lateral localization precision of 10 to 30 nm can be achieved over a 1 *μ*m range of axial positions for on average 800 signal photons and 17 background photons/pixel. The method appears to be rather sensitive to PSF model errors such as aberrations, giving rise in particular to biases in the fitted wavelength of up to 15 nm.

© 2016 Optical Society of America

## 1. Introduction

The resolution in single molecule localization microscopy based on widefield fluorescence imaging [1–4] is limited by the localization precision and the density of fluorescent labels [5] and can be on the order of several tens of nanometers in practice. This leap in resolution has opened up new avenues in the study of macromolecular complexes at the interface of chemistry, biology and physics. The study of molecular interaction and function necessitates the detection of multiple molecular species, which is conventionally done by labeling the target molecules with fluorophores that differ in their emission color. These species can be differentiated in a number of ways. The most straightforward ways are to image the species sequentially by changing the illumination laser and the dichroics [6] or to image the species simultaneously by color splitting and projecting images side by side on one or on multiple cameras [7–9]. Hyperspectral imaging with a line scan camera offers a more direct measurement of the emission color [10]. An entirely different way of color detection relies on activator-reporter labeling in which the illumination wavelength is switched but the detection wavelength is kept the same [11]. This approach requires a calibration procedure for handling the cross-talk between the different activator molecules. A similar approach is the label washing method in which the target molecules are labeled sequentially with the same fluorophore and the labels are washed away in intermediate sample preparation steps [12, 13]. The key advantage of this approach is that it benefits from the high photon count per on-event and the relatively high photostability of the Alexa647 fluorophore.

Recently, a direct estimation of the emission color has been proposed making use of a diffraction grating with large pitch placed in the detection branch of the microscope [14]. The diffraction orders generated by the grating appear as closely spaced spots on the camera, the distance between the orders being a measure for the wavelength. In this way the simultaneous measurement of the in-plane emitter position and the emission color with a single camera setup is enabled, reaching an estimation precision on the order of 10 nm for both the position and the (peak) emission wavelength. A similar technique with similar performance has recently been proposed in [15], the differences being the use of a prism as the dispersing optical element and the use of a 4Pi setup to increase photon collection efficiency. The technique described in [14] belongs to the family of Point Spread Function (PSF) engineering or optical Fourier processing techniques [16]. A relay optical path is added to the detection branch of the microscope and a phase profile is added to the emission beam in a plane conjugate to the pupil plane, often making use of a Spatial Light Modulator (SLM). So far, PSF engineering has mainly been applied to 3D single emitter localization. A wide range of 3D methods have been described in the literature. The main flavors are the astigmatic method [17–19], the bi-/multifocal method [20–24], the double-helix method [25–29], the phase-ramp method [30], the self-bending method [31] and the interferometric method [32]. Pupil optimization of 3D localization has been investigated in [33,34], leading to so-called saddle-point and tetrapod PSFs. These are variants of the astigmatic method that use higher order astigmatism for splitting the astigmatic focal lines into halter-shaped spot pairs, which improves precision and axial range. Based on these developments we set out to explore if 3D spatial and color information can be encoded simultaneously on a single camera.

The scale of a diffraction limited point source image is on the order of *λ*/NA with *λ* the wavelength and NA the numerical aperture of the objective lens. Estimating the spot width would then seemingly be enough to estimate the wavelength. In standard 2D imaging, however, variations in spot width could also be due to small focus errors, so that the effects of wavelength and axial position cannot be disentangled. This results in a divergent Cramér-Rao Lower Bound (CRLB), the best possible precision for an unbiased estimator, for these two parameters in diffraction limited imaging at the focal plane. This divergence is overcome by PSF engineering, in which spot shape parameters are used to encode the axial position. For example, the axial position is encoded by the elongation of the spot for the astigmatic method, and by the orientation of the spot for the double helix method. Now that the *shape* of the PSF is used for estimation of the axial position, it follows that in principle the wavelength can be fitted from the *scale* of the PSF.
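The degeneracy between axial position and wavelength can be made concrete with a toy calculation: in a Gaussian spot model where both parameters enter the expected pixel counts only through the spot width, the corresponding 2×2 block of the Fisher matrix is rank one, so the joint CRLB diverges. The model below (hypothetical NA, Rayleigh range, and photon numbers) is a minimal numpy sketch of this argument, not the paper's PSF model.

```python
import numpy as np

# Toy 2D Gaussian spot model: both z (defocus) and lam (wavelength) enter
# only through the spot width w, so their Fisher information is rank one.
NA, zR = 1.4, 400.0  # hypothetical NA and Rayleigh range [nm]

def width(z, lam):
    return 0.5 * lam / NA * np.sqrt(1.0 + (z / zR) ** 2)

x = np.arange(-7, 8) * 100.0  # pixel centers [nm]
X, Y = np.meshgrid(x, x)

def mu(z, lam, N=1000.0):
    w = width(z, lam)
    return N / (2 * np.pi * w**2) * np.exp(-(X**2 + Y**2) / (2 * w**2))

# Numerical derivatives of the expected counts w.r.t. z and lam
z0, lam0, h = 200.0, 550.0, 1e-3
dmu_dz = (mu(z0 + h, lam0) - mu(z0 - h, lam0)) / (2 * h)
dmu_dlam = (mu(z0, lam0 + h) - mu(z0, lam0 - h)) / (2 * h)

m = mu(z0, lam0)
F = np.array([[np.sum(d1 * d2 / m) for d2 in (dmu_dz, dmu_dlam)]
              for d1 in (dmu_dz, dmu_dlam)])
# Rank-one Fisher matrix: the determinant vanishes relative to its scale,
# so the CRLB for jointly estimating z and lam diverges.
rel_det = np.linalg.det(F) / (F[0, 0] * F[1, 1])
print(rel_det)  # ~0
```

Any PSF whose *shape* (not just scale) changes with defocus breaks this rank deficiency.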

Here, we propose a further advance by making use of diffractive optics based design principles. Figure 1 shows the gist of the idea. The phase of the emission beam is modified at a plane conjugate to the exit pupil of the objective lens. The imprinted phase profile is divided into repetitive zones, resulting in a split of the spot on the camera into different diffraction orders (sub-spots) that are closely positioned on the camera, similar to [14]. The diffractive zones are no longer straight as in a conventional grating but curved, so as to generate a desired aberration (astigmatism, defocus) in the different diffraction orders. This enables the detection of the emission color from the distance and relative magnitude of the sub-spots and of the axial position from the shape of and the shape differences between the sub-spots. In this way both the shape and scale are used in a mixed way to extract the information on wavelength and axial position with a better precision.

The outline of the paper is as follows. In section 2 the theoretical details underlying the design and the fitting procedure are explained, section 3 contains a numerical analysis of the performance of different designs, section 4 describes proof-of-principle experiments, and section 5 concludes the paper with a discussion and outlook to future work.

## 2. Theory

#### 2.1. Diffractive Optical Elements (DOEs)

The aberration profile of the SLM is that of a DOE. We describe the DOE as a thin surface, locally altering the phase of the passing beam, independent of the local direction of incidence. This fits within the framework of scalar diffraction theory and is justified in the paraxial (low NA) limit, which is appropriate at the SLM plane. We also assume that the SLM is placed at a plane conjugate to the pupil plane. The thin surface of the DOE can be divided into zones, labeled with a discrete index *j*, the zone indices. Points with normalized pupil coordinates *ρ⃗* = (*ρ*_{x}, *ρ*_{y}) are in zone *j* if:

$$ j\lambda_{0} \leq K(\vec{\rho}) < (j+1)\lambda_{0}, $$

where *λ*_{0} is the design wavelength, chosen to be in the center of the spectral region of interest, and where the function *K*(*ρ⃗*) is called the zone function. For a standard grating structure the zone function is a linear function of *ρ⃗*. Within each zone *j* a variable *t* that takes values 0 ≤ *t* < 1 is defined by:

$$ t = \frac{K(\vec{\rho})}{\lambda_{0}} - \left\lfloor \frac{K(\vec{\rho})}{\lambda_{0}} \right\rfloor, $$

where ⌊*x*⌋ is the largest integer smaller than *x*. The aberration profile added to an incoming beam is given by the so-called profile function *f*(*t*):

$$ W(\vec{\rho}) = f(t), $$

where *f*(*t*) describes the shape of the grating within each zone. For example, a blazed structure has *f*(*t*) = *h*_{max}*t* with *h*_{max} the step in optical path length, and a sinusoidal structure has *f*(*t*) = *h*_{max}(1 + cos (2*πt*))/2 with *h*_{max} the optical path length modulation. The transmission function *T*(*ρ⃗*) = exp(2*πiW*(*ρ⃗*)/*λ*) can be written as a sum over diffraction orders with index *m*:

$$ T(\vec{\rho}) = \sum_{m=-\infty}^{\infty} C_{m} \exp\left( \frac{2\pi i m K(\vec{\rho})}{\lambda_{0}} \right), \quad C_{m} = \int_{0}^{1} dt\, \exp\left( \frac{2\pi i f(t)}{\lambda} - 2\pi i m t \right). $$

The efficiency of diffraction order *m* is *η*_{m} = |*C*_{m}|^{2}, and obeys conservation of energy, i.e. ${\sum}_{m=-\infty}^{\infty}{\eta}_{m}=1$. In summary, the profile function *f*(*t*) can be designed to give a desired distribution of light over the diffraction orders, and the zone function *K*(*ρ⃗*) can be designed to give the desired aberration to the different contributing orders *m*, each of which carries an aberration *W*_{m}(*ρ⃗*) = *mλK*(*ρ⃗*)/*λ*_{0}. On top of the DOE profile a nominal aberration profile *W*_{nom}(*ρ⃗*) can be introduced, giving each diffraction order a total aberration *W*_{nom}(*ρ⃗*) + *W*_{m}(*ρ⃗*).
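The decomposition into diffraction orders can be checked numerically: the *C*_{m} are the Fourier coefficients of exp(2*πif*(*t*)/*λ*) over one zone. The sketch below (numpy, with illustrative wavelengths) computes the efficiencies for binary and blazed profile functions and compares them with the closed-form expressions quoted in section 2.2.

```python
import numpy as np

def efficiencies(f, lam, orders, samples=4096):
    # Fourier coefficients C_m of exp(2*pi*i*f(t)/lam) over one zone; eta_m = |C_m|^2
    t = (np.arange(samples) + 0.5) / samples
    T = np.exp(2j * np.pi * f(t) / lam)
    return {m: abs(np.mean(T * np.exp(-2j * np.pi * m * t))) ** 2 for m in orders}

lam0, lam = 550.0, 600.0                   # design and emission wavelength [nm]
binary = lambda t: 0.5 * lam0 * (t < 0.5)  # two-level profile with step lam0/2
blazed = lambda t: 0.5 * lam0 * t          # f(t) = h_max*t with h_max = lam0/2

eta = efficiencies(binary, lam, range(-5, 6))
analytic1 = 4 * np.sin(np.pi * lam0 / (2 * lam)) ** 2 / np.pi ** 2
eta_b = efficiencies(blazed, lam, (0, 1))
x0 = np.pi * lam0 / (2 * lam)
analytic0 = (np.sin(x0) / x0) ** 2
print(eta[1], analytic1)   # binary grating: matches 4*sin^2(..)/pi^2
print(eta_b[0], analytic0) # blazed grating: matches the sinc-squared form
print(sum(eta.values()))   # approaches 1 as more orders are included
```

The same routine can be used to evaluate designs for any profile function *f*(*t*).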

#### 2.2. Design configurations

Diffractive optics based designs based on adding astigmatism or defocus to the diffraction orders have been described in [35]. In the following we will provide details on these designs and investigate the performance. A first configuration is based on a binary grating, which has *f*(*t*) = *h*_{max}(1 + sign (sin (2*πt*)))/2 with *h*_{max} = *λ*_{0}/2. This will split the spot in two dominant diffraction orders, *m* = ±1, with efficiencies *η*_{±1} = 4sin^{2} (*πλ*_{0}/ (2*λ*))/*π*^{2}, the zeroth order being essentially suppressed. The zone function is taken to be the sum of tip and diagonal astigmatism:

$$ K(\vec{\rho}) = A_{11} Z_{11}(\vec{\rho}) + A_{2-2} Z_{2-2}(\vec{\rho}), $$

with *Z*_{nl} the Zernike polynomials. This gives a spot separation Δ*x* = 2*A*_{11}*λ*/(*λ*_{0}NA) and astigmatism of equal magnitude but opposite sign to the two orders. The wavelength can be estimated from the spot separation, and the axial position from the (opposite) elongation of the two orders, as indicated by the simulated spot shapes shown in Fig. 1. The particular design shown there has a zone function with Zernike coefficients *A*_{11} = 1.4*λ*_{0} and *A*_{2−2} = 0.27*λ*_{0} for *λ*_{0} = 550 nm. It is also possible to generate variations on this design by mixing in higher order astigmatism, i.e. having non-zero Zernike coefficients *A*_{4−2}, *A*_{6−2}, .... Note that we use the convention of labeling the Zernike coefficients *A*_{nl} with a radial index *n* and an azimuthal index *l* [36].

A design variation on this first configuration is based on the blazed grating, which has *f*(*t*) = *h*_{max}*t* with *h*_{max} = *λ*_{0}/2. This will split the spot into two diffraction orders, *m* = 0 with efficiency *η*_{0} = sinc (*πλ*_{0}/ (2*λ*))^{2} and *m* = 1 with efficiency *η*_{1} = sinc (*π* − *πλ*_{0}/ (2*λ*))^{2}, with losses to higher orders amounting to less than 1 − 8/*π*^{2} = 0.19. The zone function is again taken to be the sum of tip and diagonal astigmatism, so that only one of the two spots (the first order) is astigmatically aberrated. Symmetry between the two spots can be restored by adding an overall aberration *W*_{nom} (*ρ⃗*) = −*K*(*ρ⃗*)/2, so that order *m* = 0 has an overall aberration −*K*(*ρ⃗*)/2 and order *m* = 1 has an overall aberration *K*(*ρ⃗*)(2*λ* − *λ*_{0}) / (2*λ*_{0}), i.e. they are (nearly) equal in magnitude but of opposite sign.

A second configuration uses a zone function with tip and defocus:

$$ K(\vec{\rho}) = A_{11} Z_{11}(\vec{\rho}) + A_{20} Z_{20}(\vec{\rho}). $$

Variations on this design can again be generated by mixing in higher order spherical aberration, i.e. non-zero Zernike coefficients *A*_{40}, *A*_{60}, ....

Instead of splitting the spot in two sub-spots it is also possible to split in three sub-spots. By varying the modulation of a binary or sinusoidal profile function the ratio between a central *m* = 0 spot and two *m* = ±1 side spots can be tuned. The central spot is unaberrated and the side spots have aberrations (astigmatism, defocus) of equal magnitude but opposite sign.
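The tuning of the central-to-side spot ratio can be illustrated numerically: for a sinusoidal profile the Jacobi-Anger expansion gives *η*_{m} = *J*_{m}(*πh*_{max}/*λ*)^{2}, so varying *h*_{max} sets the ratio *η*_{0}/*η*_{1}. A numpy-only sketch (with an integral-representation stand-in for the Bessel function):

```python
import numpy as np

# Sinusoidal profile f(t) = h_max*(1 + cos(2*pi*t))/2: the efficiencies are
# eta_m = J_m(phi/2)^2 with phi = 2*pi*h_max/lam (Jacobi-Anger expansion),
# so the central-to-side ratio eta_0/eta_1 is tuned via the modulation h_max.
def bessel_j(m, z, n=20000):
    # J_m(z) for integer m via its integral representation (numpy-only)
    th = (np.arange(n) + 0.5) * np.pi / n
    return np.mean(np.cos(m * th - z * np.sin(th)))

lam = 550.0
t = (np.arange(8192) + 0.5) / 8192
for h_max in (0.2 * lam, 0.4 * lam):
    phi = 2 * np.pi * h_max / lam
    T = np.exp(0.5j * phi * (1 + np.cos(2 * np.pi * t)))
    eta = {m: abs(np.mean(T * np.exp(-2j * np.pi * m * t))) ** 2 for m in (0, 1, -1)}
    ratio = eta[0] / eta[1]
    print(h_max, ratio, (bessel_j(0, phi / 2) / bessel_j(1, phi / 2)) ** 2)
```

The side spots come out equal by symmetry of the profile, as required for this three-spot configuration.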

The double helix PSF designed according to [28, 29] can also be framed in the currently proposed DOE formalism. Now the zone function cannot be conveniently expressed as the sum of a limited number of Zernike functions as it contains singular points. The pupil is divided into *L* annular zones *l* = 1, 2, ..., *L* with outer radius *ρ*_{l} = (*l/L*)^{α} with *α* an exponent; within annular zone *l* the zone function is:

$$ K(\vec{\rho}) = \frac{l \lambda_{0} \phi}{2\pi}, $$

with *ϕ* = arctan(*ρ*_{y}/*ρ*_{x}) the azimuthal pupil coordinate. The profile function is a blaze with *h*_{max} = *λ*_{0} so that predominantly order *m* = 1 is selected. Figure 2(a) shows a simulation of such a double helix PSF with *L* = 2 and *α* = 1/2 for *λ*_{0} = 550 nm. The orientation encodes for the axial position, and the distance between the two lobes for the wavelength. The axial range of the resulting double helix PSF increases with *L*, but at the expense of an increase in the lateral footprint of the spot on the camera. For the purpose of comparison we show simulated astigmatic spots in Fig. 2(b) (nominal aberration function with Zernike coefficient *A*_{2−2} = 0.34*λ*_{0} for *λ*_{0} = 550 nm). It is possible to add higher order astigmatism (non-zero *A*_{4−2}) to mimic the saddle-point/tetrapod PSF of [33,34], which has a lower and more constant precision for different axial positions. For the sake of simplicity this is not further pursued here.
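A pupil-phase pattern of this type can be sketched in a few lines. Note that the zone function inside each ring is our reading of the design (proportional to the azimuthal angle, with the winding number increasing per ring); treat it as an illustrative assumption rather than the exact pattern of [28, 29].

```python
import numpy as np

# Sketch of a double-helix-type pupil phase in the DOE formalism: L annular
# zones with outer radii rho_l = (l/L)**alpha; inside zone l the zone function
# is taken as K = l*lam0*phi/(2*pi) (assumed form), and a blaze with
# h_max = lam0 selects order m = 1, whose aberration is W_1 = K*lam/lam0.
L_zones, alpha, lam0 = 2, 0.5, 550.0
n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
rho = np.hypot(x, y)
phi = np.arctan2(y, x)  # azimuthal pupil coordinate

K = np.zeros_like(rho)
outer = (np.arange(1, L_zones + 1) / L_zones) ** alpha
inner = np.concatenate(([0.0], outer[:-1]))
for l, (r0, r1) in enumerate(zip(inner, outer), start=1):
    ring = (rho >= r0) & (rho < r1)
    K[ring] = l * lam0 * phi[ring] / (2 * np.pi)

# First-order aberration in waves at lam = lam0, zero outside the pupil:
W1 = np.where(rho < 1, K / lam0, 0.0)
print(W1.min(), W1.max())  # ring l spans about (-l/2, l/2] waves
```

Displaying `W1` reproduces the characteristic spiral-staircase phase of rotating-PSF designs.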

#### 2.3. Single emitter localization by Maximum Likelihood Estimation (MLE)

The procedure of MLE for single emitter localization has been documented in e.g. [40–42]. In brief, parameters of interest *θ*_{1}, ..., *θ*_{L} are found by fitting the expected photon counts *μ*_{k} to the observed photon counts *n*_{k} for all pixels *k* = 1, ..., *N* inside a ROI by optimization of the log-likelihood function for the model given the observation. The log-likelihood that incorporates both Gaussian readout noise and shot noise is given by [43–45]:

$$ \log L = \sum_{k=1}^{N} \left[ \left( n_{k} + \sigma^{2} \right) \log\left( \mu_{k} + \sigma^{2} \right) - \left( \mu_{k} + \sigma^{2} \right) \right], $$

up to terms independent of the fit parameters, where *σ* is the root mean square (rms) readout noise. The average and variance of the observed photon count *n*_{k} are given by:

$$ \langle n_{k} \rangle = \mu_{k}, \quad \mathrm{Var}\left( n_{k} \right) = \mu_{k} + \sigma^{2}. $$

The derivatives of the log-likelihood w.r.t. the fit parameters are needed in optimization routines such as the Levenberg-Marquardt algorithm:

$$ \frac{\partial \log L}{\partial \theta_{j}} = \sum_{k=1}^{N} \left( \frac{n_{k} + \sigma^{2}}{\mu_{k} + \sigma^{2}} - 1 \right) \frac{\partial \mu_{k}}{\partial \theta_{j}}. $$

The second order derivative of *μ*_{k} with respect to the fit parameters *θ*_{j} is usually neglected as it is small close to the optimum of log *L*. The Fisher matrix is found as:

$$ F_{ij} = \sum_{k=1}^{N} \frac{1}{\mu_{k} + \sigma^{2}} \frac{\partial \mu_{k}}{\partial \theta_{i}} \frac{\partial \mu_{k}}{\partial \theta_{j}}, $$

for *i*, *j* = 1, ..., *L*.
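A minimal sketch of these ingredients, assuming the common approximation in which the Gaussian readout noise is folded into a Poisson model by adding *σ*² to both the observed and expected counts; `mu` stands for any PSF model evaluated on the ROI pixels:

```python
import numpy as np

# MLE ingredients: log-likelihood (up to parameter-independent terms),
# its gradient, and the Fisher matrix for a Poisson model with the readout
# noise variance sigma^2 added to counts and expectation.
def log_likelihood(n, mu, sigma):
    nt, mt = n + sigma**2, mu + sigma**2
    return np.sum(nt * np.log(mt) - mt)

def grad_log_likelihood(n, mu, dmu, sigma):
    # dmu: shape (n_params, n_pixels), derivatives d(mu_k)/d(theta_j)
    w = (n + sigma**2) / (mu + sigma**2) - 1.0
    return np.array([np.sum(w * d) for d in dmu])

def fisher(mu, dmu, sigma):
    inv = 1.0 / (mu + sigma**2)
    return np.array([[np.sum(inv * di * dj) for dj in dmu] for di in dmu])

# Toy check with a single-parameter model mu = N*psf + b (fit parameter N)
psf = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
mu = 1000.0 * psf + 5.0
dmu = np.array([psf])  # d(mu)/d(N)
crlb = np.sqrt(np.linalg.inv(fisher(mu, dmu, sigma=1.0))[0, 0])
print(crlb)  # best achievable std of the fitted photon count
```

The gradient vanishes when the model reproduces the data exactly, which is the fixed point any Levenberg-Marquardt iteration converges to.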

#### 2.4. Vectorial PSF model

The complexity of the spot shapes resulting from diffractive optics inspired PSF designs warrants a more exact PSF model than standard Gaussian based effective PSF models. We will use an exact vectorial PSF model to fit spots, as originally described in [46], but now for fitting the axial position and emission wavelength as well.

The expected photon count at pixel *k* depends on the emitter position *r⃗*_{0} = (*x*_{0}, *y*_{0}, *z*_{0}), the emission wavelength *λ*, the signal photon count *N*, and the number of background photons per pixel *b*, giving a total of 6 fit parameters. The expected photon count is given by the integration of the PSF over the pixel area *𝒟*_{k} of size *a* × *a*:

$$ \mu_{k} = N \int_{\mathcal{D}_{k}} d^{2}r\, H\left( \vec{r} - \vec{r}_{0} \right) + b, $$

where the axial position is measured relative to the nominal focal plane *z* = 0. The emitted radiation is assumed to be imaged by an aplanatic and telecentric combination of objective lens and tube lens, and the emission dipole is assumed to rotate freely during the image acquisition time of the camera. The PSF of a freely rotating dipole in the presence of background is given by [47–51]:

$$ H(\vec{r}) = \frac{1}{3} \sum_{l=x,y} \sum_{j=x,y,z} \left| w_{lj}(\vec{r}) \right|^{2}, $$

where the functions *w*_{lj}(*r⃗*) represent the electric field component *l* = *x*, *y* in the image plane proportional to the emission dipole component *j* = *x*, *y*, *z*. These functions can be expressed as integrals over the pupil plane:

$$ w_{lj}(\vec{r}) = \frac{1}{w_{n}} \int d^{2}\rho\, A(\vec{\rho})\, q_{lj}(\vec{\rho}) \exp\left( \frac{2\pi i W(\vec{\rho})}{\lambda} + i \vec{k}(\vec{\rho}) \cdot \vec{r} \right), $$

over the normalized pupil coordinates *ρ⃗*, where *w*_{n} is a normalization factor, *A*(*ρ⃗*) is the amplitude, including the well-known aplanatic correction factor [47, 48], the *q*_{lj}(*ρ⃗*) are polarization vector components defined elsewhere [51], and where *W*(*ρ⃗*) is the aberration function induced by the SLM. The wavevector *k⃗*(*ρ⃗*) depends on the normalized pupil coordinates *ρ⃗* by:

$$ \vec{k}(\vec{\rho}) = \frac{2\pi}{\lambda} \left( \mathrm{NA}\,\rho_{x},\, \mathrm{NA}\,\rho_{y},\, \sqrt{n^{2} - \mathrm{NA}^{2} |\vec{\rho}|^{2}} \right), $$

with *n* the medium refractive index and NA the objective NA.

The *μ*_{k} are linear functions of the signal photon count *N* and the background per pixel *b*, making the first order derivatives of the *μ*_{k} with respect to these fit parameters easy to evaluate. The derivatives with respect to the fit parameters *θ*_{m} = *x*_{0}, *y*_{0}, *z*_{0}, *λ* are considerably more elaborate:

$$ \frac{\partial \mu_{k}}{\partial \theta_{m}} = \frac{2N}{3} \int_{\mathcal{D}_{k}} d^{2}r \sum_{l=x,y} \sum_{j=x,y,z} \mathrm{Re}\left[ w_{lj}\left( \vec{r} - \vec{r}_{0} \right)^{*} \frac{\partial w_{lj}\left( \vec{r} - \vec{r}_{0} \right)}{\partial \theta_{m}} \right], $$

where the derivatives of the *w*_{lj}(*r⃗* − *r⃗*_{0}) are given by differentiation under the pupil integral, e.g. for the position parameters:

$$ \frac{\partial w_{lj}\left( \vec{r} - \vec{r}_{0} \right)}{\partial \vec{r}_{0}} = \frac{1}{w_{n}} \int d^{2}\rho\, A(\vec{\rho})\, q_{lj}(\vec{\rho}) \left[ -i \vec{k}(\vec{\rho}) \right] \exp\left( \frac{2\pi i W(\vec{\rho})}{\lambda} + i \vec{k}(\vec{\rho}) \cdot \left( \vec{r} - \vec{r}_{0} \right) \right), $$

and for the wavelength from the explicit *λ* dependence of the phase factor.
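A scalar simplification of this computation (a single pupil integral evaluated with an FFT, omitting the polarization components *q*_{lj} and the aplanatic amplitude factor) already reproduces the qualitative behavior used in this paper, e.g. the sign flip of the astigmatic elongation through focus. Grid sizes and optical parameters below are illustrative:

```python
import numpy as np

# Scalar sketch of the pupil-to-image computation: sample the pupil, apply
# the aberration phase 2*pi*W/lam plus the defocus term k_z*z0, and FFT.
# The full vectorial model additionally carries q_lj and the aplanatic A(rho).
n, pad = 256, 4  # grid size; the pupil fills 1/pad of the grid (oversampling)
NA, lam, n_med = 1.25, 550.0, 1.33
c = (np.arange(n) - n // 2) / (n // (2 * pad))
Y, X = np.meshgrid(c, c, indexing="ij")
rho2 = X**2 + Y**2
pupil = rho2 <= 1.0

def psf(z0, W=0.0):
    kz = 2 * np.pi / lam * np.sqrt(np.maximum(n_med**2 - NA**2 * rho2, 0.0))
    field = np.where(pupil, np.exp(1j * (kz * z0 + 2 * np.pi * W / lam)), 0.0)
    H = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))) ** 2
    return H / H.sum()

# Astigmatism W = A * rho^2 * cos(2*phi) = A * (x^2 - y^2): the elongation
# axis of the spot flips when the defocus changes sign.
W_ast = 0.25 * lam * (X**2 - Y**2)
Hp, Hm = psf(+200.0, W_ast), psf(-200.0, W_ast)
```

Comparing the second moments of `Hp` and `Hm` shows the opposite elongation above and below focus that encodes the axial position.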

## 3. Numerical results

We have tested the performance limits of the different configurations by computing the CRLB as a function of the axial position and the emission wavelength. This has been done for an ROI with an *x* × *y*-size of 25×15 pixels (diffractive astigmatic PSF, blazed grating design described in section 2.2) or 15×15 pixels (double helix PSF and astigmatic PSF) of pixel size *a* = 100 nm, for *N* = 1500 signal photons and *b* = 10 background photons/pixel, taking only shot noise into account. Figure 3 shows the results of this analysis. It appears that the position (*xyz*) precision limit is comparable for all cases with the double helix PSF giving the most constant precision across all *z*-values. The axial (*z*) precision is typically twice as large as the lateral (*xy*) precision. The spectral (*λ*) precision is best for the diffractive astigmatic PSF and worst for the astigmatic PSF. For an axial range on the order of 1 *μ*m for the double helix design it appears that *L* = 2 or *L* = 3 annular zones suffice.

The vectorial PSF fitting method has been tested with a simulation study. Fig. 4 shows the numerically determined fit error and the CRLB in the *x*, *y* and *z*-position of the emitter and in the emission wavelength *λ* as a function of signal photon count for two background levels. For each signal photon count and background setting, *N*_{inst} = 100 random instances of the 3D-position and wavelength were generated (axial range ±500 nm, spectral range ±75 nm around a central wavelength of 550 nm), and the fit error was determined as the standard deviation of the fit parameter values at convergence. The uncertainty in the fit error was taken to be a factor
$\sqrt{2/{N}_{\text{inst}}}$ times the fit error, assuming a Gaussian distribution of errors. This numerical fit error was compared to the mean and standard deviation of the CRLB-values for the fit parameters at the found optimum. Outliers were identified as fits for which the merit function at convergence was more than three standard deviations below the mean of the distribution of final merit function values, or for which the fitted *x* or *y*-position was more than 1.5 pixels away from the center of the ROI. In the simulations only shot noise is taken into account; the readout noise is set to zero. Initial values for the MLE-fit were found as follows. The background was estimated from the median of all the pixels at the edge of the ROI, the signal photon count was found from the sum signal after background subtraction, and the lateral positions from the centroid position of the spot. The initial estimate for the axial position and wavelength depends on the PSF type. For the diffractive astigmatism design we proceeded by dividing the image into two halves corresponding to the two diffraction orders. The distance between the centroid estimates for the two halves and the ratio of the signal photon counts in the two halves were used in an estimate for the wavelength; the second order moments in the two halves were used for an estimate of the diagonal astigmatism, which measures the axial position of the emitter. For the double helix and the astigmatic PSF the center wavelength of the spectral range was chosen as initial value. The second order moments of the light distribution were used to infer an estimate for the orientation (double helix) or elongation (astigmatism) of the spot and in this way to provide an initial estimate for the axial position. This set of initial values was used for a Levenberg-Marquardt optimization of the log-likelihood. Reasonable initial values promote a speedy and robust convergence of the optimizer.
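The initial-value heuristics described above (edge-median background, background-subtracted sum for the signal, centroid for the lateral position) can be sketched as follows; the ROI size and pixel pitch are illustrative:

```python
import numpy as np

# Initial estimates from a raw ROI: background from the median of the edge
# pixels, signal from the background-subtracted sum, lateral position from
# the centroid (in nm relative to the ROI center).
def initial_values(roi, pixel_size=100.0):
    edge = np.concatenate([roi[0], roi[-1], roi[1:-1, 0], roi[1:-1, -1]])
    b = float(np.median(edge))
    sub = np.clip(roi - b, 0.0, None)
    N = float(sub.sum())
    yy, xx = np.mgrid[:roi.shape[0], :roi.shape[1]]
    x0 = pixel_size * (np.sum(sub * xx) / N - (roi.shape[1] - 1) / 2)
    y0 = pixel_size * (np.sum(sub * yy) / N - (roi.shape[0] - 1) / 2)
    return x0, y0, N, b

# Toy ROI: uniform background of 5 with one bright off-center pixel
roi = np.full((15, 15), 5.0)
roi[7, 9] += 300.0
print(initial_values(roi))  # -> (200.0, 0.0, 300.0, 5.0)
```

The axial-position and wavelength seeds (order separation, per-half second moments) would be built on top of these quantities per the procedure in the text.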

The numerical fitting routine works well for most of the signal and background settings, as may be concluded from the correspondence between the numerically determined fit error and the CRLB, and from the fraction of outlier fits, which is below 1 %. The method breaks down for the least favorable signal-to-background levels (200 signal photons at 10 background photons/pixel), where the fit error exceeds the CRLB and where the fraction of outlier fits increases to about 5 % for the diffractive astigmatic PSF and even to 20 to 40 % for the double helix and astigmatic PSF. The numerically found fit uncertainty agrees reasonably well with the CRLB values reported in Fig. 3. The level of deterioration of the fit uncertainty by high background is comparable for all PSF cases considered, which can be understood from the comparable size of the spot footprint on the detector. The fit uncertainty in *x* and *y* is comparable for all methods, the fit uncertainty in *z* is typically twice as large, and the fit uncertainty in *λ* is best for the diffractive astigmatic PSF, slightly worse for the double helix PSF, and significantly worse for the astigmatic PSF.

Additional simulations have been done to test the robustness of the vector PSF fitting method. In the simulations of Fig. 4 monochromatic synthetic data was generated and fitted with a monochromatic vector PSF model. Real fluorescence signals have a non-zero spectral bandwidth, which gives rise to differences with the monochromatic model PSF. In order to test the effect of these differences, synthetic data is generated by computing a weighted average of the PSF over a Gaussian emission spectrum with standard deviation *σ*_{em} (full-width-half-maximum
$2\sqrt{2\text{log}(2)}{\sigma}_{\text{em}}$), and this spectrally averaged PSF is fitted with a monochromatic PSF to estimate the 3D-position and the peak wavelength of the emission spectrum. Figure 5(abcd) shows the fit error and CRLB as a function of the spectral bandwidth *σ*_{em}. For values below roughly 20 nm no significant loss in precision is found, implying that a monochromatic model PSF can safely be used for most emitters. We note that an asymmetric emission spectrum, which organic fluorophores typically have, is expected to give rise to a bias in the estimation of the peak wavelength but not to a loss in precision.
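The spectral averaging used in this test amounts to a weighted sum of monochromatic PSFs over a sampled Gaussian spectrum. A sketch with a toy wavelength-dependent PSF standing in for the vectorial model:

```python
import numpy as np

# Average a wavelength-dependent model PSF over a sampled Gaussian emission
# spectrum with peak lam_peak and standard deviation sigma_em.
def spectrally_averaged(psf_model, lam_peak, sigma_em, n_samples=11):
    lam = lam_peak + np.linspace(-3, 3, n_samples) * sigma_em
    w = np.exp(-((lam - lam_peak) ** 2) / (2 * sigma_em**2))
    w /= w.sum()
    return sum(wi * psf_model(li) for wi, li in zip(w, lam))

# Toy 1D stand-in: spot scale proportional to lambda/NA
x = np.linspace(-1000, 1000, 201)  # image coordinate [nm]
def psf_model(lam, NA=1.4):
    s = 0.42 * lam / NA  # Gaussian approximation of the spot width
    g = np.exp(-(x**2) / (2 * s**2))
    return g / g.sum()

H = spectrally_averaged(psf_model, 550.0, 20.0)
```

For *σ*_{em} = 20 nm the second moment of the averaged spot exceeds the monochromatic one by only about 0.1 %, consistent with the finding that the monochromatic model is adequate at these bandwidths.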

The microscope setup may suffer from (unknown) aberrations, which will result in deviations of the image data from the model PSF, which only takes into account the designed aberrations generated by the SLM. In order to assess the gravity of unknown aberrations, synthetic data is generated with random aberrations of a given root mean square (rms) value *W*_{rms} and fitted with the vector model PSF without these random aberrations. Figure 5(efgh) shows the fit error and CRLB as a function of the rms value *W*_{rms}. The data points are averaged over 100 random instances, with randomly different aberration coefficients for the 12 lowest order aberrations in each instance. It appears that the fit error for all parameters increases steeply with aberration level. Control of aberrations at the conventional level of the diffraction limit (*W*_{rms} = 72*mλ*) is not good enough; about half the diffraction limit seems a more sensible tolerance limit. This implies that the vectorial fitting method is relatively sensitive to model PSF errors such as aberrations.

## 4. Experiment

We have done proof-of-principle experiments on different quantum dots. The setup uses an Olympus set of lenses consisting of a 150X/NA1.4 UPLSAPO objective lens and a standard LU095500 tube lens (180 mm focal length). An intermediate relay path is built with two Olympus LU095500 180 mm focal length lenses and a Meadowlark SLM (model XY-SLM, reflective Liquid Crystal on Silicon design, 256×256 pixels, pixel size 24 *μ*m). A wire grid polarizer (WP25M-VIS, Thorlabs) was used to filter out the polarization component that is not modulated by the SLM. The SLM was configured to produce the diffractive astigmatic blazed grating PSF design. The raw images were captured on an Andor iXon Ultra 897 EM-CCD camera (512×512 pixels, pixel size 16 *μ*m). Excitation was done with a 405 nm diode laser, directed towards the objective via a 458 nm long pass dichroic. The emitted light was captured and filtered with a notch filter (405 StopLine, Semrock) and a 450–650 nm bandpass emission filter (FF01-550/200, Semrock). The setup was used to image Quantum Dots (QDs) with specified emission peaks at 525 nm and 585 nm (ThermoFisher Q21341MP and Q21311MP), prepared on separate glass slides and immersed in glycerol (refractive index 1.47). Through-focus image stacks were taken by moving the objective lens with a stage (nanoXYZ, Physik Instrumente).

Visual inspection of the measured spots revealed a large asymmetry between the spot shape above and below the nominal focal plane. Such an asymmetry is usually indicative of spherical aberration. It appears that the observed spot shapes can be reasonably accounted for by assuming that the setup suffers from about 0.06*λ* rms spherical aberration. The parameters characterizing the SLM pattern were adjusted as well to improve the correspondence between observed spot shapes and the vector PSF model. We found that a tip equal to 2.2*λ* and astigmatism equal to 0.49*λ* for a center wavelength of 550 nm, and a blaze step height of about 145 nm agree best with the observed spot shapes. It turned out that the spots appeared rotated by about 14 deg with respect to the *xy* axes of the camera frame due to a rotational misalignment of the SLM with respect to the camera. This rotation of the SLM phase pattern was incorporated into the vector PSF model. With these parameter settings we have applied our vectorial PSF fitting model to the data; the results are shown in Fig. 6. The fits were done over 25×15 pixel ROIs. The precision was determined from fits to 30 repeated acquisitions of the same QD. The mean and standard deviation of each of the fit parameters (*xyzλ*) was found from a weighted average over the 30 fit values with weight proportional to the fitted signal photon count. The same procedure was followed for the mean and standard deviation of the CRLB. Fit error and CRLB data was obtained in this way for 5 different QDs, the displayed data points are the weighted averages over the fit error and CRLB for the 5 QDs with weight inversely proportional to the square of the standard deviation. The mean background over the 30 repeated acquisitions was typically 17 photons per pixel, the mean signal photon count per acquisition was typically around 800 photons. The experimental precision (Fig. 6(abcd)) does not quite reach the CRLB.
The resulting precision in *x* and *y* ranges from 10 to 30 nm, the precision in *z* from 25 to 50 nm, and the precision in *λ* from 10 to 20 nm. There is an asymmetry in precision between positive and negative *z* values that we attribute to the apparent spherical aberration. In addition, there is a substantial bias in the fitted wavelength (up to 15 nm) and in the axial position that depends on the axial position of the stage (Fig. 6(ef)). The resulting spread in fitted wavelengths amounts to about 20 nm (Fig. 6(g)).

The experimental PSF for all axial stage positions is determined by first upsampling the raw images (with a factor 5), then displacing the images to the center of the ROI using the fitted lateral (*xy*) position values, and then summing all individual images (30 repeated acquisitions of 5 QDs).
Visualization 1 (first frame shown in Fig. 7) shows the through-focus experimental PSF and the model PSF at the fitted axial position. The asymmetry in spot shape below and above focus due to spherical aberration is clearly visible. There is a reasonable qualitative match between the data and the model, but the quantitative match is not very good. The differences are bigger than the blurring arising from the shift of the measured spots to the center of the ROI with the fitted *xy* positions, which are subject to errors on the order of 25 nm. The reasons for the mismatch are not clear but could be found in unknown aberrations, optical alignment errors, and errors in the calibration of the SLM. An additional factor may be the spectral diffusion of QDs, which can exhibit jumps in emission wavelength of around 10 nm after blinking events [52].

We have also done a proof-of-principle experiment in which we have imaged a mixture of the two types of QDs. Visualization 2 (one of the frames is shown in Fig. 8) shows the measured through-focus images and ROIs for 6 QDs that were visible in most focal slices. The ROIs have been assigned a pseudo-color in Visualization 2 derived from the fit of the measured spots with our vectorial PSF model. First, a spectrally weighted average of the RGB-values corresponding to monochromatic light is computed, where the weight is taken to be a Gaussian function with peak at the fitted wavelength and width equal to the CRLB of the fitted wavelength. Subsequently, this spectrally weighted RGB is used as colormap in the MATLAB plot, thus giving a visual impression of the measured color of the QDs. It appears that three of the QDs have a red-amber color and may be identified as belonging to the species with emission peak at 585 nm, and three of the QDs have a green-yellow color and may be identified as belonging to the species with emission peak at 525 nm. The effect of signal-to-noise ratio is also apparent from Visualization 2. Close to the nominal focus the spots are clearly visible and the fitting procedure works reasonably well. Away from the nominal focus (the smallest and largest axial coordinate values) the signal-to-noise ratio is so low that the spots can hardly, if at all, be seen in the recorded image and the fitting becomes unreliable.

## 5. Conclusion

In summary, we have proposed a new method for the simultaneous measurement of the 3D-position and the emission wavelength of single emitters. In particular, we have investigated a diffractive optics based PSF design in which the spot is split into closely spaced diffraction orders, and we have evaluated, both numerically and experimentally, a fitting method based on a vector PSF model. A lateral localization precision between 10 and 30 nm, an axial localization precision between 25 and 50 nm, and a spectral precision between 10 and 20 nm was achieved. The main limiting factor for these figures of merit is the accuracy of the PSF model. A full characterization and possibly compensation of aberrations [53] may be expected to bring down the level of precision in *xyzλ* to the numbers predicted by simulations.

An avenue which we have not explored in full is the study of fully optimized SLM profiles, which could possibly lead to new solutions with further improved performance. In particular, the addition of higher order astigmatism (non-zero *A*_{4−2}, *A*_{6−2}) may significantly improve precision. An open issue here is how to handle or incorporate aberration profiles that have one or more singularities, such as the pattern needed for double-helix type PSFs. In view of the singularities an expansion in Zernike modes does not seem suitable, and a different way to parametrize a broad class of patterns with a small number of parameters would be needed.

The main drawback of the proposed technique is the increased footprint of the PSF. In standard 2D imaging with diffraction-limited spots the footprint is smallest, giving the highest possible signal-to-background ratio and therefore the best possible (lateral) localization precision, which depends significantly on this ratio [42]. The price of an increased spot footprint, however, is already paid upon making the transition to 3D imaging. The additional increase in the PSF support size of the newly proposed PSFs is on the order of several tens of percent and can possibly be reduced by tweaking the design parameters. An additional drawback relates to possible applications in which the single molecules are imaged and randomly activated simultaneously. In order to prevent too large a fraction of overlapping spots per recorded frame, the requirements on the ratio of the off time *τ*_{off} and the on time *τ*_{on} and/or on the maximum allowable labeling density *ρ* are more stringent. These should satisfy *ρA*_{PSF}*τ*_{on}/*τ*_{off} ≪ 1, with *A*_{PSF} the PSF footprint area, so the upper limit for *ρ* is smaller by the same percentage as the PSF footprint is larger.
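The bookkeeping behind the labeling-density criterion is elementary; the numbers below are illustrative assumptions, not values from the paper:

```python
# Back-of-envelope sketch of the criterion rho * A_PSF * tau_on / tau_off << 1.
def max_density(A_psf_um2, tau_on, tau_off, margin=0.01):
    """Largest label density [um^-2] keeping rho*A_PSF*tau_on/tau_off below margin."""
    return margin * tau_off / (tau_on * A_psf_um2)

rho_std = max_density(A_psf_um2=1.0, tau_on=0.01, tau_off=1.0)
rho_new = max_density(A_psf_um2=1.3, tau_on=0.01, tau_off=1.0)  # 30 % larger footprint
print(rho_new / rho_std)  # the density budget shrinks by exactly the footprint ratio
```

A 30 % larger footprint thus lowers the allowable density by the same factor, as stated above.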

In our experiment we have made use of a liquid crystal based SLM, which is polarization sensitive, as the SLM aberration profile is added to only one of the two orthogonal linear polarization states. For the sake of simplicity we have filtered out the unaffected polarization state, but this is not desirable in photon-starved applications such as single molecule imaging. An improvement in photon efficiency over the currently used architecture may be to simply leave out the polarization filter. Then the unmodified polarization state is added to the 0th-order diffraction spot, just as in [14]. Another alternative may be the use of a polarization splitter architecture as described, e.g., in [29]. Such an architecture would also open up opportunities for imaging of fixed or slowly diffusing dipole emitters [51, 54].

A follow-up experiment would be a more refined treatment of differentiating fluorescent species that are imaged simultaneously on a single camera. An alternative to direct fitting of the emission wavelength would be a Generalized Likelihood Ratio Testing (GLRT) approach, similar to the one recently proposed by us for detecting dim single emitter events at relatively high background [55]. In a GLRT approach each spot is fitted *N* times, assuming in turn the known emission wavelength of each of the *N* different species. The fit with the highest likelihood would then lead to the identification of the fluorescent species at hand.
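
The GLRT classification step can be sketched as follows. This is a toy sketch, not the fitting code of this work: a Gaussian spot stands in for the vector PSF (with the width `sigma` encoding the known wavelength of each species), the maximum-likelihood fit over position is a coarse grid search, and the signal and background levels are taken from the experimental averages quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_psf(x0, y0, sigma, n=11):
    """Toy Gaussian stand-in for the vector PSF, normalized to unit sum."""
    yy, xx = np.mgrid[0:n, 0:n]
    g = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def log_likelihood(img, x0, y0, sigma, signal=800.0, bg=17.0):
    """Poisson log-likelihood of the spot image, up to a model-independent constant."""
    mu = signal * gauss_psf(x0, y0, sigma) + bg
    return np.sum(img * np.log(mu) - mu)

def classify(img, sigmas):
    """Fit the spot once per candidate species (grid search over position only)
    and return the index of the species with the highest maximized likelihood."""
    grid = np.linspace(3.0, 7.0, 41)
    best = [max(log_likelihood(img, x, y, s) for x in grid for y in grid)
            for s in sigmas]
    return int(np.argmax(best))

# Simulate a spot from the second of N = 3 species (hypothetical widths) and classify it.
sigmas = [1.0, 1.5, 2.0]
img = rng.poisson(800 * gauss_psf(5.0, 5.0, sigmas[1]) + 17)
print(classify(img, sigmas))
```

In practice the per-species fits would also include the axial position and intensity parameters, and the likelihood differences between hypotheses set the achievable cross-talk between species.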

## Acknowledgments

This work was supported by a National Institutes of Health (NIH) grant 1U01EB021238-01 to D.G.

## References and links

**1. **E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science **313**, 1643–1645 (2006). [CrossRef]

**2. **M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Meth. **3**, 793–795 (2006). [CrossRef]

**3. **S. T. Hess, T. P. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. **91**, 4258–4272 (2006). [CrossRef] [PubMed]

**4. **S. W. Hell, “Far-field optical nanoscopy,” Science **316**, 1153–1158 (2007). [CrossRef] [PubMed]

**5. **R. P. J. Nieuwenhuizen, K. A. Lidke, M. Bates, D. Leyton Puig, D. Grünwald, S. Stallinga, and B. Rieger, “Measuring image resolution in optical nanoscopy,” Nat. Meth. **10**, 557–562 (2013). [CrossRef]

**6. **H. Shroff, C. G. Galbraith, J. A. Galbraith, H. White, J. Gillette, S. Olenych, M. W. Davidson, and E. Betzig, “Dual-color superresolution imaging of genetically expressed probes within individual adhesion complexes,” Proc. Natl. Acad. Sci. U.S.A. **104**, 20308–20313 (2007). [CrossRef] [PubMed]

**7. **M. Bossi, J. Folling, V. N. Belov, V. P. Boyarskiy, R. Medda, A. Egner, C. Eggeling, A. Schonle, and S. W. Hell, “Multicolor far-field fluorescence nanoscopy through isolated detection of distinct molecular species,” Nano Lett. **8**, 2463–2468 (2008). [CrossRef] [PubMed]

**8. **I. Testa, C. A. Wurm, R. Medda, E. Rothermel, C. von Middendorf, J. Fölling, S. Jakobs, A. Schönle, S. W. Hell, and C. Eggeling, “Multicolor fluorescence nanoscopy in fixed and living cells by exciting conventional fluorophores with a single wavelength,” Biophys. J. **99**, 2686–2694 (2010). [CrossRef] [PubMed]

**9. **D. Grünwald and R. H. Singer, “In vivo imaging of labelled endogenous β-actin mRNA during nucleocytoplasmic transport,” Nature **467**, 604–607 (2010). [CrossRef]

**10. **P. J. Cutler, M. D. Malik, S. Liu, J. M. Byars, D. S. Lidke, and K. A. Lidke, “Multi-color quantum dot tracking using a high-speed hyperspectral line-scanning microscope,” PLoS One **8**, e64320 (2013). [CrossRef] [PubMed]

**11. **M. Bates, B. Huang, G. T. Dempsey, and X. Zhuang, “Multicolor super-resolution imaging with photo-switchable fluorescent probes,” Science **317**, 1749–1753 (2007). [CrossRef] [PubMed]

**12. **J. Tam, G. A. Cordier, J. S. Borbely, Á. S. Álvarez, and M. Lakadamyali, “Cross-talk-free multi-color STORM imaging using a single fluorophore,” PLoS One **9**, e101772 (2014). [CrossRef] [PubMed]

**13. **C. C. Valley, S. Liu, D. S. Lidke, and K. A. Lidke, “Sequential superresolution imaging of multiple targets using a single fluorophore,” PLoS One **10**, e0123941 (2015). [CrossRef] [PubMed]

**14. **J. Broeken, B. Rieger, and S. Stallinga, “Simultaneous measurement of position and color of single fluorescent emitters using diffractive optics,” Opt. Lett. **39**, 3352–3355 (2014). [CrossRef] [PubMed]

**15. **Z. Zhang, S. J. Kenny, M. Hauser, W. Li, and K. Xu, “Ultrahigh-throughput single-molecule spectroscopy and spectrally resolved super-resolution microscopy,” Nat. Meth. **12**, 935–938 (2015). [CrossRef]

**16. **A. S. Backer and W. E. Moerner, “Extending single-molecule microscopy using optical Fourier processing,” J. Phys. Chem. B **118**, 8313–8329 (2014). [CrossRef] [PubMed]

**17. **H. P. Kao and A. S. Verkman, “Tracking of single fluorescent particles in three dimensions: use of cylindrical optics to encode particle position,” Biophys. J. **67**, 1291–1300 (1994). [CrossRef] [PubMed]

**18. **L. Holtzer, T. Meckel, and T. Schmidt, “Nanometric three-dimensional tracking of individual quantum dots in cells,” Appl. Phys. Lett. **90**, 053902 (2007). [CrossRef]

**19. **B. Huang, W. Wang, M. Bates, and X. Zhuang, “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science **319**, 810–813 (2008). [CrossRef] [PubMed]

**20. **E. Toprak, H. Balci, B. H. Blehm, and P. R. Selvin, “Three-dimensional particle tracking via bifocal imaging,” Nano Lett. **7**, 2043–2045 (2007). [CrossRef] [PubMed]

**21. **S. Ram, P. Prabhat, J. Chao, E. S. Ward, and R. J. Ober, “High accuracy 3D quantum dot tracking with multifocal plane microscopy for the study of fast intracellular dynamics in live cells,” Biophys. J. **95**, 6025–6043 (2008). [CrossRef] [PubMed]

**22. **M. F. Juette, T. J. Gould, M. D. Lessard, M. J. Mlodzianoski, B. S. Nagpure, B. T. Bennett, S. T. Hess, and J. Bewersdorf, “Three-dimensional sub-100 nm resolution fluorescence microscopy of thick samples,” Nat. Meth. **5**, 527–530 (2008). [CrossRef]

**23. **M. J. Mlodzianoski, M. F. Juette, G. L. Beane, and J. Bewersdorf, “Experimental characterization of 3D localization techniques for particle-tracking and super-resolution microscopy,” Opt. Express **17**, 8264–8277 (2009). [CrossRef] [PubMed]

**24. **S. Abrahamsson, J. Chen, B. Hajj, S. Stallinga, A. Y. Katsov, J. Wisniewski, G. Mizuguchi, P. Soule, F. Mueller, C. D. Darzacq, X. Darzacq, C. Wu, C. I. Bargmann, D. A. Agard, M. Dahan, and M. G. L. Gustafsson, “Fast multicolor 3D imaging using aberration-corrected multifocus microscopy,” Nat. Meth. **10**, 60–63 (2013). [CrossRef]

**25. **S. R. P. Pavani and R. Piestun, “Three dimensional tracking of fluorescent microparticles using a photon-limited double-helix response system,” Opt. Express **16**, 22048–22057 (2008). [CrossRef] [PubMed]

**26. **S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. U.S.A. **106**, 2995–2999 (2009). [CrossRef] [PubMed]

**27. **G. Grover, K. DeLuca, S. Quirin, J. DeLuca, and R. Piestun, “Super-resolution photon-efficient imaging by nanometric double-helix point spread function localization of emitters (SPINDLE),” Opt. Express **20**, 26681–26695 (2012). [CrossRef] [PubMed]

**28. **S. Prasad, “Rotating point spread function via pupil-phase engineering,” Opt. Lett. **38**, 585–587 (2013). [CrossRef] [PubMed]

**29. **C. Roider, A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Axial super-localisation using rotating point spread functions shaped by polarisation-dependent phase modulation,” Opt. Express **22**, 4029–4037 (2014). [CrossRef] [PubMed]

**30. **D. Baddeley, M. B. Cannell, and C. Soeller, “Three-dimensional sub-100 nm super-resolution imaging of biological samples using a phase ramp in the objective pupil,” Nano Res. **4**, 589–598 (2011). [CrossRef]

**31. **S. Jia, J. C. Vaughan, and X. Zhuang, “Isotropic 3D super-resolution imaging with a self-bending point spread function,” Nat. Photonics **8**, 302–306 (2014). [CrossRef] [PubMed]

**32. **G. Shtengel, J. A. Galbraith, C. G. Galbraith, J. Lippincott-Schwartz, J. M. Gillette, S. Manley, R. Sougrat, C. M. Waterman, P. Kanchanawong, M. W. Davidson, R. D. Fetter, and H. F. Hess, “Interferometric fluorescent super-resolution microscopy resolves 3D cellular ultrastructure,” Proc. Natl. Acad. Sci. U.S.A. **106**, 3125–3130 (2009). [CrossRef] [PubMed]

**33. **Y. Shechtman, S. J. Sahl, A. S. Backer, and W. E. Moerner, “Optimal point spread function design for 3D imaging,” Phys. Rev. Lett. **113**, 133902 (2014). [CrossRef] [PubMed]

**34. **Y. Shechtman, L. E. Weiss, A. S. Backer, S. J. Sahl, and W. E. Moerner, “Precise 3D scan-free multiple-particle tracking over large axial ranges with tetrapod point spread functions,” Nano Lett. **15**, 4194–4199 (2015). [CrossRef] [PubMed]

**35. **S. Stallinga, “Diffractive optical element for localization microscopy,” Patent Application NL2012187, Filed 3 February 2014.

**36. **M. Born and E. Wolf, *Principles of Optics*, 6th ed. (Cambridge University, 1997).

**37. **X. Deng, B. Bihari, J. Gan, F. Zhao, and R. T. Chen, “Fast algorithm for chirp transforms with zooming-in ability and its applications,” J. Opt. Soc. Am. A **17**, 762–771 (2000). [CrossRef]

**38. **J. L. Bakx, “Efficient computation of optical disk readout by use of the chirp z transform,” Appl. Opt. **41**, 4879–4903 (2002). [CrossRef]

**39. **M. Leutenegger, R. Rao, R. A. Leitgeb, and T. Lasser, “Fast focus field calculations,” Opt. Express **14**, 11277–11291 (2006). [CrossRef] [PubMed]

**40. **R. J. Ober, S. Ram, and E. S. Ward, “Localization accuracy in single-molecule microscopy,” Biophys. J. **86**, 1185–1200 (2004). [CrossRef] [PubMed]

**41. **C. S. Smith, N. Joseph, B. Rieger, and K. A. Lidke, “Fast, single-molecule localization that achieves theoretically minimum uncertainty,” Nat. Meth. **7**, 373–375 (2010). [CrossRef]

**42. **B. Rieger and S. Stallinga, “The lateral and axial localization uncertainty in super-resolution light microscopy,” Chem. Phys. Chem. **15**, 664–670 (2014).

**43. **F. Huang, T. M. P. Hartwich, F. E. Rivera-Molina, Y. Lin, W. C. Duim, J. J. Long, P. D. Uchil, J. R. Myers, M. A. Baird, W. Mothes, M. W. Davidson, D. Toomre, and J. Bewersdorf, “Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms,” Nat. Meth. **10**, 653–658 (2013). [CrossRef]

**44. **J. Chao, E. S. Ward, and R. J. Ober, “Fisher information matrix for branching processes with application to electron-multiplying charge-coupled devices,” Multidimens. Syst. Signal Process. **23**, 349–379 (2012). [CrossRef] [PubMed]

**45. **S. Stallinga, “Single emitter localization analysis in the presence of background,” Proc. SPIE **9630**, 96300V (2015). [CrossRef]

**46. **K. I. Mortensen, L. Stirling Churchman, J. A. Spudich, and H. Flyvbjerg, “Optimized localization analysis for single-molecule tracking and super-resolution microscopy,” Nat. Meth. **7**, 377–381 (2010). [CrossRef]

**47. **T. Wilson, R. Juskaitis, and P. D. Higdon, “The imaging of dielectric point scatterers in conventional and confocal polarisation microscopes,” Opt. Commun. **141**, 298–313 (1997). [CrossRef]

**48. **P. Török, P. D. Higdon, and T. Wilson, “Theory for confocal and conventional microscopes imaging small dielectric scatterers,” J. Mod. Opt. **45**, 1681–1698 (1998). [CrossRef]

**49. **J. Enderlein, E. Toprak, and P. R. Selvin, “Polarization effect on position accuracy of fluorophore localization,” Opt. Express **14**, 8111–8120 (2006). [CrossRef] [PubMed]

**50. **S. Stallinga and B. Rieger, “Accuracy of the Gaussian point spread function model in 2D localization microscopy,” Opt. Express **18**, 24461–24476 (2010). [CrossRef] [PubMed]

**51. **S. Stallinga, “Effect of rotational diffusion in an orientational potential well on the point spread function of electric dipole emitters,” J. Opt. Soc. Am. A **32**, 213–223 (2015). [CrossRef]

**52. **R. G. Neuhauser, K. T. Shimizu, W. K. Woo, S. A. Empedocles, and M. G. Bawendi, “Correlation between fluorescence intermittency and spectral diffusion in single semiconductor quantum dots,” Phys. Rev. Lett. **85**, 3301 (2000). [CrossRef] [PubMed]

**53. **I. Izeddin, M. El Beheiry, J. Andilla, D. Ciepielewski, X. Darzacq, and M. Dahan, “PSF shaping using adaptive optics for three-dimensional single-molecule super-resolution imaging and tracking,” Opt. Express **20**, 4957–4967 (2012). [CrossRef] [PubMed]

**54. **M. P. Backlund, M. D. Lew, A. S. Backer, S. J. Sahl, and W. E. Moerner, “The role of molecular dipole orientation in single-molecule fluorescence microscopy and implications for super-resolution imaging,” Chem. Phys. Chem. **15**, 587–599 (2014). [PubMed]

**55. **C. S. Smith, S. Stallinga, K. A. Lidke, B. Rieger, and D. Grünwald, “Probability-based particle detection that enables threshold-free and robust in vivo single molecule tracking,” Mol. Biol. Cell. **26**(22), 4057–4062 (2015). [CrossRef] [PubMed]