
Effect of time discretization of the imaging process on the accuracy of trajectory estimation in fluorescence microscopy


Abstract

In fluorescence microscopy, high-speed imaging is often necessary for the proper visualization and analysis of fast subcellular dynamics. Here, we examine how the speed of image acquisition affects the accuracy with which parameters such as the starting position and speed of a microscopic non-stationary fluorescent object can be estimated from the resulting image sequence. Specifically, we use a Fisher information-based performance bound to investigate the detector-dependent effect of frame rate on the accuracy of parameter estimation. We demonstrate that when a charge-coupled device detector is used, the estimation accuracy deteriorates as the frame rate increases beyond a point where the detector’s readout noise begins to overwhelm the low number of photons detected in each frame. In contrast, we show that when an electron-multiplying charge-coupled device (EMCCD) detector is used, the estimation accuracy improves with increasing frame rate. In fact, at high frame rates where the low number of photons detected in each frame renders the fluorescent object difficult to detect visually, imaging with an EMCCD detector represents a natural implementation of the Ultrahigh Accuracy Imaging Modality, and enables estimation with an accuracy approaching that which is attainable only when a hypothetical noiseless detector is used.

© 2014 Optical Society of America

1. Introduction

In the study of subcellular dynamics with fluorescence microscopy, an event of interest is temporally sampled and recorded as a sequence of time-discretized images. Fast sampling is needed to achieve a high temporal resolution, and in the case of fast dynamics, it is often necessary in order that the acquired data can be properly visualized and analyzed. Relatively high frame rates, for example, have been used to observe and quantitatively analyze the fast movement of single molecules through nuclear pore complexes [1, 2], the fast transport of vesicles along microtubules [3], and the rapid transition of chromatin between different localized motion regimes [4]. Fast imaging has also been employed for the visualization and tracking of endosome movement in an investigation of intracellular cargo trafficking [5], and for the observation and quantitative analysis of virus transport [6].

Though the given examples differ widely in terms of the biological questions they address, a unifying theme, besides the necessity of fast imaging, is that the quantitative analysis of the data relies on the accurate extraction of information about the moving objects of interest from the acquired images. To explore the relationship between fast imaging and the accuracy of the subsequent information extraction, we consider in this paper the recording of the movement of a non-stationary fluorescent object by a detector, and investigate the effect of time discretization of the imaging process on the accuracy with which a quantity pertaining to the object can be estimated from the resulting image sequence. A fluorescent object can be any subcellular entity, such as an organelle, a vesicle, a quantum dot, a single molecule, or a cluster of quantum dots or molecules. An estimated quantity can in principle be any parameter that is of interest. It can, for example, be the starting position of the object, the direction in which the object traveled, or the speed at which the object traveled.

Intuitively, one would expect that by increasing the acquisition frame rate to produce an image sequence of higher temporal resolution, the accuracy for estimating a parameter can be improved. We demonstrate this to be the case provided that the image acquisition is carried out with a hypothetical detector that does not contribute noise to the images that it produces. In practice, however, a camera such as a charge-coupled device (CCD) detector, which is commonly used in fluorescence microscopy, adds noise to the image data. With such a detector, the accuracy improvement conferred by increasing the temporal resolution can only be expected up to a certain frame rate before detector noise begins to worsen the accuracy. Indeed, we show that given a constant level of readout noise that is added by a CCD detector to each acquired image, the accuracy for estimating a parameter begins, at some point, to deteriorate with increasing frame rate. The deterioration in accuracy is due to a lowering of the ratio of signal to readout noise for each image in the acquired sequence, which results from the fact that whereas the amount of signal allocated to each image (i.e., the number of photons detected from the object in each image) decreases with increasing frame rate, the readout noise level per image remains unchanged. (Note that a scientific complementary metal-oxide-semiconductor (sCMOS) detector also has additive readout noise as its major noise source, and that a deterioration in parameter estimation accuracy can therefore also be expected when the frame rate is increased to a point where the readout noise significantly corrupts the photon signal.)

The applicability of a CCD detector to high-speed imaging is thus limited to relatively low acquisition frame rates that yield images where the readout noise is not significant in comparison to the photon signal. This is not unexpected, since at higher frame rates the acquired data becomes essentially a sequence of low-light images. CCD detectors are well known to be unsuitable for imaging under effectively low-light circumstances, where their readout noise corrupts the data in a substantial way. We show, however, that by using instead an electron-multiplying charge-coupled device (EMCCD) detector at a high electron multiplication gain, a parameter can be estimated with very high accuracy from an effectively low-light image sequence acquired at a high frame rate.

The EMCCD detector, an image sensor that is also commonly employed in fluorescence microscopy, is the standard low-light alternative to the CCD detector. In [7], the EMCCD detector has been demonstrated to enable high-accuracy parameter estimation when it is used in an unconventional setting where very few photons are, on average, detected in each of its pixels. In fact, the principle of the low-light imaging method described in [7], called the Ultrahigh Accuracy Imaging Modality (UAIM), is such that one can improve the accuracy for estimating a parameter by purposely reducing the photon count in each EMCCD pixel to well below an average of one. This might seem counterintuitive, since an image with such low pixel photon counts often makes it very difficult to detect the fluorescent object visually, and presumably even more difficult to estimate a parameter pertaining to the object. Contrary to intuition, however, a UAIM image actually allows parameter estimation with a high accuracy approaching that which is only attainable when imaging is performed with a hypothetical noiseless detector. Since increased time discretization represents a straightforward way to significantly reduce the photon count in each pixel, EMCCD imaging at a high enough frame rate is in and of itself an implementation of UAIM, and we show in this paper that it indeed enables the high-accuracy estimation of parameters pertaining to a moving object.

While the main focus of this paper is to investigate how the temporal resolution of an acquired image sequence affects the accuracy of parameter estimation, we also consider how the spatial resolution of the image sequence affects how accurately a parameter can be estimated. In particular, we explain and demonstrate how acquiring a sequence of more finely pixelated images can further improve the already high accuracy that is achieved with UAIM.

To demonstrate the points made above on the detector-dependent effect of time discretization on the accuracy for estimating a parameter from an image sequence, we make use of a performance bound, which we refer to as a limit of accuracy, that specifies the best possible estimation accuracy that can be expected for a given set of conditions. More precisely, a limit of accuracy is defined as the square root of the Cramér-Rao lower bound [8], obtained by computing the Fisher information matrix corresponding to the particular detector type used, the particular image acquisition setting, and the specific mathematical descriptions of the object of interest and its movement. As such, a limit of accuracy specifies the best possible standard deviation that can be expected for the estimation of a parameter under the given conditions. To investigate the effect of time discretization, we compute limits of accuracy corresponding to different detector models and different values of the acquisition frame rate, and examine the results as functions of the acquisition frame rate.

In addition to imaging with a hypothetical noiseless detector, a CCD detector, and an EMCCD detector, we derive and compute the performance bound corresponding to imaging with an ideal detector. Unlike the other detector types which deteriorate the image data with pixelation and/or extraneous noise, an ideal detector is one that does not introduce noise to the images that it captures, and has an infinite non-pixelated detection area that allows the position of each detected photon to be recorded with arbitrarily high precision. We refer to the performance bound corresponding to imaging with such a detector as a fundamental limit of accuracy. It is important because it puts the accuracy limits for the practical (i.e., CCD and EMCCD) detector models and the hypothetical noiseless detector model into perspective by serving as the ultimate accuracy benchmark for comparison.

Note that in the theory on which the limit of accuracy is based, it is assumed that the shape of the trajectory of the moving object is known, and that therefore the trajectory can be described deterministically, in terms of some or all of the parameters of interest. In this paper, we present theoretical results for a general deterministic trajectory, and use for our illustrations the specific realizations of a linear trajectory and a circular arc trajectory, which are given by mathematical expressions parameterized by quantities of interest such as the starting position and the speed of the moving object.

We also note that while the current work was motivated by applications in cellular microscopy, the underlying approach and results are also applicable to time-discretized imaging in other disciplines such as astronomy and computer vision.

The organization of the remainder of this paper is as follows. In Section 2, we present the theoretical background for the limits of accuracy that are used for our analyses, and provide the Fisher information matrix expressions from which limits of accuracy corresponding to different detector models can be obtained. In Section 3, we investigate the detector-dependent effect of time discretization of the imaging process on the accuracy of parameter estimation by examining limits of accuracy as functions of the acquisition frame rate. In addition, we demonstrate the effect that the spatial resolution of the acquired images has on the accuracy of parameter estimation. To further illustrate the use of limits of accuracy as an analytical tool, we also include in this section an investigation of how the levels of various noise sources might impact the selection of a detector for image acquisition. Conclusions are presented in Section 4.

2. Fisher information and limits of accuracy

We describe in this section the calculation of the limits of accuracy which are used subsequently in Section 3 to investigate the detector-dependent effect of time discretization on the accuracy of parameter estimation. Since much of the underlying mathematics has been previously described in detail [9, 10], we restrict the treatment here to a concise description of the theoretical background in Section 2.1, an explanation of how time discretization is accounted for in Section 2.2, and a brief presentation in Sections 2.3 and 2.4 of the main mathematical expressions that are utilized for the analyses in this paper.

2.1. Theoretical background

The limit of the accuracy for estimating a parameter of interest from image data is defined to be the best possible standard deviation with which the parameter can be estimated using any unbiased estimator. To calculate a limit of accuracy, we first compute and take the inverse of the Fisher information matrix corresponding to the particular image data. For a vector θ of n parameters which we wish to estimate, the Fisher information matrix I(θ) is an n × n matrix where the rows and columns correspond to the n parameters in the order they are arranged in θ. The inverse matrix I^{-1}(θ) is thus also n × n, and its jth main diagonal element is a lower bound on the variance with which the jth parameter in θ can be estimated using any unbiased estimator. This lower bound is known as the Cramér-Rao lower bound, and by taking its square root, we obtain the limit of accuracy for estimating the jth parameter. Note that throughout this paper, we let θ ∈ Θ, where Θ denotes the parameter space that is an open subset of ℝ^n.
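To make this final step concrete, the following minimal sketch (not part of the original analysis; the matrix values are placeholders) inverts a given Fisher information matrix and takes the square roots of its diagonal elements to obtain one limit of accuracy per parameter.

```python
import numpy as np

def limits_of_accuracy(fisher_matrix):
    """Square roots of the main diagonal of the inverse Fisher information
    matrix, i.e., the Cramer-Rao lower bound based limits of accuracy for
    the parameters in theta."""
    crlb = np.linalg.inv(fisher_matrix)   # n x n; diagonal holds the variance bounds
    return np.sqrt(np.diag(crlb))

# Placeholder 2 x 2 Fisher information matrix for two hypothetical parameters.
I_theta = np.array([[4.0e4, 1.0e3],
                    [1.0e3, 2.5e3]])
print(limits_of_accuracy(I_theta))        # approximately [0.0050, 0.0201]
```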

The key to arriving at a limit of accuracy is therefore the calculation of the Fisher information matrix, which depends on the modeling of the image data and the specific estimation problem at hand. For an image generated by a microscope and captured by a detector, a general mathematical framework for calculating the Fisher information matrix has been rigorously described in [9]. Based on this framework, expressions for Fisher information matrices and limits of accuracy pertaining to parameter estimation in the context of the imaging of a moving object have been derived in [10]. As we explain in the next section, the limits of accuracy which we compute and analyze in this paper are based on time-discretized versions of the Fisher information expressions from [10].

2.2. Time discretization

In [10], Fisher information expressions, both general and specific, are presented which correspond to a single image that is acquired of a moving object during an arbitrary time interval. The limits of accuracy computed based on these Fisher information matrices therefore apply to the estimation of parameters from a single image that captures an object’s trajectory. In this paper, we consider the scenario where an object’s trajectory is captured instead by a sequence of multiple images, and where parameters are subsequently estimated from the image sequence.

The terminology and notation which we use to describe the time discretization of the imaging process are illustrated in Fig. 1(a). A total acquisition time interval [t0, tNf], over which an object’s trajectory is observed, is divided into Nf frame intervals [ti−1, ti], i = 1, 2,..., Nf. One image is acquired in each frame interval, though in the most general case the actual exposure time need not span the entire frame interval. More precisely, the Nf images in a sequence are acquired over the exposure intervals [ti−1, ei], ei ≤ ti, i = 1, 2,..., Nf. Note that the Nf frame intervals need not be of the same duration. Similarly, the Nf exposure intervals can in general have different durations.
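As a simple illustration of this notation, the sketch below (an illustrative helper, not prescribed by the formulation above) constructs equal-duration frame intervals with exposure intervals that each cover a fixed fraction of their frame interval; as just noted, the general formulation requires neither restriction.

```python
import numpy as np

def frame_and_exposure_times(t0, T_tat, N_f, exposure_fraction=1.0):
    """Frame boundaries t_0, ..., t_{N_f} spanning [t0, t0 + T_tat], and
    exposure end points e_i with e_i <= t_i. exposure_fraction = 1 means
    the exposure spans the entire frame interval (no time gaps)."""
    t = np.linspace(t0, t0 + T_tat, N_f + 1)
    frame_duration = T_tat / N_f
    e = t[:-1] + exposure_fraction * frame_duration
    return t, e

t, e = frame_and_exposure_times(t0=0.0, T_tat=0.4, N_f=4, exposure_fraction=0.8)
print(t)   # frame boundaries: 0.0, 0.1, 0.2, 0.3, 0.4
print(e)   # exposure end points: 0.08, 0.18, 0.28, 0.38
```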


Fig. 1 (a) Time discretization notation and terminology. An acquisition of Nf frames, spanning a total acquisition time Ttat over the time interval [t0, tNf], consists of the frame intervals [ti−1, ti], i = 1, 2,..., Nf. During each frame interval, the camera exposure begins at the start of the frame interval and stops at or before the end of the frame interval. More precisely, the exposure intervals are given by [ti−1, ei], ei ≤ ti, i = 1, 2,..., Nf. (b) Schematic sketch of a linear trajectory. The trajectory is depicted as a line segment, with an arrowhead indicating the direction of movement. It is described by four parameters: the coordinates (x0, y0) of the starting position, the angle ϕ specifying the direction of movement with respect to the x-axis, and the speed v at which the object travels.


To arrive at limits of accuracy for the estimation of parameters from a given sequence of Nf images, a Fisher information matrix I(θ) that corresponds to the entire sequence needs to be calculated. Given that the trajectory of the moving object is described deterministically as noted in Section 1, it does not in any way contribute to the stochasticity of the imaging process. The stochastic differences between the images in a sequence are therefore solely accounted for by the intrinsic stochasticity of the photon detection process and the detector’s noise processes. By the standard assumption that such processes in one exposure interval are independent of those in all the other exposure intervals, the Fisher information matrix I(θ) for an entire image sequence is just the sum of the Fisher information matrices for the individual images [11], i.e., I(θ) = Σ_{i=1}^{Nf} I_i(θ), where Ii(θ) is the Fisher information matrix for the image acquired during the exposure interval [ti−1, ei]. For each image i, Ii(θ) is then given, unless otherwise noted, by an appropriate expression from [10], which again readily provides Fisher information matrix expressions that correspond to a single image acquired during an arbitrary time interval, in this case [ti−1, ei]. Given I(θ), limits of accuracy are then easily computed as described in Section 2.1. In Sections 2.3 and 2.4, we present specific expressions for I(θ) that correspond to imaging using the various detector types considered in this paper. In Section 2.4 where the scenario of imaging with an ideal detector is presented, we also provide explicit expressions for the limits of accuracy, under certain assumptions, for a specific estimation problem involving an object moving in a linear trajectory.
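In computational terms, this additivity amounts to a single summation over the per-frame matrices, as in the minimal sketch below (placeholder matrices; per-frame matrices obtained, for example, from a per-frame evaluation of Eq. (1) in Section 2.3.1 could be combined in exactly this way).

```python
import numpy as np

def sequence_fisher_matrix(per_frame_matrices):
    """Fisher information matrix of an entire image sequence: the sum of the
    Fisher information matrices of the statistically independent frames."""
    return np.sum(per_frame_matrices, axis=0)

# Two placeholder 2 x 2 per-frame Fisher information matrices.
I_frames = np.array([[[2.0e4, 5.0e2], [5.0e2, 1.0e3]],
                     [[2.0e4, 5.0e2], [5.0e2, 1.5e3]]])
print(sequence_fisher_matrix(I_frames))
```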

2.3. Imaging with a pixelated detector

In this section, we provide the Fisher information expressions for the case where images of a moving object are acquired by a pixelated detector in a sequence of exposure intervals. This general category includes imaging with a practical CCD or EMCCD detector, but we begin with the benchmark scenario of imaging with a hypothetical detector that introduces no noise to the acquired images. The noiseless detector scenario is important in that any practical detector scenario can be compared against it to determine the extent to which detector noise deteriorates the obtainable parameter estimation accuracy.

2.3.1. Hypothetical noiseless detector

Photon emission by an object, and accordingly the detection of those photons by a detector, are typically assumed to follow a Poisson process. Therefore, for a hypothetical noiseless detector, each image in an acquired sequence is assumed to contain just a Poisson-distributed number of photons in each of its pixels, uncorrupted by detector noise. Given such a data model, the Fisher information matrix for a sequence of Nf images, comprising Np pixels each and acquired over the exposure intervals [ti−1, ei], ei ≤ ti, i = 1, 2,..., Nf, is given by

$$I(\theta)=\sum_{i=1}^{N_f}I_i(\theta)=\sum_{i=1}^{N_f}\sum_{k=1}^{N_p}\frac{1}{\upsilon_{\theta,k,i}}\left(\frac{\partial\mu_{\theta,k,i}}{\partial\theta}\right)^T\left(\frac{\partial\mu_{\theta,k,i}}{\partial\theta}\right),\quad\theta\in\Theta,\tag{1}$$
where for k = 1, 2,..., Np and i = 1, 2,..., Nf, the function υθ,k,i = μθ,k,i + βk,i is the photon signal level in the kth pixel of the ith image. The summand function μθ,k,i gives the mean of the Poisson-distributed number of photons in the kth pixel of the ith image that are detected from the object of interest, and it can be expressed generally as
$$\mu_{\theta,k,i}=\frac{1}{M^2}\int_{t_{i-1}}^{e_i}\int_{C_k}\Lambda(\tau)\,q_{z_\theta(\tau),o_\theta(\tau)}\!\left(\frac{x}{M}-x_\theta(\tau),\frac{y}{M}-y_\theta(\tau)\right)dx\,dy\,d\tau,$$
where (xθ(τ), yθ(τ), zθ(τ)) and oθ(τ), τ ≥ t0, represent respectively the 3-dimensional (3D) trajectory and the orientation of the object of interest, Λ(τ), τ ≥ t0, denotes the rate at which photons are detected from the object of interest, M > 0 denotes the lateral magnification of the microscope, Ck is the region occupied by the pixel, and qzθ(τ),oθ(τ) is the image function [9, 10], which describes the image of the object of interest on the detector plane, at unit lateral magnification, when the object is located along the optical (z-)axis. The summand function βk,i gives the mean of the Poisson-distributed number of photons from the background component (i.e., photons that do not originate from the object of interest) in the kth pixel of the ith image. It is given by
$$\beta_{k,i}=\frac{1}{M^2}\int_{t_{i-1}}^{e_i}\int_{C_k}\Lambda_b(\tau)\,b_\tau\!\left(\frac{x}{M},\frac{y}{M}\right)dx\,dy\,d\tau,$$
where Λb(τ) and bτ, τ ≥ t0, denote, respectively, the rate at which photons are detected from the background component, and the image function that describes the spatial distribution of those photons at unit lateral magnification.

Note that when the background component is assumed to be absent, we have βk,i = 0 photons for k = 1, 2,..., Np and i = 1, 2,..., Nf. Also, in our notation, subscripts to a function, besides indices referring to the image number or pixel number, are used to specify the parameters on which the function depends. The image function qzθ(τ),oθ(τ), for example, depends on the z-position and orientation of the object, and both object properties in turn depend on the vector θ of parameters to be estimated.
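As an illustration of how Eq. (1) can be evaluated in practice, the sketch below numerically computes the noiseless-detector limits of accuracy for an in-focus point source moving in a linear trajectory, using values matching those of the illustrations in Section 3. It is a rough numerical sketch rather than the implementation used for the results in this paper: the frame count, quadrature resolutions, finite-difference step sizes, and the convention that the pixel array is centered on the optical axis are all illustrative assumptions, and the background component is taken to be absent.

```python
import numpy as np

# Illustrative setting (values as used for the figures in Section 3).
sigma_gauss = 84.0       # nm, SD of the 2D Gaussian image function of Eq. (5)
Lambda0     = 2000.0     # photons/s, constant photon detection rate
M           = 100.0      # lateral magnification
pixel_size  = 16000.0    # nm (16 um) pixel side length on the detector
n_pix       = 8          # 8 x 8 pixel array, assumed centered on the optical axis
T_tat       = 0.4        # s, total acquisition time
N_f         = 10         # frames, i.e., 25 fps; exposures span entire frames
theta       = np.array([-250.0, -250.0, np.deg2rad(30.0), 1500.0])  # x0, y0, phi, v

def q(x, y):
    """2D Gaussian image function of Eq. (5) at unit lateral magnification."""
    return np.exp(-(x**2 + y**2) / (2.0 * sigma_gauss**2)) / (2.0 * np.pi * sigma_gauss**2)

def mu(th, row, col, t_start, t_end, n_sub=6, n_t=10):
    """mu_{theta,k,i}: mean photon count in pixel (row, col) over [t_start, t_end],
    via midpoint quadrature over the pixel area and the exposure interval.
    The background is assumed absent, so upsilon = mu."""
    x0, y0, phi, v = th
    xs = (col - n_pix / 2.0) * pixel_size + (np.arange(n_sub) + 0.5) * pixel_size / n_sub
    ys = (row - n_pix / 2.0) * pixel_size + (np.arange(n_sub) + 0.5) * pixel_size / n_sub
    ts = t_start + (np.arange(n_t) + 0.5) * (t_end - t_start) / n_t
    X, Y, T = np.meshgrid(xs, ys, ts, indexing="ij")
    xt = x0 + v * T * np.cos(phi)          # linear trajectory, with t0 = 0
    yt = y0 + v * T * np.sin(phi)
    vals = Lambda0 * q(X / M - xt, Y / M - yt) / M**2
    return vals.sum() * (pixel_size / n_sub)**2 * (t_end - t_start) / n_t

def fisher_matrix_noiseless(th, steps=(1.0, 1.0, 1e-4, 1.0)):
    """Eq. (1), with d(mu)/d(theta) obtained by central finite differences."""
    edges = np.linspace(0.0, T_tat, N_f + 1)
    I = np.zeros((4, 4))
    for i in range(N_f):
        for r in range(n_pix):
            for c in range(n_pix):
                m = mu(th, r, c, edges[i], edges[i + 1])
                if m < 1e-12:              # negligible pixels contribute ~0 information
                    continue
                grad = np.zeros(4)
                for j, dj in enumerate(steps):
                    tp, tm = th.copy(), th.copy()
                    tp[j] += dj
                    tm[j] -= dj
                    grad[j] = (mu(tp, r, c, edges[i], edges[i + 1]) -
                               mu(tm, r, c, edges[i], edges[i + 1])) / (2.0 * dj)
                I += np.outer(grad, grad) / m
    return I

limits = np.sqrt(np.diag(np.linalg.inv(fisher_matrix_noiseless(theta))))
print("limits of accuracy for (x0 [nm], y0 [nm], phi [rad], v [nm/s]):", limits)
```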

2.3.2. CCD detector

For a CCD detector (or an sCMOS detector), each image in an acquired sequence is assumed to contain, in each of its pixels, a Poisson-distributed photon signal that is corrupted by the detector’s additive readout noise. The readout noise is typically modeled as a Gaussian random variable, and in the most general case, its mean and variance can differ from one pixel to another in a given detector. Hence, for k = 1, 2,..., Np, we let the readout noise in the kth pixel be Gaussian-distributed with mean ηk and variance σk². For this data model, the Fisher information matrix for a sequence of Nf images, comprising Np pixels each and acquired over the exposure intervals [ti−1, ei], ei ≤ ti, i = 1, 2,..., Nf, is given by

$$I(\theta)=\sum_{i=1}^{N_f}I_i(\theta)=\sum_{i=1}^{N_f}\sum_{k=1}^{N_p}\left(\frac{\partial\mu_{\theta,k,i}}{\partial\theta}\right)^T\left(\frac{\partial\mu_{\theta,k,i}}{\partial\theta}\right)\times\left(\frac{e^{-2\upsilon_{\theta,k,i}}}{2\pi\sigma_k^2}\int_{\mathbb{R}}\frac{1}{p_{\theta,k,i}(z)}\left(\sum_{l=1}^{\infty}\frac{\upsilon_{\theta,k,i}^{\,l-1}}{(l-1)!}\,e^{-\frac{1}{2}\left(\frac{z-l-\eta_k}{\sigma_k}\right)^2}\right)^{\!2}dz-1\right),\tag{2}$$
where for k = 1, 2,..., Np and i = 1, 2,..., Nf, pθ,k,i is the Poisson-Gaussian mixture probability density function given by
$$p_{\theta,k,i}(z)=\frac{e^{-\upsilon_{\theta,k,i}}}{\sqrt{2\pi}\,\sigma_k}\sum_{l=0}^{\infty}\frac{\upsilon_{\theta,k,i}^{\,l}}{l!}\,e^{-\frac{1}{2}\left(\frac{z-l-\eta_k}{\sigma_k}\right)^2},\quad z\in\mathbb{R},$$
and the functions υθ,k,i and μθ,k,i are as defined in Section 2.3.1.
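The pixel-dependent factor in parentheses in Eq. (2) plays the role of the factor 1/υθ,k,i in Eq. (1), and quantifies how much information about a pixel’s mean photon count survives the readout noise. One possible numerical evaluation of this factor is sketched below; the truncation limit, integration grid, and example signal levels are illustrative choices rather than values prescribed by the analysis.

```python
import numpy as np
from math import factorial, exp, sqrt, pi

def ccd_noise_factor(nu, eta=0.0, sigma=2.0):
    """Per-pixel factor of Eq. (2) for a Poisson signal with mean nu corrupted
    by additive Gaussian readout noise (mean eta, SD sigma). The sum over l is
    truncated and the integral over z is approximated on a uniform grid."""
    l_max = int(nu + 10.0 * sqrt(nu) + 25)
    l = np.arange(0, l_max + 1)
    z = np.linspace(eta - 8.0 * sigma, eta + l_max + 8.0 * sigma, 6001)
    gauss = np.exp(-0.5 * ((z[:, None] - l[None, :] - eta) / sigma) ** 2)
    pois = np.array([exp(-nu) * nu**int(k) / factorial(int(k)) for k in l])
    p = gauss @ pois / (sqrt(2.0 * pi) * sigma)      # density p_{theta,k,i}(z)
    w = np.array([nu**(int(k) - 1) / factorial(int(k) - 1) for k in l[1:]])
    inner = gauss[:, 1:] @ w                         # sum over l >= 1 in Eq. (2)
    integrand = inner**2 / np.maximum(p, 1e-300)
    integral = integrand.sum() * (z[1] - z[0])
    return exp(-2.0 * nu) / (2.0 * pi * sigma**2) * integral - 1.0

# With no readout noise the factor would equal 1/nu; readout noise can only
# reduce it, and the reduction is more severe when nu is small.
for nu in (0.5, 2.0, 10.0):
    print(nu, 1.0 / nu, ccd_noise_factor(nu, sigma=2.0))
```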

2.3.3. EMCCD detector

For an EMCCD detector, each image in an acquired sequence is assumed to contain, in each of its pixels, a Poisson-distributed photon signal that is stochastically amplified before being corrupted by the detector’s additive, Gaussian-distributed readout noise. The stochastic signal amplification is modeled here as a geometrically multiplied branching process [12]. For this data model, the Fisher information matrix for a sequence of Nf images, comprising Np pixels each and acquired over the exposure intervals [ti−1, ei], ei ≤ ti, i = 1, 2,..., Nf, is given by

$$I(\theta)=\sum_{i=1}^{N_f}I_i(\theta)=\sum_{i=1}^{N_f}\sum_{k=1}^{N_p}\left(\frac{\partial\mu_{\theta,k,i}}{\partial\theta}\right)^T\left(\frac{\partial\mu_{\theta,k,i}}{\partial\theta}\right)\times\left(\frac{e^{-2\upsilon_{\theta,k,i}}}{2\pi\sigma_k^2 g^2}\int_{\mathbb{R}}\frac{1}{p_{\theta,k,i}(z)}\left(\sum_{l=1}^{\infty}e^{-\frac{1}{2}\left(\frac{z-l-\eta_k}{\sigma_k}\right)^2}\sum_{j=0}^{l-1}\binom{l-1}{j}\frac{\left(1-\frac{1}{g}\right)^{l-j-1}}{j!}\left(\frac{\upsilon_{\theta,k,i}}{g}\right)^{\!j}\right)^{\!2}dz-1\right),\tag{3}$$
where g is the electron multiplication gain (i.e., the average number of electrons that the EMCCD signal amplification produces for each detected photon), ηk and σk² are the mean and variance of the Gaussian readout noise at the kth pixel, and for k = 1, 2,..., Np and i = 1, 2,..., Nf, pθ,k,i is the probability density function given by
$$p_{\theta,k,i}(z)=\frac{e^{-\upsilon_{\theta,k,i}}}{\sqrt{2\pi}\,\sigma_k}\left[e^{-\frac{1}{2}\left(\frac{z-\eta_k}{\sigma_k}\right)^2}+\sum_{l=1}^{\infty}e^{-\frac{1}{2}\left(\frac{z-l-\eta_k}{\sigma_k}\right)^2}\sum_{j=0}^{l-1}\binom{l-1}{j}\frac{\left(1-\frac{1}{g}\right)^{l-j-1}}{(j+1)!}\left(\frac{\upsilon_{\theta,k,i}}{g}\right)^{\!j+1}\right],\quad z\in\mathbb{R},$$
and the functions υθ,k,i and μθ,k,i are as defined in Section 2.3.1. Note that the expression Ii(θ) for each image i is found in [12], as opposed to [10] for the other detector types considered in this paper.
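Analogously, the factor in parentheses in Eq. (3) can be evaluated numerically. The sketch below is only a rough illustration: it uses the lowest of the electron multiplication gains considered in Section 3 (g = 50) purely to keep the truncated double sum over the amplified electron count small, and the truncation limit and integration grid are arbitrary choices.

```python
import numpy as np
from math import comb, lgamma, log, exp, sqrt, pi

def emccd_noise_factor(nu, g=50.0, eta=0.0, sigma=24.0, l_max=600):
    """Per-pixel factor of Eq. (3): Poisson signal with mean nu, geometric
    multiplication with mean gain g, then Gaussian readout noise (mean eta,
    SD sigma). Sums over the amplified electron count l are truncated at l_max."""
    A = np.zeros(l_max + 1)   # weights of the Gaussians in p_{theta,k,i}(z), l >= 1
    B = np.zeros(l_max + 1)   # weights of the inner sum in Eq. (3), l >= 1
    for li in range(1, l_max + 1):
        a = b = 0.0
        for j in range(li):
            base = (log(comb(li - 1, j)) + (li - j - 1) * log(1.0 - 1.0 / g)
                    + j * (log(nu) - log(g)) - lgamma(j + 1.0))
            b += exp(base)
            a += exp(base + log(nu) - log(g) - log(j + 1.0))
        A[li], B[li] = a, b
    z = np.linspace(eta - 8.0 * sigma, l_max + eta + 8.0 * sigma, 8001)
    gauss = np.exp(-0.5 * ((z[:, None] - np.arange(l_max + 1)[None, :] - eta) / sigma) ** 2)
    p = exp(-nu) * (gauss[:, 0] + gauss[:, 1:] @ A[1:]) / (sqrt(2.0 * pi) * sigma)
    inner = gauss[:, 1:] @ B[1:]
    integrand = inner**2 / np.maximum(p, 1e-300)
    integral = integrand.sum() * (z[1] - z[0])
    return exp(-2.0 * nu) / (2.0 * pi * sigma**2 * g**2) * integral - 1.0

# Compare with the noiseless per-pixel factor 1/nu of Eq. (1).
for nu in (0.2, 1.0):
    print(nu, 1.0 / nu, emccd_noise_factor(nu))
```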

2.4. Imaging with an ideal detector

In this section, we provide the Fisher information expression for the case where images of a moving object are acquired by an ideal detector. Imaging with an ideal detector represents an important benchmark scenario, as is the case with imaging with a hypothetical noiseless detector (see Section 2.3.1). The assumptions made with an ideal detector, however, go beyond those made with a noiseless detector to eliminate all possible deterioration of the image data. Hence, the resulting limit of accuracy provides an ultimate accuracy benchmark against which the limit of accuracy computed for any of the imaging scenarios of Section 2.3 can be compared.

Not only does the ideal detector scenario assume the absence of detector noise, it assumes the detector to be non-pixelated, meaning that the precision with which the detector records the locations at which photons are detected is not limited by the dimensions of a pixel, but is instead arbitrarily high. Additionally, an ideal detector has an infinite detection area, such that no photon escapes detection by falling outside of the detection area.

Under the assumptions of the ideal detector scenario, the limit of the accuracy for estimating a parameter is, as mentioned in Section 1, referred to as a fundamental limit of accuracy. To calculate fundamental limits of accuracy for the estimation of parameters from a sequence of Nf images acquired over the exposure intervals [ti−1, ei], ei ≤ ti, i = 1, 2,..., Nf, the Fisher information matrix that is needed is given by

$$I(\theta)=\sum_{i=1}^{N_f}I_i(\theta)=\sum_{i=1}^{N_f}\int_{t_{i-1}}^{e_i}\Lambda^2(\tau)\,V_\theta^T(\tau)\left[\int_{\mathbb{R}^2}\frac{\left(\frac{\partial q_{z_\theta(\tau),o_\theta(\tau)}(x,y)}{\partial p(\tau)}\right)^T\left(\frac{\partial q_{z_\theta(\tau),o_\theta(\tau)}(x,y)}{\partial p(\tau)}\right)}{\Lambda(\tau)\,q_{z_\theta(\tau),o_\theta(\tau)}(x,y)+\Lambda_b(\tau)\,b_\tau(x,y)}\,dx\,dy\right]V_\theta(\tau)\,d\tau,\tag{4}$$
where Vθ(τ) := [−∂xθ(τ)/∂θ, −∂yθ(τ)/∂θ, ∂zθ(τ)/∂θ, ∂oθ(τ)/∂θ]^T and p(τ) := [x, y, zθ(τ), oθ(τ)], τ ≥ t0, and all functions are as defined in Section 2.3.1. As can be seen from the photon detection rate Λb and the photon spatial distribution function bτ in Eq. (4), the ideal detector scenario presented here accounts for the detection of photons from a background component. In doing so, it represents a generalized version of the ideal detector scenario presented in our previous work (e.g., [9, 10]), which assumes the absence of a background component. The generalized version is easily reduced to the more specific scenario by setting the background photon detection rate to zero (i.e., by letting Λb(τ) = 0, t0 ≤ τ ≤ tNf).

Equation (4) is specific in that it applies only to the ideal detector scenario, but is very general in the sense that it applies to the imaging of an object with any arbitrary 3D trajectory (xθ(τ), yθ(τ), zθ(τ)), τ ≥ t0, any arbitrary orientation oθ(τ), τ ≥ t0, and any arbitrary image function qzθ(τ),oθ(τ). The generality of Eq. (4) makes it a very complex expression. However, the introduction of specific assumptions can lead to very simple and explicit expressions for the limits of accuracy. In this paper, for example, we consider a small fluorescent object moving in a linear trajectory that is confined to the focal plane of the microscope. The trajectory is thus 2-dimensional (2D), and has no z-component zθ(τ). As depicted in Fig. 1(b), we express the linear trajectory parametrically as xθ(τ) = x0 + v(τ − t0)cos ϕ and yθ(τ) = y0 + v(τ − t0)sin ϕ, t0 ≤ τ ≤ tNf, where (x0, y0) are the coordinates of the starting position of the object, ϕ is the object’s direction of movement specified as an angle with respect to the x-axis, and v is the speed at which the object travels. We assume the object to be small enough to be modeled as a point source, and approximate the diffraction pattern formed by the detected photons with a 2D Gaussian profile [13, 14]. The image function is thus independent of the object’s orientation oθ(τ), and is given by

$$q(x,y)=\frac{1}{2\pi\sigma_{gauss}^2}\,e^{-\frac{x^2+y^2}{2\sigma_{gauss}^2}},\quad (x,y)\in\mathbb{R}^2,\tag{5}$$
where σgauss > 0 is the standard deviation of the Gaussian function. We further assume the common scenario where the rate at which photons are detected from the object is modeled as a constant, i.e., Λ(τ) = Λ0 ∈ ℝ+, t0 ≤ τ ≤ tNf, and where the durations of all frame intervals are configured to be equal, i.e., ti − ti−1 = ti+1 − ti, i = 1, 2,..., Nf − 1, and the durations of all exposure intervals are configured to be equal, i.e., ei − ti−1 = ei+1 − ti := Te, i = 1, 2,..., Nf − 1. Additionally, we assume the absence of a background component. Given these conditions, and given that we wish to estimate the starting position, the direction, and the speed of the object’s trajectory, i.e., θ = (x0, y0, ϕ, v) ∈ Θ, the fundamental limits of accuracy δx0, δy0, δϕ, and δv for the estimation of x0, y0, ϕ, and v, respectively, are given by
$$\delta_{x_0}=\delta_{y_0}=2\sigma_{gauss}\sqrt{\frac{T_e^2+\frac{3}{2}T_e\!\left(T_{tat}-\frac{1}{F}\right)+T_{tat}^2-\frac{3T_{tat}}{2F}+\frac{1}{2F^2}}{\Lambda_0\,F\,T_{tat}\,T_e\left(T_e^2+T_{tat}^2-\frac{1}{F^2}\right)}},\qquad \delta_{\phi}=\frac{2\sigma_{gauss}}{v}\sqrt{\frac{3}{\Lambda_0\,F\,T_{tat}\,T_e\left(T_e^2+T_{tat}^2-\frac{1}{F^2}\right)}},\qquad \delta_{v}=2\sigma_{gauss}\sqrt{\frac{3}{\Lambda_0\,F\,T_{tat}\,T_e\left(T_e^2+T_{tat}^2-\frac{1}{F^2}\right)}},\tag{6}$$
where Ttat := tNf − t0 denotes the duration of the total acquisition time interval, and F := Nf/Ttat denotes the acquisition frame rate. This result is stated formally as Theorem 1 in the Appendix, where a proof is also provided. The theorem includes a more general result that does not require equal frame durations and equal exposure durations, and it also includes the case where the image of the object is modeled with the classical Airy profile [15]. The simple expressions in Eq. (6) have the important advantage that they can be easily evaluated without having to explicitly calculate a Fisher information matrix.
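For convenience, Eq. (6) can be transcribed into a small helper function; the sketch below is an illustrative transcription under the equal-frame, equal-exposure, zero-background assumptions stated above, and returns the four fundamental limits for a given frame rate F and exposure duration Te ≤ 1/F.

```python
import numpy as np

def fundamental_limits_linear(sigma_gauss, Lambda0, v, T_tat, F, T_e):
    """Fundamental limits of accuracy of Eq. (6) for an in-focus linear
    trajectory: returns (delta_x0, delta_y0, delta_phi, delta_v).
    F is the frame rate and T_e <= 1/F the common exposure duration."""
    denom = Lambda0 * F * T_tat * T_e * (T_e**2 + T_tat**2 - 1.0 / F**2)
    numer = (T_e**2 + 1.5 * T_e * (T_tat - 1.0 / F)
             + T_tat**2 - 1.5 * T_tat / F + 0.5 / F**2)
    d_x0 = d_y0 = 2.0 * sigma_gauss * np.sqrt(numer / denom)
    d_phi = (2.0 * sigma_gauss / v) * np.sqrt(3.0 / denom)
    d_v = 2.0 * sigma_gauss * np.sqrt(3.0 / denom)
    return d_x0, d_y0, d_phi, d_v

# With T_e = 1/F (no time gaps), the results are independent of F, as in Eq. (7).
print(fundamental_limits_linear(84.0, 2000.0, 1500.0, 0.4, F=25.0, T_e=1.0 / 25.0))
print(fundamental_limits_linear(84.0, 2000.0, 1500.0, 0.4, F=100.0, T_e=1.0 / 100.0))
```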

If we additionally assume the common scenario of a continuous acquisition where there is no time gap between the end of exposure of one frame and the start of exposure of the next frame (i.e., if we set the exposure duration to be equal to the frame duration by letting Te = 1/F), then the fundamental limits of accuracy in Eq. (6) reduce to

$$\delta_{x_0}=\delta_{y_0}=\frac{2\sigma_{gauss}}{\sqrt{\Lambda_0 T_{tat}}},\qquad \delta_{\phi}=\frac{2\sqrt{3}\,\sigma_{gauss}}{v\,T_{tat}\sqrt{\Lambda_0 T_{tat}}},\qquad \delta_{v}=\frac{2\sqrt{3}\,\sigma_{gauss}}{T_{tat}\sqrt{\Lambda_0 T_{tat}}},\tag{7}$$
which no longer depend on parameters related to frame intervals or exposure intervals, and instead depend only on the total acquisition time Ttat. With the time gaps between successive exposure intervals removed, the continuous acquisition scenario in the case of an ideal detector is equivalent to the recording of the entire trajectory in a single image. Accordingly, the expressions in Eq. (7) are identical to the fundamental limits of accuracy derived in [10] for the capture of an entire 2D linear trajectory in a single image under the same assumptions of a 2D Gaussian image function and a constant photon detection rate.
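For example, with σgauss = 84 nm, Λ0 = 2000 photons/s, and Ttat = 0.4 s (the values used for the illustrations in Section 3), Eq. (7) gives δx0 = δy0 = 2(84 nm)/√(2000 photons/s × 0.4 s) ≈ 5.9 nm, regardless of how the total acquisition time is divided into frames.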

3. Results and discussion

Using the theoretical results of Section 2, we first illustrate in Section 3.1, using the example of a small fluorescent object moving in a linear trajectory, how time discretization of the imaging process affects the limits of accuracy corresponding to the different detector-dependent data models presented in Sections 2.3 and 2.4. We compare the practical limits of accuracy for CCD and EMCCD imaging with each other, and against the benchmarks provided by the limits of accuracy for imaging with a hypothetical noiseless detector and an ideal detector. To further illustrate the usefulness of computing and comparing limits of accuracy, we also present a study on how the levels of various noise sources might affect the selection of a detector for image acquisition. Subsequently, in Section 3.2, we present results that demonstrate the same detector-dependent effects of time discretization for an object moving in a circular arc trajectory. Lastly, in Section 3.3, we look at how increasing the spatial resolution of the detector might be used to improve the accuracy of parameter estimation.

For our illustrations, we assume the trajectory of the object to be confined to the focal plane of a microscope. We further assume a constant rate for the detection of photons from the object. We model the object as a point source, and assume its image to be given by the 2D Gaussian image function of Eq. (5). Images of the moving object are assumed to be acquired in sequence, without any time gaps between successive exposure intervals (i.e., ei = ti, i = 1, 2,..., Nf, in Fig. 1(a)). Images in a sequence are also assumed to have equal exposure durations (i.e., ti − ti−1 = ti+1 − ti, i = 1, 2,..., Nf − 1, in Fig. 1(a)). With the exception of an exploration of the effect of background noise on detector choice in Section 3.1.5, we additionally assume the absence of a background component, and all detected photons thus originate from the object.

3.1. Effect of acquisition frame rate

In Fig. 2, we show, as functions of the acquisition frame rate, the limits of accuracy for the estimation of the starting coordinates x0 and y0, the direction-specifying angle ϕ, and the speed v for a point source moving in a linear trajectory (see Fig. 1(b); also see Section 2.4 for the definition of a linear trajectory (xθ(τ), yθ(τ))). Each of these four parameters of interest is given its own plot, in which limits of accuracy corresponding to different detector types are shown as the frame rate is varied from a low 5 frames per second (fps) to a high 200 fps.


Fig. 2 Limits of accuracy, shown as functions of the acquisition frame rate, for the estimation of (a) the coordinate x0 and (b) the coordinate y0 of the starting position, (c) the angle ϕ specifying the direction of movement with respect to the x-axis, and (d) the speed v, of a point source moving in a linear trajectory (see Fig. 1(b)). In each plot, the limits of accuracy correspond to imaging with an ideal detector (*), a hypothetical noiseless detector (□), a CCD detector (⋄), and an EMCCD detector (○). For each pixelated detector type, the pixel size is 16μm × 16μm, and an image consists of an 8×8 pixel array. The CCD detector adds readout noise with mean ηk = 0 e and standard deviation σk = 2 e to each pixel k. The EMCCD detector amplifies photon signals at an electron multiplication gain of g = 950, and adds readout noise with mean ηk = 0 e and standard deviation σk = 24 e to each pixel k. The absence of a background component is assumed. The 2D Gaussian profile that models the image of the point source has a standard deviation of σgauss = 84 nm, and the rate at which photons are detected from the point source is Λ0 = 2000 photons/s. The magnification of the microscope is M = 100. The values of the estimated parameters are x0 = y0 = −250 nm with respect to the optical (z-)axis which passes through the center of an image, ϕ = 30°, and v = 1500 nm/s. At any given frame rate, the total acquisition time is Ttat = 0.4 s, and is divided equally among all frames. The acquisition has no time gaps between successive exposures. The CCD limit of accuracy attains its best (i.e., lowest) value, in (a) and (d), at 15 fps, where the average photon signal level per frame and per pixel are 133 and 2.08 photons, and, in (b) and (c), at 10 fps, where the average photon signal level per frame and per pixel are 200 and 3.125 photons. In (a), (b), (c), and (d), the EMCCD limit of accuracy first attains a lower value than the CCD limit of accuracy at around 25 fps, where the average photon signal level per frame and per pixel are 80 and 1.25 photons.


3.1.1. Ideal detector provides the ultimate accuracy benchmark

In each plot of Fig. 2, the fundamental limit of accuracy is computed using the appropriate expression in Eq. (7), and is plotted as a straight line because it does not depend on the acquisition frame rate. The fact that the fundamental limit of accuracy attains the lowest numerical value of all the curves in the plot is expected, as it is meant to be the ultimate benchmark (i.e., the lowest possible standard deviation for estimating the parameter) based on the assumptions of an infinite detection area and the absence of detector noise and image pixelation.

3.1.2. Hypothetical noiseless detector yields accuracy that improves with increasing frame rate

For all four parameters of interest, it can be seen from Fig. 2 that the limit of accuracy for a hypothetical noiseless detector, computed using Eq. (1), improves (i.e., decreases in value) monotonically with increasing acquisition frame rate. Intuitively, this behavior can be attributed to the fact that a higher frame rate produces an image sequence that represents a finer temporal sampling of the trajectory, thereby capturing more information about the trajectory, and enabling the determination of the trajectory’s parameters with higher accuracy.

The plots of Fig. 2 suggest that beyond a certain frame rate (25 fps or so in this particular example), the improvement of the limit of accuracy for the hypothetical noiseless detector becomes substantially less appreciable. Moreover, one can see that the limit of accuracy levels off at a value that is higher than the fundamental limit of accuracy, and that it will therefore never attain the ultimate benchmark. Consequently, the best possible estimation accuracy that can be expected when a noiseless detector is used will always be poorer than the best possible accuracy that can be expected when an ideal detector is used. The primary reason for this is that whereas an ideal detector produces non-pixelated images of arbitrarily high spatial resolution, (i.e., images where the position at which each photon is detected is recorded with arbitrarily high precision), a noiseless detector produces pixelated images of lower spatial resolution (i.e., images where the position at which each photon is detected is recorded with a precision that is limited by the dimensions of a pixel). In other words, while the noiseless detector data model accounts for the data-deteriorating effect of image pixelation, the ideal detector data model assumes its absence. Though both of these data models are based on unrealistic assumptions, a comparison between their limits of accuracy provides a means for studying the effect of pixelation on the accuracy of parameter estimation.

3.1.3. CCD detector yields poor accuracy at high frame rates

For image acquisition with a CCD detector, the plots of Fig. 2 show that the limit of accuracy, computed using Eq. (2), improves as the frame rate is increased up to a certain point. Beyond this certain frame rate, however, the limit of accuracy steadily worsens as the frame rate continues to be increased. For example, the limit of accuracy for estimating the x0 coordinate improves from 10.1 nm at 5 fps to 8.8 nm at 15 fps, but exhibits a deteriorating trend thereafter. This interesting behavior can be explained by a tradeoff between two opposing effects. On the one hand, more information about the trajectory is gained when the frame rate is increased to produce an image sequence that represents a finer temporal sampling of the trajectory. This is the same effect that is seen with the hypothetical noiseless detector (see Section 3.1.2). On the other hand, some information about the trajectory is lost when the frame rate is increased, since fewer photons are detected in each frame due to the shortened exposure interval, resulting in the readout noise in each image pixel becoming increasingly significant compared to the photon signal detected in the pixel.

When the shortened exposure interval is still long enough that a sufficient number of photons are detected in each frame, the deteriorative effect of a lowered signal to detector noise ratio does not entirely negate the advantage gained with the increased temporal resolution. This tradeoff in favor of the increased temporal resolution is what accounts for the improving trend that is seen for the limit of accuracy at relatively low frame rates (up to 15 fps for the x0 coordinate in our example). Conversely, when the frame rate increase shortens the exposure interval to such an extent that the advantage gained with the higher temporal resolution is eclipsed by the deteriorative effect of a lowered signal to detector noise ratio, the tradeoff in favor of the latter results in a worsening trend for the limit of accuracy. This is seen for frame rates beyond 15 fps for the x0 coordinate in our example. At these higher frame rates, an average of fewer than 133 photons is detected per image in a given sequence.

The plots of Fig. 2 thus demonstrate that while a CCD detector is appropriate for imaging at relatively low frame rates, its readout noise renders it unsuitable for imaging at higher frame rates. This is especially the case for imaging under conditions where only a relatively low number of photons can be expected to be detected from the moving object of interest over the course of its trajectory.

Note that at any given frame rate, the limit of accuracy for a CCD detector is worse than the limit of accuracy for a hypothetical noiseless detector. This is expected, since the difference between the two imaging scenarios is the data-deteriorating effect of the CCD detector’s readout noise. The best possible estimation accuracy that can be expected when a CCD detector is used will therefore always be worse than the best possible accuracy that can be expected when a noiseless detector is used. Despite the fact that imaging with a noiseless detector is a hypothetical scenario, a comparison of its limit of accuracy with the CCD limit of accuracy represents a useful way of investigating the effect of readout noise on the accuracy of parameter estimation.

3.1.4. EMCCD detector implements UAIM and yields high accuracy at high frame rates

An EMCCD detector has readout noise just like a CCD detector, but is capable of substantially reducing the corruptive effect of the noise on the photon signal. It achieves this by amplifying the signal in a given pixel before the signal is read out, thereby producing an augmented signal that is large in comparison to the noise introduced when it is read out. The signal amplification is a stochastic process, however, meaning that it is itself a source of detector noise that deteriorates the photon signal. Nevertheless, by virtue of the signal amplification, the plots of Fig. 2 show that, unlike what is observed for the CCD scenario, the limit of accuracy for an EMCCD detector (computed using Eq. (3)) improves with increasing frame rate throughout the entire range of frame rates shown, even in the range of higher frame rates where relatively few photons are detected per image. At frame rates of 100 fps and higher, for example, an average of no more than 20 photons are detected per image in a given sequence.

The fact that the EMCCD limit of accuracy improves rather than deteriorates at higher frame rates is explained by the noise characteristics of an EMCCD detector. Provided that a high level of signal amplification (i.e., a high electron multiplication gain) is used, a small photon signal detected in an EMCCD pixel will be less corrupted by detector noise (i.e., by both the readout noise and the stochasticity of the signal amplification) than a large photon signal [12]. In other words, the overall effect of the stochastic signal amplification and the subsequent readout of the amplified signal is such that the original signal in a given pixel will experience less corruption when it is small to begin with. Therefore, at high frame rates where the shortened exposures result in few photons being detected per frame (and, accordingly, very small amounts of signal being detected per pixel), the limit of accuracy continues to improve because the advantage gained with the increased temporal resolution is not offset by a significant loss of information due to corruption of the signals in the pixels by detector noise. This is in direct contrast to imaging with a CCD detector, where at higher frame rates the benefit of the increased temporal resolution is negated by corruption of the signal by readout noise.

Importantly, when a sufficiently high frame rate is used, such that images are produced where the photon count in each pixel generally averages less than one, the imaging method UAIM [7] is effectively implemented. A UAIM image is unusual, in that its unconventionally low pixel photon counts often make visual detection of the imaged object a difficult task. From the perspective of parameter estimation, however, such an unconventional image enables estimation with very high accuracy, owing to the fact that the very low signals in its pixels are minimally corrupted by detector noise. (Indeed, the minimal corruption when the signal level in a pixel is less than one photon, which has been demonstrated using an information-theoretic approach in [7], correlates with the fact that under such an extreme low-light regime, one can discern signal from the EMCCD detector’s readout noise with relatively high certainty (e.g., [16])). In fact, a parameter estimation accuracy can be attained that is close to the accuracy that one can only achieve when a detector that introduces no noise is used. This is demonstrated in Fig. 2, where in each plot the EMCCD limit of accuracy can be seen to approach the limit of accuracy for the hypothetical noiseless detector at high frame rates. At 200 fps (the highest frame rate shown), for example, the EMCCD limit of accuracy for estimating the x0 coordinate is 8.0 nm, and is within 18% of the limit of accuracy of 6.8 nm for the noiseless detector. At this high frame rate, the brightest of all pixels in the entire sequence of 80 images (acquired over the total acquisition time of 0.4 s) detects an average of only 4.3 photons, and nearly 95% of the pixels in the sequence detect an average of less than 1 photon each.

Note that while the EMCCD limit of accuracy can get close to the limit of accuracy for the hypothetical noiseless detector, it will never actually attain it. As is the case with the CCD scenario, this is due to the data-deteriorating effect of detector noise, which can never be completely eliminated. Also, as can be seen in the example of Fig. 2, it is often the case that the EMCCD limit of accuracy is worse than the CCD limit of accuracy at low frame rates. This can be expected whenever the relatively long exposures at low frame rates allow enough photons to be captured in each frame to sufficiently overcome the readout noise of the CCD detector, and to render the EMCCD detector’s signal amplification unnecessary. In general, it is not always easy to determine when to use one type of detector over the other, as the answer depends on the precise experimental setting (e.g., frame rate, photon budget, detector noise parameters, magnification). However, our approach of computing and comparing limits of accuracy provides a useful means of arriving at the answer. We give examples in the next section, where we make use of limits of accuracy to examine how the levels of different noise sources might affect the choice of detector.

3.1.5. Effect of noise on detector choice

By comparing the limits of accuracy corresponding to different levels of various noise sources, we explore in this section how the readout noise level of a CCD detector, the readout noise level and signal amplification level of an EMCCD detector, and the noise level of the background component might impact the selection of a detector for image acquisition. With the exception of the noise levels which are varied, the examples considered assume the experimental setting of Fig. 2, and use the estimation of the x0 coordinate for illustration. Note that the general results presented below (e.g., the shifting of the frame rate at which the CCD and EMCCD limits of accuracy intersect as a result of changing the CCD detector’s readout noise level) are also applicable to the estimation of parameters in other problems. However, the specific results (e.g., the specific frame rate at which the CCD and EMCCD limits of accuracy intersect) pertain strictly to the problem of Fig. 2.

The lower the level of readout noise, the lesser the extent to which the acquired image data is corrupted. Therefore, the lower the readout noise level of a CCD detector, the better the accuracy with which one can expect to estimate a parameter of interest from an image sequence acquired at a given frame rate. Further, it follows that when a CCD detector with a lower readout noise level is used, one can acquire images at higher frame rates and yet still expect to carry out parameter estimation with an accuracy that is superior or comparable to that which is attainable if an EMCCD detector is used instead. Figure 3(a) demonstrates both of these points with CCD limits of accuracy that correspond to readout noise standard deviations of 1, 2, and 6 electrons. At each frame rate shown, it can be seen that the detector with the low 1-electron noise level has the best (i.e., smallest) limit of accuracy, and that the detector with the high 6-electron noise level has the worst (i.e., largest) limit of accuracy. Moreover, whereas the limit of accuracy for the detector with the 2-electron noise level starts to become worse than the EMCCD limit of accuracy at around 25 fps, the limit of accuracy for the detector with the 1-electron noise level only starts to become worse than the EMCCD limit of accuracy at a significantly higher 70 fps or so. Lowering the readout noise level for the CCD detector thus shifts the intersection of the CCD and EMCCD limits of accuracy to a higher frame rate, and allows the use of a CCD instead of an EMCCD detector at higher acquisition speeds without losing any accuracy in the parameter estimation. (Note that for the detector with the 6-electron noise level, the image data is corrupted to such an extent that the limit of accuracy is worse than the EMCCD limit of accuracy across all frame rates shown.)


Fig. 3 Comparing the limits of accuracy, corresponding to imaging with CCD and EMCCD detectors at different levels of various noise sources and shown as functions of the acquisition frame rate, for the estimation of the coordinate x0 of the starting position of a point source moving in a linear trajectory. In (a), the limits of accuracy correspond to CCD imaging with a readout noise standard deviation (SD) of σk = 1 e (red ⋄), 2 e (black ⋄), and 6 e (blue ⋄) in each pixel k, and to EMCCD imaging (○) with an electron multiplication (EM) gain of g = 950 and a readout noise SD of σk = 24 e in each pixel k. In (b), the limits of accuracy correspond to EMCCD imaging with an EM gain of g = 950 and a readout noise SD of σk = 12 e (green ○), 24 e (black ○), 36 e (red ○), and 64 e (blue ○) in each pixel k, and to CCD imaging (⋄) with a readout noise SD of σk = 2 e in each pixel k. In (c), the limits of accuracy correspond to EMCCD imaging with an EM gain of g = 2000 (green ○), 950 (black ○), 300 (red ○), and 50 (blue ○), and a readout noise SD of σk = 24 e in each pixel k, and to CCD imaging (⋄) with a readout noise SD of σk = 2 e in each pixel k. In (a), (b), and (c), the absence of a background component is assumed. In (d), the limits of accuracy correspond to CCD imaging (⋄) and EMCCD imaging (○) with background noise levels of βk,i = 0 (black), 5 (red), and 10 (blue) photons in each pixel k of each frame i at 5 fps. (At each noise level, the background photons are assumed to be detected at a constant rate, and to be distributed uniformly over the detector.) For CCD imaging, readout noise with an SD of σk = 2 e in each pixel k is assumed. For EMCCD imaging, an EM gain of g = 950 and readout noise with an SD of σk = 24 e in each pixel k are assumed. In (a), (b), (c), and (d), the readout noise in all cases has a mean of ηk = 0 e in each pixel k. Other details of the acquisition setting and problem description, including the values of parameters not mentioned here, are as specified in Fig. 2.


As in the case of a CCD detector, increasing the readout noise level of an EMCCD detector can be expected to produce image data that is more corrupted. However, given that an EMCCD detector is operated at a high level of signal amplification, as is typically the case, its readout noise level makes a relatively small impact on the extent to which the photon signal in a given pixel is corrupted. This has been reported in [7], where it was shown that when the photon signal level in a pixel is low, increasing the readout noise level results in greater signal corruption, though the effect is not substantial, in the sense that the extent of signal corruption is increased only by a relatively small amount over a large range of readout noise levels. Further, it was shown that as the photon signal level in a pixel increases, the effect of the readout noise level becomes even more insignificant. Extending this result to each pixel of an image sequence, one can expect that increasing the readout noise level of an EMCCD detector will only deteriorate the accuracy of parameter estimation by a relatively small amount. This is illustrated in Fig. 3(b), where EMCCD limits of accuracy are shown which correspond to a high electron multiplication gain of 950 and readout noise standard deviations of 12, 24, 36, and 64 electrons. At each frame rate shown, the EMCCD limit of accuracy worsens with increasing readout noise level. As expected, however, the values of the limits at a given frame rate are very close, especially at the lowest frame rates where the readout noise level has an almost negligible effect due to the relatively large photon signal levels in the pixels. At the higher frame rates, the differences in the values of the limits are a little bigger, as the readout noise level has more of an effect when the photon signal levels are smaller. Figure 3(b) further shows that due to the minor impact of the readout noise level, the performance of the EMCCD detector, relative to that of the CCD detector, is essentially unchanged. For all four readout noise levels considered, the EMCCD limit of accuracy intersects the CCD limit of accuracy at around 25 fps.

It has been shown in [12] that the effect of the EMCCD signal amplification level on the extent to which the photon signal in a given pixel is corrupted depends on the pixel’s photon signal level. When the photon signal level is low, increasing the amplification level can be expected to lessen the signal corruption. When the photon signal level is high, increasing the amplification level can potentially lead to greater signal corruption. Since an image sequence is a collection of pixels with different photon signal levels, the overall level of corruption, and hence the accuracy for estimating a parameter from the sequence, will be determined by the combined effect of the varied levels of signal corruption in the individual pixels. For a relatively low-light image sequence, one can generally expect that increasing the amplification level will lessen the overall level of corruption and yield an improved parameter estimation accuracy. This is the case for our example, where the average photon signal level ranges from 6.25 photons per pixel at 5 fps to 0.156 photons per pixel at 200 fps. In Fig. 3(c), where EMCCD limits of accuracy that correspond to electron multiplication gains of 50, 300, 950, and 2000 are plotted, it can be seen, at each frame rate shown, that increasing the signal amplification level improves the limit of accuracy. The improvement can be seen to become more substantial as the photon signal per frame decreases with increasing frame rate. Going from a relatively high gain of 300 to the highest gain of 2000, for example, the improvement in accuracy is 0.5% (from 11.88 nm to 11.82 nm) at 5 fps, compared to 4.7% (from 8.26 nm to 7.87 nm) at 200 fps. Going from a low gain of 50 to the highest gain of 2000, the improvement is more drastic, ranging from 2.6% (from 12.13 nm to 11.82 nm) at 5 fps to 21.7% (from 10.05 nm to 7.87 nm) at 200 fps. Figure 3(c) further suggests that changing the signal amplification level in the high range of 300 to 2000 does not alter very much the frame rate at which the CCD and EMCCD limits of accuracy intersect. At all three gains of 300, 950, and 2000, the EMCCD detector begins to yield better accuracies than the CCD detector at around 25 fps. The figure also shows that at the low gain of 50, the intersection occurs at around 35 fps, indicating that under a relatively low-light setting, a CCD detector can outperform an EMCCD detector up to a higher frame rate if the latter is operated at a low signal amplification level. (Note that at the low electron multiplication gain of 50, the limit of accuracy actually starts to exhibit a deteriorating trend at around 50 fps. This scenario therefore serves to demonstrate the necessity of using a high level of signal amplification when implementing UAIM.)

The background component introduces photons that originate from anything other than the object of interest, and are indistinguishably detected along with the photons originating from the object of interest. It is therefore a source of noise, and as such, it can only worsen the accuracy for estimating a parameter, regardless of the specific detector type that is used to acquire the image data. From the perspective of its interplay with detector noise, however, the effect of the background component depends on the particular detector type. For a CCD detector, the detection of background photons increases the photon signal level in each pixel, resulting in an improved signal to readout noise ratio. For an EMCCD detector, the increased photon signal level in each pixel has the undesirable effect of rendering the signal amplification less beneficial (see Section 3.1.4). Therefore, with increasing levels of background noise, the general expectation is that the CCD detector will be able to outperform the EMCCD detector up to increasingly higher frame rates. Figure 3(d) provides an illustration of the points made with CCD and EMCCD limits of accuracy corresponding to three different levels of background noise. The background photons are assumed to be distributed uniformly over the detector, and the three noise levels correspond to constant background photon detection rates that translate to the detection of an average of 0, 5, and 10 background photons per pixel at 5 fps. (Note that an average of 0 photons per pixel is equivalent to the absence of a background component.) Demonstrating that the parameter estimation accuracy worsens with increasing noise level regardless of the detector type, Fig. 3(d) shows that at any given frame rate, the CCD and EMCCD limits of accuracy worsen as the background noise level (at 5 fps) is increased from 0 to 5 to 10 photons per pixel. Demonstrating that the nature of the interplay between the background component and detector noise is such that an increased background noise level allows the CCD detector to outperform the EMCCD detector up to a higher frame rate, the figure shows that the frame rate at which the CCD and EMCCD limits of accuracy intersect changes from around 25 fps to 44 fps to 54 fps as the noise level (at 5 fps) increases from 0 to 5 to 10 photons per pixel. Another way to appreciate that the nature of the interplay favors the CCD detector is to note that even though both the CCD and EMCCD limits of accuracy worsen with increasing levels of background noise, the deterioration of the EMCCD limit of accuracy, at each frame rate shown, is more substantial than the deterioration of the CCD limit of accuracy. (Note that though not shown in Fig. 3(d), the limits of accuracy for the hypothetical noiseless detector and the ideal detector will also worsen with increasing levels of background noise.)

3.2. A second example: circular arc trajectory

In Fig. 4, the case of a point source moving in a trajectory described by a circular arc is considered. There are five parameters of interest (see Fig. 5), namely the coordinates xc and yc of the center of the circular arc, the radius R of the circular arc, the angular speed ω at which the point source moves along the arc, and the angular offset ψ0 that specifies the point source’s starting position with respect to the x-axis. Expressed in terms of these parameters, the trajectory is given by xθ(τ) = xc + R cos(ω(τ − t0) + ψ0) and yθ(τ) = yc + R sin(ω(τ − t0) + ψ0), θ = (xc, yc, R, ω, ψ0), t0 ≤ τ ≤ tNf.
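
To make the parameterization concrete, the following minimal Python sketch (our own illustrative code, not from the paper; the function name and the choice of sample time points are ours) evaluates (xθ(τ), yθ(τ)) for the parameter values quoted in the caption of Fig. 4.

```python
import numpy as np

def circular_arc_trajectory(tau, t0, theta):
    """Evaluate (x_theta(tau), y_theta(tau)) for the circular-arc trajectory."""
    xc, yc, R, omega, psi0 = theta
    x = xc + R * np.cos(omega * (tau - t0) + psi0)
    y = yc + R * np.sin(omega * (tau - t0) + psi0)
    return x, y

# Parameter values quoted in Fig. 4: xc = yc = 0 nm, R = 250 nm,
# omega = 6 rad/s, psi0 = 20 degrees; total acquisition time 0.4 s.
theta = (0.0, 0.0, 250.0, 6.0, np.deg2rad(20.0))
tau = np.linspace(0.0, 0.4, 5)   # a few sample time points (s), with t0 = 0
print(circular_arc_trajectory(tau, 0.0, theta))
```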

Fig. 4 Limits of accuracy, shown as functions of the acquisition frame rate, for the estimation of (a) the coordinate xc and (b) the coordinate yc of the center of the circular arc traversed by a point source, (c) the radius R of the circular arc, (d) the angular speed ω at which the point source travels, and (e) the angular offset ψ0 specifying the starting position of the point source with respect to the x-axis (see Fig. 5). In each plot, the limits of accuracy correspond to imaging with an ideal detector (*), a hypothetical noiseless detector (□), a CCD detector (⋄), and an EMCCD detector (○). For each pixelated detector type, the pixel size is 16μm × 16μm, and an image consists of an 8×8 pixel array. The CCD detector adds readout noise with mean ηk = 0 e and standard deviation σk = 2 e to each pixel k. The EMCCD detector amplifies photon signals at an electron multiplication gain of g = 950, and adds readout noise with mean ηk = 0 e and standard deviation σk = 24 e to each pixel k. The absence of a background component is assumed. The 2D Gaussian profile that models the image of the point source has a standard deviation of σgauss = 84 nm, and the rate at which photons are detected from the point source is Λ0 = 2000 photons/s. The magnification of the microscope is M = 100. The values of the estimated parameters are xc = yc = 0 nm with respect to the optical (z-)axis which passes through the center of an image, R = 250 nm, ω = 6 rad/s, and ψ0 = 20°. At any given frame rate, the total acquisition time is Ttat = 0.4 s, and is divided equally among all frames. The acquisition has no time gaps between successive exposures. The CCD limit of accuracy attains its best (i.e., lowest) value, in (a), at 5 fps, where the average photon signal level per frame and per pixel are 400 and 6.25 photons, and, in (b), (c), (d), and (e), at 15 fps, where the average photon signal level per frame and per pixel are 133 and 2.08 photons. In (a), (b), (c), (d), and (e), the EMCCD limit of accuracy first attains a lower value than the CCD limit of accuracy at around 25 fps, where the average photon signal level per frame and per pixel are 80 and 1.25 photons.

Fig. 5 Schematic sketch of a circular arc trajectory. The trajectory is depicted as a circular arc, with an arrowhead indicating the direction of movement. It is described by five parameters: the coordinates (xc, yc) of the center of the circular arc, the radius R of the circular arc, the angular speed ω at which the object travels along the arc, and the angular offset ψ0 specifying the object’s starting position with respect to the x-axis.

For the scenarios involving a pixelated detector, the limits of accuracy shown in Fig. 4 are obtained using the Fisher information matrix expressions of Section 2.3 as in the case of the linear trajectory, but with the trajectory (xθ(τ), yθ(τ)) as defined here for the circular arc. For the ideal detector scenario, the fundamental limits of accuracy can be computed using the Fisher information matrix of Eq. (4). However, under the assumption of a continuous acquisition with no time gaps between successive exposure intervals, they can also be computed using a more specific Fisher information matrix expression, presented in Corollary 5 of [10].

We again consider the effect of the acquisition frame rate on the limits of accuracy corresponding to the various detector-dependent data models. For each parameter, the limits of accuracy in Fig. 4 exhibit trends similar to those shown in Fig. 2 for a linearly moving point source. This example helps to demonstrate that similar results can be expected for the limits of accuracy regardless of the specific trajectory and the specific parameters of interest.

3.3. Effect of spatial resolution

As discussed in Section 3.1.4 and shown in Figs. 2 and 4, the limit of accuracy for an EMCCD detector can get close to the limit of accuracy for a hypothetical noiseless detector at high frame rates, but never actually attain it because the EMCCD detector produces images that are corrupted by detector noise. The limit of accuracy for a noiseless detector is thus a bound for the EMCCD limit of accuracy. We demonstrate in this section, however, that the bound itself can be improved, and that by improving the bound, the EMCCD limit of accuracy is also improved.

As explained in Section 3.1.2 and shown in Figs. 2 and 4, a gap exists between the limit of accuracy for a hypothetical noiseless detector and the fundamental limit of accuracy primarily because the noiseless detector produces pixelated images that are of lower spatial resolution than the non-pixelated images produced by an ideal detector. Therefore, to improve the limit of accuracy for the noiseless detector, the idea is to increase the spatial resolution of the resulting image so that it better approximates an ideal non-pixelated image of arbitrarily high resolution. The general strategy is to reduce the effective pixel size of the detector so that more finely pixelated images are produced. The finer the pixelation, the higher the spatial resolution of the resulting image, and the closer the image will be to an ideal non-pixelated image. While the effective pixel size of a detector can be reduced, for example, by increasing the magnification of the microscope system [7], another approach is simply to use a detector that has a smaller physical pixel size. For our illustration here, we assume that the latter approach is taken.
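
As a simple worked illustration of the two routes to finer pixelation (a sketch of our own; the magnification and pixel sizes are those used in Figs. 2 and 6, and the helper function is hypothetical), the effective pixel size in object space is taken here to be the physical pixel size divided by the lateral magnification M.

```python
def effective_pixel_size_nm(physical_pixel_um, magnification):
    """Object-space (effective) pixel size: physical pixel size / magnification."""
    return physical_pixel_um * 1e3 / magnification

M = 100                                    # magnification used in Figs. 2 and 6
for pixel_um in (16.0, 8.0):               # the two physical pixel sizes compared
    print(f"{pixel_um:4.0f} um pixels at M = {M}: "
          f"{effective_pixel_size_nm(pixel_um, M):.0f} nm in object space")
# 16 um -> 160 nm and 8 um -> 80 nm; alternatively, doubling M to 200 with
# 16 um pixels would give the same 80 nm effective pixel size.
```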

We revisit the estimation problem in Section 3.1, but for each pixelated detector type, we consider two detectors with different spatial resolutions. The detector with the lower resolution has 16μm × 16μm pixels, and the corresponding limits of accuracy are the ones plotted in Fig. 2, which have been duplicated in Fig. 6. The detector with the higher resolution has 8μm × 8μm pixels, and the corresponding limits of accuracy are plotted in Fig. 6 for comparison.

Fig. 6 Comparing the limits of accuracy, corresponding to imaging with detectors of different spatial resolutions, for the estimation of (a) the coordinate x0 and (b) the coordinate y0 of the starting position, (c) the angle ϕ specifying the direction of movement with respect to the x-axis, and (d) the speed v, of a point source moving in a linear trajectory (see Fig. 1(b)). In each plot, limits of accuracy as functions of the acquisition frame rate are shown which correspond to imaging with an ideal detector (*), a hypothetical noiseless detector (□), a CCD detector (⋄), and an EMCCD detector (○). For each pixelated detector type, the limits of accuracy correspond to imaging with a low resolution detector (—) having a 16μm × 16μm pixel size, and imaging with a high resolution detector (–.–) having an 8μm × 8μm pixel size. In either case, the size of an image is 128μm × 128μm, such that an image for the low resolution detector consists of an 8×8 pixel array, and an image for the high resolution detector consists of a 16×16 pixel array. Other details of the acquisition setting and problem description, including the values of parameters not mentioned here, are as specified in Fig. 2. Note that due to identical assumptions, the curves corresponding to the low resolution detector are the same as those shown in Fig. 2. For the high resolution detector, the CCD limit of accuracy attains its best (i.e., lowest) value, in (a), (b), and (d), at 10 fps, where the average photon signal level per frame and per pixel are 200 and 3.125 photons, and, in (c), at 5 fps, where the average photon signal level per frame and per pixel are 400 and 6.25 photons. Also for the high resolution detector, and in (a), (b), (c), and (d), the EMCCD limit of accuracy has a lower value than the CCD limit of accuracy at each frame rate shown. For analogous information on the CCD and EMCCD limits of accuracy for the low resolution detector, see Fig. 2.

3.3.1. Hypothetical noiseless detector with higher spatial resolution yields improved accuracy

From the plots of Fig. 6, it can be seen that by virtue of its twofold resolution improvement in both the x and the y dimensions over its lower resolution counterpart, the higher resolution noiseless detector has a limit of accuracy curve that is lower, and hence closer to the fundamental limit of accuracy, than the curve for the lower resolution noiseless detector. At the high acquisition frame rate of 200 fps, for example, the limit of accuracy for estimating the x0 coordinate improves from 6.8 nm (within 15% of the fundamental limit of accuracy of 5.9 nm) for the low resolution detector, to 6.2 nm (within 5% of the fundamental limit of accuracy) for the high resolution detector.

3.3.2. EMCCD detector with higher spatial resolution yields improved accuracy

Analogous to what we see for the noiseless detector scenario, the plots of Fig. 6 show that the limit of accuracy curve for the higher resolution EMCCD detector is lower, and hence closer to the fundamental limit of accuracy, than the curve for the lower resolution EMCCD detector. At 200 fps, for example, the limit of accuracy for estimating the x0 coordinate improves from 8.0 nm (within 36% of the fundamental limit of accuracy of 5.9 nm) for the low resolution detector, to 6.8 nm (within 15% of the fundamental limit of accuracy) for the high resolution detector. Therefore, by improving the bound set by the noiseless detector scenario, we have accordingly improved the EMCCD limit of accuracy. Interestingly, in this particular example, the improvement is such that at the highest frame rates shown, the limit of accuracy for the higher resolution EMCCD detector attains the limit of accuracy for the lower resolution noiseless detector. This is something that the limit of accuracy for the lower resolution EMCCD detector cannot achieve.

The improvement observed for the EMCCD limit of accuracy is possible because time discretization and pixel size reduction go hand in hand in attaining the desirable condition that a small photon signal is detected in each EMCCD pixel, with the added benefit of producing a higher spatial resolution that cannot be achieved by time discretization alone. In other words, a detector with a smaller pixel size distributes the detected photons over more pixels, thereby increasing the spatial resolution while, at the same time, reducing the amount of signal detected in each pixel to lessen the corruption of the signal by detector noise (see Section 3.1.4).
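
The joint effect of frame rate and pixel count on the per-pixel signal can be checked with a few lines of arithmetic (our own sketch, which for the per-pixel average assumes the detected photons are spread evenly over the array, as in the per-pixel averages quoted for the 8×8 array): at the photon detection rate of 2000 photons/s used here, it reproduces the averages of 6.25, 1.25, and 0.156 photons per pixel at 5, 25, and 200 fps, and shows that halving the pixel size (a 16×16 array over the same area) reduces each of these by a further factor of four.

```python
Lambda0 = 2000.0                        # photon detection rate (photons/s)
for frame_rate in (5, 25, 200):         # frames per second
    photons_per_frame = Lambda0 / frame_rate
    for n_pixels in (8 * 8, 16 * 16):   # 16 um pixels vs. 8 um pixels, same area
        print(f"{frame_rate:3d} fps, {n_pixels:3d} pixels: "
              f"{photons_per_frame:6.1f} photons/frame, "
              f"{photons_per_frame / n_pixels:6.3f} photons/pixel on average")
```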

3.3.3. CCD detector with higher spatial resolution yields poorer accuracy

For the CCD imaging scenario, a deterioration rather than an improvement in the limit of accuracy is observed when the spatial resolution is increased. In fact, in each plot of Fig. 6 and throughout the entire range of frame rates shown, the limit of accuracy curve for the higher resolution CCD detector can be seen to be higher, and hence worse, than the curve for the lower resolution CCD detector. This can be attributed to the fact that due to its smaller pixel size, the higher resolution CCD detector captures fewer photons per pixel, and consequently has a lower ratio of signal to readout noise in its pixels compared to the lower resolution CCD detector. The beneficial effect of the higher spatial resolution (i.e., the beneficial effect, owing to the smaller pixel size, of the increased precision with which the positions of the detected photons are recorded) is thus offset by the deteriorative effect of a lowered signal to detector noise ratio in each pixel, just as the advantage of a higher temporal resolution is negated by a lowered signal to detector noise ratio at higher frame rates (see Section 3.1.3).

It is important to note that the relationships observed at the lower frame rates between the various curves in the plots of Fig. 6 are specific to the example, and should not be expected in general. In particular, the fact that the lower resolution CCD detector outperforms the higher resolution CCD detector at the lower frame rates, and the fact that the higher resolution EMCCD detector outperforms or has comparable performance to the lower resolution CCD detector at the lower frame rates, can both be largely attributed to the relatively low photon budget (average of 800 photons for the entire acquired sequence of images) assumed in our example. These relationships could easily be reversed if, for example, the photon budget was higher by a sufficient amount.

4. Conclusions

In the context of fluorescence microscopy, we have investigated the effect of time discretization of the imaging process on the accuracy for estimating parameters pertaining to a non-stationary fluorescent object. For different image data models based on different detector types, we have provided Fisher information matrix expressions from which limits of accuracy for estimating a parameter can be obtained. For the case of a point source moving in a linear trajectory that is confined to the focal plane of a microscope, we have also provided explicit expressions for the fundamental limit of accuracy, which assumes the use of an ideal detector. By comparing limits of accuracy for different data models, we have demonstrated the suitability of an EMCCD detector for imaging at high frame rates, and the appropriateness of a CCD detector for imaging at relatively low frame rates. Importantly, we have shown that by reducing the photon signal in each image pixel to very low levels, imaging with an EMCCD detector at high frame rates is a natural way of implementing UAIM, and hence allows parameter estimation with very high accuracy. In addition, we have demonstrated that the obtainable accuracy can be further improved by increasing the spatial resolution of the EMCCD detector. To provide further illustration of the use of limits of accuracy as a tool for experimental design, we have also examined how the levels of detector and background noise sources might impact the selection of a detector for image acquisition. While the current study has been carried out in the context of fluorescence microscopy, the approach taken and the results presented are also applicable to time-discretized imaging processes found in other areas such as astronomy and computer vision.

Appendix

Theorem 1. Let a sequence of Nf images of a moving photon-emitting object be captured by an ideal detector (i.e., a non-pixelated, noiseless detector with infinite detection area) during the total acquisition time interval [t0, tNf], which is split up into the frame intervals [ti−1, ti], i = 1, 2,..., Nf. The Nf images are captured over the exposure intervals [ti−1, ei], ei ≤ ti, i = 1, 2,..., Nf. Let the trajectory of the object be described by a line within an xy-plane (which is orthogonal to the optical (z-)axis of the imaging system), and let it be parameterized by θ = (x0, y0, ϕ, v) ∈ Θ, where x0 and y0 are the coordinates of the starting position of the object, ϕ is the angle that specifies the object’s direction of movement with respect to the x-axis, v is the speed at which the object moves, and Θ denotes the parameter space, which is an open subset of ℝ⁴. Specifically, the linear trajectory (xθ(τ), yθ(τ)), t0 ≤ τ ≤ tNf, is given by xθ(τ) = x0 + v(τ − t0) cosϕ and yθ(τ) = y0 + v(τ − t0) sinϕ. Let the detection of the object’s photons by the ideal detector be a spatio-temporal random process. The temporal part describes the time points at which the photons are detected, and is represented by a Poisson process with intensity given by the photon detection rate Λ(τ), τ ≥ t0. The spatial part describes the positional coordinates of the detected photons, and is represented by a family of mutually independent random variables that is independent of the temporal Poisson process. For a photon that is detected at time τ, τ ≥ t0, the random variable representing its location of detection is distributed according to the probability density fθ,τ(x, y) = (1/M²) q(x/M − xθ(τ), y/M − yθ(τ)), (x, y) ∈ ℝ², where q is the image function which describes the image of the object at unit lateral magnification when the object is located at the origin of the xy-plane, and M > 0 is the lateral magnification of the imaging system. Let q be radially symmetric, i.e., there exists a function q̃ : ℝ → ℝ such that q(x, y) = q̃(x² + y²) = q̃(r²), (x, y) ∈ ℝ². Let \(\gamma^2 = 4\pi\int_0^\infty \frac{r^3}{\tilde q(r^2)}\left(\frac{\partial \tilde q(r^2)}{\partial r^2}\right)^2 dr\).

1) The fundamental limits of accuracy δx0, δy0, δϕ, and δv for estimating, respectively, the trajectory parameters x0, y0, ϕ, and v, are given by

$$\delta_{x_0} = \delta_{y_0} = \frac{1}{\gamma}\sqrt{\frac{b_3}{b_1 b_3 - b_2^2}},\qquad \delta_\phi = \frac{1}{\gamma v}\sqrt{\frac{b_1}{b_1 b_3 - b_2^2}},\qquad \delta_v = \frac{1}{\gamma}\sqrt{\frac{b_1}{b_1 b_3 - b_2^2}},\tag{8}$$
where
$$b_1 = \sum_{i=1}^{N_f}\int_{t_{i-1}}^{e_i}\Lambda(\tau)\,d\tau,\qquad b_2 = \sum_{i=1}^{N_f}\int_{t_{i-1}}^{e_i}\Lambda(\tau)(\tau - t_0)\,d\tau,\qquad b_3 = \sum_{i=1}^{N_f}\int_{t_{i-1}}^{e_i}\Lambda(\tau)(\tau - t_0)^2\,d\tau.$$
When the image of the object is described by a 2D Gaussian function, such that q(x, y) = (1/(2πσgauss²)) exp(−(x² + y²)/(2σgauss²)), σgauss > 0, (x, y) ∈ ℝ², the term γ is given by γ := 1/σgauss. When the image of the object is described by an Airy function, such that q(x, y) = J1²(2πna√(x² + y²)/λ)/(π(x² + y²)), (x, y) ∈ ℝ², the term γ is given by γ := 2πna/λ. The parameters na and λ are the numerical aperture of the imaging system and the wavelength of the detected photons, respectively, and J1 is the first order Bessel function of the first kind.

2) When the photon detection rate is a constant, i.e., Λ(τ) = Λ0 ∈ ℝ+, t0 ≤ τ ≤ tNf, and when the durations of all Nf frame intervals are equal, i.e., ti − ti−1 = ti+1 − ti, i = 1, 2,..., Nf − 1, and the durations of all Nf exposure intervals are equal, i.e., ei − ti−1 = ei+1 − ti := Te, i = 1, 2,..., Nf − 1, the fundamental limits of accuracy reduce to

$$\delta_{x_0} = \delta_{y_0} = \frac{2}{\gamma}\sqrt{\frac{T_e^2 + \frac{3}{2}T_e\left(T_{\text{tat}} - \frac{1}{F}\right) + \left(T_{\text{tat}}^2 - \frac{3T_{\text{tat}}}{2F} + \frac{1}{2F^2}\right)}{\Lambda_0 F T_{\text{tat}} T_e\left(T_e^2 + T_{\text{tat}}^2 - \frac{1}{F^2}\right)}},\qquad \delta_\phi = \frac{2}{\gamma v}\sqrt{\frac{3}{\Lambda_0 F T_{\text{tat}} T_e\left(T_e^2 + T_{\text{tat}}^2 - \frac{1}{F^2}\right)}},\qquad \delta_v = \frac{2}{\gamma}\sqrt{\frac{3}{\Lambda_0 F T_{\text{tat}} T_e\left(T_e^2 + T_{\text{tat}}^2 - \frac{1}{F^2}\right)}},\tag{9}$$
where Ttat := tNf − t0 denotes the duration of the total acquisition time interval [t0, tNf], and F := Nf/Ttat denotes the acquisition frame rate.
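
A minimal numerical sketch of result 2 follows (our own code, not from the paper); it assumes the acquisition setting of Fig. 2, namely a Gaussian image profile with σgauss = 84 nm so that γ = 1/σgauss, Λ0 = 2000 photons/s, Ttat = 0.4 s, v = 1500 nm/s, and continuous exposure with Te = 1/F.

```python
import numpy as np

def fundamental_limits(gamma, v, Lambda0, T_tat, F, T_e):
    """Fundamental limits of accuracy of Eq. (9) for the linear trajectory."""
    denom = Lambda0 * F * T_tat * T_e * (T_e**2 + T_tat**2 - 1.0 / F**2)
    num_x0 = (T_e**2 + 1.5 * T_e * (T_tat - 1.0 / F)
              + T_tat**2 - 1.5 * T_tat / F + 0.5 / F**2)
    delta_x0 = (2.0 / gamma) * np.sqrt(num_x0 / denom)    # = delta_y0
    delta_v = (2.0 / gamma) * np.sqrt(3.0 / denom)
    return delta_x0, delta_v / v, delta_v                 # (delta_x0, delta_phi, delta_v)

gamma = 1.0 / 84.0                       # 1/sigma_gauss (1/nm), Gaussian image profile
v, Lambda0, T_tat = 1500.0, 2000.0, 0.4  # nm/s, photons/s, s (Fig. 2 values)
for F in (5.0, 25.0, 200.0):             # frame rate (fps); no gaps, so T_e = 1/F
    dx0, dphi, dv = fundamental_limits(gamma, v, Lambda0, T_tat, F, 1.0 / F)
    print(f"{F:5.0f} fps: delta_x0 = {dx0:.2f} nm, "
          f"delta_phi = {dphi:.5f} rad, delta_v = {dv:.1f} nm/s")
# With T_e = 1/F the dependence on F cancels, and delta_x0 stays at
# 2*sigma_gauss/sqrt(Lambda0*T_tat), approximately 5.9 nm, the fundamental
# limit quoted in Section 3.3.
```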

Proof of Theorem 1

1) It has been shown in [10] that given the conditions specified by the theorem, the general Fisher information matrix expression of Eq. (4) (with the background photon detection rate set to Λb(τ) = 0, t0 ≤ τ ≤ tNf) simplifies to

$$I(\theta) = 4\pi\int_0^\infty \frac{r^3}{\tilde q(r^2)}\left(\frac{\partial \tilde q(r^2)}{\partial r^2}\right)^2 dr \, \sum_{i=1}^{N_f}\int_{t_{i-1}}^{e_i}\Lambda(\tau)\begin{bmatrix}\dfrac{\partial x_\theta(\tau)}{\partial\theta}\\[4pt]\dfrac{\partial y_\theta(\tau)}{\partial\theta}\end{bmatrix}^T\begin{bmatrix}\dfrac{\partial x_\theta(\tau)}{\partial\theta}\\[4pt]\dfrac{\partial y_\theta(\tau)}{\partial\theta}\end{bmatrix}d\tau,$$
which can be rewritten as
$$I(\theta) = \gamma^2\sum_{i=1}^{N_f}\int_{t_{i-1}}^{e_i}\Lambda(\tau)\begin{bmatrix}\dfrac{\partial x_\theta(\tau)}{\partial\theta}\\[4pt]\dfrac{\partial y_\theta(\tau)}{\partial\theta}\end{bmatrix}^T\begin{bmatrix}\dfrac{\partial x_\theta(\tau)}{\partial\theta}\\[4pt]\dfrac{\partial y_\theta(\tau)}{\partial\theta}\end{bmatrix}d\tau,\tag{10}$$
since \(\gamma^2 = 4\pi\int_0^\infty \frac{r^3}{\tilde q(r^2)}\left(\frac{\partial \tilde q(r^2)}{\partial r^2}\right)^2 dr\). For the linear trajectory xθ(τ) = x0 + v(τ − t0) cosϕ, yθ(τ) = y0 + v(τ − t0) sinϕ, t0 ≤ τ ≤ tNf, the partial derivatives in Eq. (10) evaluate, for θ = (x0, y0, ϕ, v) ∈ Θ, to
$$\frac{\partial x_\theta(\tau)}{\partial\theta} = \begin{bmatrix}1 & 0 & -v(\tau - t_0)\sin\phi & (\tau - t_0)\cos\phi\end{bmatrix},\qquad \frac{\partial y_\theta(\tau)}{\partial\theta} = \begin{bmatrix}0 & 1 & v(\tau - t_0)\cos\phi & (\tau - t_0)\sin\phi\end{bmatrix},$$
and Eq. (10) becomes
$$I(\theta) = \gamma^2\begin{bmatrix} b_1 & 0 & -b_2 v\sin\phi & b_2\cos\phi\\ 0 & b_1 & b_2 v\cos\phi & b_2\sin\phi\\ -b_2 v\sin\phi & b_2 v\cos\phi & b_3 v^2 & 0\\ b_2\cos\phi & b_2\sin\phi & 0 & b_3 \end{bmatrix},$$
where \(b_1 = \sum_{i=1}^{N_f}\int_{t_{i-1}}^{e_i}\Lambda(\tau)\,d\tau\), \(b_2 = \sum_{i=1}^{N_f}\int_{t_{i-1}}^{e_i}\Lambda(\tau)(\tau - t_0)\,d\tau\), and \(b_3 = \sum_{i=1}^{N_f}\int_{t_{i-1}}^{e_i}\Lambda(\tau)(\tau - t_0)^2\,d\tau\). Since this Fisher information matrix is similar in form to the Fisher information matrix found in the proof of Corollary 4 in [10], it can be inverted using the approach taken there. The fundamental limits of accuracy in Eq. (8) are then obtained by taking the square root of each of the four main diagonal elements of the inverted matrix.
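
As a numerical sanity check on this inversion (a sketch of our own, not code from [10]; the 25 fps frame rate is an arbitrary choice, and the remaining values are those of Fig. 2), the matrix above can be built and inverted directly, and the square roots of the diagonal of its inverse reproduce the closed-form limits of Eq. (8).

```python
import numpy as np

def fisher_matrix_linear(gamma, phi, v, b1, b2, b3):
    """4x4 Fisher information matrix for theta = (x0, y0, phi, v)."""
    s, c = np.sin(phi), np.cos(phi)
    return gamma**2 * np.array([
        [b1,           0.0,          -b2 * v * s,  b2 * c],
        [0.0,          b1,            b2 * v * c,  b2 * s],
        [-b2 * v * s,  b2 * v * c,    b3 * v**2,   0.0   ],
        [b2 * c,       b2 * s,        0.0,         b3    ],
    ])

# Continuous acquisition with the Fig. 2 settings, at an (arbitrary) 25 fps.
Lambda0, T_tat, Nf = 2000.0, 0.4, 10        # photons/s, s, frames (10 / 0.4 s = 25 fps)
F = Nf / T_tat                              # frame rate (fps)
Te = 1.0 / F                                # exposure = frame duration (no gaps)
t_start = np.arange(Nf) * T_tat / Nf        # frame start times t_{i-1} - t_0
b1 = Nf * Lambda0 * Te
b2 = Lambda0 * Nf * Te**2 / 2 + Lambda0 * Te * t_start.sum()
b3 = (Lambda0 * Nf * Te**3 / 3 + Lambda0 * Te**2 * t_start.sum()
      + Lambda0 * Te * (t_start**2).sum())

gamma, phi, v = 1.0 / 84.0, np.deg2rad(30.0), 1500.0   # Fig. 2 values
limits = np.sqrt(np.diag(np.linalg.inv(fisher_matrix_linear(gamma, phi, v, b1, b2, b3))))
print(limits)   # [delta_x0, delta_y0, delta_phi, delta_v] in nm, nm, rad, nm/s;
                # delta_x0 is approximately 5.9 nm, in agreement with Eq. (8).
```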

2) When the photon detection rate Λ(τ) = Λ0 ∈ ℝ+, t0 ≤ τ ≤ tNf, and the durations of all Nf exposure intervals are equal, i.e., ei − ti−1 = ei+1 − ti := Te, i = 1, 2,..., Nf − 1, the terms b1, b2, and b3 from result 1 of this theorem can be expressed in terms of the photon detection rate Λ0, the number of frames Nf, and the duration Te of the exposure interval:

$$b_1 = \sum_{i=1}^{N_f}\int_0^{e_i - t_{i-1}}\Lambda_0\,d\tau = \sum_{i=1}^{N_f}\int_0^{T_e}\Lambda_0\,d\tau = N_f\Lambda_0 T_e,\tag{11}$$
$$b_2 = \sum_{i=1}^{N_f}\int_0^{e_i - t_{i-1}}\Lambda_0(\tau + t_{i-1} - t_0)\,d\tau = \frac{\Lambda_0 N_f T_e^2}{2} + \Lambda_0 T_e\sum_{i=1}^{N_f}(t_{i-1} - t_0),\tag{12}$$
$$b_3 = \sum_{i=1}^{N_f}\int_0^{e_i - t_{i-1}}\Lambda_0(\tau + t_{i-1} - t_0)^2\,d\tau = \frac{\Lambda_0 N_f T_e^3}{3} + \Lambda_0 T_e^2\sum_{i=1}^{N_f}(t_{i-1} - t_0) + \Lambda_0 T_e\sum_{i=1}^{N_f}(t_{i-1} - t_0)^2.\tag{13}$$
Since the durations of all Nf frame intervals are equal, we have ti−1 − t0 = (i − 1)Ttat/Nf, where Ttat := tNf − t0 is the duration of the total acquisition time interval. Accordingly, the sums \(\sum_{i=1}^{N_f}(t_{i-1} - t_0)\) and \(\sum_{i=1}^{N_f}(t_{i-1} - t_0)^2\) in Eqs. (12) and (13) can be expressed in terms of Ttat and Nf:
$$\sum_{i=1}^{N_f}(t_{i-1} - t_0) = \frac{T_{\text{tat}}(N_f - 1)}{2},\qquad \sum_{i=1}^{N_f}(t_{i-1} - t_0)^2 = \frac{T_{\text{tat}}^2(N_f - 1)(2N_f - 1)}{6N_f}.\tag{14}$$
Substituting the identities in Eq. (14) into Eqs. (12) and (13), we obtain
$$b_2 = \frac{\Lambda_0 N_f T_e^2}{2} + \frac{\Lambda_0 T_e T_{\text{tat}}(N_f - 1)}{2},\qquad b_3 = \frac{\Lambda_0 N_f T_e^3}{3} + \frac{\Lambda_0 T_e^2 T_{\text{tat}}(N_f - 1)}{2} + \frac{\Lambda_0 T_e T_{\text{tat}}^2(N_f - 1)(2N_f - 1)}{6N_f}.\tag{15}$$
Substituting the expressions in Eqs. (11) and (15) into the expressions in Eq. (8), we obtain the fundamental limits of accuracy in Eq. (9).

Acknowledgments

This research was supported in part by the National Institutes of Health (R01 GM085575). We thank Dongyoung Kim for his expert assistance with figure preparation.

References and links

1. W. Yang and S. M. Musser, “Visualizing single molecules interacting with nuclear pore complexes by narrow-field epifluorescence microscopy,” Methods 39(4), 316–328 (2006).

2. T. Dange, D. Grünwald, A. Grünwald, R. Peters, and U. Kubitscheck, “Autonomy and robustness of translocation through the nuclear pore complex: a single-molecule study,” J. Cell Biol. 183(1), 77–86 (2008).

3. X. Nan, P. A. Sims, P. Chen, and X. S. Xie, “Observation of individual microtubule motor steps in living cells with endocytosed quantum dots,” J. Phys. Chem. B 109(51), 24220–24224 (2005).

4. V. Levi, Q. Ruan, M. Plutz, A. S. Belmont, and E. Gratton, “Chromatin dynamics in interphase cells revealed by tracking in a two-photon excitation microscope,” Biophys. J. 89(6), 4275–4285 (2005).

5. J. Rink, E. Ghigo, Y. Kalaidzidis, and M. Zerial, “Rab conversion as a mechanism of progression from early to late endosomes,” Cell 122(5), 735–749 (2005).

6. M. P. Taylor, R. Kratchmarov, and L. W. Enquist, “Live cell imaging of alphaherpes virus anterograde transport and spread,” J. Vis. Exp. (78), e50723 (2013).

7. J. Chao, S. Ram, E. S. Ward, and R. J. Ober, “Ultrahigh accuracy imaging modality for super-localization microscopy,” Nat. Methods 10(4), 335–338 (2013).

8. C. R. Rao, Linear Statistical Inference and its Applications (Wiley, 1965).

9. S. Ram, E. S. Ward, and R. J. Ober, “A stochastic analysis of performance limits for optical microscopes,” Multidim. Syst. Sign. P. 17(1), 27–57 (2006).

10. Y. Wong, Z. Lin, and R. J. Ober, “Limit of the accuracy of parameter estimation for moving single molecules imaged by fluorescence microscopy,” IEEE T. Signal Proces. 59(3), 895–911 (2011).

11. S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory (Prentice Hall PTR, 1993), Vol. I.

12. J. Chao, E. S. Ward, and R. J. Ober, “Fisher information matrix for branching processes with application to electron-multiplying charge-coupled devices,” Multidim. Syst. Sign. P. 23(3), 349–379 (2012).

13. B. Zhang, J. Zerubia, and J.-C. Olivo-Marin, “Gaussian approximations of fluorescence microscope point-spread function models,” Appl. Optics 46(10), 1819–1829 (2007).

14. A. Santos and I. T. Young, “Model-based resolution: applying the theory in quantitative microscopy,” Appl. Optics 39(17), 2948–2958 (2000).

15. M. Born and E. Wolf, Principles of Optics (Cambridge University, 1999).

16. A. G. Basden, C. A. Haniff, and C. D. Mackay, “Photon counting strategies with low-light-level CCDs,” Mon. Not. R. Astron. Soc. 345(3), 985–991 (2003).
