## Abstract

Localization-based microscopy using self-interference digital holography (SIDH) provides three-dimensional (3D) positional information about point sources with nanometer-scale precision. To understand the performance limits of SIDH, here we calculate the theoretical limit to localization precision for SIDH in two different configurations. One configuration creates the hologram using a plane wave and a spherical wave, while the second creates the hologram using two spherical waves. We further compare the calculated precision bounds to the 3D single-molecule localization precision from different point spread functions (PSFs). SIDH yields almost constant localization precision in all three dimensions over a 20 µm depth of field. For high signal-to-background ratio (SBR), SIDH on average achieves better localization precision. For lower SBR values, the large size of the hologram on the detector becomes a problem, and PSF models perform better.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

In widefield fluorescence microscopy, each image captured by the camera records a two-dimensional, in-focus slice of the object. To acquire a three-dimensional image, 2D images must be acquired as the focal plane is moved through the sample. With holography, a two-dimensional image of the complex field is captured. Because both the amplitude and phase of the field are recorded, an image can be numerically refocused to any axial position in the sample. This allows for rapid volumetric imaging [1].

Traditional holography, however, requires coherent light and is not compatible with fluorescence. In Self-Interference Digital Holography (SIDH), the light from the sample is interfered with itself rather than with a reference beam, so coherence is no longer required [2]. SIDH enables imaging of incoherently emitting objects over large axial ranges without refocusing and can provide unique properties such as infinite depth of field [3,4], violation of the Lagrange invariant [5,6] and edge enhancement with a vortex filter [7]. SIDH works by dividing the incoherent light emitted from the sample into two beams that are separately phase-modulated and recombined in a common plane to produce interference fringes. The axial position of the object is encoded in the density of the interference fringes. Three images are collected, with the phase of one path shifted for each hologram. The three images are then combined to create the final hologram, eliminating both the bias image and the holographic "twin image." SIDH can be used for a variety of imaging applications including particle tracking [8,9] and high resolution fluorescence imaging of biological samples [10].

In modern fluorescence microscopy, single particle tracking (SPT) and single molecule localization microscopy (SMLM) have become essential tools for imaging and tracking particles at resolutions exceeding the diffraction limit, sometimes by over an order of magnitude [11–13]. In SPT and SMLM, the point spread functions (PSFs) of individual fluorophores are imaged and localized. By fitting the PSF to a model function, the center of the PSF can be determined with much greater accuracy than the diffraction limit. In SMLM, single molecules emitting a few thousand photons are imaged; in SPT, the emitters can also be fluorescent microspheres emitting tens of thousands of photons [14]. For 3D SPT or SMLM, it is necessary to extract the positional information of the single emitter with high precision in all three dimensions. Unfortunately, the recorded two-dimensional PSF of a standard fluorescence microscope does not change rapidly with defocus and therefore does not provide sufficient information about the axial position of the emitter. To address this issue, a variety of techniques have been developed over the past decade to facilitate unambiguous 3D localization of single emitters with high precision. One approach is to modify the microscope to make multiple simultaneous measurements of the 3D PSF at different focal planes [15–17]. Another idea is to extract the axial position of the emitter using other properties such as the intensity gradient of the excitation field [18], the near-field coupling of the emission to the coverslip [19,20] or the phase information of the emitted light [21]. PSF engineering relies on altering the PSF of the microscope to encode the axial position of an emitter in its shape. The most widely used PSF engineering technique uses a cylindrical lens to impart astigmatism to the system, producing an elliptical PSF that encodes the axial position of the emitter [22]. 
Other designs include the rotating double-helix PSF [23], the corkscrew PSF [24], the phase ramp PSF [25], the self-bending PSF [26] and the saddle-point and tetrapod PSFs [27,28]. All these engineered PSFs have distinct features that change rapidly with defocus, thereby encoding the axial position of the emitter.

The ability of any particular method to determine the position of a single emitter can be quantified by calculating the Cramér-Rao lower bound (CRLB) from the Fisher information matrix [29–31]. This calculation yields the theoretically best precision that can be achieved for a given imaging model and has been performed for most single-molecule localization techniques. The bound is useful because it (1) serves as a benchmark against which the precision provided by a particular estimator can be compared, indicating how much room there might be for improvement, and (2) when calculated under different experimental conditions, helps in designing experiments, since various parameters of the optical setup can be varied to achieve the desired level of precision. Calculation of the CRLB requires computation of the Fisher information matrix, which measures the amount of information the data contains about the parameters being estimated: the precision with which a parameter can be estimated improves with the amount of information the collected data contains about it. The amount of information is determined by considering how the likelihood of obtaining a set of measurements in the presence of noise changes with the value of the parameter of interest. If the parameter of interest is the position of the emitter and the data collected is the response of the optical system, the Fisher information matrix quantifies how the likelihood of the acquired data varies with changes in the position of the emitter. If the likelihood is very sensitive to such changes, then the data contains a relatively large amount of information about the position, which can therefore be estimated with relatively high precision.
The CRLB for a given parameter (e.g., position) is obtained from the inverse of the Fisher information matrix, confirming the expectation that a large amount of information about a parameter should result in a smaller bound on the variance with which it can be estimated [32]. The CRLB has been calculated for many different PSF models to determine the precision with which a single emitter can be localized [32–38]. The saddle-point and tetrapod PSFs were developed by optimizing the Fisher information of an engineered PSF over a certain axial range, thereby generating a PSF that contains the maximum amount of information about the position of the emitter.
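As a concrete numerical illustration (ours, not from the paper): for an image with Poisson statistics and expected pixel counts $\mu_k(\theta)$, the Fisher information elements take the standard form $F_{ij}=\sum_k (\partial \mu_k/\partial \theta_i)(\partial \mu_k/\partial \theta_j)/\mu_k$. A minimal sketch for a pixelated 2D Gaussian spot, with our own choices of grid and sampling, recovers the familiar $\sigma_0/\sqrt{N}$ lateral bound:

```python
import numpy as np

def crlb_xy(psf, dpsf_dx, dpsf_dy, N):
    """CRLB for (x, y) from a pixelated PSF under Poisson noise.

    psf: normalized PSF sampled on the detector grid (sums to ~1).
    dpsf_d*: derivatives of the PSF with respect to the emitter position.
    Returns the per-axis standard-deviation bounds sqrt(diag(F^-1)).
    """
    mu = N * psf                            # expected photons per pixel
    d = [N * dpsf_dx, N * dpsf_dy]
    F = np.array([[np.sum(di * dj / np.maximum(mu, 1e-12)) for dj in d]
                  for di in d])
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Toy example: Gaussian spot, sigma = 122 nm, 10 nm sampling, N = 6000 photons
sigma, pix, N = 122.0, 10.0, 6000
ax = np.arange(-1000.0, 1000.0, pix)
X, Y = np.meshgrid(ax, ax)
g = np.exp(-(X**2 + Y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2) * pix**2
crlb = crlb_xy(g, (X / sigma**2) * g, (Y / sigma**2) * g, N)
print(crlb)  # both entries ~ sigma / sqrt(N), i.e. about 1.6 nm
```

The 122 nm spot width and 6000 photons match parameter values used later in the paper; the analytic Gaussian derivatives make the sum reduce to $F_{xx} = N/\sigma^2$.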

An attractive feature of SIDH is that the hologram is converted into an image at a particular z-plane using a mathematically well-defined kernel. The image of a point source is then approximately Gaussian in shape with width $\sim \frac {\lambda }{2NA}$ and can be localized using standard single molecule localization (SML) algorithms. In contrast to some of the engineered PSFs, the reconstructed PSF is no larger than a standard PSF, reducing the likelihood of overlapping PSFs. And, because the hologram can be numerically refocused, the emitter can be localized in the axial direction as well. Recently, we investigated single particle localization of fluorescent beads using SIDH [39]. We demonstrated that holograms can be reconstructed into images with good SNR with as few as 15,000 photons, and achieved localization precisions of 5 nm laterally and 40 nm axially from 50,000 photons.

The CRLB has not yet been calculated for SIDH, and understanding the CRLB will be useful to researchers further developing holographic approaches to SPT and SMLM. Here we ask whether SIDH can provide high localization precision over a large axial range and compare the performance of SIDH to PSF-based localization. We use the Fisher information matrix to calculate the best-case localization precision using SIDH for a 20 $\mu$m axial range. We compare our results to different PSF models used in SPT and SMLM. We show that SIDH compares favorably in the case of low-background noise but suffers due to the large hologram size when background noise becomes appreciable.

## 2. Methods

In this section we first provide a theoretical analysis of SIDH. We then introduce the concept of Fisher information and the Cramér-Rao lower bound (CRLB) and describe our CRLB calculations for SIDH.

#### 2.1 Theoretical analysis of SIDH

Following the convention in [2], and referring to the generalized diagram in Fig. 1, we consider a point source at ($\bar {r_{s}},z_{s})$ in front of the objective lens with focal length $f_{o}$. Light from the point source propagates through the objective lens to the diffractive optical element (DOE), which displays the phase patterns of two quadratic phase functions with focal lengths $f_{d1}$ and $f_{d2}$ and focuses the two resulting beams. After further propagation over a distance $z_{h}$ to the camera, the complex amplitude at the camera is given by

The final complex hologram $I_{\textrm {final}}(x,y)$ can be numerically back-propagated to produce a reconstructed 3D image $s(x,y,z)$ of the sample.

where $z$, which is a function of $z_{r}$, is the distance of the emitter from the objective focal plane.

#### 2.2 Calculating the CRLB for the SIDH imaging model

The CRLB is the limiting lower bound of the variance for any unbiased estimator and is given by the inverse of the Fisher information matrix,

$$S_{\theta} \geq \sqrt{\left[F(\theta)^{-1}\right]_{\theta\theta}}$$

where $S_{\theta }$ is the standard deviation of the estimator, $F(\theta )$ is the Fisher information matrix and $\theta$ are the parameters being estimated; in this paper $\theta$ = [$x,y,z$]. Equality corresponds to the minimum achievable variance, which is referred to as the CRLB. The elements of the Fisher information matrix are given by [29,30]

#### 2.3 Calculating the CRLB for PSF imaging models

We also calculate the CRLB for the PSF model from scalar diffraction theory, the Cropped Oblique Secondary Astigmatism (COSA) PSF [14], and Gaussian and astigmatic Gaussian approximations to the PSF. The scalar PSF and COSA PSF are based on Fourier optics theory. The COSA PSF is similar to the tetrapod PSF and provides high axial localization precision over an adjustable axial range. The Gaussian PSFs are approximations to the “true” PSF that can be calculated quickly and are frequently used in SPT and SMLM to localize the PSF [33]. For all these calculations we use a numerical aperture (NA) of 1.42, an emission wavelength of 670 nm, and a refractive index of 1.515. For the numerical simulation of the Gaussian PSF, we have matched our simulations to the cases previously discussed [33]. The focal depth for a high-NA imaging system is given by $d$ = $\lambda$/($n$(1-(1- NA$^{2}$/$n^{2}$)$^{1/2}$)) $\approx$ 679 nm [41] and the spot size is $\sigma _{0}$ $\approx$ $\lambda$/(4NA$\sqrt {2\ln 2}$) $\approx$ 122 nm (at $z$ = 0). For simulating the astigmatic PSF, the amount of astigmatism introduced in the system was assumed to be $\gamma$ = 400 nm. The focal depth and the spot size are the same as in the Gaussian case. The three imaging setups have been designed to have the same overall magnification, $M_{T}$ = 50. For the COSA PSF, the adjustable parameter $\alpha$ is set to 11.46, which corresponds to a 15 $\mu$m axial range for our parameters. Details of these calculations can be found in the appendix, Secs. 7.2–7.4.
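As a quick sanity check (ours, not from the paper), the focal-depth formula above can be evaluated directly; with the 1.42 NA objective parameters used for the SIDH simulations in Sec. 3.1, it reproduces the quoted value of approximately 679 nm:

```python
import math

# Parameters matching the paper's SIDH simulations (Sec. 3.1)
wavelength = 670.0  # emission wavelength, nm
n = 1.515           # immersion refractive index
NA = 1.42           # objective numerical aperture

# Focal depth d = lambda / (n * (1 - sqrt(1 - NA^2/n^2)))  [41]
d = wavelength / (n * (1 - math.sqrt(1 - NA**2 / n**2)))
print(round(d))  # 679 (nm)
```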

#### 2.4 Effects of background noise

The calculated fundamental limits of localization precision assume a best-case scenario for the acquisition system: the expressions derived above give the best possible localization precision in the absence of deteriorating factors such as background noise. The elements of the Fisher information matrix in the case where the PSF is corrupted by background noise were calculated analogously to the ideal case and are given by
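A minimal sketch (our illustration, with our own sampling choices) of how a uniform background enters: each pixel's expected count becomes $\mu_k + b$, where $b$ is the background in photons per pixel. Since the background does not depend on the emitter position, it appears only in the denominator of the Fisher sum and therefore inflates the bound:

```python
import numpy as np

def crlb_x_with_background(sigma=122.0, N=6000, b=0.0, pixel=10.0,
                           half_width=2000.0):
    """Lateral CRLB for a 1D Gaussian spot with b background photons/pixel."""
    x = np.arange(-half_width, half_width, pixel)  # pixel centers, nm
    g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    mu = N * g * pixel + b                  # signal + uniform background
    dmu_dx = N * (x / sigma**2) * g * pixel # background does not depend on x
    return 1.0 / np.sqrt(np.sum(dmu_dx**2 / mu))

# The bound degrades monotonically as the background grows.
for b in (0, 1, 10):
    print(b, round(crlb_x_with_background(b=b), 2))
```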

## 3. Results and discussion

#### 3.1 Configuration 1: SIDH with one plane wave and one spherical wave

We first review the simplest case in SIDH, where the hologram is formed as a result of the interference between one spherical wave and one plane wave. Referring to Fig. 2, $f_{d1}$ is responsible for the spherical wave and the plane wave is created when $f_{d2} \rightarrow \infty$. In this case the reconstruction distance ($z_{r}$) and the transverse magnification ($M_{T}$) are given by

Using Eqs. (13)–(16) we calculate various parameters of an SIDH system, such as the radius of the hologram ($r_{h}$), the transverse magnification of the system ($M_{T}$) and the reconstruction distance ($z_{r}$), for different DOE-CCD distances ($z_{h}$) (shown in Fig. 3). To simulate the SIDH system, a 60x, 1.42 numerical aperture (NA) objective is assumed ($f_{o}$ = 3 mm), imaging into an index of 1.515. The wavelength of light is 670 nm. The distance between the objective lens and the DOE is $d$ = 3 mm, and the focal lengths of the quadratic phase patterns are $f_{d1}$ = 300 mm and $f_{d2}$ = $\infty$. The CRLB for SIDH was calculated for $z_{h}$ = 150 mm; when the camera is placed farther than this, the hologram comes to a focus within the 20 $\mu$m axial range, thereby reducing the achievable axial range.
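Equations (13)–(16) are general in $z_{s}$; as a simplified check (ours, valid only for an emitter at the front focal plane), the standard FINCH results give $M_{T} = z_{h}/f_{o}$ and a reconstruction distance equal to the curvature radius of the spherical arm at the camera, which reproduces the design magnification of 50:

```python
# Plane-wave + spherical-wave configuration, emitter at the front focal plane.
# (A simplified in-focus check; the paper's Eqs. (13)-(16) cover general z_s.)
f_o = 3.0     # objective focal length, mm
f_d1 = 300.0  # DOE quadratic-phase focal length, mm
z_h = 150.0   # DOE-to-camera distance, mm

M_T = z_h / f_o   # transverse magnification for an in-focus emitter
z_r = f_d1 - z_h  # reconstruction distance: wavefront-curvature radius of the
                  # spherical arm at the camera (the other arm is a plane wave)
print(M_T, z_r)   # 50.0 150.0
```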

Figure 4 compares the $x$, $y$, and $z$ localization precision of SIDH with the Gaussian PSF and the astigmatic PSF as a function of the axial position $z_{s}$ with respect to focus. The first row [(A)-(C)] corresponds to $N$=6000 photons in the absence of background noise ($\beta$ = 0 photons/mm$^2$). The second row [(D)-(F)] shows the localization precision typical for single molecule imaging, with $N$=6000 photons and background noise $\beta$ = 60,000 photons/mm$^2$ ($\approx$ 10 photons/pixel) in the image space. The third row [(G)-(I)] shows the localization precision typical for imaging a fluorescent bead, with $N$=50,000 photons and the same background. In the absence of noise, SIDH provides $<$20 nm precision and more uniform localization precision than the other two PSF models in all three dimensions throughout the entire 20 $\mu$m depth of field. This is a direct result of the nearly constant rate at which the hologram varies with emitter position in all three dimensions, which makes SIDH well suited for 3D localization-based microscopy. In the low-SBR case ($N$=6000, $\beta$=10 photons/pixel in Fig. 4(F)), SIDH does not perform as well as the Gaussian and astigmatic PSFs over the 20 $\mu$m range, making it less suitable for 3D imaging at low SBR. In the high-SBR case ($N$=50,000, $\beta$=10 photons/pixel in Fig. 4(I)), the axial localization precision provided by SIDH outperforms the astigmatic and Gaussian PSFs when imaging away from focus, making it suitable for single particle tracking applications.

#### 3.2 Configuration 2: SIDH with two spherical waves

SIDH has also been performed using two spherical waves with different focal lengths $f_{d1}$ and $f_{d2}$ to create the interferogram [44]. This configuration, shown in Fig. 5, offers the possibility of placing the camera closer to the DOE, allowing greater light efficiency without adversely affecting the resolution in the reconstructed image. When SIDH is performed with this configuration, the reconstruction distance ($z_{r}$) is given by
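For an emitter at the front focal plane, a hedged sketch of where this reconstruction distance comes from (ours, not the paper's general expression): each arm arrives at the camera as a spherical wave with curvature radius $f_{d} - z_{h}$, and the resulting zone pattern focuses at $1/z_{r} = 1/R_{1} - 1/R_{2}$. With the focal lengths used in the simulations of this section:

```python
# Two-spherical-wave configuration, emitter at the front focal plane
# (in-focus sketch from the wavefront-curvature difference at the camera).
f_d1, f_d2, z_h = 200.0, 400.0, 150.0  # mm, values used in this section

R1 = f_d1 - z_h  # curvature radius of arm 1 at the camera
R2 = f_d2 - z_h  # curvature radius of arm 2 at the camera
z_r = 1.0 / (1.0 / R1 - 1.0 / R2)  # focal length of the resulting zone pattern
print(z_r)  # about 62.5 mm
```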

The transverse magnification ($M_{T}$) for SIDH with two spherical waves is the same as that with one spherical wave and one plane wave, as described previously in Eq. (14). The ABCD ray transfer matrices for the system shown in Fig. 5 are given by
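The ray-transfer formalism referenced here can be sketched numerically (a generic illustration, not the paper's specific matrices): each arm is a product of thin-lens and free-space matrices, and composing them reproduces, for example, the collimation of rays leaving a point at the front focal plane of the objective:

```python
import numpy as np

def lens(f):
    """Thin-lens ray-transfer matrix acting on a ray vector [height, angle]."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def prop(d):
    """Free-space propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

# One arm of the system in Fig. 5: emitter -> objective (f_o) -> distance d
# -> DOE acting as a lens (f_d1) -> distance z_h to the camera.
f_o, d, f_d1, z_h = 3.0, 3.0, 200.0, 150.0  # mm
M = prop(z_h) @ lens(f_d1) @ prop(d) @ lens(f_o) @ prop(f_o)
ray_out = M @ np.array([0.0, 0.01])  # on-axis point, launch angle 0.01 rad

# Sanity check on the formalism: after the objective alone, rays from a point
# at the front focal plane emerge collimated (output angle ~ 0).
after_objective = lens(f_o) @ prop(f_o) @ np.array([0.0, 0.01])
print(after_objective[1])  # ~ 0 (collimated)
```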

The simulations to calculate the CRLB using this configuration are the same as those described in the previous section; the focal lengths of the quadratic phase patterns are $f_{d1}$ = 200 mm and $f_{d2}$ = 400 mm. Figure 7 compares the $x$, $y$, and $z$ localization precision of SIDH using the second configuration with the Gaussian PSF and the astigmatic PSF as a function of the axial position $z_{s}$ with respect to focus. The top row (Fig. 7 [(A)-(C)]) shows the CRLB comparisons for $N$=6000 photons in the absence of background noise, the middle row (Fig. 7 [(D)-(F)]) for $N$=6000 photons with background noise $\beta$ = 60,000 photons/mm$^2$ ($\approx$ 10 photons/pixel) and the third row (Fig. 7 [(G)-(I)]) for $N$=50,000 photons with the same background.

The precision bounds for SIDH using this configuration are similar to those of the first configuration (with one plane wave and one spherical wave); however, in the high-SBR case, Fig. 7(I), this configuration provides better axial localization precision than the astigmatic and Gaussian PSFs over the entire axial range. In general, Fig. 7 shows that SIDH with configuration 2 performs better than SIDH with configuration 1. This can be attributed to the smaller hologram size throughout the axial range in configuration 2, as seen in Figs. 3 and 6. Since the number of photons contained in each of the holograms is the same, the larger hologram of configuration 1 collects more background noise, thereby degrading the localization precision.

In Fig. 8, we calculate the CRLB for SIDH using configurations 1 and 2 as a function of background. For each configuration, we plot the best and worst precision over the axial range as the background increases up to $\beta$ = 10$^{5}$ photons/mm$^{2}$ ($\approx$ 20 photons/pixel). Figure 8 shows that while SIDH is successful at providing nm-scale precision across large axial ranges, the technique is highly sensitive to noise. As shown in Fig. 4(D-F), the SIDH CRLB has a negative slope with respect to the axial position. This is due to the change in the size of the hologram across the axial range: the hologram is largest when the emitter is -10 $\mu$m from the focal plane. Therefore the worst precision occurs at -10 $\mu$m and the best at +10 $\mu$m.

Finally, we compare the localization precision of SIDH to the COSA PSF. The COSA PSF provides close to optimal precision over a given axial range. In Fig. 9 we show comparisons for the case of no background and for the case of a background of 60,000 photons/mm$^2$ ($\approx$ 10 photons / pixel) in the image plane. Again, we see that SIDH compares favorably in the case of no background, but does not perform as well with background.

## 4. Localization of SIDH data

In this section we compare both simulated and measured localization data to the CRLB. We first perform single molecule fitting of simulated incoherent holograms produced by a single particle emitting $N$ = 6000 photons over three images in the presence of background noise $\beta$ = 1000 photons/mm$^2$. We measure the precision of the localization by performing the localization 50 times for each axial position and comparing the standard deviation of the measurements to the CRLB values calculated above for configuration 1. The synthetic data was produced using the equations described in Sec. 2.1, and the fitting of the reconstructed images was performed using custom-written Python code described in [39], although here we localize the z coordinate by finding the maximum intensity along the z direction rather than the minimum spot width.
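The axial localization step just described (finding the intensity maximum along z) can be sketched as a parabolic fit near the peak of the reconstructed intensity profile. This is our minimal illustration; the actual code in [39] differs in detail:

```python
import numpy as np

def axial_peak(z, intensity):
    """Locate the axial intensity maximum by a parabolic fit near the peak.

    Fits I(z) ~ a*z^2 + b*z + c to the samples around the brightest plane and
    returns the vertex -b / (2a) as the sub-plane z estimate.
    """
    k = int(np.argmax(intensity))
    lo, hi = max(k - 3, 0), min(k + 4, len(z))  # small window around the peak
    a, b, c = np.polyfit(z[lo:hi], intensity[lo:hi], 2)
    return -b / (2 * a)

# Synthetic axial profile peaked at z = 1.25 um, sampled every 0.25 um
z = np.arange(-5.0, 5.0, 0.25)
profile = np.exp(-(z - 1.25) ** 2 / (2 * 1.0 ** 2))
print(axial_peak(z, profile))  # close to 1.25
```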

Figure 10 shows that our code for fitting the data comes close to the CRLB in the lateral dimensions ($x$-$y$). In the axial dimension ($z$), the measured standard deviations are roughly twice the calculated CRLB values. The lateral ($x$-$y$) coordinates are found by fitting a 2D Gaussian function to the images formed by reconstructing the holograms. Since this algorithm converges to the maximum likelihood estimate (MLE) of the position, these coordinates reach the CRLB [32]. The axial ($z$) coordinate, however, is found by fitting a curve to the intensity values in the axial direction and locating the maximum intensity. We speculate that this is why the reported axial localization precision does not reach the CRLB, and that if we fit the data to an imaging model for the axial PSF, the observed axial localization precision would reach the CRLB.

An analysis of experimental bead data is shown in Fig. 11. A single 0.2 $\mu$m dark red (660/680) fluorescent microsphere (Invitrogen) was pumped with a 647 nm laser (Coherent) at an irradiance of 1.2 kW/cm$^{2}$, and each hologram was acquired in one 30-ms frame. The calibrated EM gain setting was 500, and an average of 60,000 photons was detected per estimation (each consisting of 3 images). The average background noise was measured to be $\beta$ = 10,000 photons/mm$^2$ ($\sim$ 2 photons/pixel). 100 frames were collected at five different $z$-positions, and the images resulting from the reconstructions of the holograms were fitted using our iterative algorithm, which performed a simultaneous fit to $x$, $y$, $z$ and the emission and background rates. Once again the lateral position precisions ($x$ and $y$) both come close to the CRLB. In the axial ($z$) direction, the precision does not, which we attribute to the fact that our algorithm does not iteratively fit the data to a particular model: for the CRLB to be reached, the imaging model to which the individual parameters are fit must be correct. We speculate that by fitting the $z$-position to an imaging model that accounts for aberrations in the experimental images, the $z$-position precision could also reach the CRLB.

## 5. Conclusion

We have introduced a simple calculation, based on the Fisher information matrix, of the fundamental limit to precision when using SIDH to perform localization-based microscopy. The results introduced in this paper can be used to design SIDH systems that achieve a desired localization precision. Furthermore, these expressions can be used as a benchmark against which emitter-localization algorithms can be evaluated. These results show that high precision localization in all three dimensions can be achieved over a 20 $\mu$m axial range using SIDH with high SBR. We have neglected the effect of pixelation on the precision in our calculations because it does not relate directly to the effectiveness of SIDH as a localization technique, but including pixelation would be a natural next step in this work.

We also compare the localization precision of SIDH using two different configurations with the localization precision of different PSF models, including the astigmatic and Gaussian PSFs, using parameters matched as closely as possible to their actual implementations. SIDH in both configurations results in a small and almost constant localization precision over a large axial range of $\sim$ 20 $\mu$m under low-background conditions. In the presence of higher background noise ($\beta$ > 5000 photons/mm$^2$, $N$ = 6000 photons), SIDH continues to provide <50 nm precision in all three dimensions over the entire axial range. However, under such conditions, the astigmatic PSF provides better precision near the focal plane, making it more suitable for imaging thinner samples ($\sim$1 $\mu$m thick). The high sensitivity of SIDH to background noise is primarily due to the large size of the hologram compared to a Gaussian or astigmatic PSF. Since the hologram is spread over $\sim$ 2-4 mm$^2$, for $\beta$ = 60,000 photons/mm$^2$ the SBR for SIDH is significantly lower than for the astigmatic and Gaussian PSFs, thereby degrading the precision of the system. Localization over a large axial range can also be achieved with the tetrapod PSF, for which the CRLB is $\approx$ 60 nm over 20 microns with 3500 signal photons and 50 background photons per pixel [28]. With a background of 20 photons per pixel, SIDH achieves a precision of 125 nm. While the tetrapod PSF performs better than SIDH, understanding the performance of SIDH is of interest due to its analytical reconstruction process [39] and the ability to correct for optical aberrations in SIDH [45].

The results presented in this paper should be useful to investigators designing SIDH systems and algorithms for localization-based imaging, and could lead to SIDH approaches to localization-based microscopy with superior performance.

## 6. Code availability

The software used in this study is available under an open-source (GPL-3.0) license as two packages:

## 7. Appendix

## 7.1. Derivation of Cramér-Rao lower bound for self-interference digital holography

Using Eq. (2) we now derive the fundamental limit of the localization precision for the point-spread hologram (PSH) in self-interference digital holography (SIDH). We assume that the emitter emits $N$ photons and that the counting process is a Poisson process. We first consider the PSH given by

## 7.2. Calculating the CRLB for the Gaussian and the astigmatic imaging model

#### 7.2.1. Gaussian PSF

Using Eqs. (5) & (6), we calculate the fundamental limit of the localization precision for the PSF approximated by a Gaussian profile given by

By symmetry, the off-diagonal elements of the Fisher information matrix are zero. For a particle emitting $N$ photons, the elements of the Fisher information matrix describing the $x$ and $y$ localization precisions are given by

#### 7.2.2. Astigmatic model

In the case of the astigmatic PSF, a weak cylindrical lens is introduced in the imaging pathway, which splits the focal plane into two perpendicular focal planes at different depths, giving an asymmetric PSF. The form of the astigmatic PSF on the detector can be approximated by [36]
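The defocus-dependent widths of this approximation can be sketched as follows (our sketch of the commonly used width model; the exact expression and conventions are in [36], and the sign convention here is ours), using the paper's $\gamma$ = 400 nm, $d$ $\approx$ 679 nm and $\sigma_{0}$ $\approx$ 122 nm:

```python
import math

sigma0, gamma, d = 122.0, 400.0, 679.0  # nm: spot size, astigmatism, focal depth

def widths(z):
    """Astigmatic-Gaussian spot widths at defocus z (nm).

    The x and y focal planes are offset by -/+ gamma, so the ellipticity
    sigma_x / sigma_y encodes the axial position of the emitter.
    """
    sx = sigma0 * math.sqrt(1 + ((z + gamma) / d) ** 2)
    sy = sigma0 * math.sqrt(1 + ((z - gamma) / d) ** 2)
    return sx, sy

for z in (-400.0, 0.0, 400.0):
    print(z, widths(z))  # symmetric widths at z = 0; elongated elsewhere
```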

## 7.3. Calculation of the Cramér-Rao lower bound for the scalar PSF model

We calculate the PSF from scalar diffraction theory with the equation

where $P(\rho )=\textrm {circ}(\rho )$ is the pupil function and $D(\rho ,z)=z\lambda ^{-1}\sqrt {n^2-(\textrm {NA}\rho )^2}$ is the phase that results from a defocus $z$. Because the transform is rotationally symmetric, we can numerically calculate the PSF with a one-dimensional integral.

## 7.4. Calculation of the Cramér-Rao lower bound for the COSA PSF

The COSA PSF can be calculated similarly.

To calculate the matrix elements quickly, we use a purely numeric approach and calculate the PSF from the FFT of the back pupil plane. We calculate the PSF over $2048 \times 2048$ pixels with a pixel size of 60 nm. $dq/dx$ is calculated from the finite difference along the $x$ axis of the PSF image, and $dq/dz$ is calculated as the central finite difference along the axial direction from the images $q(x,y;z \pm \Delta z)$, with $\Delta z = 2.5$ nm.
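A compact version of this numeric pipeline (our sketch under several simplifying assumptions: a reduced 256 $\times$ 256 grid, a clear circular pupil instead of the COSA phase, an arbitrary off-focus evaluation point, and per-parameter bounds rather than the full 3 $\times$ 3 matrix inverse):

```python
import numpy as np

# Assumed parameters for this sketch (clear pupil, no COSA phase)
wavelength, NA, n_imm = 670.0, 1.42, 1.515  # nm, numerical aperture, index
pix, n_pix, N, bg = 60.0, 256, 6000, 1.0    # pixel (nm), grid, photons, bg/pixel

fx = np.fft.fftfreq(n_pix, d=pix)           # spatial frequencies, cycles/nm
FX, FY = np.meshgrid(fx, fx)
rho = np.hypot(FX, FY) * wavelength / NA    # normalized pupil coordinate
pupil = (rho <= 1.0).astype(complex)

def psf(z):
    """PSF at defocus z (nm): FFT of the pupil with the defocus phase applied."""
    kz = 2 * np.pi / wavelength * np.sqrt(
        np.clip(n_imm**2 - (NA * rho)**2, 0, None))
    q = np.abs(np.fft.ifft2(pupil * np.exp(1j * kz * z))) ** 2
    return q / q.sum()                      # normalize to unit total energy

z0, dz = 500.0, 2.5                         # evaluate off-focus; dz per the paper
q = psf(z0)
dq_dx = np.gradient(q, pix, axis=1)         # finite difference along x
dq_dy = np.gradient(q, pix, axis=0)         # finite difference along y
dq_dz = (psf(z0 + dz) - psf(z0 - dz)) / (2 * dz)  # central difference along z

def crlb(dq):
    """Single-parameter CRLB (nm) for a Poisson image with uniform background."""
    F = np.sum((N * dq) ** 2 / (N * q + bg))
    return 1.0 / np.sqrt(F)

crlb_x, crlb_y, crlb_z = crlb(dq_dx), crlb(dq_dy), crlb(dq_dz)
print(crlb_x, crlb_y, crlb_z)
```

The small uniform background keeps the Fisher sum finite at the zeros of the diffraction pattern; by symmetry of the clear-aperture PSF, the $x$ and $y$ bounds agree.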

## 7.5. CRLB plots for different experimental conditions of SIDH

Different experimental parameters yield different CRLB results for an SIDH system. Figure 13 shows the lateral and axial CRLB results for different DOE-CCD distances ($z_{h}$) for the two configurations. In both cases, 6000 photons are detected from the emitter ($N$ = 6000) in the absence of background noise ($\beta$ = 0).

## Funding

National Science Foundation (1555576); National Institutes of Health (R21GM134462).

## Disclosures

The authors declare no conflicts of interest.

## References

**1. **M. Kim, “Principles and techniques of digital holographic microscopy,” J. Photonics Energy **1**(1), 018005 (2010). [CrossRef]

**2. **J. Rosen, A. Vijayakumar, M. Kumar, M. R. Rai, R. Kelner, Y. Kashter, A. Bulbul, and S. Mukherjee, “Recent advances in self-interference incoherent digital holography,” Adv. Opt. Photonics **11**(1), 1–66 (2019). [CrossRef]

**3. **D. Weigel, H. Babovsky, A. Kiessling, and R. Kowarschik, “Widefield microscopy with infinite depth of field and enhanced lateral resolution based on an image inverting interferometer,” Opt. Commun. **342**, 102–108 (2015). [CrossRef]

**4. **T. Nobukawa, Y. Katano, T. Muroi, N. Kinoshita, and N. Ishii, “Bimodal incoherent digital holography for both three-dimensional imaging and quasi-infinite–depth-of-field imaging,” Sci. Rep. **9**(1), 3363 (2019). [CrossRef]

**5. **X. Lai, S. Zeng, X. Lv, J. Yuan, and L. Fu, “Violation of the lagrange invariant in an optical imaging system,” Opt. Lett. **38**(11), 1896–1898 (2013). [CrossRef]

**6. **J. Rosen and R. Kelner, “Modified lagrange invariants and their role in determining transverse and axial imaging resolutions of self-interference incoherent holographic systems,” Opt. Express **22**(23), 29048–29066 (2014). [CrossRef]

**7. **P. Bouchal and Z. Bouchal, “Selective edge enhancement in three-dimensional vortex imaging with incoherent light,” Opt. Lett. **37**(14), 2949–2951 (2012). [CrossRef]

**8. **X. Yu, J. Hong, C. Liu, and M. Kim, “Review of digital holographic microscopy for three-dimensional profiling and tracking,” Opt. Eng. **53**(11), 112306 (2014). [CrossRef]

**9. **P. Memmolo, L. Miccio, M. Paturzo, G. D. Caprio, G. Coppola, P. A. Netti, and P. Ferraro, “Recent advances in holographic 3D particle tracking,” Adv. Opt. Photonics **7**(4), 713–755 (2015). [CrossRef]

**10. **J. Rosen and G. Brooker, “Non-scanning motionless fluorescence three-dimensional holographic microscopy,” Nat. Photonics **2**(3), 190–195 (2008). [CrossRef]

**11. **Y. M. Sigal, R. Zhou, and X. Zhuang, “Visualizing and discovering cellular structures with super-resolution microscopy,” Science **361**(6405), 880–887 (2018). [CrossRef]

**12. **A. von Diezmann, Y. Shechtman, and W. Moerner, “Three-dimensional localization of single molecules for super-resolution imaging and single-particle tracking,” Chem. Rev. **117**(11), 7244–7275 (2017). [CrossRef]

**13. **Y. Zhou, M. Handley, G. Carles, and A. R. Harvey, “Advances in 3D single particle localization microscopy,” APL Photonics **4**(6), 060901 (2019). [CrossRef]

**14. **Y. Zhou and G. Carles, “Precise 3D particle localization over large axial ranges using secondary astigmatism,” Opt. Lett. **45**(8), 2466–2469 (2020). [CrossRef]

**15. **P. Prabhat, S. Ram, E. S. Ward, and R. J. Ober, “Simultaneous imaging of different focal planes in fluorescence microscopy for the study of cellular dynamics in three dimensions,” IEEE Trans. NanoBioscience **3**(4), 237–242 (2004). [CrossRef]

**16. **M. F. Juette, T. J. Gould, M. D. Lessard, M. J. Mlodzianoski, B. S. Nagpure, B. T. Bennett, S. T. Hess, and J. Bewersdorf, “Three-dimensional sub–100 nm resolution fluorescence microscopy of thick samples,” Nat. Methods **5**(6), 527–529 (2008).

**17. **A. Tahmasbi, S. Ram, J. Chao, A. V. Abraham, F. W. Tang, E. S. Ward, and R. J. Ober, “Designing the focal plane spacing for multifocal plane microscopy,” Opt. Express **22**(14), 16706–16721 (2014).

**18. **Y. Fu, P. W. Winter, R. Rojas, V. Wang, M. McAuliffe, and G. H. Patterson, “Axial superresolution via multiangle TIRF microscopy with sequential imaging and photobleaching,” Proc. Natl. Acad. Sci. **113**(16), 4368–4373 (2016).

**19. **J. Deschamps, M. Mund, and J. Ries, “3D superresolution microscopy by supercritical angle detection,” Opt. Express **22**(23), 29081–29091 (2014).

**20. **N. Bourg, C. Mayet, G. Dupuis, T. Barroca, P. Bon, S. Lécart, E. Fort, and S. Lévêque-Fort, “Direct optical nanoscopy with axially localized detection,” Nat. Photonics **9**(9), 587–593 (2015).

**21. **G. Shtengel, J. A. Galbraith, C. G. Galbraith, J. Lippincott-Schwartz, J. M. Gillette, S. Manley, R. Sougrat, C. M. Waterman, P. Kanchanawong, and M. W. Davidson, “Interferometric fluorescent super-resolution microscopy resolves 3D cellular ultrastructure,” Proc. Natl. Acad. Sci. **106**(9), 3125–3130 (2009).

**22. **B. Huang, W. Wang, M. Bates, and X. Zhuang, “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science **319**(5864), 810–813 (2008).

**23. **S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. **106**(9), 2995–2999 (2009).

**24. **M. D. Lew, S. F. Lee, M. Badieirostami, and W. Moerner, “Corkscrew point spread function for far-field three-dimensional nanoscale localization of pointlike objects,” Opt. Lett. **36**(2), 202–204 (2011).

**25. **D. Baddeley, M. B. Cannell, and C. Soeller, “Three-dimensional sub-100 nm super-resolution imaging of biological samples using a phase ramp in the objective pupil,” Nano Res. **4**(6), 589–598 (2011).

**26. **S. Jia, J. C. Vaughan, and X. Zhuang, “Isotropic three-dimensional super-resolution imaging with a self-bending point spread function,” Nat. Photonics **8**(4), 302–306 (2014).

**27. **Y. Shechtman, S. J. Sahl, A. S. Backer, and W. Moerner, “Optimal point spread function design for 3D imaging,” Phys. Rev. Lett. **113**(13), 133902 (2014).

**28. **Y. Shechtman, L. E. Weiss, A. S. Backer, S. J. Sahl, and W. Moerner, “Precise three-dimensional scan-free multiple-particle tracking over large axial ranges with tetrapod point spread functions,” Nano Lett. **15**(6), 4194–4199 (2015).

**29. **C. R. Rao, *Linear Statistical Inference and its Applications*, vol. 2 (Wiley, 1973).

**30. **S. M. Kay, *Fundamentals of Statistical Signal Processing* (Prentice Hall PTR, 1993).

**31. **T. M. Cover and J. A. Thomas, *Elements of Information Theory* (John Wiley & Sons, 2012).

**32. **J. Chao, E. S. Ward, and R. J. Ober, “Fisher information theory for parameter estimation in single molecule microscopy: tutorial,” J. Opt. Soc. Am. A **33**(7), B36–B57 (2016).

**33. **R. J. Ober, S. Ram, and E. S. Ward, “Localization accuracy in single-molecule microscopy,” Biophys. J. **86**(2), 1185–1200 (2004).

**34. **S. Ram, E. S. Ward, and R. J. Ober, “A stochastic analysis of performance limits for optical microscopes,” Multidim. Syst. Signal Process. **17**(1), 27–57 (2006).

**35. **S. Liu, E. B. Kromann, W. D. Krueger, J. Bewersdorf, and K. A. Lidke, “Three dimensional single molecule localization using a phase retrieved pupil function,” Opt. Express **21**(24), 29462–29487 (2013).

**36. **L. Holtzer, T. Meckel, and T. Schmidt, “Nanometric three-dimensional tracking of individual quantum dots in cells,” Appl. Phys. Lett. **90**(5), 053902 (2007).

**37. **M. A. Thompson, M. D. Lew, M. Badieirostami, and W. Moerner, “Localizing and tracking single nanoscale emitters in three dimensions with high spatiotemporal resolution using a double-helix point spread function,” Nano Lett. **10**(1), 211–218 (2010).

**38. **S. Stallinga and B. Rieger, “Position and orientation estimation of fixed dipole emitters using an effective Hermite point spread function model,” Opt. Express **20**(6), 5896–5921 (2012).

**39. **A. Marar and P. Kner, “Three-dimensional nanoscale localization of point-like objects using self-interference digital holography,” Opt. Lett. **45**(2), 591–594 (2020).

**40. **J. W. Goodman, *Introduction to Fourier Optics* (Roberts and Company Publishers, 2005).

**41. **L. Gao, “Optimization of the excitation light sheet in selective plane illumination microscopy,” Biomed. Opt. Express **6**(3), 881–890 (2015).

**42. **A. Marar, “FINCH-CRLB,” https://github.com/Knerlab/FINCH-CRLB (2020).

**43. **B. E. Saleh and M. C. Teich, *Fundamentals of Photonics* (John Wiley & Sons, 1991).

**44. **B. Katz, J. Rosen, R. Kelner, and G. Brooker, “Enhanced resolution and throughput of Fresnel incoherent correlation holography (FINCH) using dual diffractive lenses on a spatial light modulator (SLM),” Opt. Express **20**(8), 9109–9121 (2012).

**45. **M. K. Kim, “Adaptive optics by incoherent digital holography,” Opt. Lett. **37**(13), 2694–2696 (2012).

**46. **A. Marar and P. Kner, “Holographic-SMLM,” https://github.com/Knerlab/Holographic-SMLM (2020).