## Abstract

Structured illumination microscopy (SIM) improves the spatial resolution of wide-field fluorescence imaging by applying a sinusoidal stripe-pattern illumination to the sample. The corresponding computational image reconstruction requires precise knowledge of the pattern’s parameters, namely its phase ($\varphi $) and wave vector ($\mathbf{p}$). Here, a computationally inexpensive method for estimating $\mathbf{p}$ from the raw data is proposed and illustrated with simulations. The method estimates $\mathbf{p}$ through a selective discrete Fourier transform at tunable subpixel precision. This yields an accurate $\mathbf{p}$ estimate for all the illumination patterns and subsequently improves the superresolution image recovery by a factor of 10 around sharp edges compared to an integer-pixel approach. The technique presented here is of major interest for the wide variety of custom-built SIM systems in use. The feasibility of the method is demonstrated by comparison with published data.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. INTRODUCTION

Wide-field fluorescence microscopy is limited in its maximum spatial resolution by what is commonly known as the Abbe diffraction limit [1]. The diffraction limit is a result of the finite range of spatial frequencies that a microscope’s objective can capture, depending on its aperture. The 2D image formation under plain illumination can be formulated as a convolution ($\otimes $), giving $D(\mathbf{r})=[S(\mathbf{r})I]\otimes h(\mathbf{r})$, with the acquired image $D$, the sample $S$, the (uniform) plain illumination intensity $I$, the point spread function (PSF) $h$, and the spatial coordinate $\mathbf{r}$. In the Fourier transform of this expression, the convolution becomes a multiplication (and products become convolutions), $\tilde{D}(\mathbf{k})=[\tilde{I}(\mathbf{k})\otimes \tilde{S}(\mathbf{k})]\tilde{h}(\mathbf{k})$, where the tilde ($\sim $) indicates the Fourier transform, $\mathbf{k}$ is the Fourier space coordinate or spatial frequency, and $\tilde{h}$ is called the optical transfer function (OTF). The OTF limits the highest spatial frequency of the sample represented in the image to a cutoff frequency ${\Vert \mathbf{k}\Vert}_{2}\le {k}_{\mathrm{max}}$ and thus sets the maximum resolution of the image. Introducing nonuniform illumination of the sample mixes high spatial frequency information lying beyond the OTF’s support into its passband [2]. Structured illumination microscopy (SIM) exploits this idea by implementing a sinusoidal stripe pattern illumination. The pattern is described by a wave vector $\mathbf{p}={p}_{x}{\widehat{k}}_{x}+{p}_{y}{\widehat{k}}_{y}$, with the unit vectors ${\widehat{k}}_{x}$ and ${\widehat{k}}_{y}$, that determines a pattern angle

$\gamma =\mathrm{arctan}({p}_{y}/{p}_{x})$ (Eq. (1)), a fringe spacing of $L=1/{\Vert \mathbf{p}\Vert}_{2}$ (Eq. (2)), and a phase $\varphi $ giving the pattern’s shift along the wave vector. Computational image reconstruction upon raw data acquisition may now separate spatial frequency components shifted from their original position into the passband from those naturally lying within the passband [3]. Relocating shifted components to their original position eventually results in a larger information content in the Fourier domain and, equivalently, in a higher resolution of the reconstructed image. For a single SIM reconstruction, multiple raw images of the sample with different illumination patterns are needed. In the 2D case, a typical configuration of three different illumination wave vectors $\mathbf{p}$ (with constant fringe period and three different orientations) and three different phases $\varphi $ each, yielding a total of nine images, can be used to reconstruct one image. Image reconstruction in SIM for the described stripe pattern illumination is well known and can be derived from the more general 3D case [4]. Software implementations for SIM reconstruction are readily available [5,6]. Given a sinusoidal illumination pattern in the object plane, its wave vector $\mathbf{p}$ and phase $\varphi $ need to be known for each of the raw data images to perform the well-established direct reconstruction. Estimation of $\mathbf{p}$ and $\varphi $ is a critical task in SIM and is addressed here, because an incorrect parameter estimate for one of the nine raw data images is sufficient to corrupt the overall reconstruction. Common methods used to estimate the phase rely on precise knowledge of the wave vector [7–10] or require the pattern phases to be equally distributed [4,6]. This means that an incorrect estimation of $\mathbf{p}$ will inevitably lead to a wrong value for $\varphi $ if no further assumptions on the phase distribution can be made and will thus prevent correct image reconstruction.
Furthermore, some of the methods for parameter estimation involve iterative steps that come with an increased computational cost [6,7]. This problem is usually addressed by gaining experimental control over the illumination pattern, using expensive hardware like spatial light modulators, vibration-free suspension systems, and piezo actuators to introduce predefined phase shifts. The idea of the proposed method is to determine the wave vector $\mathbf{p}$ based on the raw data images in a noniterative way, at a precision that is sufficient to apply a direct solution for the phase estimation as presented by Wicker [8]. As can be seen in Fig. 1, the sinusoidal illumination pattern results in three peaks in the Fourier spectrum of each input image, and $\mathbf{p}$ is estimated by localizing these peaks. Because the input images are real valued, one peak will be located at the center (zero frequency), and the other two peaks will be distributed symmetrically around the center along the vector $\mathbf{p}$ in Fourier space. In essence, the algorithm presented here localizes one of the outer peaks with subpixel precision in the absolute value map of the Fourier transform. First, a conventional Fourier transform as given by the fast Fourier transform (FFT) is applied to the raw image. To rule out contributions from low frequencies, a mask is applied: frequencies smaller than 95% of the actual pattern frequency are suppressed, which corresponds to a rough estimate available in an actual setup. The maximum in the masked FFT image is identified at the integer pixel precision offered by the FFT routine. Then the Fourier transform is calculated at a higher sampling in a small region around the peak position, where the peak can be localized with improved precision.
As opposed to padding the raw image with zeros prior to applying the FFT, which would increase the precision of the peak localization in the same way, albeit at a high computational cost, the proposed method calculates the Fourier transform in a significantly smaller area at a specific sampling rate around the pre-estimated peak location. This reduces the computational cost to a fraction of that of a zero-padding approach [11,12].
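The pipeline described above can be sketched in code. The following is a minimal NumPy sketch under stated assumptions: the function and parameter names (`estimate_wave_vector`, `f_min`, `alpha`, `win`) are ours, not the paper's, and the mask simply keeps frequencies at or above `f_min`:

```python
import numpy as np

def estimate_wave_vector(img, f_min, alpha=10, win=3):
    """Estimate the pattern wave vector (cycles/px) from one raw image:
    FFT -> mask frequencies below f_min -> integer-pixel peak ->
    selective DFT around that peak at alpha-fold finer sampling."""
    Ny, Nx = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    fy = np.fft.fftshift(np.fft.fftfreq(Ny))
    fx = np.fft.fftshift(np.fft.fftfreq(Nx))
    FX, FY = np.meshgrid(fx, fy)
    A = np.abs(F) * (np.hypot(FX, FY) >= f_min)       # suppress low frequencies
    iy, ix = np.unravel_index(np.argmax(A), A.shape)  # coarse (integer) peak

    # Selective DFT on a grid of step 1/(alpha*N) around the coarse peak
    ky = fy[iy] + np.arange(-win * alpha, win * alpha + 1) / (alpha * Ny)
    kx = fx[ix] + np.arange(-win * alpha, win * alpha + 1) / (alpha * Nx)
    Ey = np.exp(-2j * np.pi * np.outer(ky, np.arange(Ny)))
    Ex = np.exp(-2j * np.pi * np.outer(np.arange(Nx), kx))
    Floc = Ey @ img @ Ex                              # twofold matrix product
    jy, jx = np.unravel_index(np.argmax(np.abs(Floc)), Floc.shape)
    return np.array([kx[jx], ky[jy]])                 # (p_x, p_y) in cycles/px
```

Because the input image is real valued, the two outer peaks are symmetric, so the estimate may come out as $-\mathbf{p}$ instead of $\mathbf{p}$; both describe the same pattern.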

In Section 2, the theoretical background for sinusoidal SIM, including current methods of parameter estimation and the proposed subpixel peak detection for SIM, is described. In Section 3, the presented method is applied to simulated data, and the results are compared to published data. Finally, Section 4 provides concluding remarks on where the presented method integrates within published methods for parameter estimation.

## 2. THEORY

The theoretical background is described largely using the notation of Wicker *et al.* [7,8]. First, the general framework for SIM with a sinusoidal illumination pattern is presented. Then current parameter estimation methods are outlined, and the proposed approach for computationally inexpensive peak localization at subpixel precision is presented.

#### A. Sinusoidal SIM

Given a sinusoidal illumination pattern, the image formation can be described in the Fourier domain as

$$\tilde{D}(\mathbf{k})=\sum_{m=-1}^{+1}{a}_{m}{e}^{im\varphi}{\tilde{S}}_{m}(\mathbf{k})\tilde{h}(\mathbf{k}),$$

where ${\tilde{S}}_{m}(\mathbf{k})=\tilde{S}(\mathbf{k}-m\mathbf{p})$ and the ${a}_{m}$ are the modulation coefficients of the pattern. It can be seen that $\tilde{D}(\mathbf{k})$ contains frequency components or bands ${\tilde{S}}_{m}(\mathbf{k})$ that are shifted by $m\mathbf{p}$ with respect to their original position in frequency space prior to multiplication with the OTF. In this way, high spatial frequency components of the sample that are not observable under plain illumination are transferred into the image, albeit shifted and summed up. By taking $N$ images of the sample with different phases ${\varphi}_{n}$, $n=1,\dots ,N$, the resulting Fourier transforms of the images ${\tilde{D}}_{n}(\mathbf{k})$ can be written as

$$\tilde{\mathbf{D}}(\mathbf{k})=\mathbf{M}\,\tilde{\mathbf{C}}(\mathbf{k}),\qquad (7)$$

where the band components are ${\tilde{C}}_{m}(\mathbf{k})={a}_{m}{\tilde{S}}_{m}(\mathbf{k})\tilde{h}(\mathbf{k})$,

with the acquired images in the vector $\tilde{\mathbf{D}}={[{\tilde{D}}_{1},\dots ,{\tilde{D}}_{N}]}^{\top}$, the matrix ${\mathbf{M}}_{nm}=\mathrm{exp}(im{\varphi}_{n})$, and the vector $\tilde{\mathbf{C}}(\mathbf{k})={[{\tilde{C}}_{-1},{\tilde{C}}_{0},{\tilde{C}}_{+1}]}^{\top}$. If the inverse of $\mathbf{M}$ (${\mathbf{M}}^{-1}$) exists, the different components can be separated by $\tilde{\mathbf{C}}(\mathbf{k})={\mathbf{M}}^{-1}\tilde{\mathbf{D}}(\mathbf{k})$. The final image is the inverse Fourier transform of $\widehat{\tilde{S}}(\mathbf{k})$, the final estimate in the Fourier domain, which is obtained by shifting each band to its original position and recombining the bands using a generalized Wiener filter:

$$\widehat{\tilde{S}}(\mathbf{k})=A(\mathbf{k})\,\frac{\sum_{d,m}{\tilde{h}}^{*}(\mathbf{k}+m{\mathbf{p}}_{d})\,{\tilde{C}}_{d,m}(\mathbf{k}+m{\mathbf{p}}_{d})}{\sum_{{d}^{\prime},{m}^{\prime}}{\left|\tilde{h}(\mathbf{k}+{m}^{\prime}{\mathbf{p}}_{{d}^{\prime}})\right|}^{2}+{w}^{2}}.\qquad (8)$$

The Wiener filter reduces the degrading influence of the OTF and weights the bands in regions where they overlap according to their expected SNR. The Wiener parameter $w$ is determined empirically, $A(\mathbf{k})$ is an apodization function decreasing linearly from unity at the center to zero near the end of the extended OTF support, shaping the overall spectrum to prevent ringing artifacts in the final image, and the asterisk ($*$) indicates the complex conjugate. Because the resolution improvement only takes place in the direction of $\mathbf{p}$, the process of image acquisition and band separation is repeated for different orientations $d$ to obtain isotropic resolution enhancement. As can be seen from Eqs. (7) and (8), the phases ${\varphi}_{n}$ and the wave vectors $\mathbf{p}$ need to be known to perform the reconstruction.
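The separation step can be illustrated numerically: for any set of known, pairwise distinct phases (not necessarily equidistant), $\mathbf{M}$ is invertible and the bands are recovered exactly from the mixed spectra. A minimal sketch with synthetic band values (all names and numbers ours):

```python
import numpy as np

phases = np.deg2rad([3.0, 118.0, 247.0])    # example, deliberately non-equidistant
m = np.array([-1, 0, 1])
M = np.exp(1j * np.outer(phases, m))        # M[n, m] = exp(i m phi_n)

rng = np.random.default_rng(2)
C_true = rng.normal(size=(3, 16)) + 1j * rng.normal(size=(3, 16))  # bands at 16 frequencies
D = M @ C_true                              # mixed spectra D~_n(k)
C_sep = np.linalg.inv(M) @ D                # band separation C~ = M^-1 D~
assert np.allclose(C_sep, C_true)
```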

#### B. Conventional Parameter Estimation Techniques

Following Gustafsson *et al.* [4] and Müller *et al.* [6], if an equidistant phase distribution of ${\varphi}_{n}$ can be assumed, retrieval of $\mathbf{p}$ is straightforward. Then Eq. (7) can be applied by using these relative phases in $\mathbf{M}$, which differ from the true phases only by a constant offset ${\varphi}_{\mathrm{\Delta}}$. Given a large enough OTF support with respect to $\mathbf{p}$, and the OTF corrected components ${\tilde{C}}_{m}^{\prime}=\frac{{\tilde{C}}_{m}{\tilde{h}}^{*}}{{|\tilde{h}|}^{2}}$, ${\tilde{C}}_{\pm 1}^{\prime}$ will have an overlapping region ${\mathbf{k}}^{\prime}$ with ${\tilde{C}}_{0}^{\prime}$, where, for the correct $\mathbf{p}$ only, they differ by no more than a complex factor ${a}_{m}^{\prime}$ (see Fig. 2). Thus, $\mathbf{p}$ can be retrieved by maximizing the magnitude of the cross correlation ($\star $) of ${\tilde{C}}_{0}^{\prime}$ with ${\tilde{C}}_{m}^{\prime}$ over the overlap region for $m\ne 0$, and ${a}_{m}^{\prime}$ is obtained from the correlation at the maximum.

Furthermore, ${\varphi}_{\mathrm{\Delta}}=\mathrm{arg}({a}_{m}^{\prime})$, with $\mathrm{arg}(\cdot)$ giving the angle of a complex number, and ${a}_{m}=|{a}_{m}^{\prime}|$ for ${a}_{0}=1$. The workflow described so far applies when equidistant phase steps within one pattern orientation are provided by the experimental setup.

However, if the assumptions about the phase distribution cannot be made but $\mathbf{p}$ is known, the phase of the illumination pattern can be determined from the phase of the peak at $\mathbf{p}$ in ${\tilde{D}}_{n}$, essentially as ${\varphi}_{n}=\mathrm{arg}[{\tilde{D}}_{n}(\mathbf{p})]$, as described by Shroff *et al.* [9,10]. Alternatively, the spectrum’s autocorrelation can be evaluated at $\mathbf{p}$, and the phase can be determined as

$${\varphi}_{n}=\mathrm{arg}[({\tilde{D}}_{n}\star {\tilde{D}}_{n})(\mathbf{p})],\qquad (12)$$

as introduced by Wicker [8].
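Our reading of the autocorrelation estimator can be sketched as follows (function name ours): evaluate $({\tilde{D}}_{n}\star {\tilde{D}}_{n})$ at $\mathbf{p}$ and take the argument. A frequency shift by $\mathbf{p}$ equals a modulation in real space, so the shifted spectrum is exact even for non-integer $\mathbf{p}$:

```python
import numpy as np

def pattern_phase(img, p):
    """Pattern phase from the spectrum autocorrelation evaluated at
    p = (p_x, p_y) in cycles/px: phi = arg[(D~ * D~)(p)]."""
    Ny, Nx = img.shape
    y, x = np.mgrid[0:Ny, 0:Nx]
    F = np.fft.fft2(img)
    # Modulation shifts the spectrum: FT{img * e^{-2 pi i p.r}}(k) = D~(k + p)
    F_shift = np.fft.fft2(img * np.exp(-2j * np.pi * (p[0] * x + p[1] * y)))
    return np.angle(np.sum(np.conj(F) * F_shift))
```

For a noise-free, on-grid pattern $1+\mathrm{cos}(2\pi \mathbf{p}\cdot \mathbf{r}+\varphi )$, this recovers $\varphi $ to machine precision; with sample structure and noise, the overlap terms dominate the sum and the estimate degrades gracefully.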

A related method is mentioned for the sake of completeness. It determines the phase values by an iterative optimization. The idea is that two separated bands ${\tilde{C}}_{i}(\mathbf{k})$ and ${\tilde{C}}_{j}(\mathbf{k}-l\mathbf{p})$ ($l$ being an integer) should not have common information in the region where they overlap for $i\ne j+l$ if they are separated correctly; if $\mathbf{M}$ in Eq. (7) contains wrong phase values, the bands retain similar components. The phases in the separation matrix $\mathbf{M}$ are then found by minimizing a cost function that evaluates the cross correlation in the overlapping regions [7].

#### C. Subpixel Peak Detection

Because parameter estimation based on prior knowledge such as the phase distribution or the pattern wave vector may not be possible, these parameters need to be extracted from the acquired raw data [13]. We address the situation where no assumption can be made about the phase distribution and the values of $\mathbf{p}$ are unknown to begin with. We propose to estimate $\mathbf{p}$ using the approach described below and then to use the estimated $\mathbf{p}$ with Eq. (12) to determine the phase value $\varphi $. This way, the precision of the peak localization governs not only the accuracy of the determined value of $\mathbf{p}$ but also the fidelity of the estimated $\varphi $. The approach is to localize the peaks in $\tilde{D}(\mathbf{k})$ that originate from the sinusoidal illumination pattern. For a discretized image consisting of ${N}_{x}\times {N}_{y}$ pixels (${N}_{x,y}$ being even integers) with indices $\mathbf{x}=[1,\dots ,{N}_{x}]$ and $\mathbf{y}=[1,\dots ,{N}_{y}]$ and spatial coordinates from $(-\frac{{N}_{x,y}}{2})$ to $(\frac{{N}_{x,y}}{2}-1)$ px in steps of 1 px in the $\widehat{x}$ and $\widehat{y}$ directions, the conventional FFT returns an array of the same size with indices $\mathbf{u}=\mathbf{x}$ and $\mathbf{v}=\mathbf{y}$, which correspond to frequency coordinates from $(-\frac{1}{2}){\mathrm{px}}^{-1}$ to $(\frac{1}{2}-\frac{1}{{N}_{x,y}}){\mathrm{px}}^{-1}$ in steps of $(\frac{1}{{N}_{x,y}}){\mathrm{px}}^{-1}$ in the ${\widehat{k}}_{x}$ and ${\widehat{k}}_{y}$ directions, ${\widehat{k}}_{x}$ and ${\widehat{k}}_{y}$ being the frequency domain analogs of $\widehat{x}$ and $\widehat{y}$. The localization precision of a maximum in the FFT thus depends on the step size $\frac{1}{{N}_{x,y}}$. It can be improved by padding the image with zeros prior to the Fourier transform, but depending on the padding size, this approach may become computationally expensive.
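The stated frequency grid can be checked directly with NumPy's FFT helpers (a minimal sanity check, not part of the method):

```python
import numpy as np

N = 8                                     # even number of pixels along one axis
f = np.fft.fftshift(np.fft.fftfreq(N))    # frequency coordinates in px^-1
assert f[0] == -0.5                       # -(1/2) px^-1
assert f[-1] == 0.5 - 1 / N               # (1/2 - 1/N) px^-1
assert np.allclose(np.diff(f), 1 / N)     # step size (1/N) px^-1
```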

Our approach is to first localize the peaks in $\tilde{D}(\mathbf{k})$ without any padding, giving the indices $({u}_{0},{v}_{0})$ with a localization precision equal to the step size $\frac{1}{{N}_{x,y}}$ in the frequency domain. In the second step, the Fourier transform of the input image is calculated in a selected region of the frequency domain around the peak position with a chosen oversampling factor $\alpha $ [11,12], using a twofold matrix multiplication

$$\tilde{D}({k}_{x},{k}_{y})=\sum_{y=1}^{{N}_{y}}\sum_{x=1}^{{N}_{x}}{e}^{-2\pi i{k}_{y}y}\,D(x,y)\,{e}^{-2\pi i{k}_{x}x},\qquad (13)$$

evaluated only at frequencies $({k}_{x},{k}_{y})$ on a grid with a step size of $\frac{1}{\alpha {N}_{x,y}}$ around $({u}_{0},{v}_{0})$; the two exponential factors form matrices that multiply the image matrix from the left and from the right.
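A minimal sketch of such a selective transform (names ours), together with a check that it reproduces exactly the values a zero-padded FFT would give at the same frequencies, while evaluating only the requested ones:

```python
import numpy as np

def selective_dft(img, ky, kx):
    """Evaluate the 2D DFT of img only at the frequencies ky x kx
    (in cycles/px) via a twofold matrix product E_y . img . E_x."""
    Ny, Nx = img.shape
    Ey = np.exp(-2j * np.pi * np.outer(ky, np.arange(Ny)))  # (K_y, Ny)
    Ex = np.exp(-2j * np.pi * np.outer(np.arange(Nx), kx))  # (Nx, K_x)
    return Ey @ img @ Ex                                    # (K_y, K_x)

# Zero padding to alpha*N samples the frequencies j/(alpha*N); the selective
# DFT reproduces those values exactly, but only where they are needed.
rng = np.random.default_rng(3)
N, alpha = 16, 4
img = rng.random((N, N))
padded = np.zeros((alpha * N, alpha * N))
padded[:N, :N] = img
F_pad = np.fft.fft2(padded)
jy, jx = np.array([3, 7, 12]), np.array([1, 5])
F_sel = selective_dft(img, jy / (alpha * N), jx / (alpha * N))
assert np.allclose(F_sel, F_pad[np.ix_(jy, jx)])
```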

## 3. SIMULATIONS

To test the subpixel precision estimation of the illumination pattern, raw data images for two different samples (Siemens star and pirate; see Fig. 3) were simulated, similar to what was done by Wicker *et al.* [7,8]. In the first step, the samples (pixel size of 65 nm) were scaled such that their brightest pixel would have a photon count of $5\times {10}^{1}$, $5\times {10}^{2}$, $5\times {10}^{3}$, or $5\times {10}^{4}$ (four data sets) and the darkest a photon count of zero, so that Poisson noise could be added in the last step. No further offset or noise was added. The illumination patterns of 200 nm fringe spacing are generated based on 20 randomly distributed orientation angles $\gamma $. For each orientation, a set of three phases (0°, 120°, and 240°, each perturbed by a Gaussian-distributed random variation with a standard deviation of 10°) is generated. To maintain the maximum possible photon count in each raw data image, the sinusoidal illumination patterns are scaled such that they vary only in a range from zero to one. This way, 60 raw data images are simulated per set. Each image is convolved with a point spread function, simulated using a 2D distribution based on the Bessel function of the first kind and first order [14], given a numerical aperture of the imaging objective of $\mathrm{NA}=1.4$. The emission wavelength was set to ${\lambda}_{\mathrm{em}}=515\text{\hspace{0.17em}}\mathrm{nm}$. Finally, noise is simulated based on the Poisson distribution. For each raw data image, the parameter estimation was performed as described previously for oversampling factors of $\alpha =1$–$10$. First, the peaks in the Fourier domain that come from the sinusoidal illumination pattern in real space were localized by applying Eq. (13). From those peak positions, the orientation and fringe period of the illumination pattern in each raw data image are calculated using Eqs. (1) and (2).
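The raw-data generation described above can be sketched as follows (names ours; the PSF convolution step of the full simulation is omitted here for brevity):

```python
import numpy as np

def simulate_raw_frame(sample, L_px, gamma_deg, phi, peak_photons, rng):
    """One raw SIM frame: a sinusoidal pattern of fringe spacing L_px pixels
    at orientation gamma_deg, scaled to [0, 1], times the sample scaled to
    peak_photons at its brightest pixel, followed by Poisson noise."""
    Ny, Nx = sample.shape
    y, x = np.mgrid[0:Ny, 0:Nx]
    g = np.deg2rad(gamma_deg)
    pattern = 0.5 * (1.0 + np.cos(
        2 * np.pi * (np.cos(g) * x + np.sin(g) * y) / L_px + phi))
    ideal = sample / sample.max() * peak_photons * pattern
    return rng.poisson(ideal).astype(float)
```

With the paper's 65 nm pixels, the 200 nm fringe spacing corresponds to roughly `L_px` ≈ 200/65 ≈ 3.1 px.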

The orientation $\gamma $ as calculated from the peak location in Fourier space deviates from the actual orientation $\widehat{\gamma}$ by $\mathrm{\Delta}\gamma =|\widehat{\gamma}-\gamma |$. Although only 20 different values for $\gamma $ were used in the simulations, the deviation $\mathrm{\Delta}\gamma $ was calculated for each individual raw data image, because the pattern phase and noise may change the result of the peak localization. Thus, each raw data image yields a value for $\gamma $ and $\mathrm{\Delta}\gamma $. In Fig. 4(a), the mean (solid line) and standard deviation (shaded area) of the 60 values for $\mathrm{\Delta}\gamma $ in a data set are presented for parameter estimation based on peak localization at different oversampling factors $\alpha $ and for the two different samples, for data sets simulated with a maximum photon count of 50. This result does not vary significantly for the other selected photon counts and is thus representative.

Similar to the orientation, the fringe spacing $L$ as calculated per Eq. (2) deviates from the initially set 200 nm. This deviation is calculated as $\mathrm{\Delta}L=\frac{|L-200\text{\hspace{0.17em}}\mathrm{nm}|}{200\text{\hspace{0.17em}}\mathrm{nm}}$ to make the measure independent of scale. The evaluation in Fig. 4(b) shows the mean (solid line) and standard deviation (shaded area) of the 60 values for $\mathrm{\Delta}L$ in a data set. This evaluation is presented for parameter estimation based on peak localization at different oversampling factors $\alpha $ and for the two different samples. The underlying data sets are simulated using a maximum photon count of 50. As for the orientation deviation, this result is representative of all selected photon counts. The pattern phase for each raw data image is calculated with Eq. (12) and the determined value for $\mathbf{p}$. The values for the phases are evaluated following Refs. [7,8]. For each orientation $\gamma $, the deviation of the three phases was calculated as $\delta \varphi =\varphi -\widehat{\varphi}$. The relative error ${E}_{\gamma}^{r}$ for one orientation is then calculated as the standard deviation of all three $\delta {\varphi}_{\gamma}$ in a subset. As described in Refs. [7,8], a global phase offset is rejected this way. The mean (solid line) and standard deviation (shaded area) of ${E}_{\gamma}^{r}$ are shown in Fig. 5 as a function of $\alpha $.

A second evaluation of the estimated phases is shown in Fig. 6. Instead of the relative error ${E}_{\gamma}^{r}$, the absolute error ${E}_{\gamma}^{a}$ is presented. It is calculated as the mean of the absolute values of all three $\delta {\varphi}_{\gamma}$ in a subset. This way a global phase offset is not rejected in the evaluation.
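The difference between the two error measures can be made concrete: the standard deviation is invariant to a common offset added to all three deviations, while the mean absolute value is not (example numbers ours):

```python
import numpy as np

delta_phi = np.deg2rad(np.array([4.0, 5.5, 3.5]))   # example phase deviations
E_r = np.std(delta_phi)              # relative error: rejects a global offset
E_a = np.mean(np.abs(delta_phi))     # absolute error: keeps the offset
offset = np.deg2rad(20.0)
assert np.isclose(np.std(delta_phi + offset), E_r)  # E_r unchanged by offset
assert np.mean(np.abs(delta_phi + offset)) > E_a    # E_a grows with offset
```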

Finally, Fig. 7 shows the influence of the proposed parameter estimation at subpixel precision. A raw data set was simulated as described earlier with three pattern angles of 11°, 71°, and 131° (similar to Shroff *et al.* [9]) and three phases each, at a maximum photon count of $5\times {10}^{4}$. The data sets used to generate actual image reconstructions thus contain only nine raw images. Although the pattern angles are selected such that an isotropic resolution improvement can be achieved, the selected phases are still displaced from an equidistant distribution as described previously. In addition to the SIM raw data, a wide-field image under plain illumination was simulated, and the result of a Wiener deconvolution is shown in Fig. 7(a). The SIM reconstruction based on the actual input parameters yields the optimal result in Fig. 7(b), with the expected resolution enhancement compared to Fig. 7(a). Figures 7(c) and 7(d) show the reconstruction results based on parameter estimation for oversampling factors of 1 and 10. Figures 7(e) and 7(f) show the deviations of the results in Figs. 7(c) and 7(d) from Fig. 7(b), respectively, as the absolute value of the pixel-wise difference. It can be seen that a reconstruction based on subpixel precision parameter estimation at an oversampling factor of 10 yields a result whose deviation from the optimal reconstruction is reduced by almost 1 order of magnitude around sharp edges in the image. For the presented data, an oversampling larger than 10 does not improve the result of the reconstruction in terms of the shown deviation. The same evaluation has been done for the Siemens star, as presented in Fig. 8. Here the absolute deviations of the reconstructions for one- and tenfold oversampling are shown in Figs. 8(a) and 8(b), similar to Figs. 7(e) and 7(f).

## 4. RESULTS AND DISCUSSION

The simulations show that the proposed subpixel precision localization of a maximum in the Fourier domain is suitable for determining the wave vector of a sinusoidal pattern imposed on a structure in real space (Fig. 4). The method achieves the same improvement in precision as a zero-padding approach would, but at a fraction of the computational cost (time and memory), especially at increased oversampling factors. The benefit of this approach shows directly when performing phase estimation on 2D SIM data using Eq. (12).

When the noniterative phase estimation was introduced [8], its feasibility was demonstrated in comparison with an iterative approach [7] that determines the relative phases in a data set, and a reference to [2] and [15] was given for the estimation of the global phase offset needed to obtain absolute phases. Figure 5 shows that the quality of the phase estimation does not change significantly with increasing precision of the wave vector estimation in the tested range, and the results, at least for the pirate sample, correspond well to the published results in Figs. 2(a) and 2(b) of Ref. [8] for the single-step approach at maximum expected photon counts of $5\times {10}^{1}$, $5\times {10}^{2}$, $5\times {10}^{3}$, and $5\times {10}^{4}$. Disregarding a global phase offset, localizing the pattern wave vector to conventional pixel precision is sufficient, as the error drops below 1° for realistic SNRs.

However, the absolute phase error shown in Fig. 6 suggests that if Eq. (12) is to be applied for phase estimation, the presented method is of major importance. The high-precision calculation of the wave vector based on subpixel peak localization enables phase estimation without an additional step to find a global phase offset. A difference in behavior between the two samples is visible: the Siemens star sample enables good phase detection even at low oversampling, with phase estimation performing well except at the lowest simulated SNR, whereas the more general pirate sample shows significant improvement with increased oversampling.

Subpixel precision peak localization in the Fourier domain for determining the orientation and fringe spacing of sinusoidal patterns in real space has been presented, and its feasibility has been demonstrated on simulated data. In particular, the application of the results in an established method for phase estimation shows that this is a viable route to parameter estimation in sinusoidal structured illumination microscopy for comparably fine (i.e., 200 nm) illumination patterns.

The benefits are twofold: the whole workflow of determining the necessary parameters for SIM reconstruction requires neither prior knowledge of the phase distribution or the pattern wave vector nor any iterative processes. Furthermore, it was demonstrated that the deviation from the optimal reconstruction can be reduced by about 1 order of magnitude using the subpixel peak detection method, thus producing a more accurate result. In addition, the savings in computational time provided by the proposed subpixel method will be beneficial for real-time reconstruction of SIM images whenever the pattern parameters need to be estimated prior to reconstruction.

## Funding

H2020 European Research Council (ERC) (336716); Norwegian Centre for International Cooperation in Education, SIU Norway (INCP-2014/10024); University Grants Commission (UGC), India.

## Acknowledgment

The authors would like to thank Krishna Agarwal for the scientific review of the manuscript. BSA acknowledges the ERC and the Norwegian Centre for International Cooperation in Education, SIU Norway. KK acknowledges funding from the University Grants Commission (UGC), India.

## REFERENCES

**1. **E. Abbe, “Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung,” Archiv für mikroskopische Anatomie **9**, 413–418 (1873). [CrossRef]

**2. **M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. **198**, 82–87 (2000). [CrossRef]

**3. **R. Heintzmann and C. G. Cremer, “Laterally modulated excitation microscopy: Improvement of resolution by using a diffraction grating,” Proc. SPIE **3568**, 185–196 (1999). [CrossRef]

**4. **M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. **94**, 4957–4970 (2008). [CrossRef]

**5. **P. Křížek, T. Lukeš, M. Ovesný, K. Fliegel, and G. M. Hagen, “SIMToolbox: A MATLAB toolbox for structured illumination fluorescence microscopy,” Bioinformatics **32**, 318–320 (2015).

**6. **M. Müller, V. Mönkemöller, S. Hennig, W. Hübner, and T. Huser, “Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ,” Nat. Commun. **7**, 10980 (2016). [CrossRef]

**7. **K. Wicker, O. Mandula, G. Best, R. Fiolka, and R. Heintzmann, “Phase optimisation for structured illumination microscopy,” Opt. Express **21**, 2032–2049 (2013). [CrossRef]

**8. **K. Wicker, “Non-iterative determination of pattern phase in structured illumination microscopy using auto-correlations in Fourier space,” Opt. Express **21**, 24692–24701 (2013). [CrossRef]

**9. **S. A. Shroff, J. R. Fienup, and D. R. Williams, “Phase-shift estimation in sinusoidally illuminated images for lateral superresolution,” J. Opt. Soc. Am. A **26**, 413–424 (2009). [CrossRef]

**10. **S. A. Shroff, J. R. Fienup, and D. R. Williams, “Lateral superresolution using a posteriori phase shift estimation for a moving object: Experimental results,” J. Opt. Soc. Am. A **27**, 1770–1782 (2010). [CrossRef]

**11. **R. Soummer, L. Pueyo, A. Sivaramakrishnan, and R. J. Vanderbei, “Fast computation of Lyot-style coronagraph propagation,” Astrophys. J. Suppl. Ser. **167**, 81–99 (2006). [CrossRef]

**12. **M. Singh and K. Khare, “Accurate efficient carrier estimation for single-shot digital holographic imaging,” Opt. Lett. **41**, 4871–4874 (2016). [CrossRef]

**13. **L. Condat, J. Boulanger, N. Pustelnik, S. Sahnoun, and L. Sengmanivong, “A 2-D spectral analysis method to estimate the modulation parameters in structured illumination microscopy,” in *IEEE 11th International Symposium on Biomedical Imaging (ISBI)*, 2014, vol. 11, pp. 604–607.

**14. **E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. Le Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photonics **6**, 312–315 (2012). [CrossRef]

**15. **J. T. Frohn, “Super-resolution fluorescence microscopy by structured light illumination,” Ph.D. dissertation (Swiss Federal Institute of Technology, 2000).