## Abstract

Structured illumination (SI) has long been regarded as a nonquantitative technique for obtaining sectioned microscopic images. Its lack of quantitative results has restricted the use of SI sectioning to qualitative imaging experiments, and has also limited researchers’ ability to compare SI against competing sectioning methods such as confocal microscopy. We show how to modify the standard SI sectioning algorithm to make the technique quantitative, and provide formulas for calculating the noise in the sectioned images. The results indicate that, for an illumination source providing the same spatially-integrated photon flux at the object plane, and for the same effective slice thicknesses, SI sectioning can provide higher-SNR images than confocal microscopy when the modulation contrast exceeds about 0.09.

© 2011 Optical Society of America

## Corrections

Ting Ai Chen, Nathan Hagen, Liang Gao, and Tomasz S. Tkaczyk, "Quantitative sectioning and noise analysis for structured illumination microscopy: erratum," Opt. Express **23**, 27633–27634 (2015)

https://www.osapublishing.org/oe/abstract.cfm?uri=oe-23-21-27633

## 1. Introduction

Structured illumination (SI) is an optical sectioning technique compatible with widefield imaging microscopy, and has been shown to provide a depth resolution comparable to confocal microscopy [1]. Since its invention [2], SI microscopy has been widely used as a sectioning tool in bioimaging research, both at the cellular level (3D imaging of the cellular nuclear periphery [3], cellular fenestrations [4], and tubulin and kinesin dynamics [5]) and at the tissue level (imaging of autofluorescent aggregations in the human eye [6], zebrafish development [7], and rat colonic mucosa [8]). In addition, SI has also been used as a super-resolution technique to break the diffraction limit [5, 9–11]. SI thus has the ability to maintain the high light collection capability of widefield imaging [12, 13] while also removing out-of-plane light. It has, however, been criticized as being a non-quantitative technique, and for producing noisy data in comparison to the sectioned images derived from confocal microscopes. The analysis below shows that SI can easily be made quantitative by properly scaling the standard sectioning algorithm, and we also provide analytical expressions for the resulting noise in SI-sectioned images. Although Somekh *et al.* [14, 15] provide numerical simulations of the noise properties of SI-sectioned images, they use a non-quantitative algorithm and do not include the effects of out-of-focus light on the noise.

Quantitative sectioned images allow one to perform photon counting as if the regions above and below the sectioned layer were not present. While the resulting photon number estimate will be noisier than would be the case for imaging the slice without the out-of-focus layers present, the mean value given by the correctly scaled algorithm will equal the mean photon count one would obtain with a standard widefield microscope. This permits researchers to use standard methods [16] of correcting for the objective lens’ numerical aperture, optical transmission, and detector quantum efficiency to determine photon counts at the sectioned plane relative to the number of photoelectrons detected at the sensor plane. Obtaining quantitative data with SI sectioning thus gives researchers the ability to perform measurements of radiance, absolute reflectance, fluorophore quantum yield, and absolute fluorophore concentration within volumetric media [17, 18].

Finally, the analytical formulas for noise also enable us to roughly define an operational range for the modulation contrast at the section plane, such that for any contrast above this value one can expect SI to provide higher SNR data than confocal microscopy. For any contrast below this value, confocal imaging will out-perform SI.

## 2. Sectioning algorithm

The general approach in SI is to illuminate the object with a sinusoidal illumination pattern of the form [2]

$$s_i(x,y) = \frac{1}{2}\left[1 + m\cos\left(\nu x - \phi_i\right)\right], \qquad i = 1, 2, 3, \tag{1}$$

at each of three spatial phases $\phi_1 = 0$, $\phi_2 = 2\pi/3$, and $\phi_3 = 4\pi/3$. Although other structured forms are possible [19], this one is particularly easy to implement. The quantity *m* is the modulation contrast (a number varying from 0 to 1), and *ν* is the modulation spatial frequency. The factor of 1/2 placed in front, not present in previous work, is used here in order to represent the fact that half of the illumination light is absorbed or reflected by the grid placed in the illumination path. If we take the limit *m* → 0, we obtain standard widefield illumination at half the intensity that one would obtain without the grid in place. Note that $s_i$ represents a *normalized* illumination amplitude, ranging from 0 to 1.
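As a quick numerical check of the illumination model (a minimal sketch; the parameter values for *m* and *ν* are invented for illustration):

```python
import numpy as np

# Sketch: the three phase-shifted illumination patterns, with the 1/2
# prefactor modeling the light lost at the grid. Parameter values are
# illustrative only.
m, nu = 0.5, 2 * np.pi * 5          # modulation contrast, spatial frequency
x = np.linspace(0, 1, 1000)
phases = [0, 2 * np.pi / 3, 4 * np.pi / 3]
s = [0.5 * (1 + m * np.cos(nu * x - phi)) for phi in phases]

# Each pattern is a normalized amplitude lying in [0, 1] ...
assert all(si.min() >= 0 and si.max() <= 1 for si in s)
# ... and the three patterns sum to 3/2 everywhere, i.e. the average
# illumination is 1/2 -- half of the grid-free widefield level.
assert np.allclose(s[0] + s[1] + s[2], 1.5)
```

The second assertion is the quantitative content of the 1/2 prefactor: averaged over the three phases, the object sees exactly half the unstructured illumination.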

Ignoring the effects of optical blurring, the resulting modulated images $g_i(x,y)$ are given by

$$g_i(x,y) = s_i(x,y)\, f(x,y) + \tfrac{1}{2}\, d(x,y), \tag{2}$$

in terms of the in-focus light *f*(*x,y*) and the out-of-focus light *d*(*x,y*), both scaled to what one would obtain with standard widefield imaging. For fluorescence imaging, the absolute brightness *f* of the object contains the illumination irradiance *I*, the fluorophore quantum yield *q*, and a factor Ω resulting from integrating the angular distribution of fluorescence emission over the numerical aperture of the imaging optics: $f_{\text{fluor}} = Iq\Omega$. For brightfield imaging, *f* is simply the illumination *I* multiplied by the object reflectance *R*: $f_{\text{bright}} = IR$. Thus, the expression for $g_i$ is valid for the cases of both fluorescence imaging and brightfield imaging, but with a subtle difference in what *f* means for each case.

In order to obtain an optically sectioned image *i*(*x,y*) at the focal plane, the most common algorithm used is [1, 2, 8, 20–31]

$$i = \sqrt{(g_1 - g_2)^2 + (g_2 - g_3)^2 + (g_3 - g_1)^2}. \tag{3}$$

Since the algorithm (3) operates on each pixel independently, we have dropped the spatial arguments (*x,y*) as unnecessary. (These can be added back into each equation at any point.) Inserting Eqs. (1) into (2) and applying trigonometric identities, we obtain the result

$$\sqrt{(g_1 - g_2)^2 + (g_2 - g_3)^2 + (g_3 - g_1)^2} = \frac{3m}{2\sqrt{2}}\, f, \tag{4}$$

since the out-of-focus light *d*(*x,y*) does not change with a shift in the illumination pattern. Rescaling to compensate for both the modulation contrast and the 1/2 illumination scaling gives the quantitative sectioning algorithm

$$i = \frac{2\sqrt{2}}{3m}\sqrt{(g_1 - g_2)^2 + (g_2 - g_3)^2 + (g_3 - g_1)^2} = f. \tag{5}$$

A consequence of this result is that in order to obtain quantitative results for the sectioned image, one must estimate the modulation contrast *m*. A further assumption required is that of linearity, which in fluorescence imaging is limited to weakly fluorescent structures [1].
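To make the algebra concrete, here is a minimal noiseless simulation of the quantitative reconstruction, assuming the scale factor $2\sqrt{2}/(3m)$ discussed in the text; the object and background profiles are invented:

```python
import numpy as np

# Sketch of the quantitative sectioning algorithm on noiseless synthetic
# data: modulated images g_i = s_i*f + d/2, then the root-sum-square of
# pairwise differences, rescaled by 2*sqrt(2)/(3m), recovers the slice f.
m, nu = 0.7, 2 * np.pi * 8
x = np.linspace(0, 1, 2000)
f = 100.0 * np.exp(-((x - 0.5) ** 2) / 0.02)   # in-focus slice brightness
d = 40.0 * np.ones_like(x)                     # out-of-focus background

phases = [0, 2 * np.pi / 3, 4 * np.pi / 3]
g = [0.5 * (1 + m * np.cos(nu * x - phi)) * f + 0.5 * d for phi in phases]

i_sec = (2 * np.sqrt(2) / (3 * m)) * np.sqrt(
    (g[0] - g[1]) ** 2 + (g[1] - g[2]) ** 2 + (g[2] - g[0]) ** 2
)
# The background d drops out and the slice f is recovered quantitatively.
assert np.allclose(i_sec, f)
```

Note that the recovered image is independent of *d*: shifting the illumination phase leaves the out-of-focus term unchanged, so it cancels in the pairwise differences.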

The scale factor in front of the square root differs from that given in previous studies. The factor of 2 in the numerator appears as a result of the 1/2 scaling introduced into our definition of $s_i(x,y)$, and is thus new. All previous authors have also assumed ideal modulation (*m* = 1), an assumption which introduces a large error into the quantitative result. Moreover, since the modulation *m*(*x,y*) is in general spatially varying, the error introduced is generally not a simple scalar factor for the whole image. As a whole, the literature shows wide disagreement over the appropriate scale factor to place in front of the square root. Refs. [20, 22, 26, 28, 32, 33] use $\sqrt{2}/3$, which is appropriate when *m* = 1 and the factor of 1/2 in *s*(*x,y*) is not used. Refs. [2, 23–25, 27, 29] use a scale factor of 1, which is the most appropriate choice for a non-quantitative approach, while other authors use alternative factors such as $1/(3\sqrt{2})$ [8], $1/\sqrt{2}$ [1, 31], or $3\sqrt{2}$ [21] without explanation.

In practice, one finds that even for ideal samples *m* cannot achieve the maximum value of 1. The modulation contrast, however, remains excellent (*m* > 0.5) in thin samples in which the sectioned plane is taken near the surface, but poor (*m* < 0.1) in dense tissue samples (in which multiple scattering is present) and in deeper layers of thinly scattering media.

## 3. Widefield image algorithm

In addition to the sectioned image algorithm (5), it is well known that one can form a widefield image representation $i_w(x,y)$ from the modulated images by

$$i_w = \frac{2}{3}\left(g_1 + g_2 + g_3\right). \tag{6}$$

(The factor of 2/3 compensates for the 1/2 scaling in the illumination *s*(*x,y*).) Once again inserting the formulas for the illumination and modulated images, Eqs. (1) and (2), we obtain

$$i_w = f + d. \tag{7}$$

The widefield image is a simple sum of the sectioned plane and the out-of-focus contribution.
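A minimal numerical sketch of the widefield reconstruction, assuming the three-image sum is scaled by 2/3 (the factor that undoes the 1/2 illumination scaling); all sample values are invented:

```python
import numpy as np

# Sketch: the widefield reconstruction (2/3)*(g1+g2+g3) returns the sum
# of in-focus and out-of-focus light, independent of the contrast m.
m, nu = 0.7, 2 * np.pi * 8
x = np.linspace(0, 1, 500)
f = np.random.default_rng(0).uniform(10, 100, x.size)  # slice light
d = 25.0                                               # uniform background

g = [0.5 * (1 + m * np.cos(nu * x - phi)) * f + 0.5 * d
     for phi in (0, 2 * np.pi / 3, 4 * np.pi / 3)]
i_w = (2 / 3) * (g[0] + g[1] + g[2])

# The modulation terms cancel across the three phases, leaving f + d.
assert np.allclose(i_w, f + d)
```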

For quantitative work, we also want to estimate the variance of the widefield image, for which we insert Eqs. (1), (2), and (6) into the standard variance formula $\text{var}(i_w) = \langle i_w^2\rangle - \langle i_w\rangle^2$ and solve. Here the angle brackets $\langle\cdot\rangle$ represent an expectation value. The first term in the variance formula is the second moment $\langle i_w^2\rangle = \frac{4}{9}\langle(g_1 + g_2 + g_3)^2\rangle$, where we assume that *d* and *f* are independent of one another, so that terms such as $\langle d + f\rangle = \langle d\rangle + \langle f\rangle$ and $\langle df\rangle = \langle d\rangle\langle f\rangle$ can be simplified. Because the $s_i$ represent a normalized illumination distribution, the stochastic properties of the system are present only within the out-of-focus light *d* and the slice’s light distribution *f*, and not in the illumination *s*.

The second moment $\langle i_w^2\rangle$ thus separates into four terms, each of which can be considered separately. Using trigonometric identities to evaluate the modulation factor in each term, and making use of $\langle d^2\rangle = \langle d\rangle^2 + \text{var}(d)$ (and likewise for *f*), we obtain the result $\langle i_w^2\rangle = (\langle d\rangle + \langle f\rangle)^2 + \text{var}(d) + \text{var}(f)$.

The second term in the variance formula is easily obtained from Eq. (7) as $\langle i_w\rangle = \langle d\rangle + \langle f\rangle$. Subtracting $\langle i_w\rangle^2$ from the second moment then gives $\text{var}(i_w) = \text{var}(d) + \text{var}(f)$: the variance of the widefield image is also a simple sum of the component variances.

## 4. Variance and SNR of sectioned images

Next we follow the same procedure for the sectioning algorithm (5) to obtain the variance of the sectioned image, $\text{var}(i) = \langle i^2\rangle - \langle i\rangle^2$. The second term in the variance formula is easily obtained from Eqs. (4) and (5) as $\langle i\rangle^2 = \langle f\rangle^2$. The first term can be obtained by inserting Eq. (6), giving the second-moment expression Eq. (13), keeping in mind that each $g_i$ provides different samples $d_i$ and $f_i$ of the stochastic out-of-focus and slice light distributions, such that terms like $\langle d_1^2 - d_1 d_2\rangle$ will be nonzero. That is, since $d_1$ is independent of $d_2$, we can write $\langle d_1 d_2\rangle = \langle d_1\rangle\langle d_2\rangle$, whereas $\langle d_1^2\rangle = \langle d_1\rangle^2 + \text{var}(d_1)$. Once all cross-terms are eliminated from inside an expectation value, one can then take $\langle d_i\rangle \to \langle d\rangle$ and $\text{var}(d_i) = \text{var}(d)$, and likewise for the $f_i$ as well. Evaluating the quadratic terms inside the expectation value of Eq. (13) in this way, with $\langle d_i\rangle \to \langle d\rangle$ and $\langle d_i^2\rangle \to \langle d\rangle^2 + \text{var}(d)$, we obtain the sectioned image variance, Eq. (14), in terms of $\text{var}(d)$, $\text{var}(f)$, and the mean values of *d* and *f*. (Recall that *f* represents the light obtained from a single standard widefield image of just the planar slice itself.) Both terms contain a dependence on the modulation contrast, so that as the modulation approaches zero (*m* → 0), the variance in the sectioned image increases without bound, as we should expect.

The theoretical expression for the sectioned image variance, Eq. (14), indicates that when the illumination produces ideal modulation contrast at the slice plane (*m* = 1), the sectioning algorithm amplifies the noise in the slice image by a factor of $\sqrt{2}$ (in standard deviation) relative to the stochastic noise in *f*. In most cases, however, a large out-of-focus contribution is present, and this not only reduces the modulation contrast but adds to the noise as well. The modulation *m* then becomes small, and in this regime the variance grows as $1/m^2$.

## 5. Estimating the modulation contrast

The quantitative sectioning algorithm (5) requires knowing the modulation contrast *m* in order to properly scale the result. This value is generally not known *a priori* and so must be estimated, but one can use the modulated images themselves to provide the estimated value *m̂*. For each of the three modulated images, we obtain a “modulation map” by normalizing the modulated images by the widefield algorithm, $\mu_i = g_i / i_w$. Since $\mu_i$ can also be written as ${\mu}_{i}=\frac{1}{2}+\frac{m}{2}\text{cos}\left[\nu x-{\varphi}_{i}\right]$, subtracting 1/2 from $\mu_i$ produces a result proportional to *m*. An estimate of *m* is thus given by

$$\hat{m} = \sqrt{\tfrac{8}{3}}\left[\left(\mu_1 - \tfrac{1}{2}\right)^2 + \left(\mu_2 - \tfrac{1}{2}\right)^2 + \left(\mu_3 - \tfrac{1}{2}\right)^2\right]^{1/2}. \tag{15}$$
As an example, we measured the three modulated images *g*_{1}, *g*_{2}, and *g*_{3} on a fluorescent bead sample and used the algorithm in Eq. (15) to obtain the estimated modulation *m̂*(*x,y*) at every pixel in the image. Our experimental setup (see Sec. 6.1) achieves approximately uniform modulation across the image, and so if we first threshold the image *i*_{w} to prevent noisy pixels from skewing the estimate, we obtain a histogram of *m̂* across the image, as shown in Fig. 1. The histogram suggests that the pixel-by-pixel estimate of *m* will be quite noisy, so that a much more accurate estimate can be achieved by averaging *m̂* across the image. Or, if there is a significant contribution of outliers, one can use the histogram median as a more robust estimate. For Fig. 1, the mean and median values are *m̂* = 0.48 and 0.49 respectively. We can note, however, that taking the mean or median is only valid for homogeneous samples.

From a theoretical standpoint, the roughly Gaussian shape to the histogram is expected, but the long tail at the lower values of *m̂* is not, and may be the result of some beads aggregating together to create a thicker layer at some locations within the image. If the thickness exceeds the sectioning depth, then the modulation contrast will drop.

While Fig. 1 shows that the *median* modulation contrast can be accurately estimated from a single image, the use of Eq. (15) to estimate *m*(*x,y*) requires exceedingly high signal-to-noise ratio images in order to achieve a reasonable accuracy for every pixel in the image. Its practical use, therefore, requires collecting a large number of photons (perhaps by summing a sequence of static images) or some kind of spatial processing in order to reduce the effects of noise.

## 6. Experimental results

In order to test our theoretical results, we conducted several experiments on a Zeiss Axio Imager Z1 microscope equipped with an Apotome module, a Zeiss AxioCam MRm monochromatic camera (1388 × 1040 pixels), and an HB-100 mercury lamp illumination source. The objective lens used for all of the experiments is a Zeiss Plan-Apochromatic 20× objective (NA = 0.8). In order to compare measurements against theory, we use the ratio *r* of the measured noise to the estimated shot noise. Whereas the measured noise is obtained by taking the standard deviation of a sequence of 1000 measurements, the photon shot noise standard deviation *σ*_{p} is estimated by taking the square root of the mean number of photons collected. For a standard widefield measurement, this ratio should be close to 1, but for SI-sectioned images the noise is larger than one would expect from the shot noise alone, so that the theory predicts *r* > 1.

The first step of the experiment involves measuring the camera gain in order to scale digital counts to detected photoelectrons. This involves imaging a uniformly illuminated field (created by Köhler illumination) at the microscope sample stage with different illumination intensities. To remove the effects of pixel response nonuniformity, we implemented the following procedure [34]:

1. At each illumination intensity, two widefield images *I*_{1} and *I*_{2} are acquired.
2. The standard deviation *σ*_{c} is calculated for a 200 × 200 pixel area in the difference image *I*_{1} − *I*_{2}.
3. The signal variance is calculated by $\text{var}({f}_{\text{c}})={\sigma}_{\text{c}}^{2}/2$. The scaling factor of 2 accounts for the increased noise due to the image subtraction operation in Step 2.

Note that *f*_{c} and *σ*_{c} here indicate the measured intensity and standard deviation in units of counts rather than photons. The resulting measured signal vs. variance at different illumination intensities is shown in Fig. 2, and gives an estimated gain *g* = 4.1 photons/count.
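The gain-measurement procedure can be sketched with a small Monte Carlo simulation (a sketch, not the actual instrument data; the 4.1 photons/count value is reused only as the simulated ground truth):

```python
import numpy as np

# Monte Carlo sketch of the two-image gain measurement: simulate a
# Poisson-limited sensor with gain g photons/count, difference two frames
# to cancel fixed-pattern nonuniformity, and recover g from the
# signal-vs-variance slope.
rng = np.random.default_rng(1)
g_true = 4.1                                     # photons per count

means, variances = [], []
for photons in [2e3, 5e3, 1e4, 2e4, 5e4]:        # illumination levels
    I1 = rng.poisson(photons, (200, 200)) / g_true
    I2 = rng.poisson(photons, (200, 200)) / g_true
    sigma_c = np.std(I1 - I2)
    means.append(0.5 * np.mean(I1 + I2))         # mean signal, counts
    variances.append(sigma_c**2 / 2)             # factor 2: subtraction

# For Poisson light, var(counts) = mean(counts)/g, so the slope is 1/g.
slope = np.polyfit(means, variances, 1)[0]
g_est = 1 / slope
assert abs(g_est - g_true) / g_true < 0.05
```

The 200 × 200 window size matches Step 2 above; with 40,000 pixels per level, the variance estimates are accurate to well under a percent.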

In order to provide a baseline reference for later SI noise measurements, we imaged a microscope slide containing a sparse layer of fluorescent beads (Molecular Probes Fluosphere F8853, peak emission at 515 nm, 2 *μ*m diameter) in widefield mode (*i.e.* without the structured grid placed in the illumination path). To prepare a uniformly distributed sample, the fluorescent beads were suspended by vortex mixing and sonicated. The suspension was then dropped onto a microscope slide and sealed with a cover slip. A total of 1000 images of the sample were acquired in a time-sequenced experiment. The measured mean fluorescent intensity ⟨*f*⟩ of the fluorescent beads is 4.33 × 10^{4} counts (obtained by summing all pixels at the bead location), and the standard deviation *σ _{n}* of the fluorescent intensity is 112 counts. The resulting ratio of the measured noise to the photon shot noise, $r = \sigma_n\sqrt{g/\langle f\rangle}$, is very close to 1, so we can say that the imaging system is shot-noise limited.
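As a quick arithmetic check (our reading of the ratio in photoelectron units, using the numbers quoted above):

```python
import math

# Convert counts to photoelectrons with the measured gain, then form the
# ratio of measured noise to the Poisson shot-noise estimate.
g = 4.1            # photons/count (measured gain)
mean_f = 4.33e4    # mean bead intensity, counts
sigma_n = 112.0    # measured standard deviation, counts

# r = (g*sigma_n) / sqrt(g*mean_f) = sigma_n * sqrt(g/mean_f)
r = sigma_n * math.sqrt(g / mean_f)
assert abs(r - 1.09) < 0.01   # close to 1: shot-noise limited
```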

*6.1. Sectioned imaging of 2 μm fluorescent beads without out-of-focus light*

We first measured the axial PSF of the SI-sectioned measurements by imaging sub-resolution green fluorescent nanoparticles (175 nm diameter spheres, from Invitrogen), using the software-recommended VL grid (17.5 lines/mm) on the Apotome. The FWHM of the resulting measured axial PSF is 3.6 *μ*m, indicating that our fluorescent bead sample (peak emission at 515 nm, 2 *μ*m diameter) is sufficiently thin that no out-of-focus light will be present. For the quantitative algorithm, we estimate the modulation contrast for this setup using the data shown in Fig. 1, giving *m̂* = 0.49 from the median of the distribution. A total of 1000 sectioned images were acquired in a time-sequenced experiment. The mean intensity ⟨*f*⟩ and its standard deviation *σ*_{m} are calculated for 5 different beads, with results shown in Table 1. The ratio of the measured noise to the photon noise is calculated in the same way as above, while the theoretical value of *r*, assuming Poisson noise, is obtained from the sectioned-image variance, Eq. (14).

The calculated widefield image *i*_{w} of the same sample is obtained by algorithm (6). The mean intensity 〈*i*_{w}〉, standard deviation *σ*_{m} and ratio *r* are calculated for the same five beads, with results shown in Table 1. The measured value of the noise ratio *r* = 1.04 closely corresponds to the theoretical result of *r* = 1 obtained from Eq. (12).

*6.2. Sectioned imaging of 6 μm fluorescent beads containing out-of-focus light*

In order to measure noise amplification in SI in the case when out-of-focus light is present, we imaged a sample of 6 *μ*m diameter green fluorescent beads. Since the sectioning thickness of the Apotome in this setup is 3.6 *μ*m, there will be some out-of-focus light present. A total of 1000 sectioned images were acquired in a time-sequenced experiment. The mean intensity ⟨*f*⟩ and its standard deviation *σ*_{m} are calculated for five different beads, with results shown in Table 2. The theoretical value of the ratio *r*, which we write as *r̂*, is obtained from Eq. (14), in which we use ⟨*i*_{w}⟩ = ⟨*d*⟩ + ⟨*f*⟩, let *m̂* = 0.49, and substitute the mean ⟨*i*_{w}⟩ with its measured value *i*_{w}. The resulting *r̂* values for the five beads are shown in Table 2, and indicate a close correspondence with the experimentally measured values of *r*. Note that the variation in *r̂* among the five beads may be due to a variation in the relative amount of fluorescence emitted by each bead from within the sectioned plane versus that emitted from outside the sectioned plane.

## 7. Conclusion

Although it has often been argued that structured illumination sectioning microscopy is a non-quantitative technique, we have shown that a quantitative version of the algorithm can be obtained by adding a proper scaling factor. Quantitative scaling does require that one estimate the modulation contrast *m*, and this adds an extra step of complexity, but Eq. (15) provides a simple means of obtaining such an estimate. A consequence of ignoring this scaling factor, as published sectioning algorithms have done up to this point, is that in *z*-stack volumetric images (*x,y,z*) the deeper layers will appear artificially darkened. Because the SI approach removes out-of-focus light from the sectioned image only after detection, however, it suffers from the shot noise of both the sectioned image *and* all out-of-focus planes. While this has long been known, little quantitative information has been available relating the noise amplification in SI microscopy to the out-of-focus light and other imaging parameters.

The theoretical analysis given above has made no assumptions about the properties of the noise other than to assume that the variance is the primary quantity of interest, and thus the results remain valid across all noise regimes (read noise limited, shot noise limited, etc.).

The noise amplification indicated by the variance result may be taken as an argument that SI sectioning is a poor substitute for confocal sectioning due to the loss in SNR. But this is not the whole story. The compatibility of SI with widefield imaging also allows orders of magnitude greater light throughput than that achievable by confocal microscopy, such that one can use lower-intensity light sources and still obtain 100–200× increases in photon collection above that of scanning laser illumination [12, 13]. In this case, taking 150 as a representative value for the increased light collection, SI sectioning can provide better SNR than confocal sectioning when the modulation contrast exceeds about 0.09.

An additional advantage of SI microscopy is its ability to reject any residual DC light, such as that generated by stray light or reflections within the optical system, though this comes at an SNR penalty. Whether SI sectioning or confocal sectioning produces better SNR images depends on the microscope setup and the object under analysis, but our theoretical results provide support for the common empirical observation that SI sectioning gives lower quality results when imaging deep within tissue.

## Acknowledgments

This work was supported in part by the National Institute of Health under grants R01-CA124319 and R21-EB009186.

## References and links

**1. **D. Karadaglić and T. Wilson, “Image formation in structured illumination wide-field fluorescence microscopy,” Micron **39**, 808–818 (2008). [CrossRef]

**2. **M. A. A. Neil, R. Juškaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. **22**, 1905–1907 (1997). [CrossRef]

**3. **L. Schermelleh, P. M. Carlton, S. Haase, L. Shao, L. Winoto, P. Kner, B. Burke, M. C. Cardoso, D. A. Agard, M. G. L. Gustafsson, H. Leonhardt, and J. W. Sedat, “Subdiffraction multicolor imaging of the nuclear periphery with 3D structured illumination microscopy,” Science **320**, 1332–1336 (2008). [CrossRef] [PubMed]

**4. **V. C. Cogger, G. P. McNerney, T. Nyunt, L. D. DeLeve, P. McCourt, B. Smedsrød, D. G. L. Couteur, and T. R. Huser, “Three-dimensional structured illumination microscopy of liver sinusoidal endothelial cell fenestrations,” J. Struct. Biol. **171**, 382–388 (2010). [CrossRef] [PubMed]

**5. **P. Kner, B. B. Chhun, E. R. Griffis, L. Winoto, and M. G. L. Gustafsson, “Super-resolution video microscopy of live cells by structured illumination,” Nat. Methods **6**, 339–344 (2009). [CrossRef]

**6. **G. Best, R. Amberger, D. Baddeley, T. Ach, S. Dithmar, R. Heintzmann, and C. Cremer, “Structured illumination microscopy of autofluorescent aggregations in human tissue,” Micron **42**, 330–335 (2011). [CrossRef]

**7. **P. J. Keller, A. D. Schmidt, A. Santella, K. Khairy, Z. Bao, J. Wittbrodt, and E. H. K. Stelzer, “Fast, high-contrast imaging of animal development with scanned light sheet-based structured-illumination microscopy,” Nat. Methods **7**, 637–645 (2010). [CrossRef]

**8. **N. Bozinovic, C. Ventalon, T. Ford, and J. Mertz, “Fluorescence endomicroscopy with structured illumination,” Opt. Express **16**, 8016–8026 (2008). [CrossRef] [PubMed]

**9. **R. Heintzmann and C. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” in EUROPTO Conference on Optical Microscopy, Proc. SPIE **3568**, 185–196 (1999). [CrossRef]

**10. **M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. **198**, 82–87 (2000). [CrossRef] [PubMed]

**11. **L. M. Hirvonen, K. Wicker, O. Mandula, and R. Heintzmann, “Structured illumination microscopy of a living cell,” Eur. Biophys. J. **38**, 807–813 (2009). [CrossRef] [PubMed]

**12. **J. M. Murray, P. L. Appleton, J. R. Swedlow, and J. C. Waters, “Evaluating performance in three-dimensional fluorescence microscopy,” J. Microsc. **228**, 390–405 (2007). [CrossRef] [PubMed]

**13. **L. Gao, N. Bedard, N. Hagen, R. T. Kester, and T. S. Tkaczyk, “Depth-resolved image mapping spectrometer (IMS) with structured illumination,” Opt. Express **19**, 17439–17452 (2011). [CrossRef] [PubMed]

**14. **M. G. Somekh, K. Hsu, and M. C. Pitter, “Resolution in structured illumination microscopy: a probabilistic approach,” J. Opt. Soc. Am. A **25**, 1319–1329 (2008). [CrossRef]

**15. **M. G. Somekh, K. Hsu, and M. C. Pitter, “Stochastic transfer function for structured illumination microscopy,” J. Opt. Soc. Am. A **26**, 1630–1637 (2009). [CrossRef]

**16. **K. M. Kedziora, J. H. M. Prehn, J. Dobruck, and T. Bernas, “Method of calibration of a fluorescence microscope for quantitative studies,” J. Microsc. **244**, 101–111 (2011). [CrossRef] [PubMed]

**17. **J. C. Waters, “Accuracy and precision in quantitative fluorescence microscopy,” J. Cell Biol. **185**, 1135–1148 (2009). [CrossRef]

**18. **A. Esposito, S. Schlachter, G. S. Schierle, A. D. Elder, A. Diaspro, F. S. Wouters, C. F. Kaminski, and A. I. Iliev, “Quantitative fluorescence microscopy techniques,” Methods Mol. Biol. **586**, 117–142 (2009). [CrossRef] [PubMed]

**19. **R. Heintzmann and P. A. Benedetti, “High-resolution image reconstruction in fluorescence microscopy with patterned excitation,” Appl. Opt. **45**, 5037–5045 (2006). [CrossRef] [PubMed]

**20. **M. A. A. Neil, T. Wilson, and R. Juškaitis, “A light efficient optical sectioning microscope,” J. Microsc. **189**, 114–117 (1998). [CrossRef]

**21. **M. J. Cole, J. Siegel, S. E. D. Webb, R. Jones, K. Dowling, M. J. Dayel, D. Parsons-Karavassis, P. M. W. French, M. J. Lever, L. O. D. Sucharov, M. A. A. Neil, R. Juvskaitis, and T. Wilson, “Time-domain whole-field fluorescence lifetime imaging with optical sectioning,” J. Microsc. **203**, 246–257 (2001). [CrossRef] [PubMed]

**22. **L. H. Schaefer, D. Schuster, and J. Schaffer, “Structured illumination microscopy: artefact analysis and reduction utilizing a parameter optimization approach,” J. Microsc. **216**, 165–174 (2004). [CrossRef] [PubMed]

**23. **L. G. Krzewina and M. K. Kim, “Single-exposure optical sectioning by color structured illumination microscopy,” Opt. Lett. **31**, 477–479 (2006). [CrossRef] [PubMed]

**24. **A. L. Barlow and C. J. Guerin, “Quantization of widefield fluorescence images using structured illumination and image analysis software,” Microsc. Res. Tech. **70**, 76–84 (2007). [CrossRef]

**25. **F. Chasles, B. Dubertret, and A. C. Boccara, “Optimization and characterization of a structured illumination microscope,” Opt. Express **15**, 16130–16141 (2007). [CrossRef] [PubMed]

**26. **S. D. Konecky, A. Mazhar, D. Cuccia, A. J. Durkin, J. C. Schotland, and B. J. Tromberg, “Quantitative optical tomography of sub-surface heterogeneities using spatially modulated structured light,” Opt. Express **17**, 14780–14790 (2009). [CrossRef] [PubMed]

**27. **M. F. Langhorst, J. Schaffer, and B. Goetze, “Structure brings clarity: structured illumination microscopy in cell biology,” Biotechnol. J. **4**, 858–865 (2009). [CrossRef] [PubMed]

**28. **T. A. Erickson, A. Mazhar, D. Cuccia, A. J. Durkin, and J. W. Tunnell, “Lookup-table method for imaging optical properties with structured illumination beyond the diffusion theory regime,” J. Biomed. Opt. **15**, 036013 (2010). [CrossRef] [PubMed]

**29. **K. Wicker and R. Heintzmann, “Single-shot optical sectioning using polarization-coded structured illumination,” J. Opt. **12**, 084010 (2010). [CrossRef]

**30. **T. Wilson, “Optical sectioning in fluorescence microscopy,” J. Microsc. **242**, 111–116 (2010). [CrossRef] [PubMed]

**31. **S. Gruppetta and S. Chetty, “Theoretical study of multispectral structured illumination for depth resolved imaging of non-stationary objects: focus on retinal imaging,” Biomed. Opt. Express **2**, 255–263 (2011). [CrossRef] [PubMed]

**32. **M. A. A. Neil, R. Juškaitis, and T. Wilson, “Real time 3D fluorescence microscopy by two beam interference illumination,” Opt. Commun. **153**, 1–4 (1998). [CrossRef]

**33. **V. Poher, H. X. Zhang, G. T. Kennedy, C. Griffin, S. Oddos, E. Gu, D. S. Elson, M. Girkin, P. M. W. French, M. D. Dawson, and M. A. A. Neil, “Optical sectioning microscopes with no moving parts using a micro-stripe array light emitting diode,” Opt. Express **15** (2007). [CrossRef] [PubMed]

**34. **L. Mortara and A. Fowler, “Evaluations of charge-coupled device (CCD) performance for astronomical use,” in *Solid state imagers for astronomy*, Proc. SPIE **290**, 28–33 (1981).