Structured illumination (SI) has long been regarded as a nonquantitative technique for obtaining sectioned microscopic images. Its lack of quantitative results has restricted the use of SI sectioning to qualitative imaging experiments, and has also limited researchers’ ability to compare SI against competing sectioning methods such as confocal microscopy. We show how to modify the standard SI sectioning algorithm to make the technique quantitative, and provide formulas for calculating the noise in the sectioned images. The results indicate that, for an illumination source providing the same spatially-integrated photon flux at the object plane, and for the same effective slice thicknesses, SI sectioning can provide higher SNR images than confocal microscopy for an equivalent setup when the modulation contrast exceeds about 0.09.
© 2011 Optical Society of America
Corrections: Ting Ai Chen, Nathan Hagen, Liang Gao, and Tomasz S. Tkaczyk, "Quantitative sectioning and noise analysis for structured illumination microscopy: erratum," Opt. Express 23, 27633–27634 (2015).
Structured illumination (SI) is an optical sectioning technique compatible with widefield imaging microscopy, and has been shown to provide a depth resolution comparable to confocal microscopy. Since its invention, SI microscopy has been widely used as a sectioning tool in bioimaging research, both at the cellular level (such as 3D imaging of the nuclear periphery, cellular fenestrations, and tubulin and kinesin dynamics) and at the tissue level (such as imaging of autofluorescent aggregations in the human eye, zebrafish development, and rat colonic mucosa). In addition, SI has also been used as a super-resolution technique to break the diffraction limit [5, 9–11]. SI thus maintains the high light collection capability of widefield imaging [12, 13] while also removing out-of-plane light. It has, however, been criticized as being a non-quantitative technique, and for producing noisy data in comparison to the sectioned images derived from confocal microscopes. The analysis below shows that SI can easily be made quantitative by properly scaling the standard sectioning algorithm, and we also provide analytical expressions for the resulting noise in SI-sectioned images. Although Somekh et al. [14, 15] provide numerical simulations of the noise properties of SI-sectioned images, they use a non-quantitative algorithm and do not include the effects of out-of-focus light on the noise.
Quantitative sectioned images allow one to perform photon counting as if the regions above and below the sectioned layer were not present. While the resulting photon number estimate will be noisier than would be the case for imaging the slice without out-of-focus layers present, the mean value of the correctly scaled algorithm will equal the mean photon count one would obtain with a standard widefield microscope. This permits researchers to use standard methods of correcting for the objective lens’ numerical aperture, optical transmission, and detector quantum efficiency to determine photon counts at the sectioned plane relative to the number of photoelectrons detected at the sensor plane. Allowing quantitative data to be obtained with SI sectioning thus gives researchers the ability to perform measurements of radiance, absolute reflectance, fluorophore quantum yield, and absolute fluorophore concentration within volumetric media [17, 18].
Finally, the analytical formulas for noise also enable us to roughly define an operational range for the modulation contrast at the section plane, such that for any contrast above this value one can expect SI to provide higher SNR data than confocal microscopy. For any contrast below this value, confocal imaging will outperform SI.
2. Sectioning algorithm
The general approach in SI is to illuminate the object with a sinusoidal illumination pattern of the form

si(x, y) = (1/2)[1 + m cos(2πνx + φi)],   φi = −2π/3, 0, +2π/3.   (1)

Of the various illumination patterns in use [19], this one is particularly easy to implement. The quantity m is the modulation contrast (a number varying from 0 to 1), and ν is the modulation spatial frequency. The factor of 1/2 placed in front, not present in previous work, is used here in order to represent the fact that half of the illumination light is absorbed or reflected by the grid placed in the illumination path. If we take the limit m → 0, we obtain standard widefield illumination of half the intensity that one would obtain without the grid in place. Note that s represents a normalized illumination amplitude, ranging from 0 to 1.
Ignoring the effects of optical blurring, the resulting modulated images gi(x, y) are given by

gi(x, y) = (1/2) d(x, y) + f(x, y) si(x, y),   (2)

where f(x, y) is the light emitted from within the focal slice and d(x, y) is the out-of-focus light, which sees only the washed-out average of the illumination pattern. (Blurring effects cannot be neglected in every context: SI for superresolution, for example, relies on the Moiré effect to detect light emitted outside the conventional bandwidth limit.) The standard sectioning algorithm combines the three modulated images as [2]

i(x, y) = {(g1 − g2)² + (g1 − g3)² + (g2 − g3)²}^{1/2}.   (3)
Since the algorithm (3) operates on each pixel independently, we have dropped the spatial arguments (x, y) as unnecessary. (These can be added back into each equation at any point.) Inserting Eqs. (1) and (2) into (3) and applying trigonometric identities, we obtain the result

i = [3m/(2√2)] f.   (4)

To make the algorithm quantitative, we therefore scale Eq. (3) by the inverse of this factor:

i = [2√2/(3m)] {(g1 − g2)² + (g1 − g3)² + (g2 − g3)²}^{1/2},   (5)

so that the sectioned image correctly estimates f for any modulation contrast m ∈ (0, 1].
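As a concrete illustration, the following Python sketch (our reading of Eqs. (1)–(5), with hypothetical values for f, d, m, and the phase θ = 2πνx) generates the three noiseless modulated images and shows that the scaled root-sum-square recovers the in-focus signal f, with the out-of-focus term d dropping out:

```python
import math

# Three-phase sinusoidal illumination, offsets spaced 2*pi/3 apart.
PHASES = (-2 * math.pi / 3, 0.0, 2 * math.pi / 3)

def illumination(m, theta, phase):
    """Normalized illumination s_i = (1/2)[1 + m cos(theta + phase)]."""
    return 0.5 * (1.0 + m * math.cos(theta + phase))

def modulated_images(f, d, m, theta):
    """Noiseless modulated images g_i = d/2 + f * s_i (d = out-of-focus light)."""
    return [0.5 * d + f * illumination(m, theta, p) for p in PHASES]

def section(g1, g2, g3, m):
    """Quantitative sectioned value: root-sum-square scaled by 2*sqrt(2)/(3m)."""
    rss = math.sqrt((g1 - g2) ** 2 + (g1 - g3) ** 2 + (g2 - g3) ** 2)
    return (2.0 * math.sqrt(2.0) / (3.0 * m)) * rss

# Example: d drops out and f is recovered (prints approximately 100.0).
g1, g2, g3 = modulated_images(f=100.0, d=40.0, m=0.5, theta=0.7)
print(section(g1, g2, g3, m=0.5))
```

The same three-image combination applies pixel-by-pixel across a full image.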
The scale factor in front of the square root differs from that given in previous studies. The factor of 2 in the numerator appears as a result of the 1/2 scaling introduced into our definition of si(x, y), and thus is new. All previous authors have also assumed ideal modulation (m = 1), an assumption which introduces a large error into the quantitative result. Moreover, since the modulation m(x, y) is in general spatially varying, the error introduced is generally not a simple scalar factor for the whole image. As a whole, the literature shows wide disagreement over the appropriate scale factor to place in front of the square root. Refs. [20, 22, 26, 28, 32, 33] use √2/3, which is appropriate when m = 1 and the factor of 1/2 in s(x, y) is not used. Refs. [2, 23–25, 27, 29] use a scale factor of 1, which is the most appropriate choice for a non-quantitative approach, while other authors use various alternative factors without explanation [1, 31].
In practice, one finds that even for ideal samples m cannot achieve the maximum value of 1. The modulation contrast nevertheless remains high (m > 0.5) in thin samples in which the sectioned plane is taken near the surface, but is poor (m < 0.1) in dense tissue samples (in which multiple scattering is present) and in deeper layers of weakly scattering media.
3. Widefield image algorithm
In addition to the sectioned image algorithm (5), it is well known that one can form a widefield image representation iw(x, y) from the modulated images by

iw = g1 + g2 + g3.   (6)

Inserting Eqs. (1) and (2), we obtain

〈iw〉 = (3/2)[〈d〉 + 〈f〉],   (7)

since the three modulation terms, spaced 2π/3 apart in phase, sum to zero.
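Because the phase offsets are evenly spaced, the modulation cancels in the sum and the widefield combination returns 3/2 of the total light d + f (the 3/2 reflecting three half-intensity exposures). A quick numerical check, using the same hypothetical model values as the equations above:

```python
import math

PHASES = (-2 * math.pi / 3, 0.0, 2 * math.pi / 3)

def widefield(g1, g2, g3):
    """Widefield reconstruction i_w = g1 + g2 + g3."""
    return g1 + g2 + g3

# Model g_i = d/2 + f*(1/2)(1 + m cos(theta + phase)); the cosines sum to zero,
# so the result is (3/2)(d + f) regardless of m and theta.
f, d, m, theta = 100.0, 40.0, 0.5, 0.7
gs = [0.5 * d + f * 0.5 * (1.0 + m * math.cos(theta + p)) for p in PHASES]
print(widefield(*gs))   # approximately 1.5 * (d + f) = 210.0
```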
For quantitative work, we also want to estimate the variance of the widefield image, for which we insert Eqs. (1), (2), and (6) into the standard variance formula var(iw) = 〈iw²〉 − 〈iw〉² and solve. Here the angle brackets 〈·〉 represent an expectation value. The first term in the variance formula is the second moment 〈iw²〉 = 〈(g1 + g2 + g3)²〉.
The second moment thus separates into four terms, each of which can be considered separately. Using trigonometric identities, the modulation factor in each term can be summed over the three phase offsets to a simple closed form.
The second term in the variance formula is easily obtained from Eq. (7) as 〈iw〉² = (9/4)[〈d〉 + 〈f〉]².
4. Variance and SNR of sectioned images
Next we can try to follow the same procedure for the sectioning algorithm (5) to obtain the variance of the sectioned image, var(i) = 〈i²〉 − 〈i〉². The second term in the variance formula is easily obtained from Eqs. (4) and (5) as 〈i〉² = 〈f〉². The first term can be obtained by inserting Eqs. (1) and (2) into the expression for 〈i²〉 (Eq. (13)), but here one must be careful. Each modulated image gi provides different samples di and fi of the stochastic out-of-focus and slice light distributions, such that terms like 〈(d1 − d2)²〉 will be nonzero. That is, since d1 is independent of d2, we can write 〈d1d2〉 = 〈d1〉〈d2〉, whereas 〈d1²〉 = var(d1) + 〈d1〉² ≠ 〈d1〉². Once all cross-terms are eliminated from inside an expectation value, one can then take 〈di〉 → 〈d〉 and var(di) → var(d), and likewise for the fi as well. Evaluating the first quadratic term inside the expectation value of Eq. (13) in this way, substituting back into Eq. (13), and combining with Eqs. (10) and (11), we incorporate the result into the variance formula to obtain Eq. (14).
The theoretical expression for the sectioned image variance, Eq. (14), indicates that when the illumination produces ideal modulation contrast at the slice plane (m = 1) and out-of-focus light is negligible, the sectioning algorithm amplifies noise in the slice image by a factor of √2 (in standard deviation) relative to the stochastic noise in f. For most cases, a large out-of-focus contribution is present, and this not only reduces the modulation contrast but adds to the noise as well. Then m becomes small, and in this regime the variance of a shot-noise-limited measurement approximates to var(i) ≈ 4[〈d〉 + 〈f〉]/(3m²).
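The √2 amplification is easy to check numerically. The sketch below simulates the m = 1, no-out-of-focus case under a Gaussian shot-noise model (variance equal to the mean count, a standard approximation at high counts) using our reading of the quantitative algorithm; the mean count and sample size are arbitrary simulation choices:

```python
import math
import random

PHASES = (-2 * math.pi / 3, 0.0, 2 * math.pi / 3)

def noisy_section(f_mean, m, theta, rng):
    """One noisy realization of the quantitative sectioned value."""
    gs = []
    for p in PHASES:
        mean = f_mean * 0.5 * (1.0 + m * math.cos(theta + p))
        # Shot noise approximated as Gaussian with variance equal to the mean.
        gs.append(rng.gauss(mean, math.sqrt(mean)))
    g1, g2, g3 = gs
    rss = math.sqrt((g1 - g2) ** 2 + (g1 - g3) ** 2 + (g2 - g3) ** 2)
    return 2.0 * math.sqrt(2.0) / (3.0 * m) * rss

rng = random.Random(12345)
f_mean, m = 1.0e4, 1.0
samples = [noisy_section(f_mean, m, 0.0, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
amplification = math.sqrt(var / f_mean)   # noise relative to sqrt(f_mean)
print(amplification)                      # close to sqrt(2) ~ 1.414
```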
5. Estimating the modulation contrast
The quantitative sectioning algorithm (5) requires knowing the modulation contrast m in order to properly scale the result. This value is generally not known a priori and so must be estimated, but one can use the modulated images themselves to provide the estimated value m̂. For each of the three modulated images, we obtain a “modulation map” by normalizing the modulated images using the widefield algorithm; combining these maps with the sectioning algorithm (5) gives the per-pixel estimator of Eq. (15).
As an example, we measured the three modulated images g1, g2, and g3 on a fluorescent bead sample and used the algorithm in Eq. (15) to obtain the estimated modulation m̂(x, y) at every pixel in the image. Our experimental setup (see Sec. 6.1) achieves approximately uniform modulation across the image, and so if we first threshold the image iw to prevent noisy pixels from skewing the estimate, we obtain a histogram of m̂ across the image, as shown in Fig. 1. The histogram suggests that the pixel-by-pixel estimate of m will be quite noisy, so that a much more accurate estimate can be achieved by averaging m̂ across the image. Or, if there is a significant contribution of outliers, one can use the histogram median as a more robust estimate. For Fig. 1, the mean and median values are m̂ = 0.48 and 0.49, respectively. We note, however, that taking the mean or median is only valid for homogeneous samples.
From a theoretical standpoint, the roughly Gaussian shape of the histogram is expected, but the long tail at the lower values of m̂ is not, and may be the result of some beads aggregating together to create a thicker layer at some locations within the image. If the thickness exceeds the sectioning depth, then the modulation contrast will drop.
While Fig. 1 shows that the median modulation contrast can be accurately estimated from a single image, the use of Eq. (15) to estimate m(x,y) requires exceedingly high signal-to-noise ratio images in order to achieve a reasonable accuracy for every pixel in the image. Its practical use, therefore, requires collecting a large number of photons (perhaps by summing a sequence of static images) or some kind of spatial processing in order to reduce the effects of noise.
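A minimal sketch of an estimator of this kind, assuming negligible out-of-focus light (our reading of the normalization; the pixel values are hypothetical): dividing the root-sum-square of the image differences by the widefield sum gives m̂ = √2 {(g1 − g2)² + (g1 − g3)² + (g2 − g3)²}^{1/2} / (g1 + g2 + g3), which is independent of sample brightness, and a median over pixels provides the robust aggregate suggested above:

```python
import math
import statistics

PHASES = (-2 * math.pi / 3, 0.0, 2 * math.pi / 3)

def modulation_estimate(g1, g2, g3):
    """Per-pixel contrast estimate, assuming negligible out-of-focus light:
    m_hat = sqrt(2) * RSS / (g1 + g2 + g3)."""
    rss = math.sqrt((g1 - g2) ** 2 + (g1 - g3) ** 2 + (g2 - g3) ** 2)
    return math.sqrt(2.0) * rss / (g1 + g2 + g3)

# Noiseless check on a few "pixels" of different brightness but the same m:
m_true, theta = 0.49, 0.3
pixels = [50.0, 120.0, 300.0]     # hypothetical in-focus signals f
maps = []
for f in pixels:
    g = [f * 0.5 * (1.0 + m_true * math.cos(theta + p)) for p in PHASES]
    maps.append(modulation_estimate(*g))
print(round(statistics.median(maps), 3))   # -> 0.49
```

With noisy data the median would be taken over the thresholded pixel population, as described for Fig. 1.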
6. Experimental results
In order to test our theoretical results, we conducted several experiments on a Zeiss Axio Imager Z1 microscope equipped with an Apotome module, a Zeiss AxioCam MRm monochromatic camera (1388 × 1040 pixels), and an HB-100 mercury lamp illumination source. The objective lens used for all of the experiments is a Zeiss Plan-Apochromatic 20× objective (NA = 0.8). In order to compare measurements against theory, we use the ratio r of the measured noise to the estimated shot noise. Whereas the measured noise is obtained by taking the standard deviation of a sequence of 1000 measurements, the photon shot noise standard deviation σp is estimated by taking the square root of the mean number of photons collected. For a standard widefield measurement, this ratio should be close to 1, but for SI-sectioned images the noise is larger than one would expect from the shot noise alone, so that the theory predicts r > 1.
The first step of the experiment involves measuring the camera gain in order to scale digital counts to detected photoelectrons. This involves imaging a uniformly illuminated field (created by Köhler illumination) at the microscope sample stage with different illumination intensities. To remove the effects of pixel response nonuniformity, we implemented the following procedure:
1. At each illumination intensity, two widefield images I1 and I2 are acquired.
2. The standard deviation σc is calculated for a 200 × 200 pixel area in the difference image I1 − I2.
3. The signal variance is calculated as σ² = σc²/2, where the scaling factor of 2 accounts for the increased noise due to the image subtraction operation in Step 2.
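The steps above amount to the standard mean-variance (photon-transfer) method of gain calibration, in which the slope of signal variance versus mean signal gives the gain in counts per photoelectron. A sketch with synthetic flat fields (the gain value K = 0.29 counts/e⁻ and the intensity levels are hypothetical, and shot noise is approximated as Gaussian):

```python
import math
import random

rng = random.Random(7)
K = 0.29   # hypothetical camera gain, counts per photoelectron

def flat_pair(mean_electrons, npix=40_000):
    """Two synthetic flat-field frames (lists of pixel counts) at one intensity."""
    def frame():
        return [K * rng.gauss(mean_electrons, math.sqrt(mean_electrons))
                for _ in range(npix)]
    return frame(), frame()

means, variances = [], []
for ne in (500, 1000, 2000, 4000, 8000):
    i1, i2 = flat_pair(ne)
    diff = [a - b for a, b in zip(i1, i2)]
    mu = sum(diff) / len(diff)
    sigma_c2 = sum((x - mu) ** 2 for x in diff) / (len(diff) - 1)
    variances.append(sigma_c2 / 2.0)             # factor 2 from the subtraction
    means.append(sum(i1 + i2) / (2 * len(i1)))   # mean signal in counts

# For shot-noise-limited data, variance = K * mean, so the slope is the gain.
slope = sum(m * v for m, v in zip(means, variances)) / sum(m * m for m in means)
print(slope)   # close to K = 0.29
```

Dividing measured counts by the fitted gain then converts an image to photoelectrons.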
In order to provide a baseline reference for later SI noise measurements, we imaged a microscope slide containing a sparse layer of fluorescent beads (Molecular Probes Fluosphere F8853, peak emission at 515 nm, 2 μm diameter) in widefield mode (i.e., without the structured grid placed in the illumination path). To prepare a uniformly distributed sample, the fluorescent beads were suspended by vortex mixing and sonicated. The suspension was then dropped onto a microscope slide and sealed with a cover slip. A total of 1000 images of the sample were acquired in a time-sequenced experiment. The measured mean fluorescent intensity 〈f〉 of the fluorescent beads is 4.33 × 10⁴ counts (obtained by summing all pixels at the bead location), and the standard deviation σn of the fluorescent intensity is 112 counts. The ratio of the measured noise to the photon shot noise then follows after the digital counts are converted to photoelectrons using the measured camera gain.
6.1. Sectioned imaging of 2 μm fluorescent beads without out-of-focus light
We first measured the axial PSF of the SI-sectioned measurements by imaging sub-resolution green fluorescent nanoparticles (175 nm diameter spheres, from Invitrogen), using the software-recommended VL grid (17.5 lines/mm) on the Apotome. The FWHM of the resulting measured axial PSF is 3.6 μm, indicating that our fluorescent bead sample (peak emission at 515 nm, 2 μm diameter) is sufficiently thin that no out-of-focus light will be present. For the quantitative algorithm, we estimate the modulation contrast for this setup using the data shown in Fig. 1, giving m̂ = 0.49 from the median of the distribution. A total of 1000 sectioned images were acquired in a time-sequenced experiment. The mean intensity 〈f〉 and its standard deviation σm are calculated for 5 different beads, and the resulting ratio of the measured noise to the photon noise is listed in Table 1. That is, use of the sectioning algorithm with the three modulated images on the (planar) sample has reduced the SNR by a factor of 2.41 from that of a single (unmodulated) widefield image.
The calculated widefield image iw of the same sample is obtained by algorithm (6). The mean intensity 〈iw〉, standard deviation σm and ratio r are calculated for the same five beads, with results shown in Table 1. The measured value of the noise ratio r = 1.04 closely corresponds to the theoretical result of r = 1 obtained from Eq. (12).
6.2. Sectioned imaging of 6 μm fluorescent beads containing out-of-focus light
In order to measure noise amplification in SI in the case when out-of-focus light is present, we imaged a sample of 6 μm diameter green fluorescent beads. Since the sectioning thickness of the Apotome in this setup is 3.6 μm, there will be some out-of-focus light present. A total of 1000 sectioned images were acquired in a time-sequenced experiment. The mean intensity 〈f〉 and its standard deviation σm are calculated for five different beads, with results shown in Table 2. The theoretical value of the ratio r, which we write as r̂, is given by Eq. (14); the resulting values are also listed in Table 2, and indicate a close correspondence with the experimentally measured values of r. Note that the variation in r̂ among the five beads selected may be due to variation in the relative amount of fluorescence emitted by each bead from within the sectioned plane compared to that emitted from outside it.
Although it has often been argued that structured illumination sectioning microscopy is a non-quantitative technique, we have shown that a quantitative version of the algorithm can be obtained by adding a proper scaling factor. Quantitative scaling does require that one estimate the modulation contrast m, and this adds an extra step of complexity, but Eq. (15) provides a simple means of obtaining such an estimate. A consequence of ignoring this scaling factor, as sectioning algorithms have done up to this point, is that in z-stack volumetric images (x, y, z) the deeper layers, where the modulation contrast is lower, will appear artificially darkened. Because the SI sectioning approach removes out-of-focus light from the sectioned image only after detection, however, the sectioned image suffers from the shot noise of both the in-focus slice and all out-of-focus planes. While this has long been known, little has been established about the quantitative relationship between the noise amplification in SI microscopy and the out-of-focus light or other imaging parameters.
The theoretical analysis given above has made no assumptions about the properties of the noise other than to assume that the variance is the primary quantity of interest, and thus the results remain valid across all noise regimes (read noise limited, shot noise limited, etc.).
The noise amplification indicated by the variance result may be taken as an argument that SI sectioning is a poor substitute for confocal sectioning due to the loss in SNR. But this is not the whole story. The compatibility of SI with widefield imaging also allows orders of magnitude greater light throughput than that achievable by confocal microscopy, such that one can use lower-intensity light sources and still obtain 100–200× increases in photon collection above that of scanning laser illumination [12, 13]. In this case, taking T = 150 as a representative value for the increased light collection, SI sectioning can provide better SNR than confocal sectioning when the modulation contrast satisfies 4/(3m²) + 2/(3m) ≤ T, i.e. when m exceeds about 0.09.
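Under a shot-noise-limited reading of the variance result with negligible out-of-focus light, the break-even condition takes the form 4/(3m²) + 2/(3m) ≤ T, where T is the widefield light-collection advantage. A small sketch solving this for m at T = 150 lands near the 0.09 figure quoted in the abstract:

```python
def breakeven_contrast(T):
    """Solve 4/(3 m^2) + 2/(3 m) = T for m by bisection on (0, 1]."""
    noise_factor = lambda m: 4.0 / (3.0 * m * m) + 2.0 / (3.0 * m)
    lo, hi = 1e-6, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if noise_factor(mid) > T:
            lo = mid   # noise amplification still exceeds the light advantage
        else:
            hi = mid
    return hi

print(breakeven_contrast(150.0))   # roughly 0.097, i.e. about 0.09-0.10
```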
An additional advantage of SI microscopy is its ability to reject any residual DC light, such as that generated by stray light or reflections within the optical system, though this comes at an SNR penalty. Whether SI sectioning or confocal sectioning produces better SNR images depends on the microscope setup and the object under analysis, but our theoretical results provide support for the common empirical observation that SI sectioning gives lower quality results when imaging deep within tissue.
This work was supported in part by the National Institute of Health under grants R01-CA124319 and R21-EB009186.
References and links
1. D. Karadaglić and T. Wilson, “Image formation in structured illumination wide-field fluorescence microscopy,” Micron 39, 808–818 (2008). [CrossRef]
2. M. A. A. Neil, R. Juškaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. 22, 1905–1907 (1997). [CrossRef]
3. L. Schermelleh, P. M. Carlton, S. Haase, L. Shao, L. Winoto, P. Kner, B. Burke, M. C. Cardoso, D. A. Agard, M. G. L. Gustafsson, H. Leonhardt, and J. W. Sedat, “Subdiffraction multicolor imaging of the nuclear periphery with 3D structured illumination microscopy,” Science 320, 1332–1336 (2008). [CrossRef] [PubMed]
4. V. C. Cogger, G. P. McNerney, T. Nyunt, L. D. DeLeve, P. McCourt, B. Smedsrød, D. G. L. Couteur, and T. R. Huser, “Three-dimensional structured illumination microscopy of liver sinusoidal endothelial cell fenestrations,” J. Struct. Biol. 171, 382–388 (2010). [CrossRef] [PubMed]
5. P. Kner, B. B. Chhun, E. R. Griffis, L. Winoto, and M. G. L. Gustafsson, “Super-resolution video microscopy of live cells by structured illumination,” Nat. Methods 6, 339–344 (2009). [CrossRef]
6. G. Best, R. Amberger, D. Baddeley, T. Ach, S. Dithmar, R. Heintzmann, and C. Cremer, “Structured illumination microscopy of autofluorescent aggregations in human tissue,” Micron 42, 330–335 (2011). [CrossRef]
7. P. J. Keller, A. D. Schmidt, A. Santella, K. Khairy, Z. Bao, J. Wittbrodt, and E. H. K. Stelzer, “Fast, high-contrast imaging of animal development with scanned light sheet-based structured-illumination microscopy,” Nat. Methods 7, 637–645 (2010). [CrossRef]
9. R. Heintzmann and C. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” in EUROPTO Conference on Optical Microscopy, Proc. SPIE 3568, 185–196 (1999). [CrossRef]
13. L. Gao, N. Bedard, N. Hagen, R. T. Kester, and T. S. Tkaczyk, “Depth-resolved image mapping spectrometer (IMS) with structured illumination,” Opt. Express 19, 17439–17452 (2011). [CrossRef] [PubMed]
14. M. G. Somekh, K. Hsu, and M. C. Pitter, “Resolution in structured illumination microscopy: a probabilistic approach,” J. Opt. Soc. Am. A 25, 1319–1329 (2008). [CrossRef]
15. M. G. Somekh, K. Hsu, and M. C. Pitter, “Stochastic transfer function for structured illumination microscopy,” J. Opt. Soc. Am. A 26, 1630–1637 (2009). [CrossRef]
17. J. C. Waters, “Accuracy and precision in quantitative fluorescence microscopy,” J. Cell Biol. 185, 1135–1148 (2009). [CrossRef]
18. A. Esposito, S. Schlachter, G. S. Schierle, A. D. Elder, A. Diaspro, F. S. Wouters, C. F. Kaminski, and A. I. Iliev, “Quantitative fluorescence microscopy techniques,” Methods Mol. Biol. 586, 117–142 (2009). [CrossRef] [PubMed]
20. M. A. A. Neil, T. Wilson, and R. Juškaitis, “A light efficient optical sectioning microscope,” J. Microsc. 189, 114–117 (1998). [CrossRef]
21. M. J. Cole, J. Siegel, S. E. D. Webb, R. Jones, K. Dowling, M. J. Dayel, D. Parsons-Karavassis, P. M. W. French, M. J. Lever, L. O. D. Sucharov, M. A. A. Neil, R. Juškaitis, and T. Wilson, “Time-domain whole-field fluorescence lifetime imaging with optical sectioning,” J. Microsc. 203, 246–257 (2001). [CrossRef] [PubMed]
22. L. H. Schaefer, D. Schuster, and J. Schaffer, “Structured illumination microscopy: artefact analysis and reduction utilizing a parameter optimization approach,” J. Microsc. 216, 165–174 (2004). [CrossRef] [PubMed]
24. A. L. Barlow and C. J. Guerin, “Quantization of widefield fluorescence images using structured illumination and image analysis software,” Microsc. Res. Tech. 70, 76–84 (2007). [CrossRef]
26. S. D. Konecky, A. Mazhar, D. Cuccia, A. J. Durkin, J. C. Schotland, and B. J. Tromberg, “Quantitative optical tomography of sub-surface heterogeneities using spatially modulated structured light,” Opt. Express 17, 14780–14790 (2009). [CrossRef] [PubMed]
28. T. A. Erickson, A. Mazhar, D. Cuccia, A. J. Durkin, and J. W. Tunnell, “Lookup-table method for imaging optical properties with structured illumination beyond the diffusion theory regime,” J. Biomed. Opt. 15, 036013 (2010). [CrossRef] [PubMed]
29. K. Wicker and R. Heintzmann, “Single-shot optical sectioning using polarization-coded structured illumination,” J. Opt. 12, 084010 (2010). [CrossRef]
31. S. Gruppetta and S. Chetty, “Theoretical study of multispectral structured illumination for depth resolved imaging of non-stationary objects: focus on retinal imaging,” Biomed. Opt. Express 2, 255–263 (2011). [CrossRef] [PubMed]
32. M. A. A. Neil, R. Juškaitis, and T. Wilson, “Real time 3D fluorescence microscopy by two beam interference illumination,” Opt. Commun. 153, 1–4 (1998). [CrossRef]
33. V. Poher, H. X. Zhang, G. T. Kennedy, C. Griffin, S. Oddos, E. Gu, D. S. Elson, M. Girkin, P. M. W. French, M. D. Dawson, and M. A. A. Neil, “Optical sectioning microscopes with no moving parts using a micro-stripe array light emitting diode,” Opt. Express 15 (2007). [CrossRef] [PubMed]
34. L. Mortara and A. Fowler, “Evaluations of charge-coupled device (CCD) performance for astronomical use,” in Solid state imagers for astronomy, Proc. SPIE 290, 28–33 (1981).