Structured illumination microscopy (SIM) has grown into a family of methods which achieve optical sectioning, resolution beyond the Abbe limit, or a combination of both effects in optical microscopy. SIM techniques rely on illumination of a sample with patterns of light which must be shifted between each acquired image. The patterns are typically created with physical gratings or masks, and the final optically sectioned or high resolution image is obtained computationally after data acquisition. We used a flexible, high speed ferroelectric liquid crystal microdisplay for definition of the illumination pattern coupled with widefield detection. Focusing on optical sectioning, we developed a unique and highly accurate calibration approach which allowed us to determine a mathematical model describing the mapping of the illumination pattern from the microdisplay to the camera sensor. This is important for higher performance image processing methods such as scaled subtraction of the out of focus light, which require knowledge of the illumination pattern position in the acquired data. We evaluated the signal to noise ratio and the sectioning ability of the reconstructed images for several data processing methods and illumination patterns with a wide range of spatial frequencies. We present our results on a thin fluorescent layer sample and also on biological samples, where we achieved thinner optical sections than either confocal laser scanning or spinning disk microscopes.
©2012 Optical Society of America
Structured illumination microscopy (SIM) works by acquiring a set of images at a given focal plane using widefield detection, where each image in the set is made with a different position of an illumination mask but with no mask in the detection path. Subsequent image processing is always needed to yield an optically sectioned image [1–3] or an image with resolution beyond the Abbe limit [4, 5].
We focused on improvements to the “SIM for optical sectioning” application. The most familiar implementation of this technique was introduced in 1997 by Neil et al. [3]. Their method works by projecting a line illumination pattern onto a sample, followed by acquisition of a set of three widefield images with the pattern shifted by relative spatial phases 0, 2π/3, and 4π/3. An optically sectioned image can be recovered computationally as

$I_s = \sqrt{(I_1 - I_2)^2 + (I_2 - I_3)^2 + (I_3 - I_1)^2}$. (1)
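The three-phase reconstruction of Neil et al. is the square root of the sum of squared pairwise differences of the three frames; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def neil_section(i1, i2, i3):
    """Optically sectioned image from three SIM frames with relative
    pattern phases 0, 2*pi/3, 4*pi/3 (Neil et al., Opt. Lett. 1997)."""
    i1, i2, i3 = (np.asarray(a, dtype=float) for a in (i1, i2, i3))
    # Uniform (out of focus) contributions cancel in every difference term.
    return np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i3 - i1)**2)
```

For a pixel receiving only out of focus light, all three frames are equal and the result is zero; only the modulated, in-focus component survives.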
In previous structured illumination microscopes, the illumination patterns have typically been created using physical gratings. This limits imaging speeds, as the grating must be precisely shifted, and the position must be stable before image acquisition to prevent artifacts. Moreover, use of a single physical grating implies that a particular fringe projection system will be optimized for only a few microscope objectives (usually high magnification, high NA objectives). However, it has been found that an optimized grating frequency can achieve significantly thinner optical sections than confocal laser scanning microscopes (CLSMs) with pinholes set to 1 Airy unit (AU). This has been established theoretically as well.
In the quest for higher imaging rates and increased flexibility, investigators have turned to spatial light modulators (SLMs) for pattern creation. SIM systems for sectioning employing widefield detection have used digital micromirror devices (DMDs) [7–9] or transmissive liquid crystal SLMs [10].
To increase both the flexibility and the optical sectioning performance of structured illumination microscopy, we used a reflective ferroelectric liquid crystal-on-silicon (LCOS) microdisplay to create the illumination pattern. Use of the microdisplay allows us to utilize a truly arbitrary pattern for structured illumination, including arrays of lines, dots, or random patterns, and thus, to find the most suitable scanning pattern for a given sample. This flexibility allows us to easily compromise between scanning speed (i.e., the number of patterns), the desired signal to noise ratio (SNR), and the optical section thickness. Similar LCOS microdisplays have been used previously in SIM [11, 12], and in programmable array microscopy (PAM) [13, 14].
We present a unique, highly accurate calibration procedure that allows us to determine a one-to-one mapping between pixels of the microdisplay used to create the illumination pattern and pixels of the camera chip. In this way we can recreate a digital illumination mask in the acquired data. Knowledge of the exact position of the illumination pattern in each camera image allowed us to apply higher performance data processing methods for image reconstruction, i.e., scaled subtraction of the out of focus light [1, 15].
In the context of “SIM for optical sectioning” systems with no mask in the detection path, the scaled subtraction approach has previously only been suggested as a possible processing method. We evaluated its 3D imaging performance against two simpler techniques: one applying a maximum minus minimum projection approach, and one applying homodyne detection [3, 15]. Scaled subtraction allowed us to obtain better results (i.e., better suppression of out of focus signals with higher SNR) compared to methods which process the data without any information about which part of the sample was illuminated.
The properties of our LCOS-based SIM system are demonstrated using a thin fluorescent layer sample and with biological samples. For illumination we used line grid patterns with a wide range of spatial frequencies. We also compared the results with a Leica SP5 CLSM and an Andor Revolution spinning disk system.
2.1 Microscope setup
Our setup is shown in Fig. 1(a). We used an IX71 microscope equipped with 100 × /1.45 NA and 60 × /1.35 NA objectives (both oil immersion, Olympus, Hamburg, Germany). We used two detectors: a conventional CCD camera (Clara) and an EMCCD (Ixon 885, both from Andor, Belfast, Northern Ireland). For illumination, we used a 532 nm solid state laser (1000 mW, Dragon laser, ChangChun, China). The laser was introduced into the microscope using a 0.39 NA multimode optical fiber and a 1 inch, 75 mm focal length achromatic lens for collimation (Thor Labs, Newton, New Jersey). To scramble the coherence of the laser and reduce speckle, we used a laser speckle reducer (Optotune, Dietikon, Switzerland) based on an electroactive polymer. Fluorescence was isolated using an appropriate filter set for Cy3 (Chroma, Bellows Falls, Vermont).
The microscope's illumination tube lens and objective collect the light from pixels in the ON state and image the microdisplay onto the sample, see Fig. 1(a). For the illumination tube lens we have chosen a 150 mm focal length lens (Thor Labs) as it images the microdisplay so that it just fills the field of view of the microscope. Because Olympus objectives are designed to use tube lenses with a focal length of 180 mm, the effective demagnification of the microdisplay into the sample is a factor of (150/180) × MAG, where MAG is the magnification of the objective. Using a 100 × /1.45 NA objective, a single 13.6 × 13.6 μm microdisplay pixel will be imaged into the sample with a nominal size of 163 × 163 nm.
At 532 nm, the Abbe limit for our 1.45 NA objective is λ/2NA = 183.4 nm, larger than the 163 nm microdisplay pixel size we used. Nyquist-limited sampling of the specimen by the SIM pattern would imply an optimal microdisplay pixel size of < λ/4NA = 91.7 nm as imaged in the sample. However, we did not observe obvious patterned artifacts in the reconstructed images and so judged that a pixel size of 163 nm was adequate. These relationships can, in any case, easily be changed by choosing a different illumination tube lens and objective. A separate matter is the CCD pixel size. With a 100 × objective, the camera pixel size was 80 nm in the sample, implying Nyquist-limited imaging at the fluorescence wavelength (~550 nm).
2.2 LCOS microdisplay operation
The ferroelectric LCOS microdisplay (type 3DM, Forth Dimension Displays, Dalgety Bay, Scotland) used in our setup offers several characteristics advantageous for structured illumination microscopy including high fill factor (93%), small pixels (13.6 × 13.6 μm), high contrast (> 1000:1 at f/3.2) and high speed (40 μs on/off switch time, ~3.2 kHz maximum pattern refresh rate).
This device functions as an addressable array of quarter-wave plates with a reflective backing. We used it as a programmable spatial light modulator in a binary imaging mode. Pixels that are in the ON state rotate the polarization of light by ~70 degrees after two passes through the liquid crystal material (manufacturer's specification for operation at room temperature). Pixels in the OFF state reflect the light without changing the state of polarization. If vertically polarized illumination light is reflected onto the display using a polarizing beam splitter (PBS) cube (Thor Labs) then after reflection off the display, only horizontally polarized light (corresponding to the pixels in the ON state) is transmitted through the PBS cube towards the microscope, see Fig. 1(b). This allows us to create any desired binary illumination pattern.
Microdisplays are typically used for video projection, meaning that grayscale (or full color) images must be produced. This is usually accomplished using a bitplane weighting approach . For our purposes, i.e., creating a binary mask, a drive sequence with equally weighted bitplanes is required. We chose the longest available bitplane duration, 300 μs.
The LCOS microdisplay requires that the state of every pixel is reversed after each image, i.e., after each 300 μs bitplane. During these compensation cycles, the light source must be switched off. We accomplished this by directly switching the laser off using synchronizing signals derived from the 3DM microdisplay controller.
The 3DM microdisplay controller is equipped with user-definable inputs and outputs which we used to synchronize image acquisition. Signals derived from the camera's exposure output signal were used to begin and end a particular SIM pattern, and to advance to the next pattern in the set of illumination patterns. Each 300 μs bitplane (i.e., each structured illumination pattern) is repeated a number of times and is integrated by the camera for the length of the chosen exposure (typically 100 ms). The camera therefore integrates an equal number of dark and illuminated bitplanes.
2.3 Illumination patterns
Most strategies in structured illumination microscopy assume that the set of illumination masks required for image reconstruction consists of N equal movements of the same pattern such that the sum of all of the masks results in homogeneous illumination. Let us define the mark-to-area ratio (MAR) of the pattern as the fraction of pixels which are considered to be illuminated in a unit area of the pattern.
The illumination masks used in our experiments consisted of line grid patterns. Lines were t microdisplay pixels thick (“on” pixels) with a gap of N − t microdisplay pixels (“off” pixels) in between. The line grid was shifted by one pixel between each frame to obtain a new illumination mask. The mark-to-area ratio is then MAR = t / N.
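Such a pattern set can be generated in a few lines (a sketch; the function name and the NumPy mask representation are ours). Because the grid is shifted by one pixel N times, each pixel is illuminated exactly t times and the masks sum to uniform illumination:

```python
import numpy as np

def line_grid_masks(shape, t, n):
    """Return the n binary line-grid masks of a SIM sequence: horizontal
    lines t pixels thick with a gap of n - t pixels, shifted by one pixel
    between frames.  MAR = t / n."""
    rows = np.arange(shape[0])
    return [((rows - shift) % n < t).astype(np.uint8)[:, None]
            * np.ones(shape[1], np.uint8)
            for shift in range(n)]
```

Summing the returned masks gives the constant value t everywhere, i.e., homogeneous total illumination.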
2.4 Optical sectioning from structured illumination data
Several computational approaches for obtaining optically sectioned images from structured illumination data have been reviewed in the literature. Essentially, there are two types of approach. The first reconstructs optically sectioned images without any information except for the number of illumination patterns N. The second requires, in addition, knowledge of the exact position of the illumination mask in the camera image.
Let $I_s(i,j)$ represent intensity values of the computed sectioned image, let $I_n(i,j)$ be intensity values of the camera image captured at a given frame $n$ in the sequence of $N$ illumination patterns, and let $(i,j)$ indicate the pixel position in the camera image. A widefield image can be recovered from SIM data as an average of all images:

$I_{wf}(i,j) = \frac{1}{N} \sum_{n=1}^{N} I_n(i,j)$. (2)
The following two simpler methods reconstruct optical sections from SIM data:

$I_s(i,j) = \max_n I_n(i,j) - \min_n I_n(i,j)$, (3)

$I_s(i,j) = \left| \sum_{n=1}^{N} I_n(i,j)\, e^{2\pi i n / N} \right|$. (4)
The approach described by Eq. (3) applies maximum minus minimum projection. Here it is assumed that the “max” term contains mainly contributions from parts of the sample that are in focus and the “min” term mainly contributions from out of focus regions. The method in Eq. (4) is a form of homodyne detection, a technique based on detecting frequency-modulated signals by interference with a reference signal. The scaled subtraction method can be written as

$I_s(i,j) = \frac{\sum_{n=1}^{N} M_n(i,j)\, I_n(i,j)}{\sum_{n=1}^{N} M_n(i,j)} - \beta\, \frac{\sum_{n=1}^{N} \left[1 - M_n(i,j)\right] I_n(i,j)}{\sum_{n=1}^{N} \left[1 - M_n(i,j)\right]}$, (5)

where $M_n(i,j)$ is the digital illumination mask for frame $n$. The first term in Eq. (5) represents the conjugate (mostly confocal) light and the second term the non-conjugate (mostly out of focus) light. The denominators in Eq. (5) correct for the number of pattern positions, e.g., one pixel vs. multi-pixel shifts of the pattern. The variable β corrects for small pixel-to-pixel variations in mask intensity and can be set to one if this is not an issue. We did not notice such effects and so set β = 1 throughout.
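The three reconstructions can be sketched as follows (assumptions: a NumPy stack of shape (N, rows, cols); function names are ours; the homodyne form is one common N-phase generalization, and the scaled subtraction form follows the description of the conjugate and non-conjugate terms in the text):

```python
import numpy as np

def max_min(stack):
    """Eq. (3): maximum minus minimum projection over the N frames."""
    return stack.max(axis=0) - stack.min(axis=0)

def homodyne(stack):
    """Homodyne detection: magnitude of the first Fourier component of
    each pixel's intensity over the N pattern phases (assumed form)."""
    n = stack.shape[0]
    phases = np.exp(2j * np.pi * np.arange(n) / n)
    return np.abs(np.tensordot(phases, stack, axes=(0, 0)))

def scaled_subtraction(stack, masks, beta=1.0):
    """Conjugate (illuminated-pixel) light minus scaled non-conjugate
    light; the denominators normalize by the number of pattern positions
    each pixel spends on and off (masks must sum to a nonzero count)."""
    masks = masks.astype(float)
    conj = (masks * stack).sum(axis=0) / masks.sum(axis=0)
    nonconj = ((1 - masks) * stack).sum(axis=0) / (1 - masks).sum(axis=0)
    return conj - beta * nonconj
```

For a pixel seeing only out of focus light, all frames are equal, so max_min and homodyne return zero and the two terms of scaled_subtraction cancel.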
2.5 One-to-one correspondence mapping between microdisplay and camera
The illumination pattern position in the camera image could, in principle, be determined by analyzing the raw images. In our hands this proved both difficult and inaccurate, particularly with sparse samples. In the following two sections we introduce a procedure that allows us to determine a mathematical model describing the one-to-one mapping between the microdisplay and the camera sensor and thus to create a digital illumination mask in the camera image. Having such a model allows one to use arbitrary illumination patterns, to determine the exact pattern position in the camera image even with sparse samples, and to correct for distortions of the illumination pattern in the acquired data.
The illumination patterns created on the microdisplay are projected to the camera chip as follows. An optical ray originating at a point $\mathbf{p} = (x, y)$ in the plane of the microdisplay passes through an illumination tube lens, the microscope objective, and illuminates the sample. Fluorescence from the sample is collected by the objective and imaged by the microscope at the point $\mathbf{q} = (u, v)$ on the camera sensor. A block diagram is shown in Fig. 2.
Let us assume for the moment that there are no distortions in the path of the optical ray, i.e., that the observed camera point coincides with its ideal projection, and let us express the positions of the points $\mathbf{p}$ and $\mathbf{q}$ in projective (also called homogeneous) coordinates: $\mathbf{p} = (x, y, 1)^T$, $\mathbf{q} = (u, v, 1)^T$. Using projective coordinates allows us to describe affine transformations (e.g., translation, rotation, scaling, reflection) by a single matrix multiplication. It can be shown that any point (or pixel) from the microdisplay plane can be unambiguously mapped to the camera sensor (and vice versa) using a linear projective transformation (also called a homography)

$\lambda \mathbf{q} = H \mathbf{p}$, (6)

where $H$ is a $3 \times 3$ projective matrix and $\lambda$ is an arbitrary scale factor.
Unfortunately, the illumination pattern created on the microdisplay is slightly distorted when it is imaged on the camera. Therefore, we correct the mapping in Eq. (6) for two distortion components. First, radial distortion (i.e., barrel or pincushion distortion) bends the optical ray from its ideal position, and second, decentering displaces the principal point $(u_0, v_0)$ from the optical axis. Radial distortion is usually modeled by an even power polynomial. The corrected image coordinates are obtained by the one-to-one mapping [19–21]

$\tilde{\mathbf{q}} = \mathbf{q}_0 + \left(1 + k_1 r^2 + k_2 r^4\right) (\mathbf{q} - \mathbf{q}_0)$, (7)

where $\mathbf{q}_0 = (u_0, v_0)^T$ is the principal point, $r = \lVert \mathbf{q} - \mathbf{q}_0 \rVert$, and $k_1$, $k_2$ are the radial distortion coefficients.
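Applied in sequence, the homography and the distortion correction map a microdisplay pixel to camera coordinates. A sketch assuming a Brown-style radial polynomial about the principal point (the paper's exact distortion model may differ in detail; the function name is ours):

```python
import numpy as np

def map_display_to_camera(points, H, k1=0.0, k2=0.0, principal=(0.0, 0.0)):
    """Map microdisplay pixels (x, y) to camera coordinates: projective
    transform (homography) followed by an even-power radial correction
    about the principal point."""
    pts = np.asarray(points, float)
    homo = np.column_stack([pts, np.ones(len(pts))])  # homogeneous coords
    q = homo @ H.T
    q = q[:, :2] / q[:, 2:3]                          # divide out the scale
    d = q - np.asarray(principal)                     # offset from principal point
    r2 = (d**2).sum(axis=1, keepdims=True)
    return np.asarray(principal) + d * (1 + k1 * r2 + k2 * r2**2)
```

With the identity homography and zero distortion coefficients the mapping is the identity, which makes the model easy to sanity-check.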
2.6 Camera-microdisplay calibration
Camera-microdisplay calibration is a procedure that allows us to determine numerical values of the projective matrix H in Eq. (6), and the distortion coefficients and the location of the principal point in Eq. (7). Calibration proceeds by finding corresponding pairs of points between the microdisplay and the camera image. Each such correspondence provides one instance of Eq. (6) and one of Eq. (7). This results in a system of equations (see the Appendix) which can be solved by least squares methods. Typically, hundreds of points are used to average out uncertainty in the measurements.
As the microdisplay lets us create an arbitrarily configured “known scene”, we established the calibration using a chessboard pattern with a box size of 8 × 8 microdisplay pixels, see Fig. 3(a). Four orientation markers were placed in the chessboard center to define the coordinate system of the microdisplay. The chessboard illumination pattern was projected onto the camera sensor using a thin fluorescent film sample. The corresponding camera image is shown in Fig. 3(b). As reference points, we used the known positions of the corners on the microdisplay. Corners in the camera image were detected automatically with subpixel precision using the corner detector described by Noble.
The final mapping between the microdisplay and the camera was estimated from about 3600 corresponding corner points. We found that, when using a 100 × objective, the residual error of mapping any arbitrary point from the microdisplay to the camera image using the lens model corrected for radial distortion and decentering components, cf. Equations (6) and (7), is 0.12 ± 0.08 pixels (~10 ± 6 nm referenced to the pattern position in the sample). When using a simpler lens model without distortion correction, the residual error of mapping an arbitrary point was 0.20 ± 0.13 pixels (~16 ± 10 nm referenced to the pattern position in the sample). This represents a barrel distortion of about 0.09% for the full camera image. The accuracy of mapping points for each lens model was determined by measuring point-to-point distances of the corners detected in the camera image and the corresponding corners mapped from the microdisplay into the camera image.
2.7 Data acquisition and processing
We acquired image sequences using Andor IQ software, which was used together with an input/output computer card (PCIM-DDA06/16, Measurement Computing, Massachusetts) to move a Z stage (NanoScan Z100, Prior Scientific, Cambridge, UK).
All data processing was performed offline in Matlab (The Mathworks, Natick, Massachusetts). Image intensities of the raw data were first scaled into the interval [0, 1] based on the camera acquisition bit depth.
To use the scaled subtraction method, cf. Equation (5), we first smoothed the digital mask using a Gaussian filter with a sigma that approximates the measured point spread function (PSF) of the microscope.
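The mask smoothing step can be sketched with a separable Gaussian convolution in plain NumPy (our helper name; sigma in pixels, chosen to approximate the measured PSF):

```python
import numpy as np

def smooth_mask(mask, sigma):
    """Blur a binary digital mask with a Gaussian of the given sigma
    (pixels) using separable row/column convolution."""
    radius = int(np.ceil(4 * sigma))                 # truncate at 4 sigma
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()                           # preserve total intensity
    out = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode='same'), 1, mask.astype(float))
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode='same'), 0, out)
```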
We sometimes noticed slight patterned artifacts in the reconstructed images, which we attributed to minor fluctuations in the intensity of the laser. To correct this, we normalized each image of a sequence (used for reconstruction of one optical section) such that the average intensity of all images in this sequence is the same; this procedure was suggested by Cole et al. and gave satisfactory results for both fluorescent planes and biological samples.
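A sketch of this per-sequence normalization (our helper name): each frame is rescaled so that all frames share the grand mean intensity of the sequence, which removes slow laser drift without changing the total signal.

```python
import numpy as np

def normalize_sequence(stack):
    """Rescale each frame of a SIM sequence (N, rows, cols) so that
    every frame has the same mean intensity (the sequence grand mean)."""
    means = stack.mean(axis=(1, 2), keepdims=True)
    return stack * (means.mean() / means)
```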
3.1 Tunable optical sectioning ability
We first determined the optical sectioning ability of the LCOS microdisplay-based structured illumination microscope. This was done by focusing through a thin fluorescent layer sample while scanning with various illumination patterns. The thin fluorescent layer sample was prepared by spreading 1 μl of 40 nm orange fluorescent beads on a coverslip and allowing them to dry. The sample was then mounted in Mowiol and sealed with clear nail polish. Sectioned images were computed for each set of illumination patterns using the Max-Min approach, cf. Equation (3), homodyne detection, cf. Equation (4), and scaled subtraction, cf. Equation (5). The average intensity of each reconstructed image was determined as a function of the axial position of the sample. The resulting peak-shaped curves, with their maxima in the focal plane (see example data in Fig. 4(a)), were fitted to a bimodal Gaussian function plus a constant offset using non-linear least squares methods and normalized such that the maximum of each curve was set to one [24, 25], see example data in Fig. 4(b). From these data we computed the full width at half maximum (FWHM), which corresponds to the optical section thickness, and the offset. The offset originates from a combination of cross-talk between line pattern “slits” and the effect of noise, and is expressed as a percentage of the maximum intensity response. The sectioning data in Fig. 4 are slightly asymmetric; this is usually attributed to spherical aberrations. For these experiments, we used a 532 nm laser, a 100 × /1.45 NA oil immersion objective, and a Z-increment of 50 nm.
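The FWHM and offset extraction can be approximated without the full bimodal Gaussian fit by interpolating the normalized axial response curve (a simplified sketch, not the non-linear least squares procedure used in the paper; the function name is ours):

```python
import numpy as np

def fwhm_and_offset(z, response):
    """Peak-normalize an axial response curve, take the far-field floor
    as the offset, and measure the width at half maximum (between offset
    and peak) by linear interpolation on each flank."""
    z = np.asarray(z, float)
    r = np.asarray(response, float) / np.max(response)
    offset = min(r[0], r[-1])                 # residual out-of-focus floor
    level = offset + 0.5 * (1.0 - offset)     # half maximum above the offset
    above = r >= level
    i = np.argmax(above)                      # first sample at/above the level
    j = len(r) - 1 - np.argmax(above[::-1])   # last sample at/above the level
    zl = np.interp(level, [r[i - 1], r[i]], [z[i - 1], z[i]])
    zr = np.interp(level, [r[j + 1], r[j]], [z[j + 1], z[j]])
    return zr - zl, offset
```

For a Gaussian response of standard deviation sigma, this returns FWHM ≈ 2.355 sigma, the textbook relation.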
The tunable optical sectioning ability of the SIM system is shown in Fig. 5, where we have plotted the fitted FWHM and offset vs. the MAR and line thickness of the illumination pattern for the three processing methods. The line thickness is indicated by color. We can observe that the sectioning strength improves (lower FWHM) as the MAR of the pattern increases, but at the cost of increased offset (lower signal). A similar trend has also been observed in [2, 13, 24]. However, the scaled subtraction method is effective in removing the offset which is present when using the other two methods. In practice, this allows SIM to achieve optical sectioning thicknesses well below those available in CLSM. The thinnest optical section recorded was 299 nm (MAR = 1/3, line spacing 489 nm, diffraction limited line thickness). The system can therefore approach nearly isotropic resolution in x, y, and z. Our results are compatible with those of Neil et al. [3], who determined that optimal optical sectioning would be achieved with a line pattern with a spacing of λ/NA (~380 nm at λ = 550 nm and NA 1.45). In Fig. 5, we do not show the measured values for patterns with a spacing below λ/NA (i.e., MAR = 1/2 for lines one microdisplay pixel thick, corresponding to a line spacing of 326 nm) because of the very low pattern contrast in these cases.
We also evaluated the optical sectioning ability of a CLSM (Leica SP5 with 63 × /1.4 NA oil immersion objective, 561 nm laser, 1 AU pinhole) and of a spinning disk microscope (Andor Revolution, 60 × /1.4 NA oil immersion objective, 561 nm laser) using the same fluorescent layer sample. For the CLSM, the measured FWHM was 966 nm and offset was 0.05. For the spinning disk system the measured FWHM was 1.632 μm and offset was 0.11. These values are plotted as dashed lines in Fig. 5.
3.2 Comparison of different processing methods and scanning patterns
To illustrate the possible tradeoffs between the optical sectioning ability of the three processing methods and the spatial frequency of the illumination patterns, we imaged a relatively thick biological sample (a fluorescent pollen grain about 50 μm thick, type 30-4264, Carolina Biological), see images in Fig. 6. In order to compare the optical sectioning performance between the different processing methods and SIM illumination patterns, we estimated the signal to noise ratio (SNR, i.e., the ratio of the average signal to the standard deviation of the background) and the signal to background ratio (SBR, i.e., the ratio of the in focus foreground to the out of focus background) in the reconstructed images.
We calculated SNR and SBR as follows. The signal in the reconstructed image was segmented using an iterative threshold selection method based on a k-means algorithm. The background mask for SNR estimation was determined such that neither the in focus signal nor the out of focus light was included. To create the background mask far away from the sample, the signal mask was first morphologically dilated 15 times using a 3 × 3 structuring element and the result was inverted. The background mask for SBR estimation was established as the complement of the signal and noise masks derived for the SNR calculation in order to evaluate only the contribution from out of focus light.
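The mask construction can be sketched with plain-NumPy stand-ins for the two-class (k-means-style) thresholding and the 3 × 3 morphological dilation (function names are ours; the dilation count follows the text):

```python
import numpy as np

def isodata_threshold(img, tol=1e-6):
    """Iterative threshold selection: alternate between splitting the
    intensities at t and resetting t to the mean of the class means
    (a two-class k-means on pixel intensities)."""
    t = img.mean()
    while True:
        t_new = 0.5 * (img[img <= t].mean() + img[img > t].mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def dilate(mask, n=1):
    """n binary dilations with a 3x3 structuring element."""
    m = mask.astype(bool)
    h, w = m.shape
    for _ in range(n):
        p = np.pad(m, 1)
        d = np.zeros_like(m)
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                d |= p[dy:dy + h, dx:dx + w]
        m = d
    return m

def snr_sbr(img):
    """SNR and SBR masks as described in the text: noise is the region
    far from the dilated signal; out-of-focus light is the remainder."""
    signal = img > isodata_threshold(img)
    noise = ~dilate(signal, 15)
    out_of_focus = ~(signal | noise)
    snr = img[signal].mean() / img[noise].std()
    sbr = img[signal].mean() / img[out_of_focus].mean()
    return snr, sbr
```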
Figure 6 shows a comparison of single optical sections in the XY plane, as well as XZ and YZ projections of the reconstructed data. In this experiment, we used a 60 × /1.35 NA oil immersion objective and line grid illumination patterns with two different spatial frequencies. The nominal line thickness was 272 nm at the sample plane with a line spacing of 1.63 μm (MAR = 1/6), or 8.16 μm (MAR = 1/30).
It has been predicted theoretically that sparse (low MAR) patterns improve the sectioning ability of a SIM system when imaging thick samples. The YZ projections in Fig. 6 show that the low frequency illumination patterns indeed image the deepest parts of the sample better than the high frequency pattern. This difference is most pronounced when using scaled subtraction for reconstruction. However, we can also observe that the coarser pattern yields images that contain noticeably more out of focus signal than the fine pattern. We also found that the scaled subtraction method, used together with a digital mask acquired via the calibration scheme described in Sections 2.5 and 2.6, has both the highest SNR and SBR of the three tested methods. The reconstructed images also show that illumination patterns with low spatial frequency produce higher SNR but thicker optical sections (i.e., lower SBR), whereas for high spatial frequencies we observe thinner optical sections but lower SNR.
3.3 Comparing different optically sectioning microscopes
Finally, we compared the LCOS-based SIM system to more established optical sectioning microscopes. We imaged a similar pollen grain under conditions as close as we could achieve with the equipment available. Images in Fig. 7 show a comparison of widefield, CLSM, spinning disk, and microdisplay-based structured illumination microscopes. The illumination pattern for SIM was a line grid with MAR = 1/16 and a line thickness of 272 nm in the sample plane (60 × /1.35 NA oil immersion objective). We used scaled subtraction, cf. Equation (5), for processing the SIM data. The widefield image was computed from the SIM data using Eq. (2).
We can observe from the intensity profiles that the SIM system outperforms both the CLSM and spinning disk microscopes in terms of rejection of out of plane fluorescence. Similar results have been observed before for spinning disk microscopes, which do not reject out of focus signals as well as the SIM or CLSM systems. However, as the SIM system has to acquire several images to reconstruct a single optically sectioned image, image acquisition is slower than in spinning disk microscopes, which integrate all the pinhole positions in a single camera exposure. This limits the usefulness of the present system for live cell imaging, where rapidly moving structures would result in unwanted artifacts in the reconstructed images. However, our system should be well suited to high resolution scanning of fixed specimens.
The principle of optical sectioning by scaled subtraction of the out of focus light, cf. Equation (5), has been previously used in a different context in programmable array microscopes (PAMs) [13, 14, 17, 24, 25, 27] based on DMD or LCOS microdisplays (for a review see ). Similar to spinning disk confocal microscopy, in a PAM, optical sectioning is achieved by both producing the illumination pattern and descanning the fluorescence image using the same mask, in this case a microdisplay. The optically sectioned image created by the shifting scanning pattern is integrated by a CCD camera. As in our approach, PAMs also offer tunable optical sectioning ability. Though flexible and potentially very fast, PAMs suffer from reduced sensitivity compared to SIM with widefield detection, as descanning the image using the microdisplay results in both optical losses (the LCOS microdisplay used here, and in previous PAMs [13, 14], is about 60% reflective) and diffractive losses (i.e., diffraction of fluorescence signals into higher orders which might not be collected by the imaging lenses used to form the image on the CCD; this problem is more severe with DMD microdisplays than with LCOS devices).
We used the microdisplay to illuminate the sample by directly imaging the display onto the sample plane rather than forming a fringe pattern based on laser interference as is usually done in SIM to achieve resolution beyond the Abbe limit [11, 12, 29]. Because of this, any arbitrary pattern or binary image can be imaged onto the sample with very high fidelity. This is useful for applications such as fluorescence recovery after photobleaching (FRAP), where arbitrary shapes can be used for bleaching. However, patterns with very high spatial frequencies (approaching the limit defined by the NA of the objective) suffer from poor contrast according to the contrast transfer function of the microscope, resulting in reconstructed images with low SNR. This is not the case in SIM with coherent illumination, as is usually used when enhancing lateral resolution.
Using the microdisplay in the image plane as an arbitrary binary mask also means that we utilize illumination light very inefficiently, but this is not a fundamental problem given a bright enough light source (i.e., a laser with adequate power).
Structured illumination microscopy is currently a rapidly expanding field, in which spatial light modulators (SLMs) are preferred for their flexibility and speed compared to physical gratings. Here we introduced an approach which determined the mapping of the illumination pattern created on the microdisplay to the camera image. This allowed us to apply scaled subtraction of the out of focus light for reconstruction of optically sectioned images. Together these methods achieved greatly improved optical sectioning performance compared to both CLSMs and spinning disk microscopes, but with reduced cost and improved flexibility. The system presented here also offers improved sensitivity compared to previous SLM-based systems such as PAM, which rely on descanning the fluorescence signal. We anticipate SIM will continue to grow, especially as applications are found in combination with other imaging modes, such as light sheet microscopy and superresolution microscopy [11, 12, 29].
Appendix: Camera-microdisplay calibration
For camera-microdisplay calibration we need to determine numerical values of the projective matrix H in Eq. (6), as well as the distortion coefficients and the location of the principal point in Eq. (7). To do this, we adapted a method from machine vision applications. The goal is to find a set of corresponding points between the microdisplay and the camera image. One such correspondence is shown in Fig. 2.
To estimate the projective matrix H, we can rewrite Eq. (6) by eliminating the scale factor $\lambda$, which gives two linear equations per point correspondence:

$u\,(h_{31} x + h_{32} y + h_{33}) = h_{11} x + h_{12} y + h_{13}$,

$v\,(h_{31} x + h_{32} y + h_{33}) = h_{21} x + h_{22} y + h_{23}$,

where $h_{kl}$ are the entries of $H$. Stacking these equations for all correspondences yields a homogeneous linear system in the nine unknowns $h_{kl}$.
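Estimating H from point correspondences can be sketched with the direct linear transform: each correspondence contributes two rows of a homogeneous system A h = 0, whose least squares solution is the right singular vector of the smallest singular value (a sketch without the distortion terms, which require the subsequent non-linear refinement; the function name is ours):

```python
import numpy as np

def estimate_homography(display_pts, camera_pts):
    """Direct linear transform: each (x, y) <-> (u, v) correspondence
    gives two rows of A h = 0; solve by SVD and normalize h33 to 1."""
    rows = []
    for (x, y), (u, v) in zip(display_pts, camera_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]
```

With four or more well-spread, noise-free correspondences this recovers the generating homography exactly, which provides a convenient self-test of the calibration pipeline.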
The computation of the projective matrix H has to be coupled with estimation of the parameters of the lens distortion model in Eq. (7). This is done by minimizing the sum of squared distances between points mapped by Eqs. (6) and (7) with respect to the unknown parameters: the entries of H, the distortion coefficients, and the location of the principal point.
Appendix: Effect of camera-microdisplay calibration in the case of sparse samples
To further illustrate the effect of the camera-microdisplay calibration, we imaged a very sparse sample (fluorescent beads, 200 nm diameter) and a more densely labeled sample. For the dense sample, we labeled actin in paraformaldehyde-fixed HepG2 hepatocyte cells using Atto-532-phalloidin (Atto-tec, Siegen, Germany). The data in Fig. 8 show widefield images, a maximum intensity projection of a single SIM pattern position, an overlay of the previous image with the calibrated SIM pattern, and a maximum intensity projection of the reconstructed, optically sectioned image determined by the scaled subtraction method, cf. Eq. (5). With a densely labeled sample, the SIM pattern can be seen in the camera image and its position might be determined in some way. However, in the case of a sparse sample, there is no trace of the pattern in the camera images. As such, it would be very difficult, if not impossible, to determine the pattern position, and therefore not feasible to use scaled subtraction for SIM reconstruction. Use of the calibration and mapping approach introduced in Sections 2.5 and 2.6 allows us to determine the position of any arbitrary SIM pattern in the camera image, even in the case of sparse samples.
This work was supported by Grant Agency of the Czech Republic projects 304/09/1047, P205/12/P392, and P302/12/G157 and by the projects Prvouk/1LF/1 and UNCE 204022 from the Charles University.
References and links
1. R. Heintzmann, “Structured illumination methods,” in Handbook of Biological Confocal Microscopy, 3rd ed., J. B. Pawley, ed. (Springer Science + Business Media, 2006), pp. 265–279.
3. M. A. A. Neil, R. Juškaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. 22(24), 1905–1907 (1997). [CrossRef] [PubMed]
5. R. Heintzmann and C. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” Proc. SPIE 3568, 185–196 (1999). [CrossRef]
8. T. Fukano, A. Sawano, Y. Ohba, M. Matsuda, and A. Miyawaki, “Differential Ras activation between caveolae/raft and non-raft microdomains,” Cell Struct. Funct. 32(1), 9–15 (2007). [CrossRef] [PubMed]
10. S. Monneret, M. Rauzi, and P. F. Lenne, “Highly flexible whole-field sectioning microscope with liquid-crystal light modulator,” J. Opt. A, Pure Appl. Opt. 8(7), S461–S466 (2006). [CrossRef]
11. P. Kner, B. B. Chhun, E. R. Griffis, L. Winoto, and M. G. L. Gustafsson, “Super-resolution video microscopy of live cells by structured illumination,” Nat. Methods 6(5), 339–342 (2009). [CrossRef] [PubMed]
12. L. Shao, P. Kner, E. H. Rego, and M. G. L. Gustafsson, “Super-resolution 3D microscopy of live whole cells using structured illumination,” Nat. Methods 8(12), 1044–1046 (2011). [CrossRef] [PubMed]
13. G. M. Hagen, W. Caarls, K. A. Lidke, A. H. B. deVries, C. Fritsch, B. G. Barisas, D. J. Arndt-Jovin, and T. M. Jovin, “Fluorescence recovery after photobleaching and photoconversion in multiple arbitrary regions of interest using a programmable array microscope,” Microsc. Res. Tech. 72, 431–440 (2009). [CrossRef] [PubMed]
14. G. M. Hagen, W. Caarls, M. Thomas, A. Hill, K. A. Lidke, B. Rieger, C. Fritsch, B. van Geest, T. M. Jovin, and D. J. Arndt-Jovin, “Biological applications of an LCoS-based programmable array microscope,” Proc. SPIE 6441, 64410S (2007).
16. D. Armitage, I. Underwood, and S.-T. Wu, Introduction to Microdisplays (John Wiley and Sons, 2006), p. 377.
17. R. Heintzmann, Q. S. Hanley, D. Arndt-Jovin, and T. M. Jovin, “A dual path programmable array microscope (PAM): simultaneous acquisition of conjugate and non-conjugate images,” J. Microsc. 204(2), 119–135 (2001). [CrossRef] [PubMed]
19. M. Šonka, V. Hlaváč, and R. Boyle, Image Processing, Analysis, and Machine Vision, 2nd ed. (PWS Publishing, 1998), p. 770.
20. D. C. Brown, “Decentering distortion of lenses,” Photogramm. Eng. 32, 444–462 (1966).
21. J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992). [CrossRef]
22. J. A. Noble, “Descriptions of image surfaces,” D.Phil. thesis (University of Oxford, 1989).
23. M. J. Cole, J. Siegel, S. E. D. Webb, R. Jones, K. Dowling, M. J. Dayel, D. Parsons-Karavassilis, P. M. W. French, M. J. Lever, L. O. D. Sucharov, M. A. A. Neil, R. Juskaitis, and T. Wilson, “Time-domain whole-field fluorescence lifetime imaging with optical sectioning,” J. Microsc. 203(3), 246–257 (2001). [CrossRef] [PubMed]
24. Q. S. Hanley, P. J. Verveer, M. J. Gemkow, D. J. Arndt-Jovin, and T. M. Jovin, “An optical sectioning programmable array microscope implemented with a digital micromirror device,” J. Microsc. 196(3), 317–331 (1999). [CrossRef] [PubMed]
25. P. J. Verveer, Q. S. Hanley, P. W. Verbeek, L. J. vanVliet, and T. M. Jovin, “Theory of confocal fluorescence imaging in the programmable array microscope (PAM),” J. Microsc. 189(3), 192–198 (1998). [CrossRef]
27. P. A. A. DeBeule, A. H. B. deVries, D. J. Arndt-Jovin, and T. M. Jovin, “Generation-3 programmable array microscope (PAM) with digital micro-mirror device (DMD),” Proc. SPIE 7932(1), 79320G (2011). [CrossRef]
28. P. Křížek and G. M. Hagen, “Spatial light modulators in fluorescence microscopy,” in Microscopy: Science, Technology, Applications and Education, 4th ed., A. Méndez-Vilas and J. Díaz, eds. (Formatex, 2010), pp. 1366–1377.
30. T. A. Planchon, L. Gao, D. E. Milkie, M. W. Davidson, J. A. Galbraith, C. G. Galbraith, and E. Betzig, “Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination,” Nat. Methods 8(5), 417–423 (2011). [CrossRef] [PubMed]