Abstract

Structured illumination microscopy (SIM) has grown into a family of methods which achieve optical sectioning, resolution beyond the Abbe limit, or a combination of both effects in optical microscopy. SIM techniques rely on illumination of a sample with patterns of light which must be shifted between each acquired image. The patterns are typically created with physical gratings or masks, and the final optically sectioned or high resolution image is obtained computationally after data acquisition. We used a flexible, high speed ferroelectric liquid crystal microdisplay for definition of the illumination pattern coupled with widefield detection. Focusing on optical sectioning, we developed a unique and highly accurate calibration approach which allowed us to determine a mathematical model describing the mapping of the illumination pattern from the microdisplay to the camera sensor. This is important for higher performance image processing methods such as scaled subtraction of the out of focus light, which require knowledge of the illumination pattern position in the acquired data. We evaluated the signal to noise ratio and the sectioning ability of the reconstructed images for several data processing methods and illumination patterns with a wide range of spatial frequencies. We present our results on a thin fluorescent layer sample and also on biological samples, where we achieved thinner optical sections than either confocal laser scanning or spinning disk microscopes.

©2012 Optical Society of America

1. Introduction

Structured illumination microscopy (SIM) works by acquiring a set of images at a given focal plane using widefield detection, where each image in the set is made with a different position of an illumination mask but with no mask in the detection path [1]. Subsequent image processing is always needed to yield an optically sectioned image [1–3] or an image with resolution beyond the Abbe limit [4, 5].

We focused on improvements to the “SIM for optical sectioning” application. The most familiar implementation of this technique was introduced in 1997 by Neil et al. [3]. Their method works by projecting a line illumination pattern onto a sample, followed by acquisition of a set of three widefield images with the pattern shifted by relative spatial phases 0, 2π/3, and 4π/3. An optically sectioned image can be recovered computationally as

$$ I_c = \left[ (I_1 - I_2)^2 + (I_1 - I_3)^2 + (I_2 - I_3)^2 \right]^{1/2}, \qquad (1) $$
where $I_c$ is the optically sectioned image, and $I_1$, $I_2$, and $I_3$ are the three images acquired with different pattern positions. Microscopes based on this approach have been used rather extensively, but with limited flexibility in tuning the optical section thickness, the choice of illumination patterns, or the data processing method.

In previous structured illumination microscopes, the illumination patterns have typically been created using physical gratings. This limits imaging speeds, as the grating must be precisely shifted, and the position must be stable before image acquisition to prevent artifacts. Moreover, use of a single physical grating implies that a particular fringe projection system will be optimized for only a few microscope objectives (usually, high magnification, high NA objectives). However, it has been found that an optimized grating frequency can achieve significantly thinner optical sections than confocal laser scanning microscopes (CLSMs) with pinholes set to 1 Airy unit (AU) [6]. This has been established theoretically as well [2].

In the quest for higher imaging rates and increased flexibility, investigators have turned to spatial light modulators (SLMs) for pattern creation. SIM for sectioning systems employing widefield detection have used digital micromirror devices (DMDs) [7–9], or transmissive liquid crystal SLMs [10].

To increase both the flexibility and the optical sectioning performance of structured illumination microscopy, we used a reflective ferroelectric liquid crystal-on-silicon (LCOS) microdisplay to create the illumination pattern. Use of the microdisplay allows us to utilize a truly arbitrary pattern for structured illumination, including arrays of lines, dots, or random patterns, and thus, to find the most suitable scanning pattern for a given sample. This flexibility allows us to easily compromise between scanning speed (i.e., the number of patterns), the desired signal to noise ratio (SNR), and the optical section thickness. Similar LCOS microdisplays have been used previously in SIM [11, 12], and in programmable array microscopy (PAM) [13, 14].

We present a unique, highly accurate calibration procedure that allows us to determine a one-to-one mapping between pixels of the microdisplay used to create the illumination pattern and pixels of the camera chip. In this way we can recreate a digital illumination mask in the acquired data. Knowledge of the exact position of the illumination pattern in each camera image allowed us to apply higher performance data processing methods for image reconstruction, i.e., scaled subtraction of the out of focus light [1, 15].

In the context of “SIM for optical sectioning” systems with no mask in the detection path, the scaled subtraction approach has previously only been suggested as a possible processing method [15]. We evaluated its 3D imaging performance compared to two other more simplistic techniques, one applying a maximum minus minimum projection approach [15], and one applying homodyne detection [3, 15]. Scaled subtraction allowed us to obtain better results (i.e., better suppression of out of focus signals with higher SNR) compared to methods which process the data without any information about which part of the sample was illuminated.

The properties of our LCOS-based SIM system are demonstrated using a thin fluorescent layer sample and with biological samples. For illumination we used line grid patterns with a wide range of spatial frequencies. We also compared the results with a Leica SP5 CLSM and an Andor Revolution spinning disk system.

2. Methods

2.1 Microscope setup

Our setup is shown in Fig. 1(a). We used an IX71 microscope equipped with 100 × /1.45 NA and 60 × /1.35 NA objectives (both oil immersion, Olympus, Hamburg, Germany). We used two detectors: a conventional CCD camera (Clara) and an EMCCD (Ixon 885, both from Andor, Belfast, Northern Ireland). For illumination, we used a 532 nm solid state laser (1000 mW, Dragon laser, ChangChun, China). The laser was introduced into the microscope using a 0.39 NA multimode optical fiber and a 1 inch, 75 mm focal length achromatic lens for collimation (Thor Labs, Newton, New Jersey). To scramble the coherence of the laser and reduce speckle, we used a laser speckle reducer (Optotune, Dietikon, Switzerland) based on an electroactive polymer. Fluorescence was isolated using an appropriate filter set for Cy3 (Chroma, Bellows Falls, Vermont).

Fig. 1 Structured illumination microscope: a) the microscope setup, b) principle of LCOS microdisplay operation.

The microscope's illumination tube lens and objective collect the light from pixels in the ON state and image the microdisplay onto the sample, see Fig. 1(a). For the illumination tube lens we have chosen a 150 mm focal length lens (Thor Labs) as it images the microdisplay so that it just fills the field of view of the microscope. Because Olympus objectives are designed to use tube lenses with a focal length of 180 mm, the effective demagnification of the microdisplay into the sample is a factor of (150/180) × MAG, where MAG is the magnification of the objective. Using a 100 × /1.45 NA objective, a single 13.6 × 13.6 μm microdisplay pixel will be imaged into the sample with a nominal size of 163 × 163 nm.

At 532 nm, the Abbe limit for our 1.45 NA objective is λ/2NA = 183.4 nm, larger than the 163 nm microdisplay pixel size we used. Nyquist-limited sampling of the specimen by the SIM pattern would imply an optimal microdisplay pixel size of < λ/4NA = 91.7 nm as imaged in the sample. However, we did not observe obvious patterned artifacts in the reconstructed images and so judged that a pixel size of 163 nm was adequate. In any case, these relationships can easily be changed by choosing a different illumination tube lens and objective. A separate matter is the CCD pixel size. With a 100 × objective the camera pixel size was 80 nm in the sample, implying Nyquist-limited imaging at the fluorescence wavelength (~550 nm).
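The arithmetic above can be summarized in a few lines of Matlab; this is a minimal sketch using only the values quoted in the text (variable names are illustrative).

```matlab
% Effective microdisplay pixel size in the sample and sampling limits.
pix_display = 13.6e-6;   % microdisplay pixel pitch [m]
f_illum     = 150e-3;    % illumination tube lens focal length [m]
f_design    = 180e-3;    % design tube lens focal length for Olympus objectives [m]
MAG         = 100;       % objective magnification
NA          = 1.45;      % objective numerical aperture
lambda      = 532e-9;    % excitation wavelength [m]

demag      = (f_illum / f_design) * MAG;   % effective demagnification, ~83.3x
pix_sample = pix_display / demag;          % ~163 nm per microdisplay pixel in the sample
abbe       = lambda / (2 * NA);            % ~183 nm Abbe limit
nyquist    = lambda / (4 * NA);            % ~92 nm Nyquist-limited pattern pixel size
fprintf('pixel: %.0f nm, Abbe: %.1f nm, Nyquist: %.1f nm\n', ...
    pix_sample * 1e9, abbe * 1e9, nyquist * 1e9);
```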

2.2 LCOS microdisplay operation

The ferroelectric LCOS microdisplay (type 3DM, Forth Dimension Displays, Dalgety Bay, Scotland) used in our setup offers several characteristics advantageous for structured illumination microscopy including high fill factor (93%), small pixels (13.6 × 13.6 μm), high contrast (> 1000:1 at f/3.2) and high speed (40 μs on/off switch time, ~3.2 kHz maximum pattern refresh rate).

This device functions as an addressable array of quarter-wave plates with a reflective backing. We used it as a programmable spatial light modulator in a binary imaging mode. Pixels that are in the ON state rotate the polarization of light by ~70 degrees after two passes through the liquid crystal material (manufacturer's specification for operation at room temperature). Pixels in the OFF state reflect the light without changing the state of polarization. If vertically polarized illumination light is reflected onto the display using a polarizing beam splitter (PBS) cube (Thor Labs), then after reflection off the display, only horizontally polarized light (corresponding to the pixels in the ON state) is transmitted through the PBS cube towards the microscope, see Fig. 1(b). This allows us to create any desired binary illumination pattern.

Microdisplays are typically used for video projection, meaning that grayscale (or full color) images must be produced. This is usually accomplished using a bitplane weighting approach [16]. For our purposes, i.e., creating a binary mask, a drive sequence with equally weighted bitplanes is required. We chose the longest available bitplane duration, 300 μs.

The LCOS microdisplay requires that the state of every pixel is reversed after each image, i.e., after each 300 μs bitplane. During such compensation cycles, the light source must be switched off. We accomplished this by directly switching the lasers off using synchronizing signals derived from the 3DM microdisplay controller.

The 3DM microdisplay controller is equipped with user-definable inputs and outputs which we used to synchronize image acquisition. Signals derived from the camera's exposure output signal were used to begin and end a particular SIM pattern, and to advance to the next pattern in the set of illumination patterns. Each 300 μs bitplane (i.e., each structured illumination pattern) is repeated a number of times and is integrated by the camera for the length of the chosen exposure (typically 100 ms). The camera therefore integrates an equal number of dark and illuminated bitplanes.

2.3 Illumination patterns

Most strategies in structured illumination microscopy assume that the set of illumination masks required for image reconstruction consists of N equal movements of the same pattern, such that the sum of all of the masks results in homogeneous illumination. Let us define the mark-to-area ratio (MAR) [1] of the pattern as the fraction of pixels which are considered to be illuminated in a unit area of the pattern.

The illumination masks used in our experiments consisted of line grid patterns. Lines were t microdisplay pixels thick (“on” pixels) with a gap of N − t microdisplay pixels (“off” pixels) in between. The line grid was shifted by one pixel between each frame to obtain a new illumination mask. The mark-to-area ratio is then MAR = t/N.
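A minimal sketch of how such a set of line-grid masks can be generated in Matlab; the region size is illustrative, while the period N, line thickness t, and one-pixel shift per frame follow the description above.

```matlab
% Generate N binary line-grid masks: lines t pixels thick, period N,
% shifted by one microdisplay pixel between frames (illustrative sizes).
rows = 256; cols = 256;   % microdisplay region of interest
N = 7; t = 1;             % pattern period and line thickness, MAR = t/N
masks = false(rows, cols, N);
for n = 1:N
    onRows = mod((0:rows-1) - (n - 1), N) < t;   % rows switched ON in frame n
    masks(onRows, :, n) = true;
end
s = sum(masks, 3);
assert(all(s(:) == t));                          % the N shifts sum to uniform illumination
MAR = nnz(masks(:,:,1)) / (rows * cols);         % equals t/N
```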

The illumination masks can also be created from dots in a square or hexagonal grid [15], or using randomized illumination patterns [17, 18]. However, we do not consider these matters here.

2.4 Optical sectioning from structured illumination data

Several computational approaches for obtaining optically sectioned images from structured illumination data are reviewed in [1]. Essentially, there are two approaches for data processing. The first type reconstructs optically sectioned images without any information except for the number of illumination patterns N. The second type requires, in addition, knowledge of the exact position of the illumination mask in the camera image.

Let $I_{Ci}$ represent intensity values of the computed sectioned image, let $I_n$ be intensity values of the camera image captured at frame $n$ in the sequence of $N$ illumination patterns, and let $(x, y)$ indicate the pixel position in the camera image. A widefield image can be recovered from SIM data as the average of all images:

$$ I_{WF}(x,y) = \frac{1}{N} \sum_{n=1}^{N} I_n(x,y). \qquad (2) $$

Two simpler methods reconstruct optical sections from the SIM data as follows:

$$ I_{C1}(x,y) = \max_{n=1,\ldots,N} I_n(x,y) - \min_{n=1,\ldots,N} I_n(x,y), \qquad (3) $$
$$ I_{C2}(x,y) = \left| \sum_{n=1}^{N} I_n(x,y) \exp\!\left( \frac{2\pi i n}{N} \right) \right|. \qquad (4) $$

The approach described by Eq. (3) applies maximum minus minimum projection [15]. Here it is assumed that the “max” term contains mainly contributions from parts of the sample that are in focus and the “min” term mainly contributions from out of focus regions. The method in Eq. (4) is a form of homodyne detection [3], which is a technique based on detecting frequency-modulated signals by interference with a reference signal.
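Assuming the raw frames are stacked in a rows × cols × N double array I (scaled to [0, 1]), Eqs. (2)–(4) reduce to a few array operations; a minimal Matlab sketch:

```matlab
% Widefield, max-min and homodyne reconstructions from a SIM stack I.
[~, ~, N] = size(I);
I_WF = mean(I, 3);                                % Eq. (2): widefield image
I_C1 = max(I, [], 3) - min(I, [], 3);             % Eq. (3): max - min projection
phase = reshape(exp(2i * pi * (1:N) / N), 1, 1, N);
I_C2  = abs(sum(bsxfun(@times, I, phase), 3));    % Eq. (4): homodyne detection
```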

For the next method it is essential to know the exact illumination mask position in the acquired data. This method applies scaled subtraction of the out of focus light [1, 15, 17], and is defined as:

$$ I_{C3}(x,y) = \beta \left[ \frac{\sum_{n=1}^{N} I_n(x,y)\,\mathrm{Mask}_n^{on}(x,y)}{\sum_{n=1}^{N} \mathrm{Mask}_n^{on}(x,y)} - \frac{\sum_{n=1}^{N} I_n(x,y)\,\mathrm{Mask}_n^{off}(x,y)}{\sum_{n=1}^{N} \mathrm{Mask}_n^{off}(x,y)} \right], \qquad (5) $$
$$ \beta = \frac{N \sum_{n=1}^{N} \mathrm{Mask}_n^{on}(x,y) - \left( \sum_{n=1}^{N} \mathrm{Mask}_n^{on}(x,y) \right)^2}{N \sum_{n=1}^{N} \left( \mathrm{Mask}_n^{on}(x,y) \right)^2 - \left( \sum_{n=1}^{N} \mathrm{Mask}_n^{on}(x,y) \right)^2}, $$
where $\mathrm{Mask}_n^{on}(x,y) \in [0,1]$ is the intensity of the digital illumination pattern in the camera image for a given frame $n$, and $\mathrm{Mask}_n^{off} = 1 - \mathrm{Mask}_n^{on}$. The first term of Eq. (5) represents the conjugate (mostly confocal) light and the second term the non-conjugate (mostly out of focus) light [17]. The denominators in Eq. (5) correct for the number of pattern positions, e.g., one pixel vs. multi-pixel shifts of the pattern. The variable $\beta$ corrects for small pixel-to-pixel variations in mask intensity and can be set to one if this is not an issue. We did not notice such effects and so set $\beta = 1$ throughout.
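With the digital masks mapped into camera coordinates (Section 2.6) and stacked in a hypothetical array MaskOn (same size as the image stack I, values in [0, 1]), Eq. (5) with β = 1 can be sketched as:

```matlab
% Scaled subtraction, Eq. (5), with beta = 1. MaskOn holds the (smoothed)
% digital illumination masks in camera coordinates for each frame.
MaskOff = 1 - MaskOn;
onTerm  = sum(I .* MaskOn,  3) ./ sum(MaskOn,  3);   % conjugate (mostly in-focus) light
offTerm = sum(I .* MaskOff, 3) ./ sum(MaskOff, 3);   % non-conjugate (out-of-focus) light
I_C3 = onTerm - offTerm;                             % optically sectioned image
```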

2.5 One-to-one correspondence mapping between microdisplay and camera

The illumination pattern position in the camera image might be determined by analyzing the raw images. In our hands this proved both difficult and inaccurate, particularly with sparse samples. In the following two sections we introduce a procedure that allows us to determine a mathematical model describing the one-to-one mapping between the microdisplay and the camera sensor and thus to create a digital illumination mask in the camera image. Having such a model allows one to use arbitrary illumination patterns, to determine the exact pattern position in the camera image even with sparse samples, and to correct for distortions of the illumination pattern in the acquired data.

The illumination patterns created on the microdisplay are projected onto the camera chip as follows. An optical ray originating at a point $x$ in the plane of the microdisplay passes through the illumination tube lens and the microscope objective, and illuminates the sample. Fluorescence from the sample is collected by the objective and imaged by the microscope at the point $\hat{u}$ on the camera sensor. A block diagram is shown in Fig. 2.

Fig. 2 Principle of mapping the illumination pattern from the microdisplay to a camera sensor. The rotation of the microdisplay and barrel distortions in the camera image are greatly exaggerated for clarity. Point $x$ in the microdisplay is detected at point $\hat{u}$ in the camera. We wish to know the position of any arbitrary microdisplay point in the camera image. To do so, we must determine the projective matrix H, see Eq. (6), and the coefficients of the polynomial modeling the geometric distortions, see Eq. (7).

Let us assume for the moment that there are no distortions in the path of the optical ray, i.e., $u = \hat{u}$, and let us express the positions of the points $x$ and $u$ in projective (also called homogeneous) coordinates: $\tilde{x} = [x, y, 1]^T$, $\tilde{u} = [u, v, 1]^T$. Using projective coordinates allows us to describe affine transformations (e.g., translation, rotation, scaling, reflection) by a single matrix multiplication. It can be shown [19] that any point (or pixel) from the microdisplay plane can be unambiguously mapped to the camera sensor (and vice versa) using a linear projective transformation (also called a homography)

$$ \alpha \tilde{u} = H \tilde{x}, \qquad (6) $$
where H is a constant projective matrix, and $\alpha$ is an appropriate scaling factor. The projective matrix H is unique for a given setup and has to be determined, see Section 2.6.

Unfortunately, the illumination pattern created on the microdisplay is slightly distorted when it is imaged onto the camera. Therefore, we correct the mapping in Eq. (6) for two distortion components. First, radial distortion (i.e., barrel or pincushion distortion) bends the optical ray from its ideal position, and second, decentering displaces the principal point $\hat{u}_p$ from the optical axis. Radial distortion is usually modeled by an even-power polynomial. The corrected image coordinates $u = [u, v]^T$ are obtained by one-to-one mapping as

$$ u = \hat{u} + (\hat{u} - \hat{u}_p) \sum_{k=1}^{K} \rho_k r^{2k}, \qquad (7) $$
where $\hat{u} = [\hat{u}, \hat{v}]^T$ are the measured uncorrected image coordinates, $\hat{u}_p = [\hat{u}_p, \hat{v}_p]^T$ is the measured position of the principal point, $r = \|\hat{u} - \hat{u}_p\|_2$ is the radial distance from the principal point, $\|\cdot\|_2$ stands for the Euclidean norm, and $\rho_k$, $k = 1, \ldots, K$, are the coefficients of the polynomial modeling the radial distortions; typically only a few terms are needed. The location of the principal point $\hat{u}_p$ and the coefficients $\rho_k$ are unknown and have to be determined, see Section 2.6. Further details about this topic, including more complex lens distortion models, can be found in references [19–21].
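As an illustration of the model, the following Matlab sketch maps a microdisplay point through Eq. (6) and applies the radial correction of Eq. (7) to a measured camera point. H, rho, u_p, x, y, and u_hat are placeholders for the quantities obtained by the calibration of Section 2.6 and from the data.

```matlab
% Map a microdisplay point x = [x; y] into the camera image via Eq. (6),
% then correct a measured camera point u_hat for radial distortion, Eq. (7).
% H (3x3), rho (1xK) and u_p (2x1) come from the calibration (Section 2.6).
x_tilde = [x; y; 1];                    % projective coordinates of the display point
u_tilde = H * x_tilde;                  % Eq. (6), defined up to the scale alpha
u_ideal = u_tilde(1:2) / u_tilde(3);    % de-homogenize: ideal camera position

K = numel(rho);
r = norm(u_hat - u_p);                                    % radial distance
u = u_hat + (u_hat - u_p) * sum(rho .* r.^(2 * (1:K)));   % Eq. (7), corrected point
```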

2.6 Camera-microdisplay calibration

Camera-microdisplay calibration is a procedure that allows us to determine numerical values of the projective matrix H in Eq. (6), the distortion coefficients $\rho_k$, and the location of the principal point $\hat{u}_p$ in Eq. (7). Calibration proceeds by finding corresponding pairs of points between the microdisplay and the camera image. Each such correspondence provides one instance of Eq. (6) and one of Eq. (7). This results in a system of equations, see the Appendix, which can be solved by least squares methods. Typically hundreds of points are used to correct for uncertainty in the measurements.

As the microdisplay lets us create an arbitrarily configured “known scene”, we established the calibration using a chessboard pattern with a box size of 8 × 8 microdisplay pixels, see Fig. 3(a). Four orientation markers were placed in the chessboard center to define the coordinate system of the microdisplay. The chessboard illumination pattern was projected to the camera sensor using a thin fluorescent film sample. The corresponding camera image is shown in Fig. 3(b). As reference points, we used the known positions of the corners on the microdisplay. Corners in the camera image were detected automatically with subpixel precision using a corner detector described by Noble in [22].

Fig. 3 Chessboard calibration image with four orientation markers defining the coordinate system: a) microdisplay image with known positions of corners and markers and b) the raw camera image (100 × /1.45 NA objective; camera is rotated with respect to the display) of the same area with detected corners and markers. Correspondences are indicated by numbers.

The final mapping between the microdisplay and the camera was estimated from about 3600 corresponding corner points. We found that, when using a 100 × objective, the residual error of mapping any arbitrary point from the microdisplay to the camera image using the lens model corrected for radial distortion and decentering components, cf. Equations (6) and (7), is 0.12 ± 0.08 pixels (~10 ± 6 nm referenced to the pattern position in the sample). When using a simpler lens model without distortion correction, the residual error of mapping an arbitrary point was 0.20 ± 0.13 pixels (~16 ± 10 nm referenced to the pattern position in the sample). This represents a barrel distortion of about 0.09% for the full camera image. The accuracy of mapping points for each lens model was determined by measuring point-to-point distances of the corners detected in the camera image and the corresponding corners mapped from the microdisplay into the camera image.

2.7 Data acquisition and processing

We acquired image sequences using Andor IQ software, which was used together with an input/output computer card (PCIM-DDA06/16, Measurement Computing, Massachusetts) to move a Z stage (NanoScan Z100, Prior Scientific, Cambridge, UK).

All data processing was performed offline in Matlab (The Mathworks, Natick, Massachusetts). Image intensities of the raw data were first scaled into the interval [0, 1] based on the camera acquisition bit depth.

To use the scaled subtraction method, cf. Equation (5), we first smoothed the digital mask using a Gaussian filter with a sigma that approximates the measured point spread function (PSF) of the microscope.

We sometimes noticed slight patterned artifacts in the reconstructed images, which we attributed to minor fluctuations in the intensity of the laser. To correct this, we normalized each image of a sequence (used for reconstruction of one optical section) such that the average intensity of all images in the sequence is the same; this procedure was suggested by Cole et al. [23] and gave satisfactory results for both fluorescent planes and biological samples.
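A minimal sketch of this normalization, assuming the raw frames of one section are stacked in a rows × cols × N array I:

```matlab
% Equalize the mean intensity of every frame in a SIM sequence to suppress
% residual pattern artifacts caused by laser power fluctuations.
frameMean = squeeze(mean(mean(I, 1), 2));   % mean intensity of each frame
target = mean(frameMean);                   % common target mean
for n = 1:size(I, 3)
    I(:,:,n) = I(:,:,n) * (target / frameMean(n));
end
```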

3. Results

3.1 Tunable optical sectioning ability

We first determined the optical sectioning ability of the LCOS microdisplay-based structured illumination microscope. This was done by focusing through a thin fluorescent layer sample while scanning with various illumination patterns. The thin fluorescent layer sample was prepared by spreading 1 μl of 40 nm orange fluorescent beads on a coverslip and allowing them to dry. The sample was then mounted in Mowiol and sealed with clear nail polish. Sectioned images were computed for each set of illumination patterns using the Max-Min approach, cf. Equation (3), homodyne detection, cf. Equation (4), and scaled subtraction, cf. Equation (5). The average intensity of each reconstructed image was determined as a function of the axial position of the sample. The resulting peak-shaped curves with their maxima in the focal plane, see example data in Fig. 4(a), were fitted to a bimodal Gaussian function plus a constant offset using non-linear least squares methods and normalized such that the maximum of each curve was set to one [24, 25], see example data in Fig. 4(b). From these data we computed the full width at half maximum (FWHM), which corresponds to the optical section thickness, and the offset. The offset originates from a combination of cross-talk between line pattern “slits” and the effect of noise [1], and is expressed as a percentage of the maximum intensity response. The sectioning data in Fig. 4 are slightly asymmetric; this is usually attributed to spherical aberration. For these experiments, we used a 532 nm laser, a 100 × /1.45 NA oil immersion objective, and a Z-increment of 50 nm.
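A sketch of the fitting step, assuming the axial response has been collected into vectors z (axial position) and resp (average intensity of the reconstructed section); the model, initial guesses, and the numerical FWHM estimate are illustrative, and lsqcurvefit requires the Optimization Toolbox.

```matlab
% Fit the axial response with a bimodal Gaussian plus a constant offset,
% normalize the peak to one, and estimate FWHM and offset numerically.
model = @(p, z) p(1) * exp(-(z - p(2)).^2 / (2 * p(3)^2)) ...
              + p(4) * exp(-(z - p(5)).^2 / (2 * p(6)^2)) + p(7);
p0 = [max(resp), -0.2, 0.3, max(resp)/2, 0.2, 0.5, min(resp)];  % rough guesses [um]
p  = lsqcurvefit(model, p0, z, resp);

zf    = linspace(min(z), max(z), 5000)';
curve = model(p, zf) / max(model(p, zf));       % normalize maximum to one
offset = p(7) / max(model(p, zf));              % offset as a fraction of the maximum
idx  = find(curve >= 0.5);
fwhm = zf(idx(end)) - zf(idx(1));               % full width at half maximum
```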

Fig. 4 Example of SIM data used to determine optical sectioning parameters of the system. The illumination pattern was a line grid with MAR = 1/7 (line spacing 981 nm) and line thickness of one microdisplay pixel (diffraction limited in the sample plane). Data were acquired using a 100 × /1.45 NA oil immersion objective, 532 nm laser light source, EMCCD camera, and a Z-increment of 50 nm. Shown are a) processed data and b) data after normalization with the fitted bimodal Gaussian functions.

The tunable optical sectioning ability of the SIM system is shown in Fig. 5, where we have plotted the fitted FWHM vs. MAR and offset vs. MAR and line thickness of the illumination pattern for the three processing methods. The line thickness is indicated by color. We can observe that the sectioning strength improves (lower FWHM) as the MAR of the pattern increases, but at the cost of increased offset (lower signal). A similar trend has also been observed in [2, 13, 24]. However, the scaled subtraction method is effective in removing the offset which is present when using the other two methods. In practice, this allows SIM to achieve optical sectioning thicknesses well below those available in CLSM. The thinnest optical section recorded was 299 nm (MAR = 1/3, line spacing 489 nm, diffraction limited line thickness). The system can therefore approach nearly isotropic resolution in x, y and z. Our results are compatible with those of Neil et al. [3], who determined that optimal optical sectioning would be achieved with a line pattern with a spacing of λ/NA (~380 nm at λ = 550 nm and NA 1.45). In Fig. 5, we do not show the measured values for patterns with a spacing below λ/NA (i.e., MAR = 1/2 for lines one microdisplay pixel thick, corresponding to a line spacing of 326 nm) because of very low pattern contrast in these cases.

Fig. 5 Tunable optical sectioning of the LCOS-based structured illumination microscope for the three examined data processing methods, cf. Equations (3)–(5). The top row shows measured FWHM vs. MAR and the bottom row measured offset vs. MAR. The illumination pattern was a line grid with MAR ∈ [0.05, 0.9] and line thickness of one to six microdisplay pixels (163 nm – 1.08 μm in the sample plane). Data were acquired using a 100 × /1.45 NA oil immersion objective, 532 nm laser, EMCCD camera, and a Z-increment of 50 nm. The horizontal dashed lines indicate the measured values for a CLSM with its pinhole set to 1 AU (FWHM = 966 nm, offset = 0.05) and for a spinning disk system (FWHM = 1.632 μm, offset = 0.11).

We also evaluated the optical sectioning ability of a CLSM (Leica SP5 with 63 × /1.4 NA oil immersion objective, 561 nm laser, 1 AU pinhole) and of a spinning disk microscope (Andor Revolution, 60 × /1.4 NA oil immersion objective, 561 nm laser) using the same fluorescent layer sample. For the CLSM, the measured FWHM was 966 nm and offset was 0.05. For the spinning disk system the measured FWHM was 1.632 μm and offset was 0.11. These values are plotted as dashed lines in Fig. 5.

3.2 Comparison of different processing methods and scanning patterns

To illustrate the possible tradeoffs between the optical sectioning ability of the three processing methods and the spatial frequency of the illumination patterns, we imaged a relatively thick biological sample (a fluorescent pollen grain about 50 μm thick, type 30-4264, Carolina Biological), see images in Fig. 6. In order to compare the optical sectioning performance between the different processing methods and SIM illumination patterns, we estimated the signal to noise ratio (SNR, i.e., the ratio of the average signal to the standard deviation of the background) and the signal to background ratio (SBR, i.e., the ratio of the in focus foreground to the out of focus background) in the reconstructed images.

Fig. 6 Comparison of scanning patterns and processing methods. Images of a pollen grain were acquired using a 60 × /1.35 NA oil objective, 532 nm laser, EMCCD camera and a Z-increment of 500 nm. The intensity profiles are along the vertical lines in the corresponding XY images.

We calculated SNR and SBR as follows. The signal in the reconstructed image was segmented using an iterative threshold selection method based on a k-means algorithm [19]. The background mask for SNR estimation was determined such that neither the in focus signal nor the out of focus light was included. To create the background mask far away from the sample, the signal mask was first morphologically dilated 15 times using a 3 × 3 structuring element and the result was inverted. The background mask for SBR estimation was established as the complement of the signal and noise masks derived for the SNR calculation, in order to evaluate only the contribution from out of focus light.
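A sketch of the SNR/SBR computation for one reconstructed image R; the simple iterative threshold below stands in for the k-means-based selection of [19], and imdilate/strel require the Image Processing Toolbox.

```matlab
% Segment the signal, build background masks, and compute SNR and SBR.
T = mean(R(:));                                   % iterative threshold selection
for it = 1:100
    Tnew = 0.5 * (mean(R(R >= T)) + mean(R(R < T)));
    if abs(Tnew - T) < 1e-6, break; end
    T = Tnew;
end
signalMask = R >= T;

dil = signalMask;                                 % dilate signal 15x with a 3x3 element
for k = 1:15
    dil = imdilate(dil, strel('square', 3));
end
noiseMask = ~dil;                                 % background far from the sample
sbrMask   = ~(signalMask | noiseMask);            % remaining (out-of-focus) region

SNR = mean(R(signalMask)) / std(R(noiseMask));
SBR = mean(R(signalMask)) / mean(R(sbrMask));
```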

Figure 6 shows a comparison of single optical sections in the XY plane, as well as XZ and YZ projections of the reconstructed data. In this experiment, we used a 60 × /1.35 NA oil immersion objective and a line grid illumination pattern with two spatial frequencies. The nominal line thickness was 272 nm at the sample plane with a line spacing of 1.63 μm (MAR = 1/6), or 8.16 μm (MAR = 1/30).

It has been predicted theoretically that sparse (low MAR) patterns improve the sectioning ability of a SIM system when imaging thick samples [1]. The YZ projections in Fig. 6 show that the low frequency illumination patterns indeed image the deepest parts of the sample better than the high frequency pattern. This difference is most remarkable when using scaled subtraction for reconstruction. However, we can also observe that the coarser pattern yields images that contain noticeably more out of focus signal than the fine pattern. We also found that the scaled subtraction method used together with a digital mask acquired via the calibration scheme described in Sections 2.5 and 2.6 has both the highest SNR and SBR of the three tested methods. The reconstructed images also show that illumination patterns with low spatial frequency produce higher SNR but thicker optical sections (i.e., lower SBR) whereas for high spatial frequencies we observe thinner optical sections but lower SNR.

3.3 Comparing different optically sectioning microscopes

Finally, we compared the LCOS-based SIM system to more established optical sectioning microscopes. We imaged a similar pollen grain under conditions as close as we could achieve with the equipment available. Images in Fig. 7 show a comparison of widefield, CLSM, spinning disk, and microdisplay-based structured illumination microscopes. The illumination pattern for SIM was a line grid with MAR = 1/16 and a line thickness of 272 nm in the sample plane (60 × /1.35 NA oil immersion objective). We used scaled subtraction, cf. Equation (5), for processing the SIM data. The widefield image was computed from the SIM data using Eq. (2).

Fig. 7 Comparison of different optically sectioning microscopes. The first row shows maximum intensity projections of the acquired images, the second row one optical section, and the third row the intensity profile along the indicated yellow lines. The acquisition parameters were as follows: CLSM − Leica SP5, 63 × /1.4 NA objective, 561 nm laser, XY pixel size 50 nm, pinhole 1 AU; Spinning disk − Andor Revolution, Olympus 60 × /1.4 NA objective, Andor Ixon Ultra EMCCD camera, 561 nm laser, XY pixel size 222 nm; SIM − LCOS-based structured illumination, Olympus 60 × /1.35 NA objective, Andor Clara CCD camera, 532 nm laser, XY pixel size 107.5 nm. The Z-increment in all cases was 500 nm. The scanning pattern for SIM was a line grid with MAR = 1/16 and line thickness 272 nm in the sample plane. The SIM images were reconstructed using the scaled subtraction method, cf. Equation (5). The widefield image was computed from the SIM data by averaging the raw images, cf. Equation (2). For comparison, the images were resampled such that they all have a pixel size of 107.5 nm.

We can observe from the intensity profiles that the SIM system outperforms both the CLSM and spinning disk microscopes in terms of rejection of out of plane fluorescence. Similar results have been observed before for spinning disk microscopes [26], which do not reject out of focus signals as well as the SIM or CLSM systems. However, as the SIM system has to acquire several images to reconstruct a single optically sectioned image, image acquisition is slower than in spinning disk microscopes, which integrate all the pinhole positions in a single camera exposure. This limits the usefulness of the present system for live cell imaging, where rapidly moving structures would result in unwanted artifacts in the reconstructed images. However, our system should be well suited to high resolution scanning of fixed specimens.

4. Discussion

The principle of optical sectioning by scaled subtraction of the out of focus light, cf. Equation (5), has been previously used in a different context in programmable array microscopes (PAMs) [13, 14, 17, 24, 25, 27] based on DMD or LCOS microdisplays (for a review see [28]). Similar to spinning disk confocal microscopy, in a PAM, optical sectioning is achieved by both producing the illumination pattern and descanning the fluorescence image using the same mask, in this case a microdisplay. The optically sectioned image created by the shifting scanning pattern is integrated by a CCD camera. As in our approach, PAMs also offer tunable optical sectioning ability. Though flexible and potentially very fast, PAMs suffer from reduced sensitivity compared to SIM with widefield detection, as descanning the image using the microdisplay results in both optical losses (the LCOS microdisplay used here, and in previous PAMs [13, 14], is about 60% reflective) and diffractive losses (i.e., diffraction of fluorescence signals into higher orders which might not be collected by the imaging lenses used to form the image on the CCD; this problem is more severe with DMD microdisplays than with LCOS devices).

We used the microdisplay to illuminate the sample by directly imaging the display onto the sample plane rather than forming a fringe pattern based on laser interference as is usually done in SIM to achieve resolution beyond the Abbe limit [11, 12, 29]. Because of this, any arbitrary pattern or binary image can be imaged onto the sample with very high fidelity. This is useful for applications such as fluorescence recovery after photobleaching (FRAP), where arbitrary shapes can be used for bleaching [13]. However, patterns with very high spatial frequencies (approaching the limit defined by the NA of the objective) suffer from poor contrast according to the contrast transfer function of the microscope [2], resulting in reconstructed images with low SNR. This is not the case in SIM with coherent illumination, as is usually used when enhancing lateral resolution.

Using the microdisplay in the image plane as an arbitrary binary mask also means that we utilize illumination light very inefficiently, but this is not a fundamental problem given a bright enough light source (i.e., a laser with adequate power).

5. Conclusion

Structured illumination microscopy is currently a rapidly expanding field, in which spatial light modulators (SLMs) are preferred for their flexibility and speed compared to physical gratings. Here we introduced an approach which determined the mapping of the illumination pattern created on the microdisplay to the camera image. This allowed us to apply scaled subtraction of the out of focus light for reconstruction of optically sectioned images. Together these methods achieved greatly improved optical sectioning performance compared to both CLSMs and spinning disk microscopes, but with reduced cost and improved flexibility. The system presented here also offers improved sensitivity compared to previous SLM-based systems such as PAM, which rely on descanning the fluorescence signal. We anticipate SIM will continue to grow, especially as applications are found in combination with other imaging modes, such as light sheet microscopy [30] and superresolution microscopy [11, 12, 29].

Appendix: Camera-microdisplay calibration

For camera-microdisplay calibration we need to determine numerical values of the projective matrix H in Eq. (6), as well as the distortion coefficients $\rho_k$ and the location of the principal point $\hat{u}_p$ in Eq. (7). To do this we have adapted a method from machine vision applications [19]. The goal is to find a set of corresponding points between the microdisplay and the camera image. One such correspondence is shown in Fig. 2.

To estimate the projective matrix H, we can rewrite Eq. (6) as:

$$ \begin{bmatrix} \alpha u \\ \alpha v \\ \alpha \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad (8) $$
which can be further expanded and rearranged into the form:
$$ \begin{bmatrix} x & y & 1 & 0 & 0 & 0 & -ux & -uy & -u \\ 0 & 0 & 0 & x & y & 1 & -vx & -vy & -v \end{bmatrix} \begin{bmatrix} h_{11} \\ h_{12} \\ \vdots \\ h_{33} \end{bmatrix} = 0. \qquad (9) $$
Here one pair of corresponding points generates two rows in the left-hand matrix as indicated. To generate a solution, at least four correspondences between the microdisplay and the camera image are required. Typically many more points are used to correct for uncertainty in the measurements. We solved the over-determined system of linear equations using the singular value decomposition (SVD) method.
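A sketch of this estimation for M corresponding point pairs, stored as hypothetical M × 2 arrays X (microdisplay) and U (camera); the rows of the design matrix follow the equation above, and the solution is the right singular vector associated with the smallest singular value.

```matlab
% Direct linear transform (DLT) estimate of the projective matrix H.
M = size(X, 1);
A = zeros(2 * M, 9);
for i = 1:M
    x = X(i, 1); y = X(i, 2); u = U(i, 1); v = U(i, 2);
    A(2*i - 1, :) = [x y 1 0 0 0 -u*x -u*y -u];
    A(2*i,     :) = [0 0 0 x y 1 -v*x -v*y -v];
end
[~, ~, V] = svd(A);
h = V(:, end);                  % right singular vector of the smallest singular value
H = reshape(h, 3, 3)';          % rows [h11 h12 h13; h21 h22 h23; h31 h32 h33]
```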

The computation of the projective matrix H has to be coupled with estimation of the parameters of the lens distortion model in Eq. (7). This is done by minimizing the sum of squared distances between points mapped by Eqs. (6) and (7) with respect to the unknown parameters $\rho_k$ and $\hat{u}_p$.

Appendix: Effect of Camera-microdisplay calibration in the case of sparse samples

To further illustrate the effect of the camera-microdisplay calibration, we imaged a very sparse sample (fluorescent beads, 200 nm diameter) and a more densely labeled sample. For the dense sample, we labeled actin in paraformaldehyde-fixed HepG2 hepatocyte cells using Atto-532-phalloidin (Atto-tec, Siegen, Germany). The data in Fig. 8 show widefield images, a maximum intensity projection of a single SIM pattern position, an overlay of the previous image with the calibrated SIM pattern, and a maximum intensity projection of the reconstructed, optically sectioned image determined by the scaled subtraction method, cf. Eq. (5). With a densely labeled sample, the SIM pattern can be seen in the camera image and its position could, in principle, be determined from the data. However, in the case of a sparse sample, there is no trace of the pattern in the camera images. It would therefore be very difficult, if not impossible, to determine the pattern position, and thus not feasible to use scaled subtraction for SIM reconstruction. Use of the calibration and mapping approach introduced in Sections 2.5 and 2.6 allows us to determine the position of any arbitrary SIM pattern in the camera image, even in the case of sparse samples.

Fig. 8 Comparison of sparse and dense samples in SIM. Top row: Atto-532 labeled actin in a HepG2 cell; MAR = 2/10. Bottom row: 200 nm diameter fluorescent beads; MAR = 1/5. In both cases we used a 532 nm laser for illumination and a 100 × /1.45 NA oil immersion objective. Shown are widefield images, a maximum intensity projection of a single SIM pattern position, an overlaid SIM pattern (green stripes) determined for the calibrated camera, and a maximum intensity projection of the reconstructed data obtained by scaled subtraction.

Acknowledgments

This work was supported by Grant Agency of the Czech Republic projects 304/09/1047, P205/12/P392, and P302/12/G157 and by the projects Prvouk/1LF/1 and UNCE 204022 from the Charles University.

References and links

1. R. Heintzmann, “Structured illumination methods,” in Handbook of Biological Confocal Microscopy, 3rd ed., J. B. Pawley, ed. (Springer Science + Business Media, 2006), pp. 265–279.

2. T. Wilson, “Optical sectioning in fluorescence microscopy,” J. Microsc. 242(2), 111–116 (2011).

3. M. A. A. Neil, R. Juškaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. 22(24), 1905–1907 (1997).

4. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000).

5. R. Heintzmann and C. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” Proc. SPIE 3568, 185–196 (1999).

6. F. Chasles, B. Dubertret, and A. C. Boccara, “Optimization and characterization of a structured illumination microscope,” Opt. Express 15(24), 16130–16140 (2007).

7. T. Fukano and A. Miyawaki, “Whole-field fluorescence microscope with digital micromirror device: imaging of biological samples,” Appl. Opt. 42(19), 4119–4124 (2003).

8. T. Fukano, A. Sawano, Y. Ohba, M. Matsuda, and A. Miyawaki, “Differential Ras activation between caveolae/raft and non-raft microdomains,” Cell Struct. Funct. 32(1), 9–15 (2007).

9. D. M. Rector, D. M. Ranken, and J. S. George, “High-performance confocal system for microscopic or endoscopic applications,” Methods 30(1), 16–27 (2003).

10. S. Monneret, M. Rauzi, and P. F. Lenne, “Highly flexible whole-field sectioning microscope with liquid-crystal light modulator,” J. Opt. A, Pure Appl. Opt. 8(7), S461–S466 (2006).

11. P. Kner, B. B. Chhun, E. R. Griffis, L. Winoto, and M. G. L. Gustafsson, “Super-resolution video microscopy of live cells by structured illumination,” Nat. Methods 6(5), 339–342 (2009).

12. L. Shao, P. Kner, E. H. Rego, and M. G. L. Gustafsson, “Super-resolution 3D microscopy of live whole cells using structured illumination,” Nat. Methods 8(12), 1044–1046 (2011).

13. G. M. Hagen, W. Caarls, K. A. Lidke, A. H. B. deVries, C. Fritsch, B. G. Barisas, D. J. Arndt-Jovin, and T. M. Jovin, “Fluorescence recovery after photobleaching and photoconversion in multiple arbitrary regions of interest using a programmable array microscope,” Microsc. Res. Tech. 72, 431–440 (2009).

14. G. M. Hagen, W. Caarls, M. Thomas, A. Hill, K. A. Lidke, B. Rieger, C. Fritsch, B. van Geest, T. M. Jovin, and D. J. Arndt-Jovin, “Biological applications of an LCoS-based programmable array microscope,” Proc. SPIE 6441, 64410S (2007).

15. R. Heintzmann and P. A. Benedetti, “High-resolution image reconstruction in fluorescence microscopy with patterned excitation,” Appl. Opt. 45(20), 5037–5045 (2006).

16. D. Armitage, I. Underwood, and S.-T. Wu, Introduction to Microdisplays (John Wiley and Sons, 2006), p. 377.

17. R. Heintzmann, Q. S. Hanley, D. Arndt-Jovin, and T. M. Jovin, “A dual path programmable array microscope (PAM): simultaneous acquisition of conjugate and non-conjugate images,” J. Microsc. 204(2), 119–135 (2001).

18. C. Ventalon and J. Mertz, “Dynamic speckle illumination microscopy with translated versus randomized speckle patterns,” Opt. Express 14(16), 7198–7209 (2006).

19. M. Šonka, V. Hlaváč, and R. Boyle, Image Processing, Analysis and Machine Vision, 2nd ed. (PWS Publishing, 1998), p. 770.

20. D. C. Brown, “Decentering distortion of lenses,” Photogramm. Eng. 32, 444–462 (1966).

21. J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992).

22. J. A. Noble, “Descriptions of image surfaces,” (University of Oxford, Oxford, 1989).

23. M. J. Cole, J. Siegel, S. E. D. Webb, R. Jones, K. Dowling, M. J. Dayel, D. Parsons-Karavassilis, P. M. W. French, M. J. Lever, L. O. D. Sucharov, M. A. A. Neil, R. Juskaitis, and T. Wilson, “Time-domain whole-field fluorescence lifetime imaging with optical sectioning,” J. Microsc. 203(3), 246–257 (2001).

24. Q. S. Hanley, P. J. Verveer, M. J. Gemkow, D. J. Arndt-Jovin, and T. M. Jovin, “An optical sectioning programmable array microscope implemented with a digital micromirror device,” J. Microsc. 196(3), 317–331 (1999).

25. P. J. Verveer, Q. S. Hanley, P. W. Verbeek, L. J. vanVliet, and T. M. Jovin, “Theory of confocal fluorescence imaging in the programmable array microscope (PAM),” J. Microsc. 189(3), 192–198 (1998).

26. R. Wolleschensky, B. Zimmermann, and M. Kempe, “High-speed confocal fluorescence imaging with a novel line scanning microscope,” J. Biomed. Opt. 11(6), 064011 (2006).

27. P. A. A. DeBeule, A. H. B. deVries, D. J. Arndt-Jovin, and T. M. Jovin, “Generation-3 programmable array microscope (PAM) with digital micro-mirror device (DMD),” Proc. SPIE 7932(1), 79320G (2011).

28. P. Křížek and G. M. Hagen, “Spatial light modulators in fluorescence microscopy,” in Microscopy: Science, Technology, Applications and Education, 4th ed., A. Méndez-Vilas and J. Díaz, eds. (Formatex, 2010), pp. 1366–1377.

29. L. M. Hirvonen, K. Wicker, O. Mandula, and R. Heintzmann, “Structured illumination microscopy of a living cell,” Eur. Biophys. J. 38(6), 807–812 (2009).

30. T. A. Planchon, L. Gao, D. E. Milkie, M. W. Davidson, J. A. Galbraith, C. G. Galbraith, and E. Betzig, “Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination,” Nat. Methods 8(5), 417–423 (2011).

References

  • View by:
  • |
  • |
  • |

  1. R. Heintzmann, “Structured illumination methods,” in Handbook of Biological Confocal Microscopy 3rd ed., J. B. Pawley, ed. (Springer Science + Business Media, 2006), pp. 265–279.
  2. T. Wilson, “Optical sectioning in fluorescence microscopy,” J. Microsc. 242(2), 111–116 (2011).
    [Crossref] [PubMed]
  3. M. A. A. Neil, R. Juškaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. 22(24), 1905–1907 (1997).
    [Crossref] [PubMed]
  4. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000).
    [Crossref] [PubMed]
  5. R. Heintzmann and C. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” Proc. SPIE 3568, 185–196 (1999).
    [Crossref]
  6. F. Chasles, B. Dubertret, and A. C. Boccara, “Optimization and characterization of a structured illumination microscope,” Opt. Express 15(24), 16130–16140 (2007).
    [Crossref] [PubMed]
  7. T. Fukano and A. Miyawaki, “Whole-field fluorescence microscope with digital micromirror device: imaging of biological samples,” Appl. Opt. 42(19), 4119–4124 (2003).
    [Crossref] [PubMed]
  8. T. Fukano, A. Sawano, Y. Ohba, M. Matsuda, and A. Miyawaki, “Differential Ras activation between caveolae/raft and non-raft microdomains,” Cell Struct. Funct. 32(1), 9–15 (2007).
    [Crossref] [PubMed]
  9. D. M. Rector, D. M. Ranken, and J. S. George, “High-performance confocal system for microscopic or endoscopic applications,” Methods 30(1), 16–27 (2003).
    [Crossref] [PubMed]
  10. S. Monneret, M. Rauzi, and P. F. Lenne, “Highly flexible whole-field sectioning microscope with liquid-crystal light modulator,” J. Opt. A, Pure Appl. Opt. 8(7), S461–S466 (2006).
    [Crossref]
  11. P. Kner, B. B. Chhun, E. R. Griffis, L. Winoto, and M. G. L. Gustafsson, “Super-resolution video microscopy of live cells by structured illumination,” Nat. Methods 6(5), 339–342 (2009).
    [Crossref] [PubMed]
  12. L. Shao, P. Kner, E. H. Rego, and M. G. L. Gustafsson, “Super-resolution 3D microscopy of live whole cells using structured illumination,” Nat. Methods 8(12), 1044–1046 (2011).
    [Crossref] [PubMed]
  13. G. M. Hagen, W. Caarls, K. A. Lidke, A. H. B. deVries, C. Fritsch, B. G. Barisas, D. J. Arndt-Jovin, and T. M. Jovin, “Fluorescence recovery after photobleaching and photoconversion in multiple arbitrary regions of interest using a programmable array microscope,” Microsc. Res. Tech. 72, 431–440 (2009).
    [Crossref] [PubMed]
  14. G. M. Hagen, W. Caarls, M. Thomas, A. Hill, K. A. Lidke, B. Rieger, C. Fritsch, B. van Geest, T. M. Jovin, and D. J. Arndt-Jovin, “Biological applications of an LCoS-based programmable array microscope,” Proc. SPIE 6441, 64410S (2007).
  15. R. Heintzmann and P. A. Benedetti, “High-resolution image reconstruction in fluorescence microscopy with patterned excitation,” Appl. Opt. 45(20), 5037–5045 (2006).
    [Crossref] [PubMed]
  16. D. Armitage, I. Underwood, and S.-T. Wu, Introduction to Microdisplays (John Wiley and Sons, 2006), p. 377.
  17. R. Heintzmann, Q. S. Hanley, D. Arndt-Jovin, and T. M. Jovin, “A dual path programmable array microscope (PAM): simultaneous acquisition of conjugate and non-conjugate images,” J. Microsc. 204(2), 119–135 (2001).
    [Crossref] [PubMed]
  18. C. Ventalon and J. Mertz, “Dynamic speckle illumination microscopy with translated versus randomized speckle patterns,” Opt. Express 14(16), 7198–7209 (2006).
    [Crossref] [PubMed]
  19. M. Šonka, V. Hlaváč, and R. Boyle, Image Processing Analysis and Machine Vision 2nd ed. (PWS Publishing, 1998), p. 770.
  20. D. C. Brown, “Decentering distortion of lenses,” Photogramm. Eng. 32, 444–462 (1966).
  21. J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992).
    [Crossref]
  22. J. A. Noble, “Descriptions of image surfaces,” (University of Oxford, Oxford, 1989).
  23. M. J. Cole, J. Siegel, S. E. D. Webb, R. Jones, K. Dowling, M. J. Dayel, D. Parsons-Karavassilis, P. M. W. French, M. J. Lever, L. O. D. Sucharov, M. A. A. Neil, R. Juskaitis, and T. Wilson, “Time-domain whole-field fluorescence lifetime imaging with optical sectioning,” J. Microsc. 203(3), 246–257 (2001).
    [Crossref] [PubMed]
  24. Q. S. Hanley, P. J. Verveer, M. J. Gemkow, D. J. Arndt-Jovin, and T. M. Jovin, “An optical sectioning programmable array microscope implemented with a digital micromirror device,” J. Microsc. 196(3), 317–331 (1999).
    [Crossref] [PubMed]
  25. P. J. Verveer, Q. S. Hanley, P. W. Verbeek, L. J. vanVliet, and T. M. Jovin, “Theory of confocal fluorescence imaging in the programmable array microscope (PAM),” J. Microsc. 189(3), 192–198 (1998).
    [Crossref]
  26. R. Wolleschensky, B. Zimmermann, and M. Kempe, “High-speed confocal fluorescence imaging with a novel line scanning microscope,” J. Biomed. Opt. 11(6), 064011 (2006).
    [Crossref] [PubMed]
  27. P. A. A. DeBeule, A. H. B. deVries, D. J. Arndt-Jovin, and T. M. Jovin, “Generation-3 programmable array microscope (PAM) with digital micro-mirror device (DMD),” Proc. SPIE 7932(1), 79320G (2011).
    [Crossref]
  28. P. Křížek and G. M. Hagen, “Spatial light modulators in fluorescence microscopy,” in Microscopy: Science, Technology, Applications and Education 4th ed., A. Méndez-Vilas and J. Díaz, eds. (Formatex, 2010), pp. 1366–1377.
  29. L. M. Hirvonen, K. Wicker, O. Mandula, and R. Heintzmann, “Structured illumination microscopy of a living cell,” Eur. Biophys. J. 38(6), 807–812 (2009).
    [Crossref] [PubMed]
  30. T. A. Planchon, L. Gao, D. E. Milkie, M. W. Davidson, J. A. Galbraith, C. G. Galbraith, and E. Betzig, “Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination,” Nat. Methods 8(5), 417–423 (2011).
    [Crossref] [PubMed]

2011 (4)

T. Wilson, “Optical sectioning in fluorescence microscopy,” J. Microsc. 242(2), 111–116 (2011).
[Crossref] [PubMed]

L. Shao, P. Kner, E. H. Rego, and M. G. L. Gustafsson, “Super-resolution 3D microscopy of live whole cells using structured illumination,” Nat. Methods 8(12), 1044–1046 (2011).
[Crossref] [PubMed]

P. A. A. DeBeule, A. H. B. deVries, D. J. Arndt-Jovin, and T. M. Jovin, “Generation-3 programmable array microscope (PAM) with digital micro-mirror device (DMD),” Proc. SPIE 7932(1), 79320G (2011).
[Crossref]

T. A. Planchon, L. Gao, D. E. Milkie, M. W. Davidson, J. A. Galbraith, C. G. Galbraith, and E. Betzig, “Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination,” Nat. Methods 8(5), 417–423 (2011).
[Crossref] [PubMed]

2009 (3)

L. M. Hirvonen, K. Wicker, O. Mandula, and R. Heintzmann, “Structured illumination microscopy of a living cell,” Eur. Biophys. J. 38(6), 807–812 (2009).
[Crossref] [PubMed]

G. M. Hagen, W. Caarls, K. A. Lidke, A. H. B. deVries, C. Fritsch, B. G. Barisas, D. J. Arndt-Jovin, and T. M. Jovin, “Fluorescence recovery after photobleaching and photoconversion in multiple arbitrary regions of interest using a programmable array microscope,” Microsc. Res. Tech. 72, 431–440 (2009).
[Crossref] [PubMed]

P. Kner, B. B. Chhun, E. R. Griffis, L. Winoto, and M. G. L. Gustafsson, “Super-resolution video microscopy of live cells by structured illumination,” Nat. Methods 6(5), 339–342 (2009).
[Crossref] [PubMed]

2007 (3)

G. M. Hagen, W. Caarls, M. Thomas, A. Hill, K. A. Lidke, B. Rieger, C. Fritsch, B. van Geest, T. M. Jovin, and D. J. Arndt-Jovin, “Biological applications of an LCoS-based programmable array microscope,” Proc. SPIE 6441, 64410S (2007).

F. Chasles, B. Dubertret, and A. C. Boccara, “Optimization and characterization of a structured illumination microscope,” Opt. Express 15(24), 16130–16140 (2007).
[Crossref] [PubMed]

T. Fukano, A. Sawano, Y. Ohba, M. Matsuda, and A. Miyawaki, “Differential Ras activation between caveolae/raft and non-raft microdomains,” Cell Struct. Funct. 32(1), 9–15 (2007).
[Crossref] [PubMed]

2006 (4)

R. Heintzmann and P. A. Benedetti, “High-resolution image reconstruction in fluorescence microscopy with patterned excitation,” Appl. Opt. 45(20), 5037–5045 (2006).
[Crossref] [PubMed]

S. Monneret, M. Rauzi, and P. F. Lenne, “Highly flexible whole-field sectioning microscope with liquid-crystal light modulator,” J. Opt. A, Pure Appl. Opt. 8(7), S461–S466 (2006).
[Crossref]

C. Ventalon and J. Mertz, “Dynamic speckle illumination microscopy with translated versus randomized speckle patterns,” Opt. Express 14(16), 7198–7209 (2006).
[Crossref] [PubMed]

R. Wolleschensky, B. Zimmermann, and M. Kempe, “High-speed confocal fluorescence imaging with a novel line scanning microscope,” J. Biomed. Opt. 11(6), 064011 (2006).
[Crossref] [PubMed]

2003 (2)

D. M. Rector, D. M. Ranken, and J. S. George, “High-performance confocal system for microscopic or endoscopic applications,” Methods 30(1), 16–27 (2003).
[Crossref] [PubMed]

T. Fukano and A. Miyawaki, “Whole-field fluorescence microscope with digital micromirror device: imaging of biological samples,” Appl. Opt. 42(19), 4119–4124 (2003).
[Crossref] [PubMed]

2001 (2)

R. Heintzmann, Q. S. Hanley, D. Arndt-Jovin, and T. M. Jovin, “A dual path programmable array microscope (PAM): simultaneous acquisition of conjugate and non-conjugate images,” J. Microsc. 204(2), 119–135 (2001).
[Crossref] [PubMed]

M. J. Cole, J. Siegel, S. E. D. Webb, R. Jones, K. Dowling, M. J. Dayel, D. Parsons-Karavassilis, P. M. W. French, M. J. Lever, L. O. D. Sucharov, M. A. A. Neil, R. Juskaitis, and T. Wilson, “Time-domain whole-field fluorescence lifetime imaging with optical sectioning,” J. Microsc. 203(3), 246–257 (2001).
[Crossref] [PubMed]

2000 (1)

M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000).
[Crossref] [PubMed]

1999 (2)

R. Heintzmann and C. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” Proc. SPIE 3568, 185–196 (1999).
[Crossref]

Q. S. Hanley, P. J. Verveer, M. J. Gemkow, D. J. Arndt-Jovin, and T. M. Jovin, “An optical sectioning programmable array microscope implemented with a digital micromirror device,” J. Microsc. 196(3), 317–331 (1999).
[Crossref] [PubMed]

1998 (1)

P. J. Verveer, Q. S. Hanley, P. W. Verbeek, L. J. vanVliet, and T. M. Jovin, “Theory of confocal fluorescence imaging in the programmable array microscope (PAM),” J. Microsc. 189(3), 192–198 (1998).
[Crossref]

1997 (1)

1992 (1)

J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992).
[Crossref]

1966 (1)

D. C. Brown, “Decentering distortion of lenses,” Photogramm. Eng. 32, 444–462 (1966).

Other (5)

M. Šonka, V. Hlaváč, and R. Boyle, Image Processing, Analysis, and Machine Vision, 2nd ed. (PWS Publishing, 1998), p. 770.

D. Armitage, I. Underwood, and S.-T. Wu, Introduction to Microdisplays (John Wiley and Sons, 2006), p. 377.

R. Heintzmann, “Structured illumination methods,” in Handbook of Biological Confocal Microscopy 3rd ed., J. B. Pawley, ed. (Springer Science + Business Media, 2006), pp. 265–279.

P. Křížek and G. M. Hagen, “Spatial light modulators in fluorescence microscopy,” in Microscopy: Science, Technology, Applications and Education 4th ed., A. Méndez-Vilas and J. Díaz, eds. (Formatex, 2010), pp. 1366–1377.

J. A. Noble, “Descriptions of image surfaces,” (University of Oxford, Oxford, 1989).


Figures (8)

Fig. 1 Structured illumination microscope: a) the microscope setup, b) principle of LCOS microdisplay operation.
Fig. 2 Principle of mapping the illumination pattern from the microdisplay to a camera sensor. The rotation of the microdisplay and barrel distortions in the camera image are greatly exaggerated for clarity. Point x on the microdisplay is detected at point û in the camera image. We wish to know the position of any arbitrary microdisplay point in the camera image. To do so, we must determine the projective matrix H, see Eq. (6), and the coefficients of the polynomial modeling the geometric distortions, see Eq. (7).
Fig. 3 Chessboard calibration image with four orientation markers defining the coordinate system: a) microdisplay image with known positions of corners and markers and b) the raw camera image (100 × /1.45 NA objective; camera is rotated with respect to the display) of the same area with detected corners and markers. Correspondences are indicated by numbers.
Fig. 4 Example of SIM data used to determine optical sectioning parameters of the system. The illumination pattern was a line grid with MAR = 1/7 (line spacing 981 nm) and line thickness of one microdisplay pixel (diffraction limited in the sample plane). Data were acquired using a 100 × /1.45 NA oil immersion objective, 532 nm laser light source, EMCCD camera, and a Z-increment of 50 nm. Shown are a) processed data and b) data after normalization with the fitted bimodal Gaussian functions.
Fig. 5 Tunable optical sectioning of the LCOS-based structured illumination microscope for the three examined data processing methods, cf. Eqs. (3)–(5). The top row shows measured FWHM vs. MAR and the bottom row measured offset vs. MAR. The illumination pattern was a line grid with MAR ∈ [0.05, 0.9] and line thickness of one to six microdisplay pixels (163 nm – 1.08 μm in the sample plane). Data were acquired using a 100 × /1.45 NA oil immersion objective, 532 nm laser, EMCCD camera, and a Z-increment of 50 nm. The horizontal dashed lines indicate the measured values for a CLSM with its pinhole set to 1 AU (FWHM = 966 nm, offset = 0.05) and for a spinning disk system (FWHM = 1.632 μm, offset = 0.11).
Fig. 6 Comparison of scanning patterns and processing methods. Images of a pollen grain were acquired using a 60 × /1.35 NA oil objective, 532 nm laser, EMCCD camera, and a Z-increment of 500 nm. The intensity profiles are along the vertical lines in the corresponding XY images.
Fig. 7 Comparison of different optically sectioning microscopes. The first row shows maximum intensity projections of the acquired images, the second row one optical section, and the third row the intensity profile along the indicated yellow lines. The acquisition parameters were as follows: CLSM − Leica SP5, 63 × /1.4 NA objective, 561 nm laser, XY pixel size 50 nm, pinhole 1 AU; Spinning disk − Andor Revolution, Olympus 60 × /1.4 NA objective, Andor Ixon Ultra EMCCD camera, 561 nm laser, XY pixel size 222 nm; SIM − LCOS-based structured illumination, Olympus 60 × /1.35 NA objective, Andor Clara CCD camera, 532 nm laser, XY pixel size 107.5 nm. The Z-increment in all cases was 500 nm. The scanning pattern for SIM was a line grid with MAR = 1/16 and line thickness 272 nm in the sample plane. The SIM images were reconstructed using the scaled subtraction method, cf. Eq. (5). The widefield image was computed from the SIM data by averaging the raw images, cf. Eq. (2). For comparison, the images were resampled such that they all have a pixel size of 107.5 nm.
Fig. 8 Comparison of sparse and dense samples in SIM. Top row: Atto-532 labeled actin in a HepG2 cell; MAR = 2/10. Bottom row: 200 nm diameter fluorescent beads; MAR = 1/5. In both cases we used a 532 nm laser for illumination and a 100 × /1.45 NA oil immersion objective. Shown are widefield images, a maximum intensity projection of a single SIM pattern position, an overlaid SIM pattern (green stripes) determined for the calibrated camera, and a maximum intensity projection of the reconstructed data obtained by scaled subtraction.

Equations (10)

I_c = \left[ (I_1 - I_2)^2 + (I_1 - I_3)^2 + (I_2 - I_3)^2 \right]^{1/2},
I_{\mathrm{WF}}(x,y) = \frac{1}{N} \sum_{n=1}^{N} I_n(x,y).
I_{C1}(x,y) = \max_{n=1,\dots,N} I_n(x,y) - \min_{n=1,\dots,N} I_n(x,y),
I_{C2}(x,y) = \left| \sum_{n=1}^{N} I_n(x,y)\, \exp\!\left( 2\pi i \frac{n}{N} \right) \right|.
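For readers who want to apply these processing formulas directly, the following Python/NumPy sketch computes the widefield average, cf. Eq. (2), the maximum-minus-minimum reconstruction, cf. Eq. (3), and the homodyne (complex-sum) reconstruction, cf. Eq. (4), from a stack of N raw images. It is only an illustration, not the authors' software; the names sim_reconstructions and stack are our own.

import numpy as np

def sim_reconstructions(stack):
    # stack: ndarray of shape (N, Y, X) holding the N raw widefield images,
    # one per illumination pattern position.
    N = stack.shape[0]
    # Widefield estimate: average over all pattern positions.
    i_wf = stack.mean(axis=0)
    # Sectioned estimate 1: per-pixel maximum minus minimum.
    i_c1 = stack.max(axis=0) - stack.min(axis=0)
    # Sectioned estimate 2: magnitude of the complex (homodyne) sum over
    # equally spaced pattern phases n = 1..N.
    phases = np.exp(2j * np.pi * np.arange(1, N + 1) / N)
    i_c2 = np.abs(np.tensordot(phases, stack, axes=(0, 0)))
    return i_wf, i_c1, i_c2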
I_{C3}(x,y) = \beta \left[ \frac{\sum_{n=1}^{N} I_n(x,y)\, \mathrm{Mask}_n^{\mathrm{on}}(x,y)}{\sum_{n=1}^{N} \mathrm{Mask}_n^{\mathrm{on}}(x,y)} - \frac{\sum_{n=1}^{N} I_n(x,y)\, \mathrm{Mask}_n^{\mathrm{off}}(x,y)}{\sum_{n=1}^{N} \mathrm{Mask}_n^{\mathrm{off}}(x,y)} \right],
\beta = \frac{N \sum_{n=1}^{N} \mathrm{Mask}_n^{\mathrm{on}}(x,y) - \left( \sum_{n=1}^{N} \mathrm{Mask}_n^{\mathrm{on}}(x,y) \right)^2}{N \sum_{n=1}^{N} \left( \mathrm{Mask}_n^{\mathrm{on}}(x,y) \right)^2 - \left( \sum_{n=1}^{N} \mathrm{Mask}_n^{\mathrm{on}}(x,y) \right)^2},
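The scaled subtraction of Eq. (5) can be sketched in the same way, assuming the per-image illumination masks Mask_n^on have already been mapped onto the camera grid by the calibration. This is an illustration under that assumption; the names scaled_subtraction and masks_on are ours, and beta follows the formula as reconstructed above (for strictly binary masks it reduces to 1).

import numpy as np

def scaled_subtraction(stack, masks_on, eps=1e-12):
    # stack:    ndarray (N, Y, X), raw SIM images.
    # masks_on: ndarray (N, Y, X), illumination pattern of each image mapped
    #           onto the camera grid (1 where a pixel was illuminated, else 0).
    masks_off = 1.0 - masks_on
    # Conjugate ("on") and non-conjugate ("off") images, each normalized by
    # how often a camera pixel was illuminated or dark over the sequence.
    i_on = (stack * masks_on).sum(axis=0) / (masks_on.sum(axis=0) + eps)
    i_off = (stack * masks_off).sum(axis=0) / (masks_off.sum(axis=0) + eps)
    # Per-pixel scaling factor beta, following the formula reconstructed above;
    # for strictly binary masks the numerator equals the denominator (beta = 1).
    N = stack.shape[0]
    s1 = masks_on.sum(axis=0)
    s2 = (masks_on ** 2).sum(axis=0)
    beta = (N * s1 - s1 ** 2) / (N * s2 - s1 ** 2 + eps)
    return beta * (i_on - i_off)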
\alpha \tilde{u} = H \tilde{x},
u = \hat{u} + (\hat{u} - \hat{u}_p) \sum_{k=1}^{K} \rho_k r^{2k},
\begin{bmatrix} \alpha u \\ \alpha v \\ \alpha \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix},
\begin{bmatrix} x & y & 1 & 0 & 0 & 0 & -ux & -uy & -u \\ 0 & 0 & 0 & x & y & 1 & -vx & -vy & -v \end{bmatrix} \begin{bmatrix} h_{11} \\ h_{12} \\ \vdots \\ h_{33} \end{bmatrix} = 0 .
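To make the calibration model concrete, the sketch below (ours, not the authors' code) estimates the projective matrix H, cf. Eq. (6), from microdisplay–camera point correspondences using the direct linear transform system given above, and then maps a microdisplay point into the camera image including the radial polynomial term, cf. Eq. (7). The function names estimate_homography and display_to_camera, and the direction in which the distortion term is applied, are illustrative assumptions.

import numpy as np

def estimate_homography(display_pts, camera_pts):
    # display_pts, camera_pts: ndarrays of shape (M, 2) with M >= 4
    # corresponding points (x, y) on the microdisplay and (u, v) on the camera.
    rows = []
    for (x, y), (u, v) in zip(display_pts, camera_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # The homography vector h = (h11, ..., h33) is the right singular vector
    # of A associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)

def display_to_camera(point, H, rho, center):
    # point:  (x, y) on the microdisplay.
    # H:      3x3 projective matrix from estimate_homography.
    # rho:    radial distortion coefficients rho_1..rho_K.
    # center: distortion center u_p in camera coordinates.
    x, y = point
    uh = H @ np.array([x, y, 1.0])
    u_hat = uh[:2] / uh[2]              # perspective division: projected position
    r = np.linalg.norm(u_hat - np.asarray(center))
    poly = sum(c * r ** (2 * (k + 1)) for k, c in enumerate(rho))
    # Radial polynomial shift away from the distortion center, mirroring
    # u = u_hat + (u_hat - u_p) * sum_k rho_k r^{2k}.
    return u_hat + (u_hat - np.asarray(center)) * poly

In practice, normalizing the point coordinates before the singular value decomposition (standard for the direct linear transform) improves numerical conditioning; the correspondences themselves come from the detected chessboard corners and orientation markers shown in Fig. 3.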
