We introduce and demonstrate a new high performance image reconstruction method for super-resolution structured illumination microscopy based on maximum a posteriori probability estimation (MAP-SIM). Imaging performance is demonstrated on a variety of fluorescent samples of different thickness, labeling density and noise levels. The method provides good suppression of out of focus light, improves spatial resolution, and allows reconstruction of both 2D and 3D images of cells even in the case of weak signals. The method can be used to process both optical sectioning and super-resolution structured illumination microscopy data to create high quality super-resolution images.
© 2014 Optical Society of America
Recently, new methods have been developed which circumvent the diffraction limit of optical microscopes. These include stimulated emission depletion microscopy [1] (STED), photoactivated localization microscopy [2,3] (PALM, FPALM), stochastic optical reconstruction microscopy [4] (STORM), super-resolution optical fluctuation imaging [5–7] (SOFI), and super-resolution structured illumination microscopy (SR-SIM) [8–10]. SR-SIM offers high photon efficiency, potentially high imaging rates, relatively low hardware requirements, and compatibility with most dyes and fluorescent proteins, making it an attractive method for a broad range of studies in cell biology.
SR-SIM uses illumination patterns with high spatial frequency (close to the resolution limit of the microscope) to illuminate the sample. High frequency information contained in the sample is encoded, through aliasing, into the acquired images. By acquiring multiple images with illumination patterns of different phases and orientations, aliased components can be separated and a high-resolution image reconstructed [8,9]. Two-dimensional SR-SIM enables a twofold resolution improvement in the lateral dimension [9,11,12], but does not provide optical sectioning. If a three-dimensional illumination pattern is used, resolution can also be improved in the axial direction [13,14].
Structured illumination microscopy has also been used for optical sectioning, but without lateral resolution enhancement (OS-SIM) [15]. Optically sectioned images can be calculated by taking the root mean square of the differences of the acquired images (square-law method) [15], or by a form of homodyne detection. Several other methods are also possible [16]. When combined with optimized illumination patterns, OS-SIM can achieve an axial resolution of ~300 nm [17,18]. This is about two to three times better than is achievable in confocal laser scanning microscopy (CLSM) and is comparable to the axial resolution reported in 3D SR-SIM [13].
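The square-law reconstruction mentioned above can be sketched in a few lines. This assumes the common three-image scheme with pattern phases 0, 2π/3 and 4π/3 (a numpy-based illustration, not the authors' implementation):

```python
import numpy as np

def square_law(i1, i2, i3):
    """Optically sectioned image from three phase-shifted raw images:
    the root mean square of their pairwise differences (Neil et al.)."""
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
```

For a sinusoidally modulated in-focus signal, the pairwise differences cancel any unmodulated (out of focus) background, leaving a sectioned image whose value is proportional to the local modulation amplitude.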
Recently, new concepts in structured illumination have appeared, such as combining OS-SIM and SR-SIM by weighting Fourier space image components [19], or the use of random speckle patterns to illuminate the sample (blind-SIM) [20]. Orieux et al. suggested a framework for SR-SIM based on Bayesian estimation for 2D image reconstruction [21], and we previously showed that Bayesian estimation methods have several advantages over the square-law method and can achieve a performance comparable to SR-SIM methods [22].
Here we propose an image reconstruction method for SIM which provides resolution improvement in all three dimensions using two-dimensional illumination patterns. Our method, maximum a posteriori probability SIM (MAP-SIM) is based on combining, via spectral merging in the frequency domain, maximum a posteriori probability estimation (for resolution improvement) and homodyne detection (for optical sectioning). We used a microscope setup in which the illumination pattern is generated by a spatial light modulator (SLM) together with incoherent illumination. MAP-SIM does not require precise knowledge of the point spread function (PSF) which must be carefully measured in most SR-SIM approaches. Additionally, MAP-SIM does not require precise knowledge of the pattern positions in each acquired image.
2.1 Maximum a posteriori probability estimation
An image acquired by the microscope can be modeled as a convolution of an ideal image of a real sample with the point spread function (PSF) of the microscope. The additive noise is composed of different noise sources (e.g., photon noise, readout noise) and can be modeled as Gaussian noise with zero mean [23,24]. Image acquisition in structured illumination microscopy can then be described as

$$y_k = H(m_k \cdot x) + n_k, \qquad k = 1, \dots, K, \tag{1}$$

where $y_k$ denotes the $k$-th acquired low resolution (LR) image, $x$ the unknown high resolution (HR) image, $m_k$ the $k$-th illumination pattern, $H$ convolution with the PSF, and $n_k$ the noise.
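As an illustration of this acquisition model, the following sketch simulates SIM frames by multiplying a scene with each illumination pattern, blurring via the OTF, and adding Gaussian noise. The function name and the requirement that the PSF array be pre-centered and the same size as the image are assumptions of this sketch:

```python
import numpy as np

def acquire(hr_image, patterns, psf, noise_sigma=0.01, seed=0):
    """Simulate SIM acquisition: y_k = H(m_k * x) + n_k.
    psf must have the same shape as hr_image and be centered."""
    rng = np.random.default_rng(seed)
    # convolution with the PSF is a multiplication with the OTF in Fourier space
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    frames = []
    for m in patterns:
        blurred = np.real(np.fft.ifft2(np.fft.fft2(m * hr_image) * otf))
        frames.append(blurred + rng.normal(0.0, noise_sigma, hr_image.shape))
    return np.asarray(frames)
```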
The reconstruction of the HR image can be performed using a Bayesian approach [21,25,26]. The maximum a posteriori estimator $\hat{x}$ is given by maximizing the probability of the HR image $x$ given the observed LR images,

$$\hat{x} = \arg\max_x \, p(x \mid y_1, \dots, y_K). \tag{2}$$
Applying Bayes' theorem to the conditional probability in Eq. (2) and taking the logarithm, we obtain

$$\hat{x} = \arg\max_x \left[ \ln p(y_1, \dots, y_K \mid x) + \ln p(x) \right]. \tag{3}$$
Because the LR images are independent measurements, we can write

$$p(y_1, \dots, y_K \mid x) = \prod_{k=1}^{K} p(y_k \mid x). \tag{4}$$
Because of the presence of noise, the inversion of Eq. (1) is an ill-posed problem and some form of regularization is needed to ensure uniqueness of the solution. The regularization term in Eq. (3), provided by the prior density function $p(x)$, reflects prior knowledge about the HR image, such as a positivity constraint and image smoothness. Several kinds of priors and regularization techniques have been proposed within the Bayesian framework [25–28]. To impose a smoothness condition and to ensure that the cost function is simple to minimize, we have adopted a term composed of finite difference approximations of the first-order derivatives at each pixel location. For Gaussian noise, maximizing Eq. (3) is then equivalent to minimizing the cost function

$$C(x) = \sum_{k=1}^{K} \left\| y_k - H(m_k \cdot x) \right\|^2 + \lambda \left( \left\| D_h x \right\|^2 + \left\| D_v x \right\|^2 \right), \tag{7}$$

where $D_h$ and $D_v$ denote horizontal and vertical first-order finite difference operators.
The cost function in Eq. (7) consists of two terms. The first term describes the mean square error between the estimated HR image and the observed LR images. The second term is the regularization term. Its contribution is controlled by the parameter $\lambda$, a small positive constant proportional to the noise variance, which defines the strength of the regularization. Equation (7) is minimized by gradient descent optimization, and the estimate of the unknown image at the $(n+1)$-th iteration is obtained as

$$x^{(n+1)} = x^{(n)} - \alpha_n \nabla C\left(x^{(n)}\right), \tag{8}$$

where $\alpha_n$ is the iteration step size.
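The gradient descent minimization of a cost of this form can be sketched as follows. The Laplacian-based gradient of the smoothness penalty, the fixed step size, the clipping used as a positivity constraint, and the parameter values are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def map_sim_gd(frames, patterns, otf, lam=0.01, alpha=0.1, n_iter=10):
    """Fixed-step gradient descent on a MAP-style SIM cost function."""
    x = frames.sum(axis=0)  # initial guess: sum of frames (widefield-like)
    H = lambda u: np.real(np.fft.ifft2(np.fft.fft2(u) * otf))
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for y, m in zip(frames, patterns):
            r = y - H(m * x)          # residual of the acquisition model
            grad -= 2.0 * m * H(r)    # data term (symmetric PSF assumed)
        # gradient of the first-derivative penalty = -2*lam*Laplacian(x)
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
        grad -= 2.0 * lam * lap
        x = np.clip(x - alpha * grad, 0.0, None)  # positivity prior
    return x
```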
2.2 OTF modeling
The spatial frequency at which the optical transfer function (OTF) reaches zero determines the achievable resolution of a microscope, see Fig. 1(a). We model the PSF as an Airy disk [29], which in Fourier space leads to the OTF

$$\mathrm{OTF}(\rho) = \frac{2}{\pi}\left(\arccos\rho - \rho\sqrt{1-\rho^{2}}\right), \qquad 0 \le \rho \le 1,$$

where $\rho$ is the spatial frequency normalized to the cutoff frequency, see Fig. 1(b).
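The incoherent OTF corresponding to an Airy-disk PSF has a standard closed form (see Goodman [29]); a minimal sketch, in which the NA and wavelength defaults are arbitrary examples:

```python
import numpy as np

def airy_otf(f, na=1.4, wavelength=0.52):
    """Diffraction-limited incoherent OTF for a circular aperture.
    f: spatial frequency in cycles/um; cutoff f_c = 2*NA/lambda."""
    fc = 2.0 * na / wavelength
    rho = np.clip(np.abs(f) / fc, 0.0, 1.0)  # normalized frequency, zero beyond cutoff
    return (2.0 / np.pi) * (np.arccos(rho) - rho * np.sqrt(1.0 - rho ** 2))
```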
2.3 Spectral merging
MAP estimation of a high resolution image obtained with structured illumination microscopy enables reconstruction of images (HR-MAP) with details unresolvable in a widefield microscope. However, MAP estimation as described here does not suppress out of focus light. On the other hand, the homodyne detection method [15] provides images (LR-HOM) with optical sectioning but without resolution improvement. Noting that the unwanted out of focus light is present predominantly at low spatial frequencies, we merge the LR-HOM and HR-MAP images in the frequency domain, see Fig. 1(c), to obtain the final HR image (MAP-SIM). Low pass filtering is applied to the LR-HOM image and a complementary high pass filter is applied to the HR-MAP image. O’Holleran and Shaw [19] used Gaussian weights with empirically adjusted standard deviations for weighting frequency components obtained by SR-SIM. We verified that Gaussian functions are well suited for our case, and we applied a weighting scheme based on a linear combination of both merged components, Eq. (11), chosen to preserve the total signal power. The parameters in Eq. (11) are described in more detail in Section 3.4.
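A minimal sketch of such spectral merging with complementary Gaussian weights; the weighting coefficient, the standard deviation, and the frequency normalization are illustrative choices, not the parameters of Eq. (11):

```python
import numpy as np

def spectral_merge(lr_hom, hr_map, sigma=0.15, beta=0.85):
    """Merge an optically sectioned image (kept at low frequencies) with a
    MAP estimate (kept at high frequencies) via Gaussian weights.
    sigma: weight std. dev. in normalized frequency; beta: balance toward
    the high-resolution component. Both values are illustrative."""
    fy = np.fft.fftfreq(lr_hom.shape[0])
    fx = np.fft.fftfreq(lr_hom.shape[1])
    rho = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2) / 0.5  # ~[0, 1] per axis
    g = np.exp(-rho ** 2 / (2.0 * sigma ** 2))                # low-pass weight
    merged = ((1.0 - beta) * g * np.fft.fft2(lr_hom) +
              beta * (1.0 - g) * np.fft.fft2(hr_map))
    return np.real(np.fft.ifft2(merged))
```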
3.1 Microscope setup and acquisition
Our setup is based on an IX71 microscope equipped with UPLSAPO 100 × /1.40 NA and 60 × /1.35 NA oil immersion objectives (Olympus, Hamburg, Germany), see Fig. 2. We used a NEO sCMOS camera (pixel size 6.5 μm). Focus was adjusted using a piezo-Z stage (resolution 1.5 nm, NanoScan-Z, Prior, Cambridge, UK). The desired illumination patterns were produced by a high-speed ferroelectric liquid crystal on silicon (LCOS) microdisplay (SXGA-3DM, Forth Dimension Displays, Dalgety Bay, Scotland; 1280 × 1024 pixels, 13.62 µm pixel pitch). Similar LCOS microdisplays have been used previously in SIM [14,18] and in other fast optical sectioning systems such as programmable array microscopy (PAM) [30]. The display was illuminated by a home-built, three-channel LED system based on high power LEDs (PT-54, Luminus Devices, Sunnyvale, California) with emission maxima at 460 nm, 525 nm, and 623 nm. The output of each LED was filtered with a band pass filter (450‒490 nm, 505‒555 nm, and 633‒653 nm, respectively; Chroma, Bellows Falls, Vermont), and the three wavelengths were combined with appropriate dichroic mirrors. The light was then vertically polarized with a linear polarizer (Edmund Optics, Barrington, NJ). We imaged the microdisplay into the microscope using a 180 mm focal length tube lens (U-TLU, Olympus) and a polarizing beam splitter cube (Newport, Irvine, California). When using a 100 × objective, single microdisplay pixels are imaged into the sample with a nominal size of 136.2 nm, and thus appear as diffraction-limited spots. Sample fluorescence was isolated with a dual band filter set appropriate for Cy3 and Cy5, or a single band set for GFP (Chroma).
3.2. Illumination patterns
Most strategies in structured illumination microscopy assume that the set of illumination patterns required for image reconstruction consists of equal movements of the same pattern such that the sum of all of the patterns results in homogeneous illumination. In our experiments, the illumination patterns created by the microdisplay consisted of a regular grid of lines. The lines were one microdisplay pixel thick (diffraction limited in the sample when using a 100 × objective) with a gap of several “off” pixels in between. The line grid was shifted by one pixel between each image acquisition to obtain a new illumination mask, see Fig. 2(b). Changing the spacing between the “on” pixels allows one to vary the spatial frequency of the pattern in the sample, which influences the signal-to-noise ratio of the result, the imaging depth, and the optical sectioning ability. This spacing can be adjusted experimentally based on the sample; for example, when imaging deep into the sample, a lower pattern frequency may be required. In most experiments, we used pattern sequences with several orientations of the line grid pattern (0°, 90°, 45° and 135°) in order to achieve isotropic resolution improvement. Note that due to the square shape of the microdisplay pixels, more patterns are required in the diagonal directions to cover the whole image equally while keeping the spatial frequency of the pattern approximately the same.
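Generating such shifted line-grid masks is straightforward; a sketch with the geometry simplified to axis-aligned lines (function name and parameters are illustrative):

```python
import numpy as np

def line_grid(shape, period, shift, horizontal=True):
    """Binary line-grid illumination mask: lines one pixel thick,
    period-1 'off' pixels between them, shifted by 'shift' pixels."""
    n = shape[0] if horizontal else shape[1]
    line = ((np.arange(n) - shift) % period == 0)
    if horizontal:
        mask = np.tile(line[:, None], (1, shape[1]))
    else:
        mask = np.tile(line[None, :], (shape[0], 1))
    return mask.astype(float)

# Shifting by one pixel 'period' times covers every line position,
# so the summed illumination of the sequence is homogeneous.
masks = [line_grid((8, 8), period=4, shift=s) for s in range(4)]
```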
HepG2 cells expressing the labeled histone H4-dendra2 [31] were maintained in DMEM supplemented with 10% FCS, 100 U/ml penicillin, and 100 U/ml streptomycin (all from Invitrogen, Carlsbad, CA, USA) at 37 °C, 5% CO2, and 100% humidity. Mowiol containing 1,4-diazabicyclo[2.2.2]octane (DABCO) was from Fluka (St. Louis, Missouri). Cells were grown on high precision #1.5 coverslips (Zeiss, Jena, Germany). Before imaging, cells were first washed with PBS, then fixed with 2% paraformaldehyde for 15 minutes at 4 °C. For imaging of actin, we permeabilized fixed cells with 0.1% Triton X-100 for 15 minutes at 4 °C, then labeled the cells with 2 nM Atto565-phalloidin (Atto-Tec, Siegen, Germany) for 30 minutes at room temperature. We then mounted the coverslips in Mowiol and sealed them onto clean slides with clear nail polish.
To illustrate the versatility of MAP-SIM, we imaged Drosophila salivary gland chromosomes (type 30-9066, Carolina Biological, Burlington, North Carolina), and pollen grains (type 30-4264, Carolina Biological). We also imaged mitochondria in bovine pulmonary artery endothelial (BPAE) cells labeled with MitoTracker Red CMXRos (FluoCells prepared slide #1, Invitrogen). The PSF of the microscope was measured using 100 nm tetraspeck beads (Invitrogen).
3.4 Image processing
The input data were first normalized to the range [0, 1] according to their bit depth. To obtain an HR-MAP image, we minimized Eq. (7) using a gradient descent algorithm. The speed of convergence is strongly influenced by the iteration step size. With a fixed step size, the algorithm converges in approximately 14 iterations, see Fig. 3(a). To speed up convergence, we used the Barzilai-Borwein method [32], a variation of standard gradient descent in which the step size is adapted in every iteration based on the changes of the image estimate and of the gradient of the cost function between consecutive iterations. This accelerates the convergence rate substantially. A good initial guess of the HR-MAP image is also important for fast convergence. For this initial guess we used the sum of the acquired SIM images, which corresponds to a widefield image. Regularization of the problem in Eq. (7) is controlled by the small positive constant λ, which can be adjusted according to the noise conditions: if the noise increases, λ should also increase. A single small value of λ worked well for all tested samples. Setting the regularization parameter to a small value prevents oversmoothing of the image and potential loss of high frequency information. We found that a simple fixed stopping criterion was sufficient; with these settings, good results are obtained in about four iterations, see Figs. 3(a)-3(b). The frequency spectrum of the estimated image was then apodized with a cosine bell function and transformed back into real space to obtain the HR-MAP image. Note that the convolution step in Eq. (7) was performed in the frequency domain to achieve fast execution and memory efficiency.
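The Barzilai-Borwein step size is computed from two consecutive iterates and their gradients; a sketch of the standard BB1 formula applied to a toy quadratic (all values illustrative):

```python
import numpy as np

def bb_step(x_prev, x_curr, g_prev, g_curr, fallback=0.1):
    """Barzilai-Borwein (BB1) step size: alpha = (s.s)/(s.d), where
    s = change in the iterate and d = change in the gradient."""
    s = (x_curr - x_prev).ravel()
    d = (g_curr - g_prev).ravel()
    denom = s @ d
    return (s @ s) / denom if denom > 0 else fallback

# Example: minimize f(x) = 0.5 * x^T A x, whose gradient is g = A x.
A = np.diag([1.0, 10.0])
x_old, x = np.array([1.0, 1.0]), np.array([0.9, 0.0])
g_old, g = A @ x_old, A @ x
alpha = bb_step(x_old, x, g_old, g)
x_new = x - alpha * g
```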
The optically sectioned LR-HOM image and the super-resolution HR-MAP image are merged in the frequency domain by combining their Fourier spectra using Eq. (11), see the flowchart in Fig. 4. The balance between the LR-HOM and HR-MAP images is controlled by a weighting coefficient. To maximally exploit image details, it is preferable to put more emphasis on the HR-MAP image. We experimentally determined a value of this coefficient that provides good results across the range of samples we imaged. The standard deviation of the Gaussian weighting function is expressed in terms of the normalized spatial frequency, see Fig. 1(c).
4.1 Spatial resolution measurements
Spatial resolution was determined by averaging measurements from fifty individual 100 nm fluorescent beads. We used a 100 × /1.40 NA oil immersion objective and 460 nm LED excitation (emission 500–550 nm). Fourteen images were acquired at each z-plane (pattern orientation: 0°, 90°; number of shifts: 3; pattern period imaged in the sample: 409 nm; and orientation: 45°, 135°; shifts: 4; period: 385 nm). A region of interest (ROI) around every bead position (19 × 19 pixels) was extracted from both the widefield and MAP-SIM images. In order to align the position of the beads in each ROI, we registered the ROIs with sub-pixel accuracy using standard normalized cross-correlation methods. Intensity values were then fit with a Gaussian function and the full width at half maximum (FWHM) was determined in the axial and lateral directions. Figure 5 shows the resulting averaged FWHM values and PSF cross-sections.
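The FWHM measurement from a bead profile can be sketched as a Gaussian least-squares fit; this scipy-based sketch, with its initial guesses and the pixel size in the usage example, is illustrative rather than the authors' exact procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def fwhm(profile, pixel_size):
    """FWHM (in the units of pixel_size) of a 1D intensity profile,
    obtained from a least-squares Gaussian fit."""
    x = np.arange(profile.size, dtype=float)
    gauss = lambda x, a, mu, sig, off: a * np.exp(-(x - mu) ** 2 / (2 * sig ** 2)) + off
    p0 = [profile.max() - profile.min(), float(np.argmax(profile)), 2.0, profile.min()]
    popt, _ = curve_fit(gauss, x, profile, p0=p0)
    # FWHM of a Gaussian = 2*sqrt(2*ln 2) * sigma
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2]) * pixel_size
```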
4.2 2D MAP-SIM
To demonstrate the lateral resolution improvement of MAP-SIM in a thin sample, we imaged a Drosophila salivary gland chromosome preparation. The images were acquired using 623 nm LED illumination and a 100 × /1.40 NA oil immersion objective. Forty-eight images were acquired at each z-plane (orientation: 0°, 90°; shifts: 10; period: 1.36 μm; and orientation: 45°, 135°; shifts: 14; period: 1.35 μm). The chromosome sample is quite thin (~1.5 µm), producing little out of focus light. Figure 6 demonstrates how MAP-SIM performs when compared to widefield and square-law methods in terms of contrast and lateral resolution. Plotting the intensity profile across the widefield, square-law and MAP-SIM images revealed many more fine details in MAP-SIM, see Fig. 6(g). We also plotted the normalized power spectral density vs. reduced spatial frequency, see Fig. 6(h). The reduced spatial frequency was normalized in the interval [0, 1] according to the maximum spatial frequency in the MAP-SIM image.
4.3 3D MAP-SIM
To demonstrate the optical sectioning characteristics of MAP-SIM, we imaged a relatively thick biological sample, a fluorescent pollen grain about 50 μm thick, see Fig. 7. The images were acquired using a 60 × /1.35 NA oil immersion objective. In this case a SIM pattern with a single orientation was used (orientation: 0°; number of shifts: 10; period: 2.27 μm). Ninety planes along the z-axis were scanned with a spacing of 500 nm. Lateral and axial cross sections of the pollen grain image in Fig. 7(c) reveal that MAP-SIM provides increased lateral and axial resolution compared to the widefield image. We also imaged Atto-532 phalloidin labeled actin in a HepG2 cell using the same illumination patterns as for the pollen grain sample, see Fig. 8. Depth color coding was applied to the image using the isolum color map. Maximum intensity projections of the color coded 3D MAP-SIM images are shown in Figs. 8(a)-8(c).
In our microscope set-up, the illumination pattern contrast is lower at high spatial frequencies compared to SR-SIM using coherent illumination. Despite this, we compared MAP-SIM and SR-SIM processing methods on images of HepG2 cells expressing the labeled histone H4-dendra2. In this sample we also labeled actin using Atto-532 phalloidin. The images were acquired using a 460 nm LED (H4-dendra2) and a 525 nm LED (Atto-532 phalloidin) with a 100 × /1.40 NA oil immersion objective. Twenty-four images were acquired at each z-plane (orientation: 0°, 90°; shifts: 5; period: 681 nm; and orientation: 45°, 135°; shifts: 7; period: 674 nm). Figures 9(a)-9(c) show a maximum intensity projection of 23 z-planes. Figures 9(f)-9(g) show results for a single optical section. To process the data using SR-SIM methods, we followed the approach of Gustafsson et al. [13]. We located peaks in the Fourier spectrum using a spatial calibration method derived from our previous work [22].
4.4 SNR analysis and acquisition times
When using typical SR-SIM processing methods, high noise levels in the raw data can lead to inaccuracies when determining the shifts of the spectral components and thereby degrade the final super-resolution image. Thomas et al. examined the performance of SIM image reconstruction methods at low signal levels [34]. They showed an image reconstruction for a sample in which the SNR of an equivalent widefield image was estimated as 12 dB.
We evaluated the performance of MAP-SIM under various noise conditions. Using a 100 × /1.40 NA oil immersion objective, images of MitoTracker-labeled mitochondria in BPAE cells were acquired with 525 nm LED excitation and acquisition times (for one SIM pattern position) ranging from 10 ms to 400 ms. Under these conditions the SNR of equivalent widefield images ranged from 2.7 dB to 21.3 dB. We found that MAP-SIM reconstruction was successful down to an SNR of about 5.9 dB. The SNR was measured in widefield images based on manually selected regions in areas containing signal or background, respectively; the results are shown in Fig. 10.
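One common way to estimate SNR in dB from manually selected regions is the ratio of the background-subtracted mean signal to the background standard deviation; whether the paper used this exact convention (20·log10 of an amplitude ratio) is an assumption of this sketch:

```python
import numpy as np

def snr_db(image, signal_mask, background_mask):
    """SNR in dB from user-selected signal and background regions:
    mean signal above background divided by background noise std."""
    sig = image[signal_mask].mean() - image[background_mask].mean()
    noise = image[background_mask].std()
    return 20.0 * np.log10(sig / noise)
```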
There are several advantages to the use of an incoherent illumination approach such as the one presented here. One is that we do not require a pupil plane mask to block the unwanted diffraction orders that are generated when using coherent illumination based on two-beam or three-beam laser interference. Such masks can be tricky to implement because different wavelength lasers are focused to different locations in the (reconstructed) pupil plane. This then requires numerous holes in the mask which must be precisely positioned. On the other hand, the contrast of our patterns is lower when compared to coherent interference patterns and we achieved a slightly lower resolution improvement than that typically reported in SR-SIM with coherent illumination.
The LCOS microdisplay used here can be configured with a variety of timing schemes which are supplied with the device. With the timing program that we used, the microdisplay can display an illumination pattern and switch to the next pattern in the sequence in 1.14 ms. Given a bright enough light source, fast enough camera, and appropriate sample, acquisition of raw SIM images at rates exceeding 800 Hz would therefore be possible. However, specifying the fastest possible acquisition rate, as is sometimes reported in SIM, is rather meaningless without consideration of the illumination power density, microscope objective, nature of the sample labeling, and other factors. The SNR analysis shown in Fig. 10 thus reflects an attempt to determine the relevant parameters based on measured quantities.
The MATLAB implementation of the MAP-SIM algorithm has not yet been optimized for speed. The reconstruction of the 765 × 735 pixel image shown in Fig. 6, employing 48 patterned illumination images, took about 15 seconds on a conventional PC (Intel Core i7, 2.1 GHz, 8 GB RAM). We attribute the fast processing speed to the frequency domain convolution applied when solving Eq. (7), and to the Barzilai-Borwein method, which ensures fast convergence. Processing each 2D plane separately also reduces the required CPU time and lends itself to parallel processing of individual planes, which would significantly speed up reconstruction of 3D samples.
We introduced a fast and efficient MAP-SIM algorithm, which is suitable for processing data acquired by both optical sectioning and super-resolution structured illumination microscopy. The proposed algorithm creates high quality super-resolution images. The measured resolution was (144 ± 7) nm in the lateral direction and (299 ± 50) nm axially. The reconstruction of super-resolution images was successful even in the presence of high noise levels, where the SNR of the corresponding widefield images was about 5.9 dB. Image acquisition and data processing are both very fast, revealing an interesting potential for live cell imaging. The microscope setup uses a relatively inexpensive microdisplay with no moving parts together with low cost LED illumination and is a simple add-on to conventional widefield fluorescence microscopes. MAP-SIM processing should also prove useful for other illumination strategies such as TIRF-SIM or emerging combinations of SIM and light sheet microscopy.
T.L. thanks Prof. Theo Lasser for his kind help and valuable advice. G.H. thanks Lubomír Kováčik for useful discussions. This work was supported by the Czech Science Foundation (P102/10/1320, P302/12/G157, 14-15272P, P205/12/P392), by Charles University in Prague (PRVOUK P27/LF1/1 and UNCE 204022), by OPPK CZ.2.16/3.1.00/24010 and OPVK CZ.1.07/2.3.00/30.0030, by COST CZ LD12018, by Czech Technical University in Prague (SGS14/148/OHK3/2T/13), and by the Biotechnology and Biomedicine Center of the Academy of Sciences and Charles University in Vestec. T.L. acknowledges a SCIEX scholarship (project code 13.183).
References and links
1. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19(11), 780–782 (1994). [CrossRef] [PubMed]
2. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef] [PubMed]
5. T. Dertinger, R. Colyer, G. Iyer, S. Weiss, and J. Enderlein, “Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI),” Proc. Natl. Acad. Sci. U.S.A. 106(52), 22287–22292 (2009). [CrossRef] [PubMed]
7. S. Geissbuehler, N. L. Bocchio, C. Dellagiacoma, C. Berclaz, M. Leutenegger, and T. Lasser, “Mapping molecular statistics with balanced super-resolution optical fluctuation imaging (bSOFI),” Opt. Nanoscopy 1(1), 4 (2012). [CrossRef]
8. R. Heintzmann and C. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” Proc. SPIE 3568, 185–196 (1999). [CrossRef]
10. M. G. L. Gustafsson, “Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution,” Proc. Natl. Acad. Sci. U.S.A. 102(37), 13081–13086 (2005). [CrossRef] [PubMed]
11. P. Kner, B. B. Chhun, E. R. Griffis, L. Winoto, and M. G. L. Gustafsson, “Super-resolution video microscopy of live cells by structured illumination,” Nat. Methods 6(5), 339–342 (2009). [CrossRef] [PubMed]
13. M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94(12), 4957–4970 (2008). [CrossRef] [PubMed]
14. L. Shao, P. Kner, E. H. Rego, and M. G. L. Gustafsson, “Super-resolution 3D microscopy of live whole cells using structured illumination,” Nat. Methods 8(12), 1044–1046 (2011). [CrossRef] [PubMed]
15. M. A. A. Neil, R. Juškaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. 22(24), 1905–1907 (1997). [CrossRef] [PubMed]
16. R. Heintzmann, “Structured illumination methods,” in Handbook of Biological Confocal Microscopy, J. B. Pawley, ed., 3rd ed. (Springer, 2006), pp. 265–279.
19. K. O’Holleran and M. Shaw, “Optimized approaches for optical sectioning and resolution enhancement in 2D structured illumination microscopy,” Biomed. Opt. Express 5(8), 2580–2590 (2014). [CrossRef] [PubMed]
20. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. Le Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photonics 6(5), 312–315 (2012). [CrossRef]
21. F. Orieux, E. Sepulveda, V. Loriette, B. Dubertret, and J.-C. Olivo-Marin, “Bayesian estimation for optimized structured illumination microscopy,” IEEE Trans. Image Process. 21(2), 601–614 (2012). [CrossRef] [PubMed]
22. T. Lukeš, G. M. Hagen, P. Křížek, Z. Švindrych, K. Fliegel, and M. Klíma, “Comparison of image reconstruction methods for structured illumination microscopy,” Proc. SPIE 9129, 91293J (2014).
23. G. M. P. Van Kempen, L. J. Van Vliet, P. J. Verveer, and H. T. M. Van Der Voort, “A quantitative comparison of image restoration methods for confocal microscopy,” J. Microsc. 185(3), 354–365 (1997). [CrossRef]
24. P. Sarder and A. Nehorai, “Deconvolution methods for 3-D fluorescence microscopy images,” IEEE Signal Process. Mag. 23(3), 32–45 (2006). [CrossRef]
25. P. J. Verveer and T. M. Jovin, “Efficient superresolution restoration algorithms using maximum a posteriori estimations with application to fluorescence microscopy,” J. Opt. Soc. Am. A 14(8), 1696–1706 (1997). [CrossRef]
26. P. J. Verveer, M. J. Gemkow, and T. M. Jovin, “A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy,” J. Microsc. 193(1), 50–61 (1999). [CrossRef] [PubMed]
27. P. Milanfar, ed., Super-Resolution Imaging (CRC Press, 2011), p. 490.
28. S. Chaudhuri, Super-Resolution Imaging (Kluwer Academic Publishers, 2000), p. 279.
29. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).
30. G. M. Hagen, W. Caarls, K. A. Lidke, A. H. B. De Vries, C. Fritsch, B. G. Barisas, D. J. Arndt-Jovin, and T. M. Jovin, “Fluorescence recovery after photobleaching and photoconversion in multiple arbitrary regions of interest using a programmable array microscope,” Microsc. Res. Tech. 72(6), 431–440 (2009). [CrossRef] [PubMed]
31. Z. Cvačková, M. Mašata, D. Stanĕk, H. Fidlerová, and I. Raška, “Chromatin position in human HepG2 cells: although being non-random, significantly changed in daughter cells,” J. Struct. Biol. 165(2), 107–117 (2009). [CrossRef] [PubMed]
32. J. Barzilai and J. M. Borwein, “Two-point step size gradient methods,” IMA J. Numer. Anal. 8(1), 141–148 (1988). [CrossRef]
34. B. Thomas, M. Momany, and P. Kner, “Optical sectioning structured illumination microscopy with enhanced sensitivity,” J. Opt. 15(9), 094004 (2013). [CrossRef]