Computational imaging, or post-processing of images, allows the resolution limit of optical microscopy to be exceeded. Here we present an image-inversion approach to improve the resolution of a confocal laser scanning microscope (CLSM). The method combines full-wave modeling of the CLSM with an inverse reconstruction algorithm. The inverse reconstruction is cast as an optimization problem, in which the distribution of refractive index of the objects that yields the best match between computed and experimental images is identified. The reconstructed image is a quantitative image based on the distribution of refractive index. Experimental results demonstrate a 35 nm edge-to-edge resolution using a two-disk pattern with 105 nm disk diameter. The proposed computational imaging approach greatly improves the resolution at moderate computational cost, without the need to change the existing microscopy system.
© 2016 Optical Society of America
Since the early seventeenth century, optical microscopy has been extensively used in industrial and scientific research [1,2]. Compared to scanning probe microscopy (e.g., scanning tunneling microscopy and atomic force microscopy), optical microscopy provides a non-invasive and real-time imaging technique. However, its resolution is limited by diffraction to about half the wavelength of the illuminating light. To break this resolution limit and achieve superresolution, various approaches have been developed over the past several decades. One of the most successful is fluorescence microscopy [3–5]. However, this technique has two inherent drawbacks. First, temporal and spatial scanning is indispensable, making it unsuitable for dynamic real-time imaging. Second, the use of special dyes hinders its application under non-fluorescence conditions such as semiconductor wafer inspection or nanoscale structure exploration. The first drawback can be remedied by wide-field structured illumination microscopy, which projects a grating or fringe pattern onto the object structures and thus enhances resolution through the interference of the illumination patterns with the object structures. For semiconductor devices, solid immersion microscopy takes advantage of the high refractive index of silicon (up to 3.5) to achieve superresolution [7,8]. A solid immersion microscope is modeled as a three-step process, i.e., focusing of the incident light, interaction of the focal field with object structures, and imaging of the scattered light [7,9]. Based on the predictions of this optical model, a 100 nm half-pitch feature resolution (for 1064 nm wavelength) has been obtained for failure analysis of integrated circuits using an optimized pinhole size in infrared light. Nevertheless, the material of the solid immersion lens limits its use mainly to subsurface imaging of integrated circuits.
All of the studies discussed above are far-field imaging techniques, which either reduce the size of the point spread function of the microscopy system or increase the bandwidth of the imaging system. Fluorescence microscopy and solid immersion microscopy belong to the former; structured illumination microscopy belongs to the latter [3,8,11].
In contrast to far-field imaging, near-field imaging enhances resolution more markedly by collecting evanescent waves. The typical example is near-field scanning optical microscopy, whose resolution is determined by the size of the tip rather than by the diffraction limit. However, it shares the time inefficiency of scanning probe microscopy. Efforts have also been made to surpass the diffraction barrier using specially designed devices such as superlenses [13,14] and hyperlenses. The authors claim that these designed lenses can transform evanescent waves into propagating waves, so that superresolution may be achieved. However, such designs require either sophisticated nanofabrication processes or special imaging configurations. Near-field high resolution with nanoscale spherical lenses, self-assembled by bottom-up integration of organic molecules, has also been reported. A nano-solid immersion lens is found to produce a 25% smaller focal spot than a macroscopic solid immersion lens. Compared to nano-solid immersion lenses, microspheres are easier to use and cost less. A remarkable record of 50 nm resolution was reported, for the first time to our knowledge, through an optically transparent microsphere with white light illumination. Following this pioneering work, a 25 nm lateral resolution has been demonstrated by introducing microspheres into the light path of a confocal laser scanning microscope (CLSM).
While most researchers enhance resolution by changing experimental designs, algorithm-based approaches, i.e., post-processing of the obtained images, provide an alternative way to further improve resolution. Among these approaches, computational superresolution, the process of obtaining one or more high-resolution images from one or more low-resolution observations, has been widely used in many practical applications such as facial image analysis and medical image processing [22,23]. Although reconstruction-based superresolution algorithms treat superresolution as an inverse problem, they mainly employ signal-processing operations such as warping, blurring, and down-sampling, in which the physical principles of wave propagation are not considered [20,23]. Thus, it is difficult for such computational superresolution algorithms to achieve high resolution in optical microscope applications, owing to the presence of diffraction in the imaging system.
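The observation model that such reconstruction-based algorithms invert can be sketched in a few lines. The Python snippet below (NumPy; the kernel, decimation factor, and edge handling are illustrative assumptions, and warping and noise are omitted) models a low-resolution frame as a blurred, down-sampled version of the high-resolution scene, a purely signal-processing description in which no wave physics enters:

```python
import numpy as np

def sr_observation(hr, blur_kernel, factor):
    """Classical super-resolution observation model: a low-resolution
    frame is a blurred, down-sampled high-resolution scene (warping and
    noise omitted for brevity). No diffraction physics is modeled."""
    k = blur_kernel.shape[0]
    pad = k // 2
    padded = np.pad(hr, pad, mode='edge')  # replicate edges at the border
    blurred = np.zeros_like(hr)
    for i in range(hr.shape[0]):
        for j in range(hr.shape[1]):
            # correlate the kernel with the local neighborhood
            blurred[i, j] = np.sum(padded[i:i + k, j:j + k] * blur_kernel)
    return blurred[::factor, ::factor]  # decimate to the low-res grid
```

Inverting this model recovers at best what the blur and decimation discarded; it cannot undo diffraction, which is the point made above.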
Another method that enhances the resolution of a CLSM has been used in fluorescence imaging; it processes images recorded by a camera, instead of a pinhole detector, as the sample is scanned relative to a focused laser spot [24–26]. Benefiting from pixel-reassignment techniques, it delivers resolution close to the theoretical value of a CLSM with increased signal-to-noise ratio [27,28]. Similar post-processing approaches include structured illumination imaging, subtractive imaging, virtually structured detection, a multi-frame pixel superresolution approach, and Fourier ptychographic microscopy. However, the aforementioned methods apply simple post-processing strategies to the recorded images alone, without considering the complete three-step optical system, and thus the resolution is only slightly improved.
Compared to these simple post-processing approaches, computational imaging strategies that solve an inverse problem within the framework of a physical model of the microscope are more promising because of their capability for resolution enhancement [34–40]. Among these methods, deconvolution imaging has been successfully used in fluorescence microscopy, including wide-field and scanning types, to compensate for the blurring caused by out-of-focus contributions to the recorded intensity at each pixel of the image. Several deconvolution methods have been proposed to reduce the degradation of the microscope—for example, the Richardson–Lucy iterative algorithm [36,41], which computes the maximum likelihood estimate adapted to Poisson statistics. In addition, another computational strategy that treats data reconstruction as an inverse problem has been proposed by Bertero et al. [38–40]. It is shown that the resolution of a CLSM can be significantly improved by recovering the object profile from the image information corresponding to each scanning position [42,43]. The reason a data-inversion approach can significantly enhance the resolution of optical microscopy is that it can recover the real target by eliminating the diffraction effect from the recorded images. However, that analysis is based on the paraxial approximation and the point spread function, which cannot rigorously model the interaction of light with the object structure and thus cannot be used for reconstructing real samples [39,42]. Further, the inverse process is ill-posed and sensitive to noise, so the reconstruction is numerically complicated and time-consuming. It is therefore advantageous to develop a rigorous data-inversion model of optical microscopy to enhance the resolution for complex object structures.
In this paper, an image-inversion approach for further improving the resolution of a CLSM is presented, for the first time to our knowledge, by combining full-wave modeling of the CLSM with inverse reconstruction. The main contributions of this paper are summarized as follows.
- • First, a rigorous and complete model of an optical microscope, consisting of three subsystems (focusing of incident light, interaction of the focal field with object structures, and imaging of the scattered light), is proposed using a numerical method, i.e., the finite element–boundary integral (FE-BI) method. This model is described as a forward problem: intensity distributions in the image plane can be computed once the characteristics of the object structures and the important parameters of the optical system, such as numerical aperture and polarization, are known. Images of the object structures predicted by the proposed model are also confirmed with the experimental CLSM setup. We highlight that this complete model matches experimental results much more closely than a simpler linear model in some circumstances—for example, for object structures with high contrast. This is because the simpler linear model does not take multiple scattering into account, even though multiple scattering is strong in such cases.
- • Second, based on the proposed forward model, an image-reconstruction approach for complex object structures is developed using a conjugate gradient method. The inverse problem is cast as an optimization problem, in which the distribution of refractive index of the object structures that yields the best match between computed and recorded images is identified. Such post-processing not only enhances the resolution but also provides quantitative information about the object structures, such as values of the refractive index. This is possible because the image recorded by an optical microscope is not a replica of the object structures; rather, it is a distribution of light intensity at the recording plane. It is important to highlight that the proposed image-reconstruction approach is fundamentally different from the aforementioned post-processing approaches in that multiple scattering by the object structures is taken into account in the full-wave modeling of the optical microscope, providing a more accurate model than the simpler linear one. We should note that errors in the measured data are already critical to the inverse problem, and any additional error in the forward model introduces further errors and worsens the solution.
- • Third, the proposed image-reconstruction approach is demonstrated using experimental images of several complex object structures. It is noted that the resolution and image quality are significantly improved compared to the original CLSM images. Compared to the popular deconvolution algorithm, the proposed method gives much better images with enhanced resolution. Specifically, we can clearly distinguish a 35 nm edge-to-edge distance from the reconstructed image of two-disk patterns with 105 nm disk diameter, although the original CLSM image of this pattern displays an elongated spot.
2. FORWARD MODEL OF OPTICAL MICROSCOPY
Figure 1 illustrates the schematic of a rigorous model of an optical microscope, which consists of three subsystems: focusing of incident light, interaction of the focal field with the object structures, and imaging of the scattered light. Vector diffraction theory is used to analyze subsystems I and III, as in the optical model of [7,46]. Linking subsystems I and III, subsystem II (interaction of the focal field with the object structures) is treated using the FE-BI method.
Compared to the previous optical model, the most important advantage of this numerical method is that it can handle not only the homogeneous-background case, where object structures are embedded in a homogeneous medium, but also the more common samples where object structures are placed on a substrate. For such a case, the computation domain shrinks to the interior of the integral boundary surface. We should note that the inhomogeneous-background Green's function at each discrete point in the computational domain is not directly available; we need to obtain the Green's function at all discrete points located on the integral boundary surface. Specifically, a library for the inhomogeneous background, which consists of substrate and air, is generated using the Lumerical FDTD software in this paper. One type of Green's function maps a dipole source on the boundary surface to the electric field on the boundary surface, and another type maps a dipole source on the boundary surface to the electric field in the far-field region.
In the optical model, laser light of wavelength 405 nm is focused by the objective lens, and the focal field (see Supplement 1, Eq. S1) illuminates the object structures attached to the substrate; together these are referred to as the sample. The interaction of the focal field with the sample is treated as an electromagnetic scattering problem in an inhomogeneous background, where the refractive indices of both the background medium and the sample are spatially varying. As shown in Fig. 1, the closed boundary (red line in the inset figure) of the domain divides the problem into an interior and an exterior region. The fields in the interior region satisfy the vector wave equation (Supplement 1, Eq. S5), and the fields on the surface satisfy the electric field integral equation (Supplement 1, Eq. S6). Coupling these two equations and performing a series of derivations (see Supplement 1, Eqs. S3–S15), we obtain the state equations (Supplement 1, Eq. S16), whose operators are related to dyadic Green's functions that map a dipole source on the surface to the electric field on the Gaussian reference sphere of the objective lens. The scattered light from the object structures is refracted by the objective lens and then focused by the tube lens. The electric field in the focal region of the tube lens can be computed using the angular spectrum representation. When a scanning system is considered, the object structures are assumed to be scanned relative to the optical system, and the total intensity of the light passing through a finite-sized detector pinhole is collected at the detector. For each pixel, the detected signal (Supplement 1, Eq. S33) is the summation of the intensity over the pinhole. For a given object structure of known refractive index distribution, the resulting simulated intensity distribution is stored as a matrix. For a detailed derivation of the three subsystems of the optical model, please see Supplement 1, Section I-A, forward model of the microscopy system.
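As a concrete illustration of the pinhole detection step, the per-pixel signal is the intensity of the detector-plane field summed over the pinhole aperture. A minimal Python sketch follows (the paper's implementation is in MATLAB; the field array, pixel pitch, and circular-aperture geometry here are illustrative assumptions):

```python
import numpy as np

def pinhole_signal(E_img, dx, pinhole_radius):
    """Sum |E|^2 over a circular pinhole centered on the optical axis.

    E_img: 2-D complex field sampled at the detector plane (hypothetical
    input); dx: pixel pitch; pinhole_radius: in the same units as dx.
    """
    ny, nx = E_img.shape
    y = (np.arange(ny) - ny // 2) * dx
    x = (np.arange(nx) - nx // 2) * dx
    X, Y = np.meshgrid(x, y)
    mask = X**2 + Y**2 <= pinhole_radius**2  # pixels inside the pinhole
    return np.sum(np.abs(E_img[mask])**2)
```

Evaluating this for every scan position of the object structures builds up the simulated CLSM image pixel by pixel.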
For the implementation of the forward model, a modularized program for each subsystem is written in MATLAB. The use of the FE-BI numerical method allows the proposed forward model to deal with three-dimensional object structures as long as the object structure is transparent to the illumination light. We also highlight that the forward model can handle a three-dimensional patch that is very thin (thickness much smaller than the wavelength of the illumination light) and invariant in the vertical direction. In the scanning system, each pixel scan is independent, so the intensities for all pixels can be computed simultaneously, which saves an enormous amount of computation time when the forward model is evaluated on a distributed computing system.
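Because each scan position is independent, the per-pixel forward solves parallelize trivially. A Python sketch of this scheduling idea (the per-pixel solver below is a dummy stand-in for the actual FE-BI forward computation, which is far more expensive):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def forward_pixel(scan_pos):
    """Placeholder for one forward solve at a single scan position.
    In the real model this would run the FE-BI computation and return
    the detected pinhole intensity; here a dummy Gaussian is used."""
    x, y = scan_pos
    return np.exp(-(x**2 + y**2))

def scan_image(positions, workers=4):
    # Scan positions are mutually independent, so they can be
    # dispatched to parallel workers (or to a distributed cluster).
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(forward_pixel, positions))
```

In practice each worker would be a node of a distributed computing system, as described above.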
3. EXPERIMENTAL DEMONSTRATION OF THE PROPOSED OPTICAL MODEL
A CLSM experimental setup is shown in Fig. 2(a). Ted Pella Inc.'s MetroChip Microscope Calibration Target is used as the sample to validate the optical model; it has patterns of etched polycrystalline silicon over a thin oxide on a silicon substrate. The smallest feature in this target is a 240 nm pitch grating with equal lines and spaces. A circularly polarized laser beam of 405 nm wavelength is focused by an objective lens. The focused light illuminates the object structures, and the scattered field is recorded by a photomultiplier tube (PMT) with a pinhole of 20 μm diameter. The choice of the 20 μm pinhole is a tradeoff between resolution and signal-to-noise ratio of the recorded experimental images. The recorded CLSM images are expressed as an intensity matrix.
In the experimental demonstrations, gratings with equal lines and spaces are used. For the 400 nm pitch gratings in Fig. 2(b), we clearly identify lines and spaces in both the simulated and experimental images, which is also reflected in a cross-section comparison along the dashed lines. The simulation result is in good agreement with the experimental result, indicating that the proposed optical model accurately predicts the imaging capability of a CLSM. The smallest gratings we can resolve using this CLSM with the objective lens are the 240 nm pitch gratings shown in Fig. 2(c). Both simulated and experimental images show a decreased image contrast compared to the 400 nm pitch gratings. From these comparisons, the correctness and accuracy of the proposed three-subsystem model are demonstrated; the model can thus be used to compute image-intensity distributions for given object structures.
4. IMAGE-RECONSTRUCTION ALGORITHM
Based on the proposed forward model, an image-reconstruction approach for complex object structures is developed using the conjugate gradient method; this is described as an inverse problem. Inverse reconstruction estimates the unknown refractive index distribution by minimizing the discrepancy between the computed and experimental images, i.e., by solving an unconstrained optimization problem, Eq. (6).
Another challenge for such a nonlinear optimization problem is that the inversion process is ill-posed. To alleviate this difficulty, prior knowledge is brought into the cost function. In this paper, a cost function with a regularization term is defined [47,48]. Different regularization terms correspond to different characteristics of the object structures. For example, Tikhonov regularization can suppress noise amplification and yields a smooth result, which is suitable for object structures with a smooth refractive index distribution. In this paper, a total variation regularization term is used [34]. The main advantages of total variation regularization are that it preserves edges in the image and smooths out homogeneous areas. This choice is therefore suited to the fact that the refractive indices of the object structures are expected to be constant over specific regions, i.e., piecewise constant. A small constant in Eq. (7) makes the objective function differentiable at zero gradient, and a small value is taken in the computation. In our numerical simulations, we choose the initial guess for the optimization to be the homogeneous refractive index distribution of the substrate background, and the algorithm converges to the global minimum with high probability. The proposed algorithm can thus be considered reliable for practical purposes.
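A smoothed total-variation penalty of this kind can be written compactly. The Python sketch below (NumPy; the forward-difference scheme, boundary handling, and value of the small constant are illustrative assumptions, and `forward` is an arbitrary stand-in for the full-wave forward model) evaluates the regularization term and a correspondingly regularized cost:

```python
import numpy as np

def smoothed_tv(n, eps=1e-6):
    """Smoothed total variation: sum of sqrt(|grad n|^2 + eps^2).
    The small constant eps keeps the term differentiable where the
    gradient vanishes (the role of the small constant in Eq. (7))."""
    dx = np.diff(n, axis=1, append=n[:, -1:])  # forward differences with
    dy = np.diff(n, axis=0, append=n[-1:, :])  # replicated boundaries
    return np.sum(np.sqrt(dx**2 + dy**2 + eps**2))

def regularized_cost(n, I_exp, forward, lam):
    """Fidelity term plus TV regularization; `forward` maps a refractive
    index map to a simulated image (here any callable of that shape)."""
    return np.sum((forward(n) - I_exp)**2) + lam * smoothed_tv(n)
```

Note that a piecewise-constant refractive index map incurs a TV penalty only along its edges, which is exactly the prior the paper exploits.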
Based on the proposed forward model and the cost function, the image reconstruction (an unconstrained optimization problem) is solved by the Polak–Ribière conjugate gradient method (PR-CGM). Details on the PR-CGM can be found in Supplement 1, Section II—Conjugate Gradient Optimization Method. A schematic flow chart of the proposed iterative reconstruction algorithm is shown in Fig. 3; it is implemented in MATLAB. For each iteration of the algorithm, the object structure is scanned relative to the optical system and the image intensity distribution in the recording plane is computed using the forward optical model. For each scanning position of the object structures, we need to solve a linear system whose square matrix has a size determined by the total number of edge elements of the computation domain in the FE-BI method. The conjugate gradient method can be used to solve this system, so the computational complexity of the proposed algorithm is determined by the complexity of the CGM and the size of the object structures. Detailed computation times for specific cases are given in the following section.
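The PR-CGM update itself is short. In the Python sketch below, a fixed small step stands in for the line search used in practice, and `grad` stands in for the gradient obtained from the adjoint of the full-wave model; both are simplifying assumptions:

```python
import numpy as np

def pr_conjugate_gradient(grad, x0, step=1e-2, iters=200):
    """Polak-Ribiere conjugate gradient descent sketch.

    grad: callable returning the gradient of the cost at x.
    The PR+ rule (beta clipped at zero) restarts with steepest
    descent whenever the Polak-Ribiere beta turns negative."""
    x = x0.copy()
    g = grad(x)
    d = -g                      # initial search direction
    for _ in range(iters):
        x = x + step * d
        g_new = grad(x)
        gg = g.dot(g)
        if gg < 1e-30:          # gradient vanished: converged
            break
        beta = max(0.0, g_new.dot(g_new - g) / gg)  # PR+ formula
        d = -g_new + beta * d   # conjugate direction update
        g = g_new
    return x
```

On the quadratic fidelity-plus-TV cost above, `grad` would be assembled from the forward-model Jacobian and the TV gradient at each iteration.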
5. INVERSE RECONSTRUCTION USING EXPERIMENTAL IMAGES
Experimental images are employed for inverse reconstruction in this section. All the optimizations are implemented in MATLAB and performed on a workstation with an Intel(R) Xeon(R) E5-2680 2.70 GHz CPU and 256 GB of RAM. The proposed model can deal with three-dimensional object structures. Here, our sample is a three-dimensional patch, which is very thin (on the order of 10 nm). Since the patch is invariant in the vertical direction, the refractive index is modeled as a two-dimensional parameter in the transverse plane. The samples are fabricated by focused ion beam milling (FEI Helios Nanolab 600 FIB/SEM). The ion etching is performed at a voltage of 30 kV and a current of 28 pA with a liquid-metal gallium ion source. The sample consists of a 30 nm thick chrome film coated on a fused silica substrate. Four different object structures, i.e., two-disk, two-square, four-square, and four-disk patterns, with several pitch sizes are fabricated on the substrate. All scanning electron microscope (SEM) images are taken with a Hitachi S-3400N. For the CLSM images, the pinhole diameter used is 20 μm, a tradeoff between the signal-to-noise ratio and the resolution of the recorded images. Each CLSM image is calibrated by the first-degree polynomial in Eq. (5), and the two calibration factors are given in Supplement 1, Table S1.
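A first-degree-polynomial calibration of this kind can be sketched as a least-squares fit of a gain and offset that map the recorded image onto the simulated intensity scale. In the Python sketch below, the variable names `a` and `b` and the least-squares criterion are illustrative assumptions about Eq. (5):

```python
import numpy as np

def calibrate_linear(I_rec, I_sim):
    """Fit the first-degree polynomial I_cal = a*I_rec + b that best
    maps a recorded CLSM image onto the simulated intensity scale
    (ordinary least squares over all pixels)."""
    A = np.column_stack([I_rec.ravel(), np.ones(I_rec.size)])
    (a, b), *_ = np.linalg.lstsq(A, I_sim.ravel(), rcond=None)
    return a * I_rec + b, a, b
```

The fitted pair (a, b) plays the role of the two calibration factors tabulated in Supplement 1, Table S1.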
To demonstrate the capability of the proposed reconstruction approach, a popular deconvolution algorithm, the Richardson–Lucy algorithm, is also used to reconstruct the images for comparison. The Richardson–Lucy algorithm reconstructs images from the recorded CLSM images, given the point spread function of the imaging system, by maximizing the likelihood (normally under a Poisson distribution) with respect to the reconstructed images. Here, the point spread function can be computed for the optical system with the objective lens. The linear Fourier model is then given by the convolution of the point spread function with the object structures.
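For reference, a minimal Richardson–Lucy iteration looks as follows (Python/NumPy sketch using FFT-based convolution with periodic boundaries; the initialization and fixed iteration count are illustrative choices):

```python
import numpy as np
from numpy.fft import fft2, ifft2

def richardson_lucy(image, psf, iters=50):
    """Richardson-Lucy deconvolution sketch.

    Iteratively sharpens the estimate toward the maximum of the
    Poisson likelihood of `image` given the point spread function.
    Convolutions are done via FFT (periodic boundary assumption)."""
    psf = psf / psf.sum()
    otf = fft2(np.fft.ifftshift(psf))        # PSF recentered at origin
    est = np.full_like(image, image.mean())  # flat initial estimate
    for _ in range(iters):
        conv = np.real(ifft2(fft2(est) * otf))
        ratio = image / np.maximum(conv, 1e-12)  # avoid divide-by-zero
        # multiplicative update with the flipped (adjoint) PSF
        est *= np.real(ifft2(fft2(ratio) * np.conj(otf)))
    return est
```

Because only the imaging (third) subsystem enters through the PSF, this update can sharpen but not recover information lost to multiple scattering, which is the comparison drawn in the following sections.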
Figure 4 shows the SEM and CLSM images of two-square and two-disk patterns with different pitch sizes. In the SEM image of Fig. 4(a), for each pattern type, the center-to-center distances are 400, 320, 280, 240, 200, 160, 140, and 120 nm, and the edge-to-edge distances are 160, 120, 80, 60, 60, 40, 35, and 25 nm. As seen from the CLSM image in Fig. 4(b), we can roughly resolve the two-square and two-disk patterns with 280 nm pitch and 80 nm edge-to-edge distance, which is also reflected in the cross-section comparison of Fig. 4(c). Images for smaller pitch sizes exhibit the shape of an elongated spot. These images are used to reconstruct the original profiles of the patterns. It is also noted that the proposed optical model gives better results than the linear Fourier model. A detailed comparison can be found in Supplement 1, Fig. S1.
For the two-square and two-disk patterns with pitches of 200, 160, and 140 nm, Figs. 5I–III present SEM, simulated, and calibrated images, as well as reconstructions using the proposed image-reconstruction method and the Richardson–Lucy algorithm. The CLSM images in Figs. 5I–III(e) and 5I–III(f) show patterns similar to the simulated images in Figs. 5I–III(c) and 5I–III(d), respectively. These images exhibit the shape of an elongated spot, and we cannot identify the original two-square and two-disk structures from them. Compared to the CLSM images, the images reconstructed with the Richardson–Lucy algorithm, shown in Figs. 5I–III(i) and 5I–III(j), present sharper elongated spots with higher signal-to-noise ratio, but we still cannot resolve the squares and disks.
In comparison, the proposed image-reconstruction method provides better results. The reconstructed images [Figs. 5I(g) and 5I(h)] agree well with the SEM images [Figs. 5I(a) and 5I(b)] for the 200 nm pitch pattern, and we can roughly identify the shapes of the square and disk. Although two squares and two disks are resolvable for the 160 and 140 nm pitch patterns, as shown in Figs. 5II–III(g) and 5II–III(h), respectively, we cannot distinguish the disk and square shapes in the reconstructed images of the 160 nm [Figs. 5II(g) and 5II(h)] and 140 nm [Figs. 5III(g) and 5III(h)] pitch patterns. This is because the CLSM image quality worsens as the feature size decreases, which is reflected in Figs. 5I(e) and 5I(f), Figs. 5II(e) and 5II(f), and Figs. 5III(e) and 5III(f). More specifically, an error ratio defined from the fidelity term in the cost function of Eq. (6) is about 3%, 5%, and 9% for the 200 nm, 160 nm, and 140 nm patterns, respectively. A detailed comparison of cross sections for the simulated and calibrated images can be found in Supplement 1, Figs. S2–S4.
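An error ratio of this kind can be computed as a relative residual between the simulated and calibrated images; the Frobenius-norm normalization in this Python sketch is an assumption about the precise definition used in Eq. (6):

```python
import numpy as np

def error_ratio(I_sim, I_cal):
    """Relative discrepancy between simulated and calibrated CLSM
    images (Frobenius norm of the residual over the norm of the
    calibrated image); an illustrative form of the fidelity ratio."""
    return np.linalg.norm(I_sim - I_cal) / np.linalg.norm(I_cal)
```

Under this definition, a 10% intensity mismatch everywhere yields a ratio of 0.1, consistent with the percentage figures quoted above.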
The proposed image-reconstruction algorithm significantly enhances the resolution compared to the Richardson–Lucy algorithm. Nevertheless, its computational cost is much higher, since the accurate numerical optical model of the microscope is far more expensive to evaluate. The computation time of the Richardson–Lucy algorithm is less than 10 min per image. For the proposed algorithm, the computation times and the related parameters of the computational domain for the discussed optimizations are shown in Table 1. In the optimization, twelve processors ran in parallel.
To evaluate the limits of the image-inversion approach, the two-square and two-disk patterns of 120 nm pitch and 25 nm edge-to-edge distance are also considered, although the signal-to-noise ratio is so low that the error ratio becomes large. As expected, we cannot resolve these features in the reconstructed images. Details of the images are shown in Supplement 1, Figs. S5–S6. Based on these analyses, we conclude that the smaller the object's feature size, the more difficult the optimization becomes, owing to the greater noise contained in the measured data. We should also note that the spatial resolution of the reconstructed images is not limitless; a critical resolution exists for the reconstructed imaging. That is, two squares or two disks separated by a distance smaller than the critical resolution cannot be identified as two objects by the reconstruction algorithm, as reflected in the reconstructed images for the two-square and two-disk patterns of 120 nm pitch and 25 nm edge-to-edge distance. Despite these limitations, the spatial resolution of the reconstructed images is significantly improved compared to the experimental CLSM images. We can resolve two-square and two-disk patterns of 140 nm pitch and 35 nm edge-to-edge distance from the reconstructed images, far better than what the CLSM images can resolve, namely 280 nm pitch and 80 nm edge-to-edge distance.
To demonstrate the capability of the proposed approach for complex structures, four-square and four-disk patterns are also fabricated. SEM and CLSM images are shown in Supplement 1, Fig. S7. We can clearly resolve the four-square and four-disk patterns with 320 nm pitch and 120 nm edge-to-edge distance from the CLSM images [Fig. S7(c)]. Figure 6 presents reconstructed images for the fabricated patterns with 160 nm pitch and 40 nm edge-to-edge distance. The reconstructed image [Fig. 6(g)] for the four-square pattern shows four separated spots with artifacts. This is because the CLSM image in Fig. 6(e) is distorted by noise contamination relative to the simulated image in Fig. 6(c). In contrast, the simulated and CLSM images [Figs. 6(d) and 6(f)] for the four-disk pattern are very similar, and thus the reconstructed image of the four-disk pattern is much better, as shown in Fig. 6(h). For comparison, the images reconstructed with the Richardson–Lucy algorithm are also provided in Figs. 6(i) and 6(j); they cannot resolve the four dots.
In this paper, superresolution microscopy imaging is experimentally achieved, for the first time to our knowledge, by combining full-wave modeling with an image-inversion approach, without modification to the existing hardware of a CLSM. The use of the FE-BI numerical method allows us to simulate practical samples in which the object structures lie on the surface of a substrate. Several structure patterns are used to evaluate the proposed image-inversion approach. Further, a quantitative comparison between a popular deconvolution algorithm, the Richardson–Lucy algorithm, and the proposed image-reconstruction method is also conducted.
With the proposed image-inversion approach, the image quality and resolution are significantly improved compared to the original measured CLSM images. For example, we can roughly identify the square shape from the reconstructed image of the two-square pattern with 140 nm side length and 60 nm edge-to-edge distance, whereas it cannot be observed in the CLSM image. Moreover, a two-disk pattern with 105 nm disk diameter and 35 nm edge-to-edge distance can be clearly resolved in the reconstructed images, whereas we cannot even resolve a two-disk pattern with 160 nm disk diameter and 80 nm edge-to-edge distance in the CLSM images. In contrast, the popular Richardson–Lucy deconvolution algorithm provides images with improved signal-to-noise ratio compared to the original CLSM images, but no resolution enhancement is observed. This is mainly because different forward-imaging models are used in the proposed image-reconstruction and deconvolution algorithms.
Typically, a microscopy imaging system can be divided into three processes: focusing of incident light, interaction of the focal field with object structures, and imaging of the scattered light. Each of them can influence the resolution and image quality of the imaging system [7,44]. The proposed forward model in this paper is a complete model that considers all of these processes, whereas the imaging model underlying the deconvolution algorithm considers only the third (imaging of the scattered light). The image recorded by an optical microscope is not an exact replica of the object structure; rather, it is an image of the induced currents (the secondary source on the object structure) formed by the imaging system. The imaging model of the deconvolution algorithm builds a relationship between the image at the detector and the induced currents on the object structure. However, the induced current on an object structure is not the same as the object structure itself; the difference arises from multiple scattering between different parts of the object structure. Thus, the proposed image-reconstruction method obtains reconstructed images with enhanced resolution compared to the deconvolution algorithm, since multiple scattering by the object structures is taken into account in the full-wave modeling of the optical microscope.
In addition, the image-inversion approach operates entirely in software and can therefore be readily integrated with other optical microscopy techniques without changing the existing experimental designs—for example, microsphere nanoscopy and fluorescence microscopy [18,19]. For microsphere nanoscopy, the only change needed relative to the CLSM case is to develop a forward model of the corresponding system for simulating the images of object structures. However, fluorescence imaging is largely incoherent, so an incoherent complete optical model is required. Developing a complete numerical model for incoherent imaging will be one of our future research directions. Once such a model is developed, we expect the proposed image-reconstruction algorithm to achieve better resolution than the simple Richardson–Lucy deconvolution algorithm.
In summary, we have demonstrated the concept of inverse reconstruction for improved resolution and image quality using full-wave modeling. In contrast to CLSM images, which are intensity distributions on the measurement plane, the proposed image-inversion approach provides quantitative images, in which the values of the objects' refractive indices are numerically reconstructed. We note, however, that the image-inversion approach is complicated and time-consuming for CLSM in an inhomogeneous background; at present, applying it to large-scale CLSM images is very difficult because of the tremendous computational workload. Further improvements are therefore needed to increase the efficiency of the presented approach. For example, sets of Green's functions for different inhomogeneous background media can be precomputed and stored as libraries, and as much prior knowledge as possible can be incorporated to reduce the ill-posedness of the inverse problem. The main computational burden is solving the forward problem corresponding to the estimated refractive index at each iteration of the optimization. Since the intensity at each scanning position can be computed independently, the computation time can be significantly reduced by performing the computation on a distributed computing system; if relationships between nearby scanning positions can be exploited, the efficiency of the proposed algorithm can be improved further. With the continuing development of computing technology, computational imaging methods similar to this one may become a mainstream route to enhancing the resolution of microscopy systems. We also expect this work to influence resolution improvement in a variety of imaging systems in biotechnology and materials science applications.
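The parallelization argument above can be sketched as follows. The function `forward_intensity` is a hypothetical stand-in for the full-wave forward solver (which, in the actual method, solves a scattering problem for each focal-spot position given the estimated refractive-index map); the point of the sketch is only that the per-position computations are independent and can be dispatched to a worker pool.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def forward_intensity(scan_pos, n_est):
    # Hypothetical placeholder: the real forward solver computes the
    # detected intensity for one focal-spot position by solving the
    # full-wave scattering problem for the index map `n_est`.
    y, x = scan_pos
    return float(np.abs(n_est[y, x]) ** 2)

# Estimated refractive-index map at the current optimization iteration.
n_est = np.random.default_rng(0).uniform(1.0, 1.5, size=(32, 32))
scan_positions = [(y, x) for y in range(32) for x in range(32)]

# Each scan position is independent, so the simulated CLSM image can be
# assembled by mapping the forward solve over a pool of workers.
with ThreadPoolExecutor() as pool:
    intensities = list(pool.map(lambda p: forward_intensity(p, n_est),
                                scan_positions))

image = np.array(intensities).reshape(32, 32)
```

In a real deployment, each worker would be a node of a distributed cluster running the full forward solver, and the per-position results would be gathered into the simulated image before the next update of the refractive-index estimate.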
National Research Foundation Singapore (NRF) (NRF-CRP10-2012-04).
This work was supported by the A*STAR Computational Resource Centre through the use of its high-performance computing facilities.
See Supplement 1 for supporting content.
1. T. Wilson and C. J. R. Sheppard, Theory and Practice of Scanning Optical Microscopy (Academic, 1984), Vol. 1.
2. H. Wang, C. J. R. Sheppard, K. Ravi, S. T. Ho, and G. Vienne, “Fighting against diffraction: apodization and near field diffraction structures,” Laser Photon. Rev. 6, 354–392 (2012). [CrossRef]
3. S. W. Hell, “Far-field optical nanoscopy,” Science 316, 1153–1158 (2007). [CrossRef]
4. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19, 780–782 (1994). [CrossRef]
5. H. Shroff, C. G. Galbraith, J. A. Galbraith, and E. Betzig, “Live-cell photoactivated localization microscopy of nanoscale adhesion dynamics,” Nat. Methods 5, 417–423 (2008). [CrossRef]
6. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198, 82–87 (2000). [CrossRef]
7. R. Chen, K. Agarwal, C. J. R. Sheppard, J. C. H. Phang, and X. Chen, “A complete and computationally efficient numerical model of aplanatic solid immersion lens scanning microscope,” Opt. Express 21, 14316–14330 (2013). [CrossRef]
8. S. M. Mansfield and G. S. Kino, “Solid immersion microscope,” Appl. Phys. Lett. 57, 2615–2616 (1990). [CrossRef]
9. R. Chen, K. Agarwal, C. J. R. Sheppard, and X. Chen, “Imaging using cylindrical vector beams in a high-numerical-aperture microscopy system,” Opt. Lett. 38, 3111–3114 (2013). [CrossRef]
10. K. Agarwal, R. Chen, L. S. Koh, C. J. R. Sheppard, and X. Chen, “Crossing the resolution limit in near-infrared imaging of silicon chips: targeting 10-nm node technology,” Phys. Rev. X 5, 021014 (2015). [CrossRef]
11. M. Saxena, G. Eluru, and S. S. Gorthi, “Structured illumination microscopy,” Adv. Opt. Photon. 7, 241–275 (2015). [CrossRef]
12. R. C. Dunn, “Near-field scanning optical microscopy,” Chem. Rev. 99, 2891–2928 (1999). [CrossRef]
13. J. B. Pendry, “Negative refraction makes a perfect lens,” Phys. Rev. Lett. 85, 3966–3969 (2000). [CrossRef]
14. I. I. Smolyaninov, Y.-J. Hung, and C. C. Davis, “Magnifying superlens in the visible frequency range,” Science 315, 1699–1701 (2007). [CrossRef]
15. Z. Liu, H. Lee, Y. Xiong, C. Sun, and X. Zhang, “Far-field optical hyperlens magnifying sub-diffraction-limited objects,” Science 315, 1686 (2007). [CrossRef]
16. J. Y. Lee, B. H. Hong, W. Y. Kim, S. K. Min, Y. Kim, M. V. Jouravlev, R. Bose, K. S. Kim, I. C. Hwang, L. J. Kaufman, C. W. Wong, and P. Kim, “Near-field focusing and magnification through self-assembled nanoscale spherical lenses,” Nature 460, 498–501 (2009). [CrossRef]
17. D. R. Mason, M. V. Jouravlev, and K. S. Kim, “Enhanced resolution beyond the Abbe diffraction limit with wavelength-scale solid immersion lenses,” Opt. Lett. 35, 2007–2009 (2010). [CrossRef]
18. Z. B. Wang, W. Guo, L. Li, B. Luk’yanchuk, A. Khan, Z. Liu, Z. C. Chen, and M. H. Hong, “Optical virtual imaging at 50 nm lateral resolution with a white-light nanoscope,” Nat. Commun. 2, 218 (2011). [CrossRef]
19. Y. Yan, L. Li, C. Feng, W. Guo, S. Lee, and M. Hong, “Microsphere-coupled scanning laser confocal nanoscope for sub-diffraction-limited imaging at 25 nm lateral resolution in the visible spectrum,” ACS Nano 8, 1809–1816 (2014).
20. K. Nasrollahi and T. B. Moeslund, “Super-resolution: a comprehensive survey,” Mach. Vis. Appl. 25, 1423–1468 (2014). [CrossRef]
21. B. K. Gunturk, A. U. Batur, Y. Altunbasak, M. H. Hayes, and R. M. Mersereau, “Eigenface-domain super-resolution for face recognition,” IEEE Trans. Image Process. 12, 597–606 (2003). [CrossRef]
22. A. Schatzberg and A. J. Devaney, “Super-resolution in diffraction tomography,” Inverse Prob. 8, 149–164 (1992). [CrossRef]
23. R. R. Schultz and R. L. Stevenson, “A Bayesian approach to image expansion for improved definition,” IEEE Trans. Image Process. 3, 233–242 (1994). [CrossRef]
24. C. J. R. Sheppard, “Super-resolution in confocal imaging,” Optik 80, 53–54 (1988).
25. C. B. Muller and J. Enderlein, “Image scanning microscopy,” Phys. Rev. Lett. 104, 198101 (2010). [CrossRef]
26. O. Schulz, C. Pieper, M. Clever, J. Pfaff, A. Ruhlandt, R. H. Kehlenbach, F. S. Wouters, J. Großhans, G. Bunt, and J. Enderlein, “Resolution doubling in fluorescence microscopy with confocal spinning-disk image scanning microscopy,” Proc. Natl. Acad. Sci. U.S.A. 110, 21000–21005 (2013). [CrossRef]
27. C. J. R. Sheppard, S. B. Mehta, and R. Heintzmann, “Superresolution by image scanning microscopy using pixel reassignment,” Opt. Lett. 38, 2889–2892 (2013). [CrossRef]
28. J. E. McGregor, C. A. Mitchell, and N. A. Hartell, “Post-processing strategies in image scanning microscopy,” Methods 88, 28–36 (2015). [CrossRef]
29. A. G. York, P. Chandris, D. D. Nogare, J. Head, P. Wawrzusin, R. S. Fischer, A. Chitnis, and H. Shroff, “Instant super-resolution imaging in live cells and embryos via analog image processing,” Nat. Methods 10, 1122–1126 (2013). [CrossRef]
30. R. Heintzmann, V. Sarafis, P. Munroe, J. Nailon, Q. S. Hanley, and T. M. Jovin, “Resolution enhancement by subtraction of confocal signals taken at different pinhole sizes,” Micron 34, 293–300 (2003). [CrossRef]
31. R. W. Lu, B. Q. Wang, Q. X. Zhang, and X. C. Yao, “Super-resolution scanning laser microscopy through virtually structured detection,” Biomed. Opt. Express 4, 1673–1682 (2013). [CrossRef]
32. A. C. Sobieranski, F. Inci, H. C. Tekin, M. Yuksekkaya, E. Comunello, D. Cobra, A. von Wangenheim, and U. Demirci, “Portable lensless wide-field microscopy imaging platform based on digital inline holography and multi-frame pixel super-resolution,” Light Sci. Appl. 4, e346 (2015). [CrossRef]
33. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013). [CrossRef]
34. N. Dey, L. Blanc-Feraud, C. Zimmer, P. Roux, Z. Kam, J. C. Olivo-Marin, and J. Zerubia, “Richardson–Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution,” Microsc. Res. Tech. 69, 260–266 (2006). [CrossRef]
35. P. Sarder and A. Nehorai, “Deconvolution methods for 3-D fluorescence microscopy images,” IEEE Signal Process. Mag. 23(3), 32–45 (2006). [CrossRef]
36. L. Zhu, L. Li, L. Gao, and L. V. Wang, “Multiview optical resolution photoacoustic microscopy,” Optica 1, 217–222 (2014). [CrossRef]
37. J. Pawley and B. R. Masters, “Handbook of biological confocal microscopy,” Opt. Eng. 35, 2765–2766 (1996). [CrossRef]
38. M. Bertero and E. R. Pike, “Resolution in diffraction-limited imaging, a singular value analysis, I: the case of coherent illumination,” Opt. Acta 29, 727–746 (1982). [CrossRef]
39. M. Bertero, P. Brianzi, and E. R. Pike, “Superresolution in confocal scanning microscopy,” Inverse Prob. 3, 195–212 (1987). [CrossRef]
40. M. Defrise and C. Demol, “Superresolution in confocal scanning microscopy-generalized inversion formulas,” Inverse Prob. 8, 175–185 (1992). [CrossRef]
41. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974). [CrossRef]
42. M. Bertero, P. Boccacci, and E. R. Pike, “Resolution in diffraction-limited imaging, a singular value analysis, II: the case of incoherent illumination,” Opt. Acta 29, 1599–1611 (1982). [CrossRef]
43. M. Bertero, C. De Mol, E. R. Pike, and J. G. Walker, “Resolution in diffraction-limited imaging, a singular value analysis IV. The case of uncertain localization or non-uniform illumination of the object,” Opt. Acta 31, 923–946 (1984). [CrossRef]
44. L. Novotny and B. Hecht, Principles of Nano-Optics (Cambridge University, 2006).
45. J.-M. Jin, The Finite Element Method in Electromagnetics (Wiley, 2014).
46. R. Chen, “Modeling and designing aplanatic solid immersion lens microscope for failure analysis of integrated circuits,” Ph.D. thesis (National University of Singapore, 2013).
47. Y. W. Wen and R. H. Chan, “Parameter selection for total-variation-based image restoration using discrepancy principle,” IEEE Trans. Image Process. 21, 1770–1781 (2012). [CrossRef]
48. G. M. P. van Kempen and L. J. van Vliet, “The influence of the regularization parameter and the first estimate on the performance of Tikhonov regularized non-linear image restoration algorithms,” J. Microsc. 198, 63–75 (2000). [CrossRef]
49. G. M. P. van Kempen and L. J. van Vliet, “Background estimation in nonlinear image restoration,” J. Opt. Soc. Am. A 17, 425–433 (2000). [CrossRef]