Abstract
We present the derivation of the dyadic Green’s function for an aplanatic solid immersion lens (SIL) based microscopy system. The presented dyadic Green’s function is general and is applicable at non-aplanatic points in the object plane as well. Thus, the electromagnetic wave formulation is used to describe the optical system without paraxial assumptions. Various important and useful properties of the SIL based microscopy system are also presented. The effects of the numerical aperture of the objective on the peak intensities, the resolution, and the depth of field are reported, along with some interesting longitudinal effects.
©2011 Optical Society of America
1. Introduction
Although solid immersion lenses (SILs) have been used in optical data storage for a long time [1], their application in subsurface microscopy is relatively new and currently being explored [2]. One motivation for exploring SIL based microscopy is that it is capable of providing better resolution than non-SIL based systems owing to the high numerical aperture of the solid immersion lens [3,4]. Another motivation is that while most general microscopy systems image only the surface of the object, SIL based microscopy can provide subsurface imaging with good resolution when the refractive index of the object is the same as that of the SIL. It therefore attracts potential applications in the imaging of solid state devices, in which most of the structures are buried deep in the substrate; a solid immersion lens made of the same material as the substrate is a suitable imaging method.
The initial studies of SIL based microscopy systems appeared in [2,4–8], more theoretical aspects were explored in [3,9–19], and experimental results were reported in [20–23]. Most reported results used ray optics and/or paraxial approximations to study SIL based microscopy systems. While point spread functions and aberration functions are common in optics, the utility of the dyadic Green’s function has also been demonstrated [24–32]. Dyadic Green’s functions are based on electromagnetic theory; thus, they have greater applicability than the point spread function (in non-focal regions as well) and can be used to explain or predict the behavior of the system in many scenarios.
In this paper, we present a derivation of the dyadic Green’s function of the aplanatic SIL based microscopy system using electromagnetic theory. To our knowledge, this is the first time that the dyadic Green’s function for the aplanatic SIL is being reported. Using the dyadic Green’s function, we compare the behavior of SIL based and non-SIL based microscopy systems in the presence of a single dipole source (equivalent to a point spread function). The impact of the SIL on resolution enhancement and depth of focus is also demonstrated using numerical simulations.
2. Setup and notation
For convenience, we first introduce the mathematical notation used to represent physical quantities. Physical vectors, such as the dipole current element, the electric fields, and the position vectors, are denoted with arrows above them. Subscripts denote the specific characteristics of a quantity; for example, the subscript identifies the position vector of a point in the charge-coupled device (CCD) region, where the image is formed. Matrices are shown in bold upright notation; for example, the dyadic Green’s function (a tensor) mapping a current dipole in the object region to the electric fields in the image plane (CCD region). Scalars are shown in italics; for example, ρ is the radial distance of a point in the object plane from the origin in the object plane. In addition, points are assigned labels (italic capital letters, such as A), and the same point may be denoted by different vectors in different coordinate systems; for example, the point A may be expressed in either the objective coordinate system or the SIL coordinate system.
In the setup, the longitudinal direction is taken as the z axis. The SIL is characterized by its radius R and its refractive index, which is the same as that of the object to be imaged. The region to be imaged is referred to as the object region for convenience of reference. The objective lens is characterized by its focal length, its semi-aperture angle, and the refractive index of its surrounding medium. The image region is the region containing the CCD screen. The CCD screen is in the focal region of the CCD lens, which is characterized by its focal length and the refractive index of the image region.
The setup can be understood as comprising three different coordinate systems:
- 1. SIL coordinate system: The aplanatic point of the SIL is taken as the origin of this system. This point is different from the geometric center of the SIL. A spherical coordinate system is used for representing points in this system; a point is represented by its radial distance from the aplanatic point, its angle of elevation, and its azimuthal angle, with the corresponding unit vectors. Alternatively, a Cartesian coordinate system may be used. For defining the dipole current element, we use the Cartesian unit vectors.
- 2. Objective coordinate system: The objective is represented using the Gaussian reference sphere (GRS). The focal point of the objective is taken as the origin of the objective coordinate system. A spherical coordinate system is used for representing points in the object region with respect to this origin; a point is represented by its radial distance from the focal point, its angle of elevation, and its azimuthal angle, with the corresponding unit vectors.
- 3. CCD coordinate system: The CCD lens is also represented using a Gaussian reference sphere. The focal point of the CCD lens is taken as the origin of the CCD coordinate system. A spherical coordinate system is used for representing points in the CCD region; a point is represented by its radial distance from the focal point, its angle of elevation, and its azimuthal angle, with the corresponding unit vectors. Alternatively, a Cartesian coordinate system is used.
3. Dyadic Green’s function
The dyadic Green’s function maps a dipole current in the object plane to the electric fields in the CCD region. Its derivation is presented here. For a dipole located close to the aplanatic point, the electric field inside the SIL region close to the interface (such that the far-field approximation is valid) is given as [33]
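Since Eq. (1) itself uses notation defined above, a hedged reference form may help: the standard far-zone field of a time-harmonic electric dipole moment $\vec{p}$ in a medium of refractive index $n_s$ and permittivity $\varepsilon$ is [33]

```latex
\vec{E}(\vec{r}) \;\approx\; \frac{k_s^{2}}{4\pi\varepsilon}\,
\frac{e^{jk_s r}}{r}\,\bigl(\hat{r}\times\vec{p}\bigr)\times\hat{r},
\qquad k_s = \frac{2\pi n_s}{\lambda},
```

where only the components transverse to $\hat{r}$ survive, consistent with the single-ray interpretation that follows. The symbols $k_s$ and $\vec{p}$ are our notation, not necessarily that of the original equation.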
The above expression represents a single ray, whose direction is specified by the wave vector. We highlight that the far-field approximation is used above. Recall that the aplanatic point of the SIL serves as the origin of the SIL coordinate system. The far-field assumption implies that although the dipole may be located far from the aplanatic point of the SIL (in terms of wavelength), the far-field approximation remains valid after refraction at the SIL interface, and the refracted wave is considered equivalent to a wave originating very close to the focal point of the objective. Assuming that the radius of the SIL is much larger than the wavelength of the wave, for each ray the interface between the SIL and objective regions can be considered locally planar and Fresnel transmission coefficients can be used. Thus, after refraction at a point A on the SIL interface, the electric field in the objective region is given by
where the two coefficients are the transmission coefficients at the point A for the ‘s’ and ‘p’ polarizations respectively. For the point A on the interface, the transmission coefficients are given as below:
where the two angles are those made by the incident and refracted wave vectors with the normal vector at the interface. We highlight that the above coefficients are slightly different from the Fresnel transmission coefficients. This is because the spherical waveforms in Eqs. (1) and (2) use different coordinate systems, and the required normalization is incorporated into the transmission coefficients defined in Eqs. (3) and (4). Under the stated assumptions, the transmission coefficients can be simplified to
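For comparison, the standard Fresnel amplitude transmission coefficients at a planar interface from the SIL medium ($n_s$) to the objective medium ($n_o$), which Eqs. (3) and (4) modify only by a normalization factor, are (in our notation)

```latex
t_s = \frac{2 n_s \cos\theta_i}{n_s \cos\theta_i + n_o \cos\theta_t},
\qquad
t_p = \frac{2 n_s \cos\theta_i}{n_o \cos\theta_i + n_s \cos\theta_t},
```

with $\theta_i$ and $\theta_t$ the angles of incidence and refraction measured from the interface normal.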
After passing through the objective lens, the wave travels parallel to the optic axis, reaches the CCD lens, and is collected at the focus of the CCD lens. The electric field at a point B on the Gaussian reference sphere representing the CCD lens is given as
where the first pair are the unit vectors along the electric fields for the ‘s’ polarization in the objective and CCD regions, while the second pair are the unit vectors along the electric fields for the ‘p’ polarization in the objective and CCD regions. Using the relations among these unit vectors,
where
Finally, inside the CCD region, at a generic point, the field is given as
The above can be simplified as follows:
where, under the stated assumption,
and
where
The dyadic Green’s function of the SIL based microscopy system for an arbitrarily located dipole is therefore given by Eqs. (12)–(14), where the elements in the last row of Eq. (12) are zero under the stated assumption, which holds for most practical microscopy systems. The only condition is that the wavelength of the wave and the distance of the dipole from the aplanatic point are small in comparison with the radius of the SIL.
Beyond the critical angle, total internal reflection of the electric field occurs at the interface of the SIL. This limits the valid range of the elevation angle, which implies that the maximum numerical aperture of an objective in a SIL based microscopy system is given by:
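A sketch of the standard argument behind this bound [3,4], under the assumption that the objective sits in air: the aplanatic SIL enhances the effective numerical aperture by a factor $n_s^2$, and total internal reflection caps the effective aperture at $n_s$, so

```latex
\mathrm{NA}_{\mathrm{eff}} = n_s^{2}\,\mathrm{NA}_{\mathrm{obj}} \;\le\; n_s
\quad\Longrightarrow\quad
\mathrm{NA}_{\mathrm{obj,max}} = \frac{1}{n_s}.
```

This is only the well-known limiting form; the precise bound for this system is the one derived in Eq. (15).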
3.1 Definition of non-SIL based microscopy system used in sections 5 to 8
The above derivation can also be used for a non-SIL based system, as shown in Fig. 1(c), with suitable substitutions. In such a system, the whole object region is replaced with air: there is no substrate, no sample, and no SIL, and the dipole to be imaged is simply present in the air. This allows the two microscopy systems to be compared in a fair manner. We concede that this may not be practically realizable, but such theoretical experiments provide the most honest comparison of the two systems and their performance parameters. Unless mentioned explicitly, when the maximum numerical aperture is quoted for the non-SIL based microscopy system, it means the maximum numerical aperture of the objective for the corresponding SIL based system; it should not be confused with the maximum numerical aperture of a free-space imaging system.
4. Paraxial approximation and magnification
In this section, for brevity, we drop the superscripts A and B, as their meaning is evident without them. In the paraxial approximation, the elevation angles are small. Then Eqs. (5) and (6) can be simplified to
Further, applying the corresponding small-angle simplifications, Eq. (14) can be simplified to
and the resulting factor is the lateral magnification of a general microscopy system without the SIL.
4.1 Lateral Magnification
Now, in order to derive the lateral magnification, we consider that the dipole belongs to the object plane, and we fix the image plane. Thus, Eq. (13) can be written as
Thus, the point spread function (PSF) is given as
It is evident that the maximum intensity occurs at the point where the arguments of the Bessel functions vanish, which yields the image coordinates. Thus, the lateral magnification of the SIL based microscopy system is
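Under the same paraxial assumptions, and writing $f_{\mathrm{obj}}$ and $f_{\mathrm{ccd}}$ for the focal lengths of the objective and CCD lenses (our notation, introduced here for illustration), the base system magnification and the well-known $n_s^2$ lateral enhancement of the aplanatic SIL [3] combine as

```latex
M = \frac{f_{\mathrm{ccd}}}{f_{\mathrm{obj}}},
\qquad
M_{\mathrm{SIL}} = n_s^{2}\,M = n_s^{2}\,\frac{f_{\mathrm{ccd}}}{f_{\mathrm{obj}}},
```

a sketch consistent with the $n_s^2$ magnification of the aplanatic point; the exact expression for this system is the one in Eq. (20).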
4.2 Longitudinal magnification
Now, for determining the longitudinal magnification, we consider that the dipole lies on the longitudinal axis. Since we are interested in the longitudinal direction only, we also restrict attention to on-axis image points. Due to the properties of the Bessel functions (only the zeroth-order term survives on the axis), Eq. (13) reduces to:
Thus, the PSF is given as
From the above, it is evident that the maximum intensity occurs at the axial image point, which implies
where the corresponding factor is the longitudinal magnification of a general microscopy system without a SIL. Thus, the longitudinal magnification of the SIL based microscopy system is
5. Basic characteristics of the dyadic Green’s function for a single dipole source
The intensity at a point is given by the squared Euclidean norm of the electric field. For convenience, we use a normalized quantity that, using Eqs. (11) and (12), accounts for only the integrals in Eq. (13). All plots in this paper use this normalized intensity.
In the numerical simulations below, we consider the following settings. The SIL is made of silicon; this is useful for subsurface imaging of silicon chips. The radius of the SIL, the wavelength of the optical signal, the focal length and semi-aperture angle of the objective, and the parameters of the receiving lens in the CCD region are fixed. When unspecified, the numerical aperture of the objective is given by Eq. (15). If other values of the numerical aperture are used, they are specified explicitly in the figures and the corresponding discussions.
Considering a synthetic dipole source at the aplanatic point, we use the dyadic Green’s function, Eq. (12), to compute the intensity in the CCD region. First we consider an x̂-directed dipole. The intensities along the x axis and the y axis in the CCD region are plotted in Fig. 2(a). The full width at half maximum (FWHM) and the location of the first zero away from the peak differ slightly between the two axes. The intensity along the longitudinal direction is plotted in Fig. 2(b). Though the FWHM and zeros along the longitudinal direction are not generally used in practice, we mention them for the sake of completeness; they can be read from Fig. 2(b). For a ŷ-directed dipole, the plots are similar to Figs. 2(a,b), except that the two plots along the x and y axes in Fig. 2(a) are interchanged. Next, we consider a ẑ-directed dipole; the intensity along the x axis in the CCD region is plotted in Fig. 2(c).
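The transverse point spreads above are qualitatively Airy-like. As a hedged paraxial, scalar sketch (not the full vectorial PSF of Eq. (13)), the following locates the first dark ring and the FWHM of an Airy pattern, from which the familiar scalings of roughly 0.61 λ/NA (first zero) and 0.51 λ/NA (FWHM) follow:

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import brentq

def airy(v):
    """Normalized Airy intensity (2 J1(v)/v)^2, with v = 2*pi*NA*x/lambda."""
    v = np.atleast_1d(np.asarray(v, dtype=float))
    out = np.ones_like(v)                       # limit value at v = 0
    nz = v != 0
    out[nz] = (2.0 * j1(v[nz]) / v[nz]) ** 2
    return float(out[0]) if out.size == 1 else out

# First dark ring: first positive zero of J1.
v_zero = brentq(j1, 3.0, 4.5)

# Half-maximum point of the central lobe.
v_half = brentq(lambda v: airy(v) - 0.5, 0.5, 3.0)

# Convert to physical units via x = v * lam / (2*pi*NA).
zero_factor = v_zero / (2.0 * np.pi)        # first zero, in units of lam/NA
fwhm_factor = 2.0 * v_half / (2.0 * np.pi)  # FWHM, in units of lam/NA
```

The two factors are material independent; the SIL and the objective enter only through the effective NA in the scaling x = v λ/(2π NA).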
5.1 Comparison with non-SIL based microscopy system
We compare the SIL based microscopy system with the non-SIL based microscopy system. First, we compare the intensities and point spreads at the aplanatic (or focal) point. For a transversely directed dipole, the intensities along the x and z axes for both SIL and non-SIL based systems are plotted in Figs. 3(a,b). The intensity along the x axis for a longitudinally directed dipole at the aplanatic (focal) point for the SIL and non-SIL based systems is plotted in Fig. 3(c). In all the figures, we have plotted the normalized intensity; the peak in the case of the non-SIL based system is very small in comparison to the SIL based system, and the actual intensities are not plotted as they do not give a good comparison of the point spreads. In all the cases, we see that the point spreads for the SIL and non-SIL based systems are very close.
However, it is well known that the resolution of the SIL based system is significantly better than that of the non-SIL based system [3,4]. Thus, in order to explain the resolution, we provide two perspectives: (1) the effect of magnification, and (2) the Rayleigh criterion. First, to illustrate the effect of magnification, we move the source away from the aplanatic (or focal) point and examine its image. In the object region, we place a source displaced along the x axis and plot the intensities along the x axis for the SIL and non-SIL based microscopy systems in Fig. 4(a). Similarly, we place a source displaced along the z axis and plot the intensities along the z axis in Fig. 4(b). It is seen that the shift in the image of the source differs greatly between the SIL and non-SIL based systems: the shift in the SIL based system is much larger. This happens because of the larger magnification in the SIL based system (see Eqs. (20) and (24)).
Second, taking the Rayleigh criterion as the resolution criterion [4], two such sources are said to be resolved when the saddle-to-peak ratio of the combined point spread function is approximately 0.735. For the SIL based system, we consider two identically oriented sources placed symmetrically about the aplanatic point. At the separation for which the saddle-to-peak ratio is 0.735, the two sources can be resolved based on the Rayleigh criterion, as shown in Fig. 5(a). For the non-SIL based system, the corresponding resolvable separation is much larger; the image of the resolved sources is shown in Fig. 5(c), while Fig. 5(b) shows that, at the separation resolvable by the SIL based system, the two sources cannot be resolved by the non-SIL based system. Therefore, the resolution of the SIL based system is significantly better than that of the non-SIL based system.
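The 0.735 figure can be checked with a hedged scalar sketch: an incoherent sum of two equal Airy patterns separated by the Rayleigh distance (the first Airy zero), rather than the full vectorial PSF of Eq. (13):

```python
import numpy as np
from scipy.special import j1

def airy(v):
    """Normalized Airy intensity (2 J1(v)/v)^2."""
    v = np.asarray(v, dtype=float)
    out = np.ones_like(v)                      # limit value at v = 0
    nz = np.abs(v) > 1e-12
    out[nz] = (2.0 * j1(v[nz]) / v[nz]) ** 2
    return out

d = 3.8317  # Rayleigh separation: first zero of the Airy pattern (v units)
x = np.linspace(-8.0, 8.0, 20001)
intensity = airy(x - d / 2) + airy(x + d / 2)  # two equal incoherent sources

saddle = intensity[np.argmin(np.abs(x))]  # midpoint between the two sources
peak = intensity.max()
ratio = saddle / peak                     # close to the quoted 0.735
```

The ratio depends only on the separation in units of λ/NA, which is why the SIL, with its much larger effective NA, resolves correspondingly smaller physical separations.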
5.2 Effect of numerical aperture of the objective
In this subsection, we consider the effect of the numerical aperture of the objective on the SIL based microscopy system. As already mentioned, the maximum numerical aperture of the objective that can be used in the SIL based microscopy system is given by Eq. (15). We now examine the effect of using a numerical aperture below this maximum.
In Figs. 6(a,b), we plot the normalized intensities along the x and z axes for a dipole placed at the aplanatic point for various values of the numerical aperture of the objective. It is noted that the point spread is smaller for larger numerical apertures. Figure 6(c) shows the peak intensities of the point spreads for various numerical apertures; a plot for the non-SIL based microscopy system is also included. It is observed that the peak intensity of the point spread is higher for high numerical apertures. This is because the solid angle over which the radiated signal is collected is larger for larger numerical apertures. We also note that the peak intensity is significantly larger for the SIL based microscopy system. This is because, for a particular numerical aperture of the objective, the solid angle in the SIL region is significantly larger (effectively a larger numerical aperture) [3,4,14]. Figure 6(d) plots the FWHM of the SIL and non-SIL based microscopy systems as a function of numerical aperture. The SIL based microscopy system has a lower FWHM than the non-SIL based system, and the FWHM is smaller for larger numerical apertures. The first property (larger intensity for larger numerical aperture) is desirable for ease of sensing in the CCD plane (and a better signal to noise ratio), while the second property (smaller FWHM for larger numerical aperture) is desirable for resolution.
The effect of the numerical aperture is more severe along the longitudinal direction. We consider a dipole placed at various positions along the longitudinal axis. For Fig. 7, we consider six such positions. Figures 7(a,b,c) correspond to three decreasing values of the numerical aperture. For the largest numerical aperture, Fig. 7(a) shows that, as we move away from the aplanatic point, there are three observations:
- 1. Though the peak of the intensity should lie at the location predicted by the paraxial approximation, Eq. (24), the actual peaks are not at these locations; rather, they are shifted further away from the expected image point. This observation is clearly visible in Fig. 8(a), which provides a quantitative evaluation of it (see the next paragraph for details).
- 2. The intensity of the image decreases as we move away from the aplanatic point.
- 3. The side lobe level increases as we move away from the aplanatic point.
Though all these effects are also present for the intermediate numerical aperture (Fig. 7(b)), they become less prominent (even absent) for smaller numerical apertures (Fig. 7(c)).
Now, we vary the source position along the longitudinal axis and plot the point where the peak occurs against the actual location of the source in Fig. 8(a). In addition to various numerical apertures for the SIL based system, we also consider the non-SIL based system. A gray line is also plotted, representing the expected position of the image computed using Eq. (23). It is seen that for low numerical apertures, and for the non-SIL based system, the actual image point is very close to the expected image point. In Fig. 8(b), we plot the peak intensities of the image over the same range of source positions. It is seen that the peak intensity falls rapidly away from the aplanatic point for the maximum numerical aperture. On the other hand, for low numerical apertures, and for the non-SIL based system, the intensity remains flat even away from the aplanatic point.
The above observations have two practical consequences. The first is that a better depth of focus in the SIL based system can be achieved by using a lower numerical aperture. The second is that if the maximum numerical aperture is used, the focal region can be very thin and specific, implying that sources away from the focal plane will not affect (blur) the image, thus giving a better image of the focal plane. However, as discussed in the context of Fig. 6, the numerical aperture also impacts the size of the focal spot, and a smaller focal spot is desirable for better resolution. Thus, in practice, all of these considerations need to be taken into account when selecting a suitable numerical aperture of the objective.
9. Conclusion
A derivation of the dyadic Green’s function for the aplanatic SIL based microscopy system is presented for the first time. The paraxial approximations and the derivations of the lateral and longitudinal magnifications are also presented. Using the dyadic Green’s function and the magnifications, various properties of the point spread functions of single dipoles are studied. In addition, using the Rayleigh criterion for resolution, we show that the resolution of the SIL based microscopy system is better than that of the non-SIL based system; in our subsequent work, we investigate the resolution of the SIL based system in detail. In general, such a derivation and study of the properties of SIL based systems are expected to be of great interest for microscopy and lithography applications that use SILs.
Various practically important properties of the SIL based microscopy system are highlighted in the manuscript. First, it is shown that though the point spreads of the SIL and non-SIL based microscopy systems are similar, the magnification plays an important role in the well-documented higher resolution of the SIL based system, as illustrated in Section 5.1. Second, the impact of the numerical aperture on the peak intensity detected in the image region and on the FWHM (full width at half maximum, an indication of the resolution) is demonstrated. It is shown that with a high numerical aperture of the objective, the intensity and resolution of the SIL based microscopy system can be greatly enhanced. The maximum available numerical aperture of the SIL based microscopy system is also presented. We highlight that SIL microscope systems can not only obtain images with better resolution than conventional microscopes but also collect more light; this property makes them useful when a high-NA objective is not available or cannot be used. Further, it is also shown that the SIL based microscopy system deviates from the paraxial approximation and exhibits a small depth of focus for large numerical apertures of the objective. The longitudinal properties of SIL based systems are categorically different from those of non-SIL based systems, and there is a tradeoff between the depth of focus and the longitudinal resolution.
Acknowledgment
This work was supported by the Singapore Ministry of Education (MOE) grant under Project No. MOE2009–T2–2–086.
References and links
1. B. D. Terris, H. J. Mamin, D. Rugar, W. R. Studenmund, and G. S. Kino, “Near-field optical data storage using solid immersion lens,” Appl. Phys. Lett. 65(4), 388–390 (1994).
2. S. B. Ippolito, B. B. Goldberg, and M. S. Unlu, “High spatial resolution subsurface microscopy,” Appl. Phys. Lett. 78(26), 4071–4073 (2001).
3. A. N. Vamivakas, R. D. Younger, B. B. Goldberg, A. K. Swan, M. S. Ünlü, E. R. Behringer, and S. B. Ippolito, “A case study for optics: The solid immersion microscope,” Am. J. Phys. 76(8), 758–768 (2008).
4. Q. Wu, L. P. Ghislain, and V. B. Elings, “Imaging with solid immersion lenses, spatial resolution, and applications,” Proc. IEEE 88(9), 1491–1498 (2000).
5. L. E. Helseth, “Roles of polarization, phase and amplitude in solid immersion lens systems,” Opt. Commun. 191(3-6), 161–172 (2001).
6. R. Brunner, M. Burkhardt, A. Pesch, O. Sandfuchs, M. Ferstl, S. Hohng, and J. O. White, “Diffraction-based solid immersion lens,” J. Opt. Soc. Am. A 21(7), 1186–1191 (2004).
7. C. J. R. Sheppard and A. Choudhury, “Annular pupils, radial polarization, and superresolution,” Appl. Opt. 43(22), 4322–4327 (2004).
8. S. B. Ippolito, B. B. Goldberg, and M. S. Unlu, “Theoretical analysis of numerical aperture increasing lens microscopy,” J. Appl. Phys. 97(5), 053105 (2005).
9. Y. J. Zhang, “Design of high-performance supersphere solid immersion lenses,” Appl. Opt. 45(19), 4540–4546 (2006).
10. C. A. Michaels, “Mid-infrared imaging with a solid immersion lens and broadband laser source,” Appl. Phys. Lett. 90(12), 121131 (2007).
11. E. Ramsay, K. A. Serrels, M. J. Thomson, A. J. Waddie, M. R. Taghizadeh, R. J. Warburton, and D. T. Reid, “Three-dimensional nanoscale subsurface optical imaging of silicon circuits,” Appl. Phys. Lett. 90(13), 131101 (2007).
12. J. Zhang, C. W. See, and M. G. Somekh, “Imaging performance of widefield solid immersion lens microscopy,” Appl. Opt. 46(20), 4202–4208 (2007).
13. S. B. Ippolito, P. Song, D. L. Miles, and J. D. Sylvestri, “Angular spectrum tailoring in solid immersion microscopy for circuit analysis,” Appl. Phys. Lett. 92(10), 101109 (2008).
14. S. H. Goh and C. J. R. Sheppard, “High aperture focusing through a spherical interface: Application to refractive solid immersion lens (RSIL) for subsurface imaging,” Opt. Commun. 282(5), 1036–1041 (2009).
15. S. H. Goh, C. J. R. Sheppard, A. C. T. Quah, C. M. Chua, L. S. Koh, and J. C. H. Phang, “Design considerations for refractive solid immersion lens: application to subsurface integrated circuit fault localization using laser induced techniques,” Rev. Sci. Instrum. 80(1), 013703 (2009).
16. Y. J. Yoon, W. C. Kim, N. C. Park, K. S. Park, and Y. P. Park, “Feasibility study of the application of radially polarized illumination to solid immersion lens-based near-field optics,” Opt. Lett. 34(13), 1961–1963 (2009).
17. D. R. Mason, M. V. Jouravlev, and K. S. Kim, “Enhanced resolution beyond the Abbe diffraction limit with wavelength-scale solid immersion lenses,” Opt. Lett. 35(12), 2007–2009 (2010).
18. L. Wang, M. C. Pitter, and M. G. Somekh, “Wide-field high-resolution solid immersion fluorescence microscopy applying an aplanatic solid immersion lens,” Appl. Opt. 49(31), 6160–6169 (2010).
19. K. M. Lim, G. C. F. Lee, C. J. R. Sheppard, J. C. H. Phang, C. L. Wong, and X. D. Chen, “Effect of polarization on a solid immersion lens of arbitrary thickness,” J. Opt. Soc. Am. A 28(5), 903–911 (2011).
20. S. Y. Yim, J. H. Kim, and J. Lee, “Solid immersion lens microscope for spectroscopy of nanostructure materials,” J. Opt. Soc. Korea 15(1), 78–81 (2011).
21. K. A. Serrels, E. Ramsay, R. J. Warburton, and D. T. Reid, “Nanoscale optical microscopy in the vectorial focusing regime,” Nat. Photonics 2(5), 311–314 (2008).
22. J. Zhang, Y. Kim, S. H. Yang, and T. D. Milster, “Illumination artifacts in hyper-NA vector imaging,” J. Opt. Soc. Am. A 27(10), 2272–2284 (2010).
23. F. H. Köklü and M. S. Unlü, “Subsurface microscopy of interconnect layers of an integrated circuit,” Opt. Lett. 35(2), 184–186 (2010).
24. J. X. Cheng and X. S. Xie, “Green's function formulation for third-harmonic generation microscopy,” J. Opt. Soc. Am. B 19(7), 1604–1610 (2002).
25. J. Frank, S. Altmeyer, and G. Wernicke, “Non-interferometric, non-iterative phase retrieval by Green’s functions,” J. Opt. Soc. Am. A 27(10), 2244–2251 (2010).
26. H. M. Guo, S. L. Zhuang, J. B. Chen, and Z. C. Liang, “Imaging theory of an aplanatic system with a stratified medium based on the method for a vector coherent transfer function,” Opt. Lett. 31(20), 2978–2980 (2006).
27. O. Keller, “Attached and radiated electromagnetic fields of an electric point dipole,” J. Opt. Soc. Am. B 16(5), 835–847 (1999).
28. A. K. Zvezdin and V. I. Belotelov, “Electrodynamic Green-function technique for investigating the magneto-optics of low-dimensional systems and nanostructures,” J. Opt. Soc. Am. B 22(1), 228–239 (2005).
29. T. Hakkarainen, T. Setälä, and A. T. Friberg, “Subwavelength electromagnetic near-field imaging of point dipole with metamaterial nanoslab,” J. Opt. Soc. Am. A 26(10), 2226–2234 (2009).
30. P. Martinsson, H. Lajunen, and A. T. Friberg, “Scanning optical near-field resolution analyzed in terms of communication modes,” Opt. Express 14(23), 11392–11401 (2006).
31. L. Novotny and B. Hecht, Principles of Nano-Optics (Cambridge University Press, Cambridge, 2006).
32. T. Setälä, M. Kaivola, and A. T. Friberg, “Decomposition of the point-dipole field into homogeneous and evanescent parts,” Phys. Rev. E 59(1), 1200–1206 (1999).
33. C. A. Balanis, Antenna Theory: Analysis and Design (Wiley, New York, 2005).