Abstract

We present the derivation of the dyadic Green’s function for the aplanatic solid immersion lens based microscopy system. The presented dyadic Green’s function is general and is applicable at non-aplanatic points in the object plane as well. The electromagnetic wave formulation is used to describe the optical system without paraxial assumptions. Various important and useful properties of SIL based microscopy systems are also presented. The effect of the numerical aperture of the objective on the peak intensities, the resolution, and the depth of field is reported, along with some interesting longitudinal effects.

©2011 Optical Society of America

1. Introduction

Although solid immersion lenses (SILs) have been used in optical data storage for a long time [1], their application in subsurface microscopy is relatively new and currently being explored [2]. One motivation for exploring SIL based microscopy is that it is capable of providing better resolution than non-SIL based systems, owing to the high numerical aperture of the solid immersion lens [3,4]. Another motivation is that while most general microscopy systems image the surface of the object, SIL based microscopy can provide sub-surface imaging with good resolution if the refractive index of the object is the same as the refractive index of the SIL. It therefore attracts potential applications in the imaging of solid state devices, in which most structures are buried deep in the substrate; a solid immersion lens made of the same material as the substrate is a suitable imaging method for such samples.

The initial studies for SIL based microscopy systems appeared in [2,4–8], more theoretical aspects were explored in [3,9–19], and experimental results based on SIL based microscopy were reported in [20–23]. Most reported results used ray optics and/or paraxial approximations to study SIL based microscopy systems. While the use of point spread functions and aberration functions is common in optics, the utility of the dyadic Green’s function has also been demonstrated [24–32]. The dyadic Green’s functions are based on electromagnetic theory. Thus, they have greater applicability than the point spread function (in non-focal regions as well) and can be used to explain or predict the behavior of the system in many scenarios.

In this paper, we present a derivation of the dyadic Green’s function of the aplanatic SIL based microscopy system using electromagnetic theory. To our knowledge, this is the first time that the dyadic Green’s function for the aplanatic SIL is being reported. Using the dyadic Green’s function, we compare the behavior of SIL based and non-SIL based microscopy systems in the presence of a single dipole source (equivalent to a point spread function). The impact of using a SIL on resolution enhancement and depth of focus is also demonstrated using numerical simulations.

2. Setup and notation

For convenience, we first introduce the mathematical notation used to represent physical quantities. Physical vectors like the dipole current element, the electric fields, and the position vectors are represented with arrows above them, for example $\vec{r}$. The subscripts denote the specific characteristics of the quantity. For example, rCCD denotes the position vector of a point in the charge-coupled device (CCD) region, where the image is formed. Matrices are shown using bold upright notation; for example, GPSF is the dyadic Green’s function (a tensor) mapping a current dipole in the object region to the electric fields in the image plane (CCD region). Scalars are shown in italics; for example, ρ is the radial distance of a point in the object plane from the origin in the object plane. In addition, points are assigned labels (italic capital single letters, like A, OCCD, etc.), and the same point may be denoted by different vectors in different coordinate systems. For example, robjA denotes the point A in the objective coordinate system, while rSILA denotes the same point in the SIL coordinate system.

In the setup, the longitudinal direction is represented as the z^ axis. The SIL is represented using its radius R and its refractive index nSIL. The refractive index of the SIL is the same as that of the object to be imaged. The region to be imaged is referred to as the object region for convenience of reference. The objective lens is represented using its focal length fobj, its semi-aperture angle θmax, and its refractive index nobj. The image region is the region containing the CCD screen. The CCD screen is in the focal region of the CCD lens, which is characterized by its focal length fCCD and the refractive index of the image region nCCD.

The setup can be understood as comprising three different coordinate systems:

  • 1. SIL coordinate system: The aplanatic point of the SIL is considered as the origin for this system. It is represented using OSIL. This point is different from the geometric center of the SIL, which is represented using CSIL. A spherical coordinate system is used for representing the points in this coordinate system. Thus, rSIL is represented using (rSIL,θSIL,ϕSIL), which are the radial distance of the point from OSIL, the angle of elevation, and the azimuthal angle respectively. The unit vectors are represented as (r^SIL,θ^SIL,ϕ^SIL). Alternatively, a Cartesian coordinate system may be used, where rSIL is represented using (xSIL,ySIL,zSIL). For defining the dipole current element p, we use the unit vectors (x^SIL,y^SIL,z^SIL).
  • 2. Objective coordinate system: The objective is represented using the Gaussian reference sphere (GRS). The focal point of the objective is considered as the origin Oobj of the objective coordinate system. A spherical coordinate system is used for representing the points in the object region with reference to the objective coordinate system. Accordingly, robj is represented using (robj,θobj,ϕobj), which are the radial distance of the point from Oobj, the angle of elevation, and the azimuthal angle respectively. The unit vectors are represented as (r^obj,θ^obj,ϕ^obj).
  • 3. CCD coordinate system: The CCD lens is also represented using a Gaussian reference sphere. The focal point of the CCD lens is considered as the origin OCCD of the CCD coordinate system. A spherical coordinate system is used for representing the points in the CCD region. Accordingly, a point rCCD is represented using (rCCD,θCCD,ϕCCD), which are the radial distance of the point from OCCD, the angle of elevation, and the azimuthal angle respectively. The unit vectors are represented as (r^CCD,θ^CCD,ϕ^CCD). Alternatively, a Cartesian coordinate system is used, where rCCD is represented using (xCCD,yCCD,zCCD).

3. Dyadic Green’s function

The dyadic Green’s function maps a dipole current in the object plane to the electric fields in the CCD region. The derivation of the dyadic Green’s function is presented here. For a dipole p(r′SIL) located at r′SIL close to the aplanatic point OSIL (r′SIL ≪ R), the electric field inside the SIL region close to the interface (such that the far-field approximation is valid) is given as [33]

$$\vec{E}_{SIL}(\vec{r}_{SIL},\vec{r}\,'_{SIL})\approx\omega^{2}\mu_{0}\left(\hat{\theta}_{SIL}\hat{\theta}_{SIL}+\hat{\phi}_{SIL}\hat{\phi}_{SIL}\right)\frac{\exp(ik_{SIL}r_{SIL})}{4\pi r_{SIL}}\exp\!\left(-i\vec{k}_{SIL}\cdot\vec{r}\,'_{SIL}\right)\cdot\vec{p}(\vec{r}\,'_{SIL})\tag{1}$$

The above expression represents a single ray, the direction of which is specified by (θ^SIL, ϕ^SIL). We highlight that the far-field approximation, valid since the field is evaluated at distances rSIL ≈ R ≫ λ, is used above. We repeat for convenience of reference that OSIL is the origin of the SIL coordinate system and is also the aplanatic point of the SIL. The assumption r′SIL ≪ R implies that though the location of the dipole may be far from the aplanatic point of the SIL in terms of wavelengths, the above far-field approximation remains valid after the refraction at the SIL interface, and the refracted wave can be treated as a wave converging very close to the focal point of the objective Oobj. Assuming that the radius of the SIL is much larger than the wavelength, for each ray specified by (θ^SIL, ϕ^SIL) the interface between the SIL and objective regions can be considered locally planar, and Fresnel transmission coefficients can be used. Thus, after the refraction at a point A on the SIL interface, the electric field in the objective region is given by

$$\vec{E}_{obj}(\vec{r}_{obj}^{A},\vec{r}\,'_{SIL})\approx\omega^{2}\mu_{0}\left(t_{p}^{A}\hat{\theta}_{obj}^{A}\hat{\theta}_{SIL}^{A}+t_{s}^{A}\hat{\phi}_{obj}^{A}\hat{\phi}_{SIL}^{A}\right)\frac{\exp(ik_{obj}r_{obj}^{A})}{4\pi r_{obj}^{A}}\exp\!\left(-i\vec{k}_{SIL}\cdot\vec{r}\,'_{SIL}\right)\cdot\vec{p}(\vec{r}\,'_{SIL})\tag{2}$$

where tsA and tpA are the transmission coefficients at the point A for ‘s’ and ‘p’ polarizations respectively. For the point A on the interface, the transmission coefficients are given as below:

$$t_{s}^{A}=\frac{2n_{SIL}\cos\gamma_{SIL}^{A}}{n_{SIL}\cos\gamma_{SIL}^{A}+n_{obj}\cos\gamma_{obj}^{A}}\,\exp\!\left(ik_{SIL}r_{SIL}^{A}-ik_{obj}r_{obj}^{A}\right)\frac{r_{obj}^{A}}{r_{SIL}^{A}}\tag{3}$$

$$t_{p}^{A}=\frac{2n_{SIL}\cos\gamma_{SIL}^{A}}{n_{obj}\cos\gamma_{SIL}^{A}+n_{SIL}\cos\gamma_{obj}^{A}}\,\exp\!\left(ik_{SIL}r_{SIL}^{A}-ik_{obj}r_{obj}^{A}\right)\frac{r_{obj}^{A}}{r_{SIL}^{A}}\tag{4}$$

where γ^A_SIL and γ^A_obj are the angles made by the wave vectors kSIL and kobj with the normal vector at the interface. We highlight that the above coefficients are slightly different from the Fresnel transmission coefficients. This is because the spherical waves of the form exp(ikr)/(4πr) in Eqs. (1) and (2) use different coordinate systems, and the required normalization is incorporated into the transmission coefficients defined in Eqs. (3) and (4). Under the assumption r′SIL ≪ R, we have γ^A_SIL ≈ θ^A_obj, γ^A_obj ≈ θ^A_SIL, and r^A_obj/r^A_SIL = nSIL/nobj, so the transmission coefficients can be simplified to

$$t_{s}^{A}=\frac{2n_{SIL}\cos\theta_{obj}^{A}}{n_{SIL}\cos\theta_{obj}^{A}+n_{obj}\cos\theta_{SIL}^{A}}\,\frac{n_{SIL}}{n_{obj}}\tag{5}$$

$$t_{p}^{A}=\frac{2n_{SIL}\cos\theta_{obj}^{A}}{n_{obj}\cos\theta_{obj}^{A}+n_{SIL}\cos\theta_{SIL}^{A}}\,\frac{n_{SIL}}{n_{obj}}\tag{6}$$

After passing through the objective lens, the wave travels parallel to the optic axis, reaches the CCD lens, and is collected at the focus of the CCD lens. The electric field at a point B on the Gaussian reference sphere representing the CCD lens, ECCD(θ^B_CCD, ϕ^B_CCD), is given as

$$\vec{E}_{CCD}(\theta_{CCD}^{B},\phi_{CCD}^{B})\approx\left[(\vec{E}_{obj}\cdot\hat{s}_{obj})\,\hat{s}_{CCD}+(\vec{E}_{obj}\cdot\hat{p}_{obj})\,\hat{p}_{CCD}\right]\sqrt{\frac{n_{obj}\cos\theta_{CCD}^{B}}{n_{CCD}\cos\theta_{obj}^{A}}}\tag{7}$$

where s^obj and s^CCD are the unit vectors along the electric fields for the ‘s’ polarization in the objective and CCD regions, while p^obj and p^CCD are the unit vectors along the electric fields for the ‘p’ polarization in the objective and CCD regions. Using the fact that s^obj = s^CCD = ϕ^A_SIL = ϕ^A_obj = ϕ^B_CCD, p^obj = θ^A_obj, and p^CCD = θ^B_CCD,

$$\begin{aligned}\vec{E}_{CCD}(\theta_{CCD}^{B},\phi_{CCD}^{B})&=\omega^{2}\mu_{0}\frac{\exp(ik_{obj}f_{obj})}{4\pi f_{obj}}\exp\!\left(-i\vec{k}_{SIL}\cdot\vec{r}\,'_{SIL}\right)\sqrt{\frac{n_{obj}\cos\theta_{CCD}^{B}}{n_{CCD}\cos\theta_{obj}^{A}}}\left(t_{p}^{A}\hat{\theta}_{CCD}^{B}\hat{\theta}_{SIL}^{A}+t_{s}^{A}\hat{\phi}_{SIL}^{A}\hat{\phi}_{SIL}^{A}\right)\cdot\vec{p}(\vec{r}\,'_{SIL})\\&=\omega^{2}\mu_{0}\frac{\exp(ik_{obj}f_{obj})}{8\pi f_{obj}}\exp\!\left(-i\vec{k}_{SIL}\cdot\vec{r}\,'_{SIL}\right)\sqrt{\frac{n_{obj}\cos\theta_{CCD}^{B}}{n_{CCD}\cos\theta_{obj}^{A}}}\left[\bar{E}_{x}\;\;\bar{E}_{y}\;\;\bar{E}_{z}\right]\vec{p}(\vec{r}\,'_{SIL})\end{aligned}\tag{8}$$

where

$$\bar{E}_{x}=\begin{bmatrix}\left(t_{s}^{A}+t_{p}^{A}\cos\theta_{CCD}^{B}\cos\theta_{SIL}^{A}\right)-\left(t_{s}^{A}-t_{p}^{A}\cos\theta_{CCD}^{B}\cos\theta_{SIL}^{A}\right)\cos2\phi_{SIL}^{A}\\-\left(t_{s}^{A}-t_{p}^{A}\cos\theta_{CCD}^{B}\cos\theta_{SIL}^{A}\right)\sin2\phi_{SIL}^{A}\\-2t_{p}^{A}\sin\theta_{CCD}^{B}\cos\theta_{SIL}^{A}\cos\phi_{SIL}^{A}\end{bmatrix}$$

$$\bar{E}_{y}=\begin{bmatrix}-\left(t_{s}^{A}-t_{p}^{A}\cos\theta_{CCD}^{B}\cos\theta_{SIL}^{A}\right)\sin2\phi_{SIL}^{A}\\\left(t_{s}^{A}+t_{p}^{A}\cos\theta_{CCD}^{B}\cos\theta_{SIL}^{A}\right)+\left(t_{s}^{A}-t_{p}^{A}\cos\theta_{CCD}^{B}\cos\theta_{SIL}^{A}\right)\cos2\phi_{SIL}^{A}\\-2t_{p}^{A}\sin\theta_{CCD}^{B}\cos\theta_{SIL}^{A}\sin\phi_{SIL}^{A}\end{bmatrix}$$

$$\bar{E}_{z}=2t_{p}^{A}\sin\theta_{SIL}^{A}\begin{bmatrix}-\cos\theta_{CCD}^{B}\cos\phi_{SIL}^{A}\\-\cos\theta_{CCD}^{B}\sin\phi_{SIL}^{A}\\\sin\theta_{CCD}^{B}\end{bmatrix}\tag{9}$$

Finally, inside the CCD region, at a generic point rCCD, the field is given as

$$\vec{E}_{CCD}(\vec{r}_{CCD})=\frac{ik_{CCD}f_{CCD}\exp(-ik_{CCD}f_{CCD})}{2\pi}\iint_{\Omega_{CCD}}\vec{E}_{CCD}(\theta_{CCD}^{B},\phi_{CCD}^{B})\exp\!\left(i\vec{k}_{CCD}\cdot\vec{r}_{CCD}\right)\sin\theta_{CCD}^{B}\,d\theta_{CCD}^{B}\,d\phi_{CCD}^{B}\tag{10}$$

The above can be simplified as follows:

$$\vec{E}_{CCD}(\vec{r}_{CCD})=\omega^{2}\mu_{0}\,\mathbf{G}_{PSF}\,\vec{p}(\vec{r}\,'_{SIL})\tag{11}$$

where, assuming $f_{CCD}\gg f_{obj}$,

$$\mathbf{G}_{PSF}=\alpha\begin{bmatrix}I_{0}+I_{21}&I_{22}&-2iI_{11}\\I_{22}&I_{0}-I_{21}&-2iI_{12}\\0&0&0\end{bmatrix}\tag{12}$$

and

$$\begin{aligned}\alpha&=\frac{ik_{CCD}}{8\pi}\,\frac{f_{obj}}{f_{CCD}}\left(\frac{n_{obj}}{n_{CCD}}\right)^{1/2}\exp\!\left(i(k_{obj}f_{obj}-k_{CCD}f_{CCD})\right)\\I_{0}&=\int_{0}^{\theta_{max}}\sin\theta_{obj}^{A}\sqrt{\cos\theta_{obj}^{A}}\,\left(t_{s}^{A}+t_{p}^{A}\cos\theta_{SIL}^{A}\right)J_{0}(\rho)\exp(iz)\,d\theta_{obj}^{A}\\I_{11}&=\int_{0}^{\theta_{max}}\sin\theta_{obj}^{A}\sqrt{\cos\theta_{obj}^{A}}\,\left(t_{p}^{A}\sin\theta_{SIL}^{A}\right)J_{1}(\rho)\exp(iz)\cos\psi\,d\theta_{obj}^{A}\\I_{12}&=\int_{0}^{\theta_{max}}\sin\theta_{obj}^{A}\sqrt{\cos\theta_{obj}^{A}}\,\left(t_{p}^{A}\sin\theta_{SIL}^{A}\right)J_{1}(\rho)\exp(iz)\sin\psi\,d\theta_{obj}^{A}\\I_{21}&=\int_{0}^{\theta_{max}}\sin\theta_{obj}^{A}\sqrt{\cos\theta_{obj}^{A}}\,\left(t_{s}^{A}-t_{p}^{A}\cos\theta_{SIL}^{A}\right)J_{2}(\rho)\exp(iz)\cos2\psi\,d\theta_{obj}^{A}\\I_{22}&=\int_{0}^{\theta_{max}}\sin\theta_{obj}^{A}\sqrt{\cos\theta_{obj}^{A}}\,\left(t_{s}^{A}-t_{p}^{A}\cos\theta_{SIL}^{A}\right)J_{2}(\rho)\exp(iz)\sin2\psi\,d\theta_{obj}^{A}\end{aligned}\tag{13}$$

where

$$\begin{aligned}\rho&=\sqrt{x^{2}+y^{2}};\qquad\psi=\tan^{-1}\frac{y}{x}\\x&=k_{CCD}\sin\theta_{CCD}^{B}\,x_{CCD}-k_{SIL}\sin\theta_{SIL}^{A}\,x'_{SIL}\\y&=k_{CCD}\sin\theta_{CCD}^{B}\,y_{CCD}-k_{SIL}\sin\theta_{SIL}^{A}\,y'_{SIL}\\z&=k_{CCD}\cos\theta_{CCD}^{B}\,z_{CCD}-k_{SIL}\cos\theta_{SIL}^{A}\,z'_{SIL}\end{aligned}\tag{14}$$

The dyadic Green’s function of the SIL based microscopy system for an arbitrarily located dipole is therefore given by Eqs. (12)–(14), where the elements in the last row of Eq. (12) are zero under the assumption fCCD ≫ fobj, which is true for most practical microscopy systems. The only condition is that the wavelength of the wave and the distance of the dipole from the aplanatic point are small in comparison with the radius of the SIL, λ, r′SIL ≪ R.
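As a quick numerical sanity check, the integrals of Eq. (13) can be evaluated directly with off-the-shelf quadrature. The sketch below (Python with NumPy/SciPy; the parameter values are the silicon-SIL settings used in section 5, and the function names are our own) computes the intensity along the CCD x axis for an x̂ directed dipole at the aplanatic point, for which the x field component is proportional to I0 + I21 by Eq. (12):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

# Silicon-SIL parameters used in the paper's simulations (section 5)
n_obj, n_SIL, n_CCD = 1.0, 3.5, 1.0
f_obj, f_CCD = 1e-2, 1e-1           # focal lengths [m]
lam = 1.34e-6                       # wavelength [m]
k_SIL, k_CCD = 2*np.pi*n_SIL/lam, 2*np.pi*n_CCD/lam
theta_max = np.arcsin(n_obj/n_SIL)  # total-internal-reflection limit (NA_max case)

def diffraction_integral(order, weight, x_ccd, z_ccd=0.0):
    """One of the I integrals of Eq. (13) for a dipole at the aplanatic point,
    evaluated at the CCD point (x_ccd, 0, z_ccd), where psi = 0."""
    def f(th):
        th_sil = np.arcsin(min((n_SIL/n_obj)*np.sin(th), 1.0))  # aplanatic condition
        th_ccd = np.arcsin((f_obj/f_CCD)*np.sin(th))            # sine condition at CCD lens
        ts = 2*n_SIL*np.cos(th)/(n_SIL*np.cos(th) + n_obj*np.cos(th_sil))*(n_SIL/n_obj)
        tp = 2*n_SIL*np.cos(th)/(n_obj*np.cos(th) + n_SIL*np.cos(th_sil))*(n_SIL/n_obj)
        w = ts + tp*np.cos(th_sil) if weight == 's+p' else ts - tp*np.cos(th_sil)
        rho = abs(k_CCD*np.sin(th_ccd)*x_ccd)
        z = k_CCD*np.cos(th_ccd)*z_ccd
        return np.sin(th)*np.sqrt(np.cos(th))*w*jv(order, rho)*np.exp(1j*z)
    re = quad(lambda t: f(t).real, 0.0, theta_max)[0]
    im = quad(lambda t: f(t).imag, 0.0, theta_max)[0]
    return re + 1j*im

def intensity_x(x_ccd):
    """Intensity (up to constant factors) along the CCD x axis: E_x ~ I0 + I21."""
    return abs(diffraction_integral(0, 's+p', x_ccd)
               + diffraction_integral(2, 's-p', x_ccd))**2

peak = intensity_x(0.0)
print(intensity_x(0.0)/peak)  # 1.0 at the image of the aplanatic point
```

The intensity falls off away from the image point, consistent with the point spreads plotted in section 5.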

For sinθobj > nobj/nSIL, total internal reflection of the electric field occurs at the interface of the SIL. Thus, the valid range of θobj is given by sinθobj ≤ nobj/nSIL, which implies that the maximum numerical aperture of an objective in a SIL based microscopy system is given by:

$$NA_{max}=\frac{n_{obj}^{2}}{n_{SIL}}\tag{15}$$
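As a minimal sketch, Eq. (15) can be checked directly; the numerical values below are the silicon-SIL case used later in section 5:

```python
import math

# Eq. (15): the largest usable objective NA before total internal reflection
# occurs at the SIL interface (silicon SIL in air).
n_obj, n_SIL = 1.0, 3.5

theta_obj_max = math.asin(n_obj/n_SIL)  # TIR limit on the objective-side angle
NA_max = n_obj*math.sin(theta_obj_max)  # = n_obj^2 / n_SIL
print(round(NA_max, 4))                 # 0.2857
```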

3.1 Definition of non-SIL based microscopy system used in sections 5 to 8

The above derivation can also be used for a non-SIL based system, as shown in Fig. 1(c), using the substitutions nSIL = nobj, θSIL = θobj, and OSIL = Oobj. In such a system, the whole object region is replaced with air: there is no substrate or sample and no SIL, and the dipole to be imaged is simply present in air. This allows the two microscopy systems to be compared in a fair manner. We concede that this may not be practically realizable, but such theoretical experiments provide the most honest comparison of the two systems and their performance parameters. Unless mentioned explicitly otherwise, when the expression NAmax is used for the non-SIL based microscopy system, it means the maximum numerical aperture of the objective for the corresponding SIL based system; it should not be confused with the maximum numerical aperture of a free space imaging system.


Fig. 1 The setup of the SIL based microscopy system. (a) Diagram showing various interfaces and the path of a ray travelling through these interfaces. Various regions, coordinate systems, and important angles are also indicated. The horizontal axis shown above is the longitudinal axis z^. (b) A practical representation of the SIL based microscopy system. (c) The non-SIL based system obtained by substituting nSIL = nobj, which is used in all the numerical experiments presented in sections 5–8. (d) Illustration of the angles γ^A_SIL and γ^A_obj used in Eqs. (3) and (4).


4. Paraxial approximation and magnification

In this section, for convenience, we drop the superscripts A and B, as their meaning is evident even without them. In the paraxial approximation, θmax → 0. Then Eqs. (5) and (6) can be simplified to

$$t=t_{s}=t_{p}=\frac{2n_{SIL}}{n_{SIL}+n_{obj}}\,\frac{n_{SIL}}{n_{obj}}\tag{16}$$

Further, $\sin\theta_{SIL}\approx\theta_{obj}\,n_{SIL}/n_{obj}$, $\sin\theta_{CCD}\approx\theta_{obj}\,f_{obj}/f_{CCD}$, $\cos\theta_{obj}\approx1-\theta_{obj}^{2}/2$, $\cos\theta_{SIL}\approx1-0.5(\theta_{obj}n_{SIL}/n_{obj})^{2}$, and $\cos\theta_{CCD}\approx1-0.5(\theta_{obj}f_{obj}/f_{CCD})^{2}$. Using these, Eq. (14) can be simplified to

$$\begin{aligned}x&=\tilde{x}\theta_{obj};\qquad\tilde{x}=\frac{k_{obj}}{M}\left(x_{CCD}-M\left(\frac{n_{SIL}}{n_{obj}}\right)^{2}x'_{SIL}\right)\\y&=\tilde{y}\theta_{obj};\qquad\tilde{y}=\frac{k_{obj}}{M}\left(y_{CCD}-M\left(\frac{n_{SIL}}{n_{obj}}\right)^{2}y'_{SIL}\right)\\z&=\hat{z}+\theta_{obj}^{2}\tilde{z};\qquad\hat{z}=k_{CCD}z_{CCD}-k_{SIL}z'_{SIL},\qquad\tilde{z}=-\frac{1}{2}\left(k_{CCD}z_{CCD}\left(\frac{f_{obj}}{f_{CCD}}\right)^{2}-k_{SIL}z'_{SIL}\left(\frac{n_{SIL}}{n_{obj}}\right)^{2}\right)\\\rho&=\theta_{obj}\tilde{\rho};\qquad\tilde{\rho}=\sqrt{\tilde{x}^{2}+\tilde{y}^{2}};\qquad\psi=\tan^{-1}\frac{\tilde{y}}{\tilde{x}}\end{aligned}\tag{17}$$

and $M=\frac{n_{obj}}{n_{CCD}}\frac{f_{CCD}}{f_{obj}}$ is the lateral magnification of a general microscopy system without the SIL.

4.1 Lateral Magnification

Now, in order to derive the lateral magnification, we consider that r′SIL lies in the object plane z′SIL = 0. Also, we fix the image plane to be zCCD = 0. Thus, Eq. (13) can be written as

$$I_{0}\approx t\int_{0}^{\theta_{max}}\theta_{obj}\left(1-\frac{\theta_{obj}^{2}}{4}\right)\left(2-\frac{1}{2}\left(\frac{n_{SIL}}{n_{obj}}\right)^{2}\theta_{obj}^{2}\right)J_{0}(\rho)\,d\theta_{obj}\approx2t\int_{0}^{\theta_{max}}\theta_{obj}J_{0}(\tilde{\rho}\theta_{obj})\,d\theta_{obj}=2t\theta_{max}^{2}\,\frac{J_{1}(\tilde{\rho}\theta_{max})}{\tilde{\rho}\theta_{max}}\tag{18}$$

$I_{11},I_{12}\approx\int_{0}^{\theta_{max}}O(\theta_{obj}^{3})\,d\theta_{obj}\approx0$, and $I_{21},I_{22}\approx\int_{0}^{\theta_{max}}O(\theta_{obj}^{5})\,d\theta_{obj}\approx0$. Thus, the PSF is given as

$$\mathbf{G}_{PSF}=\alpha\,2t\theta_{max}^{2}\,\frac{J_{1}(\tilde{\rho}\theta_{max})}{\tilde{\rho}\theta_{max}}\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}\tag{19}$$

It is evident that for the point r′SIL, the maximum intensity occurs when ρ̃ = 0, which implies xCCD = M(nSIL/nobj)² x′SIL and yCCD = M(nSIL/nobj)² y′SIL. Thus, the lateral magnification of the SIL based microscopy system is

$$M_{SIL}^{lat}=M\left(\frac{n_{SIL}}{n_{obj}}\right)^{2}\tag{20}$$
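The paraxial lateral PSF in Eq. (19) is the familiar Airy-type pattern |2J1(u)/u|² with u = ρ̃θmax, so its half-maximum point can be found numerically. The sketch below (Python/SciPy; an illustrative calculation of ours, not code from the paper) converts it into an object-referred FWHM by dividing the CCD coordinate by the lateral magnification of Eq. (20), for the silicon-SIL parameters of section 5; the paraxial estimate is close to the 0.142λ–0.156λ FWHM values obtained from the full dyadic calculation there:

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import brentq

n_obj, n_SIL = 1.0, 3.5
theta_max = np.arcsin(n_obj/n_SIL)   # NA_max configuration

# Paraxial lateral intensity of Eq. (19): |2 J1(u)/u|^2 with u = rho_tilde*theta_max
airy = lambda u: (2*j1(u)/u)**2
u_half = brentq(lambda u: airy(u) - 0.5, 1e-6, 3.0)   # half-maximum point, ~1.6163

# Object-referred half width: u = k_obj*(n_SIL/n_obj)^2*theta_max*x_obj with
# k_obj = 2*pi/lam, after dividing the CCD coordinate by M*(n_SIL/n_obj)^2.
fwhm_wavelengths = 2*u_half/(2*np.pi*(n_SIL/n_obj)**2*theta_max)
print(round(fwhm_wavelengths, 3))    # 0.145
```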

4.2 Longitudinal magnification

Now, for determining the longitudinal magnification, we consider that r′SIL lies on the longitudinal axis, i.e., x′SIL = y′SIL = 0. Since we are interested in the longitudinal direction only, we fix xCCD = yCCD = 0. Thus, ρ = 0. Due to the property of the Bessel functions that J1(0) = J2(0) = 0, we have I11 = I12 = I21 = I22 = 0 in Eq. (13), and I0 is given as:

$$I_{0}\approx t\exp(i\hat{z})\int_{0}^{\theta_{max}}\theta_{obj}\left(1-\frac{\theta_{obj}^{2}}{4}\right)\left(2-\frac{1}{2}\left(\frac{n_{SIL}}{n_{obj}}\right)^{2}\theta_{obj}^{2}\right)\exp(i\tilde{z}\theta_{obj}^{2})\,d\theta_{obj}\approx2t\exp(i\hat{z})\int_{0}^{\theta_{max}}\theta_{obj}\exp(i\tilde{z}\theta_{obj}^{2})\,d\theta_{obj}=t\theta_{max}^{2}\,\frac{\sin(\tilde{z}\theta_{max}^{2}/2)}{\tilde{z}\theta_{max}^{2}/2}\exp\!\left(i\frac{\tilde{z}\theta_{max}^{2}}{2}+i\hat{z}\right)\tag{21}$$

Thus, the PSF is given as

$$\mathbf{G}_{PSF}=\alpha t\theta_{max}^{2}\,\frac{\sin(\tilde{z}\theta_{max}^{2}/2)}{\tilde{z}\theta_{max}^{2}/2}\exp\!\left(i\frac{\tilde{z}\theta_{max}^{2}}{2}+i\hat{z}\right)\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}\tag{22}$$

From the above, it is evident that the maximum intensity for r′SIL occurs when z̃ = 0, which implies

$$z_{CCD}=\frac{n_{SIL}}{n_{CCD}}\left(\frac{f_{CCD}}{f_{obj}}\right)^{2}\left(\frac{n_{SIL}}{n_{obj}}\right)^{2}z'_{SIL}=M'\left(\frac{n_{SIL}}{n_{obj}}\right)^{3}z'_{SIL}\tag{23}$$

where $M'=\frac{n_{CCD}}{n_{obj}}M^{2}$ is the longitudinal magnification of a general microscopy system without a SIL. Thus, the longitudinal magnification of the SIL based microscopy system is

$$M_{SIL}^{lon}=\left(\frac{n_{SIL}}{n_{obj}}\right)^{3}M'\tag{24}$$
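The axial sinc-type profile in Eq. (22), together with the definition of z̃ in Eq. (17), also gives a paraxial estimate of the longitudinal extent of the point spread. The sketch below (Python/SciPy; our own illustrative calculation for the NAmax silicon-SIL case of section 5, with the image plane fixed at zCCD = 0 and the dipole displaced along z) yields a value close to the 0.512λ longitudinal FWHM reported from the full calculation in section 5:

```python
import numpy as np
from scipy.optimize import brentq

n_obj, n_SIL = 1.0, 3.5
theta_max = np.arcsin(n_obj/n_SIL)       # NA_max configuration

# Axial intensity of Eq. (22): |sin(u)/u|^2 with u = z_tilde*theta_max^2/2
sinc2 = lambda u: (np.sin(u)/u)**2
u_half = brentq(lambda u: sinc2(u) - 0.5, 1e-6, np.pi)   # half-maximum point, ~1.3916

# With z_CCD = 0, Eq. (17) gives z_tilde = k_SIL*(n_SIL/n_obj)^2*z'/2, so the
# object-side axial FWHM in wavelengths (k_SIL = 2*pi*n_SIL/lam) is:
fwhm_wavelengths = 8*u_half/(theta_max**2*2*np.pi*n_SIL*(n_SIL/n_obj)**2)
print(round(fwhm_wavelengths, 3))         # 0.492
```

Since u_half is fixed, this paraxial depth scales as 1/θmax², which anticipates the depth-of-focus behavior discussed in section 5.2.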

5. Basic characteristics of the dyadic Green’s function for a single dipole source

The intensity at a point rCCD is given by I(rCCD) = |E(rCCD)|², where |·| denotes the Euclidean norm. For convenience, we use the normalized quantity In(rCCD) = I(rCCD)/|ω²μ0α|², such that, using Eqs. (11) and (12), In(rCCD) accounts for only the integrals in Eq. (13). All the plots in this paper show the intensity In(rCCD).

In the numerical simulations below, we consider the following settings. The SIL is made of silicon and has a refractive index of nSIL = 3.5; this is useful for subsurface imaging of silicon chips. The radius of the SIL is R = 500×10^−6 m. The wavelength of the optical signal is chosen as λ = 1.340×10^−6 m. The parameters of the objective are nobj = 1 and fobj = 1×10^−2 m. The receiving lens in the CCD region has the parameters nCCD = 1 and fCCD = 1×10^−1 m. When unspecified, the numerical aperture of the objective is given by Eq. (15); thus, in general, NAobj = NAmax = 0.2857. If other values of the numerical aperture are used, they are specified explicitly in the figures and the corresponding discussions.
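For these settings, the derived system constants take the following values (a small consistency check; the variable names are ours):

```python
# Parameters from the text above (silicon SIL, air on both lens sides)
n_obj, n_SIL, n_CCD = 1.0, 3.5, 1.0
f_obj, f_CCD = 1e-2, 1e-1

NA_max = n_obj**2/n_SIL                      # Eq. (15)
M = (n_obj/n_CCD)*(f_CCD/f_obj)              # lateral magnification without SIL
M_lat = M*(n_SIL/n_obj)**2                   # Eq. (20)
M_lon = (n_SIL/n_obj)**3*(n_CCD/n_obj)*M**2  # Eq. (24), with M' = (n_CCD/n_obj)*M^2

print(round(NA_max, 4), round(M_lat, 1), round(M_lon, 1))  # 0.2857 122.5 4287.5
```

The very large longitudinal magnification (≈4288 versus ≈122 laterally) is behind the longitudinal effects discussed below.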

Considering that there is a synthetic dipole source at the aplanatic point, r′SIL = 0, we use the dyadic Green’s function, Eq. (12), to compute the intensity in the CCD region. First we consider p(r′SIL) = x̂SIL. The intensity along the x axis in the CCD region (i.e., yCCD = zCCD = 0) and along the y axis in the CCD region (i.e., xCCD = zCCD = 0) are plotted in Fig. 2(a). The full width at half maximum (FWHM) is 0.156λ along the x axis and 0.142λ along the y axis. The first zero occurs at about 0.19λ and 0.165λ away from the peak for the x and y axes, respectively. The intensity along the longitudinal direction is plotted in Fig. 2(b). Though the FWHM and zeros along the longitudinal direction are not generally used in practice, we mention them for the sake of completeness: the FWHM and the location of the first zero away from the peak are 0.512λ and 0.58λ, respectively. For a ŷ directed dipole, the plots are similar to Figs. 2(a,b), except that the plots along the x and y axes in Fig. 2(a) are interchanged. Next, we consider r′SIL = 0 and p(r′SIL) = ẑSIL. The intensity along the x axis in the CCD region, i.e., yCCD = zCCD = 0, is plotted in Fig. 2(c).


Fig. 2 Basic characteristics of the dyadic Green’s function of the SIL based microscopy system for NAmax = 0.2857. (a) The intensity along the x and y axes in the CCD region corresponding to an x̂ directed dipole at the aplanatic point; for the case xCCD = 0, ρCCD = yCCD, while for the case yCCD = 0, ρCCD = xCCD. (b) The intensity along the z axis in the CCD region corresponding to an x̂ directed dipole at the aplanatic point. (c) The intensity along the x axis in the CCD region corresponding to a ẑ directed dipole at the aplanatic point.


5.1 Comparison with non-SIL based microscopy system

We compare the SIL based microscopy system with the non-SIL based microscopy system. First, we compare the intensities and point spreads at the aplanatic (or focal) point. For an x̂ directed dipole, the intensities along the x and z axes for both the SIL and non-SIL based systems are plotted in Figs. 3(a,b). The intensity along the x axis for a ẑ directed dipole at the aplanatic (focal) point for the SIL and non-SIL based systems is plotted in Fig. 3(c). In all the figures, we have plotted In/max(In): the peak In for the non-SIL based system is very small in comparison with the SIL based system, and the actual intensities In are not plotted as they do not give a good comparison of the point spread. In all the cases, we see that the point spreads for the SIL and non-SIL based systems are very close.


Fig. 3 Comparison of the dyadic Green’s function of SIL based microscopy system with non-SIL based microscopy system for NA=0.2857. (a) The normalized intensity along the x axis in the CCD region corresponding to a x^ directed dipole at the aplanatic point. (b) The normalized intensity along the z axis in the CCD region corresponding to a x^ directed dipole at the aplanatic point. (c) The normalized intensity along the x axis in the CCD region corresponding to a z^ directed dipole at the aplanatic point.


However, it is well known that the resolution of the SIL based system is significantly better than that of the non-SIL based system [3,4]. Thus, in order to explain the resolution, we provide two perspectives: (1) the effect of magnification and (2) the Rayleigh criterion. Firstly, for explaining the effect of magnification, we move the source away from the aplanatic (or focal) point and observe the image of the source. In the object region, we place an x̂ directed source at r′SIL = (2λ,0,0) and plot the intensities along the x axis for the SIL and non-SIL based microscopy systems in Fig. 4(a). Similarly, we place an x̂ directed source at r′SIL = (0,0,2λ) and plot the intensities along the z axis for the SIL and non-SIL based microscopy systems in Fig. 4(b). It is seen that the shift in the image of the source differs greatly between the SIL and non-SIL based systems: the shift in the SIL based system is much larger than in the non-SIL based system. This happens because of the larger magnification of the SIL based system (see Eqs. (20) and (24)).


Fig. 4 Comparison of the shift in the image of an x̂ directed dipole in the SIL and non-SIL based microscopy systems for NA = 0.2857. (a) For dipole location r′SIL = (2λ,0,0), the intensity along the x axis in the CCD region. (b) For dipole location r′SIL = (0,0,2λ), the intensity along the z axis in the CCD region.


Secondly, considering the Rayleigh criterion as the resolution criterion [4], two such sources are said to be resolved when the saddle-to-peak ratio of the combined point spread function is approximately 0.735. For the SIL based system with NA = 0.2857, we consider two x̂ directed sources placed at r′SIL,1 = (−Δx/2,0,0) and r′SIL,2 = (Δx/2,0,0). When Δx = 0.245λ, the saddle-to-peak ratio is 0.735, which means that, based on the Rayleigh criterion, these two sources can be resolved, as shown in Fig. 5(a). For the non-SIL based system with NA = 0.2857, this value is Δx = 2.874λ, and the image of the resolved sources with Δx = 2.874λ is shown in Fig. 5(c). As shown in Fig. 5(b), the two sources cannot be resolved by the non-SIL based system when Δx = 0.245λ. Therefore, the resolution of the SIL based system is significantly better than that of the non-SIL based system.
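The ≈0.735 saddle-to-peak value quoted above is the classical incoherent Rayleigh criterion, and it can be reproduced with the Airy-type paraxial PSF of Eq. (19): placing the peak of one source on the first zero of the other gives a saddle at roughly 73.5% of the peak. A short sketch (our own illustration, not the paper’s code):

```python
from scipy.special import j1

# Airy-type intensity pattern |2 J1(u)/u|^2 (valid for u != 0)
airy = lambda u: (2*j1(u)/u)**2

u0 = 3.8317                # first zero of J1(u)/u, i.e. the Rayleigh separation
# Two incoherent sources at u = -u0/2 and u = +u0/2: their intensities add.
saddle = 2*airy(u0/2)      # intensity at the midpoint between the two peaks
peak = 1.0 + airy(u0)      # at a peak: its own maximum (=1) plus the partner's tail
print(saddle/peak)         # ~0.735
```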


Fig. 5 Visual demonstration of the resolution limit according to the Rayleigh criterion for two x̂ directed dipoles along the x axis. (a) For the SIL based system, Δx = 0.245λ satisfies the Rayleigh criterion, and the dipoles with Δx = 0.245λ can indeed be resolved by the SIL based system. (b) The non-SIL based system cannot resolve the sources with Δx = 0.245λ. (c) For the non-SIL based system, Δx = 2.874λ satisfies the Rayleigh criterion, and the dipoles with Δx = 2.874λ can be resolved by the non-SIL based system.


5.2 Effect of numerical aperture of the objective

In this subsection, we consider the effect of the numerical aperture of the objective on the SIL-based microscopy system. It is already mentioned that the maximum numerical aperture of the objective that can be used in the SIL-based microscopy system is given by Eq. (15). Now, we see the effect of using a numerical aperture less than NAmax.

In Figs. 6(a,b), we plot the normalized intensities along the x and z axes for an x̂ directed dipole placed at the aplanatic point for various values of the numerical aperture of the objective. It is noted that the point spread is smaller for larger numerical apertures. Figure 6(c) shows the peak intensities of the point spreads for various numerical apertures; the plot for the non-SIL based microscopy system is also included. It is observed that the peak intensity of the point spread is higher for high numerical apertures. This is because the solid angle over which the radiated signal is collected is larger for larger numerical apertures. We also note that the peak intensity is significantly larger for the SIL based microscopy system. This is because, for a particular numerical aperture of the objective, the solid angle in the SIL region is significantly larger (an effectively larger numerical aperture) [3,4,14]. Figure 6(d) plots the FWHM of the SIL and non-SIL based microscopy systems as a function of the numerical aperture. The SIL based microscopy system has a lower FWHM than the non-SIL based microscopy system, and the FWHM is smaller for large values of the numerical aperture. The first property (larger intensity for larger numerical aperture) is desirable for ease of sensing in the CCD plane (and a better signal to noise ratio), while the second property (smaller FWHM for larger numerical aperture) is desirable for resolution.


Fig. 6 Effect of numerical aperture on the point spread function. (a) The normalized intensity along the x axis in the CCD region for various values of numerical aperture. (b) The normalized intensity along the z axis in the CCD region for various values of numerical aperture. (c) The peak intensities of SIL based and non-SIL based microscopy systems for various values of numerical apertures. (d) The full width at half maximum of the point spreads of SIL based and non-SIL based microscopy systems for various values of numerical apertures.


The effect of the numerical aperture is more severe along the longitudinal direction. We consider an x̂ directed dipole placed at various positions along the longitudinal axis. For Fig. 7, we consider six positions z′SIL = {0, 0.2, 0.4, 0.6, 0.8, 1}λ, x′SIL = y′SIL = 0. Figures 7(a,b,c) correspond to numerical apertures NAmax, NA = 0.2, and NA = 0.1, respectively. For NAmax, Fig. 7(a) shows that as we move away from the aplanatic point, there are three observations:


Fig. 7 Effect of the numerical aperture of the objective along the longitudinal axis. We consider six positions z′SIL = {0, 0.2, 0.4, 0.6, 0.8, 1}λ, x′SIL = y′SIL = 0. (a) NAmax (b) NA = 0.2 (c) NA = 0.1.


  • 1. Though the peak of the intensity should be at zCCD = MSILlon z′SIL according to the paraxial approximation, Eq. (24), the actual peaks are not at these locations; they are shifted further away from the expected image point. This observation is clearly visible in Fig. 8(a), which provides a quantitative evaluation of this observation (see the next paragraph for details).

    Fig. 8 Effect of the numerical aperture of the objective along the longitudinal axis. (a) The peak points in the image (CCD) region vs. the actual source locations in the object (SIL) region. The faint gray line shows the expected peak points. (b) The peak intensities in the image (CCD) plane corresponding to the actual locations in the object (SIL) region. For both (a) and (b), the non-SIL based microscopy system uses NA = 0.2857. MSILlon from Eq. (24) is used for the SIL based microscopy system, and M′ = nCCD M²/nobj is used for the non-SIL based microscopy system.


  • 2. The intensity of the image decreases as we move away from the aplanatic point.
  • 3. The side lobe level increases as we move away from the aplanatic point.

Though all these effects are present for NA = 0.2 as well (Fig. 7(b)), they become less prominent (or even absent) for smaller numerical apertures, for example NA = 0.1 (Fig. 7(c)).

Now, we change z′SIL from 0 to 10λ and plot the point where the peak occurs against the actual location of the source in Fig. 8(a). In addition to various numerical apertures for the SIL based system, we also consider the non-SIL based system. A gray line is also plotted, which represents the expected position of the image computed using Eq. (23). It is seen that for a low numerical aperture, NA = 0.1, and for the non-SIL based system, the actual image point is very close to the expected image point. In Fig. 8(b), we plot the peak intensities of the image for z′SIL from 0 to 10λ. It is seen that the peak intensity falls rapidly away from the aplanatic point in the case of NAmax. On the other hand, for the low numerical aperture NA = 0.1 and for the non-SIL based system, the intensity remains flat even away from the aplanatic point.

The above observations have two consequences from the practical point of view. The first is that a better depth of focus can be achieved in the SIL based system by using a lower numerical aperture. The second is that if NAmax is used, the focal region can be very thin and specific, implying that sources away from the focal plane will not affect (blur) the image, thus giving a better image of the focal plane. However, as discussed after Fig. 6, the numerical aperture also impacts the size of the focal spot, and a smaller focal spot is desirable for better resolution. Thus, in practice, all of these considerations need to be taken into account while selecting a suitable numerical aperture of the objective.

9. Conclusion

A derivation of the dyadic Green’s function for the aplanatic SIL based microscopy system is presented for the first time. The paraxial approximations and the derivations of the lateral and longitudinal magnifications are also presented. Using the dyadic Green’s function and the magnifications, various properties of the point spread functions of single dipoles are studied. In addition, using the Rayleigh criterion for resolution, we show that the resolution of the SIL based microscopy system is better than that of the non-SIL based system; in our subsequent work, we investigate the resolution of the SIL based system in detail. In general, such a derivation and study of the properties of the SIL based system are expected to be of great interest for microscopy and lithography applications that use SILs.

Various practically important properties of the SIL based microscopy system are highlighted in the manuscript. First, it is shown that though the point spreads of the SIL and non-SIL based microscopy systems are similar, the magnification plays the important role in the well-documented higher resolution of the SIL based system, as illustrated in section 5.1. Second, the impact of the numerical aperture on the peak intensity detected in the image region and on the FWHM (full width at half maximum, an indication of the resolution) is demonstrated. It is shown that with the use of a high numerical aperture of the objective, the intensity and resolution of the SIL based microscopy system can be greatly enhanced. The maximum available numerical aperture of the SIL based microscopy system is also presented. We highlight that SIL microscope systems can not only obtain images with better resolution than conventional microscopes but also collect more light than a conventional microscope. This property makes them useful when a high NA objective is not available or cannot be used. Further, it is also shown that the SIL based microscopy system deviates from the paraxial approximation and has a small depth of focus for large numerical apertures of the objective. The longitudinal properties of the SIL based system are categorically different from those of non-SIL based systems, and there is a tradeoff between the depth of focus and the longitudinal resolution.

Acknowledgment

This work was supported by the Singapore Ministry of Education (MOE) grant under Project No. MOE2009–T2–2–086.

References and links

1. B. D. Terris, H. J. Mamin, D. Rugar, W. R. Studenmund, and G. S. Kino, “Near-field optical data storage using solid immersion lens,” Appl. Phys. Lett. 65(4), 388–390 (1994).

2. S. B. Ippolito, B. B. Goldberg, and M. S. Unlu, “High spatial resolution subsurface microscopy,” Appl. Phys. Lett. 78(26), 4071–4073 (2001).

3. A. Nickolas Vamivakas, R. D. Younger, B. B. Goldberg, A. K. Swan, M. S. Ünlü, E. R. Behringer, and S. B. Ippolito, “A case study for optics: The solid immersion microscope,” Am. J. Phys. 76(8), 758–768 (2008).

4. Q. Wu, L. P. Ghislain, and V. B. Elings, “Imaging with solid immersion lenses, spatial resolution, and applications,” Proc. IEEE 88(9), 1491–1498 (2000).

5. L. E. Helseth, “Roles of polarization, phase and amplitude in solid immersion lens systems,” Opt. Commun. 191(3-6), 161–172 (2001).

6. R. Brunner, M. Burkhardt, A. Pesch, O. Sandfuchs, M. Ferstl, S. Hohng, and J. O. White, “Diffraction-based solid immersion lens,” J. Opt. Soc. Am. A 21(7), 1186–1191 (2004).

7. C. J. R. Sheppard and A. Choudhury, “Annular pupils, radial polarization, and superresolution,” Appl. Opt. 43(22), 4322–4327 (2004).

8. S. B. Ippolito, B. B. Goldberg, and M. S. Unlu, “Theoretical analysis of numerical aperture increasing lens microscopy,” J. Appl. Phys. 97(5), 053105 (2005).

9. Y. J. Zhang, “Design of high-performance supersphere solid immersion lenses,” Appl. Opt. 45(19), 4540–4546 (2006).

10. C. A. Michaels, “Mid-infrared imaging with a solid immersion lens and broadband laser source,” Appl. Phys. Lett. 90(12), 121131 (2007).

11. E. Ramsay, K. A. Serrels, M. J. Thomson, A. J. Waddie, M. R. Taghizadeh, R. J. Warburton, and D. T. Reid, “Three-dimensional nanoscale subsurface optical imaging of silicon circuits,” Appl. Phys. Lett. 90(13), 131101 (2007).

12. J. Zhang, C. W. See, and M. G. Somekh, “Imaging performance of widefield solid immersion lens microscopy,” Appl. Opt. 46(20), 4202–4208 (2007).

13. S. B. Ippolito, P. Song, D. L. Miles, and J. D. Sylvestri, “Angular spectrum tailoring in solid immersion microscopy for circuit analysis,” Appl. Phys. Lett. 92(10), 101109 (2008).

14. S. H. Goh and C. J. R. Sheppard, “High aperture focusing through a spherical interface: Application to refractive solid immersion lens (RSIL) for subsurface imaging,” Opt. Commun. 282(5), 1036–1041 (2009).

15. S. H. Goh, C. J. R. Sheppard, A. C. T. Quah, C. M. Chua, L. S. Koh, and J. C. H. Phang, “Design considerations for refractive solid immersion lens: application to subsurface integrated circuit fault localization using laser induced techniques,” Rev. Sci. Instrum. 80(1), 013703 (2009). [CrossRef]   [PubMed]  

16. Y. J. Yoon, W. C. Kim, N. C. Park, K. S. Park, and Y. P. Park, “Feasibility study of the application of radially polarized illumination to solid immersion lens-based near-field optics,” Opt. Lett. 34(13), 1961–1963 (2009). [CrossRef]   [PubMed]  

17. D. R. Mason, M. V. Jouravlev, and K. S. Kim, “Enhanced resolution beyond the Abbe diffraction limit with wavelength-scale solid immersion lenses,” Opt. Lett. 35(12), 2007–2009 (2010). [CrossRef]   [PubMed]  

18. L. Wang, M. C. Pitter, and M. G. Somekh, “Wide-field high-resolution solid immersion fluorescence microscopy applying an aplanatic solid immersion lens,” Appl. Opt. 49(31), 6160–6169 (2010). [CrossRef]  

19. K. M. Lim, G. C. F. Lee, C. J. R. Sheppard, J. C. H. Phang, C. L. Wong, and X. D. Chen, “Effect of polarization on a solid immersion lens of arbitrary thickness,” J. Opt. Soc. Am. A 28(5), 903–911 (2011). [CrossRef]   [PubMed]  

20. S. Y. Yim, J. H. Kim, and J. Lee, “Solid Immersion lens microscope for spectroscopy of nanostructure materials,” J. Opt. Soc. Korea 15(1), 78–81 (2011). [CrossRef]  

21. K. A. Serrels, E. Ramsay, R. J. Warburton, and D. T. Reid, “Nanoscale optical microscopy in the vectorial focusing regime,” Nat. Photonics 2(5), 311–314 (2008). [CrossRef]  

22. J. Zhang, Y. Kim, S. H. Yang, and T. D. Milster, “Illumination artifacts in hyper-NA vector imaging,” J. Opt. Soc. Am. A 27(10), 2272–2284 (2010). [CrossRef]   [PubMed]  

23. F. H. Köklü and M. S. Unlü, “Subsurface microscopy of interconnect layers of an integrated circuit,” Opt. Lett. 35(2), 184–186 (2010). [CrossRef]   [PubMed]  

24. J. X. Cheng and X. S. Xie, “Green's function formulation for third-harmonic generation microscopy,” J. Opt. Soc. Am. B 19(7), 1604–1610 (2002). [CrossRef]  

25. J. Frank, S. Altmeyer, and G. Wernicke, “Non-interferometric, non-iterative phase retrieval by Green’s functions,” J. Opt. Soc. Am. A 27(10), 2244–2251 (2010). [CrossRef]   [PubMed]  

26. H. M. Guo, S. L. Zhuang, J. B. Chen, and Z. C. Liang, “Imaging theory of an aplanatic system with a stratified medium based on the method for a vector coherent transfer function,” Opt. Lett. 31(20), 2978–2980 (2006). [CrossRef]   [PubMed]  

27. O. Keller, “Attached and radiated electromagnetic fields of an electric point dipole,” J. Opt. Soc. Am. B 16(5), 835–847 (1999). [CrossRef]  

28. A. K. Zvezdin and V. I. Belotelov, “Electrodynamic Green-function technique for investigating the magneto-optics of low-dimensional systems and nanostructures,” J. Opt. Soc. Am. B 22(1), 228–239 (2005). [CrossRef]  

29. T. Hakkarainen, T. Setälä, and A. T. Friberg, “Subwavelength electromagnetic near-field imaging of point dipole with metamaterial nanoslab,” J. Opt. Soc. Am. A 26(10), 2226–2234 (2009). [CrossRef]   [PubMed]  

30. P. Martinsson, H. Lajunen, and A. T. Friberg, “Scanning optical near-field resolution analyzed in terms of communication modes,” Opt. Express 14(23), 11392–11401 (2006). [CrossRef]   [PubMed]  

31. L. Novotny and B. Hecht, Principles of Nano-Optics (Cambridge University Press, Cambridge, 2006).

32. T. Setälä, M. Kaivola, and A. T. Friberg, “Decomposition of the point-dipole field into homogeneous and evanescent parts,” Phys. Rev. E Stat. Phys. Plasmas Fluids Relat. Interdiscip. Topics 59(1), 1200–1206 (1999). [CrossRef]  

33. C. A. Balanis, Antenna Theory: Analysis and Design (Wiley, New York, 2005).




Figures (8)

Fig. 1 The setup of the SIL based microscopy system. (a) Diagram showing the various interfaces and the path of a ray travelling through them. The various regions, coordinate systems, and important angles are also indicated. The horizontal axis shown is the longitudinal axis ẑ. (b) A practical representation of the SIL based microscopy system. (c) The non-SIL based system obtained by substituting n_SIL = n_obj, which is used in all the numerical experiments presented in sections 5–8. (d) Illustration of the angles γ_SIL and γ_obj^A used in Eqs. (3) and (4).

Fig. 2 Basic characteristics of the dyadic Green's function of the SIL based microscopy system for NA_max = 0.2857. (a) The intensity along the x and y axes in the CCD region for an x̂-directed dipole at the aplanatic point; both the case x_CCD = 0, ρ_CCD = y_CCD and the case y_CCD = 0, ρ_CCD = x_CCD are shown. (b) The intensity along the z axis in the CCD region for an x̂-directed dipole at the aplanatic point. (c) The intensity along the x axis in the CCD region for a ẑ-directed dipole at the aplanatic point.

Fig. 3 Comparison of the dyadic Green's function of the SIL based microscopy system with the non-SIL based microscopy system for NA = 0.2857. (a) The normalized intensity along the x axis in the CCD region for an x̂-directed dipole at the aplanatic point. (b) The normalized intensity along the z axis in the CCD region for an x̂-directed dipole at the aplanatic point. (c) The normalized intensity along the x axis in the CCD region for a ẑ-directed dipole at the aplanatic point.

Fig. 4 Comparison of the shift in the image of an x̂-directed dipole in the SIL and non-SIL based microscopy systems for NA = 0.2857. (a) For dipole location r_SIL = (2λ, 0, 0), the intensity along the x axis in the CCD region. (b) For dipole location r_SIL = (0, 0, 2λ), the intensity along the z axis in the CCD region.

Fig. 5 Visual demonstration of the resolution limit according to the Rayleigh criterion for two x̂-directed dipoles along the x axis. (a) For the SIL based system, Δx = 0.245λ satisfies the Rayleigh criterion, and the dipoles with Δx = 0.245λ can indeed be resolved by the SIL based system. (b) The non-SIL based system cannot resolve the sources with Δx = 0.245λ. (c) For the non-SIL based system, Δx = 2.874λ satisfies the Rayleigh criterion, and the dipoles with Δx = 2.874λ can be resolved by the non-SIL based system.

Fig. 6 Effect of the numerical aperture on the point spread function. (a) The normalized intensity along the x axis in the CCD region for various values of the numerical aperture. (b) The normalized intensity along the z axis in the CCD region for various values of the numerical aperture. (c) The peak intensities of the SIL based and non-SIL based microscopy systems for various values of the numerical aperture. (d) The full width at half maximum of the point spread functions of the SIL based and non-SIL based microscopy systems for various values of the numerical aperture.

Fig. 7 Effect of the numerical aperture of the objective along the longitudinal axis. We consider six positions z_SIL = {0, 0.2, 0.4, 0.6, 0.8, 1}λ with x_SIL = y_SIL = 0. (a) NA_max. (b) NA = 0.2. (c) NA = 0.1.

Fig. 8 Effect of the numerical aperture of the objective along the longitudinal axis. (a) The peak points in the image (CCD) region vs. the actual source locations in the object (SIL) region. The faint gray line shows the expected peak points. (b) The peak intensities in the image (CCD) plane corresponding to the actual locations in the object (SIL) region. For both (a) and (b), the non-SIL based microscopy system uses NA = 0.2857; M_SIL^lon from Eq. (24) is used for the SIL based system and M^lon = n_CCD M²/n_obj for the non-SIL based system.

Equations (24)


$$\mathbf{E}_{SIL}(\mathbf{r}_{SIL},\mathbf{r}'_{SIL}) \approx \omega^2\mu_0\left(\hat{\theta}_{SIL}\hat{\theta}_{SIL}+\hat{\phi}_{SIL}\hat{\phi}_{SIL}\right)\frac{\exp(ik_{SIL}r_{SIL})}{4\pi r_{SIL}}\,\exp(-i\mathbf{k}_{SIL}\cdot\mathbf{r}'_{SIL})\cdot\mathbf{p}(\mathbf{r}'_{SIL})\tag{1}$$

$$\mathbf{E}_{obj}(\mathbf{r}^A_{obj},\mathbf{r}_{SIL}) \approx \omega^2\mu_0\left(t^A_p\,\hat{\theta}^A_{obj}\hat{\theta}^A_{SIL}+t^A_s\,\hat{\phi}^A_{obj}\hat{\phi}^A_{SIL}\right)\frac{\exp(ik_{obj}r^A_{obj})}{4\pi r^A_{obj}}\,\exp(-i\mathbf{k}_{SIL}\cdot\mathbf{r}_{SIL})\cdot\mathbf{p}(\mathbf{r}_{SIL})\tag{2}$$
$$t^A_s=\frac{2n_{SIL}\cos\gamma^A_{SIL}}{n_{SIL}\cos\gamma^A_{SIL}+n_{obj}\cos\gamma^A_{obj}}\,\exp\!\left(ik_{SIL}r^A_{SIL}-ik_{obj}r^A_{obj}\right)\frac{r^A_{obj}}{r^A_{SIL}}\tag{3}$$

$$t^A_p=\frac{2n_{SIL}\cos\gamma^A_{SIL}}{n_{obj}\cos\gamma^A_{SIL}+n_{SIL}\cos\gamma^A_{obj}}\,\exp\!\left(ik_{SIL}r^A_{SIL}-ik_{obj}r^A_{obj}\right)\frac{r^A_{obj}}{r^A_{SIL}}\tag{4}$$

$$t^A_s=\frac{2n_{SIL}\cos\theta^A_{obj}}{n_{SIL}\cos\theta^A_{obj}+n_{obj}\cos\theta^A_{SIL}}\,\frac{n_{SIL}}{n_{obj}}\tag{5}$$

$$t^A_p=\frac{2n_{SIL}\cos\theta^A_{obj}}{n_{obj}\cos\theta^A_{obj}+n_{SIL}\cos\theta^A_{SIL}}\,\frac{n_{SIL}}{n_{obj}}\tag{6}$$
$$\mathbf{E}_{CCD}(\theta^B_{CCD},\phi^B_{CCD})\propto\left((\mathbf{E}_{obj}\cdot\hat{s}_{obj})\,\hat{s}_{CCD}+(\mathbf{E}_{obj}\cdot\hat{p}_{obj})\,\hat{p}_{CCD}\right)\sqrt{\frac{n_{obj}\cos\theta^B_{CCD}}{n_{CCD}\cos\theta^A_{obj}}}\tag{7}$$

$$\begin{aligned}\mathbf{E}_{CCD}(\theta^B_{CCD},\phi^B_{CCD})&=\omega^2\mu_0\,\frac{\exp(ik_{obj}f_{obj})}{4\pi f_{obj}}\exp(-i\mathbf{k}_{SIL}\cdot\mathbf{r}_{SIL})\sqrt{\frac{n_{obj}\cos\theta^B_{CCD}}{n_{CCD}\cos\theta^A_{obj}}}\left(t^A_p\,\hat{\theta}^B_{CCD}\hat{\theta}^A_{SIL}+t^A_s\,\hat{\phi}^B_{CCD}\hat{\phi}^A_{SIL}\right)\cdot\mathbf{p}(\mathbf{r}_{SIL})\\[4pt]&=\omega^2\mu_0\,\frac{\exp(ik_{obj}f_{obj})}{8\pi f_{obj}}\exp(-i\mathbf{k}_{SIL}\cdot\mathbf{r}_{SIL})\sqrt{\frac{n_{obj}\cos\theta^B_{CCD}}{n_{CCD}\cos\theta^A_{obj}}}\left[\bar{E}_x\;\;\bar{E}_y\;\;\bar{E}_z\right]\mathbf{p}(\mathbf{r}_{SIL})\end{aligned}\tag{8}$$

$$\bar{E}_x=\begin{bmatrix}(t^A_s+t^A_p\cos\theta^B_{CCD}\cos\theta^A_{SIL})-(t^A_s-t^A_p\cos\theta^B_{CCD}\cos\theta^A_{SIL})\cos 2\phi^A_{SIL}\\-(t^A_s-t^A_p\cos\theta^B_{CCD}\cos\theta^A_{SIL})\sin 2\phi^A_{SIL}\\-2t^A_p\sin\theta^B_{CCD}\cos\theta^A_{SIL}\cos\phi^A_{SIL}\end{bmatrix}\quad\bar{E}_y=\begin{bmatrix}-(t^A_s-t^A_p\cos\theta^B_{CCD}\cos\theta^A_{SIL})\sin 2\phi^A_{SIL}\\(t^A_s+t^A_p\cos\theta^B_{CCD}\cos\theta^A_{SIL})+(t^A_s-t^A_p\cos\theta^B_{CCD}\cos\theta^A_{SIL})\cos 2\phi^A_{SIL}\\-2t^A_p\sin\theta^B_{CCD}\cos\theta^A_{SIL}\sin\phi^A_{SIL}\end{bmatrix}\quad\bar{E}_z=-2t^A_p\sin\theta^A_{SIL}\begin{bmatrix}\cos\theta^B_{CCD}\cos\phi^A_{SIL}\\\cos\theta^B_{CCD}\sin\phi^A_{SIL}\\-\sin\theta^B_{CCD}\end{bmatrix}\tag{9}$$
$$\mathbf{E}_{CCD}(\mathbf{r}_{CCD})=\frac{ik_{CCD}f_{CCD}\exp(ik_{CCD}f_{CCD})}{2\pi}\iint_{\Omega_{CCD}}\mathbf{E}_{CCD}(\theta^B_{CCD},\phi^B_{CCD})\exp(i\mathbf{k}_{CCD}\cdot\mathbf{r}_{CCD})\sin\theta^B_{CCD}\;d\theta^B_{CCD}\,d\phi^B_{CCD}\tag{10}$$

$$\mathbf{E}_{CCD}(\mathbf{r}_{CCD})=\omega^2\mu_0\,\mathbf{G}_{PSF}\,\mathbf{p}(\mathbf{r}_{SIL})\tag{11}$$

$$\mathbf{G}_{PSF}=\alpha\begin{bmatrix}I_0+I_{21}&I_{22}&-2iI_{11}\\I_{22}&I_0-I_{21}&-2iI_{12}\\0&0&0\end{bmatrix}\tag{12}$$

$$\begin{aligned}\alpha&=\frac{ik_{CCD}}{8\pi}\frac{f_{obj}}{f_{CCD}}\left(\frac{n_{obj}}{n_{CCD}}\right)^{1/2}\exp\!\left(i(k_{obj}f_{obj}+k_{CCD}f_{CCD})\right)\\ I_0&=\int_0^{\theta_{\max}}\sin\theta^A_{obj}\sqrt{\cos\theta^A_{obj}}\,\left(t^A_s+t^A_p\cos\theta^A_{SIL}\right)J_0(\rho)\exp(iz)\,d\theta^A_{obj}\\ I_{11}&=\int_0^{\theta_{\max}}\sin\theta^A_{obj}\sqrt{\cos\theta^A_{obj}}\,\left(t^A_p\sin\theta^A_{SIL}\right)J_1(\rho)\exp(iz)\cos\psi\,d\theta^A_{obj}\\ I_{12}&=\int_0^{\theta_{\max}}\sin\theta^A_{obj}\sqrt{\cos\theta^A_{obj}}\,\left(t^A_p\sin\theta^A_{SIL}\right)J_1(\rho)\exp(iz)\sin\psi\,d\theta^A_{obj}\\ I_{21}&=\int_0^{\theta_{\max}}\sin\theta^A_{obj}\sqrt{\cos\theta^A_{obj}}\,\left(t^A_s-t^A_p\cos\theta^A_{SIL}\right)J_2(\rho)\exp(iz)\cos 2\psi\,d\theta^A_{obj}\\ I_{22}&=\int_0^{\theta_{\max}}\sin\theta^A_{obj}\sqrt{\cos\theta^A_{obj}}\,\left(t^A_s-t^A_p\cos\theta^A_{SIL}\right)J_2(\rho)\exp(iz)\sin 2\psi\,d\theta^A_{obj}\end{aligned}\tag{13}$$

$$\rho=\sqrt{x^2+y^2};\quad\psi=\tan^{-1}\frac{y}{x};\quad x=\left(k_{CCD}\sin\theta^B_{CCD}\,x_{CCD}+k_{SIL}\sin\theta^A_{SIL}\,x_{SIL}\right);\quad y=\left(k_{CCD}\sin\theta^B_{CCD}\,y_{CCD}+k_{SIL}\sin\theta^A_{SIL}\,y_{SIL}\right);\quad z=k_{CCD}\cos\theta^B_{CCD}\,z_{CCD}-k_{SIL}\cos\theta^A_{SIL}\,z_{SIL}\tag{14}$$
$$NA_{\max}=\frac{(n_{obj})^2}{n_{SIL}}\tag{15}$$

$$t=t_s=t_p=\frac{2n_{SIL}}{n_{SIL}+n_{obj}}\,\frac{n_{SIL}}{n_{obj}}\tag{16}$$

$$\begin{aligned}x&=\tilde{x}\,\theta_{obj};\quad\text{where }\tilde{x}=\frac{k_{obj}}{M}\left(x_{CCD}+M\left(\frac{n_{SIL}}{n_{obj}}\right)^2 x_{SIL}\right)\\ y&=\tilde{y}\,\theta_{obj};\quad\text{where }\tilde{y}=\frac{k_{obj}}{M}\left(y_{CCD}+M\left(\frac{n_{SIL}}{n_{obj}}\right)^2 y_{SIL}\right)\\ z&=\hat{z}+\theta_{obj}^2\,\tilde{z};\quad\text{where }\hat{z}=k_{CCD}z_{CCD}-k_{SIL}z_{SIL}\ \text{ and }\ \tilde{z}=-\frac{1}{2}\left(k_{CCD}z_{CCD}\left(\frac{f_{obj}}{f_{CCD}}\right)^2-k_{SIL}z_{SIL}\left(\frac{n_{SIL}}{n_{obj}}\right)^2\right)\\ \rho&=\theta_{obj}\,\tilde{\rho};\quad\text{where }\tilde{\rho}=\sqrt{\tilde{x}^2+\tilde{y}^2};\quad\psi=\tan^{-1}\frac{\tilde{y}}{\tilde{x}}\end{aligned}\tag{17}$$
$$I_0\approx t\int_0^{\theta_{\max}}\theta_{obj}\left(1-\tfrac{1}{4}\theta_{obj}^2\right)\left(2-\tfrac{1}{2}\left(\tfrac{n_{SIL}}{n_{obj}}\right)^2\theta_{obj}^2\right)J_0(\rho)\,d\theta_{obj}\approx 2t\int_0^{\theta_{\max}}\theta_{obj}\,J_0(\tilde{\rho}\,\theta_{obj})\,d\theta_{obj}=2t\,\theta_{\max}^2\,\frac{J_1(\tilde{\rho}\,\theta_{\max})}{\tilde{\rho}\,\theta_{\max}}\tag{18}$$

$$\mathbf{G}_{PSF}=\alpha\,2t\,\theta_{\max}^2\,\frac{J_1(\tilde{\rho}\,\theta_{\max})}{\tilde{\rho}\,\theta_{\max}}\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}\tag{19}$$
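As a quick numerical check of the Airy-pattern form of the point spread function above, the sketch below evaluates the normalized lateral amplitude 2J₁(x)/x (with x standing for ρ̃θ_max) and locates its first zero, which sets the Rayleigh resolution spacing. The helper names `j1`, `lateral_amplitude`, and `first_zero` are hypothetical, and J₁ is computed from its standard integral representation rather than a special-function library:

```python
import math

def j1(x):
    # Bessel function of the first kind, order 1, via its integral
    # representation J1(x) = (1/pi) * ∫_0^pi cos(t - x*sin(t)) dt,
    # evaluated with the midpoint rule.
    n = 2000
    h = math.pi / n
    s = sum(math.cos((k + 0.5) * h - x * math.sin((k + 0.5) * h)) for k in range(n))
    return s * h / math.pi

def lateral_amplitude(x):
    # Normalized Airy amplitude 2*J1(x)/x; the detected intensity is its square.
    return 1.0 if x == 0 else 2.0 * j1(x) / x

def first_zero():
    # First zero of the Airy pattern (near x = 3.8317), found by bisection;
    # the amplitude is positive at x = 3 and negative at x = 4.5.
    lo, hi = 3.0, 4.5
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if lateral_amplitude(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because ρ̃ carries the extra (n_SIL/n_obj)² factor of the SIL system, the same zero-crossing of the image-plane pattern corresponds to a proportionally smaller object-side separation, consistent with the improved resolution discussed in the text.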
$$M^{lat}_{SIL}=M\left(\frac{n_{SIL}}{n_{obj}}\right)^2\tag{20}$$
$$I_0\approx t\,e^{i\hat{z}}\int_0^{\theta_{\max}}\theta_{obj}\left(1-\tfrac{1}{4}\theta_{obj}^2\right)\left(2-\tfrac{1}{2}\left(\tfrac{n_{SIL}}{n_{obj}}\right)^2\theta_{obj}^2\right)e^{i\tilde{z}\theta_{obj}^2}\,d\theta_{obj}\approx 2t\,e^{i\hat{z}}\int_0^{\theta_{\max}}\theta_{obj}\,e^{i\tilde{z}\theta_{obj}^2}\,d\theta_{obj}=t\,\theta_{\max}^2\,\frac{\sin\!\left(\tilde{z}\theta_{\max}^2/2\right)}{\tilde{z}\theta_{\max}^2/2}\exp\!\left(i\tilde{z}\theta_{\max}^2/2+i\hat{z}\right)\tag{21}$$

$$\mathbf{G}_{PSF}=\alpha\,t\,\theta_{\max}^2\,\frac{\sin\!\left(\tilde{z}\theta_{\max}^2/2\right)}{\tilde{z}\theta_{\max}^2/2}\exp\!\left(i\tilde{z}\theta_{\max}^2/2+i\hat{z}\right)\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}\tag{22}$$
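The sinc dependence above makes the axial behavior easy to quantify: the normalized axial intensity is [sin(u)/u]², where u stands for z̃θ²_max/2, so the half-maximum point is a fixed value of u and the depth of field scales as 1/θ²_max. A minimal sketch (hypothetical helper names):

```python
import math

def axial_intensity(u):
    # Normalized axial intensity of the sinc-shaped longitudinal response:
    # I(u) = [sin(u)/u]^2, where u plays the role of z_tilde * theta_max^2 / 2.
    return 1.0 if u == 0 else (math.sin(u) / u) ** 2

def half_intensity_u():
    # Half-maximum point of [sin(u)/u]^2, found by bisection on (0, pi/2),
    # where the intensity falls monotonically from 1 through 1/2.
    lo, hi = 1e-9, math.pi / 2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if axial_intensity(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since u is proportional to z̃θ²_max, the axial range at half maximum in z̃ is 2·u_half/θ²_max: halving the aperture angle quadruples the depth of field, consistent with the numerical-aperture dependence reported in the text.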
$$z_{CCD}=\frac{n_{SIL}}{n_{CCD}}\left(\frac{f_{CCD}}{f_{obj}}\right)^2\left(\frac{n_{SIL}}{n_{obj}}\right)^2 z_{SIL}=M^{lon}\left(\frac{n_{SIL}}{n_{obj}}\right)^3 z_{SIL}\tag{23}$$

$$M^{lon}_{SIL}=\left(\frac{n_{SIL}}{n_{obj}}\right)^3 M^{lon},\qquad M^{lon}=\frac{n_{CCD}M^2}{n_{obj}}\tag{24}$$
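To make the aperture and magnification relations above concrete, the sketch below evaluates the maximum usable numerical aperture together with the lateral and longitudinal magnifications. The function name `sil_parameters` is hypothetical, and the example values (a silicon SIL in air, n_SIL = 3.5, n_obj = n_CCD = 1, base magnification M = 1) are assumptions chosen only because they reproduce the NA_max = 0.2857 quoted in the figure captions:

```python
def sil_parameters(n_sil, n_obj, n_ccd, M):
    # Largest usable objective NA for the aplanatic SIL: (n_obj)^2 / n_sil.
    na_max = n_obj ** 2 / n_sil
    # Lateral magnification: the SIL adds a (n_sil/n_obj)^2 factor over M.
    m_lat = M * (n_sil / n_obj) ** 2
    # Longitudinal magnification: a (n_sil/n_obj)^3 factor over the
    # non-SIL longitudinal magnification M_lon = n_ccd * M^2 / n_obj.
    m_lon = (n_sil / n_obj) ** 3 * (n_ccd * M ** 2 / n_obj)
    return na_max, m_lat, m_lon

# Hypothetical silicon SIL in air with unit base magnification:
print(sil_parameters(3.5, 1.0, 1.0, 1.0))
```

The strong (n_SIL/n_obj)³ longitudinal factor relative to the (n_SIL/n_obj)² lateral factor is the asymmetry behind the longitudinal effects discussed in the text.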
