Optica Publishing Group

Vectorial ray-based diffraction integral

Open Access

Abstract

Laser interferometry, as applied in cutting-edge length and displacement metrology, requires detailed analysis of systematic effects due to diffraction, which may affect the measurement uncertainty. When the measurements aim at subnanometer accuracy levels, it is possible that the description of interferometer operation by paraxial and scalar approximations is not sufficient. Therefore, in this paper, we place emphasis on models based on nonparaxial vector beams. We address this challenge by proposing a method that uses the Huygens integral to propagate the electromagnetic fields and ray tracing to achieve numerical computability. Toy models are used to test the method’s accuracy. Finally, we recalculate the diffraction correction for an interferometer, which was recently investigated by paraxial methods.

© 2015 Optical Society of America

Corrections

Birk Andreas, Giovanni Mana, and Carlo Palmisano, "Vectorial ray-based diffraction integral: erratum," J. Opt. Soc. Am. A 33, 559-560 (2016)
https://opg.optica.org/josaa/abstract.cfm?uri=josaa-33-4-559

1. INTRODUCTION

Soon after the invention of the laser, its applicability to unexcelled measurements of lengths and displacements by interferometry was recognized [1]. Plane or spherical waves were assumed to relate the phase of the interference fringes to the measurand, but, as early as 1976, Dorenwendt and Bönsch pointed out that this is not correct and that diffraction brings about systematic errors [2]. Since then, metrologists have routinely applied corrections based on scalar and paraxial approximations of the interferometer operation [3–13]. In recent years, cutting-edge interferometric measurements carried out to determine the Avogadro constant are aiming at 1×10⁻⁹ relative accuracies over propagation distances of the order of 1 m and propagation differences of many centimeters. Consequently, metrologists are turning their attention to nonparaxial vector models, with the goal of verifying the validity of the scalar and paraxial approximations or of improving the calculated corrections. Since these measurements use continuous-wave lasers, which are stabilized and have an extremely narrow bandwidth, monochromatic light propagation can be assumed.

Optical interferometers may combine many optical components in a potentially complex geometry. Wolf and Richards carried out early work in 1959 on diffraction in optical systems, but that work provided only a description of the image or focal region [14,15]. Among the tools based on the paraxial approximation, a common one is the Collins integral, which allows the electromagnetic field at the output plane to be calculated from the field in the input plane and the ray matrix of the optical system [16]. Ray matrices have also been applied to propagate Gaussian beams, as well as higher-order Laguerre–Gaussian or Hermite–Gaussian beams, which are solutions of the paraxial scalar wave equation [17,18]. Finally, Fourier optics is used together with the thin-lens approximation [19].

With regard to nonparaxial models, in 1980, Byckling and Simola [20] proposed a sequential application of spherical-harmonic Green's functions and the evaluation of the relevant diffraction integral by the stationary-phase method, but their method is limited to the scalar approximation and to spherical interfaces.

Geometric optics and ray tracing allow nonparaxial propagation to be considered, but usually they do not include diffraction [21]. Some aspects of diffraction can be captured by associating phase information, in terms of the optical path length, with each ray. Douglas et al. simulated the operation of an interferometer by ad hoc assumptions that simplified the computation, but this waived the exact description of diffraction [22]. Improvements were made by Riechert et al., but at the cost of sampling issues and relatively large computation times [23].

There exist several commercial software packages in which the segments of an optical system are tackled by different propagation techniques, e.g., ZEMAX, GLAD, and LightTrans VirtualLab. The latter is described in detail by Wyrowski and Kuhn [24]. While free-space propagation is handled by Fourier-optics methods, segments with interfaces are modeled either by the thin-lens approximation or by geometrical-optics field tracing. At the interfaces, the input field is locally decomposed, assuming local plane waves propagating in a direction perpendicular to the wavefront. These plane waves define the launch directions of ray tubes, which are traced through the subsequent subdomain, taking the amplitude changes into account by the Fresnel equations and the actual geometrical convergence or divergence. However, because the ray tubes do not interact at the output plane, diffraction is not modeled. For this reason, LightTrans VirtualLab does not continuously model diffraction throughout the optical system and is therefore not suited to our purposes.

A different approach, which is common in optical design and is the basis of at least two software packages, was proposed by Greynolds in 1985 [25]. This approach is based on the input-field decomposition into Gaussian beams [26], which, in turn, are represented by base rays along with a set of characteristic parabasal rays. After tracing the base and parabasal rays, the beams are recombined to form the output field [27,28]. However, Gaussian beams do not form a complete orthogonal set of basis functions; the decomposition is not unique and produces artifacts in the output field. Moreover, this method is reliable only if the parabasal rays are paraxial everywhere, and it is not clear how, for instance, coma can be modeled by basis beams that, at most, describe astigmatism.

The method called stable aggregate of flexible elements (SAFE) relies on a similar but improved concept, which allows the estimation of its own accuracy. However, so far it has not been extended to 3D problems [29,30]. Recently, a quantum-mechanical method has been proposed that uses Feynman path integrals and the stationary-phase approximation to link geometric and scalar wave optics [31,32].

When the measurements aim at subnanometer accuracy levels, it is possible that the description of the interferometer operation by paraxial and scalar approximations is not sufficient. The software package CodeV exploits a method called beam synthesis propagation that might meet this need. However, to the authors' knowledge, no detailed description of this method is available that would allow an uncertainty analysis when it is applied to compensate for phase delays due to diffraction in state-of-the-art dimensional metrology. Extensive investigations of the propagation of electromagnetic waves and diffraction can be found in the literature of the radio-wave and microwave communities [33], where significant effort was put into the efficient calculation of large-scale antenna fields as well as of radar cross sections, and where computationally efficient ray-tracing techniques were developed [34–43].

In order to provide an efficient, vectorial, and nonparaxial model of the operation of laser interferometers, as applied in length and displacement metrology, and to calculate the relevant diffraction corrections, we developed a ray-based method to evaluate diffraction integrals (VRBDI, vectorial ray-based diffraction integral). An early scalar version of this method is described in [44]. Although it was developed independently, the main concept is the same as that proposed by Ling et al., who, in 1989, published a comprehensive ray method later known as SBR-PO (shooting and bouncing rays and physical optics) [39,43].

Ling et al. approximated the ray-based field on the detector grid locally by plane waves [39], although in 1988 they had described the far-field contribution of a ray tube by an approximate solution of an integral taken on the ray-tube wavefront [40]. In this paper, we calculate the field on the detector grid by using a ray-aiming approach together with a local representation of the wavefront, which is based on matrix optics [16–18,45] and differential ray tracing [34,46]. In [39], the input field was assumed to be a single plane wave; therefore, a single set of parallel rays entered the investigated optical system. Here, we decompose the input field into numerous components, which are either spherical or plane waves and are represented by sets of divergent or parallel ray tubes, respectively. The output field is then obtained by an integral superposition of the traced components. Ling et al. applied the phase-matching technique to describe the local curvature and, hence, the divergence assigned to a single ray [34]. In [39], it is noted that an alternative description by differential ray tubes, which, in turn, consist of base and parabasal rays, does not deliver a correct description of the Gouy phase because, potentially, too many ray-path crossings that are difficult to detect could not be considered. In Appendix B of this paper, it is shown how the correct Gouy phase can indeed be obtained from differential ray tubes without tracking ray-path crossings. In addition to the perfectly reflecting surfaces considered in [39], we extended the material equations to include transparent dielectrics. Finally, much effort was put into a stepwise, i.e., surface-to-surface, integral method, for which numerical tests of energy conservation were carried out in order to check physical consistency.

Section 2 reviews the exact treatment of vector diffraction theory in linear, homogeneous, and isotropic dielectrics; it sets the basis for the VRBDI and serves as a touchstone for its performance. The VRBDI method itself is described in Section 3. Section 4 compares the more rigorous method of Section 2 to the VRBDI by applying both to a simple toy model. Finally, to illustrate its applicability, the VRBDI is used in Section 5 to simulate an interferometer setup, and the results are compared to an earlier paper in which paraxial methods were utilized to describe the same setup.

2. VECTORIAL DIFFRACTION THEORY

A. Derivation of Vectorial Diffraction Integrals

For the following treatment, the same boundary conditions as, e.g., in Chen et al. [47] are assumed. The derivation of the diffraction integrals follows the one found in the book by Smith [48]. The harmonic time dependence \exp(-i\omega t) is omitted throughout.

A monochromatic continuous electromagnetic wave in a linear, homogeneous, isotropic, and transparent dielectric is fully characterized by knowing the components of either the electric field E or the magnetic field H tangential to a plane. For the sake of simplicity, let this plane be the x, y plane of a Cartesian coordinate system and let Ex(x0,y0) as well as Ey(x0,y0) be initially known. Then the angular plane-wave spectrum can be obtained by Fourier transform [19]:

E_{x,y}(k_x,k_y) = \int_{-\infty}^{+\infty} \mathrm{d}x_0\, \mathrm{d}y_0\, E_{x,y}(x_0,y_0)\, \exp[-i(k_x x_0 + k_y y_0)],   (1)

where k_x and k_y are components of the wave vector k in the respective medium. From the transversality condition E(k_x,k_y) · k = 0, one can obtain the z component [49,50]:

E_z = -\frac{E_x k_x + E_y k_y}{k_z},   (2)

where k_z = (k^2 - k_x^2 - k_y^2)^{1/2}, k = |k|, and for k_z > 0 the field propagates in the positive z direction. The respective field vectors of the magnetic plane-wave spectrum H(k_x,k_y) can be obtained from [49]

H(k_x,k_y) = n \sqrt{\epsilon_0/\mu_0}\, \hat{k} \times E(k_x,k_y),   (3)

where k̂ = k/k is the normalized wave vector. The electric field at any point inside the same medium can be expressed as [19,48]

E(x,y,z) = \frac{1}{4\pi^2} \int_{-\infty}^{+\infty} \mathrm{d}k_x\, \mathrm{d}k_y\, E(k_x,k_y)\, \exp[i(k_x x + k_y y + k_z z)].   (4)
The identical relation is obtained for H(x,y,z) by replacing E(k_x,k_y) with H(k_x,k_y). It is worth noting that, for E(k_x,k_y), a relation analogous to Eq. (3) exists, which likewise allows the full characterization of the electromagnetic field in the case that only H_x and H_y are initially known.
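As a concrete illustration, Eqs. (1)–(4) map directly onto FFT operations. The following Python/NumPy sketch (all beam and grid parameters are illustrative choices, not values from the paper's tables) builds the angular spectrum of a linearly x-polarized Gaussian field, completes it with the longitudinal component from Eq. (2), and propagates it to a plane z > 0 via Eq. (4):

```python
import numpy as np

lam = 633e-9                       # vacuum wavelength (assumed HeNe value), n = 1
k = 2*np.pi/lam
N, dx = 256, 20e-6                 # illustrative grid
x = (np.arange(N) - N//2)*dx
X, Y = np.meshgrid(x, x, indexing="ij")
w0 = 0.5e-3
Ex0 = np.exp(-(X**2 + Y**2)/w0**2)  # initially known tangential components
Ey0 = np.zeros_like(Ex0)

# Eq. (1): angular plane-wave spectrum; the FFT approximates the Fourier integral
kx = 2*np.pi*np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(kx, kx, indexing="ij")
Sx, Sy = np.fft.fft2(Ex0), np.fft.fft2(Ey0)

# Eq. (2): longitudinal component from transversality E.k = 0
KZ = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # evanescent part -> imaginary kz
Sz = -(Sx*KX + Sy*KY)/KZ

# Eq. (4): field in the half-space z > 0
z = 0.1
prop = np.exp(1j*KZ*z)
Ex, Ey, Ez = (np.fft.ifft2(S*prop) for S in (Sx, Sy, Sz))
```

For this well-sampled, weakly divergent beam every sampled spatial frequency is propagating, so the transverse power is conserved exactly, and the longitudinal component stays small (of the order of the beam divergence).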

The following derivation is done solely for the electric field, as it is fully analogous to the one for the magnetic field. Inserting Eq. (2) into Eq. (4) and using the versors x̂, ŷ, and ẑ of the coordinate system, one obtains

E = \frac{1}{4\pi^2} \int_{-\infty}^{+\infty} \mathrm{d}k_x\, \mathrm{d}k_y \left[ E_x \hat{x} + E_y \hat{y} - \left( \frac{E_x k_x + E_y k_y}{k_z} \right) \hat{z} \right] \exp[i(k_x x + k_y y + k_z z)],   (5)

where the function dependences are dropped for the sake of brevity. Equation (5) can equivalently be written as [48]

E(x,y,z) = 2\nabla \times \int_{-\infty}^{+\infty} \mathrm{d}x_0\, \mathrm{d}y_0\, \hat{z} \times E(x_0,y_0) \left\{ \frac{i}{8\pi^2} \int_{-\infty}^{+\infty} \mathrm{d}k_x\, \mathrm{d}k_y\, \frac{\exp[i(k_x(x-x_0) + k_y(y-y_0) + k_z z)]}{k_z} \right\}.   (6)

The expression in curly braces is known as the Weyl representation of a spherical wave and can be integrated in closed form. The result for z ≠ 0 is the free-space scalar Green's function for harmonic time dependence [48,51]:

\frac{\exp(ikr)}{4\pi r} = \frac{i}{8\pi^2} \int_{-\infty}^{+\infty} \mathrm{d}k_x\, \mathrm{d}k_y\, \frac{\exp[i(k_x(x-x_0) + k_y(y-y_0) + k_z z)]}{k_z},   (7)

where r = [(x-x_0)^2 + (y-y_0)^2 + z^2]^{1/2}. Setting Eq. (7) into Eq. (6) yields

E(x,y,z) = \frac{1}{2\pi} \nabla \times \int_{-\infty}^{+\infty} \mathrm{d}x_0\, \mathrm{d}y_0\, \hat{z} \times E(x_0,y_0)\, \frac{\exp(ikr)}{r}.   (8)
Now we assume that only the integral over a finite area S0 delivers nonzero contributions:
E(x,y,z) = \frac{1}{2\pi} \nabla \times \int_{S_0} \mathrm{d}x_0\, \mathrm{d}y_0\, \hat{z} \times E(x_0,y_0)\, \frac{\exp(ikr)}{r}.   (9)
The above expression is formally identical to Smythe's integral equation [49,52,53]. In the given references, it is derived in different ways for an open aperture S_0 in an infinite metallic (perfectly conducting) screen, in which special case it is an exact solution [49,52,53]. However, since we assume the field on S_0 to be initially known, we have x ≠ x_0, y ≠ y_0, and z > 0. Therefore, in our case Eq. (9) is not an integral equation, and it can be further simplified. But first, by means of simple coordinate transformations, the more general result

E(P_1) = \frac{1}{2\pi} \nabla \times \int_{S_0} \mathrm{d}A_0\, \hat{N}_0 \times E(P_0)\, \frac{\exp(ikr)}{r}   (10)

is obtained for an arbitrarily oriented plane with normal N̂_0 and surface element dA_0, where P_0 ∈ S_0, S_0 now lies in this plane, and r = |r| = |P_1 - P_0|. Consequently, P_1 must lie in the half-space that is bounded by this plane and into which N̂_0 points; therefore, (P_1 - P_0) · N̂_0 > 0.

With the short notation E(P_j) = E_j, the versor r̂ = r/r, and after pulling the curl operator into the integral, one obtains

E_1 = -\frac{i}{\lambda} \int_{S_0} \mathrm{d}A_0\, \frac{\exp(ikr)}{r} \left( 1 - \frac{1}{ikr} \right) (\hat{N}_0 \times E_0) \times \hat{r} + \frac{1}{2\pi} \int_{S_0} \mathrm{d}A_0\, \frac{\exp(ikr)}{r}\, \nabla \times (\hat{N}_0 \times E_0),   (11)

with λ = 2π/k, where λ = λ_0/n is the wavelength in the respective medium with refractive index n and λ_0 is the wavelength in vacuum. In our situation, the curl operator can only act on functions depending on P_1 because P_0 is excluded. Therefore, the second integral is 0, and the final result,

E_1 = -\frac{i}{\lambda} \int_{S_0} \mathrm{d}A_0\, \frac{\exp(ikr)}{r} \left( 1 - \frac{1}{ikr} \right) (\hat{N}_0 \times E_0) \times \hat{r},   (12)

is obtained. Analogously, for the magnetic field, one obtains

H_1 = -\frac{i}{\lambda} \int_{S_0} \mathrm{d}A_0\, \frac{\exp(ikr)}{r} \left( 1 - \frac{1}{ikr} \right) (\hat{N}_0 \times H_0) \times \hat{r}.   (13)

The derived diffraction integrals are solutions of the Maxwell equations when the integration surface is a plane. Furthermore, direct substitution shows that the kernels of Eqs. (12) and (13) satisfy the Helmholtz equation, ΔF + k²F = 0, where F = E, H. It is worth noting that the kernels permit only field components orthogonal to r̂.
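To make the structure of Eq. (12) concrete, the following sketch evaluates it by brute-force summation from the plane z = 0 to a single on-axis observation point, for an x-polarized Gaussian input like the one used in the numerical tests below. The wavelength, grid, and distance are illustrative assumptions; the comparison value w_0/w(z) is the on-axis amplitude of a paraxial Gaussian beam, which the integral should reproduce here because the divergence is small:

```python
import numpy as np

lam = 633e-9                      # assumed wavelength, n = 1
k = 2*np.pi/lam
w0 = 0.5e-3
N, half = 401, 2e-3               # 401 x 401 samples over +/- 4 w0
x = np.linspace(-half, half, N)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
E0x = np.exp(-(X**2 + Y**2)/w0**2)      # x-polarized input field on S0

P1 = np.array([0.0, 0.0, 0.25])         # on-axis observation point
rvec = np.stack([P1[0]-X, P1[1]-Y, np.full_like(X, P1[2])], axis=-1)
r = np.linalg.norm(rvec, axis=-1)
rhat = rvec/r[..., None]

# (N0 x E0) x rhat with N0 = z^ and E0 = E0x x^; note z^ x x^ = y^
NxE = np.stack([np.zeros_like(E0x), E0x, np.zeros_like(E0x)], axis=-1)
kern = np.cross(NxE, rhat)
amp = -1j/lam*np.exp(1j*k*r)/r*(1 - 1/(1j*k*r))*dx*dx
E1 = np.sum(amp[..., None]*kern, axis=(0, 1))   # Eq. (12) as a Riemann sum

zR = np.pi*w0**2/lam
expected = 1/np.sqrt(1 + (P1[2]/zR)**2)  # paraxial on-axis |Ex| of a Gaussian beam
```

The grid spacing is chosen so that the phase kr changes by less than π between neighboring samples, which is the sampling requirement discussed at the end of this section.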

Because we are interested in the simulation of physical light beams, we also require that the field on S_0 be a section of a field satisfying the Maxwell equations. In practice, this condition can only be met asymptotically, because a finite and discrete calculation window, in which the initial field is calculated by Eqs. (1)–(4) from two given components, involves a cut at the border of the window and a finite sampling resolution. However, with sufficient resolution and a large enough computation window, which encloses a square-integrable field, the error can be made negligible.

A necessary but not sufficient check of physical correctness can be done by applying the law of energy conservation. For a time-averaged continuous wave, this implies conservation of power. The respective numerical tests will be treated after the next subsection. For these tests, we also require that the initial field is a restriction of a solution of the Maxwell equations to S0 in order to calculate the initial power.

B. Propagation through an Interface

Although the boundary conditions for the initial field applied to Eqs. (12) and (13) require the definition on a plane, arbitrarily curved interfaces are assumed here. The impact of this approximation is checked later by numerical tests.

An alternative method, based on the Stratton–Chu diffraction integrals [54], was introduced by Guha and Gillen [55], but it lacked the possibility to test power conservation. We compared numerically the Stratton–Chu diffraction integrals with Eqs. (12) and (13) by checking power conservation. We found that Eqs. (12) and (13) ensure power conservation several orders of magnitude more accurately than the Stratton–Chu diffraction integrals. A thorough report of this issue will be given in a separate paper; for now, we do not see advantages in using the more complex Stratton–Chu diffraction integrals in our kind of propagation problems.

A sketch of the situation is shown in Fig. 1. For each versor r^, the respective reflected versor r^r and refracted one r^t are calculated by assuming for an infinitesimally small patch around the intersection point local planarity for the wavefront and interface. Furthermore, in transparent isotropic media, the Poynting vector, which describes the transport of energy by electromagnetic radiation [49,50],

S = \frac{1}{2} \mathrm{Re}(E \times H^*),   (14)

is always parallel to the wave vector k. By substitution of the kernels of Eqs. (12) and (13) into Eq. (14), one finds that the local Poynting vector

\mathrm{d}S = \frac{\mathrm{d}A_0^2}{2\lambda^2 r^2} \left( 1 + \frac{1}{k^2 r^2} \right) \mathrm{Re}\{ (\hat{r} \cdot \hat{N}) [\hat{N} \cdot (E_0 \times H_0^*)] \}\, \hat{r}   (15)

and, therefore, the assumed local plane-wave vector k̄ ∥ r̂. Then, the kinematic properties of the boundary conditions of the Maxwell equations for plane waves can be used [49,50]:

\hat{r}_r = \hat{r} - 2(\hat{r} \cdot \hat{N}_1)\hat{N}_1,   (16)

\hat{r}_t = \frac{n_1}{n_2} [\hat{r} - (\hat{r} \cdot \hat{N}_1)\hat{N}_1] + \sigma(\hat{r} \cdot \hat{N}_1) \sqrt{1 - \left( \frac{n_1}{n_2} \right)^2 [1 - (\hat{r} \cdot \hat{N}_1)^2]}\; \hat{N}_1,   (17)

where Eq. (16) is the reflection law, Eq. (17) is Snell's law [49,50], and σ(s) = -1, 0, 1 for s < 0, s = 0, s > 0 is the sign operator. The surface normal N̂_1 can also be a local function on S_1. For example, if S_1 is spherical, N̂_1 can be found by subtraction of the actual position P_1 from the sphere center C and subsequent normalization: N̂_1 = (C - P_1)/|C - P_1|.
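Equations (16) and (17) translate directly into a few lines of code. The sketch below (Python/NumPy; the function names are ours) reflects and refracts a single versor at a local surface normal and signals total internal reflection when the radicand in Eq. (17) becomes negative:

```python
import numpy as np

def reflect(r, N):
    """Reflection law, Eq. (16); r and N are unit vectors with r.N > 0."""
    return r - 2*np.dot(r, N)*N

def refract(r, N, n1, n2):
    """Vector form of Snell's law, Eq. (17); None signals total internal reflection."""
    c = np.dot(r, N)
    disc = 1.0 - (n1/n2)**2*(1.0 - c*c)
    if disc < 0:
        return None                      # radicand negative: no refracted ray
    return (n1/n2)*(r - c*N) + np.sign(c)*np.sqrt(disc)*N
```

Both outputs remain unit vectors, and the tangential component of the refracted versor satisfies n_2 sin θ_t = n_1 sin θ, as required by Snell's law.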


Fig. 1. Propagation through an interface. The versors for reflection r̂_r and refraction r̂_t are calculated for each versor r̂.


It is useful to define the reference frames (ξ̂, η̂, r̂), (ξ̂_r, η̂, r̂_r), and (ξ̂_t, η̂, r̂_t), with

\hat{\xi} = \frac{\hat{r} \times (\hat{N}_1 \times \hat{r})}{|\hat{r} \times (\hat{N}_1 \times \hat{r})|}, \quad \hat{\eta} = \hat{r} \times \hat{\xi}, \quad \hat{\xi}_r = \hat{\eta} \times \hat{r}_r, \quad \hat{\xi}_t = \hat{\eta} \times \hat{r}_t.   (18)

In the case where r̂ and N̂_1 are parallel, an arbitrary orientation orthogonal to r̂ can be chosen for ξ̂, e.g., ξ̂ = [r̂_z, 0, -r̂_x]^T/(r̂_x² + r̂_z²)^{1/2}. The versor η̂ is orthogonal to the plane of incidence, and all other versors lie in this plane.

The dynamic properties of plane waves lead to the Fresnel equations [49,50]:

r_{TM} = \frac{n_2 \cos\theta - n_1 \cos\theta_t}{n_2 \cos\theta + n_1 \cos\theta_t}, \quad r_{TE} = \frac{n_1 \cos\theta - n_2 \cos\theta_t}{n_1 \cos\theta + n_2 \cos\theta_t}, \quad t_{TM} = \frac{2 n_1 \cos\theta}{n_2 \cos\theta + n_1 \cos\theta_t}, \quad t_{TE} = \frac{2 n_1 \cos\theta}{n_1 \cos\theta + n_2 \cos\theta_t},   (19)

where cos θ = r̂ · N̂_1 and cos θ_t = r̂_t · N̂_1. In the case of normal incidence, r_{TE} = -r_{TM} = (n_1 - n_2)/(n_1 + n_2) and t_{TE} = t_{TM} = 2n_1/(n_1 + n_2). For the applicability of Eq. (19), local planarity and direction conformity are exploited again. The Fresnel equations are exact only in the case of an infinite plane wave at an infinite plane boundary. Therefore, we expect some accuracy loss, which we will quantify by tests of power conservation.
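A quick way to validate an implementation of Eq. (19) is the plane-wave energy balance R + T = 1, with T = (n_2 cos θ_t)/(n_1 cos θ) t² for both polarizations. A minimal sketch (the function name is ours; the angles tested are arbitrary values below the critical angle):

```python
import numpy as np

def fresnel(n1, n2, cos_i):
    """Fresnel coefficients of Eq. (19); cos_i = r.N1, assumed positive."""
    cos_t = np.sqrt(1.0 - (n1/n2)**2*(1.0 - cos_i**2))
    r_tm = (n2*cos_i - n1*cos_t)/(n2*cos_i + n1*cos_t)
    r_te = (n1*cos_i - n2*cos_t)/(n1*cos_i + n2*cos_t)
    t_tm = 2*n1*cos_i/(n2*cos_i + n1*cos_t)
    t_te = 2*n1*cos_i/(n1*cos_i + n2*cos_t)
    return r_tm, r_te, t_tm, t_te
```

At normal incidence the formulas reduce to r_TE = -r_TM = (n_1 - n_2)/(n_1 + n_2) and t_TE = t_TM = 2n_1/(n_1 + n_2), in agreement with the text.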

It is convenient to express the field calculations at the interface by differentials and perform the integration afterward. With the quantities defined above, one can write

\mathrm{d}E_1 = -\frac{i n_1}{\lambda_0}\, \mathrm{d}A_0\, \frac{\exp(i k_0 n_1 r)}{r} \left( 1 - \frac{1}{i k_0 n_1 r} \right) (\hat{N}_0 \times E_0) \times \hat{r};   (20)

\mathrm{d}E_{1,r} = r_{TM} (\mathrm{d}E_1 \cdot \hat{\xi}) \hat{\xi}_r + r_{TE} (\mathrm{d}E_1 \cdot \hat{\eta}) \hat{\eta};   (21)

\mathrm{d}E_{1,t} = t_{TM} (\mathrm{d}E_1 \cdot \hat{\xi}) \hat{\xi}_t + t_{TE} (\mathrm{d}E_1 \cdot \hat{\eta}) \hat{\eta};   (22)

\mathrm{d}H_1 = -\frac{i n_1}{\lambda_0}\, \mathrm{d}A_0\, \frac{\exp(i k_0 n_1 r)}{r} \left( 1 - \frac{1}{i k_0 n_1 r} \right) (\hat{N}_0 \times H_0) \times \hat{r};   (23)

\mathrm{d}H_{1,r} = r_{TE} (\mathrm{d}H_1 \cdot \hat{\xi}) \hat{\xi}_r + r_{TM} (\mathrm{d}H_1 \cdot \hat{\eta}) \hat{\eta};   (24)

\mathrm{d}H_{1,t} = \frac{n_2}{n_1} [t_{TE} (\mathrm{d}H_1 \cdot \hat{\xi}) \hat{\xi}_t + t_{TM} (\mathrm{d}H_1 \cdot \hat{\eta}) \hat{\eta}].   (25)

It should be noted that Eqs. (24) and (25) can be deduced from Eqs. (21) and (22) by patient use of Eq. (3) and one or two vector identities; the roles of the TE and TM coefficients are interchanged for the magnetic field because the magnetic field of a TM wave is oriented along η̂ and that of a TE wave along ξ̂. Now, the integrals can be expressed as F = ∫_{S_0} dF, where F ∈ {E_1, E_{1,r}, E_{1,t}, H_1, H_{1,r}, H_{1,t}}.

The next step is the propagation from S_1 to S_2, where Eqs. (12) and (13) can be used by replacing E_0, H_0, k, and λ with E_{1,t}, H_{1,t}, k_0 n_2, and λ_0/n_2, respectively. If an integration is performed on a curved interface, the surface element dA must be changed. For the (x,y) plane, one has dA = dx dy. To define a surface, the z coordinate can be given by the 2D function z = f(x,y). Then, the surface element can be calculated according to dA_{f(x,y)} = [1 + (∂f/∂x)² + (∂f/∂y)²]^{1/2} dx dy [56]. For example, a hemisphere is uniquely described by

z = f(x,y) = C_z - \sigma(C_z) \sqrt{R^2 - (x - C_x)^2 - (y - C_y)^2},   (26)

with C_z ≠ 0. Therefore,

\mathrm{d}A_{\mathrm{sph}}(x,y) = \frac{R}{\sqrt{R^2 - (x - C_x)^2 - (y - C_y)^2}}\, \mathrm{d}x\, \mathrm{d}y.   (27)
Note, however, that tilt operations (as well as translations) applied to grids of points do not change dA.
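The surface element of Eq. (27) can be checked against a closed-form area: integrating dA_sph over the disk ρ ≤ a reproduces the area 2πR(R - √(R² - a²)) of the corresponding spherical cap. A small numerical sketch with arbitrary test values R = 1 and a = 0.5:

```python
import numpy as np

R, a = 1.0, 0.5                   # arbitrary sphere radius and disk radius
n = 801
x = np.linspace(-a, a, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
rho2 = X**2 + Y**2
mask = rho2 <= a**2               # integrate only over the disk rho <= a

# Eq. (27) with Cx = Cy = 0, summed as a finite surface element
dA = R/np.sqrt(R**2 - rho2[mask])*dx*dx
area = dA.sum()

exact = 2*np.pi*R*(R - np.sqrt(R**2 - a**2))   # spherical-cap area
```

The small residual difference comes from the stair-step approximation of the disk boundary on the Cartesian grid and shrinks with the grid spacing.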

C. Numerical Tests of Power Conservation

The transport of energy by electromagnetic radiation is described by Eq. (14). Taking the scalar product with a surface normal N^i yields the irradiance,

I_i = |S_i \cdot \hat{N}_i|,   (28)

which has the dimension of power per area. Therefore, integrating I_i over the surface S_i yields the respective power:

P_i = \int_{S_i} \mathrm{d}A_i\, I_i.   (29)

For the following numerical tests, the fields are discretely represented on equidistant Cartesian sampling grids in the case of planes, or on surface points generated by Eq. (26) from those grids. Therefore, dx_i dy_i as well as Eq. (27) are finite quantities, and the integrals are replaced by sums over all respective sampling points P_i.
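For the initial plane, the discrete sums just described reduce to a weighted sum of |E|² over the grid. The sketch below computes the power of a Gaussian input field of the kind used in the tests and compares it with the closed form P = ½√(ε₀/μ₀)·πw₀²/2, assuming the paraxial relation H ≈ √(ε₀/μ₀) ẑ × E for the irradiance (grid parameters are illustrative):

```python
import numpy as np

eps0, mu0 = 8.8541878128e-12, 4e-7*np.pi
Y0 = np.sqrt(eps0/mu0)            # free-space admittance, n = 1 assumed
w0 = 0.5e-3
N, half = 401, 2e-3
x = np.linspace(-half, half, N)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
Ex = np.exp(-(X**2 + Y**2)/w0**2)          # V/m, Gaussian input field

I = 0.5*Y0*np.abs(Ex)**2                   # irradiance (paraxial H assumed)
P = np.sum(I)*dx*dx                        # discrete power integral

P_exact = 0.5*Y0*np.pi*w0**2/2             # closed-form power of the Gaussian
```

Because the Gaussian decays to a negligible level well inside the window, the equidistant sum converges to the analytic power to far better than the 10⁻¹⁴-level errors quoted in the tests below.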

The simulated toy models are depicted in Fig. 2. In accordance with [47], the input field on surface S0 is defined by

E_{0,x} = \exp[-(x^2 + y^2)/w_0^2]\ \mathrm{V/m},   (30)

where w_0 = 0.5 mm and E_{0,y} = 0, while the other components are obtained by Eqs. (1)–(4). In the first tests, the refractive indices n_1 and n_2 are kept identical. Therefore, the intermediate surface is not a real interface, and a comparison to the direct propagation between the first and the last surface is possible. The respective quantities on S_2 obtained by the direct propagation receive the index i = 2′. The first test comprises an arbitrarily oriented plane as the intermediate surface. Differently than shown in Fig. 2, two successive rotations about the y and x axes are applied to an x, y mesh to generate S_1 (Table 2, Test 1). The power conservation can be described by the relative errors:
  • δ_{1,0} = (P_1 - P_0)/P_0 = 8.6·10⁻¹⁵,
  • δ_{2,1} = (P_2 - P_1)/P_1 = 9.6·10⁻¹⁵,
  • δ_{2,2′} = (P_2 - P_{2′})/P_{2′} = 1.2·10⁻¹⁴.

Fig. 2. Toy models 1 (top) and 2 (bottom) for the numerical test of power conservation for reflection, refraction, and propagation of electromagnetic fields by vectorial diffraction integrals. An interface separates two transparent, isotropic, homogeneous, and nonmagnetic media with refractive indices n1 and n2. The fixed parameters are listed in Table 1; the variable parameters are given in Table 2.



Table 1. Fixed Simulation Parameters for the Toy Models Shown in Fig. 2


Table 2. Variable Simulation Parameters for Toy Models Shown in Fig. 2

The field values on S_2 from the direct propagation are presented in Fig. 3. For the sake of brevity, only the electric field components are shown in the main article; the corresponding magnetic components can be found in Appendix C. It can be seen that the y component of the electric field, E_{2′,y}, vanishes everywhere.


Fig. 3. Directly from S0 to S2 [Fig. 2 (top), Test 1 in Tables 1 and 2] propagated electric field components of a linearly x-polarized Gaussian beam by use of vectorial diffraction integrals. The viewing angle is perpendicular to S2. The corresponding magnetic components can be found in Appendix C.


The field that is propagated over S_1 does not vanish exactly, as can be deduced from the relative deviations plotted in Fig. 4. However, the residual values are small and likely numerical artifacts. Because the directly propagated E_{2′,y} is exactly zero, the relative deviation for the y component, as it is defined in Fig. 4, is formally up to 1. However, since in absolute terms |E_{2,y}| < 4·10⁻¹⁵ V m⁻¹, it is of no physical relevance. The irradiance of the directly propagated beam and its relative deviation from the indirectly propagated one are presented in Fig. 5. One can see that the size of the calculation window is indeed amply chosen. The peak-to-valley (PV) deviation of the irradiances is found to be ΔI_PV = [max(I_2 - I_{2′}) - min(I_2 - I_{2′})]/max(I_{2′}) = 4.8·10⁻¹³. One can state that this first test reveals the magnitude of the residual numerical errors, because no systematic errors of the theory can be identified.


Fig. 4. Relative deviations of the directly [S_0 to S_2, Fig. 2 (top), Test 1 in Tables 1 and 2] propagated electric field components of a linearly x-polarized Gaussian beam by use of vectorial diffraction integrals from the indirectly propagated ones (S_0 via S_1 to S_2). The viewing angle is perpendicular to S_2. The large relative deviation for the y component results from the fact that the y component of the directly propagated field, i.e., E_{2′,y}, is exactly zero. The corresponding magnetic components can be found in Appendix C.



Fig. 5. Irradiance (left) and its relative deviation of the directly [S0 to S2, Fig. 2 (top), Test 1 in Tables 1 and 2] propagated linearly x-polarized Gaussian beam by use of vectorial diffraction integrals from the indirectly propagated one (S0 via S1 to S2). The viewing angle is perpendicular to S2.


For Test 2, the tilted plane is replaced by a spherical surface while again n_1 = n_2. As mentioned earlier, in this case the integrals in Eqs. (12) and (13) can only be approximations, whose viability is tested now. The relative deviation of the field components is given in Fig. 6. Again, the y component of the indirectly propagated electric field is nonzero. Its absolute value, as well as the relative deviation of the irradiance from the directly propagated one, is shown in Fig. 7.


Fig. 6. Relative deviations of the directly [S_0 to S_2, Fig. 2 (bottom), Test 2 in Tables 1 and 2] propagated electric field components of a linearly x-polarized Gaussian beam by use of vectorial diffraction integrals from the indirectly propagated ones (S_0 via S_1 to S_2). The viewing angle is perpendicular to S_2. The large relative deviation for the y component results from the fact that the y component of the directly propagated field, i.e., E_{2′,y}, is exactly zero. The corresponding magnetic components can be found in Appendix C.



Fig. 7. Left: from S0 via S1 to S2 [Fig. 2 (bottom), Test 2 in Tables 1 and 2] propagated y component of the electric field of a linearly x-polarized Gaussian beam by use of vectorial diffraction integrals. Right: relative deviation of the irradiance between the directly from S0 to S2 and the indirectly (S0 via S1 to S2) propagated beam. The viewing angle is perpendicular to S2.


Now, with S1 curved, the deviations can no longer be classified as numerical artifacts. These regular patterns of larger magnitude must be caused by systematic effects. By inspection of Eq. (12) and comparison to Test 1, it turns out that two reasons can be recognized as necessary and sufficient to explain at least the nonzero y component shown in Fig. 7 (left):

  • The cross product term in Eq. (12) produces a y component if the normal vector contains a y component.
  • On the sphere surface, all normals are oriented differently.

The first reason also holds for the arbitrarily tilted plane. However, the second does not, which is what enables the (almost) complete cancellation in that case. For the sphere, cancellation is not perfect because of the different normal orientations; this explains the residual regular pattern (Fig. 7, left). The regular deviation patterns of the other components are likely caused in an analogous way, i.e., by parasitic contributions that did not cancel out because of the different normal directions and the vectorial nature of the integral kernels. However, the impact on the irradiance is still quite small: ΔI_PV = 9.1·10⁻¹³ (Fig. 7, right). Since the residual irradiance error is quite symmetrically modulated around zero, the positive and negative parts almost cancel each other out upon integration. Therefore, the respective relative error of power conservation δ_{2,2′} is even smaller:

  • δ_{1,0} = (P_1 - P_0)/P_0 = 6.0·10⁻¹⁵,
  • δ_{2,1} = (P_2 - P_1)/P_1 = 2.6·10⁻¹⁴,
  • δ_{2,2′} = (P_2 - P_{2′})/P_{2′} = 2.1·10⁻¹⁵.

Thus, in rotationally symmetric setups, relying on power conservation alone can potentially lull us into a false sense of security. This symmetry can be broken when S2 is tilted, as is the case for the remaining tests.

Table 3 lists the power conservation errors for Tests 3 (plane interface) and 4 (curved interface). The relevant test parameters are given in Table 2. Now, the refractive indices n_1 and n_2 are different, and S_1 becomes a real interface. The largest error appears in the calculation of the reflected and refracted power and is more pronounced for the curved interface. Curiously, the error of the propagation step to or from the curved interface is smaller than in the tilted-plane case by one order of magnitude. This may be explained by the symmetry effect mentioned above. On the other hand, these values are already quite small and in the range of the numerical limit. Therefore, it is questionable whether they can be interpreted this way.


Table 3. Numerical Test of Power Conservation for Propagation through a Real Interface between Two Transparent, Isotropic, Homogeneous, and Nonmagnetic Media

To check the influence of the sampling intervals on the error, Test 4 is repeated for different sampling resolutions. Table 4 shows the results together with the required calculation times on a personal computer with 64 bit MATLAB, 24 GB RAM, and two Intel Xeon X5680 @ 3.33 GHz, i.e., altogether 12 cores, each working in parallel at 100% workload by use of MATLAB's Parallel Computing Toolbox. All computation times given later also refer to these hardware and software conditions. While δ_{1,0} and δ_{2,1} are always in the range of the numerical limit, |δ_1| decreases with increasing resolution to 1.3·10⁻⁹, which is likely a lucky value beyond the reachable numerical limit for this type of calculation in this setup, because the values for 444² and 555² pixels are larger than for 333² pixels. Therefore, this numerical error appears to be in the range of 5·10⁻⁹, and it is reached at a certain resolution threshold somewhere between 255² and 333² pixels.


Table 4. Numerical Test of Power Conservation for Different Sampling Resolutions for Propagation through an Interface between Two Transparent, Isotropic, Homogeneous, and Nonmagnetic Media

The dependence on the sphere radius R is also checked. In order to keep the calculation time low, the sampling resolution was chosen to be 199×199. The results are listed in Table 5.


Table 5. Numerical Test of Power Conservation for Different Radii of Curvature of an Interface between Two Transparent, Isotropic, Homogeneous, and Nonmagnetic Media

Now, |δ_1| is always about 4·10⁻⁸, while δ_{1,0} and δ_{2,1} are again governed by numerical noise.

The purpose of the last numerical test in this section is twofold. First, it checks whether the field in a focal plane can be calculated. Second, it delivers error values for a stronger refractive index contrast. The refractive index values are changed to n_1 = 1.05 and n_2 = 3.17. Then, the paraxial focus is obtained at z_2 for R = z_1 z_2 (n_2 - n_1)/(z_1 n_2 + z_2 n_1) = 20.113852 mm. The results are shown in Table 6 for various sampling resolutions.


Table 6. Numerical Test of Power Conservation for Different Sampling Resolutions for Propagation in a Focusing Toy Model with Stronger Refractive Index Contrast n_2/n_1 ≈ 3

The values for δ_{1,0} and δ_{2,1} are again in the range of the numerical limit. Therefore, it would be daring to derive significant trends from three data points. Comparison of the column at 199² pixels with the R = 20 mm column of Table 5 reveals that |δ_1| scales strongly with the refractive index contrast. Due to this scaling effect, the decrease of |δ_1| with increasing resolution is now better resolved. However, again there appears to be a limit in the range of 10⁻⁹. As mentioned earlier, the Fresnel equations are only exact for infinite plane waves at infinite plane boundaries; the violation of this condition may reveal itself here.

The above tests show that the stepwise method describes the transport of radiation power through an interface between two transparent, isotropic, homogeneous, and nonmagnetic media with reasonable accuracy, even for quite large wavelengths, small curvature radii, and strong refractive index contrasts. However, the method is not immune to undersampling, which can always prevent proper results. Due to the sampling requirements of the exp(ikr) term in Eqs. (12) and (13), which have already been discussed extensively in the literature, e.g., in [57], the simulation of a macroscopic optical system with many (closely spaced) interfaces can easily inflate the computation time to an astronomical scale. This consideration has led to a different approach, which is not limited by the number and spacing of interfaces in an optical system and is presented in the next section. Fortunately, it can be compared with the stepwise method in simple toy models, such as the ones shown above; this comparison follows after the next section.

3. VECTORIAL RAY-BASED DIFFRACTION INTEGRAL

In this section, the vectorial ray-based diffraction integral (VRBDI) is presented. In order to increase readability, it is structured into several subsections. First, a brief introduction to the geometrical ray tracing used here is given. Then, a method to get close to specific sampling points behind an optical system is briefly presented; the details can be found in Appendix A.

Afterward, the extension of ray tracing to gather all information relevant to the VRBDI is introduced; the details of the method, i.e., how matrix optics [16–18,45] and differential ray tracing [34,46] can be used to calculate the field at the sampling grid points, have been moved to Appendix B. The VRBDI itself concludes this section.

A. Ray Tracing

Geometrical ray tracing is a common technique in optical design software [21,50]. A ray is defined by a position P0 and a versor r̂. The question of how to scale the versor in order to intersect the next surface, i.e., P1 = P0 + l r̂, leads to an equation system that needs to be solved. One has to distinguish between sequential ray tracing, where the next surface is given by the next entry in a list, and nonsequential ray tracing, where the next surface has to be determined for each ray individually. The latter prevents an effective parallel implementation, i.e., the tracing of many rays at once rather than one after another. Therefore, we chose sequential ray tracing in order to enable a parallel computation.

A plane is fully characterized by pivot vector T and normal N̂z. Then, the condition (P0 + l r̂ − T)·N̂z = 0 directly yields

l = \frac{(\mathbf{T}-\mathbf{P}_0)\cdot\hat{\mathbf{N}}_z}{\hat{\mathbf{r}}\cdot\hat{\mathbf{N}}_z}.
A sphere is fully characterized by radius R and center location C. However, because a sphere is a closed convex surface, a quadratic equation system yields two solutions in the general case:

l_{1,2} = (\mathbf{C}-\mathbf{P}_0)\cdot\hat{\mathbf{r}} \pm \sqrt{\left[(\mathbf{C}-\mathbf{P}_0)\cdot\hat{\mathbf{r}}\right]^2 - |\mathbf{C}-\mathbf{P}_0|^2 + R^2}.

Therefore, the correct solution for the respective physical situation must be chosen. Other surface types, such as ellipsoids, paraboloids, cones, and cylinders, can be handled in a similar way. General surface functions, such as various kinds of polynomials, are more complicated and can only be tackled by numerical iteration. However, here we restrict ourselves to planes and spheres.

When the intersection point is found, the reflected or refracted versor is calculated by Eq. (16) or (17), respectively. Then, the next ray is defined by the intersection point and the respective versor. If a ray intersection lies outside the allowed positions, i.e., outside an aperture, the ray is marked as terminated. The first and last surfaces are usually planes: the input plane and the detector plane, respectively.

B. Ray Aiming

A ray starting from the input plane of an optical system is defined by position P0 and versor r̂. Depending on the chosen decomposition of the input field, there are two possible cases for which parameter is fixed and which is free for aiming. For a plane wave decomposition, r̂ is fixed, and the free parameter is the starting point on a plane that is normal to r̂ and shares its origin with the input plane. In contrast, for a spherical wave decomposition, P0 is a fixed sampling grid point and r̂ is free. Both cases are sketched in Fig. 8. Initially, a grid on the aperture plane defines the ray spacing.


Fig. 8. Types of used input field decompositions in the ray picture. Only one component is shown, respectively. Initially, a grid on the aperture plane defines the ray spacing. Top: decomposition into plane waves: each component is represented by parallel rays starting from the wave front, which intersects the origin of the input plane, i.e., the plane where the decomposition is done by Eqs. (1) and (2). Bottom: decomposition into spherical waves: each component is represented by divergent rays starting from one of the sampling points where the input field is given.


By tracing test rays through the optical system, various starting conditions can be related to positions on the detector plane. A fit by Zernike polynomials [50] is then utilized to obtain a functional relation between the starting condition and the detector position, which makes it possible to aim for specific points on the detector sampling grid. A detailed explanation of the ray-aiming procedure used, together with the description of a test tool for the reachability of detector positions and the quality of the ray aiming, is given in Appendix A. Generally, with this kind of ray aiming, the targeted positions on the detector cannot be reached exactly; there is always a smaller or larger displacement between the targeted grid point and the ray intersection. This displacement will also be important for the next subsection.

There are also situations where certain regions in the area of the detector cannot be reached by rays, while in other regions the relation between starting condition and position on the detector plane is not unique, i.e., where different test rays intersect each other. This is the case at caustics and in focal and image regions.

C. Tracing the Electromagnetic Field

From each aimed ray, i.e., a ray that gets close to an assigned detector pixel and is found by ray aiming, a differential ray tube is constructed at an input position [34,46]. To each ray tube, an electric field contribution is assigned by

\mathrm{d}\mathbf{E}_0 = \mathrm{d}A_0\,\frac{\mathrm{d}k_x\,\mathrm{d}k_y}{4\pi^2}\,\mathbf{E}(k_x,k_y),
with plane or
\mathrm{d}\mathbf{E}_0 = \mathrm{d}A_0\,(\hat{\mathbf{N}}_0\times\mathbf{E}_0)\times\hat{\mathbf{r}}_{\mathrm{initial}},
with spherical wave decomposition of the input field. During ray tracing, the reflected or refracted field at each interface is calculated in analogy to Eq. (21) or (22), without using Eq. (20). After m−1 surfaces, dEm−1 is obtained. By use of Eq. (3), one also obtains the magnetic field contribution:
\mathrm{d}\mathbf{H}_{m-1} = n_{m-1}\sqrt{\frac{\epsilon_0}{\mu_0}}\;\hat{\mathbf{r}}\times\mathrm{d}\mathbf{E}_{m-1},\qquad \hat{\mathbf{r}}=\hat{\mathbf{r}}_{\mathrm{final}}.
The traced ray tubes are evaluated to obtain information about the respective optical path
\mathrm{OPL} = \sum_{i=0}^{m-1} n_i\,|\mathbf{P}_{i+1}-\mathbf{P}_i|,
the amplitude factor AFPW,SW, and the paraxial eikonal SPW,SW [16,58] at the sampling grid points on the detector plane. A detailed description of the evaluation procedure, which utilizes matrix optics [16–18,45] and differential ray tracing [34,46], is given in Appendix B. Finally, the field on a detector sampling grid point can be written as
(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{\mathrm{sgp}} = AF_{\mathrm{PW}}\,\exp(\mathrm{i}k_0\,\mathrm{OPL})\,\exp(\mathrm{i}k_0 S_{\mathrm{PW}})\,(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{m-1},
with plane or
(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{\mathrm{sgp}} = \frac{\mathrm{i}}{\lambda_0}\,AF_{\mathrm{SW}}\,\exp(\mathrm{i}k_0\,\mathrm{OPL})\left(1-\frac{1}{\mathrm{i}k_0\,\mathrm{OPL}}\right)\exp(\mathrm{i}k_0 S_{\mathrm{SW}})\,(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{m-1},
with spherical wave decomposition of the input field.
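As an illustration of the bookkeeping involved, the optical path length of a traced ray and the resulting accumulated phase can be computed from the intersection points. This is a sketch under the assumption of straight segments between surfaces; names are illustrative, not from the paper:

```python
import numpy as np

def optical_path_length(points, n):
    """OPL = sum_i n_i |P_{i+1} - P_i| over the ray polyline.
    points: (m+1, 3) array of intersections P_0 ... P_m; n: the m refractive indices."""
    seg = np.diff(np.asarray(points, dtype=float), axis=0)  # segment vectors P_{i+1} - P_i
    return float(np.sum(np.asarray(n) * np.linalg.norm(seg, axis=1)))

def phase_factor(opl, lam0):
    """Accumulated phase exp(i k0 OPL) with k0 = 2*pi/lam0 (vacuum wavelength)."""
    return np.exp(1j * 2.0 * np.pi / lam0 * opl)
```

For example, a two-segment path of geometrical lengths 1 and 2 in media with n = 1.0 and n = 1.5 gives OPL = 1·1 + 1.5·2 = 4.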

D. VRBDI

The VRBDI itself is then given by

(\mathbf{E},\mathbf{H})_{\mathrm{sgp}} = \sum_{\text{input plane}} (\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{\mathrm{sgp}}.
It should be noted that, if one member of a ray tube is terminated, the whole ray tube is terminated. Furthermore, it is worth noting that the VRBDI, in contrast to the geometrical optics field tracing of Wyrowski and Kuhn [24], is a numerical implementation of a diffraction integral. Therefore, all input field components interact with each other on the output plane according to the Huygens–Fresnel principle [50] or its equivalent description by plane waves [19]. In Fig. 9, a flow chart of the full algorithm is shown. The irradiance can then be calculated by use of Eqs. (14) and (28).


Fig. 9. Flow chart of the VRBDI algorithm.


4. COMPARISON BY A TOY MODEL

In this section, the VRBDI is compared to the stepwise propagation method of the second section. The second toy model from Fig. 2 is chosen, and the parameters are changed according to Table 7. Again, the input field is given by Eq. (30) with w0 = 0.5 mm, E0,y = 0, and using Eqs. (1)–(4) for the other components. It is worth noting that the optimal conditions for the plane wave decomposition are reciprocal to those for the spherical wave decomposition. For instance, in Table 7 the input field for the plane wave decomposition is chosen on a wider spatial mesh (the number of pixels is maintained while the overall dimensions of the mesh are increased), which yields the needed higher spatial frequency resolution. In contrast, for the spherical wave decomposition, a sufficient spatial resolution is most important; therefore, a denser spatial mesh is better.


Table 7. Simulation Parameters for the Comparison Between the Vectorial Ray-Based Diffraction Integral and the Stepwise Method Based on Vectorial Diffraction Integralsᵃ

The stepwise method is again checked for power conservation:

  • δ1,0 = (P1 − P0)/P0 = 6.7·10⁻¹³,
  • δ1 = (P1r + P1t − P1)/P1 = 3.6·10⁻⁸,
  • δ2,1 = (P2 − P1t)/P1t = 1.3·10⁻¹².

From the plane wave decomposition, 1069 components, i.e., plane waves, and from the spherical wave decomposition, 59,381 points were chosen for the VRBDI. In both cases, a relative threshold with respect to the strongest component, 10⁻⁹ for the spherical wave decomposition and 10⁻¹⁰ for the plane wave decomposition, defines a cut-off level. The following calculation times are obtained for the three methods:

  • stepwise: 27 min,
  • VRBDI-SW: 198 min,
  • VRBDI-PW: 3.5 min.

Obviously, choosing lower cut-off levels increases the accuracy but also the calculation time; a trade-off between the two must therefore be found. Most important, however, is the proper choice of the decomposition type. In the presented case, VRBDI-PW is roughly 57 times faster than VRBDI-SW, which corresponds to the ratio of the respective numbers of input components: 59,381/1069 ≈ 56. The results are quite comparable, as can be seen in Fig. 10. However, some distinctions are observable: in the deviations of VRBDI-PW from the stepwise method, some numerical noise is visible in the low-field regions.
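The cut-off described above can be sketched as a simple relative-threshold selection on the decomposition amplitudes; this is an illustrative reading of the procedure, not the authors' code:

```python
import numpy as np

def select_components(amplitudes, rel_threshold):
    """Indices of the decomposition components kept for the VRBDI sum:
    those whose magnitude exceeds rel_threshold times the strongest one."""
    mags = np.abs(np.asarray(amplitudes))
    return np.flatnonzero(mags > rel_threshold * mags.max())
```

With a threshold of 10⁻⁹, a component nine orders of magnitude weaker than the strongest one is still kept, while, e.g., a 10⁻¹²-level component is dropped.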


Fig. 10. Comparison between different propagation methods by toy model 2 shown in Fig. 2 with the parameters from Table 7. Left: electric field components propagated by a vectorial ray-based diffraction integral where a linearly x-polarized Gaussian beam is decomposed by spherical waves (VRBDI-SW). Middle: relative deviation from a stepwise propagation by vectorial diffraction integrals. Right: relative deviation of the electric field components propagated by a VRBDI with plane wave decomposition (VRBDI-PW) from the ones obtained by the stepwise method. The viewing angle is perpendicular to S2. The corresponding magnetic components can be found in Appendix C.


The irradiance calculated by VRBDI-SW and its relative deviation from the stepwise method, as well as the relative deviation of VRBDI-PW from VRBDI-SW, are shown in Fig. 11. The latter is quite small, ΔI_PV ≈ ±6·10⁻⁸, and seems to be caused by sampling issues. Therefore, the uncertainty for both is represented by the relative deviation of VRBDI-SW from the stepwise method, which is ΔI_PV ≈ 6·10⁻⁶. From the shape of the deviation, one can deduce that the beam spot generated by the stepwise method is slightly broader. By consideration of power conservation,

  • δSW,2 = (PSW − P2)/P2 = 1.6·10⁻⁵,
  • δPW,2 = (PPW − P2)/P2 = 1.6·10⁻⁵,

and the bound that is essentially set by the above δ1 = 3.6·10⁻⁸, one can conclude that the stepwise method delivers the more exact solution. The relative deviations between the field components calculated by VRBDI-PW and VRBDI-SW are plotted in Fig. 12. Now, the numerical noise can be clearly seen.


Fig. 11. Comparison between different propagation methods by toy model 2 shown in Fig. 2 with the parameters from Table 7. Left: irradiance obtained by a vectorial ray-based diffraction integral where a linearly x-polarized Gaussian beam is decomposed by spherical waves (VRBDI-SW). Middle: relative deviation from a stepwise propagation by vectorial diffraction integrals. Right: relative deviation of the irradiance obtained by a vectorial diffraction integral with plane wave decomposition (VRBDI-PW) from the one obtained by VRBDI-SW. The viewing angle is perpendicular to S2.



Fig. 12. Comparison between different propagation methods by toy model 2 shown in Fig. 2 with the parameters from Table 7: relative deviation of the electric field components propagated by a vectorial diffraction integral with spherical wave decomposition (VRBDI-SW) of a linearly x-polarized Gaussian beam from the ones propagated by a VRBDI with plane wave decomposition (VRBDI-PW). The viewing angle is perpendicular to S2. The corresponding magnetic components can be found in Appendix C.


5. INTERFEROMETER SIMULATION

In order to illustrate the versatility of the VRBDI method, it is applied to the calculation of the diffraction correction for an interferometer, as it was used to measure the diameter of silicon spheres [1113]. The parameters of the simulated setup are identical to the ones in [13], where a Gaussian beam-tracing algorithm, known as vectorial general astigmatic Gaussian beam (VGAGB) tracing, was utilized.

As a rule of thumb for deciding which decomposition type of the input field is preferable, i.e., which one needs the fewest components, it is suggested to consider two extreme situations:

  • A large collimated beam close to the first aperture can efficiently be composed from a few plane waves, VRBDI-PW.
  • A small divergent source far away from the first aperture can efficiently be composed from a few point sources, VRBDI-SW.

Following this rule, VRBDI-SW is chosen because the source, a Gaussian beam with waist radius w0 = 0.28562 mm, can be sufficiently sampled by 31×31 pixels at its waist location, which is 683.6 mm away from the first interferometer component. Furthermore, a relative threshold of 10⁻⁶ yields 793 components for each VRBDI run.

The resolution and size of the detectors were chosen to be 255×255 pixels over 8×8 mm². The interferometer is operated by wavelength shifting. As in [13], the full procedure is simulated here as well, but only for the ideal configuration of the interferometer (no Monte Carlo simulations). For each measurement, i.e., for an empty and a loaded cavity, indicated as E and L, respectively, seven interferograms IE,L(λ), one for each wavelength, are captured and fed into the phase retrieval formula of [13]. For each wavelength, three beam paths are considered, corresponding to zero, one, and two round trips in the empty or loaded gap. Therefore, 2×7×3 = 42 VRBDI runs have to be performed. With Eqs. (14) and (28), the interferograms IE,L(λ) are obtained from the respective sum of the (complex-valued) fields from each path. In Fig. 13, the phase biases ΔϕE,L are plotted. We refer to the bias as the difference between the measured value and the correct value of a quantity; a correction to a measured value is therefore the respective negative bias. The difference from the VGAGB calculation, δΔϕE,L = ΔϕE,L,VRBDI − ΔϕE,L,VGAGB, is shown in Fig. 14. Table 8 compares the resulting measurement biases for the empty gap, the loaded gap, and the resulting sphere diameter from VRBDI-SW and VGAGB. These values are calculated directly from δΔϕE,L(0,0); since an odd sampling was chosen, the central value already exists and need not be interpolated. The agreement is quite remarkable. However, by inspection of Fig. 14, one can see that this is only true in a relatively small central area, i.e., the paraxial regime, where the VGAGB method can be considered to be accurate. It should be noted here that the meaning of paraxial and nonparaxial depends on the desired accuracy: nonparaxial means that the paraxial approximation can no longer be applied because the desired accuracy is no longer met.
For a metrologist working in the field of ultraprecision interferometry, the paraxial approximation can potentially be much more restrictive than commonly assumed. However, it is shown here that it is well justified for the desired accuracy at the extremal position of the interferogram phases in the described interferometer, i.e., the location where the interferograms obtained from this setup are usually evaluated. Dividing the calculation time of 16.6 h by 42 yields a mean calculation time per run of ca. 24 min; further division by 793 results in ca. 1.8 s per input field component.


Fig. 13. Phase bias for empty and loaded cavity in the ideal configuration of the interferometer described in [1113] but calculated by the vectorial ray-based diffraction integral with spherical wave decomposition (VRBDI-SW).



Fig. 14. Difference of the phase bias for empty (top) and loaded cavity in the ideal configuration of the interferometer described in [1113] between the calculation by a vectorial ray-based diffraction integral with spherical wave decomposition (VRBDI-SW) and a vectorial general astigmatic Gaussian beam-tracing algorithm (VGAGB) [13].



Table 8. Measurement Bias for Empty Gap ΔDE, Loaded Gap ΔDL, and Resulting Sphere Diameter ΔDᵃ

6. DISCUSSION

The stepwise method, as introduced in the second section, represents, to our knowledge, the most rigorous and still practically computable propagation method applicable to the optical interface problem, where it can successfully be used to test other propagation methods such as the VRBDI.

To provide a more physically motivated and intuitive interpretation, we refer to the Huygens–Feynman–Fresnel principle [31,32]. Provided that the Huygens–Fresnel principle and the Maxwell equations [48–50] are equivalent to the quantum-mechanical description of a single photon, the stepwise method solves the path integral problem for each segment of the optical system individually. It constitutes the full brute-force approach: for every point on the target surface, the coherent sum of all wavelets emerging from all possible input positions is calculated numerically, although spatial discretization is unavoidable.

Following this picture, the VRBDI would be an approximation whose main assumption is that the light interaction at the interfaces of an optical system can be approximately described deterministically. In this sense, the traced rays represent the different possibilities that remain when the influence of an aperture on the photon’s momentum uncertainty is neglected [59]:

\Delta x\,\Delta p_x \ge \frac{\hbar}{2} \;\;\Longrightarrow\;\; \sin\Delta\alpha_x \ge \frac{\lambda}{4\pi\,\Delta x}.
Although the right-hand side of Eq. (40) is equivalent to the left-hand side, the cautious reader may consider this analogy only as a rough estimate. In any case, for a valid description of light propagation by the VRBDI, the apertures in an optical system should be much larger than the wavelength.
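One way to spell out the step behind Eq. (40): with the photon momentum p = ħk0 = 2πħ/λ and the transverse momentum spread Δp_x = p sin Δα_x imposed by an aperture of width Δx, the uncertainty relation yields the stated bound on the angular spread,

```latex
% Heuristic reading of Eq. (40): an aperture of width \Delta x enforces
% a minimum angular spread \Delta\alpha_x of the photon momentum.
\Delta x\,\Delta p_x \ge \frac{\hbar}{2},
\qquad
\Delta p_x = p\sin\Delta\alpha_x,
\qquad
p = \hbar k_0 = \frac{2\pi\hbar}{\lambda}
\quad\Longrightarrow\quad
\sin\Delta\alpha_x \ge \frac{\hbar}{2\,\Delta x\,p} = \frac{\lambda}{4\pi\,\Delta x}.
```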

More generally, any confinement of photon trajectories should increase the momentum uncertainty. This could explain why focal regions cannot be described by deterministic and straight photon trajectories. The question as to whether it is possible to properly describe the situation by curved trajectories goes beyond the scope of this paper [60].

Besides these fundamental considerations, there are also technical issues that prevent the proper usage of the VRBDI in focal regions. However, by using Eqs. (12) and (13) or, alternatively, Eqs. (1)–(4), one can easily close this reachability gap. Alternatively, it seems natural to assume that proper usage of the respective angle-to-angle and point-to-angle eikonals [58] could also solve this issue. However, this has not been attempted by the authors so far.

For proper handling of small apertures, an intermediate field has to be sampled on the apertured surface. Approximately, by assuming Kirchhoff boundary conditions [50], the field in the aperture can simply be obtained by multiplication with the respective aperture function. Then, further propagation can be started from there.

7. CONCLUSION

In order to simulate the operation of laser interferometers, a vectorial ray-based diffraction integral method has been developed. The tangential electric components of a continuous monochromatic wave field on a plane, which is immersed in a transparent nonmagnetic homogeneous medium, are necessary and sufficient to obtain the other electromagnetic components by use of the angular plane wave spectrum. From the latter, vectorial diffraction integrals are deduced, which satisfy Maxwell and Helmholtz equations. By exploitation of the kinematic and dynamic boundary conditions of the Maxwell equations, a stepwise surface-to-surface propagation method is established, which is still a good approximation when the interface between two linear, transparent, nonmagnetic, and homogeneous media is curved.

For application of the VRBDI method, an input field for a general optical system is decomposed either into spherical waves, i.e., point sources, or plane waves. By means of ray aiming, an input position of a point source or, alternatively, an input angle of a plane wave can be related to a certain sampling position on a detector plane at the end of this optical system. Differential ray tubes, each consisting of an aimed base ray and four close, i.e., paraxial, parabasal rays, are launched and traced in parallel through the optical system to their respective sampling point on the detector plane. The optical path length and a ray matrix are obtained for each ray tube. By use of the respective paraxial eikonals and further adaptation of the diffraction integrals, the electromagnetic field at all sampling points is calculated and summed to all the other traced contributions from the decomposed input field. For the location of the detector plane, caustics as well as focal and image regions have to be avoided. However, propagation to these regions can be carried out by common free-space propagation methods.

Direct comparison of the VRBDI with the stepwise method in a simple curved-interface toy model, using power conservation as a criterion, reveals that the stepwise method is the more exact one, but the VRBDI is a good approximation when the apertures are much larger than the wavelength. In the chosen toy model, the remaining error of the simulated irradiance is at the parts-per-million level, which should be almost unresolvable by experiment. With a proper choice of the input field decomposition, the VRBDI is already faster than the stepwise method. An improper choice can lead to longer computation times, though this drawback applies only to simple systems, which are the only ones the stepwise method can still model: macroscopic optical systems with many (closely spaced) surfaces cannot be modeled with the stepwise method in reasonable computation times.

The simulation of an interferometer, which was previously described by a paraxial Gaussian beam tracing method, showed very good agreement in the paraxial regime, whereas deviations between the VRBDI and the paraxial method at more distant locations from the optical axis already indicate the expectable failure of the paraxial method in the nonparaxial regime.

APPENDIX A: DETAILED DESCRIPTION OF RAY AIMING AND TEST TOOL

For the sake of brevity, only the ray aiming for the spherical wave decomposition case is explained here. Then, P0 is a fixed sampling grid point and r̂ is free. More exactly, the continuous independent variables are r̂x and r̂y when r̂z = √(1 − r̂x² − r̂y²) is chosen to be dependent. When the intersection point Pi on the detector plane is expressed in the local detector frame defined by the orthonormal set (N̂x, N̂y, N̂z) and pivot vector T,

x' = (\mathbf{P}_i-\mathbf{T})\cdot\hat{\mathbf{N}}_x,\qquad y' = (\mathbf{P}_i-\mathbf{T})\cdot\hat{\mathbf{N}}_y,
r̂x and r̂y can formally be expressed as functions of these coordinates:
\hat{r}_x = \hat{r}_x(x',y'),\qquad \hat{r}_y = \hat{r}_y(x',y').
Now, (A2) can be approximated by proper interpolation functions. For this purpose, we chose Zernike polynomials [50]. The definition space of the Zernike polynomials requires
\rho = \frac{\sqrt{(x')^2+(y')^2}}{\rho_{\max}},\qquad \varphi = \operatorname{atan2}(y',x').
The largest reachable radius ρmax is used to confine ρ to [0,1]. Our goal is to write the components of r̂ as Σj Zj(ρ,φ)aj. The following equation system must then be solved:
[Z]\,\mathbf{a} =
\begin{bmatrix}
Z_0^0(\rho_1,\varphi_1) & \cdots & Z_u^u(\rho_1,\varphi_1)\\
\vdots & & \vdots\\
Z_0^0(\rho_\nu,\varphi_\nu) & \cdots & Z_u^u(\rho_\nu,\varphi_\nu)
\end{bmatrix}
\begin{bmatrix} a_1\\ \vdots\\ a_\zeta \end{bmatrix}
=
\begin{bmatrix} F(\rho_1,\varphi_1)\\ \vdots\\ F(\rho_\nu,\varphi_\nu) \end{bmatrix}
= \mathbf{F},
where F = r̂x or r̂y. Thus, F must contain uncorrelated values, e.g., randomly chosen or discretely sampled, of r̂x or r̂y, respectively. In Eq. (A4), the number of Zernike polynomials ζ, i.e., the number of columns of [Z], should match the number of detected ray intersections ν, i.e., the number of rows of [Z]. However, a least-squares solution for ν ≥ ζ is readily obtained by [61]
\mathbf{a} = \left([Z]^{\mathsf{T}}[Z]\right)^{-1}[Z]^{\mathsf{T}}\mathbf{F}.
When a is determined, the sought versor components to reach the sampling grid points are obtained directly from
\mathbf{F}_{\mathrm{sgp}} = [Z]_{\mathrm{sgp}}\,\mathbf{a},
where [Z]sgp is constructed by use of the local coordinates (x′sgp, y′sgp) of the chosen detector sampling grid points and (A3). Then, with r̂x,sgp and r̂y,sgp, new rays are traced that should “hit” the sampling grid points.
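The fit of Eqs. (A4)–(A6) is an ordinary linear least-squares problem. The sketch below uses a generic design matrix in place of the Zernike basis (any polynomial basis evaluated at the ray intersections works the same way) and `numpy.linalg.lstsq` instead of the explicit normal-equation inverse of Eq. (A5), which is numerically safer; the function names are ours:

```python
import numpy as np

def fit_coefficients(Z, F):
    """Least-squares solution of [Z] a = F, i.e., Eq. (A5), for nu >= zeta rows;
    lstsq avoids forming ([Z]^T [Z])^-1 explicitly."""
    a, *_ = np.linalg.lstsq(Z, F, rcond=None)
    return a

def aim_at_grid(Z_sgp, a):
    """F_sgp = [Z]_sgp a, Eq. (A6): versor components at the target grid points."""
    return Z_sgp @ a
```

For instance, fitting a function that is exactly linear in the detector coordinates recovers its coefficients and reproduces it at any grid point.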

In order to evaluate the quality of the ray aiming and the reachability of the detector sampling grid points, a test tool has been developed. Figure 15 illustrates the test method. To the aperture plane a sampling grid is assigned, holding either zero for blocking or one for passing a ray. From each source point, whose intensity corresponds to the numbering in Fig. 15 (left), a ray is directed onto each of the nonblocking aperture grid points and traced to the detector. Then, a simple incoherent additive sampling is done on the detector grid.


Fig. 15. Left: distribution of chosen source points on the input plane for the test of reachability of the detector sampling grid positions of an optical system. The numbering corresponds to the respective source intensity. Right: example of an optical system under test (only marginal rays from a single source point are shown).


For the central source point, i.e., point 9 in Fig. 15, ray aiming is carried out and the positioning error is evaluated according to

\delta_{\mathrm{ra}} = \frac{\sqrt{(x'-x_{\mathrm{sgp}})^2+(y'-y_{\mathrm{sgp}})^2}}{\sqrt{\Delta x_{\mathrm{sgp}}^2+\Delta y_{\mathrm{sgp}}^2}},
where (x′, y′) are now the intersection coordinates of the re-aimed ray and Δxsgp and Δysgp are the sampling intervals of the detector-plane grid. Successful ray aiming should therefore at least result in δra < 0.5, but preferably in a much smaller value. Thus, when δra ≳ 0.5, the ray aiming is poor.
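The positioning error of Eq. (A7) is cheap to evaluate per aimed ray; a one-line sketch with illustrative names:

```python
import numpy as np

def ray_aiming_error(x, y, x_sgp, y_sgp, dx_sgp, dy_sgp):
    """Eq. (A7): displacement of the aimed-ray intersection (x, y) from the target
    grid point (x_sgp, y_sgp), normalized by the diagonal sampling interval.
    Values below 0.5 indicate that the ray lands closer than half a grid diagonal."""
    return np.hypot(x - x_sgp, y - y_sgp) / np.hypot(dx_sgp, dy_sgp)
```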

In Fig. 16, the result for a simple imaging system [distance g = 200 mm from the input plane (b0,x = b0,y = 5 mm) to the lens middle plane], consisting of a biconvex spherical lens (focal length f1 = 50 mm, diameter dL1 = 40 mm, thickness t1 = 10 mm) and a circular aperture (diameter dA1 = 4 mm) directly in front of the lens, is shown at Δz ≈ 51.7 mm behind the rear lens surface. One can see that the nine source points (Fig. 15, left) yield separated spots; thus, there is almost no overlap. The values for bx,y and b̄x,y given in Fig. 16 are quite different; for a good overlap, these values should be almost equal. The ray-aiming error for the central source point is quite small. Figure 17 shows the same for different image distances and a larger aperture, dA2 = 30 mm. Directly behind the lens (Δz = 1 mm) and far away (Δz = 1 m), overlap and ray aiming are good, although the ray aiming for the larger aperture is not as good as for the small aperture. Close to the image plane at Δz ≈ 61.7 mm, there is insufficient to no overlap, and the ray-aiming error becomes very large.


Fig. 16. Test of overlap and ray aiming for a simple imaging system with a 4 mm aperture. Left: additive sampling of the rays from the nine input source points yields nine separated spots on the detector grid. The plotted quantity is arbitrary and corresponds to the ray density times the respective source intensity. The depicted values are bx,y, total point spread width including all sources; b¯x,y, mean of the point spread widths taken from the individual source points; b, largest width of the point spread of the central source point, i.e., source point 9 in Fig. 15, which is also the calculation window size of the right picture; Δz, distance from the last lens surface apex to the detector plane. Right: the ray aiming error according to Eq. (A7) for the central source point. The viewing angle is perpendicular to the detector plane.



Fig. 17. Test of overlap and ray aiming for a simple imaging system with a 30 mm aperture. Left: additive sampling of the rays from the nine input source points on the detector grid. The plotted quantity is arbitrary and corresponds to the ray density times the respective source intensity. The depicted values are bx,y, total point spread width including all sources; b¯x,y, mean of the point spread widths taken from the individual source points; b, largest width of the point spread of the central source point, i.e., source point 9 in Fig. 15, which is also the calculation window size of the right picture; Δz, distance from the last lens surface apex to the detector plane. Right: the ray aiming error according to Eq. (A7) for the central source point. The viewing angle is perpendicular to the detector plane.


In order to illustrate the application to a different optical system and to show that the optimal overlap and ray aiming are not always close to or far away from the last lens surface, a second lens (f2=100mm, dL2=40mm, t2=10mm) is placed confocally to the image position of the first lens. The results for various distances are shown in Fig. 18. At Δz=50mm, the ray aiming and overlap are better than at Δz=5mm. For Δz=5m, both are poor.


Fig. 18. Test of overlap and ray aiming for a telescope with a 30 mm aperture. Left: additive sampling of the rays from the nine input source points on the detector grid. The plotted quantity is arbitrary and corresponds to the ray density times the respective source intensity. The depicted values are bx,y, total point spread width including all sources; b¯x,y, mean of the point spread widths taken from the individual source points; b, largest width of the point spread of the central source point, i.e., source point 9 in Fig. 15, which is also the calculation window size of the right picture; Δz, distance from the last lens surface apex to the detector plane. Right: the ray aiming error according to Eq. (A7) for the central source point. The viewing angle is perpendicular to the detector plane.



Fig. 19. Situation after ray tracing: the points Q±x,y on the plane normal to r^ have to be found from known intersections P and P±x,y and known versors r^ and r^±x,y.


At caustics and in image or focal regions, certain locations in the area of the detector cannot be reached by rays, while in other locations rays accumulate in the vicinity of certain sampling coordinates, i.e., the overlap is poor. Therefore, the relation between starting condition and position on the detector plane is not unique, i.e., Eq. (A2) is no longer bijective. Use of Eq. (A6) then yields distances between (x′, y′) and (xsgp, ysgp) that are much larger than Δxsgp or Δysgp, and δra becomes large.

One can conclude that the test tool makes it possible to find detector positions where the prerequisites for the calculation of a ray-based diffraction integral, i.e., overlap and hitting of the sampling points, are fulfilled. Generally, it should now be clear that caustics as well as image and focal regions have to be avoided.

APPENDIX B: DETAILED DESCRIPTION OF FIELD TRACING

From each aimed ray, i.e., each ray that passes close to an assigned detector pixel and has been found by ray aiming, a ray tube is constructed at the input position. Such a ray tube consists of the base ray, which is identical to the aimed ray, and four parabasal rays. The parabasal rays are generated either by translating the base ray along two orthogonal directions or by tilting it around two orthogonal axes. The finite differences, i.e., the translation magnitude D or the tilt angle β, should be chosen as small as numerically reasonable [46]. The orthogonal directions can be obtained from the arbitrary orthonormal set (e^x,e^y,r^), which is defined by

\hat{e}_x = \frac{[\hat{r}_z,\; 0,\; -\hat{r}_x]^{T}}{\sqrt{\hat{r}_x^2 + \hat{r}_z^2}}, \qquad \hat{e}_y = \hat{r} \times \hat{e}_x.
Then, the parallel rays start from
P_{0,\pm x} = P_0 \pm D\,\hat{e}_x, \qquad P_{0,\pm y} = P_0 \pm D\,\hat{e}_y,
while the divergent ray directions are given by
\hat{r}_{\pm x} = \hat{r}\cos\beta \pm \hat{e}_x\sin\beta, \qquad \hat{r}_{\pm y} = \hat{r}\cos\beta \pm \hat{e}_y\sin\beta.
Parallel ray initialization is used when a plane wave decomposition of the input field is chosen, while a spherical wave decomposition implies the use of the divergent one.
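As a concrete illustration, the frame construction of Eq. (B1) and the two ray-tube initializations of Eqs. (B2) and (B3) can be sketched in a few lines of NumPy. This is our own minimal sketch, not the authors' code; the function names (`base_frame`, `ray_tube`) and default step sizes are our assumptions, and the frame of Eq. (B1) is undefined for a base ray parallel to the y axis.

```python
import numpy as np

def base_frame(r):
    """Orthonormal set (e_x, e_y, r) of Eq. (B1) for a unit direction r.

    Undefined when r is parallel to the y axis (r_x = r_z = 0)."""
    rx, _, rz = r
    e_x = np.array([rz, 0.0, -rx]) / np.sqrt(rx**2 + rz**2)
    e_y = np.cross(r, e_x)
    return e_x, e_y

def ray_tube(P0, r, D=1e-6, beta=1e-6, parallel=True):
    """Start points and directions of the four parabasal rays.

    parallel=True  -> translated rays, Eq. (B2) (plane-wave decomposition)
    parallel=False -> tilted rays, Eq. (B3) (spherical-wave decomposition)"""
    e_x, e_y = base_frame(r)
    if parallel:
        starts = [P0 + s * D * e for e in (e_x, e_y) for s in (+1.0, -1.0)]
        dirs = [r] * 4
    else:
        starts = [P0] * 4
        dirs = [r * np.cos(beta) + s * np.sin(beta) * e
                for e in (e_x, e_y) for s in (+1.0, -1.0)]
    return starts, dirs
```

Because the tilted directions mix two orthogonal unit vectors with cos β and sin β, they remain unit vectors without renormalization.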

When the rays of a ray tube are traced through an optical system, they remain paraxial with respect to the base ray if D or β are chosen sufficiently small, so that the positions and directions of the output rays can be described by matrix optics [45,46]:

\begin{bmatrix} \breve{x} \\ \breve{y} \\ \breve{x}' \\ \breve{y}' \end{bmatrix}_2 = \begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix} \begin{bmatrix} \breve{x} \\ \breve{y} \\ \breve{x}' \\ \breve{y}' \end{bmatrix}_1 = \begin{bmatrix} A_{11} & A_{12} & B_{11} & B_{12} \\ A_{21} & A_{22} & B_{21} & B_{22} \\ C_{11} & C_{12} & D_{11} & D_{12} \\ C_{21} & C_{22} & D_{21} & D_{22} \end{bmatrix} \begin{bmatrix} \breve{x} \\ \breve{y} \\ \breve{x}' \\ \breve{y}' \end{bmatrix}_1.
It is worth noting that, in this paper, the reduced slope, e.g., x˘′ = n(z˘) dx˘(z˘)/dz˘, is used [18]. The coordinates x˘ and y˘ live in a plane orthogonal to the base ray direction r^, i.e., the base plane, while their slopes x˘′ and y˘′ are taken along r^. Obviously, e^x and e^y can readily be used to define the initial base plane. Thus, the orthonormal set (e^x,e^y,r^) defines the base ray frame, i.e., the coordinate system that is fixed to the respective base ray. However, when the base ray undergoes refractions or reflections, its direction changes. Therefore, at every interface of the optical system, when r^r,t = M r^, the same rotation M has to be applied to e^x: for example, e^x,r,t = M e^x, while e^y,r,t = r^r,t × e^x,r,t ensures the right-hand chirality of the orthonormal set. In order to find M, Rodrigues' rotation formula [62] is used, which describes how a vector is rotated around an arbitrary rotation axis given by a versor v. For the present problem, one can obtain v by
\mathbf{v} = \frac{\hat{r} \times \hat{r}_{r,t}}{|\hat{r} \times \hat{r}_{r,t}|},
while the rotation angle is
\theta = \arccos(\hat{r} \cdot \hat{r}_{r,t}).
The special case r^r,t = r^ would yield a division by zero in Eq. (B5), which has to be prevented by explicitly setting v = 0 in this case. There are two possible rotation directions, which are accounted for by probing with ±v and taking the solution that minimizes |M(v)r^ − r^r,t|, where M(v) is obtained by Rodrigues' rotation formula [62]:
M(\mathbf{v}) = I + \sin\theta\, V_\times + (1 - \cos\theta)\, V_\times^2,
where I is the identity matrix and
V_\times = \begin{bmatrix} 0 & -v_z & v_y \\ v_z & 0 & -v_x \\ -v_y & v_x & 0 \end{bmatrix}.
Extensive matrix multiplication yields
M(\mathbf{v}) = \begin{bmatrix} 1 - (1-\cos\theta)(v_y^2 + v_z^2) & -v_z\sin\theta + (1-\cos\theta)v_x v_y & v_y\sin\theta + (1-\cos\theta)v_x v_z \\ v_z\sin\theta + (1-\cos\theta)v_x v_y & 1 - (1-\cos\theta)(v_x^2 + v_z^2) & -v_x\sin\theta + (1-\cos\theta)v_y v_z \\ -v_y\sin\theta + (1-\cos\theta)v_x v_z & v_x\sin\theta + (1-\cos\theta)v_y v_z & 1 - (1-\cos\theta)(v_x^2 + v_y^2) \end{bmatrix}.
It is worth noting that, in the special case where v=0, it follows that M(v)=I.
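The frame transport at an interface can be sketched as follows: `rodrigues` implements Eq. (B7), and `transport_frame` probes both rotation directions and handles the special case r^r,t = r^ as described. The function names and the numerical tolerance are our own assumptions.

```python
import numpy as np

def rodrigues(v, theta):
    """Rotation matrix about the unit axis v by angle theta, Eq. (B7)."""
    V = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + np.sin(theta) * V + (1.0 - np.cos(theta)) * (V @ V)

def transport_frame(r, r_rt, e_x):
    """Rotate e_x by the same rotation M that maps r onto r_rt."""
    cross = np.cross(r, r_rt)
    n = np.linalg.norm(cross)
    if n < 1e-14:                      # special case r_rt == r: M = I
        return np.eye(3), e_x, np.cross(r_rt, e_x)
    theta = np.arccos(np.clip(np.dot(r, r_rt), -1.0, 1.0))
    v = cross / n
    # probe both rotation directions, keep the one mapping r onto r_rt
    M = min((rodrigues(s * v, theta) for s in (+1.0, -1.0)),
            key=lambda R: np.linalg.norm(R @ r - r_rt))
    e_x_new = M @ e_x
    e_y_new = np.cross(r_rt, e_x_new)  # restore right-hand chirality
    return M, e_x_new, e_y_new
```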

While the initial ray vectors are simply given by [46]

p_{1,\pm x} = \begin{bmatrix} \pm D \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad p_{1,\pm y} = \begin{bmatrix} 0 \\ \pm D \\ 0 \\ 0 \end{bmatrix}, \quad q_{1,\pm x} = \begin{bmatrix} 0 \\ 0 \\ \pm n_0\tan\beta \\ 0 \end{bmatrix}, \quad q_{1,\pm y} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \pm n_0\tan\beta \end{bmatrix},
where the p1,±x,y are needed for the plane wave decomposition and the q1,±x,y for the spherical wave decomposition of the input field, the final ray vectors must be obtained from the intersections of the parabasal rays with the x˘, y˘ plane of the base ray frame where, in turn, the base ray intersects the detector plane. Figure 19 illustrates the problem of finding these intersection points. In analogy to Eq. (31), one can obtain the sought intersection points from any known ray positions and versors by
Q_{\pm x,y} = \frac{(P - P_{\pm x,y}) \cdot \hat{r}}{\hat{r} \cdot \hat{r}_{\pm x,y}}\, \hat{r}_{\pm x,y} + P_{\pm x,y}.
The points Q±x,y are still 3D vectors in the global frame. The local coordinates on the x˘, y˘-plane can be obtained from
\breve{x}_{\pm x,y} = (Q_{\pm x,y} - P) \cdot \hat{e}_x, \qquad \breve{y}_{\pm x,y} = (Q_{\pm x,y} - P) \cdot \hat{e}_y.
The versors r^±x,y also have to be transformed into the base ray frame:
\hat{s}_{\pm x,y} = \begin{bmatrix} \hat{r}_{\pm x,y} \cdot \hat{e}_x \\ \hat{r}_{\pm x,y} \cdot \hat{e}_y \\ \hat{r}_{\pm x,y} \cdot \hat{r} \end{bmatrix}.
Now the final ray vectors can be obtained:
p_{2,\pm x},\, q_{2,\pm x} = \begin{bmatrix} \breve{x}_{\pm x} \\ \breve{y}_{\pm x} \\ n_{m-1}\,\hat{s}_{\breve{x},\pm x}/\hat{s}_{\breve{z},\pm x} \\ n_{m-1}\,\hat{s}_{\breve{y},\pm x}/\hat{s}_{\breve{z},\pm x} \end{bmatrix}, \qquad p_{2,\pm y},\, q_{2,\pm y} = \begin{bmatrix} \breve{x}_{\pm y} \\ \breve{y}_{\pm y} \\ n_{m-1}\,\hat{s}_{\breve{x},\pm y}/\hat{s}_{\breve{z},\pm y} \\ n_{m-1}\,\hat{s}_{\breve{y},\pm y}/\hat{s}_{\breve{z},\pm y} \end{bmatrix}.
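The chain of Eqs. (B12)-(B14) can be condensed into one helper that turns a traced parabasal ray into a final ray vector; this is our own sketch, and `final_ray_vector` together with its argument names is hypothetical.

```python
import numpy as np

def final_ray_vector(P, r, e_x, e_y, P_pb, r_pb, n_m1=1.0):
    """Express a parabasal ray (point P_pb, unit direction r_pb) in the
    base-ray frame (e_x, e_y, r) anchored at the base-ray intersection P.

    n_m1 is the refractive index of the final medium."""
    # Eq. (B12): intersect the parabasal ray with the plane through P normal to r
    t = np.dot(P - P_pb, r) / np.dot(r, r_pb)
    Q = P_pb + t * r_pb
    # Eq. (B13): local transverse coordinates on the base plane
    x = np.dot(Q - P, e_x)
    y = np.dot(Q - P, e_y)
    # Eq. (B14): direction in the base-ray frame, then reduced slopes
    s = np.array([np.dot(r_pb, e_x), np.dot(r_pb, e_y), np.dot(r_pb, r)])
    return np.array([x, y, n_m1 * s[0] / s[2], n_m1 * s[1] / s[2]])
```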
The A, B, C, and D matrices are directly related to the input and output ray vectors, whose components are denoted by the pre-superscripts [46]:
\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix} = \begin{bmatrix} {}^{1}p_{2,\pm x}/{}^{1}p_{1,\pm x} & {}^{1}p_{2,\pm y}/{}^{2}p_{1,\pm y} & {}^{1}q_{2,\pm x}/{}^{3}q_{1,\pm x} & {}^{1}q_{2,\pm y}/{}^{4}q_{1,\pm y} \\ {}^{2}p_{2,\pm x}/{}^{1}p_{1,\pm x} & {}^{2}p_{2,\pm y}/{}^{2}p_{1,\pm y} & {}^{2}q_{2,\pm x}/{}^{3}q_{1,\pm x} & {}^{2}q_{2,\pm y}/{}^{4}q_{1,\pm y} \\ {}^{3}p_{2,\pm x}/{}^{1}p_{1,\pm x} & {}^{3}p_{2,\pm y}/{}^{2}p_{1,\pm y} & {}^{3}q_{2,\pm x}/{}^{3}q_{1,\pm x} & {}^{3}q_{2,\pm y}/{}^{4}q_{1,\pm y} \\ {}^{4}p_{2,\pm x}/{}^{1}p_{1,\pm x} & {}^{4}p_{2,\pm y}/{}^{2}p_{1,\pm y} & {}^{4}q_{2,\pm x}/{}^{3}q_{1,\pm x} & {}^{4}q_{2,\pm y}/{}^{4}q_{1,\pm y} \end{bmatrix}.
Therefore, the parallel initial ray vectors are chosen if A and C are sought, while the divergent initial ray vectors lead to B and D. Fortunately, as is shown below, the VRBDI also needs no more than either A and C in the plane wave or B and D in the spherical wave decomposition case.
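With the initial perturbations of Eq. (B10), Eq. (B15) reduces to element-wise division: each output ray vector, divided by the single nonzero component of its input ray vector, yields one column of the 4×4 system matrix. A minimal sketch (using only the `+` rays; the function name is ours):

```python
import numpy as np

def abcd_from_rays(p2_px, p2_py, q2_px, q2_py, D, n0, beta):
    """Assemble A, B, C, D from traced output ray vectors, Eq. (B15).

    p2 vectors (translated rays, input component +-D) fill the A/C columns,
    q2 vectors (tilted rays, input component +-n0 tan(beta)) the B/D columns."""
    M = np.column_stack([p2_px / D,
                         p2_py / D,
                         q2_px / (n0 * np.tan(beta)),
                         q2_py / (n0 * np.tan(beta))])
    A, B = M[:2, :2], M[:2, 2:]
    C, Dm = M[2:, :2], M[2:, 2:]
    return A, B, C, Dm
```

A quick sanity check: for free-space propagation over a length L, the translated rays reproduce A = I and C = 0, while the tilted rays give B = (L/n0) I and D = I, i.e., the familiar ABCD matrix in the reduced-slope convention.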

With Eqs. (33) and (34), repeatedly using Eqs. (21) or (22) after m−1 surfaces, and with Eq. (35), a field contribution on the base plane can now be given either as [16,58]

(\mathrm{d}E,\mathrm{d}H)_{\mathrm{bp}} = \frac{\exp(ik_0\,\mathrm{OPL})}{\sqrt{\det\mathbf{A}}} \exp\Big\{ \frac{ik_0}{2\det\mathbf{A}} \big[ \breve{x}^2 (C_{11}A_{22} - C_{12}A_{21}) + \breve{y}^2 (C_{22}A_{11} - C_{21}A_{12}) + \breve{x}\breve{y} (C_{12}A_{11} - C_{11}A_{12} + C_{21}A_{22} - C_{22}A_{21}) \big] \Big\} \cdot (\mathrm{d}E,\mathrm{d}H)_{m-1}
for the plane or
(\mathrm{d}E,\mathrm{d}H)_{\mathrm{bp}} = -\frac{i}{\lambda_0} \frac{\exp(ik_0\,\mathrm{OPL})}{\sqrt{\det\mathbf{B}}} \Big( 1 - \frac{1}{ik_0\,\mathrm{OPL}} \Big) \exp\Big\{ \frac{ik_0}{2\det\mathbf{B}} \big[ \breve{x}^2 (D_{11}B_{22} - D_{12}B_{21}) + \breve{y}^2 (D_{22}B_{11} - D_{21}B_{12}) + \breve{x}\breve{y} (D_{12}B_{11} - D_{11}B_{12} + D_{21}B_{22} - D_{22}B_{21}) \big] \Big\} \cdot (\mathrm{d}E,\mathrm{d}H)_{m-1}
for the spherical wave decomposition, with the optical path length OPL obtained from Eq. (36).

However, the field in the base plane (dE,dH)bp is not yet the sought quantity. Additionally, the freedom to choose only one of the four possible ± combinations for the ray vectors implies an ambiguity. Figure 20 illustrates these problems and offers solutions. First, the nearby sample-grid point Psgp is expressed in the base ray frame:

\begin{bmatrix} \breve{x}_L \\ \breve{y}_L \\ \breve{z}_L \end{bmatrix} = \begin{bmatrix} (P_{\mathrm{sgp}} - P) \cdot \hat{e}_x \\ (P_{\mathrm{sgp}} - P) \cdot \hat{e}_y \\ (P_{\mathrm{sgp}} - P) \cdot \hat{r} \end{bmatrix}.

Fig. 20. Situation after ray tracing. Left: the intersection P of the base ray with the detector plane defines the orthogonal base plane, which, in general, does not contain the nearby sample-grid point Psgp. Further propagation by Δz along the base ray is necessary. Right: the four possible ± combinations of ray vectors are decided by the position of Psgp.

Depending on the signs of x˘L and y˘L, the initial and final ray vectors are chosen as indicated in Fig. 20. Then, by use of Eq. (B15), the corresponding submatrices A and C or B and D are calculated. Further propagation along the base ray direction can simply be described by

\begin{bmatrix} \tilde{\mathbf{A}} & \tilde{\mathbf{B}} \\ \mathbf{C} & \mathbf{D} \end{bmatrix} = \begin{bmatrix} I & (\Delta z/n_{m-1})\,I \\ 0 & I \end{bmatrix} \begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix}.

Therefore, only

\tilde{\mathbf{A}} = \mathbf{A} + \frac{\Delta z}{n_{m-1}}\,\mathbf{C} \qquad \text{or} \qquad \tilde{\mathbf{B}} = \mathbf{B} + \frac{\Delta z}{n_{m-1}}\,\mathbf{D}
has to be calculated. It is worth noting that Eq. (B20) also holds if Δz<0, which corresponds to backward propagation. For example, this is the case when Psgp lies on the other side of P in Fig. 20 (left).
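Eq. (B20) can be checked against the full block-matrix product of Eq. (B19) in a few lines; a minimal sketch with a hypothetical function name:

```python
import numpy as np

def propagate_along_base_ray(A, B, C, D, dz, n_m1=1.0):
    """Eq. (B20): a further propagation by dz along the base ray only
    updates A (plane-wave case) or B (spherical-wave case); C and D
    are left unchanged. Works for dz < 0 (backward propagation) too."""
    return A + (dz / n_m1) * C, B + (dz / n_m1) * D
```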

To properly include the well-known Gouy phase anomaly, which causes a π/2 shift at each astigmatic focus (π at a symmetric double focus) [39,63] and which was also confirmed experimentally [64], it is still necessary to modify det(Ã)^{−1/2} or det(B̃)^{−1/2}, respectively. It can be shown that

\frac{1}{\sqrt{\det\tilde{\mathbf{A}}}} \;\to\; \frac{\sigma(\grave{A}_{11})}{\sqrt{\grave{A}_{11}}}\, \frac{\sigma(\grave{A}_{22})}{\sqrt{\grave{A}_{22}}}, \qquad \frac{1}{\sqrt{\det\tilde{\mathbf{B}}}} \;\to\; \frac{\sigma(\grave{B}_{11})}{\sqrt{\grave{B}_{11}}}\, \frac{\sigma(\grave{B}_{22})}{\sqrt{\grave{B}_{22}}},
where À and B̀ are the diagonalized matrices of Ã and B̃, respectively, describes the Gouy phase correctly. It should be noted that, e.g., in the case of a higher-order Hermite–Gaussian beam, additional order-dependent contributions to the Gouy phase, as they typically appear in the respective analytical formulas, are implicitly included by the integration of all input field components at the output plane and need not be considered explicitly here.
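One possible numerical reading of the replacement above, assuming σ is the sign function (as in the refraction formula of the main text) and the principal complex square root is taken per eigenvalue of the diagonalized matrix; this interpretation and the function name are our own, not the authors'.

```python
import numpy as np

def gouy_amplitude_factor(M_tilde):
    """Replace 1/sqrt(det M~) by the per-eigenvalue product sigma(l)/sqrt(l).

    Each negative eigenvalue of the diagonalized 2x2 matrix contributes a
    quarter-wave phase jump, which is the pi/2 Gouy shift at one
    astigmatic focus."""
    af = 1.0 + 0.0j
    for lam in np.linalg.eigvals(M_tilde):
        af *= np.sign(lam.real) / np.sqrt(complex(lam))
    return af
```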

Eventually, a field contribution on a detector sampling grid point can be written as

(\mathrm{d}E,\mathrm{d}H)_{\mathrm{sgp}} = \frac{\sigma(\grave{A}_{11})}{\sqrt{\grave{A}_{11}}} \frac{\sigma(\grave{A}_{22})}{\sqrt{\grave{A}_{22}}} \exp(ik_0\,\mathrm{OPL}) \cdot \exp\Big\{ \frac{ik_0}{2\det\tilde{\mathbf{A}}} \big[ \breve{x}_L^2 (C_{11}\tilde{A}_{22} - C_{12}\tilde{A}_{21}) + \breve{y}_L^2 (C_{22}\tilde{A}_{11} - C_{21}\tilde{A}_{12}) + \breve{x}_L \breve{y}_L (C_{12}\tilde{A}_{11} - C_{11}\tilde{A}_{12} + C_{21}\tilde{A}_{22} - C_{22}\tilde{A}_{21}) \big] \Big\} \cdot (\mathrm{d}E,\mathrm{d}H)_{m-1}
for the plane or
(\mathrm{d}E,\mathrm{d}H)_{\mathrm{sgp}} = -\frac{i}{\lambda_0} \frac{\sigma(\grave{B}_{11})}{\sqrt{\grave{B}_{11}}} \frac{\sigma(\grave{B}_{22})}{\sqrt{\grave{B}_{22}}} \exp(ik_0\,\mathrm{OPL}) \cdot \Big( 1 - \frac{1}{ik_0\,\mathrm{OPL}} \Big) \exp\Big\{ \frac{ik_0}{2\det\tilde{\mathbf{B}}} \big[ \breve{x}_L^2 (D_{11}\tilde{B}_{22} - D_{12}\tilde{B}_{21}) + \breve{y}_L^2 (D_{22}\tilde{B}_{11} - D_{21}\tilde{B}_{12}) + \breve{x}_L \breve{y}_L (D_{12}\tilde{B}_{11} - D_{11}\tilde{B}_{12} + D_{21}\tilde{B}_{22} - D_{22}\tilde{B}_{21}) \big] \Big\} \cdot (\mathrm{d}E,\mathrm{d}H)_{m-1}
for the spherical wave decomposition. Therefore, AFPW,SW and SPW,SW in Eqs. (37) and (38) are
\mathrm{AF}_{\mathrm{PW}} = \frac{\sigma(\grave{A}_{11})}{\sqrt{\grave{A}_{11}}} \frac{\sigma(\grave{A}_{22})}{\sqrt{\grave{A}_{22}}}, \qquad S_{\mathrm{PW}} = \frac{1}{2\det\tilde{\mathbf{A}}} \big[ \breve{x}_L^2 (C_{11}\tilde{A}_{22} - C_{12}\tilde{A}_{21}) + \breve{y}_L^2 (C_{22}\tilde{A}_{11} - C_{21}\tilde{A}_{12}) + \breve{x}_L \breve{y}_L (C_{12}\tilde{A}_{11} - C_{11}\tilde{A}_{12} + C_{21}\tilde{A}_{22} - C_{22}\tilde{A}_{21}) \big]
for the plane or
\mathrm{AF}_{\mathrm{SW}} = \frac{\sigma(\grave{B}_{11})}{\sqrt{\grave{B}_{11}}} \frac{\sigma(\grave{B}_{22})}{\sqrt{\grave{B}_{22}}}, \qquad S_{\mathrm{SW}} = \frac{1}{2\det\tilde{\mathbf{B}}} \big[ \breve{x}_L^2 (D_{11}\tilde{B}_{22} - D_{12}\tilde{B}_{21}) + \breve{y}_L^2 (D_{22}\tilde{B}_{11} - D_{21}\tilde{B}_{12}) + \breve{x}_L \breve{y}_L (D_{12}\tilde{B}_{11} - D_{11}\tilde{B}_{12} + D_{21}\tilde{B}_{22} - D_{22}\tilde{B}_{21}) \big]
for the spherical wave decomposition. The term SSW is the paraxial point-to-point and SPW the paraxial angle-to-point eikonal, respectively [58].

APPENDIX C: MAGNETIC FIELD COMPONENTS

In this appendix, the magnetic field components corresponding to the main-text figures showing the electric ones are collected in Figs. 21–25 for the sake of completeness.

Fig. 21. Magnetic field components corresponding to Fig. 3.

Fig. 22. Magnetic field components corresponding to Fig. 4.

Fig. 23. Magnetic field components corresponding to Fig. 6.

Fig. 24. Magnetic field components corresponding to Fig. 10.

Fig. 25. Magnetic field components corresponding to Fig. 12.

FUNDING INFORMATION

EMRP Joint Research Project (SIB08); European Union.

This work is a study for the EMRP Joint Research Project SIB08: Traceability of sub-nm length measurements (subnano). The EMRP is jointly funded by the European Union. The authors gratefully acknowledge funding by the EMRP.

REFERENCES

1. J. N. Dukes and G. B. Gordon, “A two-hundred-foot yardstick with graduations every microinch,” Hewlett-Packard J. 21, 2–8 (1970).

2. K. Dorenwendt and G. Bönsch, “Über den Einfluß der Beugung auf die interferentielle Längenmessung,” Metrologia 12, 57–60 (1976). [CrossRef]  

3. J.-P. Monchalin, M. J. Kelly, J. E. Thomas, N. A. Kurnit, A. Szöke, F. Zernike, P. H. Lee, and A. Javan, “Accurate laser wavelength measurement with a precision two-beam scanning Michelson interferometer,” Appl. Opt. 20, 736–757 (1981). [CrossRef]  

4. G. Bönsch, “Wavelength ratio of stabilized laser radiation at 3.39 μm and 0.633 μm,” Appl. Opt. 22, 3414–3420 (1983). [CrossRef]  

5. G. Mana, “Diffraction effects in optical interferometers illuminated by laser sources,” Metrologia 26, 87–93 (1989). [CrossRef]  

6. A. Bergamin, G. Cavagnero, and G. Mana, “Observation of Fresnel diffraction in a two-beam laser interferometer,” Phys. Rev. A 49, 2167–2173 (1994). [CrossRef]  

7. A. Bergamin, G. Cavagnero, L. Cordiali, and G. Mana, “Beam-astigmatism in laser interferometry,” IEEE Trans. Instr. Meas. 46, 196–200 (1997). [CrossRef]  

8. A. Bergamin, G. Cavagnero, L. Cordiali, and G. Mana, “A Fourier optics model of two-beam scanning laser interferometers,” Eur. Phys. J. D 5, 433–440 (1999). [CrossRef]  

9. R. Masui, “Effect of diffraction in a Saunders-type optical interferometer,” Metrologia 34, 125–131 (1997). [CrossRef]  

10. G. Cavagnero, G. Mana, and E. Massa, “Aberration effects in two-beam laser interferometers,” J. Opt. Soc. Am. A 23, 1951–1959 (2006). [CrossRef]  

11. N. Kuramoto, K. Fujii, and K. Yamazawa, “Volume measurement of 28Si spheres using an interferometer with a flat etalon to determine the Avogadro constant,” Metrologia 48, S83–S95 (2011). [CrossRef]  

12. B. Andreas, L. Ferroglio, K. Fujii, N. Kuramoto, and G. Mana, “Phase corrections in the optical interferometer for Si sphere volume measurements at NMIJ,” Metrologia 48, S104–S111 (2011). [CrossRef]  

13. B. Andreas, K. Fujii, N. Kuramoto, and G. Mana, “The uncertainty of the phase-correction in sphere-diameter measurements,” Metrologia 49, 479–486 (2012). [CrossRef]  

14. E. Wolf, “Electromagnetic diffraction in optical systems I. An integral representation of the image field,” Proc. R. Soc. A 253, 349–357 (1959). [CrossRef]  

15. B. Richards and E. Wolf, “Electromagnetic diffraction in optical systems II. Structure of the image field in an aplanatic system,” Proc. R. Soc. A 253, 358–379 (1959). [CrossRef]  

16. S. A. Collins, “Lens-system diffraction integral written in terms of matrix optics,” J. Opt. Soc. Am. 60, 1168–1177 (1970).

17. H. Kogelnik and T. Li, “Laser beams and resonators,” Appl. Opt. 5, 1550–1567 (1966). [CrossRef]  

18. A. E. Siegman, Lasers (University Science Books, 1986).

19. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1968).

20. E. Byckling and J. Simola, “Wave optical calculation of diffraction effects in optical systems,” Opt. Acta 27, 337–344 (1980). [CrossRef]

21. A. S. Glassner, An Introduction to Ray Tracing (Morgan Kaufmann, 1989).

22. N. G. Douglas, A. R. Jones, and F. J. van Hoesel, “Ray-based simulation of an optical interferometer,” J. Opt. Soc. Am. A 12, 124–131 (1995). [CrossRef]  

23. F. Riechert, F. Dürr, U. Rohlfing, and U. Lemmer, “Ray-based simulation of the propagation of light with different degrees of coherence through complex optical systems,” Appl. Opt. 48, 1527–1534 (2009). [CrossRef]  

24. F. Wyrowski and M. Kuhn, “Introduction to field tracing,” J. Mod. Opt. 58, 449–466 (2011). [CrossRef]  

25. A. W. Greynolds, “Propagation of generally astigmatic Gaussian beams along skew ray paths,” Proc. SPIE 560, 33–50 (1985).

26. M. J. Bastiaans, “The expansion of an optical signal into a discrete set of Gaussian beams,” Optik 57, 95–102 (1980).

27. R. Herloski, S. Marshall, and R. Antos, “Gaussian beam ray-equivalent modeling and optical design,” Appl. Opt. 22, 1168–1174 (1983). [CrossRef]  

28. J. Arnaud, “Representation of Gaussian beams by complex rays,” Appl. Opt. 24, 538–543 (1985). [CrossRef]  

29. G. W. Forbes and M. A. Alonso, “The Holy Grail of ray-based optical modeling,” Proc. SPIE 4832, 186–197 (2002).

30. M. A. Alonso, “Ray-based diffraction calculations using stable aggregate of flexible elements,” J. Opt. Soc. Am. A 30, 1223–1235 (2013). [CrossRef]  

31. A. V. Gitin, “Huygens-Feynman-Fresnel principle as the basis of applied optics,” Appl. Opt. 52, 7419–7434 (2013). [CrossRef]  

32. R. P. Feynman, “Space-time approach to non-relativistic quantum mechanics,” Rev. Mod. Phys. 20, 367–387 (1948). [CrossRef]  

33. R. Mittra, ed., Computer Techniques for Electromagnetics (Pergamon, 1973).

34. G. A. Deschamps, “Ray techniques in electromagnetics,” Proc. IEEE 60, 1022–1035 (1972). [CrossRef]  

35. S. W. Lee, P. Cramer Jr., K. Woo, and Y. Rahmat-Samii, “Diffraction by an arbitrary subreflector: GTD solution,” IEEE Trans. Antennas Propagat. AP-27, 305–316 (1979).

36. R. Mittra and A. Rushdi, “An efficient approach for computing the geometrical optics field reflected from a numerically specified surface,” IEEE Trans. Antennas Propagat. 27, 871–877 (1979).

37. S. W. Lee, M. S. Sheshadri, V. Jamnejad, and R. Mittra, “Refraction at a curved dielectric interface: geometrical optics solution,” IEEE Trans. Microwave Theory Trans. MTT-30, 12–19 (1982).

38. S. W. Lee and P. T. Lam, “Correction to diffraction by an arbitrary subreflector: GTD solution,” IEEE Trans. Antennas Propagat. AP-34, 272 (1986).

39. H. Ling, R. C. Chou, and S. W. Lee, “Shooting and bouncing rays: calculating the RCS of an arbitrarily shaped cavity,” IEEE Trans. Antennas Propagat. 37, 194–205 (1989).

40. S. W. Lee, H. Ling, and R. C. Chou, “Ray-tube integration in shooting and bouncing ray method,” Microwave Opt. Technol. Lett. 1, 286–289 (1988). [CrossRef]  

41. J. Baldauf, S. W. Lee, L. Lin, S. K. Jeng, S. M. Scarborough, and C. L. Yu, “High frequency scattering from trihedral corner reflectors and other benchmark targets: SBR versus experiment,” IEEE Trans. Antennas Propagat. 39, 1345–1351 (1991).

42. Y. Tao, H. Lin, and H. Bao, “GPU-based shooting and bouncing ray method for fast RCS prediction,” IEEE Trans. Antennas Propagat. 58, 494–502 (2010).

43. C. Y. Kee and C. F. Wang, “Efficient GPU implementation of the high-frequency SBR-PO method,” IEEE Antennas Wireless Propagat. Lett. 12, 941–944 (2013). [CrossRef]  

44. B. Andreas, G. Mana, E. Massa, and C. Palmisano, “Modeling laser interferometers for the measurement of the Avogadro constant,” Proc. SPIE 8789, 87890W (2013).

45. A. Gerrard and J. M. Burch, Introduction to Matrix Methods in Optics (Wiley, 1975).

46. M. Harrigan, “General beam propagation through non-orthogonal optical systems,” Proc. SPIE 4832, 379–389 (2002).

47. C. G. Chen, P. T. Konkola, J. Ferrera, R. K. Heilmann, and M. L. Schattenburg, “Analyses of vector Gaussian beam propagation and the validity of paraxial and spherical approximations,” J. Opt. Soc. Am. A 19, 404–412 (2002). [CrossRef]  

48. G. S. Smith, An Introduction to Classical Electromagnetic Radiation (Cambridge University, 1997).

49. J. D. Jackson, Classical Electrodynamics (Wiley, 1999).

50. M. Born and E. Wolf, Principles of Optics (Pergamon, 1964).

51. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University, 2008).

52. W. R. Smythe, “The double current sheet in diffraction,” Phys. Rev. 72, 1066–1070 (1947). [CrossRef]  

53. A. Drezet, J. C. Woehl, and S. Huant, “Diffraction of light by a planar aperture in a metallic screen,” J. Math. Phys. 47, 072901 (2006). [CrossRef]  

54. J. A. Stratton and L. J. Chu, “Diffraction theory of electromagnetic waves,” Phys. Rev. 56, 99–107 (1939). [CrossRef]  

55. S. Guha and G. D. Gillen, “Vector diffraction theory of refraction of light by a spherical surface,” J. Opt. Soc. Am. B 24, 1–8 (2007). [CrossRef]  

56. S. R. Ghorpade and B. V. Limaye, A Course in Multivariable Calculus and Analysis (Springer, 2010).

57. M. Hillenbrand, D. P. Kelly, and S. Sinzinger, “Numerical solution of nonparaxial scalar diffraction integrals for focused fields,” J. Opt. Soc. Am. A 31, 1832–1841 (2014). [CrossRef]  

58. A. Walther, The Ray and Wave Theory of Lenses (Cambridge University, 1995).

59. R. P. Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics (California Institute of Technology, 2010), http://www.feynmanlectures.caltech.edu.

60. K. Y. Bliokh, A. Y. Bekshaev, A. G. Kofman, and F. Nori, “Photon trajectories, anomalous velocities and weak measurements: a classical interpretation,” New J. Phys. 15, 1–17 (2013).

61. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes: The Art of Scientific Computing, 3rd ed. (Cambridge University, 2007).

62. http://en.wikipedia.org/wiki/Rodrigues'_rotation_formula.

63. T. D. Visser and E. Wolf, “The origin of the Gouy phase anomaly and its generalization to astigmatic wavefields,” Opt. Commun. 283, 3371–3375 (2010). [CrossRef]  

64. J. P. Rolland, K. P. Thompson, K. S. Lee, and J. Tamkin Jr., T. Schmid and E. Wolf, “Observation of the Gouy phase anomaly in astigmatic beams,” Appl. Opt. 51, 2902–2908 (2012). [CrossRef]  

Equations (72)

Equations on this page are rendered with MathJax. Learn more.

E x , y ( k x , k y ) = + d x 0 d y 0 E x , y ( x 0 , y 0 ) exp [ i ( k x x 0 + k y y 0 ) ] ,
E z = E x k x + E y k y k z ,
H ( k x , k y ) = n ϵ 0 μ 0 k ^ × E ( k x , k y ) ,
E ( x , y , z ) = 1 4 π 2 + d k x d k y · E ( k x , k y ) exp [ i ( k x x + k y y + k z z ) ] .
E = 1 4 π 2 + d k x d k y [ E x x ^ + E y y ^ ( E x k x k z + E y k y k z ) z ^ ] exp [ i ( k x x + k y y + k z z ) ] ,
E ( x , y , z ) = 2 × + d x 0 d y 0 z ^ × E ( x 0 , y 0 ) { i 8 π 2 + d k x d k y · exp [ i ( k x ( x x 0 ) + k y ( y y 0 ) + k z z ) ] k z } .
exp ( i k r ) 4 π r = i 8 π 2 + d k x d k y · exp [ i ( k x ( x x 0 ) + k y ( y y 0 ) + k z z ) ] k z ,
E ( x , y , z ) = 1 2 π × + d x 0 d y 0 z ^ × E ( x 0 , y 0 ) exp ( i k r ) r .
E ( x , y , z ) = 1 2 π × S 0 d x 0 d y 0 z ^ × E ( x 0 , y 0 ) exp ( i k r ) r .
E ( P 1 ) = 1 2 π × S 0 d A 0 N ^ 0 × E ( P 0 ) exp ( i k r ) r ,
E 1 = i λ S 0 d A 0 exp ( i k r ) r ( 1 i k r ) ( N ^ 0 × E 0 ) × r ^ + 1 2 π S 0 d A 0 exp ( i k r ) r × ( N ^ 0 × E 0 ) ,
E 1 = i λ S 0 d A 0 exp ( i k r ) r ( 1 i k r ) ( N ^ 0 × E 0 ) × r ^ ,
H 1 = i λ S 0 d A 0 exp ( i k r ) r ( 1 i k r ) ( N ^ 0 × H 0 ) × r ^ .
S = 1 2 Re ( E × H * ) ,
d S = d A 0 2 λ r 2 ( 1 + 1 k 2 r 2 ) Re { r ^ [ r ^ N ^ ( N ^ ( E 0 × H 0 ) ) ] } r ^
r ^ r = r ^ 2 ( r ^ N ^ 1 ) N ^ 1 ,
r ^ t = n 1 n 2 [ r ^ ( r ^ N ^ 1 ) ] + σ ( r ^ N ^ 1 ) 1 ( n 1 n 2 ) 2 [ 1 ( r ^ N ^ 1 ) 2 ] N ^ 1 ,
ξ ^ = r ^ × ( N ^ 1 × r ^ ) | r ^ × ( N ^ 1 × r ^ ) | , η ^ = r ^ × ξ ^ , ξ ^ r = η ^ × r ^ r , ξ ^ t = η ^ × r ^ t .
r TM = n 2 cos θ n 1 cos θ t n 2 cos θ + n 1 cos θ t , r TE = n 1 cos θ n 2 cos θ t n 1 cos θ + n 2 cos θ t , t TM = 2 n 1 cos θ n 2 cos θ + n 1 cos θ t , t TE = 2 n 1 cos θ n 1 cos θ + n 2 cos θ t ,
d E 1 = i n 1 λ 0 d A 0 exp ( i k 0 n 1 r ) r ( 1 i k 0 n 1 r ) ( N ^ 0 × E 0 ) × r ^ ;
d E 1 , r = r TM ( d E 1 ξ ^ ) ξ ^ r + r TE ( d E 1 η ^ ) η ^ ;
d E 1 , t = t TM ( d E 1 ξ ^ ) ξ ^ t + t TE ( d E 1 η ^ ) η ^ ;
d H 1 = i n 1 λ 0 d A 0 exp ( i k 0 n 1 r ) r ( 1 i k 0 n 1 r ) ( N ^ 0 × H 0 ) × r ^ ;
d H 1 , r = r TM ( d H 1 ξ ^ ) ξ ^ r + r TE ( d H 1 η ^ ) η ^ ;
d H 1 , t = n 2 n 1 [ t TM ( d H 1 ξ ^ ) ξ ^ t + t TE ( d H 1 η ^ ) η ^ ] .
$$z=f(x,y)=C_{z}-\sigma(C_{z})\sqrt{R^{2}-(x-C_{x})^{2}-(y-C_{y})^{2}},$$
$$\mathrm{d}A_{\mathrm{sph}}(x,y)=\frac{R}{\sqrt{R^{2}-(x-C_{x})^{2}-(y-C_{y})^{2}}}\,\mathrm{d}x\,\mathrm{d}y.$$
$$I_{i}=|\mathbf{S}_{i}\cdot\hat{\mathbf{N}}_{i}|,$$
$$P_{i}=\int_{S_{i}}\mathrm{d}A_{i}\,I_{i}.$$
$$E_{0,x}=\exp[-(x^{2}+y^{2})/w_{0}^{2}]\;\mathrm{V/m},$$
$$l=\frac{(T-P_{0})\cdot\hat{\mathbf{N}}_{z}}{\hat{\mathbf{r}}\cdot\hat{\mathbf{N}}_{z}}.$$
$$l_{1,2}=(C-P_{0})\cdot\hat{\mathbf{r}}\pm\sqrt{[(C-P_{0})\cdot\hat{\mathbf{r}}]^{2}-|C-P_{0}|^{2}+R^{2}}.$$
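The ray-sphere path lengths $l_{1,2}$ translate directly into code; a sketch (the function name and NaN convention for missed spheres are ours):

```python
import numpy as np

def ray_sphere_intersections(P0, r, C, R):
    """Path lengths l1, l2 along the ray P0 + l r to a sphere of
    centre C and radius R; NaNs if the ray misses the sphere."""
    b = np.dot(C - P0, r)
    disc = b**2 - np.dot(C - P0, C - P0) + R**2
    if disc < 0.0:
        return np.nan, np.nan
    s = np.sqrt(disc)
    return b + s, b - s
```

Each returned length places the ray point exactly on the sphere, i.e. $|P_{0}+l\hat{\mathbf{r}}-C|=R$.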
$$\mathrm{d}\mathbf{E}_{0}=\mathrm{d}A_{0}\,\frac{\mathrm{d}k_{x}\,\mathrm{d}k_{y}}{4\pi^{2}}\,\boldsymbol{\mathcal{E}}(k_{x},k_{y}),$$
$$\mathrm{d}\mathbf{E}_{0}=\mathrm{d}A_{0}\,(\hat{\mathbf{N}}_{0}\times\mathbf{E}_{0})\times\hat{\mathbf{r}}_{\mathrm{initial}},$$
$$\mathrm{d}\mathbf{H}_{m-1}=n_{m-1}\sqrt{\frac{\epsilon_{0}}{\mu_{0}}}\,\hat{\mathbf{r}}\times\mathrm{d}\mathbf{E}_{m-1},\qquad\hat{\mathbf{r}}=\hat{\mathbf{r}}_{\mathrm{final}}.$$
$$\mathrm{OPL}=\sum_{i=0}^{m-1}n_{i}\,|P_{i+1}-P_{i}|,$$
$$(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{\mathrm{sgp}}=AF_{\mathrm{PW}}\exp(ik_{0}\,\mathrm{OPL})\exp(ik_{0}S_{\mathrm{PW}})\,(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{m-1},$$
$$(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{\mathrm{sgp}}=-\frac{i}{\lambda_{0}}AF_{\mathrm{SW}}\exp(ik_{0}\,\mathrm{OPL})\left(1-\frac{1}{ik_{0}\,\mathrm{OPL}}\right)\exp(ik_{0}S_{\mathrm{SW}})\,(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{m-1},$$
$$(\mathbf{E},\mathbf{H})_{\mathrm{sgp}}=\sum_{\text{input plane}}(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{\mathrm{sgp}}.$$
$$\Delta x\cdot\Delta p_{x}\geq\frac{\hbar}{2}\;\Rightarrow\;\sin\Delta\alpha_{x}\geq\frac{\lambda}{4\pi\,\Delta x}.$$
$$x'=(P_{i}-T)\cdot\hat{\mathbf{N}}_{x},\qquad y'=(P_{i}-T)\cdot\hat{\mathbf{N}}_{y},$$
$$\hat{r}_{x}=\hat{r}_{x}(x',y'),\qquad\hat{r}_{y}=\hat{r}_{y}(x',y').$$
$$\rho'=\frac{\sqrt{(x')^{2}+(y')^{2}}}{\rho_{\max}},\qquad\varphi'=\mathrm{atan2}(y',x').$$
$$[Z]\,a=\begin{bmatrix}Z_{0}^{0}(\rho'_{1},\varphi'_{1})&Z_{1}^{-1}(\rho'_{1},\varphi'_{1})&\cdots&Z_{u}^{u}(\rho'_{1},\varphi'_{1})\\\vdots&\vdots&&\vdots\\Z_{0}^{0}(\rho'_{\nu},\varphi'_{\nu})&Z_{1}^{-1}(\rho'_{\nu},\varphi'_{\nu})&\cdots&Z_{u}^{u}(\rho'_{\nu},\varphi'_{\nu})\end{bmatrix}\begin{bmatrix}a_{1}\\\vdots\\a_{\zeta}\end{bmatrix}=\begin{bmatrix}F(\rho'_{1},\varphi'_{1})\\\vdots\\F(\rho'_{\nu},\varphi'_{\nu})\end{bmatrix}=F,$$
$$a=([Z]^{T}[Z])^{-1}[Z]^{T}F.$$
$$F_{\mathrm{sgp}}=[Z]_{\mathrm{sgp}}\,a,$$
$$\delta_{\mathrm{ra}}=\sqrt{(x'-x'_{\mathrm{sgp}})^{2}+(y'-y'_{\mathrm{sgp}})^{2}}\leq\sqrt{\Delta x_{\mathrm{sgp}}^{2}+\Delta y_{\mathrm{sgp}}^{2}},$$
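The normal-equation solve for the Zernike coefficients can be sketched with a small illustrative basis; here we use only the first four (unnormalized) terms, so the basis choice and function names are ours, not the paper's:

```python
import numpy as np

def zernike_design_matrix(rho, phi):
    """First four (unnormalised) Zernike polynomials sampled at (rho, phi):
    piston, x tilt, y tilt, defocus."""
    return np.column_stack([
        np.ones_like(rho),      # Z_0^0
        rho * np.cos(phi),      # Z_1^1
        rho * np.sin(phi),      # Z_1^-1
        2.0 * rho**2 - 1.0,     # Z_2^0
    ])

def fit_coefficients(Z, F):
    """Least-squares solution a = (Z^T Z)^-1 Z^T F of the normal equations."""
    return np.linalg.solve(Z.T @ Z, Z.T @ F)
```

With exact synthetic data $F=[Z]a$, the fit recovers the generating coefficients; in practice `np.linalg.lstsq` is numerically preferable to forming $[Z]^{T}[Z]$ explicitly.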
$$\hat{\mathbf{e}}_{x}=\frac{1}{\sqrt{\hat{r}_{x}^{2}+\hat{r}_{z}^{2}}}\begin{bmatrix}\hat{r}_{z}\\0\\-\hat{r}_{x}\end{bmatrix},\qquad\hat{\mathbf{e}}_{y}=\hat{\mathbf{r}}\times\hat{\mathbf{e}}_{x}.$$
$$P_{0,\pm x}=P_{0}\pm D\,\hat{\mathbf{e}}_{x},\qquad P_{0,\pm y}=P_{0}\pm D\,\hat{\mathbf{e}}_{y},$$
$$\hat{\mathbf{r}}_{\pm x}=\hat{\mathbf{r}}\cos\beta\pm\hat{\mathbf{e}}_{x}\sin\beta,\qquad\hat{\mathbf{r}}_{\pm y}=\hat{\mathbf{r}}\cos\beta\pm\hat{\mathbf{e}}_{y}\sin\beta.$$
$$\begin{bmatrix}\breve{x}\\\breve{y}\\\breve{x}'\\\breve{y}'\end{bmatrix}_{2}=\begin{bmatrix}A&B\\C&D\end{bmatrix}\begin{bmatrix}\breve{x}\\\breve{y}\\\breve{x}'\\\breve{y}'\end{bmatrix}_{1}=\begin{bmatrix}A_{11}&A_{12}&B_{11}&B_{12}\\A_{21}&A_{22}&B_{21}&B_{22}\\C_{11}&C_{12}&D_{11}&D_{12}\\C_{21}&C_{22}&D_{21}&D_{22}\end{bmatrix}\begin{bmatrix}\breve{x}\\\breve{y}\\\breve{x}'\\\breve{y}'\end{bmatrix}_{1}.$$
$$\mathbf{v}=\frac{\hat{\mathbf{r}}\times\hat{\mathbf{r}}_{r,t}}{|\hat{\mathbf{r}}\times\hat{\mathbf{r}}_{r,t}|},$$
$$\theta=\arccos(\hat{\mathbf{r}}\cdot\hat{\mathbf{r}}_{r,t}).$$
$$M(\mathbf{v})=I+\sin\theta\,V_{\times}+(1-\cos\theta)\,V_{\times}^{2},$$
$$V_{\times}=\begin{bmatrix}0&-v_{z}&v_{y}\\v_{z}&0&-v_{x}\\-v_{y}&v_{x}&0\end{bmatrix}.$$
$$M(\mathbf{v})=\begin{bmatrix}1-(1-\cos\theta)(v_{y}^{2}+v_{z}^{2})&-v_{z}\sin\theta+(1-\cos\theta)v_{x}v_{y}&v_{y}\sin\theta+(1-\cos\theta)v_{x}v_{z}\\v_{z}\sin\theta+(1-\cos\theta)v_{x}v_{y}&1-(1-\cos\theta)(v_{x}^{2}+v_{z}^{2})&-v_{x}\sin\theta+(1-\cos\theta)v_{y}v_{z}\\-v_{y}\sin\theta+(1-\cos\theta)v_{x}v_{z}&v_{x}\sin\theta+(1-\cos\theta)v_{y}v_{z}&1-(1-\cos\theta)(v_{x}^{2}+v_{y}^{2})\end{bmatrix}.$$
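The Rodrigues rotation formula above is easily verified numerically: with the axis and angle of the preceding equations, $M(\mathbf{v})$ maps $\hat{\mathbf{r}}$ onto $\hat{\mathbf{r}}_{r,t}$. A sketch (the function name is ours):

```python
import numpy as np

def rotation_matrix(v, theta):
    """M(v) = I + sin(theta) V + (1 - cos(theta)) V^2, rotating by
    theta about the unit axis v; V is the cross-product matrix of v."""
    V = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + np.sin(theta) * V + (1.0 - np.cos(theta)) * (V @ V)
```

The result is orthogonal ($MM^{T}=I$) and, for $\mathbf{v}=\hat{\mathbf{r}}\times\hat{\mathbf{r}}_{t}/|\hat{\mathbf{r}}\times\hat{\mathbf{r}}_{t}|$ and $\theta=\arccos(\hat{\mathbf{r}}\cdot\hat{\mathbf{r}}_{t})$, rotates the incident direction onto the refracted one.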
$$p_{1,\pm x}=\begin{bmatrix}\pm D\\0\\0\\0\end{bmatrix},\quad p_{1,\pm y}=\begin{bmatrix}0\\\pm D\\0\\0\end{bmatrix},\quad q_{1,\pm x}=\begin{bmatrix}0\\0\\\pm n_{0}\tan\beta\\0\end{bmatrix},\quad q_{1,\pm y}=\begin{bmatrix}0\\0\\0\\\pm n_{0}\tan\beta\end{bmatrix},$$
$$Q_{\pm x,y}=\frac{(P-P_{\pm x,y})\cdot\hat{\mathbf{r}}}{\hat{\mathbf{r}}\cdot\hat{\mathbf{r}}_{\pm x,y}}\,\hat{\mathbf{r}}_{\pm x,y}+P_{\pm x,y}.$$
$$\breve{x}_{\pm x,y}=(Q_{\pm x,y}-P)\cdot\hat{\mathbf{e}}_{x},\qquad\breve{y}_{\pm x,y}=(Q_{\pm x,y}-P)\cdot\hat{\mathbf{e}}_{y}.$$
$$\hat{\mathbf{s}}_{\pm x,y}=\begin{bmatrix}\hat{\mathbf{r}}_{\pm x,y}\cdot\hat{\mathbf{e}}_{x}\\\hat{\mathbf{r}}_{\pm x,y}\cdot\hat{\mathbf{e}}_{y}\\\hat{\mathbf{r}}_{\pm x,y}\cdot\hat{\mathbf{r}}\end{bmatrix}.$$
$$p_{2,\pm x},\,q_{2,\pm x}=\begin{bmatrix}\breve{x}_{\pm x}\\\breve{y}_{\pm x}\\n_{m-1}\hat{s}_{\breve{x},\pm x}/\hat{s}_{\breve{z},\pm x}\\n_{m-1}\hat{s}_{\breve{y},\pm x}/\hat{s}_{\breve{z},\pm x}\end{bmatrix},\qquad p_{2,\pm y},\,q_{2,\pm y}=\begin{bmatrix}\breve{x}_{\pm y}\\\breve{y}_{\pm y}\\n_{m-1}\hat{s}_{\breve{x},\pm y}/\hat{s}_{\breve{z},\pm y}\\n_{m-1}\hat{s}_{\breve{y},\pm y}/\hat{s}_{\breve{z},\pm y}\end{bmatrix}.$$
$$\begin{bmatrix}A&B\\C&D\end{bmatrix}=\begin{bmatrix}\dfrac{\Delta p_{2,\pm x}^{(1)}}{\Delta_{1}p_{1,\pm x}}&\dfrac{\Delta p_{2,\pm y}^{(1)}}{\Delta_{2}p_{1,\pm y}}&\dfrac{\Delta q_{2,\pm x}^{(1)}}{\Delta_{3}q_{1,\pm x}}&\dfrac{\Delta q_{2,\pm y}^{(1)}}{\Delta_{4}q_{1,\pm y}}\\[1ex]\dfrac{\Delta p_{2,\pm x}^{(2)}}{\Delta_{1}p_{1,\pm x}}&\dfrac{\Delta p_{2,\pm y}^{(2)}}{\Delta_{2}p_{1,\pm y}}&\dfrac{\Delta q_{2,\pm x}^{(2)}}{\Delta_{3}q_{1,\pm x}}&\dfrac{\Delta q_{2,\pm y}^{(2)}}{\Delta_{4}q_{1,\pm y}}\\[1ex]\dfrac{\Delta p_{2,\pm x}^{(3)}}{\Delta_{1}p_{1,\pm x}}&\dfrac{\Delta p_{2,\pm y}^{(3)}}{\Delta_{2}p_{1,\pm y}}&\dfrac{\Delta q_{2,\pm x}^{(3)}}{\Delta_{3}q_{1,\pm x}}&\dfrac{\Delta q_{2,\pm y}^{(3)}}{\Delta_{4}q_{1,\pm y}}\\[1ex]\dfrac{\Delta p_{2,\pm x}^{(4)}}{\Delta_{1}p_{1,\pm x}}&\dfrac{\Delta p_{2,\pm y}^{(4)}}{\Delta_{2}p_{1,\pm y}}&\dfrac{\Delta q_{2,\pm x}^{(4)}}{\Delta_{3}q_{1,\pm x}}&\dfrac{\Delta q_{2,\pm y}^{(4)}}{\Delta_{4}q_{1,\pm y}}\end{bmatrix},$$

where $\Delta p_{2,\pm x}^{(i)}=p_{2,+x}^{(i)}-p_{2,-x}^{(i)}$ is the difference of the $i$th components of the output rays, and $\Delta_{j}p_{1,\pm x}=p_{1,+x}^{(j)}-p_{1,-x}^{(j)}$ that of the $j$th (nonzero) components of the corresponding input rays.
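This difference-quotient construction can be exercised on a system whose ray-transfer matrix is known in closed form; free-space propagation is the simplest check. A sketch (the `trace` interface and step sizes are illustrative, not from the paper):

```python
import numpy as np

def estimate_abcd(trace, dD=1e-6, dq=1e-6):
    """Estimate the 4x4 ABCD matrix of `trace`, which maps an input
    ray (x, y, n x', n y') to an output ray, from central difference
    quotients of eight auxiliary rays (one +/- pair per column)."""
    M = np.zeros((4, 4))
    for j in range(4):
        step = np.zeros(4)
        step[j] = dD if j < 2 else dq  # positional vs angular perturbation
        out_p = np.asarray(trace(step))
        out_m = np.asarray(trace(-step))
        M[:, j] = (out_p - out_m) / (2.0 * step[j])
    return M
```

For propagation over a distance $d$ in a medium of index $n$ the estimate reproduces the textbook matrix with $B=(d/n)I$.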
$$(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{\mathrm{bp}}=\frac{\exp(ik_{0}\,\mathrm{OPL})}{\sqrt{\det(A)}}\exp\Big\{\frac{ik_{0}}{2\det(A)}\big[\breve{x}^{2}(C_{11}A_{22}-C_{12}A_{21})+\breve{y}^{2}(C_{22}A_{11}-C_{21}A_{12})+\breve{x}\breve{y}(C_{12}A_{11}-C_{11}A_{12}+C_{21}A_{22}-C_{22}A_{21})\big]\Big\}\,(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{m-1},$$

$$(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{\mathrm{bp}}=-\frac{i}{\lambda_{0}}\frac{\exp(ik_{0}\,\mathrm{OPL})}{\sqrt{\det(B)}}\left(1-\frac{1}{ik_{0}\,\mathrm{OPL}}\right)\exp\Big\{\frac{ik_{0}}{2\det(B)}\big[\breve{x}^{2}(D_{11}B_{22}-D_{12}B_{21})+\breve{y}^{2}(D_{22}B_{11}-D_{21}B_{12})+\breve{x}\breve{y}(D_{12}B_{11}-D_{11}B_{12}+D_{21}B_{22}-D_{22}B_{21})\big]\Big\}\,(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{m-1},$$
$$\begin{bmatrix}\breve{x}_{L}\\\breve{y}_{L}\\\breve{z}_{L}\end{bmatrix}=\begin{bmatrix}(P_{\mathrm{sgp}}-P)\cdot\hat{\mathbf{e}}_{x}\\(P_{\mathrm{sgp}}-P)\cdot\hat{\mathbf{e}}_{y}\\(P_{\mathrm{sgp}}-P)\cdot\hat{\mathbf{r}}\end{bmatrix}.$$
$$\begin{bmatrix}\tilde{A}&\tilde{B}\\C&D\end{bmatrix}=\begin{bmatrix}I&(\Delta z/n_{m-1})\,I\\0&I\end{bmatrix}\begin{bmatrix}A&B\\C&D\end{bmatrix},$$
$$\tilde{A}=A+(\Delta z/n_{m-1})\,C\qquad\text{or}\qquad\tilde{B}=B+(\Delta z/n_{m-1})\,D,$$
$$\frac{1}{\sqrt{\det(\tilde{A})}}\simeq\frac{\sigma(\tilde{A}_{11})}{\sqrt{|\tilde{A}_{11}|}}\,\frac{\sigma(\tilde{A}_{22})}{\sqrt{|\tilde{A}_{22}|}},\qquad\frac{1}{\sqrt{\det(\tilde{B})}}\simeq\frac{\sigma(\tilde{B}_{11})}{\sqrt{|\tilde{B}_{11}|}}\,\frac{\sigma(\tilde{B}_{22})}{\sqrt{|\tilde{B}_{22}|}},$$

$$(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{\mathrm{sgp}}=\frac{\sigma(\tilde{A}_{11})}{\sqrt{|\tilde{A}_{11}|}}\frac{\sigma(\tilde{A}_{22})}{\sqrt{|\tilde{A}_{22}|}}\exp(ik_{0}\,\mathrm{OPL})\exp\Big\{\frac{ik_{0}}{2\det(\tilde{A})}\big[\breve{x}_{L}^{2}(C_{11}\tilde{A}_{22}-C_{12}\tilde{A}_{21})+\breve{y}_{L}^{2}(C_{22}\tilde{A}_{11}-C_{21}\tilde{A}_{12})+\breve{x}_{L}\breve{y}_{L}(C_{12}\tilde{A}_{11}-C_{11}\tilde{A}_{12}+C_{21}\tilde{A}_{22}-C_{22}\tilde{A}_{21})\big]\Big\}\,(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{m-1},$$

$$(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{\mathrm{sgp}}=-\frac{i}{\lambda_{0}}\frac{\sigma(\tilde{B}_{11})}{\sqrt{|\tilde{B}_{11}|}}\frac{\sigma(\tilde{B}_{22})}{\sqrt{|\tilde{B}_{22}|}}\exp(ik_{0}\,\mathrm{OPL})\left(1-\frac{1}{ik_{0}\,\mathrm{OPL}}\right)\exp\Big\{\frac{ik_{0}}{2\det(\tilde{B})}\big[\breve{x}_{L}^{2}(D_{11}\tilde{B}_{22}-D_{12}\tilde{B}_{21})+\breve{y}_{L}^{2}(D_{22}\tilde{B}_{11}-D_{21}\tilde{B}_{12})+\breve{x}_{L}\breve{y}_{L}(D_{12}\tilde{B}_{11}-D_{11}\tilde{B}_{12}+D_{21}\tilde{B}_{22}-D_{22}\tilde{B}_{21})\big]\Big\}\,(\mathrm{d}\mathbf{E},\mathrm{d}\mathbf{H})_{m-1},$$

$$AF_{\mathrm{PW}}=\frac{\sigma(\tilde{A}_{11})}{\sqrt{|\tilde{A}_{11}|}}\frac{\sigma(\tilde{A}_{22})}{\sqrt{|\tilde{A}_{22}|}},\qquad S_{\mathrm{PW}}=\frac{1}{2\det(\tilde{A})}\big[\breve{x}_{L}^{2}(C_{11}\tilde{A}_{22}-C_{12}\tilde{A}_{21})+\breve{y}_{L}^{2}(C_{22}\tilde{A}_{11}-C_{21}\tilde{A}_{12})+\breve{x}_{L}\breve{y}_{L}(C_{12}\tilde{A}_{11}-C_{11}\tilde{A}_{12}+C_{21}\tilde{A}_{22}-C_{22}\tilde{A}_{21})\big],$$

$$AF_{\mathrm{SW}}=\frac{\sigma(\tilde{B}_{11})}{\sqrt{|\tilde{B}_{11}|}}\frac{\sigma(\tilde{B}_{22})}{\sqrt{|\tilde{B}_{22}|}},\qquad S_{\mathrm{SW}}=\frac{1}{2\det(\tilde{B})}\big[\breve{x}_{L}^{2}(D_{11}\tilde{B}_{22}-D_{12}\tilde{B}_{21})+\breve{y}_{L}^{2}(D_{22}\tilde{B}_{11}-D_{21}\tilde{B}_{12})+\breve{x}_{L}\breve{y}_{L}(D_{12}\tilde{B}_{11}-D_{11}\tilde{B}_{12}+D_{21}\tilde{B}_{22}-D_{22}\tilde{B}_{21})\big].$$