
Fast Zernike fitting of freeform surfaces using the Gauss-Legendre quadrature


Abstract

Zernike polynomial orthogonality, an established mathematical principle, is leveraged together with the Gauss-Legendre quadrature rule in a novel, rapid approach to fitting data over a circular domain. This approach provides significantly faster fitting speeds, on the order of thousands of times, while maintaining error rates comparable to those achieved with conventional least-squares fitting techniques. We demonstrate the technique by fitting the mid-spatial-frequency (MSF) errors prevalent in the small-tool manufacturing typical of aspheric and freeform optics, which are poised to soon permeate a wide range of optical technologies.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Freeform surfaces offer flexibility in surface definition and aberration compensation in optical system design [1]. Nevertheless, the complexity of surface definition challenges the freeform fabrication process, where small-tool manufacturing techniques are required. During the material removal process, the small tool footprints on the surface create mid-spatial frequency (MSF) errors [2], also known in aspheric manufacturing, which, if not appropriately controlled or mitigated, can negatively affect the performance of optical systems [3–8].

Toward analyzing the impact of MSF surfaces on image performance, one valid approach begins by fitting measured MSF surfaces to polynomials, such as Zernike polynomials [9–11], 2D-Qs [12,13], and Rapidly Decaying Fourier-like (RDF) polynomials [14,15]. The coefficient maps can then be filtered to reveal unique spatial and angular frequency information for further analysis [8,13,16]. The challenge is that, in order to capture the MSF structures, it may be necessary to fit more than a hundred thousand coefficients.

The conventional least-squares (LSQ) fitting method is most commonly employed in this process. LSQ fitting necessitates the computation of the sag table for each individual polynomial term, as well as matrix calculations to identify the coefficients that minimize the sum of the squares of the fitting residuals [17]. This widely used fitting method is not limited to polynomial basis types as long as the sag table for each basis is available.

There are two potential issues introduced by the LSQ fitting method when fitting MSF structures to polynomials. The first challenge is the potential memory overflow caused by solving a large linear algebraic matrix, the size of which is determined by both the number of fitting terms and the quantity of measured data points. One approach to mitigate this challenge is dividing the fitting terms into several smaller groups and performing the fittings consecutively. This workaround does not degrade the results if the fit is unique, which is often ensured when using orthogonal polynomials such as Zernike polynomials with adequate sampling. The second challenge is the time consumption. Solving linear algebraic matrices can be time-consuming, especially when fitting hundreds of thousands of polynomial terms to a densely measured MSF surface sag table [10].

To address this second most limiting challenge, we have developed a rapid algorithm that leverages both the orthogonality of Zernike polynomials and the Gauss-Legendre quadrature rule for fitting MSF surfaces to Zernike polynomials, bypassing the need for traditional LSQ fitting. The algorithm will be referred to as the GLQ Zernike fitting algorithm. The GLQ Zernike fitting was inspired by a methodology for fitting 2D-Q polynomials without resorting to conventional LSQ fitting [12,13,16]. The differences in the two algorithms relate to the fact that the Zernike polynomials are sag orthogonal, while the 2D-Qs are slope orthogonal. In the broader context of numerical integration techniques, it is noteworthy that alternative approaches like the cubature schemes over the unit disk, as discussed in [18], offer different perspectives, particularly in multi-dimensional integration.

In the remainder of this paper, in Section 2, we will detail the GLQ Zernike fitting algorithm. In Section 3, we will evaluate the algorithm performance in speed and accuracy and compare it to the LSQ fitting before concluding the paper.

2. Fitting method

We shall first rewrite the Zernike polynomials in terms of the Jacobi polynomials, as shown in Section 2.1. The rewrite will allow us first to fit the azimuthal components (see Section 2.2), followed by fitting the radial components (see Section 2.3). Central to the GLQ Zernike fitting algorithm is the fact that Jacobi polynomials are orthogonal, which enables the direct calculation of the coefficients. The well-established orthogonality formula will be leveraged together with the Gauss-Legendre quadrature rule to ensure the numerical accuracy of the required integrals. As the quadrature helps to seamlessly transfer integrations to discrete summations, the discrete sampling points will be chosen based on the recurrence matrix of the Jacobi polynomials. An example of calculating a related matrix is shown in Section 2.4. A discussion of the LSQ fitting method is presented in Section 2.5.

Before delving into the intricacies of the GLQ Zernike fitting algorithm, it is pivotal to understand the foundational representation of freeform optical surfaces. This understanding is rooted in the Zernike polynomials, a key aspect we will explore in Section 2.1.

2.1 Zernike surface

A freeform optical surface can be described as the summation of a set of Zernike polynomials as

$$S({u,\theta } )= \mathop \sum \limits_{m,n} c_n^mZ_n^m({u,\theta } )\; \; ,$$
where $u = \rho /{\rho _{max}}$ is the normalized radius, n denotes Zernike radial order, and $m $ denotes Zernike azimuthal order. Each Zernike term $Z_n^m({u,\theta } )$ can be derived based on Jacobi polynomials as
$$Z_n^m({u,\theta } )= {u^{\tilde{m}}}P_{\tilde{n}}^{({0,\tilde{m}} )}({2{u^2} - 1} )\left\{ {\begin{array}{{c}} {\cos ({\tilde{m}\theta } ){\; \; \; \; \; \; \; \; \; \; }m \ge 0}\\ {\sin ({\tilde{m}\theta } )\; \; \; \; \; \; \; \; \; \; m < 0} \end{array}} \right.\; \; ,$$
where $\tilde{m} = |m |,\; \tilde{n} = ({n - |m |} )/2$. Combining Eq. (1) and (2), we can rewrite the unknown coefficients $c_n^m$ and the freeform surface description as
$$c_n^m = \left\{ {\begin{array}{{c}} {a_{\tilde{n}}^{\tilde{m}}\; \; \; \; \; \; \; \; \; \; m \ge 0}\\ {b_{\tilde{n}}^{\tilde{m}}\; \; \; \; \; \; \; \; \; \; m < 0} \end{array}} \right.\; \; ,$$
$$S({u,\theta } )= \mathop \sum \limits_{\tilde{m},\tilde{n}} [{a_{\tilde{n}}^{\tilde{m}}{u^{\tilde{m}}}\cos ({\tilde{m}\theta } )P_{\tilde{n}}^{({0,\tilde{m}} )}({2{u^2} - 1} )+ b_{\tilde{n}}^{\tilde{m}}{u^{\tilde{m}}}\sin ({\tilde{m}\theta } )P_{\tilde{n}}^{({0,\tilde{m}} )}({2{u^2} - 1} )} ]\; .$$
Considering the one-to-one mapping between $m,n$ and $\tilde{m},\tilde{n}$, and to simplify the notation in the rest of this article, we rewrite all the $\tilde{m},\tilde{n}$ on the right-hand side of Eq. (4) as $m,n$ and rewrite $P_\beta ^{({0,\alpha } )}$ as $P_\beta ^\alpha $. Furthermore, we factor out terms to separate the sum over m from the sum over n, which yields
$$S({u,\theta } )= \mathop \sum \limits_m \left\{ {{u^m}\mathop \sum \limits_n [{a_n^m\cos ({m\theta } )+ b_n^m\sin ({m\theta } )} ]P_n^m({2{u^2} - 1} )} \right\}\; .$$
Equation (5) is reminiscent of the second term of Eq. (1.7) in [13], except that the $Q_n^m$ polynomials are now replaced by the Jacobi polynomials $P_n^m$. The MSF surface fitting task then reduces to solving for the set of coefficients $a_n^m$ and $b_n^m$ in Eq. (5) for the different $m,n$ orders.
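To make the Jacobi-based form of Eq. (2) concrete, the short sketch below evaluates a Zernike term through the Jacobi polynomials and checks two familiar low-order terms against their closed forms. It is only an illustrative Python/SciPy sketch (the implementation published with this paper is a MATLAB routine [21]); `scipy.special.eval_jacobi` supplies $P_{\tilde{n}}^{({0,\tilde{m}} )}$.

```python
import numpy as np
from scipy.special import eval_jacobi

def zernike(n, m, u, theta):
    """Z_n^m(u, theta) via the Jacobi-polynomial form of Eq. (2) (unnormalized)."""
    mt, nt = abs(m), (n - abs(m)) // 2
    radial = u**mt * eval_jacobi(nt, 0, mt, 2 * u**2 - 1)   # u^|m| * P_{n~}^{(0,|m|)}(2u^2 - 1)
    return radial * (np.cos(mt * theta) if m >= 0 else np.sin(mt * theta))

u, th = 0.7, 0.3
print(np.isclose(zernike(4, 0, u, th), 6*u**4 - 6*u**2 + 1))   # primary spherical term
print(np.isclose(zernike(2, 2, u, th), u**2 * np.cos(2*th)))   # astigmatism term
```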

2.2 Azimuthal fitting

The azimuthal fitting isolates the $\cos ({m\theta } )$ and $\sin ({m\theta } )$ components and provides the intermediate fitting results with only radial dependencies ${A_m}(u )$ and ${B_m}(u )$ as

$$S({u,\theta } )= \mathop \sum \limits_m [{{A_m}(u )\cos ({m\theta } )+ {B_m}(u )\sin ({m\theta } )} ]\; .$$
Ref. [13] offers valuable insights into azimuthal fitting using the Fourier series and the discrete Fourier transform (DFT); in our implementation, the DFT is ultimately evaluated with a fast Fourier transform (FFT), which is simply an efficient way to compute the DFT.

Starting by treating the radial position ${u_i}$ as a known parameter while the azimuthal position $\theta $ is the variable, we evenly sample the azimuthal angle from 0 to 2$\pi $ and set up an array $S_i^{array}$ for each ${u_i}$ as

$$S_i^{array}: = [{S({{u_i},{\; }{\theta_1}} ),{\; }S({{u_i},{\; }{\theta_2}} ), \ldots ,S({{u_i},{\theta_j}} ), \ldots ,S({{u_i},{\theta_J}} )} ]\; ,$$
$${\theta _j} = 2\pi {\; }\frac{{j - 1}}{J}\; ,{\; \; \; \; \; \; \; \; \; \; \; \; \; \; }j = 1,2, \ldots ,J\; ,$$
where $S({{u_i},{\theta_1}} )$ is the first element in the array, $S({{u_i},{\theta_2}} )$ is the second, etc.

Then we apply the FFT to $S_i^{array}$; the real and imaginary parts of the output are the coefficients for the $\cos ({m\theta } )$ and $\sin ({m\theta } )$ bases, respectively.

$$\textrm{Re}\{{\textrm{FFT}\{{S_i^{array}} \}} \}= [{{A_0}({{u_i}} ),{A_1}({{u_i}} ), \ldots ,{A_j}({{u_i}} ), \ldots ,{\; }{A_J}({{u_i}} )} ],$$
$$\textrm{Im}\{{\textrm{FFT}\{{S_i^{array}} \}} \}= [{{B_0}({{u_i}} ),{\; }{B_1}({{u_i}} ), \ldots ,{B_j}({{u_i}} ),{\; } \ldots ,{B_J}({{u_i}} )} ]\; .$$
The FFT produces an output array that matches the length of its input array $S_i^{array}$; that is, the number of elements in $S_i^{array}$ equals the number of elements in the output array and thus sets the highest azimuthal frequency that can be analyzed. To compute the coefficients corresponding to the $\sin ({m\theta } )$ and $\cos ({m\theta } )$ bases for each m value, it is crucial that the angular sampling size, denoted by J, is adequately large. In our approach, to prevent aliasing, we require $J \ge 2M + 1$, where M signifies the highest azimuthal term we must fit to capture the MSF details [13]. Consequently, an input array of size $2M + 1$ produces an output array of equivalent size. For our purpose, we then truncate the latter half of the real and imaginary components of the output and retain only the initial $M + 1$ elements of each, representing the coefficients for bases up to $\sin ({M\theta } )$ and $\cos ({M\theta } )$, respectively. Because we are sampling on a polar grid, surface interpolation must be applied. Here we chose cubic interpolation.

Please note that we will repeat this FFT operation multiple times with different radial positions ${u_i}$, whose values must be chosen from a specific numerical set that will be elaborated on in Section 2.3.
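As a concrete illustration of this azimuthal step, the NumPy sketch below samples one ring $u = {u_i}$, applies the FFT, and extracts ${A_m}({u_i})$ and ${B_m}({u_i})$. It is an illustrative sketch rather than the published MATLAB routine [21]: the callable `surface_sag` is a hypothetical stand-in for the (interpolated) measured surface evaluated on the polar grid, and the usual DFT normalization factors, left implicit in Eqs. (9) and (10), are written out explicitly.

```python
import numpy as np

def azimuthal_fit(surface_sag, u_i, M):
    """Extract A_m(u_i), B_m(u_i) for m = 0..M on the ring u = u_i via the FFT.

    surface_sag(u, theta) is a hypothetical callable returning sag values,
    e.g. a cubic interpolant of the measured map evaluated on the polar grid.
    """
    J = 2 * M + 1                              # angular samples, J >= 2M + 1 to avoid aliasing
    theta = 2 * np.pi * np.arange(J) / J       # theta_j = 2*pi*(j-1)/J, Eq. (8)
    F = np.fft.fft(surface_sag(u_i, theta))    # DFT of the sampled ring, Eq. (7)
    A = 2.0 * np.real(F[:M + 1]) / J           # cos(m*theta) coefficients
    B = -2.0 * np.imag(F[:M + 1]) / J          # sin(m*theta) coefficients
    A[0] /= 2.0                                # m = 0 term carries no factor of 2
    B[0] = 0.0                                 # no sin(0) component
    return A, B

# Check on a known ring signal: S = 3 + 2*cos(4*theta) - 0.5*sin(7*theta)
ring = lambda u, th: 3 + 2 * np.cos(4 * th) - 0.5 * np.sin(7 * th)
A, B = azimuthal_fit(ring, 0.5, M=10)
print(A[0], A[4], B[7])   # -> 3.0, 2.0, -0.5 (up to rounding)
```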

2.3 Radial fitting

After obtaining all the numerical coefficients for each sampled parameter ${u_i}$, the radial component for each m can be written as

$${A_m}(u )= {u^m}\mathop \sum \limits_n a_n^mP_n^m({2{u^2} - 1} )= {\left( {\frac{{1 + x}}{2}} \right)^{\frac{m}{2}}}\mathop \sum \limits_n a_n^mP_n^m(x )\; ,$$
$${B_m}(u )= {u^m}\mathop \sum \limits_n b_n^mP_n^m({2{u^2} - 1} )= {\left( {\frac{{1 + x}}{2}} \right)^{\frac{m}{2}}}\mathop \sum \limits_n b_n^mP_n^m(x )\; ,$$
where $x = 2{u^2} - 1$ maps the parameter range from $u \in [{0,\; 1} ]$ to $x \in [{ - 1,1} ]$ to enable the integration in the next step from -1 to 1.

The radial fitting leverages the orthogonality of the Jacobi polynomials $P_n^m$, which can be written as

$$\mathop \smallint \nolimits_{ - 1}^1 {({1 + x} )^m}P_n^m(x )P_k^m(x )dx = \frac{{{2^{m + 1}}}}{{2n + m + 1}}{\delta _{nk}}\; .$$
Thus, for the known azimuthal order m and the radial order n we are fitting, the Zernike coefficients $a_n^m,\; b_n^m$ can be obtained from this relation:
$$\begin{aligned} \mathop \smallint \nolimits_{ - 1}^1 {C_m}(x ){\left( {\frac{{1 + x}}{2}} \right)^{\frac{m}{2}}}P_n^m(x ){\; }dx &= {\left( {\frac{1}{2}} \right)^m}\mathop \sum \limits_k \left[ {c_k^m\mathop \smallint \limits_{ - 1}^1 {{({1 + x} )}^m}P_k^m(x )P_n^m(x )dx} \right]\\ &= c_n^m\frac{2}{{2n + m + 1}}{\; }, \end{aligned}$$
where C and c can be either $A,\; a$ or $B,\; b$.

Next, we aim to compute $c_n^m$ from Eq. (14). Continuous integration, a natural choice for this process, is however unfeasible here because the measured data are discrete point clouds, and the fitting process occurs within a digital computing environment. Therefore, we need a solution that handles discrete data effectively while maintaining numerical accuracy, and we adopt a powerful tool in numerical integration: quadrature. The general form of the Gauss-Jacobi quadrature rule is given as

$$\mathop \smallint \limits_{ - 1}^1 {({1 - x} )^\alpha }{({1 + x} )^\beta }f(x ){\; }dx \approx \mathop \sum \limits_{i = 1}^K {w_i}f({{x_i}} )\; ,$$
where ${x_1},\; {x_2},\; \ldots ,\; {x_K}$ are the roots of the Jacobi polynomial $P_K^{({\alpha ,\beta } )}(x )$, and ${w_1},\; {w_2},\; \ldots ,\; {w_K}$ are the corresponding weights for this quadrature summation [19]. Given that the Gauss-Jacobi quadrature rule is exact for all polynomials of degree $t \le 2K - 1$ [19], and that the degree of the associated radial polynomial is $t = 2n + m$ [13], implying that the Nyquist radial sampling requirement should be $K > n + m/2$, the number of sample points used in this quadrature rule is set to $K = 2({n + 1} )$ for good numerical accuracy.

Although the Gauss-Jacobi quadrature rule can be leveraged in Zernike fitting once we have the corresponding roots and weights, we choose not to use it because its root-weight combination depends on the parameters $\alpha $ and $\beta $, and $\beta $ varies with the azimuthal order m in our application. This means that for each m order to be fit, a unique set of $({x_1^m,\; x_2^m,\; \ldots ,\; x_K^m;\ w_1^m,\; w_2^m, \ldots ,w_K^m} )$ would need to be calculated, increasing the computational complexity.

When $\alpha = \beta = 0$, the Jacobi polynomials reduce to a particular case, the Legendre polynomials, and the Gauss-Jacobi quadrature rule becomes the Gauss-Legendre quadrature rule, which is the rule applied in our fitting process. Equation (15) then reduces to

$$\mathop \smallint \limits_{ - 1}^1 f(x )dx \approx \mathop \sum \limits_{i = 1}^K {w_i}f({{x_i}} )\; .$$
Given Eq. (14), $f(x )= {C_m}(x ){\left( {\frac{{1 + x}}{2}} \right)^{\frac{m}{2}}}P_n^m(x )$. Furthermore, the root-weight combination is independent of the azimuthal order and the actual freeform surface being fitted; thus, it is possible for us to pre-calculate these parameters and save the root-weight sets for future usage.

Now the challenge becomes getting the accurate numerical values for all the K roots for the Legendre polynomials ${P_K}(x )$ and the corresponding weights. Here, we bold matrices and put arrows on top of vectors. Fortunately, the root-weight set $[{{x_i},\; {w_i}} ]$ can be calculated with the eigenvalues ${\lambda _i}$ and the corresponding eigenvectors ${\vec{X}_i}$ of ${\boldsymbol J}_{\boldsymbol K}^0$, the tridiagonal recurrence matrix for Jacobi polynomials that satisfy ${\boldsymbol J}_{\boldsymbol K}^0{\vec{X}_i} - {\lambda _i}{\vec{X}_i} = 0$, as [20]

$${x_i} = {\lambda _i};{\; \; \; \; \; \; \; }{w_i} = \frac{2}{{({1 - x_i^2} ){{({P_K^\mathrm{^{\prime}}({{x_i}} )} )}^2}}}\; ,$$
where $P_K^{\prime}$ denotes the derivative of the Legendre polynomials ${P_K}$. An efficient calculation of the recurrence matrix ${\boldsymbol J}_{\boldsymbol K}^{\boldsymbol m}$ with arbitrary parameters m and K is included in Section 2.4. Please note that the weights ${w_i}$ can also be calculated by [19]
$${w_i} = 2{|{{X_{i,1}}} |^2}\; ,$$
where ${X_{i,1}}$ denotes the first element in the eigenvector ${\vec{X}_i}$. Equation (18) simplifies the computation process as the eigenvectors and eigenvalues can be calculated simultaneously.
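The node-and-weight computation of Eqs. (17) and (18) can be sketched as below for the Legendre case ($\alpha = \beta = 0$), using the standard symmetric tridiagonal (Golub-Welsch) matrix on $[-1,1]$, whose off-diagonal entries $n/\sqrt{4n^2 - 1}$ are the textbook values [19,20]; NumPy's built-in `leggauss` is included only as an independent check. This is an illustrative sketch under those standard assumptions, not the authors' implementation [21].

```python
import numpy as np

def gauss_legendre_nodes_weights(K):
    """Gauss-Legendre nodes and weights on [-1, 1] via Golub-Welsch, Eqs. (17)-(18).

    For the Legendre case the symmetric tridiagonal recurrence matrix has zero
    diagonal and off-diagonal entries n / sqrt(4 n^2 - 1), n = 1..K-1 [19,20].
    """
    n = np.arange(1, K)
    off = n / np.sqrt(4.0 * n**2 - 1.0)
    J = np.diag(off, 1) + np.diag(off, -1)
    lam, X = np.linalg.eigh(J)                 # eigenvalues = nodes x_i, Eq. (17)
    w = 2.0 * np.abs(X[0, :])**2               # weights from first eigenvector entries, Eq. (18)
    return lam, w

x, w = gauss_legendre_nodes_weights(5)
x_ref, w_ref = np.polynomial.legendre.leggauss(5)    # independent check
print(np.allclose(x, x_ref), np.allclose(w, w_ref))  # -> True True
```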

After replacing the integration with the Gauss-Legendre quadrature rule, the unknown coefficients in Eq. (14) can be numerically calculated with

$$c_n^m \approx \frac{{2n + m + 1}}{2}\mathop \sum \limits_{i = 1}^K {w_i}{\; }{C_m}({{x_i}} ){\left( {\frac{{1 + {x_i}}}{2}} \right)^{\frac{m}{2}}}P_n^m({{x_i}} )\; .$$
As the calculation of the coefficients requires the numerical components ${A_m}$ and ${B_m}$ at the Legendre roots ${x_i}$ specifically, we must choose the radial sample positions ${u_i}$ in Eq. (7) to correspond to the roots ${x_i}$.

Please note that Eq. (19) contains the term ${\left( {\frac{{1 + x}}{2}} \right)^{\frac{m}{2}}} = {u^m}$, and the calculation of the weights ${w_i}$ relies on a recurrence in n. Higher m and n orders therefore lead to greater computational complexity, which may introduce numerical errors. With that said, the impact of such errors on the overall fitting process is mitigated by the fact that, in MSF surface characterization, the higher-order Zernike coefficients typically have significantly smaller amplitudes. Therefore, while the theoretical precision of these high-order terms is affected by the recurrence relationship, their practical contribution to the shape description of freeform surfaces is often negligible.
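Putting Eq. (19) into code, the sketch below computes the coefficients for one azimuthal order m from the azimuthal results ${C_m}({x_i})$ (either ${A_m}$ or ${B_m}$) sampled at the Legendre nodes; `scipy.special.eval_jacobi` supplies $P_n^{(0,m)}$. It is an illustrative sketch, not the published MATLAB routine [21], verified here by a round trip on a synthetic radial profile with known coefficients.

```python
import numpy as np
from scipy.special import eval_jacobi

def radial_fit(C_m, x, w, m, N):
    """Zernike-type coefficients c_n^m for n = 0..N via Eq. (19).

    C_m : values of the azimuthal result (A_m or B_m) at the Legendre nodes x.
    x, w: Gauss-Legendre nodes and weights on [-1, 1].
    """
    um = ((1.0 + x) / 2.0) ** (m / 2.0)                      # u^m = ((1+x)/2)^(m/2)
    c = np.empty(N + 1)
    for n in range(N + 1):
        Pn = eval_jacobi(n, 0, m, x)                         # Jacobi P_n^{(0,m)}(x_i)
        c[n] = 0.5 * (2 * n + m + 1) * np.sum(w * C_m * um * Pn)
    return c

# Round trip on a synthetic radial profile with known coefficients (m = 3):
m, N = 3, 6
x, w = np.polynomial.legendre.leggauss(2 * (N + 1))          # K = 2(N+1) nodes
true_c = np.array([0.7, 0.0, -0.2, 0.05, 0.0, 0.0, 0.01])
C_m = ((1 + x) / 2) ** (m / 2) * sum(c * eval_jacobi(n, 0, m, x)
                                     for n, c in enumerate(true_c))
print(np.allclose(radial_fit(C_m, x, w, m, N), true_c))      # -> True
```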

The fitting procedure is summarized in Fig. 1. We also published a MATLAB function named “ZernikeLegendreFit” to disseminate the new fitting algorithm [21]. Note that the MATLAB routine calls a prior routine written by Ilhan Kaya that computes the Zernike polynomials recursively [10].


Fig. 1. The flowchart of the GLQ Zernike fitting of a freeform surface. The two dashed boxes on the right calculate related roots and weights to use the Gauss-Legendre quadrature rule in the fitting, and these two can be calculated separately even without knowing the measured freeform surface. The measured freeform surface, the highest required fitting terms $N,M$, and the roots ${x_i}$ of the recurrence matrix are used in the azimuthal fitting process, which provides the coefficients of different $\sin (. )$ and $\cos (. )$ bases. Using the Gauss-Legendre quadrature rule, the radial fitting calculates the final coefficients based on Eq. (19).


2.4 Tridiagonal recurrence matrix for Jacobi polynomials

The three-term recurrence formula for calculating the Jacobi polynomials can be written as

$$P_{n + 2}^m(x )= ({{a_n}x + {b_n}} )P_{n + 1}^m(x )- {c_n}P_n^m(x )\; ,$$
where the superscript m is a known constant parameter. The symmetric tridiagonal recurrence matrix for $P_K^m(x )$ can then be presented as [19]
$${i_n} ={-} \frac{{{b_{n - 1}}}}{{{a_{n - 1}}}};{\; \; \; \; \; }{j_n} = \sqrt {\frac{{{c_n}}}{{{a_{n - 1}}{a_n}}}} ;{\; \; \; \; \; }{a_0} = m + 2;{\; \; \; \; \; }{b_0} ={-} ({m + 1} );$$
$${\boldsymbol J}_{\boldsymbol K}^{\boldsymbol m} = \left[ {\begin{array}{{ccccc}} {{i_1}}&{{j_1}}&0&0&0\\ {{j_1}}&{{i_2}}&{{j_2}}&0&0\\ 0&{{j_2}}&{{i_3}}&0&0\\ 0&0&0& \ldots &{{j_{K - 1}}}\\ 0&0&0&{{j_{K - 1}}}&{{i_K}} \end{array}} \right]\; \; .$$
For example, the tridiagonal recurrence matrix of $P_3^0(x )$ is listed below.
$${a_1} = 3,{\; \; \; }{b_1} ={-} \frac{3}{2},{\; \; \; }{c_1} = \frac{1}{2};{\; \; }{a_2} = \frac{{10}}{3},{\; \; \; }{b_2} ={-} \frac{5}{3},{\; \; \; }{c_2} = \frac{2}{3};$$
$${\boldsymbol J}_3^0 = \left[ {\begin{array}{{ccc}} {1/2}&{\sqrt {1/12} }&0\\ {\sqrt {1/12} }&{1/2}&{\sqrt {1/15} }\\ 0&{\sqrt {1/15} }&{1/2} \end{array}} \right] \approx \left[ {\begin{array}{{ccc}} {0.5}&{0.2887}&0\\ {0.2887}&{0.5}&{0.2582}\\ 0&{0.2582}&{0.5} \end{array}} \right]$$
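As a numerical check of this example, the short sketch below assembles ${\boldsymbol J}_3^0$ from the listed recurrence coefficients via Eq. (21), with ${a_0} = 2$ and ${b_0} ={-} 1$ following from Eq. (21) for $m = 0$; it reproduces the printed matrix entries. This is only an illustrative verification sketch.

```python
import numpy as np

# Recurrence coefficients for the m = 0 example above (a_0 = 2, b_0 = -1 from Eq. (21)).
a = [2.0, 3.0, 10.0 / 3.0]
b = [-1.0, -1.5, -5.0 / 3.0]
c = [None, 0.5, 2.0 / 3.0]                                # c_0 is never used

i = [-b[k] / a[k] for k in range(3)]                      # diagonal entries i_1, i_2, i_3
j = [np.sqrt(c[k] / (a[k - 1] * a[k])) for k in (1, 2)]   # off-diagonal entries j_1, j_2

J3 = np.diag(i) + np.diag(j, 1) + np.diag(j, -1)
print(np.round(J3, 4))   # [[0.5, 0.2887, 0], [0.2887, 0.5, 0.2582], [0, 0.2582, 0.5]]
```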

2.5 Least squares fitting

The fundamental principle of LSQ fitting is to solve the least-squares problem represented by ${\boldsymbol A}\vec{x} = \vec{b}$. In the context of our application, $\vec{x}$ represents the vector of all the unknown coefficients ${c_i}$, $\vec{b}$ is the vector of measured surface sag $S({\vec{r}} )$, and ${\boldsymbol A}$ is the matrix consisting of the calculated values of the single-indexed Zernike polynomials ${Z_j}({\overrightarrow {{r_i}} } )$ at the corresponding surface positions $\overrightarrow {{r_i}} $. The matrix equation can be written as

$$\left[ \begin{array}{cccc} {Z_1}({{{\vec{r}}_1}} )&{{Z_2}({{{\vec{r}}_1}} )}&{{Z_ {\ldots} }({{{\vec{r}}_1}} )}&{{Z_M}({{{\vec{r}}_1}} )}\\ {{Z_1}({{{\vec{r}}_2}} )}&{{Z_2}({{{\vec{r}}_2}} )}&{{Z_{\ldots} }({{{\vec{r}}_2}} )}&{{Z_M}({{{\vec{r}}_2}} )}\\ {{Z_1}({{{\vec{r}}_ {\ldots} }} )}&{{Z_2}({{{\vec{r}}_ {\ldots} }} )}&{{Z_ {\ldots}}({{{\vec{r}}_{\ldots}}} )}&{{Z_M}({{{\vec{r}}_{\ldots}}} )}\\ {{Z_1}({{{\vec{r}}_N}} )}&{{Z_2}({{{\vec{r}}_N}} )}&{{Z_{\ldots}}({{{\vec{r}}_N}} )}&{{Z_M}({{{\vec{r}}_N}} )} \end{array} \right]\left[ {\begin{array}{{c}} {{c_1}}\\ {{c_2}}\\ {{c_{\ldots}}}\\ {{c_M}} \end{array}} \right] = \left[ {\begin{array}{{c}} {S({{{\vec{r}}_1}} )}\\ {S({{{\vec{r}}_2}} )}\\ {S({{{\vec{r}}_{\ldots}}} )}\\ {S({{{\vec{r}}_N}} )} \end{array}} \right]$$
For the measured surface sag at any position ${\vec{r}_i}$, we have a linear equation noted as
$${c_1}{Z_1}({{{\vec{r}}_i}} )+ {c_2}{Z_2}({{{\vec{r}}_i}} )+ \ldots + {c_M}{Z_M}({{{\vec{r}}_i}} )= S({{{\vec{r}}_i}} )$$
and the whole fitting process is a linear least-squares problem with respect to the coefficients ${c_j}$: the polynomials ${Z_j}({{{\vec{r}}_i}} )$ are not altered during the fitting process (though they must be pre-calculated before being assembled into the matrix), and the model is a linear combination of all the basis functions. In that case, a simple matrix calculation is able to represent the fitting process. We therefore define the conventional LSQ fitting method used here as
$${{\boldsymbol A}^{ - 1}}{\boldsymbol A}\vec{x} = \vec{x} = {{\boldsymbol A}^{ - 1}}\vec{b}\; \; ,$$
where ${{\boldsymbol A}^{ - 1}}$ is understood as the Moore-Penrose pseudoinverse of ${\boldsymbol A}$, since ${\boldsymbol A}$ is generally non-square (N sample points by M terms).
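For reference, the conventional LSQ route of Eqs. (25)-(27) can be sketched as follows. Here `zernike_column` is a hypothetical helper returning the sag map of the single-indexed polynomial ${Z_j}$ at the valid pixels (e.g., via the recursive evaluation of [10,22]), and the solve uses a least-squares routine rather than an explicit inverse. This is an illustrative sketch, not the exact procedure timed in Section 3.

```python
import numpy as np

def lsq_zernike_fit(sag, r, theta, zernike_column, num_terms):
    """Conventional LSQ fit, Eqs. (25)-(27): solve min ||A c - b|| for the coefficients c.

    sag            : 1-D array of measured sag values at the valid (in-aperture) pixels.
    r, theta       : polar coordinates of those pixels (normalized radius, angle).
    zernike_column : hypothetical callable returning Z_j(r, theta) for single index j.
    """
    A = np.column_stack([zernike_column(j, r, theta) for j in range(1, num_terms + 1)])
    coeffs, residual, rank, _ = np.linalg.lstsq(A, sag, rcond=None)
    return coeffs

# The fitted surface is then reconstructed as A @ coeffs, reusing the same columns,
# which is why the evaluation time is negligible for the LSQ route (cf. Section 3.1).
```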
There are several advanced LSQ algorithms available. For instance, MATLAB has the “lsqlin” and “lsqnonlin” functions, which are designed for linear and nonlinear fitting scenarios, respectively, and allow for additional bounds and constraints. However, in our application, where the surface fitting matrix is a dense linear matrix, we found that these advanced functions did not surpass traditional matrix inversion in terms of accuracy or efficiency.

MATLAB also offers iterative-based LSQ functions such as “bicgstab” and “gmres”. Both require the matrix ${\boldsymbol A}$ to be square, meaning the number of polynomials M must equal the number of sample points N on the surface. Properly sampling the surface for MSF fitting requires further exploration and discussion, which is beyond the scope of this paper. In our study, we utilized all data points on the measured surface to ensure robust fitting results.

In conclusion, we find that matrix calculation is the most suitable method for our needs, outperforming other advanced LSQ techniques. Therefore, we will define direct matrix calculation as the “conventional” LSQ fitting method in this research.

3. Evaluation

As introduced in the previous section, the conventional LSQ fitting necessitates the pre-computation of the matrix ${\boldsymbol A}$, which includes each individual polynomial value at every surface map position ${Z_j}({{{\vec{r}}_i}} )$. In contrast, the proposed GLQ method still requires pre-computation of each individual polynomial value ${P_j}$, but the samples are taken only in the radial direction ${u_i}$ rather than over the 2D vector ${\vec{r}_i}$. The ability to adjust the number of radial sample points, K, based on the highest Zernike term further enhances the efficiency of the proposed fitting algorithm.

3.1 Examples: a synthetic surface and a measured surface

Here, we examine the efficacy of the GLQ Zernike fitting algorithm by applying it to both synthetic and real-measured MSF surfaces. These surfaces are also fitted using the conventional least-squares approach for comparison.

The synthetic MSF surface we chose is a $1024 \times 1024$ matrix defined by $z = 100\sin ({5\pi x} )$ with unitless circular aperture $- 1 < r < 1$. The real-measured MSF surface came as a $925 \times 925$ matrix. Only the pixels within the circular aperture (the inscribed circle of this matrix) of both surfaces have valid sag values. The Zernike coefficients required for fitting are $0 \le m \le M,\; 0 \le n \le N$ and we use the recursive formula to calculate all the Zernike and Jacobi polynomials [22]. It is worth noting that the $m,\; n$ indices we are fitting here are indeed $\tilde{m},\; \tilde{n}$ in Eq. (4), which have been simplified to $m,n $ in Eq. (5).

We consider both the quality and the efficiency of each fitting process. The quality is quantified by the root-mean-square (RMS) and peak-to-valley (PV) values of the fitting residual map relative to the original surface. More specifically, the 98% PV (the 99th-percentile value minus the 1st-percentile value) and the 90% PV are considered to avoid extreme outlier data points [23]. The RMS and PV were evaluated within a circular area of normalized radius u ranging from 0 to 0.99. Please note that we do not mask any data or fitting results out when calculating the percentage PV values.

The efficiency, however, is determined by tracking the total time taken by each method. The time consumption, measured using the “tic()” and “toc()” functions in MATLAB, is detailed in these tables under two sections: “Fitting” and “Eval.” The “Fitting” section documents the time spent deriving the coefficients. In contrast, the “Eval” section records the time spent reconstructing the surface based on the coefficients. The evaluation time is not reported for LSQ fitting because the sag maps of each Zernike term are required for the fitting procedure. That is, when the LSQ procedure performs its coefficient fitting, it has already calculated the required sag maps, and these sag maps can be directly reused in the evaluation process, making the evaluation time negligible.

We have also introduced a time ratio in our analysis. This ratio, which represents the time consumed by the conventional LSQ fitting method divided by the time taken by our proposed GLQ fitting method, has been calculated under two specific conditions: when only calculating the coefficients and when both the coefficients and the fitting error are evaluated.

Both the LSQ and GLQ fittings were done in MATLAB software (2022b) on the same workstation with one AMD Threadripper 3960X processor and 64 GB memory. No parallel computing was allowed during the fitting process. In the LSQ fitting, we divide the whole $({M + 1} )\times ({N + 1} )$ coefficient map into multiple $1 \times 10$ groups and fit them sequentially using matrix inversion; in the GLQ fitting we set $J = 2M + 1,\; K = 2({N + 1} )$ as the azimuthal and radial sampling factors.

In practice, to reduce the impact of aliasing, it may be helpful to first smooth the original image to remove noise before fitting the MSF structure [16]. However, we did not apply such smoothing here since this evaluation aims to reveal the characteristics of the two algorithms.

The fitting results for the synthetic and the real-measured MSF surfaces are shown in Table 1 and Table 2, respectively, with the residual fitting error maps shown in Fig. 2.


Fig. 2. Zernike fitting results. (a) Surface $z = 100\sin ({5\pi x} )$ (left) is fitted to Zernike polynomials with $0 \le m \le 40$; $0 \le n \le 20$ orders. The absolute residual maps from the conventional LSQ Zernike fitting (middle) and the proposed GLQ Zernike fitting (right) are presented with the RMS and PV values listed in their titles. This synthetic surface is unitless. (b) A measured MSF surface named GESPHERE (left) is fitted to Zernike polynomials with the same parameters. This surface’s aperture diameter is 4 mm, and the sag map has its unit in nm.



Table 1. Results of fitting a synthetic ${\boldsymbol z} = 100\sin ({5{\boldsymbol \pi x}} )$ MSF surface


Table 2. Results of fitting a real-measured MSF surface

Upon analyzing the fitting residuals generated by these two methods for the synthetic surface fitting, shown in Table 1 and Fig. 2(a), we conclude that the proposed GLQ method is superior for this type of surface. This is evidenced by the RMS error from the GLQ fitting being significantly lower than the RMS from the LSQ method. Conversely, for fitting real measured MSF surfaces, as shown in Table 2 and Fig. 2(b), the fitting quality of both methods is comparable, with the RMS and PV values falling within the same order of magnitude. Nevertheless, the speed advantage of the GLQ method is consistent across all the fitting experiments. This advantage becomes more pronounced with increasing polynomial order M, while it first increases and then decreases with order N. In fact, specifically for real MSF surfaces, the GLQ method demonstrates a speed improvement of at least several thousand times compared to the conventional LSQ fitting when generating the fitting coefficient maps, which are capable of revealing the MSF structures hidden in the surface.

Please note that the two methods (the proposed one and the conventional LSQ fitting) do not usually provide exactly the same fitting results or coefficient maps, even if the RMS error of both fittings is comparable. As an example, shown in Fig. 3, we use both methods to fit the sample surface GESPHERE up to $M = 100,\; N = 50$ Zernike terms while the pre- and post-processing procedures for the two fittings are kept the same. Although the RMS and PV values from the two methods are comparable, some differences between the two residual maps are noticeable. As shown in Fig. 3(a), the GLQ method fully fits the vertical raster band, whereas the LSQ method still displays a small residual at the location of the band in its residual map. In this specific application, while the LSQ method may be adequate, we conclude that the proposed GLQ method is superior as it provides a cleaner fit.


Fig. 3. The results from Zernike fitting the surface GESPHERE with two different methods. $M = 100,\; N = 50.$ (a) The residual maps from fitting surface GESPHERE with the LSQ fitting method (left) and the proposed GLQ fitting (right). Different residual patterns can be observed while the RMS and PV values are comparable. (b) The absolute difference between the two residual maps in (a). (c) The coefficient maps exported from the two methods in (a), respectively, in ${\log _{10}}(. )$ scale. (d) The absolute difference between the two coefficient maps in (c), in ${\log _{10}}(. )$ scale.


This comprehensive evaluation substantiates the advantages of the GLQ Zernike fitting algorithm over traditional LSQ fitting. Specifically, we highlight its potential to deliver fitting results with the same accuracy level while significantly reducing the processing time.

Furthermore, another demonstration focusing on the effect of interpolation is discussed in the supplemental material. We also evaluate the aliasing performance of the proposed GLQ fitting algorithm by comparing it with the existing Gauss-Chebyshev fitting method [13], which was originally implemented to fit surfaces to 2D-Q polynomials. Please refer to the supplemental material (Supplement 1) for more information.

3.2 Generalization beyond the Zernike polynomial

This proposed method employs the Gauss-Legendre quadrature rule to convert integration into discrete summation, a process succinctly described by Eq. (16). Due to its simplicity, this fitting algorithm can be generalized to other types of polynomials as long as the orthogonality relationship of those polynomials is maintained.

For example, consider fitting a signal to Chebyshev polynomials as $f(x )= \mathop \sum \limits_n {t_n}{T_n}(x )$ on $[{ - 1,\; 1} ]$. The orthogonality relationship of the Chebyshev polynomials is

$$\mathop \smallint \nolimits_{ - 1}^1 {T_n}(x ){T_m}(x )\frac{1}{{\sqrt {1 - {x^2}} }}{\; }dx = \left\{ {\begin{array}{{cc}} 0&{({n \ne m} )}\\ \pi &{({n = m = 0} )}\\ {\pi /2}&{({n = m \ne 0} )} \end{array}} \right.\; \; .$$
We can therefore write the equation that connects the integration with the desired coefficients ${t_n}$, and the final step is to use the Gauss-Legendre quadrature rule of Eq. (16) to replace the integration with a summation:
$$\begin{aligned} \mathop \smallint \nolimits_{ - 1}^1 f(x ){T_m}(x )\frac{1}{{\sqrt {1 - {x^2}} }}{\; }dx &= \mathop \sum \limits_n \left[ {{t_n}\mathop \smallint \limits_{ - 1}^1 {T_n}(x ){T_m}(x )\frac{1}{{\sqrt {1 - {x^2}} }}{\; }dx} \right]\\ &= \left\{ {\begin{array}{{cc}} {{t_m}\; \pi }&{({m = 0} )}\\ {{t_m}\pi {\; }/2}&{({m \ne 0} )} \end{array}} \right.\; \; \; , \end{aligned}$$
$$\mathop \smallint \nolimits_{ - 1}^1 f(x ){T_m}(x )\frac{1}{{\sqrt {1 - {x^2}} }}{\; }dx \approx \mathop \sum \limits_{i = 1}^K {w_i}f({{x_i}} ){T_m}({{x_i}} )\frac{1}{{\sqrt {1 - x_i^2} }}\; \; \; .$$

Combining Eq. (29) and (30), the Chebyshev coefficients can be calculated as

$${t_0} = \frac{1}{\pi }\mathop \sum \limits_{i = 1}^K {w_i}f({{x_i}} ){T_0}({{x_i}} )\frac{1}{{\sqrt {1 - x_i^2} }}\; \; \; ,$$
$${t_m} = \frac{2}{\pi }\mathop \sum \limits_{i = 1}^K {w_i}f({{x_i}} ){T_m}({{x_i}} )\frac{1}{{\sqrt {1 - x_i^2} }}\; \; ,{\; \; \; \; \; \; \; \; }({m \ne 0} )$$
where ${w_i}$ and ${x_i}$ are the weights and roots that can be calculated with Eq. (17) and (18). The left-hand side of Eq. (29) can also be calculated with the Chebyshev-Gauss quadrature, as it fits the format of $\mathop \smallint \limits_{ - 1}^1 \frac{{f(x )}}{{\sqrt {1 - {x^2}} }}dx$, which has been introduced in detail in the context of fitting Q polynomials [13]. Please note that, in this example, the orthogonality relationship and the quadrature equation share the same integration range $[{ - 1,\; 1} ]$. If this is not the case, the integration variable must be transformed so that they share a common integration range.
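A brief numerical illustration of this Chebyshev example is given below. Route 1 follows Eqs. (31) and (32) with Gauss-Legendre nodes and weights; because the $1/\sqrt{1 - x^2}$ factor is not a polynomial, that rule is only approximate here and benefits from a generous node count. Route 2 uses the Chebyshev-Gauss quadrature mentioned above, which is exact for a polynomial f. Both are illustrative sketches, not part of the published routine [21].

```python
import numpy as np
from scipy.special import eval_chebyt

# Test signal with known Chebyshev coefficients t_0..t_4
t_true = np.array([1.0, -0.5, 0.25, 0.0, 0.1])
f = lambda x: sum(t * eval_chebyt(n, x) for n, t in enumerate(t_true))

# Route 1: Gauss-Legendre quadrature, Eqs. (31)-(32). The 1/sqrt(1-x^2) factor is not a
# polynomial, so this rule is only approximate and needs a generous node count.
x, w = np.polynomial.legendre.leggauss(400)
g = w * f(x) / np.sqrt(1.0 - x**2)
t_gl = np.array([np.sum(g * eval_chebyt(m, x)) / np.pi for m in range(5)])
t_gl[1:] *= 2.0                              # factor 2/pi for m != 0, Eq. (32)

# Route 2: Chebyshev-Gauss quadrature (mentioned above), exact for polynomial f.
K = 16
xc = np.cos((2 * np.arange(1, K + 1) - 1) * np.pi / (2 * K))   # Chebyshev nodes
t_cg = np.array([(2.0 / K) * np.sum(f(xc) * eval_chebyt(m, xc)) for m in range(5)])
t_cg[0] /= 2.0                               # m = 0 term carries 1/pi instead of 2/pi

print(np.round(t_gl, 3))   # approximately recovers t_true
print(np.round(t_cg, 3))   # matches t_true to machine precision
```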

When applying the proposed GLQ fitting method to RDF $({{N_b} = 1} )$ polynomials [14,15], which have a well-defined radial component with clear orthogonality, as indicated in Eq. (30) and Eq. (27a) of [14], respectively, we encounter a specific challenge. Although we can fit MSF surfaces to RDF polynomials using GLQ, the lack of a rapid and accurate calculation method for $\mathrm{\Re }_n^m(r )$ diminishes the speed advantage reported for fitting Zernike polynomials.

4. Conclusion

In a manner akin to the approach of [13], which implemented a robust method for fitting MSF surfaces to 2D-Q polynomials, our study is motivated by fitting to Zernike polynomials, which are more commonly used across various contexts. The fitting algorithm for Zernike polynomials developed in this work, which leverages the Gauss-Legendre quadrature rule, has proven to maintain equivalent fitting accuracy while significantly increasing fitting speed. It is also noteworthy that [13] discusses an alternative approach for fitting Zernike polynomials, which is compared with the proposed Gauss-Legendre fitting in the supplemental material with respect to fitting aliasing, contributing to the diversity of methods available for surface characterization.

Our study demonstrated a quadrature fitting algorithm capable of computing Zernike coefficients without the necessity of evaluating each individual polynomial term at every map position. This process increases the speed by over three orders of magnitude (i.e., more than 1000 times) compared to the conventional least-squares method, when only the coefficient map is needed without a subsequent surface reconstruction. Even when surface reconstruction is required – a process involving the calculation of the surface sag map based on the fitted coefficients – the Zernike GLQ fitting algorithm remains significantly more efficient, approximately three times faster than the conventional approach, highlighting the method’s robust efficiency in diverse application scenarios.

A point to note is that the proposed GLQ fitting algorithm does have limitations, particularly its dependence on interpolation. Since interpolation generates a smooth surface or curve from surrounding data points, this method is unsuitable for signals with large areas or sections missing within their aperture, as such regions may not be accurately reconstructed.

Funding

National Science Foundation (EEC-2310640, EEC-2310681, IIP-1822026, IIP-1822049).

Acknowledgments

This project was funded by the Center for Freeform Optics. We thank Dr. Thomas Suleski and Dr. Luke DeMars at the University of North Carolina at Charlotte for providing the measured mid-spatial frequency freeform surface named GESPHERE and for stimulating discussions about MSFs.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. J. P. Rolland, M. A. Davies, T. J. Suleski, et al., “Freeform optics for imaging,” Optica 8(2), 161–176 (2021). [CrossRef]  

2. L. He, C. J. Evans, and A. Davies, “Optical surface characterization with the area structure function,” CIRP Ann. 62(1), 539–542 (2013). [CrossRef]  

3. R. N. Youngworth and B. D. Stone, “Simple estimates for the effects of mid-spatial-frequency surface errors on image quality,” Appl. Opt. 39(13), 2198–2209 (2000). [CrossRef]  

4. J. M. Tamkin and T. D. Milster, “Effects of structured mid-spatial frequency surface errors on image performance,” Appl. Opt. 49(33), 6522–6536 (2010). [CrossRef]  

5. K. Liang and M. A. Alonso, “Effects on the OTF of MSF structures with random variations,” Opt. Express 27(24), 34665–34680 (2019). [CrossRef]  

6. H. Aryan, G. D. Boreman, and T. J. Suleski, “Simple methods for estimating the performance and specification of optical components with anisotropic mid-spatial frequency surface errors,” Opt. Express 27(22), 32709–32721 (2019). [CrossRef]  

7. K. Liang, G. W. Forbes, and M. A. Alonso, “Validity of the perturbation model for the propagation of MSF structure in 2D,” Opt. Express 27(3), 3390–3408 (2019). [CrossRef]  

8. L. A. DeMars and T. J. Suleski, “Pupil-difference moments for estimating relative modulation from general mid-spatial frequency surface errors,” Opt. Lett. 48(9), 2492–2495 (2023). [CrossRef]  

9. F. Zernike, “Beugungstheorie des Schneidenverfahrens und seiner verbesserten Form, der Phasenkontrastmethode,” Physica 1(7-12), 689–704 (1934). [CrossRef]

10. I. Kaya, K. P. Thompson, and J. P. Rolland, “Edge clustered fitting grids for φ-polynomial characterization of freeform optical surfaces,” Opt. Express 19(27), 26962 (2011). [CrossRef]  

11. I. Kaya, K. P. Thompson, and J. P. Rolland, “Comparative assessment of freeform polynomials as optical surface descriptions,” Opt. Express 20(20), 22683–22691 (2012). [CrossRef]  

12. G. W. Forbes, “Characterizing the shape of freeform optics,” Opt. Express 20(3), 2483–2499 (2012). [CrossRef]  

13. G. W. Forbes, “Fitting freeform shapes with orthogonal bases,” Opt. Express 21(16), 19061–19081 (2013). [CrossRef]  

14. K. Liang, G. W. Forbes, and M. A. Alonso, “Rapidly decaying Fourier-like bases,” Opt. Express 27(22), 32263 (2019). [CrossRef]  

15. L. A. DeMars, A. Bauer, B. D. Stone, et al., “Workflow for modeling of generalized mid-spatial frequency errors in optical systems,” Opt. Express 32(2), 2688–2703 (2024). [CrossRef]  

16. G. W. Forbes, “Never-ending struggles with mid-spatial frequencies,” in Optical Measurement Systems for Industrial Inspection IX (International Society for Optics and Photonics, 2015), Vol. 9525, p. 95251B.

17. W. H. Press, Numerical Recipes 3rd Edition: The Art of Scientific Computing (Cambridge University Press, 2007).

18. N. Takaki, G. W. Forbes, and J. P. Rolland, “Schemes for cubature over the unit disk found via numerical optimization,” Journal of Computational and Applied Mathematics 407, 114076 (2022). [CrossRef]  

19. G. H. Golub and J. H. Welsch, “Calculation of Gauss quadrature rules,” Math. Comp. 23(106), 221–230 (1969). [CrossRef]  

20. M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (U.S. Government Printing Office, 1968).

21. Y. Fan, “ZernikeLegendreFit: Fast, precise, high-capacity fitting,” MATLAB Central File Exchange, 2024, https://www.mathworks.com/matlabcentral/fileexchange/132713-zernikelegendrefit-fast-precise-high-capacity-fitting.

22. G. W. Forbes, “Robust and fast computation for the polynomials of optics,” Opt. Express 18(13), 13851–13862 (2010). [CrossRef]  

23. P. E. Murphy, “Updating the peak-to-valley (PV) irregularity specification for the modern world,” in Optifab 2023 (SPIE, 2023), Vol. 12778, pp. 237–247.

Supplementary Material (1)

Supplement 1: Interpolation and aliasing analysis
