Optica Publishing Group

Efficient layout-aware statistical analysis for photonic integrated circuits

Open Access

Abstract

Fabrication variability significantly impacts the performance of photonic integrated circuits (PICs), making it crucial to quantify its impact before fabrication. Such analysis enables circuit and system designers to optimize their designs for robustness and maximum yield when designing for manufacturing. This work presents a simulation methodology, Reduced Spatial Correlation Matrix-based Monte-Carlo (RSCM-MC), to efficiently study the impact of spatially correlated fabrication variations on the performance of PICs. First, a simple and reliable method is presented to extract physical correlation lengths, the variability parameters that define the inverse of the spatial frequencies of width and height variations over a wafer. Then, the process of generating correlated variations for MC simulations using the RSCM-MC methodology is presented. The methodology generates correlated variations by first creating a reduced correlation matrix containing the spatial correlations between all circuit components, and then processing it using Cholesky decomposition to obtain correlated variations for all circuit components. These variations are then used to conduct MC simulations. The accuracy and the computation performance of the proposed methodology are compared with other layout-dependent Monte-Carlo simulation methodologies, such as virtual wafer-based Monte-Carlo (VW-MC). A Mach-Zehnder lattice filter is used to study the accuracy, and a second-order Mach-Zehnder filter and a 16x16 optical switch matrix system are used to compare the computational performance.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Silicon photonics (SiP) has been growing rapidly in the fields of communications, biomedicine, and computing [1]. The high refractive index contrast in silicon-on-insulator (SOI) designs has allowed the confinement of light in tightly packed sub-micron waveguides with sharp bends. However, the high index contrast in SOI also makes these sub-micron waveguides more prone to manufacturing variations. Dealing with manufacturing variability in photonic integrated circuits (PICs) and systems has been a persistent challenge [1,2]: the variability causes fabrication errors in waveguide width and thickness, which can lead to significant changes in light propagation constants [3,4]. These changes in propagation constants can affect device performance, especially in interferometers with long waveguide arms [3]. Therefore, it is crucial to account for variability when designing a photonic device in order to make the design more robust.

It is a challenge to characterize manufacturing variations on a wafer scale. Several techniques, such as atomic force microscope (AFM) mapping [5], scanning electron microscope (SEM) imaging [6], and analyzing the spectral response variations of microdisk resonators [7] and Bragg gratings [8,9], can be used to characterize manufacturing variations. However, these techniques are costly and time-consuming, and in the case of microdisk resonators and Bragg gratings, require complex measurements and processing. Z. Lu et al. [3] presented a more straightforward and efficient technique to extract waveguide width and thickness variations on a wafer scale. The technique extracts waveguide width and height variations from the transverse electric (TE) mode spectral response of a racetrack resonator. Z. Lu et al. [3] also identified six key variability parameters that describe width and thickness variations: width mean ($\mu_w$), width standard deviation ($\sigma_w$), width correlation length ($\xi_w$), thickness mean ($\mu_h$), thickness standard deviation ($\sigma_h$), and thickness correlation length ($\xi_h$). The paper presented methods to extract the mean and the sigma for the width and the thickness variations, but not the correlation lengths. A correlation length (CL) is a crucial parameter that defines the inverse of the frequency of spatial variations, or in other words, describes how the width and thickness variations are distributed along a surface [10]. If the correlations along a chip surface are represented using a 2D Gaussian function [3,11] given as ${e}^{-\frac{x^2+y^2}{2\sigma^2_{gauss}}}$, then we can write the correlation length as a function of the variance ($\sigma^{2}_{gauss}$) of the Gaussian function, i.e. $\xi = 2\sqrt{\sigma^{2}_{gauss}}$. In other words, $\xi/2$ determines the width of the Gaussian function. Therefore, a simple method is needed to extract this parameter from the measurements produced by the technique above.

In electronics, variability analysis typically involves conducting a corner analysis or a Monte-Carlo (MC) analysis. In photonics, however, it is difficult to capture all effects of variability with corner analysis alone, as it only predicts the performance at the process corners and does not explore the cases in between [3,12]. Moreover, when conducting MC analysis in electronics, differential variations with correlation constraints are applied to the circuit components, but the correlation constraints are applied only to the critical components that require matching, such as the matched resistor pairs of a differential amplifier [3,12]. This is because electrical devices are smaller than the operation wavelengths. In photonics, by contrast, the device sizes are much larger than the operation wavelengths, which means that small changes in waveguide width or thickness can lead to significant phase errors. The variations in both width and thickness tend to be spatially correlated, i.e., spatially closer components have more similar variations than those that are far apart. Therefore, spatial correlations must be included for all circuit components when conducting MC analysis [3,12]. The first simulation methodology to incorporate layout dependency in MC simulations for PICs was introduced by Z. Lu et al. [3]. The methodology generates correlated variations for the circuit components by generating correlated virtual wafers for both width and thickness variations, and sampling MC variations from the generated virtual wafer maps. Since its introduction, the methodology has been implemented in commercial photonic circuit design and simulation frameworks, such as the IPKISS design framework by Luceda Photonics [13,14], in the form of the Caphe Variability Extension (CapheVE) [4,15], which is implemented on top of the design framework. However, the simulation methodology does not scale well with the number of simulations and the die size, especially when dealing with small correlation lengths.

In this paper, we present a method to extract physical correlation lengths from wafer-scale variation measurements, and an alternative MC simulation methodology to efficiently conduct layout-aware statistical analysis for PICs. In section 2, we present the methodology to extract correlation lengths, along with experimental results from a chip fabricated using electron beam lithography (EBL) technology. In section 3, we briefly discuss the VW-MC technique and present the alternative MC simulation technique, RSCM-MC, for spatially-dependent MC simulations in PICs. In section 4, we use a Mach-Zehnder lattice filter to compare the performance prediction results between the two methodologies, RSCM-MC and VW-MC. We then use a second-order MZI filter and a 16x16 switch matrix to compare the computational performance of both methods. Finally, in section 5, we discuss the impact of parameters such as sampling type on the estimation of correlation length, and compare computation requirements as a function of correlation length for both VW-MC and RSCM-MC.

2. Correlation length extraction

In this section, we present the methodology to extract correlation length from surface variations. We then perform the correlation length extraction on the width and the thickness variations characterized from a 9 mm x 30 mm chip fabricated using EBL technology.

2.1 Simulating extraction

In this section, we demonstrate the process of extracting physical correlation lengths by generating a correlated rough surface and recovering the input correlation length parameter using the steps described below.

  • 1. First, we generate a 2-dimensional correlated surface (see Fig. 1(a)) using the steps described in [3,11]. This requires inputs such as the true correlation length ($\xi_t$), rms roughness ($w$), surface/die side length ($L$), and points per surface length ($N_L$). For the example presented in Figs. 1(a)–1(e), we used $\xi_t$ = 200 µm, $w$ = 5 nm, $L$ = 1200 µm and $N_L$ = 1200.
  • 2. We then randomly sample a number of points from the generated surface, denoted by $N_{samples}$ (see Fig. 1(b)). To decide the $N_{samples}$ value, we first select an arbitrary number of samples for a 1:1 ratio of $\xi_t$ to $L$, and then scale this value for the desired ratio while keeping the sampling density constant. In this case, we picked 1000 samples for a 1:1 ratio, which scales to 36000 points for a ratio of 1:6. For each sample point $p_i$, we obtain its x,y coordinates and corresponding roughness value ($f(p_i)$). The accuracy of the estimation depends on the number of points sampled and the ratio between the true correlation length and the surface side length.
  • 3. For each pair of sample points, $p_a$ and $p_b$, we calculate their Euclidean distance ($r_{ab}$) and the absolute difference of squared roughness, $H(r) = |f(p_a)^2 - f(p_b)^2|$ [10]. For 36000 sample points, we get 647,982,000 pair combinations.
  • 4. Next, we fit a logistic function (sigmoid curve) to the data,
    $$f(x) = \frac{L}{1 + e^{-k(x - {x_0})}}$$
    where L is the maximum amplitude of the curve, k is the slope of the curve, and $x_0$ is the x-value of the sigmoid's midpoint. The accuracy of the sigmoid fit depends on the data range considered for the fit. Figure 1(c) shows the sigmoid fits for a number of ranges of r, where $r_{max}$ is the maximum Euclidean distance between two sample points. A general way to find the best sigmoid fit for CL estimation is to calculate sigmoid fits for several data ranges and choose the fit that captures the plateau and has the highest R-squared value over the data considered for the fit.
  • 5. The correlation length is estimated as the point along the fit that is 5% lower than the plateau, and is denoted by the symbol $E[\xi]$. This 5% criterion was determined empirically through a number of simulations with a high density of sample points. For the case presented in Figs. 1(a)–1(c), we calculated the CL estimation error for 10 independent simulations with an input correlation length of 200 µm (shown in Fig. 1(d)). The CL estimation errors were found to be less than 3.8%.
  • 6. To demonstrate the impact of the ratio between the correlation length and the die side length, we repeated the experiment multiple times for different correlation length to die side length ratios, keeping the sample density constant for all simulations. The results are shown in Fig. 1(e), where the horizontal divider in each box represents the median of each dataset. As the ratio increases, the CL estimates improve. For smaller ratios, the data curve is noisy in the estimation region, which degrades the sigmoid fit and the correlation length estimates.
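The surface generation and estimation steps above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: it uses spectral filtering of white noise for the correlated surface, a much smaller point count than the 36000 used above, and a simple binned plateau estimate in place of the sigmoid fit, so the recovered value is only a rough estimate of $\xi_t$.

```python
import numpy as np

rng = np.random.default_rng(7)

def correlated_surface(n, length, xi, w):
    # Gaussian-correlated rough surface via spectral filtering of white noise;
    # the text defines the correlation length as xi = 2*sigma_gauss.
    x = np.linspace(-length / 2, length / 2, n)
    X, Y = np.meshgrid(x, x)
    corr = np.exp(-(X**2 + Y**2) / (xi**2 / 2))         # target Gaussian correlation
    noise = rng.standard_normal((n, n))
    spec = np.sqrt(np.abs(np.fft.fft2(corr)))           # amplitude filter
    f = np.real(np.fft.ifft2(np.fft.fft2(noise) * spec))
    return f * (w / f.std())                            # rescale to rms roughness w

n, length, xi_t, w = 300, 1200.0, 200.0, 5.0            # grid points, µm, µm, nm
surface = correlated_surface(n, length, xi_t, w)

# randomly sample points and compute H(r) for every pair
idx = rng.choice(n * n, size=800, replace=False)
ix, iy = np.unravel_index(idx, (n, n))
xs, ys = ix * (length / n), iy * (length / n)
vals = surface[ix, iy]
i, j = np.triu_indices(len(idx), k=1)
r = np.hypot(xs[i] - xs[j], ys[i] - ys[j])
H = np.abs(vals[i] ** 2 - vals[j] ** 2)

# binned mean of H(r) up to length/2 (avoids FFT wrap-around effects);
# CL is estimated where the curve first reaches 95% of its plateau
bins = np.linspace(0.0, length / 2, 41)
Hbar = np.array([H[(r >= a) & (r < b)].mean() for a, b in zip(bins[:-1], bins[1:])])
plateau = Hbar[len(Hbar) // 2:].mean()
centers = 0.5 * (bins[:-1] + bins[1:])
est = centers[np.argmax(Hbar >= 0.95 * plateau)]
print(f"estimated correlation length ~ {est:.0f} µm (true {xi_t:.0f} µm)")
```

With this crude binned estimator the 95%-of-plateau crossing tends to land somewhat below the true correlation length; the sigmoid fit and 5% criterion described above refine the estimate.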


Fig. 1. Correlation length extraction results from a simulated correlated rough surface. (a) Generated 2D surface with ${\xi}_t$ = 200 µm, $w$ = 5 nm, $L$ = 1200 µm and $N_L$ = 1200. (b) 36000 points randomly sampled from the surface shown in (a). (c) Absolute difference of squared variation, $H(r)$, between each pair of points $(p_i, p_j)$ in (b), as a function of the Euclidean distance between the points. Solid colored lines show sigmoid fits for different ranges of r. (d) Extracted correlation lengths from 10 independent simulations for an input correlation length ($\xi$) of 200 µm. (e) Distribution of correlation length estimates as a function of the ratio between the input correlation length ($\xi_t$) and the die length, L.


2.2 Experimental results

In order to demonstrate the correlation length extraction experimentally, we designed and fabricated a 9 mm x 30 mm chip using Applied Nanotools’ (ANT) [16] EBL technology. We used the methodology described in [3] to characterize the manufacturing variability of the chip, i.e. to extract the width and the thickness variations. We covered the entire chip with 2705 identical racetrack resonators, separated by a distance of 300 µm. The nominal width and thickness of the waveguides are 500 nm and 200 nm, respectively. The device has a radius, coupling length, and coupling gap of 20 µm, 10 µm, and 200 nm, respectively [3]. Figure 2(a) shows the distribution of racetrack resonators on the chip.

We measured the chip using an automated photonics testing setup. Cumulative distribution function (CDF) curves for $\Delta W$ (width variations) and $\Delta H$ (thickness variations) are shown in Figs. 2(b) and 2(c), respectively. The mean and the standard deviation of the width variations relative to the nominal width (500 nm) are found to be 6.295 nm and 1.132 nm, respectively. The mean and the standard deviation of the thickness variations relative to the nominal thickness (200 nm) are found to be -1.99 nm and 0.585 nm, respectively. The extracted width variations versus position are shown in Fig. 2(d), and the corresponding H(r) vs r curve, calculated for each pair of sample points, is shown in Fig. 2(f). Similarly, the extracted thickness variations versus position are shown in Fig. 2(e), with the corresponding H(r) vs r curve in Fig. 2(g). The extracted correlation lengths for the width and thickness variations are found to be 12.23 mm and 8.72 mm, respectively. The extracted results from the chip are summarized in Table 1 below:


Table 1. Statistical results for the manufacturing variations of a 9 mm x 30 mm chip fabricated through an e-beam lithography process.

We have compiled a manufacturing variability analysis utility in the form of a Python Jupyter notebook [17]. The tool takes the measurement data from racetrack resonators, and outputs the mean, sigma and correlation length for both width and thickness variations. The tool is available for download along with the variation data characterized from the 9 mm x 30 mm chip [18].

3. Layout-dependent MC simulation methodologies

Once the variability parameters are extracted, they can be used to estimate the impact of fabrication variations on the performance of PICs. As the variations are spatially correlated, a Monte-Carlo methodology that accounts for layout dependence is required for this purpose, such as Virtual wafer-based MC [3]. In this section, we discuss virtual wafer-based MC and present an alternative MC simulation methodology, Reduced Spatial Correlation Matrix-based Monte-Carlo.


Fig. 2. Experimental data from a 9 mm x 30 mm silicon photonic chip fabricated using EBL. (a) Distribution of racetrack resonators on the chip. (b) Cumulative distribution function (CDF) for the extracted width variations. (c) CDF for the extracted thickness variations. (d) Extracted $\Delta W$ versus position. (e) Extracted $\Delta H$ versus position. (f) H(r) vs r curve (in grey) and sigmoid fit (in red) for the width variations. (g) H(r) vs r curve (in grey) and sigmoid fit (in red) for the thickness variations.


3.1 Virtual wafer based MC (VW-MC)

Virtual wafer-based MC is a technique to generate correlated variations for MC simulations in PICs. The simulation flow is as follows. First, the netlist of a layout is extracted using an open-source tool [19] developed in KLayout [20], a layout design tool. The extracted netlist is then imported into Lumerical INTERCONNECT [21], a commercial photonic circuit simulator. Based on the mean, standard deviation and correlation length values provided by the user, virtual wafers for width and thickness variations are generated. The correlated variations are then sampled from the virtual wafers and mapped onto the individual circuit components, whose performance is updated (using parametrized compact models [3,12]) based on the width ($\Delta w$) and thickness ($\Delta h$) variations. Once the components' performance is updated, circuit simulations are conducted, and the results are recorded for further analysis.

This methodology has several advantages. First, it is the first methodology to consider layout dependency in MC simulations for PICs. Second, as long as the die size and the number of simulations remain constant, the computation cost to generate correlated samples for the MC simulations is fixed and is independent of the circuit complexity. This makes the methodology convenient when iterating on complex circuits and systems, since the size of the virtual wafer depends mainly on the die size and the number of simulations rather than on the circuit itself. To improve the computation efficiency of the method, a sparse grid of 500 µm was used for a correlation length of 4500 µm when generating virtual wafers, and linear interpolation was then used to bring this data onto a finer grid when taking samples for circuit simulations [3]. We believe it is better to represent the grid size as a ratio of the correlation length rather than as a fixed value, since the grid can then scale up or down with the correlation length. From the parameters used in [3], we can see that the authors considered a grid size to correlation length ratio of 1:9. For further comparisons, we will therefore use a grid size to correlation length ratio of 1:9, as it is not too large and should offer maximum computation efficiency while maintaining excellent accuracy when interpolating data to finer grid sizes.
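As an illustration of the sparse-grid-plus-interpolation idea, the following Python sketch generates one correlated virtual-wafer realization on a 1:9 grid and linearly interpolates it onto component coordinates. This is a simplified stand-in (Gaussian-smoothed white noise via SciPy, with hypothetical component coordinates), not the implementation used in [3] or in Lumerical INTERCONNECT.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

xi, sigma_w = 200.0, 1.132      # correlation length (µm), width sigma (nm)
die = 4000.0                    # die side length, µm
grid = xi / 9.0                 # sparse wafer grid, 1:9 grid-to-CL ratio
x = np.linspace(0.0, die, int(die / grid) + 1)

# one correlated virtual-wafer realization: Gaussian-smoothed white noise,
# rescaled to the target sigma (sigma_gauss = xi/2, expressed in grid pixels)
wafer = gaussian_filter(rng.standard_normal((x.size, x.size)), (xi / 2) / grid)
wafer *= sigma_w / wafer.std()

# linear interpolation from the sparse grid onto component coordinates
interp = RegularGridInterpolator((x, x), wafer, method="linear")
components = np.array([[120.5, 310.2], [1999.9, 2500.0], [3777.0, 42.0]])
dw = interp(components)         # per-component width-variation samples, nm
print(dw)
```

Because the wafer is stored on the sparse grid and only interpolated at the queried coordinates, the memory cost is set by the die size and grid ratio, not by the number of components, which is the trade-off discussed in this section.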

An issue with the virtual wafer technique is that the methodology does not scale well with the number of simulations and die size, especially when dealing with small correlation lengths. The virtual wafers’ size increases drastically with an increase in the die size and the number of simulations. This can significantly increase the computation cost, as shown in Fig. 3. The time and memory comparisons in the figure only include the time and memory it takes to compute virtual wafers with a sparse grid, interpolate to a fine grid of 1 µm, and sample correlated width and thickness variations.


Fig. 3. Time taken and memory required to generate correlated samples as a function of number of MC runs for VW-MC. The estimations were performed for a die size of 4 mm, grid to correlation length ratio of 1:9, and correlation length of 200 µm.


The Big O [22] time and space complexities of the method are mainly dictated by the requirements [23,24] of the 2-dimensional fast Fourier transforms (2D FFTs) used to generate the virtual wafers. This leads to time and space complexities of $O(A^2 \log A)$ and $O(A^2 + k^2)$, respectively, where $A = \sqrt{m}p$, p is the die length (in microns) divided by the grid size (in microns), m is the number of MC runs, and k represents the points per die length for interpolated dies. The grid size for the virtual wafer depends on the correlation length. All computation benchmarks were performed on a Windows machine with an Intel i5-8250 CPU and 8 GB of physical memory.

3.2 Reduced spatial correlation matrix based MC (RSCM-MC)

In this section, we present an alternative MC simulation methodology with better computation efficiency than VW-MC. The principal idea behind the improved methodology is to use a correlation matrix-based technique [25,26] to generate correlated variations for individual components without the need for any virtual wafers. The work presented in [12] demonstrated a use case of the correlation matrix technique for MC simulations in PICs. However, that work was limited to small photonic circuits such as a ring modulator. The correlation matrix method is also limited by the size of the correlation matrix, which can expand dramatically when waveguide segments are treated as individual components. Therefore, a dimensionality reduction is needed for better scalability. In this work, we introduce an improvement to the correlation matrix technique that allows the method to handle PICs of any size and number of components. The reduction of the correlation matrix relies on the assumptions made about how variations are handled for waveguide components in [3], i.e. variations for a waveguide component are averaged along its path. This allows us to obtain a single sample covariance value by comparing the variations of a point component (a primitive component whose location is represented by a single layout coordinate) and the averaged variations of the segments of a waveguide component; this covariance value is the result of the individual covariances between the point component and the segments of the waveguide component. Based on this, we can obtain a single correlation value for the cases involving waveguide components. This is explained in more depth in Appendix A and in the second step of the simulation methodology presented below.

A general description of the steps to generate correlated variations is as follows. We first record the spatial correlations between components in a correlation matrix, using a 2D Gaussian function to obtain the spatial correlation values. Then a reduction of the correlation matrix is performed, and a covariance matrix is generated from the reduced correlation matrix. Finally, the Cholesky decomposition [25-27] of the covariance matrix is computed, which decomposes the covariance matrix into the product $U U^T$, where $U$ is a lower triangular matrix and $U^T$ is its transpose. The lower triangular matrix ($U$) is then multiplied by a matrix of random uncorrelated samples to generate correlated variations for each component. The whole process is repeated for both width and thickness variations, since each has its own set of variation parameters. The simulation methodology is discussed in detail below:

  • 1. Extract netlist: The first step is to start with a layout of a photonic circuit. The netlist of the layout is then extracted (in this case, using an open-source tool [19]) and imported into Lumerical INTERCONNECT [21], a schematic circuit simulator. The netlist includes details such as the layout coordinates, connections and design parameters of the circuit components.
  • 2. Generate spatial correlation matrix: In this step, we take the mean, sigma and correlation lengths [3] for both width and thickness variations, and generate spatial correlation matrices using a 2D Gaussian function [3,11]. The spatial correlation value between two components is calculated using the following equation:
    $$c[(x_{i}, y_{i}), (x_{j}, y_{j})] = \exp \Bigg[ - \frac{(x_{j}-x_{i})^2 + (y_{j} - y_{i})^2}{l^2/2}\Bigg]$$
    where $(x_{i}, y_{i})$, $(x_{j}, y_{j})$ and $l$ are the coordinates of the first element, the coordinates of the second element, and the correlation length, respectively.

    All primitive components except waveguides are classified as point components. This means that the location of a point component can be represented by a single layout coordinate whereas for a continuous component, its location is represented by an array of coordinates along its length.

    There are three types of correlations between circuit components.

    • (a) Point to Point e.g. a grating coupler and a grating coupler
    • (b) Point to Continuous e.g. a grating coupler and a waveguide
    • (c) Continuous to Continuous e.g. a waveguide and a waveguide
    Spatial correlations of type (a) (between point components) are straightforward to calculate. However, when calculating spatial correlation values of types (b) and (c), the continuous components must be split into smaller segments. Spatial correlation values are then calculated for each segment and recorded in the spatial correlation matrix, so each segment occupies a row and a column of the matrix. This can cause problems for waveguide-heavy circuits, as the spatial correlation matrix grows at a rate of $n^2$, where n is the total number of components (including waveguide segments). The method's effectiveness therefore decreases as the number of continuous components increases. One approach is to reduce the matrix dimensionality by representing correlations of types (b) and (c) with a single correlation value. As shown in Appendix A, we can represent correlations of type (b) as
    $$C(P,Q) = \dfrac{1}{m}\sum_{j=1}^{m}C(P, Q_{j})$$
    where P is a point component, Q is a continuous component, and m is the number of segments in Q.

    We can generalize Eq. (3) to accommodate type (c) correlations.

    $$C(P,Q) = \dfrac{1}{nm}\sum_{k=1}^{n}\sum_{j=1}^{m}C(P_k, Q_j)$$
    where P is a continuous component, Q is a continuous component, n is the number of segments in P, and m is the number of segments in Q. Correlated variations generated by full and reduced matrices are compared in section 3.2.1.

  • 3. Generate correlated variations: We can then process correlation matrices either by scripting the processing routine or by simply using a utility we have implemented in SiEPIC tools [19] (More information about this utility is given below).

    Using processing routine:

    • (a) Generate a covariance matrix from the reduced correlation matrix.
    • (b) Perform the Cholesky decomposition of the covariance matrix such that the product $U U^T$ equals the covariance matrix, where $U$ is the lower triangular factor and $U^{T}$ denotes the matrix transpose of $U$.
    • (c) Generate normally distributed random numbers (mean = 0, standard deviation = 1) in a matrix $X$ of size $n \times m$, where n is the number of elements and m is the number of simulations.
    • (d) Obtain correlated samples by calculating the dot product of $U$ and $X$.

  • 4. Interpolate components’ performance and conduct circuit simulation: Finally, the performance of the circuit components is updated based on the generated samples and the circuit simulations are conducted.

    When running MC simulations, the method can be used with any optical simulation technique, such as traditional MC, gPC-based MC [28], etc. We discuss this in more detail in the next section.
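The correlation-matrix processing routine above can be sketched directly with NumPy for the point-component case. The layout coordinates and parameter values below are illustrative only; the document's $U$ notation for the lower triangular Cholesky factor is kept.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical point-component layout coordinates, in µm
coords = np.array([[0.0, 0.0], [50.0, 0.0], [300.0, 120.0], [800.0, 800.0]])
xi, sigma_w = 200.0, 1.132           # correlation length (µm), width sigma (nm)

# step 2: spatial correlation matrix from the 2D Gaussian of Eq. (2)
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
C = np.exp(-d2 / (xi ** 2 / 2))

# step 3: covariance matrix and its Cholesky factor U (cov = U @ U.T)
cov = sigma_w ** 2 * C
U = np.linalg.cholesky(cov)

# one column of dW per MC run: U times uncorrelated N(0, 1) draws
m = 100_000
X = rng.standard_normal((len(coords), m))
dW = U @ X                           # correlated width variations, nm

# nearby components (rows 0 and 1) correlate strongly; distant ones do not
print(np.corrcoef(dW)[0, 1], np.corrcoef(dW)[0, 3])
```

The empirical correlation between the two components 50 µm apart approaches the Gaussian value $e^{-50^2/(200^2/2)} \approx 0.88$, while the far-apart pair is essentially uncorrelated, which is the behavior the reduced-matrix method is designed to preserve.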

We have made an implementation of RSCM-MC available as a part of SiEPIC tools [19], which is available to download as an open-source package. The tool extracts the layout netlist, takes the variability parameters as an input, calculates the spatial correlations between all circuit components, passes this information to INTERCONNECT's Monte-Carlo functionality, and carries out the variability analysis.

3.2.1 Comparison of full vs reduced matrix


Fig. 4. (a) Placement of the point and continuous elements in the case example. (b) Full spatial correlation matrix. (c) Reduced spatial correlation matrix, where the correlation value between A′ and B′ represents the mean of the correlation values over the range [A, B(1:5)] in (b); similarly, C′ and B′ represents the mean over the range [C, B(1:5)] in (b), and the correlation of the continuous element with itself is set to one as we are treating it as a lumped element. (d) Overlaid CDF curves for all circuit components. The CDF curves for data generated using the full matrix are shown in blue and the reduced matrix in red. (e) Differences in cumulative probabilities between A & B (top), B & C (middle) and C & A (bottom), for both the full matrix (blue) and the reduced matrix (red).


In order to compare the correlated values generated by the reduced matrix with those generated by the full matrix, we created a case example with three components: two point components and one continuous component (with five segments). The placement of the components is shown in Fig. 4(a). This leads to a full matrix of size 7x7, as shown in Fig. 4(b). The reduced spatial matrix is shown in Fig. 4(c), where the correlation value between A′ and B′ represents the average of the correlation values over the range [A, B(1:5)] in (b). Similarly, C′ and B′ represents the mean over the range [C, B(1:5)] in (b).

The spatial correlation values in the range [A, B(1:5)] in (b) represent the spatial correlations between A and each segment of B. When reducing the matrix, we set the correlation of element 2 with itself to one, because the reduction treats the element as a lumped element. For the simulation, we selected $\mu$ = 0, variation $\sigma$ = 5 and correlation length $\xi$ = 40 µm. As expected, the CDF curves of the variations for all components converge to the same mean and standard deviation values (shown in Fig. 4(d)). To compare the differences between the CDFs of different components, we computed the differences in the cumulative probabilities of A & B, B & C, and C & A, as shown in Fig. 4(e), for both methods. The absolute difference between the cumulative probabilities of A and B is denoted as $\Delta P_{\Sigma}(A, B)$. The absolute differences between cumulative probabilities for both cases are contained within the same error range (<1%). The errors for the full matrix are shown in blue and those for the reduced matrix in red in Fig. 4(e).
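Equations (3) and (4) amount to averaging blocks of the full matrix. The sketch below builds a reduced 3x3 matrix for a case like the one in Fig. 4, with a point component A, a five-segment continuous component B, and a point component C; the segment coordinates are hypothetical, chosen only to illustrate the averaging.

```python
import numpy as np

def gauss_corr(p, q, xi):
    # 2D Gaussian spatial correlation of Eq. (2)
    return np.exp(-((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / (xi ** 2 / 2))

xi = 40.0                                        # correlation length, µm
A = (0.0, 0.0)                                   # point component
B = [(20.0 + 5.0 * k, 10.0) for k in range(5)]   # continuous component, 5 segments
C = (60.0, 0.0)                                  # point component

# Eq. (3): point-to-continuous correlation = mean over the segments
C_AB = np.mean([gauss_corr(A, b, xi) for b in B])
C_CB = np.mean([gauss_corr(C, b, xi) for b in B])
C_AC = gauss_corr(A, C, xi)

# reduced 3x3 matrix; the lumped continuous element gets a diagonal entry of 1
R = np.array([[1.0, C_AB, C_AC],
              [C_AB, 1.0, C_CB],
              [C_AC, C_CB, 1.0]])
print(np.round(R, 3))
```

Setting the continuous element's diagonal entry to one matches the lumped-element treatment in Fig. 4(c); the off-diagonal entries are the single values that replace the 5-entry rows and columns of the full 7x7 matrix.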

4. Numerical experiments

In order to assess the accuracy of the RSCM-MC method, we conducted a variability analysis of a lattice filter using both methods, VW-MC and RSCM-MC. In the second subsection, we compare the time and memory performance of both methods.

4.1 Performance analysis of a lattice filter

In this section, we present the performance analysis of a lattice filter using both methodologies. The filter layout is shown in Fig. 5(a). The purpose of the filter is to obtain a flat-top response by cascading multiple stages of MZIs [29]. The nominal waveguide width and thickness are 500 nm and 220 nm, respectively. We can observe the filter's flat-top response in Fig. 5(b). To evaluate the performance of the filter, we are interested in the -1 dB bandwidth and the maximum transmission power of both outputs; these performance markers are sensitive to small changes in waveguide width and thickness. The -1 dB bandwidth is defined as the wavelength range over which the output transmission is within 1 dB of its maximum. From Fig. 5(b), we can see that the filter has an ideal -1 dB bandwidth of 4.1 nm around a wavelength of 1550 nm, and a maximum transmission power of -0.9 dB. The input parameters for the Monte-Carlo simulations are presented in Table 2 below.


Fig. 5. Comparison of Monte-Carlo results obtained from RSCM-MC and VW-MC. (a) Schematic of the multi-stage Mach-Zehnder lattice filter. (b) Ideal response of the flat-top filter. Maximum transmission vs -1 dB bandwidth for (c) output 1 and (d) output 2. (e) Cumulative distribution function (CDF) curve for the -1 dB bandwidth (output 1) distribution. (f) CDF curve for the maximum transmission (output 1) distribution. The data presented in (c)-(f) is from three independent batches of MC simulations, run to check for systematic errors.



Table 2. Input parameters for the Monte-Carlo simulations

We ran three independent MC batches with 1000 simulations each, in order to check for systematic errors. Figures 5(c) and 5(d) show the maximum transmission as a function of the -1 dB bandwidth for output 1 and output 2, respectively. Figures 5(e) and 5(f) show the cumulative distribution function curves of the -1 dB bandwidth and the maximum transmission, respectively. All of these plots show a high degree of agreement between the results obtained using the two methods. The results are summarized in Table 3, where $\mu_{1dB}$, $\sigma_{1dB}$, $\mu_{maxT}$ and $\sigma_{maxT}$ denote the mean of the -1 dB bandwidth, the standard deviation of the -1 dB bandwidth, the mean of the maximum transmission, and the standard deviation of the maximum transmission, respectively.


Table 3. MC simulation results summary

4.2 Time and memory comparison

To compare the time and memory requirements of RSCM-MC and VW-MC, we used a second-order Mach-Zehnder filter [28] (shown in Fig. 6(a)). The results reported in Fig. 6(b) include only the time taken and the memory required to generate correlated samples. For $10^4$ MC runs, a die size of 0.8 mm, and a correlation length of 200 µm, VW-MC took 255.27 seconds to generate correlated samples, whereas RSCM-MC took only 2.64 seconds. In terms of memory, VW-MC required 851 MB, whereas RSCM-MC required 246 MB.


Fig. 6. (a) Schematic of the second-order Mach-Zehnder filter. (b) Time (top) and memory (bottom) comparison for $10^4$ MC runs, die size = 800 µm, $\xi$ = 200 µm, and grid size : $\xi$ = 1:9.


The optical simulation run times for $10^4$ MC runs using different MC simulation techniques are summarized in Table 4. Any memory and time speedups achieved using RSCM-MC would be even more significant for non-classical MC analysis techniques, such as gPC-based MC analysis [28].


Table 4. A summary of computation times for different MC analysis techniques [28]

Next, we compared the computation requirements for a 16x16 switch matrix [30]. The layout of the system is shown in Fig. 7(a); the system contains about 1700 components in total. For $10^4$ MC runs, a correlation length of 500 µm, and a die size of 2.7 mm, VW-MC computed the variations in 4614 seconds, whereas RSCM-MC took 6.48 seconds. The VW-MC operations required 2138 MB of physical memory, whereas the RSCM-MC operations required 280 MB.


Fig. 7. (a) Chip layout of the 16x16 switch matrix system. (b) Time (top) and memory (bottom) comparison for $10^4$ MC runs, $\xi$ = 500 µm, and grid size : $\xi$ = 1:9.


The time and space complexities of RSCM-MC are mainly dictated by the requirements [31] of the Cholesky decomposition, and can be written as $O(n^3)$ and $O(n^2 + nm)$ respectively, where n is the number of components and m is the number of MC runs. As mentioned in Section 3.1, the time and space complexities of VW-MC depend on the die area, the correlation length, and the number of MC runs rather than the number of components. However, we can express them in terms of the number of components by assuming an average size for the circuit components, which can be used to represent the total die area. Under this assumption, the time and space complexities of VW-MC become $O(B^2 \log B)$ and $O(B^2 + k^2)$ respectively, where $B = \frac {\sqrt {m n A}}{S}$, n is the number of components, m is the number of MC runs, A is the average area of the circuit components, S is the wafer grid spacing (in microns), and k is the number of points per die length for the interpolated dies. The VW-MC complexity expressions in this section are rough estimates, and the results depend heavily on the assumed average area of the circuit components. From the expressions, we can see that the grid size S increases with the correlation length, which reduces the computation requirements for VW-MC. This is discussed in more detail in the next section.
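The sampling step whose complexity is analyzed above — Cholesky-factor the component-to-component correlation matrix, then multiply the factor by uncorrelated standard-normal draws — can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' open-source implementation: every component is treated as a point component (the reduced-matrix averaging for continuous components is omitted), and a Gaussian spatial-correlation kernel is assumed.

```python
import numpy as np

def correlated_variations(positions, corr_length, sigma, n_runs, seed=0):
    """Spatially correlated variations for point components at `positions`.

    positions   -- (n, 2) component coordinates (microns)
    corr_length -- correlation length xi (microns)
    sigma       -- standard deviation of the variation (e.g. nm of width error)
    n_runs      -- number of Monte-Carlo samples
    Returns an (n, n_runs) array: one correlated sample per component per run.
    """
    pos = np.asarray(positions, dtype=float)
    # Pairwise squared distances between all component positions: O(n^2) memory.
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(axis=-1)
    # Gaussian spatial-correlation kernel (assumed form).
    corr = np.exp(-d2 / (corr_length**2 / 2.0))
    # Cholesky factorization dominates the cost: O(n^3) time.
    # A tiny diagonal jitter keeps the matrix numerically positive definite.
    chol = np.linalg.cholesky(corr + 1e-10 * np.eye(len(pos)))
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((len(pos), n_runs))  # uncorrelated draws
    return sigma * (chol @ z)                    # correlated draws
```

Components much closer than $\xi$ receive nearly identical variations, while components far apart vary independently, which is exactly the behaviour the correlation matrix encodes.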

5. Discussion

When extracting the correlation length from a wafer or a die/chip dedicated to characterizing manufacturing variations, it is advisable to sample the surface uniformly. The expected correlation length dictates the maximum spacing between the racetrack resonators; this spacing needs to be less than one-third of the expected correlation length. As it is not always feasible to dedicate a chip/wafer to studying manufacturing variations, users can instead sample the surface randomly, i.e., sprinkle ring resonators wherever space permits. When sampling randomly, however, it is crucial to distribute the ring resonators across the whole chip with a mix of closely and sparsely placed resonators: without enough closely spaced data points, higher-frequency variations can be missed. The extracted correlation length can change depending on how the surface is sampled, so a mix of small- and large-scale scans is necessary to obtain a good estimate of the correlation length.
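The sigmoid-fitting step of the correlation-length extraction can be sketched with `scipy.optimize.curve_fit`. The model $f(r) = L/(1+e^{-k(r-r_0)})$ is the one fitted to the binned $H(r)$-versus-$r$ data earlier in the paper; converting the fitted parameters into a correlation length follows the procedure described there. The initial-guess heuristic below is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(r, L, k, r0):
    """Sigmoid model fitted to binned H(r) data: f(r) = L / (1 + exp(-k (r - r0)))."""
    return L / (1.0 + np.exp(-k * (r - r0)))

def fit_sigmoid(r, h):
    """Fit the sigmoid to (r, H(r)) data and return (L, k, r0).

    Heuristic initial guess: plateau ~ max(H), midpoint ~ median r,
    slope ~ a few transitions over the sampled distance range.
    """
    p0 = [float(np.max(h)), 4.0 / (np.ptp(r) + 1e-12), float(np.median(r))]
    popt, _ = curve_fit(sigmoid, r, h, p0=p0, maxfev=10000)
    return popt
```

Fitting the sigmoid over several ranges of r, as in Fig. 1(c), helps confirm that the extracted plateau and midpoint are stable.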

Next, we compared the computation requirements for the circuit shown in Fig. 7 as a function of the correlation length; the results are shown in Fig. 8. In this case, RSCM-MC has better computational efficiency than VW-MC. For VW-MC, the efficiency improves as the correlation length increases, because the grid size of the sparse virtual wafers grows with the correlation length, leading to virtual wafers with fewer data points.


Fig. 8. Comparison of computation requirements for a die size of 2.7 mm and $10^4$ MC runs. (a) Time taken to generate correlated variations as a function of the correlation length for VW-MC (black) and RSCM-MC (green). (b) Memory required as a function of the correlation length for VW-MC (black) and RSCM-MC (green).


However, as shown in Fig. 8, the time and memory requirements for VW-MC saturate at 2526 seconds and 370 MB, respectively. This saturation point is mainly dictated by the grid size of the interpolated wafer blocks (1 µm in this case). It is also explained by the space complexity expressions presented in Sections 3.1 and 4.2: the saturation region corresponds to the regime where the memory required for the interpolated dies exceeds that of the virtual wafer. For RSCM-MC, the time and memory requirements remain constant, as they are affected only by the number of components in the circuit, which remains unchanged.
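The saturation trend can be reproduced with a back-of-envelope model of the $O(B^2 + k^2)$ space complexity from Section 4.2. All constants below (average component area, interpolation grid) are assumptions chosen for illustration, not values from the paper.

```python
def vwmc_point_counts(xi_um, n_components=1700, mc_runs=10_000,
                      avg_area_um2=400.0, die_um=2700.0, interp_grid_um=1.0):
    """Rough point counts behind the O(B^2 + k^2) space complexity of VW-MC.

    Returns (virtual-wafer points B^2, interpolated-die points k^2).
    """
    S = xi_um / 9.0                                          # grid size : xi = 1:9
    B = (mc_runs * n_components * avg_area_um2) ** 0.5 / S   # wafer side in grid points
    k = die_um / interp_grid_um                              # points per die length
    return B * B, k * k
```

As the correlation length grows, the $B^2$ term shrinks with the coarsening wafer grid while the $k^2$ term stays fixed, so the total memory flattens out, matching the saturation seen in Fig. 8(b).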

For small circuits or a small number of simulations, users can consider either methodology, RSCM-MC or VW-MC, to generate correlated variations. From the comparison data presented in Section 4, the RSCM-MC method is more advantageous for large numbers of simulations (> 1000) and larger die sizes (> 1 mm). It is also useful when running MC simulations on a personal computer, as its computation requirements are light. A bottleneck of the RSCM-MC method is that its cost scales with the number of components in a circuit; when working with circuits that have a very large number of components, the VW-MC method is advised. VW-MC also works better for circuits and systems containing Bragg gratings, which cannot be treated as continuous components: each grating period needs to be treated as an individual component when generating width and thickness variations, which increases the computational cost of the RSCM-MC method.

In this work, we presented a method to model manufacturing variations at the die level. Other effects can have additional impacts on the waveguide width and thickness, such as lithography proximity and etch loading effects that are local to the device area. These effects are classified as within-device variations and can be observed in various devices, such as Bragg gratings, contra-directional couplers, and grating couplers. Neither RSCM-MC nor VW-MC accounts for such local effects. However, when modelling manufacturing variations, users can employ other methods, such as lithography simulations [32], alongside the MC analysis to account for these local effects.

6. Conclusion

This paper addressed the challenges of extracting physical correlation lengths from fabrication variations and of efficiently generating correlated samples for Monte-Carlo simulations in photonic integrated circuits. We presented a simple and reliable method to extract physical correlation lengths for width and thickness variations from the variability data obtained using the technique described in [3], and demonstrated the method on a 9 mm x 30 mm chip fabricated using an electron beam lithography process. The estimated correlation lengths for the width and thickness variations are 12.23 mm and 8.72 mm, respectively. We then presented an efficient way to generate spatially correlated variations for Monte-Carlo simulations in photonic integrated circuits and systems, detailed the full simulation recipe, and verified its reliability with numerical experiments on a lattice filter. We also benchmarked the performance of the proposed methodology against the previously proposed VW-MC methodology using a second-order MZI filter and a 16x16 optical switch matrix system. Moreover, we have made the correlation extraction utility and the implementation of RSCM-MC open-source and available for download.

Appendix A: Averaging of correlation / covariance values for cases including continuous components

In this section, we derive an expression that justifies the averaging of correlation/covariance values for continuous components. For simplicity, we assume a simple case of type b, i.e., a point component (P) and a continuous component (Q). The expression can then be extended to the more complex type c cases, i.e., between two continuous components. The sample covariance between two variables can be written as [33]:

$$Cov(P,Q) = \sum_{i=1}^{x} \frac{(P_i - \bar P)(Q_i -\bar Q)}{(x-1)}$$
where x is the number of samples, and $\bar P$ and $\bar Q$ are the sample means of variables P and Q, respectively.

As the number of samples (x) increases, the sample means of both variables converge to the input mean $\bar u$. Therefore, the expression can be rewritten as:

$$Cov(P,Q) = \sum_{i=1}^{x} \frac{(P_i - \bar u)(Q_i -\bar u)}{(x-1)}$$
Let us assume that the component Q has m segments. As shown in the virtual wafer method [3], the sample variation for the whole component Q at sample instance i can be written as the average of the sample variations for all segments of Q at instance i:
$$Q_i = \frac{Q_{1i} + Q_{2i} + Q_{3i} \ldots. Q_{mi}}{m}$$
where i is the sample/instance number.

From Eqs. (6) and (7), we get:

$$Cov(P,Q) = \sum\limits_{i=1}^{x} \frac{(P_i - \bar u)( \frac{Q_{1i} + Q_{2i} + Q_{3i} \ldots. Q_{mi}}{m} -\bar u)}{(x-1)}\\ = \sum\limits_{i=1}^{x} \frac{(P_i - \bar u)(\sum\limits_{j=1}^{m}\frac{Q_{ji}}{m} -\bar u)}{(x-1)}$$
For the reduced matrix case, we suppose that the sample covariance between P and Q can be written as the average of the sample covariances between P and the segments of Q. This assumption is based on the idea that we can obtain a single sample covariance value by comparing the variations of component P with the averaged variations of the segments of waveguide component Q, and that this covariance value is the result of the individual covariances between component P and the segments of Q.
$$\begin{aligned} Cov(P,Q) &= \frac{Cov(P, Q_{1}) + Cov(P, Q_2) + Cov(P, Q_3) \ldots. Cov(P,Q_m)}{m} \\ &= \sum\limits_{i=1}^{x} \dfrac{(P_i - \bar u)(Q_{1i} - \bar u)}{m(x-1)} + \sum\limits_{i=1}^{x} \dfrac{(P_i - \bar u)(Q_{2i} - \bar u)}{m(x-1)} + \ldots. \sum\limits_{i=1}^{x} \dfrac{(P_i - \bar u)(Q_{mi} - \bar u)}{m(x-1)} \\ & = \sum\limits_{i=1}^{x} (P_i - \bar u) \dfrac{(Q_{1i} - \bar u + Q_{2i} - \bar u + \ldots. Q_{mi} - \bar u)}{m(x-1)} \\ & = \sum\limits_{i=1}^{x} (P_i - \bar u) \dfrac{(Q_{1i} + Q_{2i} + \ldots. Q_{mi} - m\bar u)}{m(x-1)} \\ & = \sum\limits_{i=1}^{x} \frac{(P_i - \bar u)(\sum\limits_{j=1}^{m}\frac{Q_{ji}}{m} -\bar u)}{(x-1)} \end{aligned}$$
This proves that the sample covariance/correlation between a point component and a continuous component can be written as a single averaged covariance/correlation value.
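The identity can also be checked numerically. The sketch below uses the sample means directly rather than the asymptotic input mean $\bar u$; because the sample covariance is linear in each argument, the equality holds exactly in that case too. The sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x, m = 500, 6                      # samples, segments of continuous component Q

P = rng.normal(size=x)             # point-component variations
Q_seg = rng.normal(size=(m, x))    # per-segment variations of Q
Q = Q_seg.mean(axis=0)             # lumped variation of Q, as in Eq. (7)

def sample_cov(a, b):
    # Sample covariance with the (x - 1) denominator, as in Eq. (6).
    return float(((a - a.mean()) * (b - b.mean())).sum() / (len(a) - 1))

lhs = sample_cov(P, Q)                                        # Cov(P, Q)
rhs = np.mean([sample_cov(P, Q_seg[j]) for j in range(m)])    # averaged Cov(P, Q_j)
```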

For type b correlations, the sample correlation between P and Q can be re-written as the average sample correlation between P and the segments of Q.

$$\begin{aligned}C(P,Q) & = \frac{C(P, Q_{1}) + C(P, Q_2) + C(P, Q_3) \ldots. C(P,Q_m)}{m} \\ & = \dfrac{1}{m}\sum\limits_{j=1}^{m}C(P, Q_j) \end{aligned}$$
For type c correlations, the sample correlation expression can be rewritten as:
$$C(P,Q) = \dfrac{1}{nm}\sum_{k=1}^{n}\sum_{j=1}^{m}C(P_k, Q_j)$$
where P is a continuous component, Q is a continuous component, n is the number of segments in P, and m is the number of segments in Q.
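A direct implementation of the type b and type c averaging is a double loop over segment pairs. The sketch below assumes the Gaussian spatial-correlation kernel used earlier in the paper; the segment coordinates and correlation length in the usage comments are illustrative.

```python
import numpy as np

def spatial_corr(p, q, xi):
    """Assumed Gaussian spatial correlation between two points p and q."""
    d2 = float(((np.asarray(p, dtype=float) - np.asarray(q, dtype=float)) ** 2).sum())
    return float(np.exp(-d2 / (xi**2 / 2.0)))

def reduced_corr(P_segments, Q_segments, xi):
    """Reduced correlation between two (possibly continuous) components:
    the average of all segment-to-segment correlations, as in Eq. (11).
    A point component is simply a single-segment list (type b)."""
    vals = [spatial_corr(p, q, xi) for p in P_segments for q in Q_segments]
    return sum(vals) / len(vals)
```

With one segment on each side this reduces to the plain point-to-point kernel, recovering the point-to-point case.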

Funding

Natural Sciences and Engineering Research Council of Canada.

Acknowledgments

We acknowledge the Silicon Electronic-Photonic Integrated Circuits (SiEPIC) Fabrication (SiEPICfab) program at the University of British Columbia for fabrication access. We would like to thank Dr. Stefan Preble from Rochester Institute of Technology for the Mach-Zehnder lattice filter design. We want to give our special thanks to Mr. Cameron Horvath and Dr. Jocelyn Bachman from Applied Nanotools.

Disclosures

The authors declare no conflicts of interest.

References

1. L. Chrostowski and M. Hochberg, Silicon Photonics Design: From Devices to Systems (Cambridge University Press, 2016).

2. W. Bogaerts, M. Fiers, and P. Dumon, “Design Challenges in Silicon Photonics,” IEEE J. Sel. Top. Quantum Electron. 20(4), 1–8 (2014). [CrossRef]  

3. Z. Lu, J. Jhoja, J. Klein, X. Wang, A. Liu, J. Flueckiger, J. Pond, and L. Chrostowski, “Performance prediction for silicon photonics integrated circuits with layout-dependent correlated manufacturing variability,” Opt. Express 25(9), 9712–9733 (2017). [CrossRef]  

4. W. Bogaerts, Y. Xing, and U. Khan, “Layout-Aware Variability Analysis, Yield Prediction, and Optimization in Photonic Integrated Circuits,” IEEE J. Sel. Top. Quantum Electron. 25(5), 1–13 (2019). [CrossRef]  

5. S. K. Selvaraja, “Wafer scale fabrication technology for silicon photonic integrated circuit,” (2011).

6. S. K. Selvaraja, W. Bogaerts, P. Dumon, D. Van Thourhout, and R. Baets, “Subnanometer Linewidth Uniformity in Silicon Nanophotonic Waveguide Devices Using CMOS Fabrication Technology,” IEEE J. Sel. Top. Quantum Electron. 16(1), 316–324 (2010). [CrossRef]  

7. W. A. Zortman, D. C. Trotter, and M. R. Watts, “Silicon photonics manufacturing,” Opt. Express 18(23), 23598–23607 (2010). [CrossRef]  

8. N. Ayotte, A. D. Simard, and S. LaRochelle, “Long Integrated Bragg Gratings for SoI Wafer Metrology,” IEEE Photonics Technol. Lett. 27(7), 755–758 (2015). [CrossRef]  

9. X. Wang, W. Shi, H. Yun, S. Grist, N. A. F. Jaeger, and L. Chrostowski, “Narrow-band waveguide Bragg gratings on SOI wafers with CMOS-compatible fabrication process,” Opt. Express 20(14), 15547–15558 (2012). [CrossRef]  

10. M. Bloomfield, “Roughness Concepts,” http://homepages.rpi.edu/~bloomm2/roughness.pdf.

11. D. Bergström, http://www.mysimlabs.com/matlab/surfgen/rsgeng2D.m

12. J. Pond, J. Klein, J. Flückiger, X. Wang, Z. Lu, J. Jhoja, and L. Chrostowski, “Predicting the yield of photonic integrated circuits using statistical compact modeling,” Proc. SPIE 10242, 102420S (2017). [CrossRef]  

13. W. Bogaerts, M. Fiers, M. Sivilotti, and P. Dumon, “The IPKISS photonic design framework,” in Optical Fiber Communication Conference, OSA Technical Digest (online) (Optical Society of America, 2016), paper W1E.1.

14. “IPKISS.eda,” https://www.lucedaphotonics.com/en/product/ipkiss-eda.

15. “CAPHE Circuit Simulator IPKISS.eda,” https://www.lucedaphotonics.com/en/product/caphe-circuit-simulator-ipkisseda.

16. “Applied Nanotools Inc.,” https://www.appliednt.com/.

17. “Project Jupyter,” https://jupyter.org/.

18. jaspreetj, “jaspreetj/manufacturing-variability-analysis-tool,” https://github.com/jaspreetj/manufacturing-variability-analysis-tool.

19. lukasc-ubc, “lukasc-ubc/SiEPIC-Tools,” https://github.com/lukasc-ubc/SiEPIC-Tools.

20. “KLayout Layout Viewer And Editor,” https://www.klayout.de.

21. “PIC Design and Simulation Software - Lumerical INTERCONNECT,” https://www.lumerical.com/products/interconnect/.

22. R. Stephens, Essential Algorithms : A Practical Approach to Computer Algorithms (Indianapolis, In Wiley, 2019).

23. F. Mahmood, M. Toots, L.-G. Ofverstedt, and U. Skoglund, “2D Discrete Fourier Transform with simultaneous edge artifact removal for real-time applications,” in Proceedings of International Conference on Field Programmable Technology (IEEE, 2015), pp. 236–239.

24. J. Demmel, “Parallel Spectral Methods: Fast Fourier Transform (FFTs) with Applications,” https://people.eecs.berkeley.edu/~demmel/cs267_Spr15/Lectures/lecture23_FFT_jwd15_4pp.pdf.

25. “Correlated Random Samples - SciPy Cookbook documentation,” https://scipy-cookbook.readthedocs.io/items/CorrelatedRandomSamples.html.

26. R. Wicklin, “Use the Cholesky transformation to correlate and uncorrelate variables,” https://blogs.sas.com/content/iml/2012/02/08/use-the-cholesky-transformation-to-correlate-and-uncorrelate-variables.html.

27. A. Chakraborty, “Generating multivariate correlated samples,” Computational Stat. 21(1), 103–119 (2006). [CrossRef]  

28. A. Waqas, D. Melati, P. Manfredi, and A. Melloni, “Stochastic process design kits for photonic circuits based on polynomial chaos augmented macro-modelling,” Opt. Express 26(5), 5894–5907 (2018). [CrossRef]  

29. K. Jinguji and M. Oguma, “Optical half-band filters,” J. Lightwave Technol. 18(2), 252–259 (2000). [CrossRef]  

30. H. Jayatilleka, H. Shoman, L. Chrostowski, and S. Shekhar, “Photoconductive heaters enable control of large-scale silicon photonic ring resonator circuits,” Optica 6(1), 84–91 (2019). [CrossRef]  

31. K. X. Zhou and S. I. Roumeliotis, “A sparsity-aware QR decomposition algorithm for efficient cooperative localization,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 2012), pp. 799–806.

32. C. A. Mack, “Lithographic simulation: a review,” Proc. SPIE 4440, 59–72 (2001). [CrossRef]  

33. B. G. Tabachnick and L. S. Fidell, Using Multivariate Statistics (Pearson Education, Cop, 2007).


