## Abstract

Some inputs of computational models are commonly retrieved from external sources (handbooks, articles, dedicated measurements), and therefore are subject to uncertainties. The known experimental dispersion of the inputs can be propagated through the numerical models to produce samples of outputs. Such propagation of uncertainties is already significant in metrology but also has applications in optimization and in the inverse problem resolution of the modeled physical system. Moreover, the information on uncertainties can be used to characterize and compare models, and to deduce behavior laws. This tutorial gives tools and applications of the propagation of experimental uncertainties through models. To illustrate the method and its applications, we propose to investigate the scattering of light by gold nanoparticles, which also enables the comparison of the full Mie theory and the dipole approximation. The position of the localized surface plasmon resonance and the corresponding value of the scattering efficiency are more specifically studied.

© 2017 Optical Society of America

## 1. INTRODUCTION

The basic idea of the propagation of experimental uncertainties through models comes from the handling of uncertainties in experiments. The Guide to the Expression of Uncertainty in Measurement [1–3] and its companion [4] are the founding references on the scientific exploitation of uncertainties [5]: “It is now widely recognized that, when all of the known or suspected components of error have been evaluated and the appropriate corrections have been applied, there still remains an uncertainty about the correctness of the stated result, that is, a doubt about how well the result of the measurement represents the value of the quantity being measured.” Therefore, the uncertainty of measurement is defined as a “parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand” (Section B.2.18 of [1]). Actually, in the case of the comparison between experimental results and theoretical ones, the overlap of uncertainty intervals obtained from experiments and numerical simulations is the necessary condition to assess their agreement [6]. Indeed, some inputs of numerical simulations come from experimental data (the optical properties, for instance) and therefore are subject to uncertainties whose influence deserves to be taken into account to refine the discussion on the theoretical results. As uncertainties of model inputs can be considered to be the result of a random phenomenon, propagating a random sample of inputs through the numerical model produces an output sample. The purpose of this tutorial is to show how a sample of output numerical results, produced from statistical distributions of the inputs, can be processed. More specifically, models and numerical simulations are very helpful to explain physical phenomena, to deduce behavior laws, to optimize systems, and to solve the inverse problems. 
In this tutorial, these applications of the propagation of uncertainties are addressed.

This tutorial is based on the propagation of distributions using a Monte Carlo method described in Ref. [2]. No commercial software is required. All results of this tutorial are obtained with software under open source governance: GNU Octave [7] and GNU R. The paper is organized as follows: Section 2 introduces the basis of the propagation of uncertainties and its applications. Section 3 is devoted to the numerical and physical discussions of the results. For this, the example of the scattering of light by gold nanoparticles is considered. The results of the propagation of uncertainties are used to compare models, to optimize the modeled physical system, to solve the inverse problems, and to deduce behavior laws, before concluding. The algorithms of the method of propagation of uncertainties are given in Appendix A. The glossary of notations (Appendix B) may help the reader.

## 2. PROPAGATION OF UNCERTAINTIES THROUGH NUMERICAL MODELS

Methods for handling uncertainties are used by experimentalists to determine significant digits of a measurement, especially in the case of successive measurements of the same measurand, carried out under the same conditions of measurement [1,4]. From statistical processing of the measured data, coverage intervals can be deduced. Undoubtedly, the evaluation of uncertainties is valuable in experiments [5]. Conversely, the numerical model is often evaluated only once by using either a specific value or an estimation of the mean value of each of the inputs. However, the experimental uncertainties of the inputs of a model can also be used in numerical simulations. Indeed, repeated evaluations of the model using the random generation of the inputs in accordance with their uncertainty produces samples of output results that can be processed in the same way as any other result of repeated experiments. The following subsections introduce the basis of the propagation of uncertainties through analytical and numerical approaches. The concept of coverage intervals is introduced. The use of an output sample of the numerical model for comparison of models is also described, for applications in optimization and in inverse problems, as well as in the possible deduction of behavior laws. Indeed, a deterministic link between outputs and inputs can emerge from the random results.

#### A. Analytical Approach to the Propagation of Uncertainties: Sensitivity Coefficients

Let us consider an *analytical* model ${\mathcal{M}}_{a}(\mathcal{P},\mathcal{X})$ that depends on $N$ continuous inputs $\mathcal{I}$ that are parameters $\mathcal{P}$ and variables $\mathcal{X}$. The analytical model enables us to calculate the partial derivatives with respect to each input ${\mathcal{I}}_{i}$, and therefore to produce an analytical formula of the uncertainty $u(\mathbf{o})$ on the output quantity $\mathbf{o}={\mathcal{M}}_{a}(\mathcal{I})$. This approach is directly inspired by that of the indirect measurement (see Eq. (16) on p. 21 of Ref. [1]):

$${u}^{2}(\mathbf{o})=\sum_{i=1}^{N}{c}_{i}^{2}{u}^{2}({\mathcal{I}}_{i})+2\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}{c}_{i}{c}_{j}u({\mathcal{I}}_{i})u({\mathcal{I}}_{j})\rho ({\mathcal{I}}_{i},{\mathcal{I}}_{j}),$$

where ${c}_{i}=\partial {\mathcal{M}}_{a}/\partial {\mathcal{I}}_{i}$ are the sensitivity coefficients, ${u}({\mathcal{I}}_{i})$ are the standard uncertainties of the inputs, and $\rho ({\mathcal{I}}_{i},{\mathcal{I}}_{j})$ is the correlation coefficient between inputs. If the inputs are *statistically* independent, $\rho ({\mathcal{I}}_{i},{\mathcal{I}}_{j})=0$, but the converse is not necessarily true. Evaluations of standard uncertainty components are founded either on frequency distributions of the repeated measurements (type A) or on *a priori* distributions of probability using information from references, manuals, technical data sheets, or independent characterization of the measurement apparatus (type B).

The numerical approach of the propagation of uncertainties is an alternative method if no analytical model is available, or if the partial derivative cannot be easily calculated [2]. It is based on a Monte Carlo method to determine the size of a sample that ensures control of the accuracy of the output sample characteristics (mean value, standard deviation, and coverage interval).

#### B. Numerical Approach to the Propagation of Uncertainties

Let us consider a *numerical* model ${\mathcal{M}}_{n}(\mathcal{P},\mathcal{X})$ that depends on the input parameters $\mathcal{P}$ and on the variables $\mathcal{X}$. We suppose that inputs $\mathcal{I}=(\mathcal{P},\mathcal{X})$ are subject to uncertainties that are supplied from experimental sources. For example, ${\mathcal{P}}_{i}$ is supposed to follow a Gaussian (or normal) distribution of probability, with mean value $\mu ({\mathcal{P}}_{i})$ and standard deviation $\sigma ({\mathcal{P}}_{i})$. Therefore, the random generation of $M$ values of ${\mathcal{P}}_{i}$ allows us to compute a sample of output values for the same variable $\mathcal{X}$. Thus, the mean value $\mu (\mathbf{o})$ and the standard deviation $\sigma (\mathbf{o})$ of the output sample are similar to those obtained from repeated measurements [1,2]. The size $M$ of the sample is determined as follows.
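This random generation and propagation can be sketched minimally as follows. The tutorial's own scripts use GNU Octave and GNU R; the sketch below uses Python with NumPy, and the stand-in model ${\mathcal{M}}_{n}$ and the input values are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(P, X):
    """Hypothetical stand-in for the numerical model M_n(P, X)."""
    return P * X ** 2

M = 10_000                        # size of the output sample
mu_P, sigma_P = 2.0, 0.1          # experimental mean and standard deviation of P
P = rng.normal(mu_P, sigma_P, M)  # random generation of M values of the parameter
X = 3.0                           # the variable X is held fixed here

o = model(P, X)                   # output sample
mu_o = o.mean()                   # estimated mean value mu(o)
sigma_o = o.std(ddof=1)           # estimated standard deviation sigma(o)
```

The output sample `o` is then processed exactly as a set of repeated measurements would be.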

### 1. Quantities of Interest and Sample Size

The propagation of the input uncertainties through the model produces a sample of output results. Therefore, the output sample $\mathbf{o}$ is characterized by its estimated mean value $\mu (\mathbf{o})$ and its estimated standard deviation $\sigma (\mathbf{o})$. The standard deviation measures the dispersion of possible outputs around the mean value. Moreover, coverage intervals can be deduced from the sample [2]. The coverage interval ${\mathrm{CI}}_{p\%}^{c}=[{\mathbf{o}}_{\text{low}};{\mathbf{o}}_{\text{high}}]$ contains a specified proportion of the outputs, given an arbitrary chosen coverage probability $p$ (see Algorithm 3 in Appendix A). Section 7.7 of Ref. [2] states that the boundaries of the coverage interval are the minimum and the maximum of the $M\times p$ values of the sorted sample ($M\times p$ is the product of the size $M$ of the output sample $\mathbf{o}$ and the coverage probability $p$; see Algorithm 3 in Appendix A). The lower and the upper bounds of the interval ${\mathrm{CI}}_{p\%}^{c}$ (${\mathbf{o}}_{\text{low}}$ and ${\mathbf{o}}_{\text{high}}$, respectively), as well as the estimated mean value $\mu (\mathbf{o})$ and standard deviation $\sigma (\mathbf{o})$, depend on the realization of the random drawing of the inputs. In Ref. [2], the minimum size of the sample $M$ is the maximum integer number in $\{{10}^{4};100/(1-p)\}$. For example, $p=0.95$ leads to a recommended sample size of ${10}^{4}$. Nevertheless, the delivery of this quantity of numerical results may require time-consuming computation. Therefore, we prefer to determine the size $M$ of the sample from both the number of significant digits of the input quantities, retrieved from experimental *a priori* knowledge, and the length of their intervals of variation [6]. 
The sample size $M$ must be greater than the maximum of the ratios of the lengths of intervals of variation to the smallest intervals that can be measured (discrimination threshold or finite resolution), for any parameter or variables.
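The extraction of a coverage interval from a sorted sample, in the spirit of Algorithm 3 and Section 7.7 of Ref. [2], can be sketched as follows. This Python illustration takes the shortest interval containing $M\times p$ consecutive sorted values (one common choice; the probabilistically symmetric interval is another), on a synthetic sample:

```python
import numpy as np

def coverage_interval(o, p=0.95):
    """Shortest interval containing a proportion p of the sample
    (in the spirit of Section 7.7 of GUM Supplement 1)."""
    s = np.sort(o)
    M = len(s)
    q = int(np.floor(p * M))            # number of values inside the interval
    widths = s[q - 1:] - s[:M - q + 1]  # all candidate blocks of q sorted values
    r = np.argmin(widths)               # index of the shortest candidate
    return s[r], s[r + q - 1]           # (o_low, o_high)

rng = np.random.default_rng(0)
o = rng.normal(10.0, 1.0, 100_000)      # synthetic output sample
low, high = coverage_interval(o, 0.95)  # close to mu +/- 1.96 sigma here
```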

The dispersion of the mean value of outputs ($\mu (\mathbf{o})$), of the standard deviation ($\sigma (\mathbf{o})$), and of the boundaries of the coverage interval (${\mathbf{o}}_{\text{low}}$ and ${\mathbf{o}}_{\text{high}}$) must be characterized to deduce the retained significant digits for each of these quantities. Therefore, the statistical study of a sufficient number of output samples must be carried out. Indeed, each of these quantities, deduced from a given sample, is itself a random number, since it varies from one sample to another. Reference [2] proposes an adaptive procedure to achieve a given target accuracy of these quantities. We draw inspiration from this Monte Carlo method; Algorithms 1–3 describe the procedure (Appendix A). The method is based on the repeated generation of $h$ samples of outputs (each of them having size $M$), until a given precision on the mean value, the standard deviation of the mean value, and the boundaries of the coverage interval is reached. The aim is to reach a given number of significant digits in these quantities. Actually, the repeated runs of the propagation of uncertainties supply sets of $h$ mean values, standard deviations, and boundaries of the coverage intervals. The mean value and the standard deviation of the $h$ mean values, and the standard deviation and the boundaries of the coverage interval of each sample, are evaluated. The standard deviations of each of these estimated values (from the $h$ samples), relative to the mean values, are ${\mathbf{s}}_{\mu}$, ${\mathbf{s}}_{{y}_{\text{low}}}$, and ${\mathbf{s}}_{{y}_{\text{high}}}$. The standard deviation for the merged outputs (a single sample of size $M\times h$) is ${\mathbf{s}}_{u(\mathcal{O})}$.

At each step $h$ of the Monte Carlo loop, an additional sample of size $M$ is computed. The loop stops when the maximum ${\mathbf{s}}_{y}$ of the relative standard deviations is lower than the chosen target accuracy ${\mathbf{u}}_{r}(\mathbf{o})$ (e.g., 1%). The stopping criterion is [2]

$${\mathbf{s}}_{y}=\mathrm{max}({\mathbf{s}}_{\mu},{\mathbf{s}}_{u(\mathcal{O})},{\mathbf{s}}_{{y}_{\text{low}}},{\mathbf{s}}_{{y}_{\text{high}}})<{\mathbf{u}}_{r}(\mathbf{o}).$$
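A simplified sketch of such an adaptive loop is given below (Python; the model is the same hypothetical stand-in as before, and the full procedure is the one of Algorithms 1–3 in Appendix A):

```python
import numpy as np

rng = np.random.default_rng(2)
M, p, target = 10_000, 0.95, 0.01   # sample size, coverage probability, 1% target

def one_sample():
    """One propagation run through a hypothetical model o = P * X^2."""
    P = rng.normal(2.0, 0.1, M)     # input with experimental uncertainty
    o = np.sort(P * 3.0 ** 2)       # sorted output sample
    q = int(p * M)
    r = (M - q) // 2                # probabilistically symmetric interval
    return o.mean(), o.std(ddof=1), o[r], o[r + q - 1]

estimates = [one_sample()]
while True:
    estimates.append(one_sample())  # one additional sample at each step h
    e = np.array(estimates)
    # relative standard deviations (s_mu, s_sigma, s_low, s_high) over the h runs
    s = e.std(axis=0, ddof=1) / np.abs(e.mean(axis=0))
    if len(estimates) >= 10 and s.max() < target:
        break                       # stopping criterion reached

mu = np.array(estimates)[:, 0].mean()   # retained estimate of the mean output
```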

### 2. Normality Test and Coverage Intervals

Intervals about the result of computations can be defined. They may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the output value. The coverage interval ${\mathrm{CI}}_{p\%}^{c}$ is a characteristic of a given sample. Indeed, it contains a specified proportion of the outputs, given an arbitrary chosen coverage probability $p$. But in the case of a sample of small size, its reliability may be questionable. Therefore, a complementary analysis may be of interest. It consists in testing the null hypothesis ${H}_{0}$ that the sample comes from a normal distribution of probability (for example) by using nonparametric tests of equality of probability distributions (or “goodness-of-fit”) [8]. Therefore, the question is as follows: *Is it reasonable to make the assumption that* $\mathbf{o}$ *is randomly distributed following a normal distribution?* If the answer is yes, an interval of confidence at level $p$ can be deduced.

In a normality test of data (or statistical hypothesis testing), the data ${o}_{i}$ are tested against the null hypothesis that they are normally distributed. The test gives a probability value ${p}_{\text{test}}$. For example, the Lilliefors test [9], which is a variant of the Kolmogorov test [10] with the mean and variance estimated from the sample, considers the maximum distance between the cumulative distribution functions (CDFs). The CDF is also used by the Anderson–Darling test [11], whereas the Shapiro–Wilk test [12] uses the estimate of variance. The Shapiro–Wilk test is classically considered to be the most powerful, especially in the case of a small sample size. The Pearson test (${\chi}^{2}$) gives a probability value ${p}_{\text{test}}$ of deviations between observed and theoretical frequencies. The normalized sum of squared deviations follows a ${\chi}^{2}$ distribution, under the null hypothesis [8,13].

The probability value ${p}_{\text{test}}$ is compared to a significance level $\alpha $ (arbitrary chosen). The null hypothesis either

- • is rejected if ${p}_{\text{test}}<\alpha $, and there is evidence that the output sample $\mathbf{o}$ was not from a normally distributed population, or
- • cannot be rejected if ${p}_{\text{test}}\ge \alpha $.

The results of tests are just one piece of evidence that can be helpful in accepting (or rejecting) the hypothesis of normal distribution [14]. In addition, quantile–quantile plots exhibit the possible asymmetry of the distribution and its agreement with any theoretical distribution. It has to be noted that the size of the merged output sample $\mathcal{O}$ may be too large to be processed with conventional statistical tests.
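Such tests are readily available in statistical software; the sketch below uses Python with SciPy on synthetic samples (note that SciPy's plain Kolmogorov–Smirnov test does not include the Lilliefors correction; the statsmodels package provides it):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
o = rng.normal(18.0, 0.9, 500)            # output sample under test
alpha = 0.05                              # significance level (arbitrary choice)

# Shapiro-Wilk test, often regarded as powerful for small samples
stat_sw, p_sw = stats.shapiro(o)
reject_normality = p_sw < alpha           # H0: the sample is normally distributed

# Kolmogorov-type comparison of CDFs, with mean and standard deviation
# estimated from the sample (without the Lilliefors correction)
stat_ks, p_ks = stats.kstest(o, "norm", args=(o.mean(), o.std(ddof=1)))

# a clearly non-normal sample is rejected by the same procedure
e = rng.exponential(1.0, 500)
p_exp = stats.shapiro(e)[1]
```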

Once the acceptance or rejection of the supposed normality of the output sample has been decided, two cases arise.

**Case #1, accepting the null hypothesis:** Coverage intervals having a level of confidence $p$ can be defined for the sample and for the mean value of the sample.

- • The coverage interval ${\mathrm{CI}}_{p\%}$ for the *sample*, with a level of confidence $p\%$, is the following:

$${\mathrm{CI}}_{p\%}=[\mu -{z}_{s}\sigma ;\mu +{z}_{s}\sigma ],$$

with $\mu $ being the estimated mean value of the output sample, $\sigma $ its estimated standard deviation, and ${z}_{s}$ the coverage factor. We are $p\%$ confident that a *new* value of computation of the model is in this interval. The coverage factor ${z}_{s}$ is given by the inverse of the cumulative distribution function (${\mathrm{CDF}}_{\mathcal{N}(0,1)}^{-1}$) of the normal distribution with mean 0 and standard deviation 1, as a function of the considered level of confidence $p$ (see Table G.1 in Ref. [1]):

$${z}_{s}={\mathrm{CDF}}_{\mathcal{N}(0,1)}^{-1}\left(\frac{1+p}{2}\right).$$

- • The coverage interval ${\mathrm{CI}}_{p\%}^{\mu}$ for the *mean value of the sample* is written as

$${\mathrm{CI}}_{p\%}^{\mu}=\left[\mu -{z}_{m}\frac{\sigma}{\sqrt{M}};\mu +{z}_{m}\frac{\sigma}{\sqrt{M}}\right].$$

We are $p\%$ confident that the mean of the output values is in ${\mathrm{CI}}_{p\%}^{\mu}$. Indeed, the mean value may differ from its estimation (the estimated mean value in the experiments is the best estimator of the measurand). This interval is a range of values that acts as a good estimate of the unknown mean value. The coverage factor ${z}_{m}$ is given by the inverse of the cumulative distribution function (${\mathrm{CDF}}_{t}^{-1}$) of the Student t-distribution with ${\nu}_{\text{eff}}$ degrees of freedom, as a function of the considered level of confidence $p$ [1]:

$${z}_{m}={\mathrm{CDF}}_{t,{\nu}_{\text{eff}}}^{-1}\left(\frac{1+p}{2}\right).$$

When the size $M$ of the sample grows, the t-distribution approaches the normal distribution with mean 0 and variance 1 [$\mathcal{N}(0,1)$]. References [1,2] suggest using the Welch–Satterthwaite formula to adjust the effective degrees of freedom of the output:

$${\nu}_{\text{eff}}=\frac{{u}^{4}(\mathbf{o})}{\sum_{i} \left(\frac{{c}_{i}^{4}{u}_{i}^{4}({\mathcal{I}}_{i})}{{\nu}_{i}}\right)}.$$

Both the number of random inputs (${\nu}_{i}+1$) and the uncertainties of the input parameters and variables ${u}_{i}$ are used to evaluate the effective degrees of freedom. ${c}_{i}$ values are the sensitivity coefficients [Eq. (1)]. If ${c}_{i}$ values are unknown, then the number of degrees of freedom is set to $M-1$.

The mean value is the best estimator of the output of the model. The associated coverage interval ${\mathrm{CI}}^{\mu}$ characterizes the statistical realization (i.e., if the computation is repeated on numerous occasions and interval estimates are made on each occasion, the resulting intervals would bracket the true population input in approximately 95% of the cases [4]).
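The numerical evaluation of both coverage factors and intervals can be sketched as follows (Python with SciPy; the sample statistics $\mu$, $\sigma$, and the size $M$ are hypothetical values, and the degrees of freedom are set to $M-1$ as when the sensitivity coefficients are unknown):

```python
import numpy as np
from scipy import stats

p = 0.95
M = 10_000
mu, sigma = 18.0, 0.9                        # hypothetical sample statistics

z_s = stats.norm.ppf((1 + p) / 2)            # coverage factor for the sample
nu_eff = M - 1                               # degrees of freedom if c_i unknown
z_m = stats.t.ppf((1 + p) / 2, df=nu_eff)    # coverage factor for the mean

ci_sample = (mu - z_s * sigma, mu + z_s * sigma)
ci_mean = (mu - z_m * sigma / np.sqrt(M), mu + z_m * sigma / np.sqrt(M))
```

For large $M$, `z_m` is barely distinguishable from `z_s`, in agreement with the convergence of the t-distribution toward $\mathcal{N}(0,1)$.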

**Case #2, rejecting the null hypothesis:** The result of the nonparametric tests of equality of probability distributions is a probability value such that ${p}_{\text{test}}<\alpha $. Therefore, the rejection of the hypothesis of a normal distribution of the sample is legitimate and forces us to choose an arbitrary value of the coverage factors: ${z}_{s}={z}_{m}=2$ [1]. In this case, no level of confidence can be associated with the intervals.

A first application of the propagation of uncertainties is the comparison of the results of two models. Indeed, the numerical evaluation of errors between models falls in an interval due to the uncertainties of the model inputs. This approach is close to that of the comparison between experiments and theory.

#### C. Application #1: Uncertainties for the Comparison of Models

Two models of the same experiment are considered: ${\mathcal{M}}_{1}(\mathcal{P},\mathcal{X})$ and ${\mathcal{M}}_{2}(\mathcal{P},\mathcal{X})$. For example, one of them can be an approximation of the other one, leading to a simpler analytical expression. We focus on the agreement of the output data of both models, by processing the output samples. Each model gives output samples ${\mathbf{o}}_{\mathbf{1}}$ and ${\mathbf{o}}_{\mathbf{2}}$ that can be compared using the coverage intervals described in Section 2.B. Indeed, the overlap between coverage intervals of the mean of the output indicates that both models are compatible with each other. On the other hand, in case of no overlap, the approximation fails.

These inputs produce ${\mathbf{o}}_{\mathbf{1}}={\mathcal{M}}_{1}(\mathcal{P},\mathcal{X})$ and ${\mathbf{o}}_{\mathbf{2}}={\mathcal{M}}_{2}(\mathcal{P},\mathcal{X})$. The relative error ${\mathbf{e}}_{\mathbf{r}}$ between the two models can be computed for all inputs that are randomly generated in their respective interval of uncertainty:

$${\mathbf{e}}_{\mathbf{r}}=\frac{{\mathbf{o}}_{\mathbf{1}}-{\mathbf{o}}_{\mathbf{2}}}{{\mathbf{o}}_{\mathbf{1}}}.$$
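The interval-overlap criterion for compatibility can be sketched as follows (Python; the two output samples are synthetic stand-ins for ${\mathbf{o}}_{\mathbf{1}}$ and ${\mathbf{o}}_{\mathbf{2}}$, deliberately built with slightly different means):

```python
import numpy as np

def ci_mean(o, z=1.96):
    """Approximate 95% coverage interval for the mean value of a sample."""
    m, se = o.mean(), o.std(ddof=1) / np.sqrt(len(o))
    return (m - z * se, m + z * se)

def overlap(ci1, ci2):
    """Compatibility criterion: the two coverage intervals overlap."""
    return max(ci1[0], ci2[0]) <= min(ci1[1], ci2[1])

rng = np.random.default_rng(4)
# synthetic output samples standing in for o_1 (full model) and o_2 (approximation)
o1 = rng.normal(2.30, 0.05, 10_000)
o2 = rng.normal(2.32, 0.05, 10_000)

compatible = overlap(ci_mean(o1), ci_mean(o2))
```

With these synthetic numbers, the two intervals for the mean are narrow enough that the small offset between the models is detected as an incompatibility.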

Even if the random inputs produce apparently random outputs, the models may induce deterministic links between them. The analysis of correlations reveals such a deterministic relation. Indeed, if significant correlations are found, some behavior laws can be deduced [3]. The behavior laws help to explain the underlying physics and to predict or examine experimental results [6,15].

#### D. Application #2: Uncertainties for Behavior Laws

Basically, the value of Pearson’s correlation coefficient $\rho $ [Eq. (2)] is a measure of the linear dependence between two quantities in a statistical sample. Therefore, $\rho $ is an indicator of the behavior law (or causal dependence) that could be deduced from a sample. In the case of linear dependence between two quantities, $|\rho |$ is close to 1. If the quantities have no causal relationship, $|\rho |$ is close to 0. However, the zero correlation should be considered with care: it does not imply that there is no dependence, and a visual inspection of the scatter plot of the sorted data is necessary before concluding. The correlation matrix reveals the link between all quantities involved in the problem: the outputs and the inputs.

A correlation coefficient close to $-1$ or $+1$ indicates that a deterministic relationship can be found by sorting the data and fitting the results via adequate functions. The fitting uses either models or polynomials. In some cases, the resulting function can be inverted, offering an alternative to solve inverse problems.
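Such a correlation analysis followed by a fit can be sketched as follows (Python; the near-linear link between the synthetic input and output is built in for illustration, it is not a result of the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
# synthetic propagated sample: an input R and an output linked almost linearly
R = rng.normal(25.0, 2.5, 5000)
lam = 520.0 + 0.8 * R + rng.normal(0.0, 0.2, 5000)

rho = np.corrcoef(R, lam)[0, 1]       # Pearson correlation coefficient
# |rho| close to 1 suggests a behavior law; sort the data and fit it
i = np.argsort(R)
a, b = np.polyfit(R[i], lam[i], 1)    # linear behavior law lam = a*R + b
```

A scatter plot of `lam` against `R` should always accompany the numbers, since a near-zero `rho` only excludes a *linear* dependence.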

The sample of numerical model outputs is actually the result of numerical experiments that are subject to uncertainties. Therefore, finding inputs that produce outputs that are greater than a given value is possible. This search is part of the optimization process.

#### E. Application #3: Uncertainties for the Optimization Process

The statistical data obtained from the propagation of uncertainties through the model can be used in an optimization process, which corresponds to a change of point of view in the analysis of the results. Solving an optimization problem means seeking values of the variables that lead to an optimal value of the function to be optimized [16–18]. Various methods have been proposed for global optimization, but some improvements must be introduced for the optimization of plasmonic nanostructures [19–22]. In the present case, we consider the simplest method of optimization based on the propagation of uncertainties [23,24]. The choice of a tolerance ${t}_{O}$ ($0<{t}_{O}<1$) makes it possible to separate the outputs into two categories: the outputs that fulfill the corresponding constraint are kept, and the corresponding inputs are those of the optimized system.

The optimization method involves two steps:

- 1. First, the outputs that fulfill the tolerance constraint $\mathbf{o}\ge (1-{t}_{O})\,\mathrm{max}(\mathbf{o})$ are selected [Eq. (10)].
- 2. Then the intervals for each input parameter ${\mathcal{P}}_{O}$ and input variable ${\mathcal{X}}_{O}$ are deduced from the selected family of solutions that meet the criterion [Eq. (10)].

These intervals help to improve the experiments modeled by ${\mathcal{M}}_{n}$, by indicating the tolerated uncertainty of each input. The global maximum $\mathrm{max}(\mathbf{o})$ can also be found from a specific method of optimization (simulated annealing, particle swarm optimization method [25,26], evolutionary method [27,28]). The method based on the propagation of uncertainties provides a family of acceptable sets of inputs, and therefore is closer to the experimental design. If a local optimum is found to satisfy the tolerance constraint, it is kept in the family of solutions.
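The two steps of the selection can be sketched as follows (Python; the single-input model and its numbers are toy stand-ins chosen so that the optimum is known):

```python
import numpy as np

rng = np.random.default_rng(6)
# propagated data: random inputs and the corresponding outputs
R = rng.normal(25.0, 2.5, 10_000)     # input sample
o = -(R - 27.0) ** 2 + 9.0            # toy model with a maximum at R = 27

t_O = 0.05                            # tolerance (0 < t_O < 1)
keep = o >= (1 - t_O) * o.max()       # step 1: outputs near the optimum are kept
R_opt = R[keep]                       # step 2: interval of the optimized input
interval = (R_opt.min(), R_opt.max())
```

The resulting `interval` is the tolerated uncertainty on the input that still delivers a near-optimal output, which is the experimentally useful piece of information.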

Finding parameter sets or input variables that produce a given value of outputs of the numerical model is also of interest. In this case, the target is not the optimum. This search is part of the domain of the inverse problem resolution.

#### F. Application #4: Uncertainties for Inverse Problem Resolution

Solving inverse problems consists in retrieving a set of variables used in a numerical model that adjusts itself to given experimental data [29]. If the model is robust and accurately describes the experiment, the recovered variables are expected to be consistent with the physical reality. Some optimization methods have been proposed to solve inverse problems [26,27,30,31].

Among all the solutions obtained from the propagation of uncertainties, the target is to retrieve the parameters and the variables $({\mathcal{P}}_{I},{\mathcal{X}}_{I})$ giving a specific value, ${\mathbf{o}}_{I}={\mathcal{M}}_{n}({\mathcal{P}}_{I},{\mathcal{X}}_{I})$. Generally, the problem consists in finding the inverse function: $({\mathcal{P}}_{I},{\mathcal{X}}_{I})={\mathcal{M}}_{n}^{-1}({\mathbf{o}}_{I})$. This inverse or its approximation is rarely determined, even locally, if we have no local information on the model. Moreover, in many cases, the solution is not unique, but a family of solutions is expected. Therefore, the output sample of the model, obtained from the propagation of uncertainties, can be used to solve the inverse problems.

The method consists in selecting the variable sets that correspond to the target $T$, with the possible introduction of a tolerance ${t}_{I}$ ($0<{t}_{I}<1$), as in the case of the optimization process:

$$|\mathbf{o}-T|\le {t}_{I}\,T.$$
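This selection can be sketched as follows (Python; the direct model, the target, and the tolerance are hypothetical illustration values):

```python
import numpy as np

rng = np.random.default_rng(7)
# propagated sample: which radii R produce a given resonance position?
R = rng.normal(25.0, 2.5, 10_000)     # input sample (nm)
lam = 0.8 * R + 520.0                 # toy direct model lambda_r(R), in nm

T = 540.0                             # target output (resonance position, nm)
t_I = 0.001                           # tolerance (0 < t_I < 1)
keep = np.abs(lam - T) <= t_I * T     # selection of the family of solutions
R_family = R[keep]                    # recovered inputs compatible with T
```

As expected for an inverse problem, the result is a *family* of acceptable inputs rather than a single value.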

## 3. IMPLEMENTATION FOR THE SCATTERING OF LIGHT BY GOLD NANOPARTICLES

The propagation of uncertainties described in Section 2 is applied to the scattering of light by spherical gold nanoparticles. We use two numerical models as examples of applications: the full Mie theory and the dipole approximation. Following is an outline of this section: Section 3.A deals with the basics of the full Mie theory and of the dipole approximation. Some details of the dipole approximation are given to enable discussion of the following results. The uncertainties of the inputs (parameters and variable) are evaluated in Section 3.B from references. Sections 3.C and 3.D are devoted to the analytical and statistical approaches of the propagation of uncertainties, respectively. The applications of the propagation of uncertainties to correlation analysis and behavior law determination, to optimization, and to inverse problems can be found in Sections 3.E, 3.F, and 3.G.

#### A. Full Mie Theory and the Dipole Approximation

The Mie theory [32] describes the interaction of light with material particles through an analytical approach. A fully analytical solution can be found for spherical nanoparticles. (Notations are summarized in Table 7.) The scattering efficiency of a homogeneous, isotropic spherical particle of refractive index $m$ (the surrounding medium is air) and size parameter $x=2\pi R/{\lambda}_{0}$ ($R$ is the particle radius and ${\lambda}_{0}$ is the wavelength of the illuminating light) can be written as follows:

$${Q}_{\text{sca}}=\frac{2}{{x}^{2}}\sum_{n=1}^{\infty}(2n+1)\left({|{a}_{n}|}^{2}+{|{b}_{n}|}^{2}\right).$$

The coefficients ${a}_{n}$ and ${b}_{n}$ of the Mie series are deduced from the formulation of the scalar wave equation in spherical coordinates and from the boundary conditions on the surface of the sphere:

$${a}_{n}=\frac{m{\psi}_{n}(mx){\psi}_{n}^{\prime}(x)-{\psi}_{n}(x){\psi}_{n}^{\prime}(mx)}{m{\psi}_{n}(mx){\xi}_{n}^{\prime}(x)-{\xi}_{n}(x){\psi}_{n}^{\prime}(mx)},\qquad {b}_{n}=\frac{{\psi}_{n}(mx){\psi}_{n}^{\prime}(x)-m{\psi}_{n}(x){\psi}_{n}^{\prime}(mx)}{{\psi}_{n}(mx){\xi}_{n}^{\prime}(x)-m{\xi}_{n}(x){\psi}_{n}^{\prime}(mx)},$$

where ${\psi}_{n}$ and ${\xi}_{n}$ are the Riccati–Bessel functions and the prime denotes differentiation with respect to the argument.

For subwavelength particles ($x\ll 1$), a power series expansion about $x=0$ of the scattering efficiency [Eq. (12)] can be deduced using some algebra [33,34]. The result is a function of the relative permittivity ${\epsilon}_{r}={m}^{2}$ (this parameter is a complex number). Limiting the order of the Mie scattering coefficients to the lowest powers of $x$ seems legitimate for the calculation of the scattering efficiency ${Q}_{\text{sca}}$:

$${Q}_{\text{sca}}^{d}=\frac{8}{3}{x}^{4}{\left|\frac{{\epsilon}_{r}-1}{{\epsilon}_{r}+2}\right|}^{2}.$$
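Both scattering efficiencies can be sketched compactly in Python. The Mie part below follows the classical Bohren–Huffman downward-recurrence scheme for the logarithmic derivative (not the authors' own Octave code), and the test values are purely illustrative:

```python
import numpy as np

def qsca_mie(m, x):
    """Mie scattering efficiency via the Bohren-Huffman recurrences."""
    nmax = int(x + 4 * x ** (1 / 3) + 2)          # Wiscombe truncation
    # logarithmic derivative D_n = psi_n'(mx)/psi_n(mx), downward recurrence
    nmx = max(nmax, int(abs(m * x))) + 16
    D = np.zeros(nmx + 1, dtype=complex)
    for n in range(nmx, 0, -1):
        D[n - 1] = n / (m * x) - 1.0 / (D[n] + n / (m * x))
    # Riccati-Bessel functions psi_n, chi_n by upward recurrence
    psi0, psi1 = np.cos(x), np.sin(x)             # psi_{-1}, psi_0
    chi0, chi1 = -np.sin(x), np.cos(x)            # chi_{-1}, chi_0
    xi1 = psi1 - 1j * chi1                        # xi_0
    qsca = 0.0
    for n in range(1, nmax + 1):
        psi = (2 * n - 1) / x * psi1 - psi0       # psi_n
        chi = (2 * n - 1) / x * chi1 - chi0       # chi_n
        xi = psi - 1j * chi                       # xi_n
        a_n = ((D[n] / m + n / x) * psi - psi1) / ((D[n] / m + n / x) * xi - xi1)
        b_n = ((D[n] * m + n / x) * psi - psi1) / ((D[n] * m + n / x) * xi - xi1)
        qsca += (2 * n + 1) * (abs(a_n) ** 2 + abs(b_n) ** 2)
        psi0, psi1, chi0, chi1, xi1 = psi1, psi, chi1, chi, xi
    return 2.0 / x ** 2 * qsca

def qsca_dipole(eps_r, x):
    """Dipole (lowest-order) approximation of the scattering efficiency."""
    return 8.0 / 3.0 * x ** 4 * abs((eps_r - 1.0) / (eps_r + 2.0)) ** 2
```

For a small dielectric sphere (e.g., $m=1.5$, $x=0.1$) the two efficiencies agree to well below a percent, which is the regime in which the dipole approximation is expected to hold.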

The maximum of the scattering efficiency ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$ over the visible spectrum and its position ${\lambda}_{r}$ in the spectrum are the fingerprints of the localized surface plasmon resonance of the metallic nanoparticle. This resonance is a complex phenomenon that depends both on the radius $R$ of the nanoparticle and on its relative permittivity ${\epsilon}_{r}$. Therefore, the methods presented in Section 2 are applied to the problem of the scattering of light by spherical gold nanoparticles:

- • the models are ${\mathcal{M}}_{1}={\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$ and ${\mathcal{M}}_{2}={\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}^{d})$;
- • the input of the models is $\mathcal{I}=(\mathfrak{R}({\epsilon}_{r}),\mathfrak{I}({\epsilon}_{r}),R)$, where $\mathfrak{R}({\epsilon}_{r})$ and $\mathfrak{I}({\epsilon}_{r})$ are respectively the real and imaginary parts of the relative permittivity of gold and $R$ is the radius of particles;
- • the output of the models is $\mathbf{o}=({\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}),{\lambda}_{r})$, where ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$ is the maximum of the scattering efficiency within the visible wavelength spectrum, which occurs at ${\lambda}_{0}={\lambda}_{r}$. Actually, the outputs are the abscissa and the ordinate of the maximum of the function ${Q}_{\text{sca}}({\lambda}_{0})$.

The direct application of the propagation of uncertainties described in [2] is impossible, as the uncertainties of the relative permittivity and of the radius do not play the same role in the numerical model. The radius is a standard input datum, while the relative permittivity acts at an intermediate level when the maximum is searched over the spectrum. Moreover, the reported values of the relative permittivity of gold at a given wavelength differ among handbooks and papers. Therefore, using this dispersion of inputs can help improve the discussion of validity, from a metrological point of view. The basic idea is to propagate the uncertainties through the model and then to process the output results in the same way as repeated experimental data would be. The next subsection is therefore devoted to the numerical evaluation of uncertainties for the inputs of the full Mie theory and of the dipole approximation. This first stage makes use of experimental results and bibliographical references about the measurement of the inputs of models: mean values, dispersion, and possibly the probability distribution.

#### B. Uncertainties of the Inputs of the Model

The uncertainties come from imperfect measurement, random variations of the observation, or incomplete knowledge of certain physical phenomena [1]: “Uncertainty of measurement is thus an expression of the fact that, for a given measurand and a given result of measurement of it, there is not one value but an infinite number of values dispersed about the result that are consistent with all of the observations and data and one’s knowledge of the physical world, and that with varying degrees of credibility can be attributed to the measurand.” Consequently, we evaluate the uncertainty of the radius $R$ of nanoparticles and that of the relative permittivity of gold (${\epsilon}_{r}$) from bibliographical references.

### 1. Uncertainty of the Radius of the Particle

Experimentally, the uncertainty of the radius $R$ of nanoparticles is related to the preparation method [34]. The size distribution of “gold standard” nanoparticles was carefully addressed from National Institute of Standards and Technology (NIST) reference materials RM 8011, RM 8012, and RM 8013, with nominal radii of 5, 15, and 30 nm [37]. The reference values from NIST were $4.55\pm 0.90\text{\hspace{0.17em}}\mathrm{nm}$ (i.e., 20%), $12.45\pm 0.60\text{\hspace{0.17em}}\mathrm{nm}$ (i.e., 5%), and $26.6\pm 2.6\text{\hspace{0.17em}}\mathrm{nm}$ (i.e., 10%), respectively. The uncertainty can reach 50% for nanometric size particles [38], although a few papers report much narrower multimodal distributions of $6.68\pm 0.10\text{\hspace{0.17em}}\mathrm{nm}$ (i.e., 1.5%) and $6.82\pm 0.06\text{\hspace{0.17em}}\mathrm{nm}$ (i.e., 0.9%) [39]. However, typical values of relative uncertainties are around 10% [37,40–42]. According to the dispersion of the available experimental data, for the following numerical applications using $R\approx 25\text{\hspace{0.17em}}\mathrm{nm}$, we consider the standard uncertainty for the NIST reference material RM 8013 (10%):

$$u(R)={u}_{r}\times R=0.1\times R,$$

where ${u}_{r}$ is the relative uncertainty. Histograms in most of those references show a Gaussian-shaped (or normal) distribution of sizes, but tests of the hypothesis of a normal distribution have seldom been performed on the abovementioned experimental data. Actually, these statistical tests of the null hypothesis that the sample comes from a normal distribution could increase the reliability of the uncertainties and of the coverage intervals that can be deduced from them. In the present case, they also support the choice of the normal distribution for generating the radii used for the propagation of uncertainties through the model. Let us now investigate the uncertainty of the optical parameters of the gold nanoparticles.

### 2. Uncertainty of the Relative Permittivity

The measurement of the complex dielectric constant of a single gold nanoparticle revealed high variability [43]. The relative permittivity of gold was shown to depend on the preparation method [34]. A collection of measured relative permittivities of gold can be found in [44]. They come from published articles, private communications, and handbooks [45]. Figures 1 and 2 show, respectively, the real and the imaginary parts of the relative permittivity of gold as a function of the photon energy, from Refs. [46–51].

We consider the domain of variations of the relative permittivity of gold to be the min–max values of the real part $\mathfrak{R}({\epsilon}_{r})$ and imaginary part $\mathfrak{I}({\epsilon}_{r})$, respectively. These min–max values are computed from all the experimental data shown in Figs. 1 and 2, at each wavelength ${\lambda}_{0}$. Figure 3 shows the minimum, the mean value, and the maximum of both the real and imaginary parts of the relative permittivity as functions of the wavelength. Depending on the experimental setup, the optical properties are generally measured at different sets of wavelengths. Therefore, for computational purposes, we have to evaluate all of them at the same wavelengths. To reach this goal, two approaches are commonly used:

- • Special functions are used for fitting; they come from physical considerations regarding dispersion and are used in computational codes like finite difference time domain [52,53]. For illustration, the fittings of data proposed in [52,53] are plotted (solid and dashed lines, respectively) in Figs. 1 and 2.
- • Interpolation or spline fitting of the reference data.

Even if the results from Ref. [53] are better than those from Ref. [52] in the 400–800 nm range of wavelengths, the discrepancy between the fitting by special functions and by the experimental data leads us to prefer the spline fitting. For numerical applications, both the real and imaginary parts of the relative permittivity of gold are calculated for ${\lambda}_{0}\in [510;530]\text{\hspace{0.17em}}\mathrm{nm}$ using a step of 0.1 nm. In this interval of wavelengths, the ratio of the min–max values to the mean values varies from 25% to 26% and from 16% to 30% for the real and imaginary parts of the relative permittivity of gold, respectively.
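The second approach can be sketched in a few lines of numpy. The tabulated permittivity values below are hypothetical placeholders, not the data of Refs. [46–51]; `np.interp` performs piecewise-linear interpolation, and a cubic spline (e.g., `scipy.interpolate.CubicSpline`) can be substituted with the same call pattern:

```python
import numpy as np

# Hypothetical tabulated relative permittivity of gold (wavelengths in nm);
# real measurements from the cited references would be used in practice.
lam_tab = np.array([500.0, 510.0, 520.0, 530.0, 540.0])
eps_re_tab = np.array([-2.8, -3.4, -3.9, -4.5, -5.1])
eps_im_tab = np.array([2.6, 2.4, 2.3, 2.2, 2.3])

# Common evaluation grid: 510-530 nm with a 0.1 nm step, as in the text.
lam = np.arange(510.0, 530.0 + 1e-9, 0.1)

# Interpolate both parts onto the common grid, then rebuild the complex value.
eps_re = np.interp(lam, lam_tab, eps_re_tab)
eps_im = np.interp(lam, lam_tab, eps_im_tab)
eps_r = eps_re + 1j * eps_im
```

Each data set from the literature is resampled this way, after which the min–max envelope at each grid wavelength is a simple element-wise minimum and maximum over the resampled sets.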

The structure of gold in the particle is assumed to be unknown, and therefore all the experimental values found in the literature mentioned above are considered to be equally probable. The uncertainty is therefore deduced from the min–max domains, assuming a uniform distribution of probability, and this at each wavelength ${\lambda}_{0}$ [1]:

#### C. Propagation of Uncertainties: the Analytical Approach (Illustration of Application #1)

Let us consider the dipole approximation of the scattering efficiency [Eq. (16)]. The simple formulation of ${Q}_{\text{sca}}^{d}$ facilitates the application of the analytical method for the propagation of uncertainties proposed in Section 2.A. Indeed, the calculation of the required partial derivatives is straightforward. The partial derivatives (sensitivity coefficients) are calculated from Eq. (16) using $x=kR$, with $k=2\pi /{\lambda}_{0}$. Assuming non-negligible uncertainties of both the radius $R$ of the particle and of the real and imaginary parts of the relative permittivity ${\epsilon}_{r}$, the square of the combined standard uncertainty ${u}^{2}({Q}_{\text{sca}}^{d})$ is deduced from the three partial derivatives of ${Q}_{\text{sca}}^{d}(R,\mathfrak{R}({\epsilon}_{r}),\mathfrak{I}({\epsilon}_{r}))$ [see Eq. (20)].

In the present study, the dispersion of radii as well as the evaluation of the uncertainty of the relative permittivity are obtained from independent references. Therefore, the correlation between the radius and the relative permittivity is set at 0 [1]. Only the correlation between the real and imaginary parts of the relative permittivity is included in Eq. (20). Considering correlated measurements of the real part and the imaginary part of the relative permittivity, the square of the relative standard uncertainty ${u}_{r}^{2}({Q}_{\text{sca}}^{d})$ is the sum of four terms:

This analytical approach is able to reveal the dominant terms, if they exist, in the combined standard uncertainty. ${T}_{R}$ is proportional to ${x}^{3}$, whereas the other terms are proportional to ${x}^{4}$; therefore, it can be dominant for nanoparticles of subwavelength size ($x<1$). Figure 4 shows the contribution of each additive term in the square of the combined standard uncertainty [Eqs. (20)–(24)] for $R=25\text{\hspace{0.17em}}\mathrm{nm}$. Clearly the uncertainty of the radius of the particle is prevalent, unlike that of the relative permittivity. The maximum of the relative uncertainty is 42% and occurs at ${\lambda}_{0}=506\text{\hspace{0.17em}}\mathrm{nm}$. In this zone, the contribution of the real part is vanishing, whereas that of the imaginary part exhibits a maximum. The scattering efficiency is therefore more sensitive to the imaginary part of the relative permittivity. Further explanations and illustrations can be found in Refs. [54,55] and are outside the scope of this tutorial.
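These sensitivity coefficients can be checked numerically. The sketch below assumes Eq. (16) has the standard dipole (Rayleigh) form $(8/3){x}^{4}|({\epsilon}_{r}-1)/({\epsilon}_{r}+2){|}^{2}$ and approximates the partial derivatives by central finite differences; with only the radius uncertain, $\partial {Q}_{\text{sca}}^{d}/\partial R=4{Q}_{\text{sca}}^{d}/R$, so the relative uncertainty reduces to $4{u}_{r}(R)=40\%$ for ${u}_{r}(R)=10\%$, consistent with Fig. 4:

```python
import numpy as np

def q_sca_dipole(R, eps_re, eps_im, lam0):
    """Dipole (Rayleigh) scattering efficiency; Eq. (16) is assumed to
    have the standard form (8/3) x^4 |(eps-1)/(eps+2)|^2."""
    x = 2.0 * np.pi * R / lam0
    eps = eps_re + 1j * eps_im
    return (8.0 / 3.0) * x**4 * abs((eps - 1.0) / (eps + 2.0))**2

def combined_uncertainty(R, a, b, lam0, uR, ua, ub, r_ab):
    """GUM law of propagation with a correlation term between the real
    and imaginary parts of the permittivity [cf. Eq. (20)]."""
    h = 1e-6  # finite-difference step for the sensitivity coefficients
    dR = (q_sca_dipole(R + h, a, b, lam0) - q_sca_dipole(R - h, a, b, lam0)) / (2 * h)
    da = (q_sca_dipole(R, a + h, b, lam0) - q_sca_dipole(R, a - h, b, lam0)) / (2 * h)
    db = (q_sca_dipole(R, a, b + h, lam0) - q_sca_dipole(R, a, b - h, lam0)) / (2 * h)
    u2 = (dR * uR)**2 + (da * ua)**2 + (db * db == db) * 0  # placeholder removed below
    u2 = (dR * uR)**2 + (da * ua)**2 + (db * ub)**2 + 2 * da * db * r_ab * ua * ub
    return np.sqrt(u2)

# R = 25 nm and eps_r = -3.789 + 2.310i at 518 nm (the mean value quoted
# later in the text); only the 10% radius uncertainty is switched on here.
Q = q_sca_dipole(25.0, -3.789, 2.310, 518.0)
u = combined_uncertainty(25.0, -3.789, 2.310, 518.0, uR=2.5, ua=0.0, ub=0.0, r_ab=0.0)
print(u / Q)  # -> 0.4 (i.e., 4 * 10%)
```

Switching on `ua`, `ub`, and `r_ab` with the values of Section 3.B.2 adds the permittivity terms of Eq. (20) to the same sum.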

Figure 5 illustrates the propagation of uncertainties of the relative permittivity through the dipole approximation, by plotting uncertainty bars (the uncertainty intervals) as a function of the wavelength for $R=25\text{\hspace{0.17em}}\mathrm{nm}$. The dipole approximation underestimates the resonance value ($-15\%$) and shifts its position (a blueshift of about 2 nm). The scattering efficiency computed for the mean values of the data used for the relative permittivity of gold is shown (the gray solid line represents the full Mie theory, and the black solid line is the dipole approximation). The curve of the scattering efficiency computed from the full Mie theory passes through the uncertainty intervals, and therefore both models are compatible for $R=25\text{\hspace{0.17em}}\mathrm{nm}$, reflecting the high value of the relative uncertainty of the radius. Considering the combined standard uncertainty of the calculation of the dipole approximation of the scattering efficiency, the discrepancy between the dipole approximation and the full Mie theory appears to be negligible. Nevertheless, both models should be carefully compared before concluding, by calculating the relative error of the approximation while taking into account the uncertainties of their inputs.
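Such a comparison can be reproduced with an independent textbook implementation of the Mie series (the Bohren–Huffman recurrences, not the article's code), together with the dipole form assumed above, at the mean permittivity ${\epsilon}_{r}=-3.789+2.310i$ quoted later in the text for 518 nm:

```python
import numpy as np

def mie_q_sca(x, m):
    """Scattering efficiency from the Mie series for size parameter x and
    complex refractive index m = sqrt(eps_r) (Bohren-Huffman recurrences)."""
    nmax = int(x + 4.0 * x**(1.0 / 3.0) + 2.0)
    nmx = max(nmax, int(abs(m * x))) + 16
    D = np.zeros(nmx + 1, dtype=complex)          # logarithmic derivative
    for n in range(nmx, 0, -1):                   # downward recurrence
        D[n - 1] = n / (m * x) - 1.0 / (D[n] + n / (m * x))
    psi0, psi1 = np.cos(x), np.sin(x)             # Riccati-Bessel psi_{-1}, psi_0
    chi0, chi1 = -np.sin(x), np.cos(x)            # Riccati-Bessel chi_{-1}, chi_0
    q = 0.0
    for n in range(1, nmax + 1):                  # upward recurrences
        psi = (2 * n - 1) / x * psi1 - psi0
        chi = (2 * n - 1) / x * chi1 - chi0
        xi, xi1 = psi - 1j * chi, psi1 - 1j * chi1
        a = ((D[n] / m + n / x) * psi - psi1) / ((D[n] / m + n / x) * xi - xi1)
        b = ((m * D[n] + n / x) * psi - psi1) / ((m * D[n] + n / x) * xi - xi1)
        q += (2 * n + 1) * (abs(a)**2 + abs(b)**2)
        psi0, psi1, chi0, chi1 = psi1, psi, chi1, chi
    return 2.0 / x**2 * q

# R = 25 nm near resonance; mean permittivity at 518 nm from Section 3.E.
x = 2.0 * np.pi * 25.0 / 518.0
eps = -3.789 + 2.310j
q_mie = mie_q_sca(x, np.sqrt(eps))
q_dip = (8.0 / 3.0) * x**4 * abs((eps - 1.0) / (eps + 2.0))**2
e_r = 100.0 * (q_dip - q_mie) / q_mie   # relative error in percent (negative)
```

The negative sign of `e_r` reflects the underestimation by the dipole term; repeating this evaluation over the sampled inputs yields the error intervals of Fig. 6.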

The sample generated from the propagation of uncertainties can be used to calculate intervals of error for each radius of a particle. Indeed, on either side of the mean value, the dispersion of error induced by uncertainties provides numerical information that may be useful to justify the choice or the rejection of the dipole approximation. The relative errors ${e}_{r}$ (in percent) of the approximation of the full Mie theory by the dipole term can be evaluated for each output of both models [Eq. (9)]: ${e}_{r}({\lambda}_{r},{\lambda}_{r}^{d})$ and ${e}_{r}({\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}),{\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}^{d}))$. Figure 6 shows the boundaries of the relative errors as functions of the radius $R$ of the nanoparticle. Only the uncertainties of the relative permittivity defined in Section 3.B.2 [see Fig. 3 and Eqs. (18) and (19)] are used for these computations, for each radius $R$ of the nanoparticles. The relative error of the maximum of the scattering efficiency over the visible spectrum is computed from Eqs. (12) and (16) (gray lines). The relative error of the spectral position of this maximum is deduced from the same computation (black lines).

The investigated domain of radii is [15;100] nm. The min–max range of the relative error ${e}_{r}({\lambda}_{r},{\lambda}_{r}^{d})$ of the position of the resonant wavelength obtained from the full Mie theory and from the dipole approximation slowly increases with the radius. This relative error is an increasing function of the radius $R$ but remains lower than 6% for $R<25\text{\hspace{0.17em}}\mathrm{nm}$. On the other hand, the shape of the min–max range of the relative error ${e}_{r}({\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}),{\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}^{d}))$ exhibits a maximum near $R=42\text{\hspace{0.17em}}\mathrm{nm}$ and vanishes near $R=61\text{\hspace{0.17em}}\mathrm{nm}$. Then it becomes negative and decreases rapidly with the radius, due to the major role of the multipolar terms in the Mie series [Eq. (15)]. For $R$ between 62 and 68 nm, the zero value of the error is included in the min–max interval. Therefore, the models are compatible for this range of radii. This condition is never ensured for the resonant wavelength. Consequently, both models are almost never compatible when we take only the uncertainty of the relative permittivity of gold into account. It has to be noted that the distance between the minimum and the maximum of the relative error increases with the radius of the nanoparticle, and therefore the uncertainty of the relative permittivity cannot be neglected.

In this section, the analytical approach to the propagation of uncertainties has been used to discuss the respective weight of each uncertainty in that of scattering efficiency, calculated from the dipole approximation (Fig. 4). These first results have allowed us to examine the validity of the dipole approximation for $R=25\text{\hspace{0.17em}}\mathrm{nm}$ (Fig. 5). Considering the relative error between both models, a more general study of the influence of the uncertainty of the relative permittivity has revealed two different behaviors if the quantity of interest is either the value of the scattering efficiency or that of the resonant wavelength (Fig. 6).

Despite the usefulness of the analytical approach (dipole approximation), it suffers from limitations: first, it is only applicable to a simple analytical formula, and second, it does not help directly answer the question: *Assuming the three sources of uncertainty (*$R$, $\mathfrak{R}({\epsilon}_{r})$, *and* $\mathfrak{I}({\epsilon}_{r})$*), what are the uncertainties of both the spectral position* ${\lambda}_{r}$ *and the value of the resonance peak maximum?* Further analytical calculation would be necessary to address this problem. Considering a model for the dependence of the relative permittivity on the wavelength, finding the inverse function of the derivative of the scattering efficiency is not obvious. However, answering this question is of interest if the tuning of a plasmon polariton is sought [23], in particular for fluorescence enhancement [56,57]. The following statistical approach to the propagation of uncertainties through models can help answer this question.

#### D. Propagation of Uncertainties: an Example of Realization of the Statistical Approach (Illustration of Application #1)

The propagation of uncertainties through models has its origins in the repetition of observations in experiments. “The uncertainty of the result of a measurement reflects the lack of exact knowledge of the value of the measurand. The result of a measurement after correction for recognized systematic effects is still only an estimate of the value of the measurand because of the uncertainty arising from random effects and from imperfect correction of the result for systematic effects” [1]. The natural counterpart in numerical simulation involves the random generation of the inputs, which are subject to uncertainties. Therefore, the repeated executions of the numerical model for each set of inputs gives a sample of output results. The output sample can be considered to be the realizations of a random variable that involves both the characteristics of the model and the input uncertainties [2]. To illustrate the propagation of uncertainties and its applications, we use the full Mie theory and the dipole approximation as numerical models (Section 3.A). (Tables 4–6 in Appendix B specify the parameters, variables, outputs, and uncertainties, in relation to the theoretical names in Section 2.)

The first step is to choose the initial size of the sample, to ensure the reliability of the results. The accuracy (significant digits) of each output quantity is deduced from the precision of classical measurements in physics [6] and from *a priori* information.

- • According to the uncertainty of the average radius $R=25\text{\hspace{0.17em}}\mathrm{nm}$, determined in Section 3.B.1, the most probable interval of variation of $R$ is [22.5; 27.5] nm, and the minimum number of radii is 20 for a target accuracy of 0.25 nm.
- • From the results in Section 3.B.2, the resonant wavelength ${\lambda}_{r}$ is located around 520 nm. Therefore, the interval of wavelengths ${\lambda}_{0}$ can be limited to [510; 530] nm, and we deduce the minimum number of wavelengths ${\lambda}_{0}$: 200. This sampling enables us to locate ${\lambda}_{r}$ within $\pm 0.1\text{\hspace{0.17em}}\mathrm{nm}$.
- • At each wavelength, the minimum number of random permittivities is deduced from the min–max interval around 520 nm and from the acceptable tolerance. Setting this tolerance to 5% gives a minimum of 20 relative permittivities.

In this case, the production of an output sample of $M=400$ data points requires 80,000 evaluations of each model (400 input sets at each of the 200 wavelengths). The number of inputs has been chosen as a trade-off between the computing time and the significance of the results. The greater the number of random inputs, the more reliable the results are.

Nevertheless, this first evaluation does not ensure the accuracy of the mean value, of the standard deviation, or of the boundaries of the coverage intervals obtained from the random generation of the inputs. Fortunately, the accuracy of the outputs can be controlled by processing the results of repeated realizations of the procedure. For this, the results of the repeated realizations of the 400-output computation are processed according to the Monte Carlo adaptive procedure described in Ref. [2]. The algorithm of the procedure is given in Appendix A. The standard deviations of the averages, of the standard deviations, and of the boundaries of the coverage intervals are calculated along all Monte Carlo trials. The loop of realizations continues until the stopping criterion is satisfied:
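A simplified sketch of such an adaptive loop follows; the stopping criterion of Eq. (25) is replaced here by a relative tolerance on the dispersion of the trial averages, and a stand-in model ($R^3$, arbitrary) replaces the Mie computation so the example stays short:

```python
import numpy as np

rng = np.random.default_rng(1)

def one_trial(M=400):
    """One realization of the propagation: a block of M model outputs.
    A stand-in model (R**3, arbitrary) replaces the Mie computation."""
    R = rng.normal(25.0, 2.5, M)          # radii, N(25, 2.5)
    return R**3                           # placeholder output

# Simplified adaptive loop in the spirit of [2]: repeat 400-output trials
# until the standard error of the grand mean falls below a tolerance.
means, tol = [], 0.005                    # 0.5% relative tolerance
for h in range(1, 10_000):                # safety cap on the number of trials
    means.append(float(np.mean(one_trial())))
    if h >= 2:
        s_y = np.std(means, ddof=1) / np.sqrt(h)   # std of the grand mean
        if s_y / abs(np.mean(means)) < tol:
            break
mu = float(np.mean(means))
```

The full procedure of Ref. [2] monitors the standard deviations and the coverage-interval boundaries in the same loop, not only the averages.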

### 1. Coverage Intervals for the Whole Sample (Size 80,400)

The standard deviation over the Monte Carlo trials of the averages, of the standard deviations, and of the coverage interval boundaries provides indicators of the convergence of the method (see Algorithms 1–3 in Appendix A). The stopping criteria [${\mathbf{u}}_{r}$($\mathbf{o}$) in Eq. (25)] are ${u}_{r}({\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}))=1\%$ and ${u}_{r}({\lambda}_{r})=0.97\%$ (Table 1). These values are chosen according to a preliminary test showing the dispersion of each quantity and the required precision. The number of iterations required to reach the stopping criterion is $\mathrm{max}(h)=201$. The whole output data set $\mathcal{O}$ (size 80,400) is the merging of all samples $\mathbf{o}$ of size 400. The coverage intervals are retrieved from the sample by finding the boundaries of the smallest interval that contains 95% of the data [2] (Algorithm 3 in Appendix A). The first two significant digits of the standard deviation $\sigma $ limit the significant digits of the other results. The maximum ${\mathbf{s}}_{y}$ of the uncertainties of ${\lambda}_{r}$ decreases more rapidly than that of ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$. Hence, its relative value at the end of the loop is lower than 0.01%. Only 12 realizations are necessary to reach ${u}_{r}({\lambda}_{r})<0.1\%$. The coverage intervals with coverage probability $p=95\%$ for the results of the full Mie theory and the dipole approximation overlap, even if the average of the maximum of the scattering efficiency obtained from the dipole approximation is about 20% smaller. The blueshift of the resonance is about 1.6 nm for the dipole approximation. Consequently, the choice of the dipole approximation for modeling a nanoparticle of radius $R=25\text{\hspace{0.17em}}\mathrm{nm}$ should be based on an *a priori* acceptable error.
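The search for the smallest interval containing 95% of the data can be implemented generically; the sketch below is tested on a synthetic normal sample rather than on the article's output data:

```python
import numpy as np

def shortest_coverage_interval(sample, p=0.95):
    """Boundaries of the smallest interval containing a fraction p of the
    data (the principle of Algorithm 3, cf. [2])."""
    s = np.sort(np.asarray(sample))
    M = len(s)
    k = int(np.ceil(p * M))                   # points the interval must hold
    widths = s[k - 1:] - s[: M - k + 1]       # width of every candidate window
    i = int(np.argmin(widths))                # narrowest window wins
    return s[i], s[i + k - 1]

# Synthetic lambda_r-like sample of the merged size 80,400.
rng = np.random.default_rng(2)
sample = rng.normal(520.0, 1.0, 80_400)
ci_lo, ci_hi = shortest_coverage_interval(sample)
```

For a symmetric (e.g., normal) sample this interval matches the usual two-sided one; for skewed output distributions it is shorter, which is why it is preferred here.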

Over the visible spectrum, the analytical relative uncertainty $u({Q}_{\text{sca}}^{d})$ is between 40% and 42% [Eq. (20), Fig. 4]. The same values are obtained when calculating the ratio $\sigma /\mu $.

The large size of this merged sample prevents us from doing a statistical test of the ${H}_{0}$ hypothesis of a normal distribution of the outputs $\mathcal{O}=({\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}),{\lambda}_{r})$. However, a visual inspection of the quantile–quantile plot (Q–Q plot) also gives an indication of the normal distribution of the sample. Actually, data from a normal distribution are close to the first bisector of the Q–Q plot (dashed line in Figs. 7 and 8). Figures 7 and 8 show that the distribution of the values of the resonant wavelength may be normally distributed, unlike the maximum of the scattering efficiency.
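The visual Q–Q inspection can be quantified by correlating the sample order statistics with standard normal quantiles; values near 1 correspond to points lying close to the first bisector. This helper is illustrative (stdlib `statistics.NormalDist` supplies the inverse normal CDF), applied to synthetic normal and skewed samples:

```python
import numpy as np
from statistics import NormalDist

def qq_normal_corr(sample):
    """Correlation between sorted sample values and standard normal
    quantiles: near 1 when the Q-Q plot is close to a straight line."""
    s = np.sort(np.asarray(sample))
    M = len(s)
    probs = (np.arange(1, M + 1) - 0.5) / M            # plotting positions
    q = np.array([NormalDist().inv_cdf(p) for p in probs])
    return float(np.corrcoef(q, s)[0, 1])

rng = np.random.default_rng(3)
r_normal = qq_normal_corr(rng.normal(518.0, 0.9, 2000))   # lambda_r-like
r_skewed = qq_normal_corr(rng.exponential(1.0, 2000))     # skewed output
```

The lower correlation of the skewed sample mirrors the curvature seen for ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$ in Fig. 7 or 8.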

The mean value, the standard deviation, and the boundaries of the coverage interval can also be computed from a specific data set $\mathbf{o}$. In this case, the coverage interval associated with the level of confidence $p$ can be used, provided that the null hypothesis of a normal distribution of the data can be accepted.

### 2. Coverage Intervals for the Small Sample (Size 400)

A single realization of the method of the propagation of uncertainties is under consideration. This approach should be restricted to numerical models requiring time-consuming computation. The size of the sample is $M=400$, with random generation of

- • 20 values of the radius $R$ around the mean value $\mu =25\text{\hspace{0.17em}}\mathrm{nm}$ following a normal distribution with standard deviation $\sigma =2.5\text{\hspace{0.17em}}\mathrm{nm}$ [$\mathcal{N}(25,2.5)$];
- • 20 values of the relative permittivity of gold at each wavelength, following uniform distributions over the min–max intervals [Eqs. (18) and (19)].

Consequently, a set of 400 inputs is generated for this realization. It requires 80,000 evaluations of both models. For each wavelength ${\lambda}_{0}$, the scattering efficiencies are computed from the full Mie theory (${Q}_{\text{sca}}$) and from the dipole approximation (${Q}_{\text{sca}}^{d}$). Then the maximums of the scattering efficiencies (${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$, ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}^{d})$) are found for each set of inputs. Each maximum corresponds to a scattering efficiency resonance that occurs at ${\lambda}_{r}$. Figure 9 shows the scatter plot of a realization of the algorithm. The maximum values of the scattering efficiencies [${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})({\lambda}_{r})$ (black) and ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}^{d})({\lambda}_{r})$ (gray)] are plotted as functions of the resonant wavelengths (resp. ${\lambda}_{r}$ and ${\lambda}_{r}^{d}$). The dispersion of the results is due to the propagation of uncertainties of both the radius and relative permittivity. The two thick lines (indicated by arrows) come from the deterministic computation for $R\in [22.5;27.5]\text{\hspace{0.17em}}\mathrm{nm}$ [dipole approximation (gray) and full Mie theory (black)], by using the mean values of the relative permittivities. The scattering of results around these small segments shows the influence of the input uncertainties on the outputs. The maximum of the scattering efficiency can reach 140% of the maximum of the deterministic evaluation (for $R=27.5\text{\hspace{0.17em}}\mathrm{nm}$). The small value for the dispersion of ${\lambda}_{r}$ and ${\lambda}_{r}^{d}$ confirms the analytical analysis in Section 3.A. On the other hand, high variations of ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$ and ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}^{d})$ can be observed, leading to the conclusion of a high sensitivity of the value of the scattering efficiency to the inputs.

Table 2 shows the statistical results of 400 computations of ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$ (full Mie theory) and of ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}^{d})$ (dipole approximation). Table 3 gives the abscissa ${\lambda}_{r}$ and ${\lambda}_{r}^{d}$ of these maximums. Each column of Tables 2 and 3 is computed from normal distributions $\mathcal{N}(25,2.5)$ of radii and from uniform distributions of the relative permittivity of gold [Eqs. (18) and (19)]. Tables 2 and 3 give the mean value, the standard deviation, the results of normality tests [Lilliefors (SK), Pearson $({\chi}^{2})$, Anderson–Darling, Shapiro–Wilk tests of normality], coverage factors of the sample ${z}_{s}$ and of the mean value ${z}_{m}$, and coverage intervals. The effective value for degrees of freedom ${\nu}_{\text{eff}}$ is deduced from Eq. (8) with Eqs. (21)–(23):

In Table 2, the tests of a normal distribution of ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$ are all in agreement, based on the 201 Monte Carlo trials: all results cause the rejection of the normal distribution. The values of the standard deviation $\sigma $ can be compared to the estimated value from the analytical approach (Section 3.A). Over the visible spectrum, the analytical relative uncertainty $u({Q}_{\text{sca}}^{d})$ is between 40% and 42% [Eq. (20), Fig. 4]. About the same values are obtained by calculating the ratio $\sigma /\mu $.

Table 3 gives the results for the resonant wavelengths ${\lambda}_{r}$ and ${\lambda}_{r}^{d}$, respectively, computed from the full Mie theory and the dipole approximation. These wavelengths are the abscissas of ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$ and ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}^{d})$. The retrieved resonant wavelengths seem to follow a normal distribution, unlike the maximum of the scattering efficiency. Indeed, all tests of normality are in agreement (${p}_{\text{test}}>\alpha $, with $\alpha =5\%$ being the threshold generally considered), and the computed probabilities suggest that the hypothesis of a normal distribution of ${\lambda}_{r}$ is much more probable than that of ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$. The Anderson–Darling, Lilliefors, and Pearson tests are in line with the Shapiro–Wilk test. According to these results, the hypothesis of a normal distribution of the values of ${\lambda}_{r}$ and ${\lambda}_{r}^{d}$ cannot be rejected. Consequently, the coverage intervals having a 95% level of confidence can be calculated from Eqs. (5) and (7). The coverage intervals ${\mathit{CI}}_{95\%}^{c}$ are slightly wider than the coverage interval ${\mathit{CI}}_{95\%}$ of the sample in Table 1.

Even though the mean values and the standard deviations are of the same order of magnitude in Tables 1–3, this is not the case for the coverage intervals in Tables 1–3. The boundaries of the coverage intervals differ strongly for each model. These quantities are much more sensitive to the sample size than the mean value and the standard deviation. Therefore, to deduce accurate coverage intervals from the samples, the Monte Carlo method must be used. Moreover, it gives the dispersion of all quantities. Nevertheless, the overlap of the coverage intervals of the sample *CI* and ${\mathit{CI}}^{c}$ in each column of Tables 1–3 shows that the Mie theory and the dipole approximation are compatible for $R=25\text{\hspace{0.17em}}\mathrm{nm}$. Actually, both methods can give values of the same order of magnitude.

On the other hand, the coverage intervals of the mean ${\mathit{CI}}^{\mu}$ do not overlap in either column of Tables 2 and 3. This discrepancy between the results obtained from the full Mie theory and those from the dipole approximation is evidence that the dipole approximation is not sufficiently accurate to evaluate the maximum of the scattering efficiency over the visible spectrum for $R\approx 25\text{\hspace{0.17em}}\mathrm{nm}$, or the resonant wavelength. The models are very different in terms of metrology. This result can be related to the nonvanishing error in Fig. 6, even when the uncertainty of the radius is not taken into account.

### 3. Standard Expression of the Mean Value

All the following numerical results are written by keeping two significant digits for the standard uncertainties of the mean value, as recommended in [1]. The number of digits kept in the estimated values of the scattering efficiency or of the resonant wavelength is limited accordingly. The last digit should only be considered to be supplementary information on the order of magnitude of the estimated values. Please note that the propagation of experimental uncertainties of inputs through a numerical model allows examination of the overlap between coverage intervals obtained from experiments and models. This comparison can support the often found conclusion of the “excellent agreement between experiment and theory.” The results of the computations can be written as a result of measurement as follows:

where the number following the symbol $\pm $ is the numerical value of an expanded uncertainty, $U={z}_{m}.u$, determined from a standard uncertainty $u$ computed from 20 values of radii following a normal distribution and from 20 random relative permittivity values following a uniform distribution. The coverage factor ${z}_{m}=2$ is arbitrary, since the distribution of values is probably not normal.

On the other hand, the resonant wavelengths seem to follow a normal distribution. The result for the Mie theory can be written as follows:

where the number following the symbol $\pm $ is the numerical value of an expanded uncertainty, $U={z}_{m}.u$, determined from a standard uncertainty $u$ computed from 20 values of radii following a normal distribution and from 20 random relative permittivity values following a uniform distribution. The coverage factor ${z}_{m}=1.9742$ is based on the t-distribution for 168 degrees of freedom and defines an interval estimated to have a level of confidence of 95 percent.

The resonant wavelength for the dipole approximation is

where the number following the symbol $\pm $ is the numerical value of an expanded uncertainty, $U={z}_{m}.u$, determined from a standard uncertainty $u$ computed from 20 values of radii following a normal distribution and from 20 random relative permittivity values following a uniform distribution. The coverage factor ${z}_{m}=2.0003$ is based on the t-distribution for 60 degrees of freedom and defines an interval estimated to have a level of confidence of 95 percent.

These results show that the full Mie theory and the dipole approximation are not compatible in terms of metrology, with the estimated average and its dispersion being used to draw this conclusion. The behavior laws that can be deduced from the input and output samples may confirm this result.
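The rounding convention itself can be automated. The small helper below is hypothetical (not from the article): it keeps two significant digits of the expanded uncertainty $U={z}_{m}u$ and rounds the estimate to the same decimal place:

```python
from math import floor, log10

def express_result(value, u, z_m=2.0, unit="nm"):
    """Express value +/- U with U = z_m * u rounded to two significant
    digits and the estimate rounded to the same decimal place (cf. [1])."""
    U = z_m * u
    ndig = 1 - int(floor(log10(abs(U))))   # decimal place keeping 2 sig. digits
    return f"{round(value, ndig)} ± {round(U, ndig)} {unit}"

print(express_result(518.36, 0.42, z_m=1.9742))  # -> 518.36 ± 0.83 nm
```

A fixed-point format (e.g., `f"{value:.{ndig}f}"`) would additionally pad trailing zeros when the estimate rounds to fewer decimals than the uncertainty.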

#### E. Illustration of Application #2: Correlation and Behavior Laws

For this application of the propagation of uncertainties, the Pearson’s correlation coefficients $\rho $ [Eq. (2)] between the inputs [$(\mathfrak{R}({\epsilon}_{r}),\mathfrak{I}({\epsilon}_{r}))$, $R$] and the output of the full Mie theory [${\lambda}_{r}$, ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$] are calculated. Figure 10 shows the correlation matrix computed from the sample calculated from the full Mie theory. The elements on the diagonal of the matrix equal 1. They correspond to the autocorrelations.

### 1. Correlation Study

The most significant correlation coefficient is $|\rho (R,{\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}))|=0.9744$. The next ones are $|\rho (\mathfrak{R}({\epsilon}_{r}),{\lambda}_{r})|=0.5566$ and $|\rho (R,{\lambda}_{r})|=0.5185$. These last two values are not close enough to 1 to be significant. These results can be confirmed by visual inspection of all the scatter plots (not shown here). The dispersion of the correlation coefficients can be computed along the 201 realizations of the Monte Carlo procedure. The standard deviation of $|\rho (R,{\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}))|$ is 0.0013; that of $|\rho (\mathfrak{R}({\epsilon}_{r}),{\lambda}_{r})|$ is 0.020. The dispersion decreases as the absolute value of the correlation coefficient increases. These uncertainties govern the number of significant digits.
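The correlation matrix is a one-line numpy computation. In the sketch below, the Mie output is replaced by a monotone stand-in law $8.83{x}^{4}$ of the radius (the dipole coefficient quoted in the text) plus a small noise term, and the permittivity bounds are hypothetical, so only the structure of the computation, not the numerical values, matches the article:

```python
import numpy as np

rng = np.random.default_rng(4)
M = 400
R = rng.normal(25.0, 2.5, M)                 # input: radii, N(25, 2.5)
eps_re = rng.uniform(-4.3, -3.3, M)          # input: Re(eps_r), hypothetical bounds
# Stand-in output: dipole-like law of the radius plus noise; the real study
# uses max Qsca from the full Mie computation.
max_q = 8.83 * (2.0 * np.pi * R / 518.0)**4 + rng.normal(0.0, 0.005, M)

# Pearson correlation matrix between inputs and output [Eq. (2)]
corr = np.corrcoef(np.vstack([R, eps_re, max_q]))
```

Repeating this over the Monte Carlo trials and taking the standard deviation of each entry gives the dispersions quoted above.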

### 2. Visualization of the Behavior Law

Let us focus on the relationship between the radius $R$ and the mean value of ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$ computed over the 20 values of the relative permittivity of gold. Sorting the random values of $R$, and the values of the scattering efficiency maximum accordingly, reveals a deterministic law that links $R$ and $\mu ({\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}))$. Figure 11 shows the corresponding points. The plots of the scattering efficiencies are calculated from the mean values $\mu ({\epsilon}_{r})$ of the relative permittivities at each wavelength (see Section 3.B.2). These deterministic approaches use the full Mie theory (gray line) and the dipole approximation (black line). The corresponding uncertainties are also plotted as crosses.

The sample that comes from the propagation of uncertainties falls in the interval of uncertainties of the full Mie theory, but not in those of the dipole approximation. The maximum of the scattering efficiencies found by propagating the uncertainties is much greater than those calculated from the mean value of the 20 relative permittivities following a uniform distribution for each wavelength ${\lambda}_{0}$. Therefore, calculating a behavior law is possible, and the effective relative permittivities can be deduced.

### 3. Behavior Laws from Models

Two approximations have been proposed in Section 3.A [Eqs. (15) and (16) (the dipole approximation)]. The dipole approximation depends on ${x}^{4}$, and the second term of the series in Eq. (15) depends on ${x}^{6}$. Therefore, the fitting of the scatter plot in Fig. 11 is possible with these polynomials. Using the dipole approximation [Eq. (16): ${Q}_{\text{sca}}={\alpha}_{1}{x}^{4}$] for the fitting of the sorted statistical data leads to a new coefficient, obtained from Levenberg–Marquardt nonlinear regression [58,59]: ${\alpha}_{1}=13.630$ (uncertainty 0.016), instead of ${\alpha}_{1}=8.830$ [calculated using the mean value of the relative permittivities of gold ${\epsilon}_{r}=-3.789+2.310i$ at ${\lambda}_{0}={\lambda}_{r}=518\text{\hspace{0.17em}}\mathrm{nm}$, where $i$ is the imaginary unit (${i}^{2}=-1$)]. The correlation coefficient of the best-fitting curve using the ${\alpha}_{1}{x}^{4}$ model is $\rho =0.9997$ rather than $\rho =0.996$.
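Because both candidate laws are linear in their coefficients, the Levenberg–Marquardt step can be reproduced here with ordinary least squares. The sketch below recovers the coefficients from synthetic data generated on the $x$ range quoted in the text (the data and noise level are illustrative, not the article's sample); as in the article, the one-term fit absorbs part of the ${x}^{6}$ contribution and therefore yields a larger coefficient:

```python
import numpy as np

# Synthetic behavior-law data on x in [0.2149, 0.3699], generated from the
# two-term law Q = C1 x^4 + C2 x^6 plus noise (illustrative only).
rng = np.random.default_rng(5)
x = np.linspace(0.2149, 0.3699, 100)
q = 11.602 * x**4 + 18.90 * x**6 + rng.normal(0.0, 1e-4, x.size)

# Both laws are linear in their coefficients: ordinary least squares suffices.
a1 = np.linalg.lstsq(x[:, None]**4, q, rcond=None)[0][0]              # Q = a1 x^4
c1, c2 = np.linalg.lstsq(np.column_stack([x**4, x**6]), q, rcond=None)[0]
```

The recovered `c1` and `c2` match the generating values, while `a1` lands above `c1`, reproducing the qualitative gap between the one-term and two-term fits discussed in the text.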

The fitting of the output data by the power-series expansion of ${Q}_{\text{sca}}(x)$ about $x=0$ to order ${x}^{6}$ [Eq. (15)] gives ${Q}_{\text{sca}}={C}_{1}{x}^{4}+{C}_{2}{x}^{6}$, with ${C}_{1}=11.602$ and ${C}_{2}=18.90$ (uncertainties 0.013 and 0.14, respectively). The correlation coefficient is improved to $\rho =0.99995$. Therefore, this model is more accurate for fitting the data over $x\in [0.2149;0.3699]$ (the dipole approximation alone is not sufficient). The great advantage of using two terms of the series is the possibility of deducing the effective relative permittivity of the particles, ${\epsilon}_{r}^{\text{eff}}=-2.28+1.76i$:

### 4. Behavior Laws from Polynomials

The “blind” polynomial fitting is used if no available model enables the choice of the polynomials. It gives the following results, with correlation coefficient 0.99996:

#### F. Illustration of Application #3: Optimization

The optimization process only corresponds to a change of point of view in the analysis of the results of the propagation of uncertainties (Section 2.E). In the present case, optimization consists in selecting the sets of model inputs (the radii) that guarantee a sufficient level of scattering efficiency. According to the above results, the lack of accuracy of the dipole approximation leads us to use the samples obtained from the full Mie theory.

### 1. Processing the Whole Sample of Size 80,400

A tolerance ${t}_{O}=0.90$ gives $\mathrm{max}({\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}))=0.414$. The number of selected data is 78. The evaluated mean value of the selected radii is $\mu =24.7\text{\hspace{0.17em}}\mathrm{nm}$. The standard deviation is $\sigma =2.4\text{\hspace{0.17em}}\mathrm{nm}$, and therefore the relative uncertainty of the radius would only have to be reduced to 9.7%. The coverage interval ($p=95\%$) is ${\mathrm{CI}}_{95\%}^{c}=[20.3;28.9]\text{\hspace{0.17em}}\mathrm{nm}$, and ${s}_{y}=0.17\%$. Consequently, reducing the uncertainty of the radius of the produced particles is unnecessary if that of the relative permittivity remains unchanged.
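The selection step can be sketched as follows; the radii and efficiencies are a synthetic stand-in (a toy Gaussian response) for the paper's Mie sample, so the resulting statistics are not those of the text:

```python
import numpy as np

rng = np.random.default_rng(2)
radii = rng.normal(25.0, 2.5, 80_400)               # nm, illustrative inputs
q_max = 0.4 * np.exp(-((radii - 27.0) / 6.0) ** 2)  # toy efficiency vs radius

t_o = 0.90                                          # output tolerance
keep = q_max >= t_o * q_max.max()                   # retained input sets
mu = radii[keep].mean()                             # statistics of the
sigma = radii[keep].std(ddof=1)                     # selected radii
```

The mean, standard deviation, and coverage interval of `radii[keep]` then characterize the tolerated inputs.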

### 2. Comparison with Classical Optimization

We can compare these results with those obtained from evolutionary optimization, using the method described in [27]. The evolutionary optimization gives the best result over the search domain. For each wavelength ${\lambda}_{0}$, 30 random relative permittivities and 30 radii are generated (uniform distributions). These populations are recombined and mutated to produce an input set (size 100). The 30 best input sets are selected within the evolutionary loop until their relative dispersion falls below a target (${10}^{-6}$). For each wavelength ${\lambda}_{0}$, the best radius is 27.5 nm (the upper boundary of the search domain). Indeed, ${Q}_{\text{sca}}(R)$ is an increasing function of the radius. The best result is $\mathrm{max}({\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}))=0.1764$ at ${\lambda}_{r}=519.9\text{\hspace{0.17em}}\mathrm{nm}$. Moreover, the real part and the imaginary part of the relative permittivity of gold are found, respectively, at the maximum and the minimum of their search interval at ${\lambda}_{0}={\lambda}_{r}$: $\mathfrak{R}({\epsilon}_{r})=-3.4242$ and $\mathfrak{I}({\epsilon}_{r})=1.7702$. The imaginary part of the relative permittivity contributes to the dimming and widening of the resonance peak of the scattering efficiency, and therefore should be as small as possible. The real part tends toward the pole of the scattering efficiency, ${\epsilon}_{r}=-2$. This result is also supported by an optimization using a wider search domain for the relative permittivity, $[-4;0]$ for the real part and $[0;2]$ for the imaginary part. Indeed, the optimum is found for $\mathfrak{R}({\epsilon}_{r})=-2.1887$, $\mathfrak{I}({\epsilon}_{r})<{10}^{-6}$, and $R=22.5\text{\hspace{0.17em}}\mathrm{nm}$. In this case, the smaller the radius, the better the dipole approximation. The first terms of the series in Eq. (15) are therefore dominant.
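The evolutionary loop described above can be sketched for a one-dimensional toy objective. The objective `f`, mutation strength, and recombination rule below are illustrative assumptions (the paper optimizes the Mie scattering efficiency over both permittivity and radius):

```python
import numpy as np

rng = np.random.default_rng(3)

def f(r):
    # Toy objective: increasing on the search domain [22.5, 27.5] nm,
    # mimicking Q_sca(R) being an increasing function of the radius.
    return 0.4 * np.exp(-((r - 30.0) / 8.0) ** 2)

pop = rng.uniform(22.5, 27.5, 30)          # initial population of radii (nm)
for _ in range(1000):                      # iteration guard
    parents = rng.choice(pop, (100, 2))    # recombination: parent pairs
    children = parents.mean(axis=1) + rng.normal(0.0, 0.1, 100)  # + mutation
    children = np.clip(children, 22.5, 27.5)        # stay inside the domain
    pop = children[np.argsort(f(children))[-30:]]   # keep the 30 best
    if pop.std() / pop.mean() < 1e-6:      # relative-dispersion stop target
        break

r_opt = pop.mean()                         # drifts to the upper boundary
```

As in the text, the optimum sticks to the upper boundary of the search domain because the objective increases with the radius.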

Even if the classical optimization is of interest for explaining the physical behavior of the light scattering by gold nanoparticles, finding the global optimum does not allow us to get intervals of suitable inputs useful for experimental tuning. In contrast to this global optimization strategy, the proposed method gives a set of acceptable inputs, by relaxing the constraint on the searched optimum. Indeed, intervals can be deduced, and therefore the improvement of the experiments is possible by indicating the critical parameters and their tolerated uncertainty.

Saving computing time is desirable, and therefore it is worth comparing the results obtained from the whole sample with those obtained from a single realization.

### 3. Processing a Sample of Size 400

As an example, we consider the first sample (400 values), for which $\mathrm{max}({\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}}))=0.188$. The number of selected sets of inputs is 25. The mean value of the selected radii is $\mu =24.7\text{\hspace{0.17em}}\mathrm{nm}$, with a standard deviation $\sigma =1.9\text{\hspace{0.17em}}\mathrm{nm}$. The relative dispersion of the radii is therefore 8%. The coverage interval ($p=95\%$) is ${\mathrm{CI}}_{95\%}^{c}=[21.3;28.0]\text{\hspace{0.17em}}\mathrm{nm}$. A single realization therefore gives the same order of magnitude for all quantities except the maximum of the scattering efficiency and the boundaries of the coverage interval. Moreover, the number of significant digits of these results cannot be determined if the Monte Carlo scheme is not used.

After processing each of the 201 samples of 400 values, the minima of the mean values, the standard deviations, and the boundaries of the coverage interval for $R$ are 26.4 nm, $1.4\times {10}^{-5}\text{\hspace{0.17em}}\mathrm{nm}$, 25.9 nm, and 26.5 nm, respectively. The corresponding maxima are 33.6 nm, 0.314 nm, 33.6 nm, and 33.6 nm. Therefore, the results of processing a single sample of size 400 should be considered as indicative rather than accurate.

The random data obtained from the propagation of uncertainties have been used for optimization. Therefore, in this application of the method of propagation of uncertainties, the challenge was to recover the sets of inputs leading to a sufficient scattering efficiency. However, the target also could be the recovery of inputs that lead to a short interval around a given target value (output). This application of the propagation of uncertainties through a model is actually part of the resolution methods of the inverse problems.

#### G. Illustration of Application #4: Inverse Problem Resolution

For decades, nanoparticles have been used to enhance the signal of Raman spectroscopy [34,60–65] and fluorescence [56,66–72]. For these applications, even if the excitation wavelength is matched to an electronic transition of the investigated molecule or crystal, the detected signal is weak. Therefore, the resonance of the interaction of light with nanoparticles can be used to enhance the detected signal. In this case, the challenge consists in finding the radii $R$ of nanoparticles that produce a resonance at a specific wavelength ${\lambda}_{r}$. Actually, the variable $R$ should be deduced from a constrained output of the model, ${\lambda}_{r}$, but the inverse function of ${\mathrm{max}}_{{\lambda}_{0}}({Q}_{\text{sca}})$ cannot be calculated analytically. Fortunately, the data obtained in Section 3.D are pairs of values $(R,{\lambda}_{r})$. Therefore, selecting the nanoparticles whose resonance is located within a narrow range of wavelengths around the target is possible [73].

As an illustration, let us search for the input sets that satisfy ${\lambda}_{r}=518\pm 0.5\text{\hspace{0.17em}}\mathrm{nm}$. The tolerance is therefore ${t}_{I}=0.5/518\approx 0.096\%$ [Eq. (11)]. The results are the following:

- **Sample of size 80,400:** The size of the selected sample ${\mathcal{O}}_{I}$ is 13,722. The mean value is $\mu =24.9\text{\hspace{0.17em}}\mathrm{nm}$, and the standard deviation is $\sigma =2.5\text{\hspace{0.17em}}\mathrm{nm}$. The coverage interval is ${\mathrm{CI}}_{95\%}^{c}=[15.6;29.1]\text{\hspace{0.17em}}\mathrm{nm}$. The uncertainty over the 201 realizations is 0.18 nm.
- **Sample of size 400** (see Tables 2 and 3): The number of selected variables is 72. The mean radius is $\mu =24.9\text{\hspace{0.17em}}\mathrm{nm}$, and the standard deviation is $\sigma =2.4\text{\hspace{0.17em}}\mathrm{nm}$. The coverage interval is ${\mathrm{CI}}_{95\%}^{c}=[20.0;29.8]\text{\hspace{0.17em}}\mathrm{nm}$. Consequently, the mean and the dispersion of the solutions retrieved from this single sample are meaningful, unlike the coverage interval. The small size of the second sample explains the discrepancy between the coverage intervals.
- **All samples of size 400:** The minima of the mean values, the standard deviations, and the boundaries of the coverage interval for $R$ are 23.4 nm, 3.81 nm, 15.6 nm, and 26.5 nm, respectively. The corresponding maxima are 26.8 nm, 3.81 nm, 23.8 nm, and 33.6 nm. Again, the repeated realizations of the Monte Carlo scheme require more computing time but improve the results of the inverse problem.
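The selection underlying these numbers can be sketched as below. The pairs $(R,{\lambda}_{r})$ are synthetic stand-ins (a toy linear resonance law with noise is our assumption), so the counts and statistics differ from the tables:

```python
import numpy as np

rng = np.random.default_rng(4)
radii = rng.normal(25.0, 2.5, 80_400)                           # nm, inputs
lam_r = 505.0 + 0.5 * radii + rng.normal(0.0, 2.0, radii.size)  # toy law (nm)

target = 518.0                                 # target resonance wavelength (nm)
t_i = 0.5 / 518.0                              # input tolerance, as in Eq. (11)
keep = np.abs(lam_r - target) <= t_i * target  # |lambda_r - target| <= 0.5 nm
mu = radii[keep].mean()
sigma = radii[keep].std(ddof=1)
```

No analytical inverse of the model is needed: the stored $(R,{\lambda}_{r})$ pairs are simply filtered around the target.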

## 4. CONCLUSION

The analytical and statistical methods of the propagation of uncertainties were presented. We illustrated the methods through the problem of the scattering of light by a gold nanoparticle. We discussed the validity of the dipole approximation for a gold nanosphere 25 nm in radius. The comparison of models, the applications in optimization and in inverse problems, and the determination of behavior laws from the propagation of uncertainties through a model have been presented. The cited references and the suggested improvements can help the reader go further and find other illustrations. The purpose was to give tools to make the most of repeated computations from randomly generated inputs of numerical models. The proposed applications can be applied to any physical model or to repeated measurements. The opportunity to address the metrological problem of the propagation of uncertainties may also inform the comparison of experimental and theoretical results through similar approaches [1].

## APPENDIX A: ALGORITHM USED TO DETERMINE THE SIZE OF THE SAMPLE

To make the connection with the adaptive Monte Carlo procedure of Ref. [2] easy to see, we use the same names as much as possible. The coverage interval ${\mathrm{CI}}_{p\%}$ for the output quantity is defined in Section 7.7 of Ref. [2]: it is the shortest interval that contains $p\%$ of the output values, $p\%$ being the coverage probability in percent. The basic idea of this algorithm is to generate $h$ Monte Carlo trials of the $M$ outputs $\mathbf{o}$ until the dispersions of the mean value, the standard deviation, and the boundaries of the coverage interval fall within a stop criterion. This stop criterion is defined as the relative tolerance of the output data.

The output $\mathcal{O}$ of the procedure Sample is the sample of size $M\times h$ that fulfills the convergence criterion: the maximum of the relative standard deviations of the average, the standard deviation, and both boundaries of the coverage interval is lower than the relative tolerated uncertainty of the outputs (${\mathbf{u}}_{r}(\mathbf{o})$). The mean value $\mu (\mathcal{O})$, the standard deviation $\sigma (\mathcal{O})$, and the lower and upper boundaries of the coverage interval ${\mathrm{CI}}_{p\%}\leftarrow [{y}_{\text{low}}(\mathcal{O});{y}_{\text{high}}(\mathcal{O})]$ are deduced. The relative uncertainty of each output limits the number of significant digits of these data. The procedure Sample (Algorithm 1) uses the function POU (Algorithm 2) to compute the mean value, the standard deviation, and the coverage interval (function ${\mathrm{CI}}_{p\%}^{c}$; Algorithm 3), considering a confidence level $p$.
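A minimal sketch of Algorithms 1–3 follows, assuming our own names (`sample`, `ci_shortest`) and an illustrative Gaussian output generator; the stop test uses the dispersion of the per-trial statistics divided by $\sqrt{h}$, in the spirit of the adaptive procedure of Ref. [2]:

```python
import numpy as np

def ci_shortest(o, p=0.95):
    """Shortest interval containing a fraction p of the outputs (cf. Section 7.7 of [2])."""
    o = np.sort(o)
    m = int(np.ceil(p * o.size))              # number of points inside the interval
    widths = o[m - 1:] - o[: o.size - m + 1]  # widths of all candidate intervals
    k = int(np.argmin(widths))
    return o[k], o[k + m - 1]

def sample(draw_outputs, M=400, u_r=5e-3, p=0.95, h_max=200):
    """Procedure Sample: stack trials of size M until the statistics stabilize."""
    stats, pool = [], []
    for _ in range(h_max):                    # h_max guards against non-convergence
        o = draw_outputs(M)                   # one Monte Carlo trial of M outputs
        pool.append(o)
        lo, hi = ci_shortest(o, p)
        stats.append((o.mean(), o.std(ddof=1), lo, hi))
        if len(stats) >= 10:                  # require a few trials before testing
            s = np.asarray(stats)
            rel = s.std(axis=0, ddof=1) / np.sqrt(len(stats)) / np.abs(s.mean(axis=0))
            if rel.max() < u_r:               # all four statistics within tolerance
                break
    return np.concatenate(pool)

rng = np.random.default_rng(5)
out = sample(lambda n: rng.normal(0.3, 0.01, n))  # illustrative output model
```

The number of retained trials $h$ depends on the tolerance `u_r`; applying `ci_shortest` to the returned sample then gives the reported coverage interval.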

## APPENDIX B: GLOSSARY

The notations and quantities used in this paper are summarized in Tables 4–7.

## REFERENCES

**1. **Working Group 1 of the Joint Committee for Guides in Metrology (JCGM/WG 1), “Evaluation of measurement data—Guide to the expression of uncertainty in measurement,” 1st ed., Technical Report JCGM 100:2008 (GUM 1995 with minor corrections) (2008).

**2. **Working Group 1 of the Joint Committee for Guides in Metrology (JCGM/WG 1), “Evaluation of measurement data—Supplement 1 to the ‘Guide to the expression of uncertainty in measurement’—Propagation of distributions using a Monte Carlo method,” 1st ed., Technical Report JCGM 101:2008 (2008).

**3. **Working Group 1 of the Joint Committee for Guides in Metrology (JCGM/WG 1), “Evaluation of measurement data—Supplement 2 to the ‘Guide to the expression of uncertainty in measurement’—Extension to any number of output quantities,” 1st ed., Technical Report JCGM 102:2011 (2011).

**4. **B. N. Taylor and C. E. Kuyatt, *Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results* (National Institute of Standards and Technology, 1994) (supersedes NIST Technical Note 1297, January 1993).

**5. **L. Kirkup, “A guide to GUM,” Eur. J. Phys. **23**, 483–487 (2002). [CrossRef]

**6. **D. Barchiesi and T. Grosges, “Inverse problem method: a complementary way for the design and the characterization of nanostructures,” AASCIT Commun. **2**, 296–300 (2015).

**7. **J. W. Eaton, D. Bateman, S. Hauberg, and R. Wehbring, *GNU Octave Version 4.2.0 Manual: A High-Level Interactive Language for Numerical Computations* (2016).

**8. **A. A. Borovkov, *Mathematical Statistics* (Gordon and Breach Science, 1998).

**9. **H. W. Lilliefors, “On the Kolmogorov-Smirnov test for normality with mean and variance unknown,” J. Am. Stat. Assoc. **62**, 399–402 (1967). [CrossRef]

**10. **A. Kolmogoroff, “Sulla determinazione empirica di una legge di distribuzione,” G. Ist. Ital. Attuari **4**, 83–91 (1933).

**11. **T. W. Anderson and D. A. Darling, “A test of goodness of fit,” J. Am. Stat. Assoc. **49**, 765–769 (1954). [CrossRef]

**12. **S. S. Shapiro and M. B. Wilk, “An analysis of variance test for normality (complete samples),” Biometrika **52**, 591–611 (1965). [CrossRef]

**13. **S. Kotz and N. L. Johnson, eds., *Breakthroughs in Statistics: Methodology and Distribution* (Springer, 1992).

**14. **G. E. P. Box, W. G. Hunter, and J. S. Hunter, *Statistics for Experimenters* (Wiley, 1978).

**15. **S. Kessentini and D. Barchiesi, *Nanostructured Biosensors: Influence of Adhesion Layer, Roughness and Size on the LSPR: A Parametric Study* (INTECH Open Access, 2013), Chap. 12, pp. 311–330.

**16. **H. Nakamura, T. Sato, H. Kambe, K. Sawada, and T. Saiki, “Design and optimization of tapered structure of near-field probe based on finite-difference time-domain simulation,” J. Microsc. **202**, 50–52 (2001). [CrossRef]

**17. **N. Harris, M. J. Ford, and M. B. Cortie, “Optimization of plasmonic heating by gold nanospheres and nanoshells,” J. Phys. Chem. B **110**, 10701–10707 (2006). [CrossRef]

**18. **D. Barchiesi, *Numerical Optimization of Plasmonic Biosensors* (INTECH Open Access, 2011), Chap. 5, pp. 105–126.

**19. **D. H. Wolpert and W. Macready, “No free lunch theorems for optimization,” IEEE Trans. Evol. Comput. **1**, 67–82 (1997). [CrossRef]

**20. **S. Kessentini, D. Barchiesi, T. Grosges, and M. Lamy de la Chapelle, “Selective and collaborative optimization methods for plasmonics: a comparison,” PIERS Online **7**, 291–295 (2011).

**21. **S. Kessentini and D. Barchiesi, “Particle swarm optimization with adaptive inertia weight,” Int. J. Mach. Learn. Comput. **5**, 368–373 (2015). [CrossRef]

**22. **D. Barchiesi, “Adaptive non-uniform, hyper-elitist evolutionary method for the optimization of plasmonic biosensors,” in *CIE 2009 International Conference Computers & Industrial Engineering* (2009), pp. 542–547.

**23. **T. Grosges, D. Barchiesi, T. Toury, and G. Gréhan, “Design of nanostructures for imaging and biomedical applications by plasmonic optimization,” Opt. Lett. **33**, 2812–2814 (2008). [CrossRef]

**24. **D. Barchiesi, “Surface Plasmon Resonance Biosensors: Model and Optimization,” in *Nanoantenna: Plasmon-Enhanced Spectroscopies for Biotechnological Applications*, M. L. De la Chapelle and A. Pucci, eds. (Pan Stanford, 2013), Chap. 11, pp. 333–357.

**25. **D. Barchiesi, “Numerical retrieval of thin aluminium layer properties from SPR experimental data,” Opt. Express **20**, 9064–9078 (2012). [CrossRef]

**26. **D. Barchiesi, “The Lycurgus cup: inverse problem using photographs for characterization of matter,” J. Opt. Soc. Am. A **32**, 1544–1555 (2015). [CrossRef]

**27. **D. Macías, A. Vial, and D. Barchiesi, “Application of evolution strategies for the solution of an inverse problem in near-field optics,” J. Opt. Soc. Am. A **21**, 1465–1471 (2004). [CrossRef]

**28. **D. Macias and D. Barchiesi, “Identification of unknown experimental parameters from noisy apertureless scanning near-field optical microscope data with an evolutionary procedure,” Opt. Lett. **30**, 2557–2559 (2005). [CrossRef]

**29. **A. Tarantola, *Inverse Problem Theory and Methods for Model Parameter Estimation* (Society for Industrial and Applied Mathematics, 2005).

**30. **J. Salvi and D. Barchiesi, “Measurement of thicknesses and optical properties of thin films from surface plasmon resonance (SPR),” Appl. Phys. A **115**, 245–255 (2014). [CrossRef]

**31. **D. Barchiesi, S. Kessentini, N. Guillot, M. Lamy de la Chapelle, and T. Grosges, “Localized surface plasmon resonance in arrays of nano-gold cylinders: inverse problem and propagation of uncertainties,” Opt. Express **21**, 2245–2262 (2013). [CrossRef]

**32. **G. Mie, “Beiträge zur Optik trüber Medien, speziell kolloidaler Metallösungen,” Ann. Phys. **330**, 377–445 (1908). [CrossRef]

**33. **C. F. Bohren and D. R. Huffman, *Absorption and Scattering of Light by Small Particles* (Wiley, 1998).

**34. **S. Link and M. A. El-Sayed, “Shape and size dependence of radiative, non-radiative and photothermal properties of gold nanocrystals,” Int. Rev. Phys. Chem. **19**, 409–453 (2000). [CrossRef]

**35. **D. Barchiesi and T. Grosges, “Control of the applicability of the dipole approximation for gold nanoparticles,” Adv. Studies Biol. **7**, 403–412 (2015). [CrossRef]

**36. **D. Barchiesi and T. Grosges, “Short note on the dipole approximation for electric field enhancement by small metallic nanoparticles,” J. Opt. **17**, 114003 (2015). [CrossRef]

**37. **R. Bienert, F. Emmerling, and A. F. Thünemann, “The size distribution of ‘gold standard’ nanoparticles,” Anal. Bioanal. Chem. **395**, 1651–1660 (2009). [CrossRef]

**38. **O. Peña, L. Rodríguez-Fernández, V. Rodríguez-Iglesias, G. Kellermann, A. Crespo-Sosa, J. C. Cheang-Wong, H. G. Silva-Pereyra, J. Arenas-Alatorre, and A. Oliver, “Determination of the size distribution of metallic nanoparticles by optical extinction spectroscopy,” Appl. Opt. **48**, 566–572 (2009).

**39. **R. P. Carney, J. Y. Kim, H. Qian, R. Jin, H. Mehenni, F. Stellacci, and O. M. Bakr, “Determination of nanoparticle size distribution together with density or molecular weight by 2D analytical ultracentrifugation,” Nat. Commun. **2**, 335 (2011). [CrossRef]

**40. **W. Haiss, N. T. K. Thanh, J. Aveyard, and D. G. Fernig, “Determination of size and concentration of gold nanoparticles from UV-VIS spectra,” Anal. Chem. **79**, 4215–4221 (2007). [CrossRef]

**41. **Y. Mori, M. Furukawa, T. Hayashi, and K. Nakamura, “Size distribution of gold nanoparticles used by small angle x-ray scattering,” Part. Sci. Technol. **24**, 97–103 (2006). [CrossRef]

**42. **H. N. Verma, P. Singh, and R. M. Chavan, “Gold nanoparticle: synthesis and characterization,” Veterinary World **7**, 72–77 (2014). [CrossRef]

**43. **P. Stoller, V. Jacobsen, and V. Sandoghdar, “Measurement of the complex dielectric constant of a single gold nanoparticle,” Opt. Lett. **31**, 2474–2476 (2006). [CrossRef]

**44. **http://refractiveindex.info, 2016.

**45. **L. Gao, F. Lemarchand, and M. Lequime, “Comparison of different dispersion models for single layer optical thin film index determination,” Thin Solid Films **520**, 501–509 (2011). [CrossRef]

**46. **P. B. Johnson and R. W. Christy, “Optical constants of the noble metals,” Phys. Rev. B **6**, 4370–4379 (1972). [CrossRef]

**47. **E. D. Palik, *Handbook of Optical Constants* (Academic, 1985).

**48. **R. L. Olmon, B. Slovick, T. W. Johnson, D. Shelton, S.-H. Oh, G. D. Boreman, and M. B. Raschke, “Optical dielectric function of gold,” Phys. Rev. B **86**, 235147 (2012). [CrossRef]

**49. **S. Babar and J. H. Weaver, “Optical constants of Cu, Ag, and Au revisited,” Appl. Opt. **54**, 477–481 (2015). [CrossRef]

**50. **K. M. McPeak, S. V. Jayanti, S. J. P. Kress, S. Meyer, S. Iotti, A. Rossinelli, and D. J. Norris, “Plasmonic films can easily be better: rules and recipes,” ACS Photon. **2**, 326–333 (2015). [CrossRef]

**51. **A. D. Rakić, A. B. Djurišić, J. M. Elazar, and M. L. Majewski, “Optical properties of metallic films for vertical-cavity optoelectronic devices,” Appl. Opt. **37**, 5271–5283 (1998). [CrossRef]

**52. **A. Vial, A.-S. Grimault, D. Macias, D. Barchiesi, and M. Lamy de la Chapelle, “Improved analytical fit of gold dispersion: application to the modeling of extinction spectra with a finite-difference time-domain method,” Phys. Rev. B **71**, 085416 (2005). [CrossRef]

**53. **D. Barchiesi and T. Grosges, “Errata: fitting the optical constants of gold, silver, chromium, titanium, and aluminum in the visible bandwidth,” J. Nanophoton. **8**, 089996 (2015). [CrossRef]

**54. **D. Barchiesi, E. Kremer, V. P. Mai, and T. Grosges, “A Poincaré’s approach for plasmonics: the plasmon localization,” J. Microsc. **229**, 525–532 (2008). [CrossRef]

**55. **D. Barchiesi and A. Otto, “Excitations of surface plasmon polaritons by attenuated total reflection, revisited,” Riv. Nuovo Cimento **36**, 173–209 (2013).

**56. **T. Pagnot, D. Barchiesi, D. van Labeke, and C. Pieralli, “Use of SNOM architecture to study fluorescence and energy transfer near a metal,” Opt. Lett. **22**, 120–122 (1997). [CrossRef]

**57. **G. Parent, D. Van Labeke, and D. Barchiesi, “Fluorescence lifetime of a molecule near a corrugated interface: application to near-field microscopy,” J. Opt. Soc. Am. A **16**, 896–908 (1999). [CrossRef]

**58. **K. Levenberg, “A method for the solution of certain problems in least squares,” Quart. Appl. Math. **2**, 164–168 (1944). [CrossRef]

**59. **D. W. Marquardt, “An algorithm for least-squares estimation of nonlinear parameters,” J. Soc. Ind. Appl. Math. **11**, 431–441 (1963). [CrossRef]

**60. **M. Futamata, “Surface-plasmon-polariton-enhanced Raman scattering from self-assembled monolayers of *p*-nitrothiophenol and *p*-aminothiophenol on silver,” J. Phys. Chem. **99**, 11901–11908 (1995). [CrossRef]

**61. **N. Félidj, J. Aubard, G. Lévi, J. R. Krenn, M. Salerno, G. Schider, B. Lamprecht, A. Leitner, and F. R. Aussenegg, “Controlling the optical response of regular arrays of gold particles for surface-enhanced Raman scattering,” Phys. Rev. B **65**, 075419 (2002). [CrossRef]

**62. **N. Félidj, J. Aubard, G. Lévi, J. R. Krenn, A. Hohenau, G. Schider, A. Leitner, and F. R. Aussenegg, “Optimized surface-enhanced Raman scattering on gold nanoparticle arrays,” Appl. Phys. Lett. **82**, 3095–3097 (2003). [CrossRef]

**63. **J. Grand, M. L. de la Chapelle, J.-L. Bijeon, P.-M. Adam, A. Vial, and P. Royer, “Role of localized surface plasmons in surface-enhanced Raman scattering of shape-controlled metallic particles in regular arrays,” Phys. Rev. B **72**, 033407 (2005). [CrossRef]

**64. **L. Billot, M. L. de la Chapelle, A. S. Grimault, A. Vial, D. Barchiesi, J.-L. Bijeon, P.-M. Adam, and P. Royer, “Surface enhanced Raman scattering on gold nanowire arrays: evidence of strong multipolar surface plasmon resonance enhancement,” Chem. Phys. Lett. **422**, 303–307 (2006). [CrossRef]

**65. **G. Sun, J. B. Khurgin, and D. P. Tsai, “Comparative analysis of photoluminescence and Raman enhancement by metal nanoparticles,” Opt. Lett. **37**, 1583–1585 (2012). [CrossRef]

**66. **H. Chew, P. J. McNulty, and M. Kerker, “Model for Raman and fluorescent scattering by molecules embedded in small particles,” Phys. Rev. A **13**, 396–404 (1976). [CrossRef]

**67. **P. J. McNulty, S. D. Druger, M. Kerker, and H. Chew, “Fluorescent scattering by anisotropic molecules embedded in small particles,” Appl. Opt. **18**, 1484–1488 (1979). [CrossRef]

**68. **T. Hayakawa, S. T. Selvan, and M. Nogami, “Field enhancement effect of small Ag particles on the fluorescence from ${\text{Eu}}^{3+}$-doped ${\text{SiO}}_{2}$ glass,” Appl. Phys. Lett. **74**, 1513–1515 (1999). [CrossRef]

**69. **T. Pagnot, D. Barchiesi, and G. Tribillon, “Energy transfer from fluorescent thin films to metals in near-field optical microscopy: comparison between time-resolved and intensity measurements,” Appl. Phys. Lett. **75**, 4207–4209 (1999). [CrossRef]

**70. **G. Parent, D. Van Labeke, and D. Barchiesi, “Surface imaging in near-field optical microscopy by using the fluorescence decay rate: a theoretical study,” J. Microsc. **194**, 281–290 (1999). [CrossRef]

**71. **C. Baffou, M. Kreuzer, F. Kulzer, and R. Quidant, “Temperature mapping near plasmonic nanostructures using fluorescence polarization anisotropy,” Opt. Express **17**, 3291–3298 (2009). [CrossRef]

**72. **V. V. Datsyuk and O. M. Tovkach, “Optical properties of a metal nanosphere with spatially dispersive permittivity,” J. Opt. Soc. Am. B **28**, 1224–1230 (2011). [CrossRef]

**73. **D. Barchiesi, “Handling uncertainties of models inputs in inverse problem: the U-discrete PSO approach,” in *International Conference on Control, Decision and Information Technologies (CoDIT)*, I. Kacem, P. Laroche, and Z. Róka, eds. (IEEE, 2014), pp. 747–752.