## Abstract

An innovative method for the automatic design of optical systems is presented and verified. The proposed method is based on a multi-objective evolutionary memetic optimization algorithm. The multi-objective approach simultaneously, but separately, addresses the image quality, tolerance, and complexity of the system. The memetic technique breaks down the search for optical designs into three parts, or phases: optical glass selection, exploration, and exploitation. The optical glass selection phase defines the most appropriate set of glasses for the system under design. It limits the available glasses from hundreds to just a few, drastically reducing the design space and significantly increasing the efficiency of the automatic design method. The exploration phase is based on an evolutionary algorithm (EA), more specifically, on a problem-tailored generalized extremal optimization (GEO) algorithm, named Optical GEO (O-GEO). The new EA incorporates many features customized for lens design, such as optical system codification and diversity operators. The trade-off systems found in the exploration phase are refined by a local search, based on the damped least squares method, in the exploitation phase. As a result, the method returns a set of trade-off solutions, generating a Pareto front. Our method delivers alternative and useful insights into the compromise solutions in a lens design problem. The efficiency of the proposed method is verified through real-world examples, showing excellent results for the tested problems.

© 2016 Optical Society of America

## 1. Optimization methods in lens design

A lens system can be represented as a point $({x}_{1},{x}_{2},\cdots ,{x}_{n})$ in an n-dimensional construction parameter space. The most common construction parameters, or design variables, in lens design are (i) the radius of curvature of each optical surface (lenses or mirrors); (ii) the central thicknesses of the lenses; (iii) the air spaces between the optical elements, and (iv) the optical glass type used for the lens construction.

Each point in the construction parameter space is associated with one objective function (OF) value, which is also called a merit function (MF) in optical design. The OF value defines system performance. The optimization process of an optical system consists of moving the initial point in the parameter space to another position that improves the MF, which, in our case, means decreasing its value. Repeating this process, the designer may eventually reach a satisfactory solution [1].

The way the solution shifts in the parameter space during the optimization process depends on the optimization method used. Generally speaking, these methods can be divided into two families: deterministic and metaheuristic methods.

Deterministic, or classical, optimization methods are analytical, direct search-for-optimum methods, generally driven by OF gradient information. Therefore, deterministic methods can only be directly applied to continuous and differentiable functions [2–4].

Metaheuristic methods, on the other hand, are non-deterministic, generally stochastic, optimization methods that make few assumptions about the optimization problem to be solved. Metaheuristics are normally used for global search purposes, but a globally optimal solution is not guaranteed. Natural, organic processes inspire many of these methods. Metaheuristics are recommended for highly nonlinear, multimodal, discontinuous, and multivariate problems. Some examples of methods of this class are: simulated annealing, tabu search, evolutionary optimization, ant colonies, particle swarms, scatter search, and immune systems [5–8].

#### 1.1. Evolutionary optimization methods and their use in optical design

Global methods of optimization in lens design have been studied for more than three decades [9], and are still an active topic of study with very recent publications [10]. Over the years, different methods have been proposed in this field [11–13].

By far, evolutionary optimization (EO) is the most used metaheuristic method of global search in lens design, and is also the class of algorithm selected for the exploration phase of the automatic design method presented herein. Therefore, we restrict our discussion to this class of metaheuristic algorithms.

In the last decades, EO has been used for problem solving in several fields of knowledge. These algorithms attempt to mimic the evolution of living organisms, using the natural selection theory proposed by Charles Darwin and the heritage of characteristics between generations of individuals through the mechanisms of genetics [14].

The basic principle of EA lies in the fact that environmental pressure over the individuals of a species causes natural selection, which increases the quality of the population in subsequent generations. The basic elements of EA are (i) the process of selection of individuals based on their fitness in the environment and (ii) the diversity operators (in general, recombination and mutation) responsible for generating new individuals [15].

When these algorithms are applied to optimization problems, the individuals in the population are, in fact, potential solutions for the problem, and the individual fitness is determined by the merit function of the solution it represents.

In the specific case of optical design, each individual carries, in a codified form, the information of each variable of the optical system, such as the radius of curvature, the air space, glass thickness, and the material properties of each surface. Lens design optimization is a minimization problem. In this case, as the MF (or OF) decreases, individual fitness increases.

Evolutionary optimization has some advantages over classical methods.

- a) It provides a much higher probability of finding the global minimum.
- b) It does not use either the gradient, Jacobian, or derivative information from the OF.
- c) It has large application domains requiring little to no information about the problem being solved.
- d) It can be applied to problems with any kind of variable, be it continuous, discrete, or a mix of both.
- e) It is robust and well suited for complex search spaces without getting stuck in local minima.
- f) It does not require an initial solution for good convergence.
- g) It is simple to implement and parallelize.
- h) It can be applied to multi-objective problems without transforming them into a mono-objective problem.

This class of algorithms is considered computationally demanding; however, this drawback may not be a huge problem for modern computational systems. The rapid evolution of computer performance in the last decades has enabled the successful use of evolutionary methods to solve optimization problems in different areas. In optical design, these methods have been used since the 1990s with the promise of addressing the limitations of classical methods.

Different evolutionary optimization approaches have been used in lens design: genetic algorithms [16–23], evolution strategy [16–25], genetic programming [21,26,27], and evolutionary programming [28]. All of these studies have reported good results.

However, by studying the previous research, we discovered opportunities to improve the algorithms by designing from scratch, using a multi-objective approach [29], customizing the EA for the problem, and optimizing the glass selection before the system is effectively designed in order to reduce the design space.

Some studies have reported the use of multi-objective optimization approaches in lens design [18,30,31]. However, the previous research has not discussed the use of this technique to design optical systems in a way that anticipates fabrication issues such as system sensitivity and complexity.

Despite many papers claiming the possibility of designing a lens system from scratch, we found only a single implementation [26,27] where the number of lenses is used as a design variable in the search. For a true from-scratch design, only the system requirements must be provided. When the number of lenses is preset, there is a huge constraint on the design, which restricts the exploration of the design space.

Many of the studies applying EA in lens design made few or no customizations to the algorithm for the specific problem of lens design [17,19–21,24,25,30,31]. This lack of adaptation limits the algorithm’s performance.

The glass used in each lens is an important design parameter in a lens system, especially for color correction, but it is a discrete variable that cannot be directly optimized with classical methods. On the other hand, EAs have no limitations in dealing with discrete variables.

Nonetheless, many EO methods applied in lens design do not take advantage of this. They either do not consider the glasses as variables at all [21,22], or they transform glasses into continuous variables using glass model equations [20,24,31], which do not produce satisfactory results, especially when color correction plays an important role.

Furthermore, the studies that use glass directly as a discrete variable [16–18,23,26,32,33] do not do so efficiently, despite reporting excellent results. The dimensionality of the glass selection problem is huge, even for a reasonably simple optical system, as pointed out by Tesar [34].

Based on the weaknesses found in the methods presented in the literature, we present a new method that addresses all these issues. The proposed method makes use of a variant of the evolutionary “generalized extremal optimization” (GEO) algorithm [35] during the exploration phase.

## 2. The proposed automatic optical design method

#### 2.1. The method overview

Figure 1 illustrates the flowchart of the proposed method. Our method is characterized as a memetic approach [36,37] because it breaks the search into three parts: (i) glass selection, (ii) exploration, and (iii) exploitation phases. As input, the designer must provide only the optical system requirements and constraints.

#### 2.2. Glass selection phase

The glass selection phase is performed first, and it is independent of the two subsequent phases in the process of automatic design. The aim of this phase is to select the most appropriate set of glasses for the system under design. This information is then used as a priori knowledge in the exploration phase.

The glass selection method used here was presented earlier [38]. It relies on the compatibility of the glass materials to minimize the chromatic aberration while simultaneously controlling other specific, difficult-to-correct monochromatic aberrations. The glass selection method also uses a multi-objective approach, and the final selection of the best set of glasses depends on the designer’s choice.

The glasses in the exploration phase are then treated as discrete variables, but with a significantly reduced number of possibilities. The performance of the EO algorithm can be significantly increased if the number of glasses is constrained. This is confirmed by Tesar [34] and Van Leijenhorst *et al*. [17].

#### 2.3. Exploration phase

The exploration phase is based on a multi-objective EO algorithm, which is responsible for the global search aspect of the method. We call this algorithm Optical GEO (O-GEO). The O-GEO is a tailored version resulting from the mix between M-GEO [39] and GEO_{real1} [40], with the addition of specific codification of the candidate solutions and diversity operators, suitable for the problem of optical design.

A multi-objective optimization approach does not return a single solution, but finds a number of compromise solutions for the problem, which are called Pareto optimal solutions [29]. A solution is Pareto optimal, or non-dominated, if none of the objective functions can be improved without degrading some of the other objectives.
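As a minimal illustration of this dominance test (our own sketch; the function names are not from the method), a non-dominated filter for minimization objectives can be written as:

```python
def dominates(a, b):
    """True if solution a dominates b: a is no worse in every objective
    and strictly better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the objective vectors not dominated by any other."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

Applied to a set of trade-off points, every point that is beaten in all objectives by some other point is discarded, and the survivors form the Pareto front.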

This approach gives the designers a very clear idea of the compromises that can be made in the design, considering the metrics used.

Evolutionary Optimization (EO) Algorithms are very powerful tools to directly solve multi-objective optimization problems. The multi-objective approach proposed here addresses the image quality, assembly tolerance, and system complexity.

Tolerancing is a crucial issue in the optical design field due to the high sensitivity of imaging optical systems with respect to fabrication errors. The goal of optical designers is to determine systems that comply with image quality requirements after fabrication. Furthermore, the production costs of insensitive systems are reduced without significantly compromising the image quality.

Classical approaches for the design of insensitive optical systems are based on a two-part interactive method. The first part is the design itself. As the second task, tolerance analysis is carried out in order to determine the system error budget and the expected as-built performance. Usually, designers realize that some modifications to the system architecture are needed to improve the as-built performance within the desired manufacturing tolerances. In this way, an interactive process of design and tolerance analyses is performed to find an acceptable trade-off solution between the as-designed and the as-built system performance.

Some studies have already suggested methods where both the lens system image quality and sensitivity are considered at the same time during the design optimization phase [41–46]. However, each one of these methods has drawbacks that were already discussed in [47].

Another important characteristic in the fabrication of an optical system is system complexity. Herein, system complexity is taken as the number of optical elements used in the system.

Normally, more complex systems require more expensive production in terms of man-hours and material. On the other hand, increasing the number of optical elements in a system increases the number of degrees of freedom available to correct the optical aberrations, making better image correction possible. Furthermore, increasing the number of elements potentially lowers their individual power, making the system more insensitive to fabrication errors of each element. Then again, with more elements, the system has more potential sources of errors in the fabrication.

With this explanation, we see that defining the optimum number of elements used in an optical system is not a straightforward task, even for an experienced optical designer, especially for high performance optical systems. As our research has determined, the variation of the number of optical elements during the automatic optimization has rarely been explored.

Only one optimization tool for optical systems, based on a genetic programming algorithm, utilizes the number of lenses as a design variable [26,27]. In spite of that, the number of lenses is controlled by a penalty in the merit function; the more elements, the higher the penalty. In our opinion, the EO method used, as well as the way they control the number of lenses, are drawbacks of their implementation. The penalty controlling the number of lenses restricts the exploration of the design space, and the genetic programming approach limits the method’s application and/or performance due to its computationally demanding nature. To execute their method, Koza *et al*. [26] mention the use of a cluster with thousands of CPUs.

An overview of the O-GEO algorithm is shown in Fig. 2. It is very similar to other versions of GEO, particularly to M-GEO. We use real codification, and one of the many diversity operators is similar to the perturbations used in GEO_{real1}.

The algorithm performs the following steps: 1) randomly generate a set of species representing an optical system; 2) perturb the system by applying the diversity operators to all species, one at a time; 3) compute all objective functions for each solution that results from the application of the diversity operators; 4) check the dominance criteria among the perturbed systems and the Pareto front for the current execution of O-GEO, saving all of the non-dominated solutions of this execution in a new Pareto front; 5) randomly select one of the objective functions; 6) sort the perturbed systems according to their fitness from the least to the most adaptable as given by the selected objective function; 7) randomly select one of the perturbed systems and use this system as the seed for the next generation with the probability given by $P\left(k\right)={k}^{-\tau}$, where $k$ is the rank position and $\tau $ is a non-negative, real-valued adjustable parameter; and, finally, 8) check whether one of the stopping criteria has been fulfilled. If one of the stopping criteria is verified, the algorithm stops and returns the last Pareto front of this independent execution of O-GEO. This Pareto front is sent to the exploitation phase as input. Otherwise, a new generation starts with the system selected in step 7, and the algorithm repeats steps 2 to 7.
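The rank-based selection of step 7 can be realized in several ways; one common option is a roulette draw over the unnormalized weights $k^{-\tau}$. The sketch below is our illustration, not the paper's implementation (the helper name and default $\tau$ are assumptions):

```python
import random

def select_seed(ranked_systems, tau=1.5, rng=random):
    """Pick one perturbed system as the seed for the next generation.

    ranked_systems must already be sorted by the chosen objective, so
    that list index k corresponds to rank k+1.  Selection probability
    is proportional to k^(-tau): low ranks are strongly favored for
    large tau, while tau = 0 gives a uniform draw.
    """
    weights = [(k + 1) ** -tau for k in range(len(ranked_systems))]
    r = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for system, w in zip(ranked_systems, weights):
        acc += w
        if r <= acc:
            return system
    return ranked_systems[-1]   # numerical safety net
```

With a large $\tau$ the search is greedy around the best-ranked systems; with $\tau$ near zero it explores almost at random, which is the usual GEO trade-off knob.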

The candidate solution representation in O-GEO was customized for the problem allowing the application of the different diversity operators. A candidate solution in O-GEO has a string representation of the form shown in Fig. 3.

The first two cells in the string provide general information about the optical system. The first cell contains the number of lenses in the system. The second cell provides the surface number where the system aperture stop (system diaphragm) is located.

We are not aware of any commercial optical design software that uses the number of lenses as a design variable. In our case, however, because we intend to design optical systems from scratch, a wide-ranging search in the design space, which returns all potential trade-off solutions, is necessary. Therefore, in this case, the number of lenses becomes a crucial variable. The aperture stop position is also not normally used as a variable; nevertheless, it can be converted into a continuous variable in commercial optical design software with some tricks. In our case, we treat the aperture stop position as a discrete variable.

After the general system information, the codification string provides the data for each lens in the system. Each lens entry starts with the lens position in the set, given by an integer indicating its rank in the system, followed by its construction parameters: the first surface radius of curvature (R1); the second surface radius of curvature (R2); the central thickness of the lens (T1), i.e., the distance from the first lens surface to the second lens surface; the central distance to the next lens (T2), i.e., the distance from the second lens surface to the first surface of the following lens; the lens material (M1); and the medium between one lens and the next (M2).

Note that the lens position information has nothing to do with the lens sequence in the codification string. The lens position cell is important for facilitating the application of diversity operators proposed in the method such as the variation of the number of lenses in the system and the switching of lens position as presented ahead.

The lens radii and central thicknesses are real values, normally defined within boundaries provided by the designer. The material variables are represented by an integer corresponding to a real glass in the catalog; if the material is set to zero, it is interpreted as air.

Note that the radius of the object, the radius of the image plane, and the distance from the object to the first lens are not codified in the string. These parameters are not considered variables and must be defined by the designer. Moreover, the second radius of curvature of the last lens in the system, as well as the related second central distance, are not treated as variables in this phase of the search. The second radius of curvature of the last lens is solved to give the system the required effective focal length, and the distance from the last surface to the image is solved to keep the image plane at its paraxial position.
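For illustration only, the codification described above can be mirrored by a simple data structure (the field names are ours, not the paper's):

```python
from dataclasses import dataclass, field

@dataclass
class Lens:
    position: int   # rank of this lens in the system (not its string order)
    r1: float       # first surface radius of curvature (R1)
    r2: float       # second surface radius of curvature (R2)
    t1: float       # central thickness of the lens (T1)
    t2: float       # central distance to the next lens (T2)
    m1: int         # lens material: catalog index, 0 means air
    m2: int         # medium between this lens and the next (0 = air)

@dataclass
class Candidate:
    num_lenses: int     # first cell of the codification string
    stop_surface: int   # second cell: surface number of the aperture stop
    lenses: list = field(default_factory=list)
```

Keeping the lens position as an explicit cell, decoupled from the order of the lens records, is what makes the position-swap and add/delete operators easy to apply.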

In Step 2, the diversity operators are applied to the current candidate solution, one at a time, for each one of the species (or variables). Based on our knowledge of the problem, six diversity operators were developed for O-GEO. Each one of the operators acts on one type of variable. O-GEO has the following diversity operators: 1) continuous variable mutation, 2) lens flipping mutation, 3) lens position mutation, 4) glass mutation, 5) aperture stop shift mutation, and 6) number of lens mutation.

**Continuous variables mutation**

This operator is applied to the continuous variables of the candidate solution string: radii of curvature (R1 and R2), glass thickness (T1), and lens air separation (T2). The continuous variable mutation works in a similar way to the diversity operator used in GEO_{real1} implementations. We applied a small change, because the continuous variables can represent different classes of physical parameters, which may have different numerical ranges and magnitudes. The perturbation of each one of the continuous variables is then given by Eq. (1),

$${x^{\prime}}_{i}={x}_{i}+N(0,{\delta}_{j}),$$

where ${{x}^{\prime}}_{i}$ is the value of variable *i* after the perturbation, and $N(0,{\delta}_{j})$ is a random number with a Gaussian distribution, zero mean, and standard deviation ${\delta}_{j}$. The standard deviation is different for each class *j* of variables and is given by

$${\delta}_{j}=\sigma \cdot Vrang{e}_{j},$$

where $Vrang{e}_{j}$ is the numerical range for the variables of class *j*. The range is calculated as the difference between the maximum and minimum values allowed for each class of variables, which are defined as constraints by the designer. Defining the standard deviation in this way avoids the use of a different adjustable parameter for each variable class in the algorithm; it is only necessary to define $\sigma $.

If, after the perturbation, the continuous variable is outside of the constraints defined, the algorithm fixes the value of the variable to the closest boundary.
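A minimal sketch of this operator, assuming the class-dependent standard deviation described above (helper name and default $\sigma$ are ours):

```python
import random

def mutate_continuous(x, vmin, vmax, sigma=0.05, rng=random):
    """Perturb one continuous variable (a radius or a thickness).

    The Gaussian step size scales with the range of the variable's
    class, so a single sigma serves all classes; the result is then
    fixed to the closest designer-defined boundary if it escapes.
    """
    delta = sigma * (vmax - vmin)       # class-dependent standard deviation
    x_new = x + rng.gauss(0.0, delta)   # zero-mean Gaussian perturbation
    return min(max(x_new, vmin), vmax)  # clamp to [vmin, vmax]
```

Because the step is proportional to the allowed range, a radius bounded in hundreds of millimeters and an air gap bounded in a few millimeters are perturbed on comparable relative scales.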

**Lens flipping mutation**

The lens flipping operator mimics a method used by optical designers to escape from local minima. Smith [48] suggests the technique of flipping over a lens in one or more of the design examples in his book.

With this operator, we can automate this technique, applying it to all of the lenses in the system, one at a time. The implementation is very simple: for a specific lens in the system, the numerical values of R1 and R2 are swapped and their signs are changed.
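The swap-and-change-sign rule amounts to a one-line helper (our illustration):

```python
def flip_lens(r1, r2):
    """Flip a lens front-to-back: the surface radii are swapped and
    their signs changed, so the flipped lens presents the same shape
    seen from the opposite direction."""
    return -r2, -r1
```

For example, a biconvex lens with radii (50, -30) becomes one with radii (30, -50) after flipping.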

**Lens position mutation**

This operator acts on the lens position cell of the codification string. The objective of this operator is to swap the lens positions in the system.

The lens position mutation operator works in the following way. The position of the first lens appearing in the codification string is swapped, in turn, with the position of the second lens, then the third, and so on, until it has been swapped with the last lens in the string. Then, the second lens in the string is swapped with the third, the fourth, and so on, until it has been swapped with the last lens. This process is repeated until the next-to-last lens in the string has been swapped with the last lens; in other words, every pair of lens positions is tried exactly once.
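The enumeration of all pairwise position swaps can be sketched as follows (our illustration; each yielded list is one perturbed system):

```python
def position_swaps(lenses):
    """Yield every variant of the lens list obtained by swapping one
    pair of lens positions: (1,2), (1,3), ..., (n-1,n)."""
    n = len(lenses)
    for i in range(n - 1):
        for j in range(i + 1, n):
            swapped = list(lenses)
            swapped[i], swapped[j] = swapped[j], swapped[i]
            yield swapped
```

For a system of $n$ lenses this produces $n(n-1)/2$ candidate systems per generation from this operator alone.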

**Glass mutation**

EO methods are advantageous because they can easily deal with discrete variables. In this way, the glasses can be used as variables, and we can always assume real glass data.

In the automatic design method presented in this paper, the selection of the most suitable glasses for the specific design is the first task executed, as shown in Fig. 1. In doing so, we can vary the glass while drastically reducing the design space. The number of glasses is reduced from hundreds to a few, and the glass mutation operator takes advantage of this. The mutation operator changes the glass type of each lens to each one of the available glasses selected by the method. This is done one lens at a time.
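A sketch of this enumeration over the pre-selected glass catalog (names are ours):

```python
def glass_mutations(system_glasses, catalog):
    """For each lens in turn, yield one variant per alternative glass
    from the pre-selected catalog (the current glass is skipped)."""
    for i, current in enumerate(system_glasses):
        for glass in catalog:
            if glass != current:
                trial = list(system_glasses)
                trial[i] = glass
                yield trial
```

With the catalog cut down to a handful of glasses by the selection phase, this exhaustive sweep stays cheap even for systems with many lenses.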

**Aperture stop shift mutation**

This operator is applied to the aperture stop position cell, which is the second cell in the codification string. The aperture stop position is given in terms of the surface number, counted from the first surface (the object) to the last surface (the image). The aperture stop can be placed at any surface except the object and image surfaces. Because the possibilities are few and finite, this operator changes the aperture stop position by placing it at every possible surface.

**Number of lens mutation**

This operator adds or deletes lenses in the system. To do that, it acts on the first cell of the codification string, increasing or decreasing the number. The perturbation in this variable was limited to plus or minus one lens.

For the case in which one lens is added, the parameters of the new lens (R1, R2, T1, T2, and M1) are randomly generated. This new lens is then placed in all possible lens positions from 1 to $NL+1$, where $NL$ is the number of lenses before the application of this operator. For each position $NP$ occupied by the new lens, the position of each one of the lenses with an index greater than or equal to $NP$ is changed to their original position plus one.

For the case where one lens is deleted, all possibilities are also tested. All of the system lenses are deleted, one at a time. When the lens with index $DL$ is deleted, all of the lenses with an index greater than $DL$ are changed to their original position minus one.
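Both cases can be sketched as exhaustive enumerations (our illustration; the lens records here are placeholders for the codification cells, and position re-indexing is implicit in the list order):

```python
import random

def delete_variants(lenses):
    """Delete each lens in turn; lenses after the deleted index
    effectively shift down by one position."""
    for d in range(len(lenses)):
        yield lenses[:d] + lenses[d + 1:]

def insert_variants(lenses, rng=random):
    """Insert one randomly generated lens at every possible position
    1..NL+1; lenses at or after the insertion point shift up by one."""
    new_lens = {                       # randomly generated parameters
        'r1': rng.uniform(-100.0, 100.0),
        'r2': rng.uniform(-100.0, 100.0),
        't1': rng.uniform(1.0, 10.0),
        't2': rng.uniform(0.0, 10.0),
        'm1': 0,                       # 0 = air; a glass index in practice
    }
    for p in range(len(lenses) + 1):
        yield lenses[:p] + [new_lens] + lenses[p:]
```

A system of $NL$ lenses thus contributes $NL$ deletion candidates and $NL+1$ insertion candidates from this operator.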

#### 2.4. Exploitation phase

While EO methods are very good at exploring different regions of the design space without getting stuck in local minima, they are not as efficient at tuning the optical design when the search is close to a minimum, because optical design problems are high-dimensional with strong epistasis.

For this reason, many studies applying EO in lens design use hybrid methods, combining both local search and EO methods to find good solutions [19,20,28,30–32]. This hybridization combines the best characteristics of each method and overcomes their weaknesses.

In a hybridized process, the EO method is used for finding regions of solution space that are good candidates for locating the optimum candidate, and the classical optimization methods are used to actually find the optimum candidate. This means that the EO methods are used to find a set of good starting points and the classical methods use these starting points to find the optimum solution.

The exploitation phase, also known as the intensification phase, begins after each independent execution of the exploration phase terminates. The non-dominated solutions found during the previous independent execution of O-GEO are used as input in this phase. A local search is conducted for every non-dominated system for which real rays can be traced.

The local search is based on a descent optimization method. In the exploitation phase, the objective function is formed by an image quality metric and a static internal-constraint penalty function. The image quality metric was chosen for this phase because it is the most important one.

Because the method can only deal with continuous variables, optimization in the exploitation phase only affects the radii (R1 and R2) and the thicknesses (T1 and T2) of all lenses in the system. All other parameters, i.e., the glass material, aperture stop position, number of lenses, lens sequence, etc., are fixed during this local optimization.

The local search is done with an algorithm based on the Levenberg–Marquardt method [49]. The local optimization algorithm runs for each feasible system until a stopping criterion has been reached.
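For illustration, a bare-bones damped least squares step of the Levenberg–Marquardt kind, with a finite-difference Jacobian, can be sketched as follows. This is our simplified sketch under a fixed damping factor, not the implementation used in the method:

```python
import numpy as np

def damped_least_squares(residuals, x0, damping=1e-3, iters=50, h=1e-6):
    """Minimize sum of squared residuals r(x) by repeatedly solving
    (J^T J + damping * I) dx = -J^T r, where J is a forward-difference
    Jacobian of the residual vector."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = np.asarray(residuals(x), dtype=float)
        J = np.empty((r.size, x.size))
        for j in range(x.size):               # finite-difference Jacobian
            xp = x.copy()
            xp[j] += h
            J[:, j] = (np.asarray(residuals(xp), dtype=float) - r) / h
        dx = np.linalg.solve(J.T @ J + damping * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(dx) < 1e-10:        # stopping criterion
            break
        x = x + dx
    return x
```

In a lens design setting, the residual vector would collect the weighted aberration and constraint terms of the merit function, and `x` would hold the radii (R1, R2) and thicknesses (T1, T2); production implementations adapt the damping factor at each step rather than keeping it fixed.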

Once the local search has been applied for all the feasible input solutions, the algorithm checks the dominance criteria. Dominance is verified between the current systems after the local search and the non-dominated solutions found so far by the automatic design algorithm. The new set of non-dominated solutions is saved in a file with the codification used by O-GEO.

## 3. Simulator and the merit functions

In order to automatically design and optimize optical systems, we need to use an optical simulator. An optical simulator is defined herein as software capable of simulating the sequential light propagation through a set of optical elements, emulating the image formation of an optical system.

We also need to define a metric, or metrics, used to evaluate system performance. As described earlier, the optimization process consists of changing the design parameters with the goal of minimizing or maximizing the metric or metrics provided, while simultaneously respecting the constraints imposed by the problem.

This section presents the concept of the simulator, metrics, and merit functions used in this work for the automatic design of optical systems.

#### 3.1. The simulator

In our first approach, we used a commercial lens design software program as the simulator. The lens design software was linked to preliminary versions of the automatic design algorithm. The role of the commercial lens design software was only to trace rays through the system defined by the automatic design algorithm.

Unfortunately, we found some limitations with this approach. Performance was slow in the exchange of information between the lens design software and our automatic design algorithm, and it was not possible to find an efficient way of changing the number of components in the lens design software during the optimization process. These weaknesses led us to abandon the idea of using commercial lens design software for ray tracing purposes and develop our own simulator.

The simulator was intended only for research purposes and was not intended to be a sophisticated tool like commercial optical design software solutions. Our simulator was only capable of simulating light propagation by means of a sequential geometrical ray trace through axial symmetrical systems containing only spherical surfaces, which was all we needed for our research.

The optical design software was developed following a component-based architectural design method, also known as the building block method. In this kind of architecture, the software is decomposed into independent subroutines (functions), which can be combined to execute specific and complex tasks. This architecture is flexible, facilitating debugging, integration, evolution, and extension.

The optical system metrics proposed in this work were implemented in this simulator, too. Other auxiliary functions used during the optimization process, such as lens edge thickness, lens edge separation, and total system length, were implemented as well and were used to control the constraints on the systems.

The simulator took about one year of work to become fully functional. Every implemented function was verified. Whenever possible, the results of the functions were compared to the quantities reported by commercial optical design software.

#### 3.2. The optimization metrics

In this section, we present the metrics used in the optimization algorithm proposed in this paper. We used three classes of metrics to evaluate the solutions: the image resolution, the system’s sensitivity to fabrication errors, and the system’s complexity. System complexity is simply taken as the number of optical elements used. However, the other two metrics involve more elaborate considerations and computations, which we explain in detail.

#### 3.2.1. Image resolution metrics

For the image resolution metric, we applied two different merit functions: one for the exploration phase and one for the exploitation phase. Each merit function had special characteristics that are appropriate to the phase where they were applied. Both metrics were based on an estimation of the optical system’s wavefront error.

One of the key points for determining the performance and feasibility of evolutionary optimization methods applied to engineering problems is related to the computational cost necessary for calculating the merit function. These optimization methods are stochastic and must calculate the MF repeatedly for a large number of times to achieve an adequate solution.

When personal computers were not available, or when computers were not available at all, optical designers developed simple and fast methods to verify the quality of an optical system design. Perhaps the most efficient and famous are the Seidel aberration coefficients. These coefficients are calculated using a paraxial trace of only two meridional rays through the system. Several famous systems were designed using the Seidel coefficients, such as the Cooke triplet and the Ritchey–Chrétien telescope.
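The paraxial trace underlying such coefficient calculations can be sketched in a few lines. Below is an illustrative y-nu (height/reduced-angle) trace through spherical refracting surfaces; this is a generic textbook sketch, not the simulator's implementation, and the thin-lens data in the usage note are hypothetical.

```python
def paraxial_trace(y0, u0, surfaces):
    """y-nu paraxial ray trace through spherical surfaces.

    Each surface is a tuple (c, n_next, t): curvature c, refractive index
    n_next after the surface, and axial thickness t to the next surface.
    The ray starts in air (n = 1) with height y0 and reduced angle u0.
    Returns the ray heights at each surface plus the final (y, n*u)."""
    y, nu, n = y0, u0, 1.0
    heights = []
    for c, n_next, t in surfaces:
        # Paraxial refraction: n'u' = nu - y * c * (n' - n)
        nu = nu - y * c * (n_next - n)
        heights.append(y)
        # Transfer to the next surface: y' = y + t * u'
        y = y + t * (nu / n_next)
        n = n_next
    return heights, y, nu
```

For example, a thin biconvex lens (n = 1.5, curvatures ±0.01) has power 0.01, so a parallel ray at height 10 crosses the axis 100 units behind the lens.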

With the invention and evolution of computers, the design and optimization of optical systems using the Seidel coefficients declined. Since computers became fast enough to trace real skew rays through an optical system, tracing many rays has been the preferred way to construct the MF of optical systems, because ray tracing assesses image quality more accurately than Seidel coefficients. However, even with very fast computers, faster MF computation remains welcome for evolutionary optimization of optical systems, as it allows a better and faster exploration of the complex design spaces intrinsic to the problem.

Another important issue in applying EO methods to the lens design problem is the feasibility of the systems. Many systems generated during the search are unfeasible, especially during the random generation of the first population and under the diversity operators (e.g., crossover or mutation) in the early stages of the optimization. This issue is aggravated in large-scale problems (systems with many lenses) and is a consequence of the complex constraints involved in optical design problems. For unfeasible systems, standard image quality metrics that depend on tracing real rays, such as spot size and wavefront error, are impossible to calculate because the real ray trace fails somewhere in the system.

These feasibility problems have been reported in several studies, along with methods to circumvent these issues [18,20,31].

With speed and feasibility issues in mind, we developed an image quality metric for the exploration phase given by the square root of the sum of squares of the RMS wavefront error of the whole FOV for each wavelength λ. For this calculation, we used the wave aberration function model $W\left({H}^{\text{'}},\rho ,\theta ,\lambda \right)$ given at the circular exit pupils of rotationally symmetric systems described by an infinite power series as a function of the normalized image height ${H}^{\text{'}}$ and the exit pupil coordinates $(\rho ,\theta )$. The wave aberration coefficients in $W$ are the peak-normalized ones ${W}_{ijk}$, which have been normalized in image height and exit pupil radius.

For the proposed metric, the wave aberration equation was expanded up to the fifth-order coefficients. The third-order coefficients (from ${W}_{040}$ to ${W}_{311}$) are the Seidel terms, and their calculation method can be found in any optical design book. The algebraic equations used to calculate the fifth-order wave aberration coefficients (from ${W}_{240}$ to ${W}_{333}$) were presented by Sasian [50]. The fifth-order coefficient computation uses only the data from the same two paraxial rays used to calculate the Seidel coefficients.

We ignore the distortion coefficients ${W}_{311}$ to ${W}_{511}$ in our metric because the distortion does not influence the image resolution. Furthermore, the first-order coefficients ${W}_{111}$ and ${W}_{020}$, known respectively as the lateral magnification and defocus, are taken to be zero for the principal wavelength ${\lambda}_{0}$, because the lateral magnification is not an aberration, and the paraxial focal plane of ${\lambda}_{0}$ is taken as the observation plane. However, the chromatic change of magnification, known as the lateral color, and the chromatic change of focus, also known as the axial color, or simply chromatic aberration, are important aberrations and were considered in the calculation. In this way, for other wavelengths $\lambda $, ${W}_{020}$ and ${W}_{111}$ are given by the following two equations, respectively [51]:

where ${n}^{\prime}$ is the index of refraction in the image space, ${r}^{\prime}$ is the exit pupil semi-diameter, $R$ is the radius of curvature of the reference sphere $S$, $\Delta S(\lambda ,{\lambda}_{0})$ is the difference, between the wavelengths ${\lambda}_{0}$ and $\lambda$, in the distance from the exit pupil position to the paraxial image plane, and $\Delta {H}^{\prime}(\lambda ,{\lambda}_{0})$ is the difference in image height between the paraxial principal, or chief, rays coming from the edge of the FOV for ${\lambda}_{0}$ and $\lambda$.

With the proposed image quality metric based on the third- and fifth-order aberration coefficients, we obtained a very robust and fast computation technique. The metric is robust because no trigonometric (real) rays are traced: the paraxial rays used to compute the desired coefficients are not subject to failure from a ray missing a surface or encountering total internal reflection [48]. Only two paraxial rays per wavelength are necessary to calculate the coefficients, resulting in a very computationally efficient function for the exploration phase of the algorithm.
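As an illustration of the exploration-phase metric, the sketch below numerically evaluates the RMS of a truncated wave aberration series over the unit exit pupil and then takes the root sum square over fields and wavelengths. The paper computes the coefficients analytically from two paraxial rays and includes fifth-order terms; here only a few third-order terms are modeled and all coefficient values are hypothetical.

```python
import numpy as np

def rms_wavefront_error(w, H, n_rho=32, n_theta=64):
    """Numerical RMS of W(H', rho, theta) over the unit exit pupil for one
    field point H and one wavelength. `w` maps coefficient names W_ijk to
    values in waves (hypothetical); distortion terms are ignored since they
    do not affect image resolution."""
    rho = np.linspace(1e-6, 1.0, n_rho)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(rho, theta, indexing="ij")
    W = (w.get("W040", 0.0) * R**4                          # spherical
         + w.get("W131", 0.0) * H * R**3 * np.cos(T)        # coma
         + w.get("W222", 0.0) * H**2 * R**2 * np.cos(T)**2  # astigmatism
         + w.get("W220", 0.0) * H**2 * R**2)                # field curvature
    # Area element on the pupil is rho * drho * dtheta, hence weights = R
    mean = np.average(W, weights=R)
    return float(np.sqrt(np.average((W - mean) ** 2, weights=R)))

def exploration_metric(coeffs_per_wavelength, fields):
    """Square root of the sum of squares of the RMS wavefront error over the
    whole FOV for each wavelength (exploration-phase metric)."""
    return float(np.sqrt(sum(rms_wavefront_error(w, H) ** 2
                             for w in coeffs_per_wavelength for H in fields)))
```

For pure third-order spherical aberration ($W_{040}=1$), the pupil RMS tends to $\sqrt{4/45}\approx 0.298$ waves as the sampling is refined.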

For the exploitation phase, we used a different model to represent the wavefront error. For a given object point and wavelength, the aberration function of an optical system can be represented in terms of a complete set of Zernike circle polynomials [51], which are orthonormal over the unit circle. Therefore, the square RMS wavefront error for a specific field point ${H}^{\prime}$ and wavelength $\lambda$ is given by the sum of squares of all of the Zernike coefficients except the piston $({Z}_{00})$:

where ${Z}_{nm}$ are the Zernike coefficients, which depend on the image height ${H}^{\prime}$ and on the wavelength $\lambda$. In this way, the square of the RMS wavefront error of the system can be approximated by

where $l$ and $k$ are the number of field points and number of wavelengths used to evaluate the system, respectively. The square root of Eq. (6) is used as an image quality metric for the local search algorithm.
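Under one plausible reading of Eqs. (5) and (6), with the average taken over the $l$ field points and $k$ wavelengths, the exploitation-phase metric can be sketched as:

```python
import numpy as np

def square_rms_wfe_from_zernike(zernike_coeffs):
    """Square RMS wavefront error for one field point and one wavelength:
    sum of squares of the orthonormal Zernike coefficients, piston excluded.
    `zernike_coeffs` is ordered with the piston term Z00 first."""
    c = np.asarray(zernike_coeffs, dtype=float)
    return float(np.sum(c[1:] ** 2))

def exploitation_metric(coeff_table):
    """Image quality metric for the local search: square root of the mean
    square RMS wavefront error over l field points and k wavelengths.
    `coeff_table` has shape (l, k, n_coeffs); the averaging convention is
    an assumption, since Eq. (6) is not reproduced here."""
    table = np.asarray(coeff_table, dtype=float)
    l, k, _ = table.shape
    total = sum(square_rms_wfe_from_zernike(table[i, j])
                for i in range(l) for j in range(k))
    return float(np.sqrt(total / (l * k)))
```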

To calculate the Zernike coefficients, the polynomial must be truncated at some point $n={n}_{max}$, and a set of real rays for each field position and wavelength defined must be traced through the system. The optical path difference (OPD) at the exit pupil with respect to the centroid is computed for each ray. With the OPD and the coordinate $\left(\rho ,\theta \right)$ of each ray in the exit pupil, a weighted least square fitting method is used to compute the Zernike coefficients for each field and wavelength. A Gaussian quadrature (GQ) ray distribution scheme was chosen to sample the pupil. With this scheme, only a few rays are necessary to obtain good accuracy for the image quality metrics [52].
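The weighted least-squares fit of Zernike coefficients to OPD samples can be sketched as follows. The basis here is truncated to a few orthonormal terms for illustration only; the paper truncates at some $n_{max}$ and samples the pupil with a Gaussian quadrature scheme, whereas the weights below are generic inputs.

```python
import numpy as np

def fit_zernike(opd, rho, theta, weights):
    """Weighted least-squares fit of low-order orthonormal Zernike terms to
    OPD samples at exit-pupil coordinates (rho, theta). Illustrative reduced
    basis: piston, defocus, astigmatism, coma, spherical."""
    basis = np.column_stack([
        np.ones_like(rho),                                    # Z(0,0) piston
        np.sqrt(3) * (2 * rho**2 - 1),                        # Z(2,0) defocus
        np.sqrt(6) * rho**2 * np.cos(2 * theta),              # Z(2,2) astig.
        np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta),  # Z(3,1) coma
        np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),           # Z(4,0) spher.
    ])
    # Weighted least squares: scale rows by sqrt(w) before solving
    sw = np.sqrt(np.asarray(weights, dtype=float))
    coeffs, *_ = np.linalg.lstsq(basis * sw[:, None], opd * sw, rcond=None)
    return coeffs
```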

The RMS wavefront error of the system could be easily computed by the square root of the direct weighted square sum of the OPD of each ray. However, according to Rayces [53], the relationship between the optical system variables and the Zernike coefficients is more linear than the relationship between the optical system variables and the OPD of each ray. Thus, utilizing the Zernike coefficients to compute the wavefront metric simplifies the design space topography. This results in better convergence when gradient-based optimization algorithms are used in the local search, which is the case for the DLS method used in this phase of the algorithm.

### 3.2.2. System sensitivity metric

For the sensitivity metric, we followed the same principles used to define the image quality metric for the exploration phase. Therefore, the metric had to be quickly evaluable, provide a good measure of tolerance, and be computable in any situation.

In the literature, some metrics can be found that would work for our application, such as the power distribution metric proposed by Sasian and Descour [54], the two sensitivity metrics proposed by Wang and Sasian [55], and the metric proposed by Isshiki *et al.* [41]. All of these metrics can be calculated with paraxial rays. However, although these metrics are validated empirically, their derivations lack a theoretical background. We therefore developed a new metric for system sensitivity that fulfills our requirements and has a theoretical derivation.

In our case, our primary concern is the assembly tolerance, specifically related to the tilts and decentering of the surfaces or elements. These problems are the most pernicious, difficult to control, and the most difficult to debug when the system does not perform according to the image quality requirements.

When there are tilts and/or decenters, the system symmetry is broken, and the system might become completely asymmetric. Nevertheless, to simplify the metric without loss of generality, the decenter and/or the tilt are considered only for one direction (in our case, the tangential plane). This results in a plane-symmetric system, which is easier to treat than an asymmetric system. Once the estimated performance of a plane-symmetric system is known, the estimated performance of an asymmetric system can be computed by multiplying the estimated performance of a plane-symmetric system by the square root of two [56].

According to Mahajan [51], the change in the contribution of a surface to the primary aberration function (third-order aberration) due to its decenter ${\Delta}_{i}$ (in the y-direction) or its tilt ${\beta}_{i}$ (about the vertex in the tangential plane) is given by Eq. (7).

where ${a}_{si},{a}_{ci},{a}_{ai},{a}_{di},{a}_{ti}$ are the third-order coefficients for the exit pupil of surface $i$, not for the exit pupil of the system; ${r}_{i}$ and ${\theta}_{i}$ are the polar coordinates in the exit pupil of surface $i$; and ${h}^{\prime}_{i}$ is the image height produced by surface $i$. Here, ${\epsilon}_{i}$ and ${E}_{i}$ are given for the decenter and tilt cases, respectively, by

Here, ${M}_{i}$ and ${m}_{i}$ are the object and exit pupil lateral magnification produced by surface $i$, respectively, and ${S}_{i}$ and ${s}_{i}$ are the axial distances of object and entrance pupil from the surface $i$, respectively.

We can rewrite Eq. (7) in the form:

The coefficients ${a}_{cci}$, ${a}_{lai}$, ${a}_{fdi}$, and ${a}_{qti}$ are named, respectively, the constant (or uniform) coma, linear astigmatism, field tilt, and quadratic distortion [51,57]. The prefix adjectives *uniform*, *linear*, and *quadratic* in the coefficient names express the dependence of the aberration on the field.

According to Mahajan [51], for a multi-surface system, the perturbation of a surface affects not only its own aberration contribution but also the aberration contribution of the surfaces that follow it, even if those surfaces are not perturbed. This happens because the perturbation changes the location of the image point and the center of the exit pupil of the perturbed surface, which are the object point and entrance pupil of the next surface. However, to simplify the definition of the metric and to increase the computation speed, we made the following assumption. If the decenters and/or tilts are small, on the order of the assembly errors (tens of microns), it is reasonable to assume that the changes to the positions of the image and exit pupil of the perturbed surface are so small that the aberrations induced by the unperturbed surfaces are unchanged compared with the unperturbed system. In other words, only the intrinsic aberrations of a perturbed surface are considered, and the extrinsic aberrations of the following surfaces are ignored.

By making this assumption, we can perform a change of variables in Eq. (12), rewriting it in terms of the system’s normalized exit pupil coordinates and normalized field as follows:

where

Here, $j$ is the number of surfaces in the system, $a$ is the system's exit pupil semi-diameter, and ${h}^{\prime}_{max}$ is the image height produced by the system.

With Eq. (17), the sensitivity metric can be defined as the root sum square (RSS) of the RMS change in the aberration function at wavelength ${\lambda}_{0}$ induced independently by each surface tilt, as follows:

where

For the metric computation, only the surface tilts are considered, and the amount of tilt is assumed to be the same for all of the surfaces. A decenter is equivalent to a thickness change plus a surface tilt, where the thickness change is very small, normally negligible, and below the tolerances; therefore, surface decenters are best treated as surface tilts [56]. The distortion coefficients ${W}_{211i}$ are assumed to be zero for the sensitivity metric because, as mentioned before, distortion does not affect the resolution of the image. The field tilt coefficients ${W}_{120i}$ may also be ignored if the focal plane tilt can be considered as a compensator.

The relationships between the aberration coefficients ${a}_{s}\dots {a}_{t}$ and the peak-normalized aberration coefficients ${W}_{ijk}$ are given by:

Now, we can substitute Eqs. (25) to (28) into Eqs. (13) to (15), and substitute the results into Eqs. (18) to (20). These substitutions make it possible to write the peak-normalized wavefront aberration coefficients from Eq. (17) (${W}_{031i}(\lambda)$, ${W}_{122i}(\lambda)$, and ${W}_{120i}(\lambda)$) in terms of the Seidel wavefront peak-normalized aberration coefficients of each surface from the unperturbed system as follows:

From these substitutions, we defined the sensitivity metric so that it depends only on the Seidel wavefront peak-normalized aberration coefficients of each surface. These coefficients are a subset of the coefficients used in the image quality metric defined for the exploration phase. Therefore, the outlined metric complies with all of the defined requirements.
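The final assembly of the sensitivity metric can be sketched as below, assuming the per-surface RMS aberration changes at ${\lambda}_{0}$ have already been computed from the Seidel coefficients (the intermediate equations are not reproduced here, so those inputs are treated as hypothetical precomputed values). The $\sqrt{2}$ factor for a fully asymmetric system follows [56].

```python
import numpy as np

def sensitivity_metric(per_surface_rms_change):
    """Root sum square (RSS) of the RMS aberration-function change induced
    independently by tilting each surface by the same small amount.
    `per_surface_rms_change` holds one precomputed RMS value per surface
    (hypothetical inputs, derived in the paper from per-surface Seidel
    coefficients)."""
    v = np.asarray(per_surface_rms_change, dtype=float)
    return float(np.sqrt(np.sum(v ** 2)))

def asymmetric_estimate(plane_symmetric_value):
    """Estimated performance of a fully asymmetric perturbed system from the
    plane-symmetric estimate: multiply by sqrt(2) [56]."""
    return plane_symmetric_value * float(np.sqrt(2.0))
```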

## 4. Examples and results

To test the proposed method for the automatic design of optical systems, we applied it to two different real lens design problems. The first was an optical system intended for an autonomous star tracker instrument, and the second was intended for a multi-spectral, spaceborne, remote-sensing instrument. In both cases, the basic requirements and constraints were specified and the method was run. The results presented herein verify the efficiency of the algorithm.

#### 4.1. First example, the star tracker lens

Autonomous star trackers are attitude sensors used in spacecraft that are capable of providing accurate attitude information in the arc-second range. The instrument provides the attitude by capturing images of the dark sky containing stars, processing them, and comparing them with known star coordinates obtained from a star catalog. The instrument hardware is basically composed of an imaging optical system, a silicon matrix detector (CCD or CMOS), driving electronics, and a CPU. Defining the requirements for an autonomous star tracker optical system is not a straightforward task; it is a trade-off among field of view, sensitivity, image quality, physical size, and weight. Discussing how the requirements were defined is beyond the scope of this paper, so we simply present them. Table 1 shows the basic optical requirements and constraints defined for the star tracker lens.

According to the proposed methodology, as shown in Fig. 1, the first step is to select the most suitable glasses for the design. For that, the glass selection method is executed [38]. The inputs for the glass selection phase can be extracted from Table 1. The focal length and the F/# are taken directly from the table. For the wavelengths, we used 0.443, 0.483, 0.52, 0.558, 0.601, 0.647, 0.698, 0.761, and 0.846 μm. The primary wavelength ${\lambda}_{0}$ was set to 0.601 μm, which is around the peak of the spectral response covered by the instrument. More than 85% of the area under the curve resulting from the multiplication of the normalized spectral responsivity of the detector and the normalized average of the star temperature spectral emission lies between 0.443 and 0.846 μm.

The Schott glass catalog was selected [58] to run the method. However, some specific glasses from this catalog were discarded: Lithotec-CAF2, N-PK51, N-PK52A, N-FK51A, P-PK53, N-PSK53A and N-PSK53. Despite these glasses being very good options for color correction, they were rejected due to their undesirable thermal behavior.

Due to the large spectral band and small F/#, we ran the method for arrangements of three glasses. The limit ${F}_{1}$ defined for this case was 9. The post-Pareto analysis was applied only to the solutions in Pareto ranking 1, using the method presented in Section 4.1.2 of [38], where the metric $\left|{\overline{g}}_{i}\right|$ was calculated using only ${F}_{1}$ and ${F}_{3}$. In Table 2 we can see the first 10 rows of the output table, sorted from the smallest to the largest $\left|{\overline{g}}_{i}\right|$.

We selected the set in the second row of Table 2: N-FK5, SF57, and N-SK5. The set in the first row was not selected due to the high value of ${F}_{3}$ and the low transmittance of P-SF68 in the blue range.

Once the glass materials have been selected for the design, the method enters the loop of the exploration and exploitation phases, as shown in Fig. 1. In this example, we allowed the number of lenses to vary from 5 to 9. No automatic stopping criterion was used; the algorithm ran until the user stopped the execution.

Four instances of the algorithm were executed in parallel. The Pareto fronts resulting from each instance were combined and the non-dominated solutions were filtered out, resulting in the final Pareto front shown in Fig. 4. Each bubble in this graph represents a trade-off solution in terms of the image quality (abscissa), tolerance (ordinate), and number of lenses (third axis represented by the dot color). The number of lenses used by the system is also printed inside the bubble representing its locus in the objective function space. In the same figure, we show the design layout for some of the solutions found.
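Combining the fronts from the parallel instances reduces to a standard non-dominated filter. A minimal sketch follows, assuming all three objectives (image quality, tolerance, and number of lenses) are minimized; the tuples in the usage note are hypothetical objective values, not results from the paper.

```python
def dominates(a, b):
    """True if solution a dominates b: a is no worse than b in every
    objective and strictly better in at least one (all minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def merge_pareto_fronts(fronts):
    """Pool the Pareto fronts from parallel instances and keep only the
    non-dominated solutions, yielding the combined final front."""
    pool = [p for front in fronts for p in front]
    return [p for p in pool
            if not any(dominates(q, p) for q in pool if q is not p)]
```

For example, pooling `[(1.0, 2.0, 5)]` with `[(2.0, 1.0, 5), (2.0, 3.0, 5)]` discards the last point, which is dominated by the first.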

As expected, the algorithm returned many different trade-off solutions, unlike the single solution returned by mono-optimization methods. This provides the optical designers better insight of the possible compromises for the optical system under design.

This Pareto front was obtained after $6.21\times {10}^{8}$ merit function calculations, including the exploration and exploitation phases from all instances. For this experiment, the average merit function evaluation rate was $2.46\times {10}^{2}$ per second per core on an Intel quad-core i5-2500 CPU. The total experiment time was around 7.3 days, running the algorithm on a single desktop computer.

We selected one system from the Pareto front to give an idea of the image quality of the solutions. The selected system is the one indicated in Fig. 4 with a red arrow. The selected system is located in the Pareto front knee region, where solutions present the maximal trade-offs among objectives.

Figure 5 shows the layout and MTF curves for the selected system. The MTF is plotted up to 33.3 lp/mm, which is the Nyquist frequency of the detector used in the instrument. The MTF required for the optical system at the Nyquist frequency is above 0.5 over the whole FOV. We can see from the MTF plot that the selected system is not far from the required image quality. The prescription data for this system are presented in the Appendix.

#### 4.2. Second example, the multi-spectral remote-sensing camera lens

In this second example, the proposed automatic design method was used to search for solutions to a very challenging system in terms of color correction. We wanted a single refractive optical system capable of covering, and providing excellent image quality in, five spectral bands: blue (0.45-0.485-0.52 μm), green (0.52-0.55-0.59 μm), red (0.63-0.66-0.69 μm), NIR (0.77-0.83-0.89 μm), and SWIR (1.5-1.6-1.7 μm). This system was designed for a spaceborne, push-broom, remote-sensing camera to monitor natural resources on Earth. The nominal instrument ground sampling distance was set to 20 m. Table 3 shows the system's basic requirements and constraints.

This is the same lens design problem presented in [38], with a slight difference in the system's effective focal length and FOV, which were adjusted for a more suitable orbit height. As the glass selection for this system was already presented in [38], we skip the glass selection phase here, taking advantage of the results obtained in that paper. Thus, the selected glasses for our design were N-BAF52, N-KZFS11, and N-BAK2.

With the glasses selected, the automatic design method proceeds to the loop of the exploration and exploitation phases. The number of lenses in this example was allowed to vary from 5 to 9. Once again, no automatic stopping criterion was used; the algorithm was allowed to run until the user stopped the execution.

Four instances of the algorithm were executed in parallel. The resultant Pareto front is shown in Fig. 6. For a better visualization of the most promising designs, we truncated the image quality metric scale at 0.2 waves.

Including the exploration and exploitation merit functions, $4.00\times {10}^{8}$ computations were necessary to obtain the Pareto front shown in Fig. 6. The average merit function computation rate in this experiment was $1.33\times {10}^{2}$ per second per core on an Intel quad-core i5-2500 CPU. This experiment took about 8.7 days, running on a single desktop computer.

One system from the Pareto front was selected in order to give an idea of the image quality of the solutions. The red arrow in Fig. 6 indicates the selected system.

Figure 7 shows the layout and MTF curves for the selected system. Two MTF graphs are shown, one for the visible and NIR region and the other for the SWIR band. We separated them into two graphs due to the difference in their Nyquist frequencies: 77.7 lp/mm for VIS-NIR and 38.5 lp/mm for SWIR. The MTF curves reveal a system with very good image quality. The prescription data for this system are presented in the Appendix. Note that, with the exception of the six-lens system, the system we picked is the worst in terms of RMS wavefront error shown in Fig. 6.

It is important to note that the layouts and the image quality presented are exactly those of the systems determined by the automatic design software. We did not fix, change, or further optimize these designs. Indeed, some lens shapes in these designs are unusual: some of the negative lenses are too thick, some positive, low-power meniscus lenses are too thin, and some of the edge spaces between the lenses are too small. However, the designer can fix these small problems "manually" without significantly compromising either the image quality or the system sensitivity. These unusual lens shapes could also be avoided by implementing more elaborate constraints.

From the results of these two experiments, we can observe the vast number of trade-off designs available for each lens design problem. The multi-objective approach provides the designer with great visibility of the available possibilities. Knowing the available possibilities and the compromises involved is important for developing a solution to any engineering problem. Consequently, these examples illustrate the power of the automatic multi-objective design approach proposed in this paper.

Although the solutions found by the automatic design algorithm are not final designs, they provide great starting points for further optimization and give the designer an excellent overview of the available trade-offs in terms of image quality, sensitivity, and number of lenses.

## 5. Conclusion

Studying the available methods of optical design and optimization, we noted that all existing approaches had drawbacks and/or missing elements. We chose to focus on: (i) design from scratch, (ii) a multi-objective approach that considered not only the image quality, but also the system’s complexity and sensitivity, (iii) optimum choice of lens materials for color correction, and (iv) broad exploration of the design space to find the best possible trade-off solutions.

With these points in mind, a novel automatic optical design method was born. The new automatic design method is based on a multi-objective memetic optimization algorithm. The multi-objective approach simultaneously, but separately, optimizes the image quality, tolerance, and complexity of the system. The memetic technique divides the search for optical designs into three different phases: glass selection; exploration; and exploitation.

The proposed method includes scientific contributions on a number of different levels. The overall design strategy has particular characteristics, notably its three phases with their sequences and loops (as seen in Fig. 1), the design from scratch, and its multi-objective approach that considers fabrication issues.

The novel EO algorithm developed for the exploration phase was based on a customization of the GEO algorithm. The newly developed O-GEO algorithm incorporates many features into a single algorithm. The search is driven by three different metrics in a multi-objective approach: image quality, tolerance, and number of lenses. Our method allows the number of lenses to vary during the search, which has only been explored once before, in a computationally demanding implementation [26,27]. Furthermore, O-GEO is one of the few EO algorithms customized for optical design, incorporating diversity operators suited to the problem by mimicking techniques traditionally applied manually by optical designers. This is a new strategy for EO algorithms applied to lens design problems. The codification proposed for the solutions in O-GEO makes many of the algorithm's features possible.

The image quality and sensitivity metrics applied during the exploration phase are further contributions of this work. These computationally efficient metrics provide a good approximation of the system characteristics and robustly handle situations in which a real ray cannot be traced through the system.

The application of the automatic lens design method presented herein only needs the system requirements as input. As a result, our method provides a family of trade-off designs, supplying the designer with a complete picture of the available possibilities.

The design examples demonstrated the power of the proposed method to produce competitive optical designs from scratch. It proved to be a great method for designing insensitive and simple systems, giving the designer many trade-off options with systems presenting different architectures.

Although the proposed method has shown excellent results for the experiments carried out, it does have limitations and room for improvement. At the moment, the method can only deal with rotationally symmetrical systems composed of spherical singlet lenses. Nevertheless, it is quite feasible to improve the method and overcome its current restrictions to incorporate: aspherical surfaces, non-rotationally symmetrical systems, cemented lenses, and catoptric and catadioptric systems.

## Appendix A Prescription data of the lens systems shown in Fig. 5 and Fig. 7

The prescription data for the systems shown in Fig. 5 and Fig. 7 are presented in Table 4 and Table 5, respectively.

## Acknowledgments

B. F. C. Albuquerque gratefully acknowledges the Kidger Optics Associates for supporting these studies through the 2010 Michael Kidger Memorial Scholarship in Optical Design.

## References and links

**1. **J. L. Rayces and M. Rosete-Aguilar, “Critical view of three lens design methods: damped least squares, Spencers and Glatzels,” Proc. SPIE **4927**, 77–89 (2002). [CrossRef]

**2. **M. J. Colaço and G. S. Dulikravich, “A survey of basic deterministic, heuristic and hybrid methods for single objective optimization and response surface generation,” in *Thermal Measurements and Inverse Techniques*, H. R. B. Orlande, O. Fudym, D. Maillet, and R. M. Cotta, eds. (CRC, 2011), pp. 355–405.

**3. **M.-H. Lin, J.-F. Tsai, and C.-S. Yu, “A review of deterministic optimization methods in engineering and management,” Math. Probl. Eng. **2012**, 756023 (2012). [CrossRef]

**4. **M. Cavazzuti, *Optimization Methods, from Theory to Design Scientific and Technological Aspects in Mechanics* (Springer, 2013).

**5. **C. Blum and A. Roli, “Metaheuristics in combinatorial optimization: overview and conceptual comparison,” ACM Comput. Surv. **35**(3), 268–308 (2003). [CrossRef]

**6. **T. F. Gonzalez, *Handbook of Approximation Algorithms and Metaheuristics, Computer & Information Science Series* (Chapman & Hall/CRC, 2007).

**7. **L. Bianchi, M. Dorigo, L. Gambardella, and W. Gutjahr, “A survey on metaheuristics for stochastic combinatorial optimization,” Nat. Comput. **8**(2), 239–287 (2009). [CrossRef]

**8. **E.-G. Talbi, *Metaheuristics: From Design to Implementation* (J. Wiley & Sons, 2009).

**9. **V. K. Viswanathan, I. O. Bohachevsky, and T. P. Cotter, “An attempt to develop an “intelligent” lens design program,” Proc. SPIE **0554**, 10–17 (1986). [CrossRef]

**10. **M. van Turnhout, P. van Grol, F. Bociort, and H. P. Urbach, “Obtaining new local minima in lens design by constructing saddle points,” Opt. Express **23**(5), 6679–6691 (2015). [CrossRef] [PubMed]

**11. **D. Shafer, “Global optimization in optical design,” Comput. Phys. **8**(2), 188–195 (1994). [CrossRef]

**12. **H. Qin, “Particle swarm optimization applied to automatic lens design,” Opt. Commun. **284**(12), 2763–2766 (2011). [CrossRef]

**13. **Z. Li and B. Yang, “Optical design and optimization of a miniature projector with liquid lenses via modified ant colony algorithm,” Opt. Eng. **51**(7), 073001 (2012). [CrossRef]

**14. **E. G. M. Lacerda and A. C. P. L. F. Carvalho, "Introdução aos algoritmos genéticos," in *Sistemas Inteligentes: Aplicações a Recursos Hídricos e Ciências Ambientais, Coleção ABRH de Recursos Hídricos; 7,* C. O. Galvão and M. Valença, eds. (UFRGS, 1999), pp. 99–150.

**15. **A. E. Eiben and J. E. Smith, *Introduction to Evolutionary Computing, Natural Computing Series* 1st ed. (Springer, 2003).

**16. **D. Vasiljevic, *Classical and Evolutionary Algorithms in The Optimization of Optical Systems, Genetic Algorithms and Evolutionary Computation Series*, 1st ed. (Kluwer Academic Publishers, 2002).

**17. **D. C. van Leijenhorst, C. B. Lucasius, and J. M. Thijssen, “Optical design with the aid of a genetic algorithm,” Biosystems **37**(3), 177–187 (1996). [CrossRef] [PubMed]

**18. **I. Ono, S. Kobayashi, and K. Yoshida, “Global and multi-objective optimization for lens design by real-coded genetic algorithms,” Proc. SPIE **3482**, 110–121 (1998). [CrossRef]

**19. **K. E. Moore, “Algorithm for global optimization of optical systems based on genetic competition,” Proc. SPIE **3780**, 40–47 (1999). [CrossRef]

**20. **X. Chen and K. Yamamoto, “An experiment in genetic optimization in lens design,” J. Mod. Opt. **44**(9), 1693–1702 (1997). [CrossRef]

**21. **J. Beaulieu, C. Gagné, and M. Parizeau, “Lens system design and re-engineering with evolutionary algorithms,” in Proceedings of Genetic and Evolutionary Computation Conference (GECCO, 2002), pp. 155–162.

**22. **J. Sakuma and S. Kobayashi, “Latent variable crossover for k-tablet structures and its application to lens design problems,” in Proceedings of Conference on Genetic and Evolutionary Computation (GECCO) (ACM, 2005), pp. 1347–1354. [CrossRef]

**23. **Y.-C. Fang, C.-M. Tsai, J. Macdonald, and Y.-C. Pai, “Eliminating chromatic aberration in Gauss-type lens design using a novel genetic algorithm,” Appl. Opt. **46**(13), 2401–2410 (2007). [CrossRef] [PubMed]

**24. **S. Thibault, C. Gagné, J. Beaulieu, and M. Parizeau, “Evolutionary algorithms applied to lens design,” Proc. SPIE **5962**, 66–76 (2005). [CrossRef]

**25. **Y. Nagata, “The lens design using the CMA-ES algorithm,” in *Genetic and Evolutionary Computation-GECCO 2004, Vol. 3103 of Lecture Notes in Computer Science,* K. Deb, ed. (Springer, 2004), pp. 1189–1200.

**26. **J. R. Koza, S. H. Al-Sakran, and L. W. Jones, “Automated re-invention of six patented optical lens systems using genetic programming,” in Proceedings of Conference on Genetic and Evolutionary Computation (GECCO) (ACM, 2005), pp. 1953–1960. [CrossRef]

**27. **L. W. Jones, S. H. Al-Sakran, and J. R. Koza, “Automated synthesis of a human-competitive solution to the challenge problem of the 2002 international optical design conference by means of genetic programming and a multi-dimensional mutation operation,” in Proceedings of Conference on Genetic and Evolutionary Computation (GECCO) (ACM, 2006), pp. 823–830. [CrossRef]

**28. **L. N. Hazra and S. Chatterjee, “A Prophylactic strategy for global synthesis in lens design,” Opt. Rev. **12**(3), 247–254 (2005). [CrossRef]

**29. **J. Branke, K. Deb, K. Miettinen, and R. Slowinski, *Multiobjective Optimization: Interactive and Evolutionary Approaches* (Springer-Verlag, 2008).

**30. **S. Joseph, H. W. Kang, and U. K. Chakraborty, “Optical design with epsilon-dominated multi-objective evolutionary algorithm,” in *Adaptive and Natural Computing Algorithms, Vol. 4431 of Lecture Notes in Computer Science*, B. Beliczynski, A. Dzielinski, M. Iwanowski, and B. Ribeiro, eds. (Springer, 2007), pp. 77–84.

**31. **C. Gagné, J. Beaulieu, M. Parizeau, and S. Thibault, “Human-competitive lens system design with evolution strategies,” Appl. Soft Comput. **8**(4), 1439–1452 (2008). [CrossRef]

**32. **L. Li, Q.-H. Wang, X.-Q. Xu, and D.-H. Li, “Two-step method for lens system design,” Opt. Express **18**(12), 13285–13300 (2010). [CrossRef] [PubMed]

**33. **I. Ono, Y. Tatsuzawa, S. Kobayashi, and K. Yoshida, “Designing lens systems taking account of glass selection by real-coded genetic algorithms,” in Proceedings of IEEE International Conference on Systems, Man, and Cybernetics (SMC) (IEEE, 1999), Vol. 3, pp. 592–597. [CrossRef]

**34. **J. C. Tesar, “Mozart, dice, and glass selection,” Proc. SPIE **4092**, 1–6 (2000).

**35. **F. L. De Sousa, F. M. Ramos, P. Paglione, and R. M. Girardi, “New stochastic algorithm for design optimization,” AIAA J. **41**(9), 1808–1818 (2003). [CrossRef]

**36. **P. Moscato, “On evolution, search, optimization, genetic algorithms and martial arts,” Tech. Rep., CALTECH Report 826 (1989).

**37. **P. Moscato and C. Cotta, “A gentle introduction to memetic algorithms,” in *Handbook of Metaheuristics, Vol. 57 of International Series in Operations Research & Management Science*, F. Glover and G. A. Kochenberger, eds. (Springer, 2003), pp. 105–144.

**38. **B. F. C. de Albuquerque, J. Sasian, F. L. de Sousa, and A. S. Montes, “Method of glass selection for color correction in optical system design,” Opt. Express **20**(13), 13592–13611 (2012). [CrossRef] [PubMed]

**39. **R. L. Galski, F. L. De Sousa, and F. M. Ramos, “Application of a new evolutionary algorithm to the optimum design of a remote sensing satellite constellation,” in *Proceedings of 5th International Conference on Inverse Problems in Engineering: Theory and Practice* (Leeds University, 2005), Vol. 2, paper G01.

**40. **I. Mainenti-Lopes, F. L. De Sousa, and L. C. G. de Souza, “The generalized extremal optimization with real codification,” in *Proceedings of International Conference on Engineering Optimization (EngOpt)* (EngOpt, 2008).

**41. **M. Isshiki, L. Gardner, and G. G. Gregory, “Automated control of manufacturing sensitivity during optimization,” Proc. SPIE **5249**, 343–352 (2004). [CrossRef]

**42. **M. Jeffs, “Reduced manufacturing sensitivity in multi-element lens systems,” in *International Optical Design Conference*, 2002 OSA Technical Digest Series (Optical Society of America, 2002), paper IMC4. [CrossRef]

**43. **K. Fuse, “Method for designing a refractive or reflective optical system and method for designing a diffraction optical element,” United States Patent 6,567,226 (2003).

**44. **J. P. McGuire, Jr., “Designing easily manufactured lenses using a global method,” in *International Optical Design Conference*, 2006 OSA Technical Digest Series (Optical Society of America, 2006), paper TuA6. [CrossRef]

**45. **M. Isshiki, D. Sinclair, and S. Kaneko, “Lens design: global optimization of both performance and tolerance sensitivity,” in *International Optical Design Conference*, 2006 OSA Technical Digest Series (Optical Society of America, 2006), paper TuA5.

**46. **A. Epple and H. Wang, “Design to manufacture - from the perspective of optical design and fabrication,” in *Optical Fabrication and Testing Conference*, 2008 OSA Technical Digest Series (Optical Society of America, 2008), paper OFB1. [CrossRef]

**47. **B. F. C. de Albuquerque, L.-Y. Liao, A. S. Montes, F. L. de Sousa, and J. Sasián, “A multi-objective approach in the optimization of optical systems taking into account tolerancing,” Proc. SPIE **8131**, 813105 (2011). [CrossRef]

**48. **W. Smith, *Modern Lens Design*, 2nd ed. (McGraw-Hill, 2004).

**49. **A. Girard, “Calcul automatique en optique géométrique,” Rev. Opt. Theor. Instrum. **37**, 225–241 (1958).

**50. **J. Sasián, “Theory of sixth-order wave aberrations,” Appl. Opt. **49**(16), D69–D95 (2010). [CrossRef] [PubMed]

**51. **V. N. Mahajan, *Optical Imaging and Aberrations* (SPIE Press, 1998).

**52. **G. W. Forbes, “Optical system assessment for design: numerical ray tracing in the Gaussian pupil,” J. Opt. Soc. Am. A **5**(11), 1943–1956 (1988). [CrossRef]

**53. **J. L. Rayces, “Classical methods of optimization in lens design,” 5890 N. Placita Alberca, Tucson-AZ, 85718 (personal communication, 2009).

**54. **J. Sasian and M. R. Descour, “Power distribution and symmetry in lens systems,” Opt. Eng. **37**(3), 1001–1004 (1998). [CrossRef]

**55. **L. Wang and J. Sasian, “Merit figures for fast estimating tolerance sensitivity in lens systems,” Proc. SPIE **7652**, 76521P (2010). [CrossRef]

**56. **J. Sasian, “Tolerance II,” Opt 517 Lens Design lecture notes (College of Optical Sciences, The University of Arizona, Tucson, AZ, Fall 2011).

**57. **J. Sasian, “How to approach the design of a bilateral symmetric optical system,” Opt. Eng. **33**(6), 2045–2061 (1994). [CrossRef]

**58. **SCHOTT North America, Inc., “Optical glass catalogue - ZEMAX format,” status as of 13 September 2011, http://www.us.schott.com/advanced_optics/english/tools_downloads/download/index.html?PHPSESSID=utt2cbk96nlk3gf7gjpb7ggt54#Optical%20Glass