## Abstract

Phase elements can be used in optical systems to achieve design goals similar to those of traditional geometric optical elements. If we replace traditional geometric optical elements in optical systems with phase elements (such as diffractive optical elements and metasurfaces) that have phase functions loaded on geometric surface substrates, it is possible to generate imaging systems that offer better performance, increased compactness, lighter weight, and easier alignment and manufacturing than conventional imaging systems. Here we propose a design method for imaging systems consisting of freeform-surface-substrate phase elements. The design process begins from an initial system that uses simple geometric planes or other predefined geometric surfaces without phase functions. After point-by-point construction and iteration steps, the geometric substrate surfaces and closed-form phase functions can be calculated quickly and efficiently. The resulting design can be used as a good starting point for further optimization. To illustrate the generality and feasibility of the proposed design method, we present two high-performance compact systems as design examples. Both systems meet the design requirements, with small distortion after optimization, and their modulation transfer function (MTF) curves are close to the diffraction limit. This design framework can be used to design next-generation imaging systems using phase elements for applications including near-eye displays, high-performance cameras, and remote sensing and detection. The proposed method also offers insight into the design of imaging systems that are constrained to conformal substrate shapes or integrated substrates.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Freeform optical surfaces provide more degrees of freedom for optical design and thus offer a stronger ability to correct aberrations than spherical and aspherical surfaces. In recent years, with the continuing development of advanced fabrication technology, freeform surfaces have been used successfully in both the imaging and non-imaging fields. In the imaging field, these surfaces can be used in applications such as near-eye displays [1–3] and high-performance reflective systems [4–9]. The use of freeform optical surfaces is regarded as a revolution in the optical design field [10]. However, along with the interest in these surfaces, challenges have also arisen. As the needs of both scientists and consumers expand, freeform surfaces are expected to be used to design high-performance systems with advanced specifications and ultra-compact configurations. In some configurations, the light inside the system must be deflected at large angles, which can lead to unphysical surface shapes. In some extreme design tasks, the variables offered by these freeform surfaces are insufficient, which leads to the use of more elements in the system design and increased system complexity, size, and mass.

Another recent trend in optical design has involved the use of phase elements [11–13]. In terms of regulating the wavefront, traditional geometric optical elements can also be considered "phase elements". However, to distinguish elements that have a phase function loaded on a substrate from pure geometric surfaces, the phase elements discussed in this paper are defined as diffractive optical elements (DOEs) or metasurfaces [11,14–16], which can manipulate the wavefront in different ways. These elements can achieve design goals similar to those of traditional geometric optical elements to some degree, although the theoretical basis used for the bending of light is very different. Traditional geometric optical elements deflect light rays through reflection or refraction at the interfaces to control the wavefront. A diffractive optical element redirects an optical beam using diffraction rather than refraction or reflection. Metasurfaces control the wavefront by introducing spatial variations generated by effective dielectric and magnetic properties. The flat phase element, which has a phase function applied onto a planar geometric substrate, is the most commonly used type of phase element. These elements enable light deflection along unconventional paths when compared with geometric surfaces, which is very useful in the design of compact and nonsymmetric systems. In addition, the elements can be ultra-thin, which means that systems consisting of these elements are small in size and lightweight. Although flat phase elements offer many advantages, it can be inferred that the aberration properties of their phase functions are similar to those of freeform surfaces. The degrees of design freedom obtained may not be sufficient to complete complex design tasks. In addition, if flat phase elements are used to realize strong light convergence or divergence, the corresponding phase structures are difficult to fabricate.
To deal with the problems with freeform surfaces and flat phase elements described above, we can apply the phase function to a freeform geometric substrate rather than a plane. In this paper, we call this type of phase element a freeform-surface-substrate phase element (FSP element). When the freeform surface and the phase function are combined into a single element, most of their advantages can also be integrated into this element. On the one hand, because an FSP element consists of both a geometric surface substrate and a phase function, the design freedom is significantly increased (a significant benefit for aberration correction), thereby improving the imaging quality of the system. On the other hand, because the phase function and the geometric freeform surface are integrated in a single element, the total number of elements required for the system can be reduced, which increases the system compactness and reduces the system weight. In summary, use of FSP elements in imaging system design can generate high-performance systems that combine very compact configurations with advanced system parameters.

Traditionally, the design of imaging systems that consist of curved phase elements (e.g., hybrid diffractive-refractive (HDR) elements) begins from an existing starting point that has similar system specifications and a similar configuration to those of the design requirements. Further optimization is then performed to obtain the final design result. However, for a complex system design containing FSP elements and using advanced system specifications, feasible starting points are very rare. Therefore, the design process will be very long and may even fail. To deal with this problem, direct or point-by-point design approaches may provide a good solution. For systems using only geometrical freeform surfaces, representative methods include the partial differential equations (PDEs) design method [17,18], the simultaneous multiple surface (SMS) method [19], and the construction-iteration method (CI-3D) [8,20,21]. For systems with only flat phase elements, a related point-by-point method has also been proposed [11]. Mendes-Lopes et al. extended the SMS method to the design of curved phase elements [22], in which kinoform surfaces are calculated along with the remaining surfaces of the optical system. This approach realized good control of two or three wavefronts using only one or two surfaces. However, the number of field points considered in the design process is restricted and the system configuration is coaxial. A general method should be able to handle design problems that consider the use of multiple FSP elements, nonsymmetric configurations, and rays from multiple fields and different pupil coordinates.

Motivated by the above problems, in this paper we propose a design method for imaging systems that consist of freeform-surface-substrate phase (FSP) elements. We focus on the fundamental design of the geometric substrate surfaces and closed-form phase functions. The design process starts from an initial system that uses simple geometric planes without phase functions. Based on the given design requirements, the geometric freeform surface substrate shapes and phase functions are all calculated using a point-by-point design process that considers rays from multiple fields and different pupil coordinates. After point-by-point construction and iteration steps, the geometric substrate surfaces and closed-form phase functions of multiple FSP elements in nonsymmetric system configurations can be calculated quickly and efficiently. The resulting design can be regarded as a good starting point for further system optimization. In fact, the substrates of the phase elements are not restricted to the freeform surfaces generated during the design process, and other predefined or conformal surface shapes can be used. The feasibility and effectiveness of the proposed method are demonstrated using two high-performance design examples. The first is a Wetherell-type reflective system that uses three FSP elements. The second uses a design strategy that integrates multiple phase functions on a single conformal spherical substrate. The design framework presented here can be used to design next-generation imaging systems based on phase elements for use in areas including near-eye displays, high-performance cameras, and remote sensing and detection.

## 2. Design method

#### 2.1 Ray-tracing fundamentals

Generally, an actual imaging system works with a light beam of finite width and images an object of a certain size. When designing non-rotationally symmetric imaging systems, this means that the feature light rays considered in the design process should come from different fields and different aperture positions. A polar ray grid is used to define the feature light rays in systems with the commonly used circular aperture. In each field, the aperture is divided into *QA* polar angles, and along the radial direction of each angle, the radius is divided into *QP* pupil positions. Then, if *QF* fields are to be considered, the number of feature light rays is given by *QT* = *QA*×*QP*×*QF*. If the system is symmetric about the meridional plane, the rays contained in half the field-of-view (FOV) are sufficient for the design process.
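As a concrete illustration, the polar-grid sampling described above can be sketched in a few lines of NumPy. The function name, the unit pupil radius, and the sample fields below are illustrative assumptions, not part of the method itself:

```python
import numpy as np

def polar_pupil_grid(qa=16, qp=7, pupil_radius=1.0):
    """Sample a circular pupil with QA polar angles and QP radial positions.

    Returns an array of (x, y) pupil coordinates; QA, QP, and the radius
    are illustrative defaults.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, qa, endpoint=False)
    # Equidistant radial samples from the center outward (the chief ray at
    # r = 0 can be added separately if needed).
    radii = pupil_radius * np.arange(1, qp + 1) / qp
    pts = [(r * np.cos(t), r * np.sin(t)) for t in angles for r in radii]
    return np.asarray(pts)

# QT = QA x QP x QF feature rays for QF sampled fields:
fields = [(0.0, 0.0), (0.0, 3.0), (0.0, -3.0)]   # (wx, wy) in degrees
rays = [(f, p) for f in fields for p in polar_pupil_grid()]
assert len(rays) == 16 * 7 * len(fields)
```

Each entry of `rays` pairs a field direction with a pupil coordinate, which is exactly the indexing used for the *QT* feature rays in the construction process.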

The ultimate goal of optical design is to redirect the light rays that are emitted from the object space to the corresponding ideal image points on the image plane. When the feature light rays have been defined, the propagation of a general feature light ray ${R_i}$ ($i$ = 1, 2, …, *QT*) at an FSP element must be investigated. As illustrated in Fig. 1, it is assumed that ${R_i}$ comes from point ${{\boldsymbol S}_i}$, intersects the FSP element at point ${{\boldsymbol P}_i}$ in the real three-dimensional space (RS), and is then redirected to point ${{\boldsymbol E}_i}$. The unit vectors for the incident direction and the outgoing direction can be expressed as ${{\boldsymbol r}_i} = ({{\boldsymbol P}_i} - {{\boldsymbol S}_i})/|{{\boldsymbol P}_i} - {{\boldsymbol S}_i}|= [r_{i,x}, r_{i,y}, r_{i,z}]^\textrm{T}$ and ${\boldsymbol r}_i^{\prime} = ({{\boldsymbol E}_i} - {{\boldsymbol P}_i})/|{{\boldsymbol E}_i} - {{\boldsymbol P}_i}|= [r_{i,x}^{\prime}, r_{i,y}^{\prime}, r_{i,z}^{\prime}]^\textrm{T}$, respectively. The optical path length (OPL) from ${{\boldsymbol S}_i}$ to ${{\boldsymbol E}_i}$ can be given by:

$$OPL = n|{{\boldsymbol P}_i} - {{\boldsymbol S}_i}|+ n^{\prime}|{{\boldsymbol E}_i} - {{\boldsymbol P}_i}|+ \frac{m\lambda}{2\pi}\phi({{\boldsymbol P}_i}), \qquad (1)$$

where $n$ and $n^{\prime}$ are the refractive indices before and after the element, $m$ is the diffraction order, $\lambda$ is the wavelength, and $\phi$ is the phase function of the FSP element and is a function of the local *x* and *y* coordinates. $\phi({{\boldsymbol P}_i})$ denotes the phase value at point ${{\boldsymbol P}_i}$. Based on Fermat's principle [23], we have:

$$\delta(OPL) = 0. \qquad (2)$$
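As a numerical sketch, the OPL described above (geometric path lengths plus the equivalent path contributed by the phase function, *mλϕ*/2π) might be computed as follows. The function name and default arguments are illustrative, and *ϕ* is assumed to be expressed in radians:

```python
import numpy as np

def opl(S, P, E, phi_at_P, n=1.0, n_prime=1.0, m=1, wavelength=1.064e-3):
    """Optical path length from S to E through point P on an FSP element:
    geometric paths plus the equivalent path m*lambda*phi/(2*pi)
    contributed by the phase function (phi in radians; lengths in mm)."""
    S, P, E = map(np.asarray, (S, P, E))
    geometric = n * np.linalg.norm(P - S) + n_prime * np.linalg.norm(E - P)
    return geometric + m * wavelength * phi_at_P / (2.0 * np.pi)

# With phi = 0 the OPL reduces to the plain geometric path:
assert np.isclose(opl([0, 0, 0], [0, 0, 1], [0, 0, 2], 0.0), 2.0)
```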

When Eqs. (1) and (2) are taken together, the vector form of the general reflective/refractive equation can be given as [11]:

$$n^{\prime}{\boldsymbol r}_i^{\prime} \times {{\boldsymbol N}_i} = n{{\boldsymbol r}_i} \times {{\boldsymbol N}_i} + \frac{m\lambda}{2\pi}[{\boldsymbol T}\nabla\phi({{\boldsymbol P}_i})] \times {{\boldsymbol N}_i}, \qquad (3)$$

where ${{\boldsymbol N}_i}$ denotes the global normal of the geometric substrate at point ${{\boldsymbol P}_i}$ in the RS. If the analytical expression for the freeform substrate shape is $z = f(x, y)$, then ${{\boldsymbol N}_i} = [\frac{\partial f}{\partial x}|_{{{\boldsymbol P}_i}},\;\frac{\partial f}{\partial y}|_{{{\boldsymbol P}_i}},\; -1]^\textrm{T}$. $\nabla\phi({{\boldsymbol P}_i})$ represents the gradient vector of the phase function at point ${{\boldsymbol P}_i}$. ${\boldsymbol T}$ is a transformation matrix that converts $\nabla\phi({{\boldsymbol P}_i})$ from the local coordinate system to the global coordinate system. As the systems considered here are symmetric about the meridional plane, each element is tilted within the *yOz* plane and ${\boldsymbol T}$ reduces to a rotation about the *x*-axis:

$${\boldsymbol T} = \left[ {\begin{array}{ccc} 1&0&0\\ 0&{\cos \alpha }&{ - \sin \alpha }\\ 0&{\sin \alpha }&{\cos \alpha } \end{array}} \right], \qquad (4)$$

where $\alpha$ is the tilt angle between the *y*-axis of the local coordinate system and the *y*-axis of the global coordinate system. Given that $\lambda$, ${{\boldsymbol r}_i}$, and ${\boldsymbol r}_i^{\prime}$ have been obtained, then according to Eq. (3), ${{\boldsymbol N}_i}$ can be calculated when $\nabla\phi({{\boldsymbol P}_i})$ is obtained and vice versa.
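The relation between the substrate normal and the phase gradient can be sketched numerically. The helper below solves the cross-product-free form of the generalized law, $n^{\prime}{\boldsymbol r}^{\prime} - n{\boldsymbol r} - (m\lambda/2\pi){\boldsymbol T}\nabla\phi = \varepsilon{\boldsymbol N}$, for the direction of ${\boldsymbol N}_i$, assuming ${\boldsymbol T}$ is a rotation about the *x*-axis by the tilt angle *α* and that *ϕ* is in radians; all names and defaults are illustrative:

```python
import numpy as np

def substrate_normal(r, r_prime, grad_phi_local, alpha=0.0,
                     n=1.0, n_prime=1.0, m=1, wavelength=1.064e-3):
    """Unit direction of the substrate normal N_i.

    grad_phi_local = [dphi/dx, dphi/dy, 0] in the local frame (phi in
    radians); T rotates it into the global frame (tilt alpha about x)."""
    c, s = np.cos(alpha), np.sin(alpha)
    T = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,  -s],
                  [0.0,   s,   c]])
    v = (n_prime * np.asarray(r_prime, float) - n * np.asarray(r, float)
         - m * wavelength / (2.0 * np.pi) * T @ np.asarray(grad_phi_local, float))
    return v / np.linalg.norm(v)

# Sanity check: with no phase term and n = n' = 1 this reduces to the
# ordinary law of reflection, whose normal bisects r and -r'.
N = substrate_normal(r=[0, 0, 1], r_prime=[0, 0, -1], grad_phi_local=[0, 0, 0])
assert np.allclose(np.abs(N), [0, 0, 1])
```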

#### 2.2 Construction process

In many cases, the entire design process starts from a simple initial system that consists of pure geometrical planes. These planes should be located approximately at their intended positions and should redirect the light rays approximately along their intended paths. In other cases, the initial system may be a system with a specific optical power or an intermediate design. Occasionally, the substrate for the phase element may be determined in advance. Nevertheless, the geometrical shape of the substrate and the phase functions in the system are constructed one-by-one in this design stage, starting from the initial system and following a predefined order. During the design of a specific freeform surface substrate or a specific phase function, all other geometric substrates and phase functions retain their current states. In other words, the locations of these elements and the mathematical expressions for the freeform surface shapes and the phase functions are all known. The freeform surface substrate is constructed using discrete data points in the RS. The function graph or profile of the phase function can also be regarded as a three-dimensional surface in a virtual 3D space. In this work, we call this space the phase function space (PS), as shown in Fig. 2. If we can obtain the data points on the profile of the phase function in the PS, then the phase function can also be constructed. For a point ${{\boldsymbol P}^\ast} = [x^\ast, y^\ast, f(x^\ast, y^\ast)]^\textrm{T}$ on the geometric substrate in the RS, the phase value at this point is $\phi({{\boldsymbol P}^\ast}) = \phi(x^\ast, y^\ast)$. Here, $x^\ast$ and $y^\ast$ are the *x* and *y* coordinates, respectively, of point ${{\boldsymbol P}^\ast}$ in the local coordinate system. The point on the phase profile in the PS that corresponds to ${{\boldsymbol P}^\ast}$ is ${\boldsymbol P}_\phi^\ast = [x^\ast, y^\ast, \phi(x^\ast, y^\ast)]^\textrm{T}$. The projection plane shown in Fig. 2 can be regarded as a plane that lies parallel to the local *xOy* plane, which will be used during construction of the phase function.

### 2.2.1 Construction of the geometric surface substrate

During generation of the geometric substrate shapes or the phase functions, the known parameters and unknown parameters are different. Here we begin by exploring the construction of the geometric freeform surface substrate. When designing the geometrical substrate for the *Ω*th element, the freeform surface substrate shapes and the phase functions of the other FSP elements are all known (including the phase function of the *Ω*th element). The case where *ϕ* (*x*, *y*) =0 for the *Ω*th element indicates that the element is degraded to a pure geometric freeform surface with no phase function.

During the construction process, a ray from the total feature ray set is selected to act as the first feature ray *R*_{1}. The corresponding intersection of *R*_{1} with the initial element in the RS is regarded as the first feature point *P*_{1}, as shown in Fig. 3(a). When the *i*th feature point ${{\boldsymbol P}_i}$ has been calculated, as illustrated in Fig. 3(b), it is expected that the corresponding feature ray ${R_i}$ can be redirected toward its ideal image point ${{\boldsymbol I}_{i,\textrm{ideal}}}$ on the image plane. According to Fermat's principle, the variation of the optical path length (*OPL*) between ${{\boldsymbol P}_i}$ and ${{\boldsymbol I}_{i,\textrm{ideal}}}$ is zero. Note that the *OPL* should contain the geometrical path lengths along with the equivalent path lengths induced by the phase functions on the FSP elements located between the *Ω*th element and the image plane. In this way, we can determine the ideal optical path for ${R_i}$ between ${{\boldsymbol P}_i}$ and ${{\boldsymbol I}_{i,\textrm{ideal}}}$ and thus obtain the intersection ${{\boldsymbol E}_i}$ of ${R_i}$ with the element located after the *Ω*th element in the RS. The intersection ${{\boldsymbol S}_i}$ of ${R_i}$ with the element located before the *Ω*th element in the RS can be calculated via ray tracing. We can then calculate the unit incident direction vector ${{\boldsymbol r}_i}$ and the outgoing vector ${\boldsymbol r}_i^{\prime}$. Equation (3) can then be transformed into the following equation without a cross product:

$$n^{\prime}{\boldsymbol r}_i^{\prime} - n{{\boldsymbol r}_i} - \frac{m\lambda}{2\pi}{\boldsymbol T}\nabla\phi({{\boldsymbol P}_i}) = \varepsilon{{\boldsymbol N}_i}, \qquad (5)$$

where $\varepsilon$ is a scale factor and ${{\boldsymbol N}_i}$ is the surface normal vector at ${{\boldsymbol P}_i}$ in the RS; the left-hand side and ${{\boldsymbol N}_i}$ have the same direction. Because the analytical expression for the phase function of the *Ω*th element is known, $\nabla\phi({{\boldsymbol P}_i}) = [\frac{\partial\phi}{\partial x}|_{{{\boldsymbol P}_i}},\;\frac{\partial\phi}{\partial y}|_{{{\boldsymbol P}_i}},\;0]^\textrm{T}$ can be calculated. We can then obtain ${{\boldsymbol N}_i}$. The next feature point, ${{\boldsymbol P}_{i+1}}$, can be determined using the "nearest-ray algorithm" [5], as illustrated in Fig. 3(b). ${{\boldsymbol P}_{i+1}}$ is calculated as the intersection of the corresponding feature ray ${R_{i+1}}$ with the tangent plane of the nearest data point among the *i* data points that have been calculated ahead of ${{\boldsymbol P}_{i+1}}$. The above steps are repeated until the coordinates and normal vectors have been generated for all the feature points. The closed-form expression for the geometric substrate shape is obtained via a fitting process that considers both the coordinates and the normals of the data points in the RS, as shown in Fig. 3(c). The original geometric substrate shape of the *Ω*th element in the initial system can then be updated.
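A minimal sketch of the nearest-ray step follows, under the assumption that "nearest" means the smallest point-to-ray distance in 3D; the function and variable names are illustrative:

```python
import numpy as np

def next_feature_point(ray_origin, ray_dir, points, normals):
    """Intersect the next feature ray with the tangent plane of the
    nearest already-computed data point (a sketch of the nearest-ray
    step; distance is the 3D point-to-line distance)."""
    o = np.asarray(ray_origin, float)
    d = np.asarray(ray_dir, float)
    d = d / np.linalg.norm(d)
    # Point-to-ray distances for every known data point.
    diffs = np.asarray(points, float) - o
    dists = np.linalg.norm(np.cross(diffs, d), axis=1)
    k = int(np.argmin(dists))
    Pk = np.asarray(points[k], float)
    Nk = np.asarray(normals[k], float)
    # Ray / tangent-plane intersection: (o + t*d - Pk) . Nk = 0.
    t = np.dot(Pk - o, Nk) / np.dot(d, Nk)
    return o + t * d

# A ray parallel to the z-axis hitting the tangent plane z = 0 of the
# data point at the origin:
P = next_feature_point([0.5, 0.0, 1.0], [0.0, 0.0, -1.0],
                       points=[[0.0, 0.0, 0.0]], normals=[[0.0, 0.0, 1.0]])
assert np.allclose(P, [0.5, 0.0, 0.0])
```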

The shape of the freeform surface substrate and the phase function of a specific element are described in the same local coordinate system. The origin of the local coordinate system for both the freeform substrate and the phase function of each element is expected to be located at the intersection of the chief ray of the central field with the element. The initial system should fulfill this condition. However, during the successive constructions of the substrate shapes and the phase functions in the system, the actual intersection of the chief ray of the central field with each element may deviate from this origin. After a specified geometric substrate shape is generated, the local coordinate system for this element is updated to match the new intersection of the chief ray. As a result, the mathematical expression of the phase function of this element will then in fact be described in the new local coordinate system. However, the new geometric substrate shape was calculated based on the phase function described in the initial local coordinate system. The mismatch between these two local coordinate systems will affect the expected ray deflection, thus leading to imaging performance degradation. To correct this error, each time the geometric surface shape is calculated, a "shifted" version of the phase function must be generated in the new coordinate system to match the original phase function. For example, if the local coordinate system shifts by a distance *δ* in the +*y* direction after surface shape construction and the original phase function is *ϕ*(*x*, *y*), then the new phase function described in the new coordinate system should be *ϕ*(*x*, *y*+*δ*).
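For a phase function stored as XY-polynomial coefficients, the "shifted" version *ϕ*(*x*, *y*+*δ*) can be obtained by re-expanding each term with the binomial theorem. The helper below is illustrative, not code from the paper:

```python
from math import comb

def shift_phase_y(coeffs, delta):
    """Re-express phi(x, y + delta) in the shifted local coordinate system.

    coeffs maps (m, n) -> A for terms A * x**m * y**n.  Expanding
    (y + delta)**n with the binomial theorem redistributes each
    coefficient onto lower powers of y."""
    out = {}
    for (m, n), A in coeffs.items():
        for k in range(n + 1):
            key = (m, k)
            out[key] = out.get(key, 0.0) + A * comb(n, k) * delta ** (n - k)
    return out

# phi = y**2 shifted by delta = 1 becomes y**2 + 2*y + 1:
assert shift_phase_y({(0, 2): 1.0}, 1.0) == {(0, 2): 1.0, (0, 1): 2.0, (0, 0): 1.0}
```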

### 2.2.2 Construction of the phase function

Next, we explore the construction of the phase function. When designing the freeform phase function of the *Ω*th element, the freeform surface substrate shapes and phase functions of the other FSP elements are all known (including the geometric surface substrate for the *Ω*th element). For ease of understanding, the local *x* and *y* coordinates of the feature points are projected into a space called the projection space. As shown in Fig. 4, the point on the phase profile in the PS that corresponds to ${{\boldsymbol P}_i} = [x_i^{\textrm{local}}, y_i^{\textrm{local}}, z_i^{\textrm{local}}]^\textrm{T}$ in the RS is given by ${{\boldsymbol P}_{i,\phi}} = [x_i^{\textrm{local}}, y_i^{\textrm{local}}, \phi({{\boldsymbol P}_i})]^\textrm{T}$, where the local *x* and *y* coordinates are the same. The case where the surface normal ${{\boldsymbol N}_i}$ is a uniform vector $[0, \sin\alpha, \cos\alpha]^\textrm{T}$ for all data points on the *Ω*th element in the RS indicates that this element has degenerated to a flat phase element.

During construction, the first feature ray *R*_{1} is selected and the corresponding intersection of *R*_{1} with the FSP element in the RS is *P*_{1}. The point in the PS that corresponds to *P*_{1} is ${{\boldsymbol P}_{1,\phi}}$, as shown in Fig. 4(a). Based on Fermat's principle, when the *i*th feature point ${{\boldsymbol P}_i}$ in the RS and the point ${{\boldsymbol P}_{i,\phi}}$ in the PS have been obtained, the ideal optical path for ${R_i}$ between ${{\boldsymbol P}_i}$ and ${{\boldsymbol I}_{i,\textrm{ideal}}}$ can be determined. Given the geometric substrate shape $z(x, y)$ of the *Ω*th element, we can then calculate the surface normal ${{\boldsymbol N}_i} = [N_i^x, N_i^y, N_i^z]^\textrm{T} = [\frac{\partial z({x_i},{y_i})}{\partial x}, \frac{\partial z({x_i},{y_i})}{\partial y}, -1]^\textrm{T}$ at point ${{\boldsymbol P}_i}$ using differential geometry. The value of ${{\boldsymbol N}_i}$ here can also be obtained via a real ray tracing process in optical design software. By combining Eqs. (1)–(5), we obtain:

$$\nabla\phi({{\boldsymbol P}_i}) = \frac{2\pi}{m\lambda}{{\boldsymbol T}^{-1}}(n^{\prime}{\boldsymbol r}_i^{\prime} - n{{\boldsymbol r}_i} - \varepsilon{{\boldsymbol N}_i}), \qquad (6)$$

where the scale factor $\varepsilon$ is fixed by the condition that the local *z*-component of $\nabla\phi({{\boldsymbol P}_i})$ is zero; its specific value has no further practical use. Then, $\nabla\phi({{\boldsymbol P}_i}) = [\frac{\partial\phi}{\partial x},\;\frac{\partial\phi}{\partial y},\;0]^\textrm{T}$ can be calculated. In other words, the "surface normal" of the phase profile ${{\boldsymbol N}_{i,\phi}} = [\frac{\partial\phi}{\partial x}|_{{{\boldsymbol P}_i}}, \frac{\partial\phi}{\partial y}|_{{{\boldsymbol P}_i}}, -1]^\textrm{T}$ at ${{\boldsymbol P}_{i,\phi}}$ in the PS can be obtained. The next feature ray ${R_{i+1}}$ and the next feature point ${{\boldsymbol P}_{i+1,\phi}}$ in the PS can be determined using the "nearest-ray algorithm" used in the phase function calculation of [11]. ${{\boldsymbol P}_{i+1,\phi}}$ lies on the tangent plane (in the PS) of the nearest data point among the *i* data points (in the PS) that have already been calculated, as shown in Fig. 4(b). The steps above are repeated until all the feature points and their corresponding normal vectors (in the PS) have been generated. These feature points are then fitted into a phase function by considering both the coordinates and the normals of the points in the PS, as shown in Fig. 4(c). The original phase function of the *Ω*th element in the initial system can then be updated.
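The fitting step that considers both coordinates and normals can be sketched as a stacked linear least-squares problem over XY-polynomial terms. This is a sketch: the relative weight `w` that balances value rows against gradient rows is an assumption, not a parameter from the paper:

```python
import numpy as np

def fit_xy_polynomial(x, y, phi, phi_x, phi_y, order=4, w=1.0):
    """Least-squares fit of an XY polynomial to data-point values (phi)
    and gradients (phi_x, phi_y), stacking value rows and weighted
    gradient rows into one linear system."""
    terms = [(m, n) for m in range(order + 1) for n in range(order + 1 - m)]
    A_val = np.array([[xi**m * yi**n for m, n in terms]
                      for xi, yi in zip(x, y)])
    A_dx = np.array([[m * xi**max(m - 1, 0) * yi**n for m, n in terms]
                     for xi, yi in zip(x, y)])
    A_dy = np.array([[n * xi**m * yi**max(n - 1, 0) for m, n in terms]
                     for xi, yi in zip(x, y)])
    A = np.vstack([A_val, w * A_dx, w * A_dy])
    b = np.concatenate([np.asarray(phi), w * np.asarray(phi_x),
                        w * np.asarray(phi_y)])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(terms, coeffs))

# Recover phi = x**2 + y exactly from sampled values and gradients:
gx, gy = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x, y = gx.ravel(), gy.ravel()
c = fit_xy_polynomial(x, y, x**2 + y, 2 * x, np.ones_like(y), order=2)
assert abs(c[(2, 0)] - 1.0) < 1e-8 and abs(c[(0, 1)] - 1.0) < 1e-8
```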

Similar to the generation of the geometric surface shapes, a "shift" error will also occur during the phase function generation process. However, a simple translation after calculation of the phase function is not easy to implement for the geometric surface, which is generally described by a surface expression with a conic base. When an offset is added to all terms containing powers of *y*, the surface can no longer be expressed using a surface description with a general base conic. Therefore, to avoid image quality degradation caused by this mismatch between the two local coordinate systems, the phase function is fitted directly in the current local coordinate system.

#### 2.3 Iterative process

When all preliminary construction work (including the construction of the geometric substrate surfaces and the phase functions) is complete, the resulting imaging performance may not be good enough to provide a suitable starting point. An iterative process is used in this case to improve the imaging performance. The resulting system obtained using the preliminary construction process is used as the input for the iterative stage. In each round of the iteration, the geometric surfaces and the phase functions are again calculated one-by-one. The iterative process is similar to the construction process: when designing a specific freeform surface substrate or a specific phase function, all other geometric substrates and phase functions are maintained in their current states. The feature points are obtained and are then fitted to obtain a closed-form analytical solution that is used to update the previous surface substrate or phase function. Here, the surface normals of the data points are also calculated using the same method used in the construction process so that the connections with the RS and the PS are considered. The difference in this case is that the coordinates of the feature points are not obtained through a calculation process that follows the nearest-ray principle; instead, the intersections of the feature rays with the current substrate surface or phase function are used directly. The parameter *σ*_{RMS} is used to evaluate the imaging performance after each iteration step and is defined as:

$$\sigma_{\textrm{RMS}} = \sqrt{\frac{1}{QT}\sum\limits_{\mu = 1}^{QT} \sigma_\mu^2}, \qquad (7)$$

where *QT* represents the total number of feature light rays used in a specific design, and $\sigma_\mu$ denotes the distance between the actual image point and the ideal image point corresponding to the *μ*th ray. The actual image point refers to the intersection of the actual light ray with the image plane, which can be obtained by real ray tracing. For convenience, the coordinates are calculated relative to a predefined global coordinate system. The global coordinates of the ideal image point for a specific field are determined as in [21] by the focal length *EFL* of the system; the specific sampled field ($\omega_x$, $\omega_y$) to which the feature ray ${R_i}$ belongs; the global coordinates ($x_o$, $y_o$, $z_o$) of the center of the image plane (the ideal image point of the central field ($\omega_{x,\textrm{central}}$, $\omega_{y,\textrm{central}}$) among the full FOV); and the tilt angle $\alpha_o$ of the image plane relative to the global *x*-axis. (Here we consider the conventional case whereby the optical system has plane symmetry about the meridional plane.)
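The *σ*_RMS metric above is a plain root-mean-square over the per-ray deviations and can be sketched as:

```python
import numpy as np

def sigma_rms(actual_pts, ideal_pts):
    """RMS deviation between actual and ideal image points over all QT
    feature rays: sigma_RMS = sqrt(sum(sigma_mu**2) / QT)."""
    d = np.linalg.norm(np.asarray(actual_pts) - np.asarray(ideal_pts), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# A single ray landing 5 units from its ideal point gives sigma_RMS = 5:
assert np.isclose(sigma_rms([[3.0, 4.0]], [[0.0, 0.0]]), 5.0)
```

Tracking the change in this value between successive iterations gives the convergence criterion described in the text.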

We can use the change in *σ*_{RMS} to record the iteration efficiency during subsequent iterations. When the change in *σ*_{RMS} is smaller than a predefined threshold, the iteration is considered to be convergent and the iteration process stops. In addition, negative feedback can be used during the iterations to accelerate convergence [20]. The resulting system is input into the optical design software for further optimization to obtain the final design result.

The flowchart for the overall point-by-point design process is presented in Fig. 5.

## 3. Design examples

In this paper, to illustrate the generality and feasibility of the design method presented above, two three-mirror imaging systems composed of FSP elements were designed.

#### 3.1 Design example A

Example A is an off-axis unobscured three-mirror system that uses a traditional Wetherell configuration [24] and has an aperture stop located on the secondary mirror. This system has an 80 mm entrance pupil diameter, a focal length of 120 mm, and an FOV of 8°×6°. The *F*-number of this design is 1.5. The operating wavelength is 1064 nm. An initial planar system, which contains three geometric planes with no applied phase functions located near their expected positions, is established as shown in Fig. 6(a). This system works under diffraction order *m* = 1.

For the design of non-rotationally symmetric imaging systems, the feature light rays considered in the design process should come from the full aperture and the full FOV. When the system is symmetric about the meridional plane, half of the full FOV is sufficient. The following six feature fields are sampled: (0°, 0°), (0°, 3°), (0°, −3°), (4°, 0°), (4°, 3°), and (4°, −3°). The circular aperture for each field was divided into 16 polar angles, with seven equidistantly sampled pupil positions along each radial direction. This means that *QA* = 16 and *QP* = 7. In total, *QT* = *QA*×*QP*×*QF* = 16×7×6 = 672 feature light rays were used in the point-by-point design process. The feature light rays emitted from the object space were expected to be redirected toward their corresponding ideal image points in the image space, where these ideal image points were determined using the given object-image relationships.

Then, after a point-by-point design procedure that started from the initial planar system, the initial system evolved into a non-rotationally symmetrical off-axis three-mirror imaging system with a phase function applied onto each freeform mirror, as shown in Fig. 6(b). This system is taken to be a good starting point for the subsequent iteration process. Figure 7(a) shows the change in *σ*_{RMS} during the iteration steps while Fig. 7(b) shows the distortion grid for the system.

The optimization is performed using the optical design software CODE V [25]. All the geometric substrate shapes and phase functions in this system are expressed using XY polynomials (the surface expression for the substrate has a base conic). We select XY polynomials up to the 4th order in this design; according to aberration theory, polynomial terms up to the 4th order correspond approximately to the primary aberrations. For design example A here and example B given in Section 3.2, the phase function of each phase element is described by the following equation (only the terms up to the 4th order are used in design example A):

$$\phi(x, y) = \sum\limits_i A_i x^m y^n, \qquad (8)$$

where $\phi$ is the phase value represented by a unit of length, and $A_i$ is the coefficient of the polynomial term. For design example B, the terms up to the 5th order are used. See Data File 1 for the detailed coefficients of the phase functions. In general, complex systems are optimized using a successive approximation approach. Consider this system as an example to explain the optimization strategy. First, only the surface curvatures of all the geometric substrate shapes and the decenter and tilt values of the FSP elements are used as variables during the optimization process. Then, conic constants are introduced as variables for all the shapes and optimization is implemented. Next, the polynomial coefficients are gradually added as variables, with the steps divided by the order of the polynomial terms, from lower order to higher order. In fact, the terms up to the 4th order are sufficient for this system and can provide satisfactory results.

During the optimization process, we select the default error function based on transverse ray aberration in CODE V. Stricter constraints are gradually introduced to simultaneously control the focal length, the distortion, obscuration elimination, and the compactness of the system. After a quick optimization, we obtain the final design result, as shown in Fig. 8(a). The modulation transfer function (MTF) approaches the diffraction limit, as depicted in Fig. 8(b). The final distortion grid is shown in Fig. 8(c). For asymmetric imaging systems, the image quality in both the meridional and sagittal planes should be evaluated separately. In addition, the image quality for different fields across the full field-of-view (not only in the *y* direction, but also in other directions) should be analyzed. Considering only the MTF curves in the meridional and sagittal directions may not be comprehensive enough. Designers can assess the imaging quality in the sagittal/meridional planes of different fields by analyzing the ray aberration graphs of the system and can determine the main types of aberrations. Figures 8(d) and 8(e) show the ray aberration graphs in the meridional and sagittal directions for design example A.
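Evaluating an XY-polynomial phase function and its gradient (the quantities that enter the generalized refraction law of Eq. (3)) can be sketched as follows; the coefficient layout as a `(m, n) -> A` mapping is an illustrative assumption:

```python
import numpy as np

def phase_and_gradient(coeffs, x, y):
    """Evaluate phi(x, y) = sum A_mn * x**m * y**n and its in-plane
    gradient [dphi/dx, dphi/dy] at a single point."""
    phi = dphi_dx = dphi_dy = 0.0
    for (m, n), A in coeffs.items():
        phi += A * x**m * y**n
        if m > 0:
            dphi_dx += A * m * x**(m - 1) * y**n
        if n > 0:
            dphi_dy += A * n * x**m * y**(n - 1)
    return phi, np.array([dphi_dx, dphi_dy])

# phi = x**2 * y: gradient at (1, 2) is [2*x*y, x**2] = [4, 1].
phi, g = phase_and_gradient({(2, 1): 1.0}, 1.0, 2.0)
assert phi == 2.0 and np.allclose(g, [4.0, 1.0])
```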

This example can be regarded as a difficult design case suited to remote sensing and detection, especially laser-receiver systems. The system has advanced specifications: a small *F*-number, an 80 mm entrance pupil diameter, and a relatively large FOV. In addition, the system works at a wavelength of 1064 nm rather than the long-wavelength infrared band often used in freeform three-mirror designs. Using the design method given in this paper, the geometric substrate surfaces and the closed-form phase functions can be calculated quickly and efficiently, and the resulting design serves as a good starting point for further optimization. After a quick optimization, the final system is obtained and its MTF approaches the diffraction limit.

#### 3.2 Design example B

In some cases, the design requires applying phase functions to fixed surface substrate shapes, which can be regarded as a type of conformal design. In this design, the primary mirror (M1) and the tertiary mirror (M3) are integrated on a single spherical substrate element with a fixed radius, and the phase functions are applied on this single element. This strategy reduces the number of degrees of freedom (DOFs) during assembly and lowers the fabrication difficulty to some extent.

The folding geometry of the system is similar to that given in [26]. The off-axis unobscured three-mirror system has an aperture stop in front of the primary mirror. This system has a 40 mm entrance pupil diameter, a focal length of 100 mm, and a FOV of 4°×3°. The *F*-number of this design is 2.5, the system works at diffraction order *m* = 1, and the operating wavelength is 1064 nm. The secondary mirror (M2) is an FSP element, while the primary and tertiary mirrors are integrated on a common spherical substrate phase element with a radius of 250 mm. Although the primary and tertiary mirrors share the same geometric substrate shape, two different phase functions can be fabricated on different areas of this single element. Geometric freeform surface shapes are often generated by turning, and it is not easy to integrate different freeform shapes (corresponding to the different parts) into a single element. When fabricating a DOE, however, the phase functions are generally realized by etching and exposure processes, so it is easier to realize multiple different phase functions on a single element; similar results can be obtained when a metasurface is used. This design example also illustrates the validity of the proposed method, which works not only for freeform surface substrates but also for traditional surface substrates (including aspherical, conic, spherical, and even planar substrates).

To simplify the design process while also reducing the total time for the point-by-point procedure, an approximately equivalent coaxial system was used to guide the design of the off-axis system in this example. Based on the given specifications, we use spheres to generate a coaxial three-mirror system; note that the primary mirror and the tertiary mirror share a common radius. Paraxial optical theory gives a formula that relates the system optical power, the surface curvatures, and the distances between the surfaces [26]. Here *c*₁, *c*₂, and *c*₃ represent the surface curvatures of the three mirrors; *Φ* is the optical power of the whole system; −*d*₁, *d*₂, and −*l*₃′ are the distances between M1 and M2, between M2 and M3, and between M3 and the image plane, respectively. Based on the given specification, the parameter values are *Φ* = −1/100 mm⁻¹, *d*₁ = −*d*₂ = −200 mm (a user-defined value that corresponds to an appropriate structure), and *c*₁ = *c*₃ = −1/250 mm⁻¹. By combining Eqs. (10) and (11), *l*₃′ and *c*₂ can then be calculated. A group of appropriate solutions was used in this design, where *l*₃′ = −228.87 mm and *c*₂ = −1/1800 mm⁻¹. To achieve an unobscured system, we used a biased FOV in the *y* direction with a central field of (0°, 12°). The aperture stop is decentered in the +*y* direction, and the secondary mirror is tilted counterclockwise to realize a compact and unobscured folding geometry. After these operations, we obtain the initial spherical system, as shown in Fig. 9(a).
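
As a sanity check on the quoted paraxial solution, a short reduced (y, nu) ray trace through the three spherical mirrors can recover the system power from the curvatures and spacings above. This is an illustrative sketch, not the paper's Eqs. (10) and (11): it assumes the standard mirror power φ = −2*n*c and reduced transfer. With the rounded *c*₂ = −1/1800 mm⁻¹ the recovered power is *Φ* = −1/100 mm⁻¹ exactly, while the back focal distance comes out near the quoted *l*₃′ = −228.87 mm (the small discrepancy is consistent with *c*₂ being reported in rounded form).

```python
# Reduced (y, nu) paraxial trace of the three spherical mirrors in example B.
c1, c2, c3 = -1 / 250, -1 / 1800, -1 / 250   # mirror curvatures (mm^-1)
d1, d2 = -200.0, 200.0                       # M1->M2 and M2->M3 spacings (mm)

y, w = 1.0, 0.0      # marginal ray from infinity: height 1, reduced angle nu = 0
n = 1.0              # refractive index flips sign at each reflection
for c, d in [(c1, d1), (c2, d2), (c3, None)]:
    w = w - y * (-2.0 * n * c)   # refraction at a mirror: power phi = -2*n*c
    n = -n                       # reflection reverses propagation
    if d is not None:
        y = y + (d / n) * w      # reduced transfer to the next mirror

Phi = -w / 1.0       # system power from the final reduced angle (y1 = 1)
l3p = (-y / w) * n   # signed distance from M3 to the paraxial image plane
print(Phi, l3p)
```

With the values above this prints a power of −0.01 mm⁻¹, matching *Φ* = −1/100 mm⁻¹.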

Six feature fields were sampled: (0°, 12°), (0°, 10.5°), (0°, 13.5°), (2°, 12°), (2°, 10.5°), and (2°, 13.5°). A total of 672 feature light rays were used in the subsequent process. After the point-by-point procedure, the initial system evolved into a non-rotationally symmetric off-axis three-mirror imaging system with phase functions applied to each mirror, as shown in Fig. 9(b). The secondary mirror has a substrate surface shape described by an XY polynomial, while the other two mirrors share a common spherical substrate. All the phase functions are XY polynomials. The distortion grid after the iterations is shown in Fig. 10. The resulting system is used as a good starting point for the final optimization.
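
The ray count follows from 6 fields × 112 pupil samples per field = 672. The paper does not specify the pupil sampling pattern, so the grid below is purely a hypothetical reconstruction (an 8-ring × 14-azimuth polar grid is one common choice that yields 112 points per field):

```python
import math

# Feature fields sampled for example B: (x, y) field angles in degrees.
fields = [(0.0, 12.0), (0.0, 10.5), (0.0, 13.5),
          (2.0, 12.0), (2.0, 10.5), (2.0, 13.5)]

def polar_pupil_grid(n_rings=8, n_azimuths=14, radius=1.0):
    """Normalized pupil coordinates on a ring/azimuth grid (no central point).

    The 8 x 14 layout is an assumption chosen only to reproduce the
    112-rays-per-field count; the actual sampling may differ.
    """
    pts = []
    for i in range(1, n_rings + 1):
        r = radius * i / n_rings
        for j in range(n_azimuths):
            t = 2.0 * math.pi * j / n_azimuths
            pts.append((r * math.cos(t), r * math.sin(t)))
    return pts

# One feature ray per (field, pupil point) pair.
rays = [(f, p) for f in fields for p in polar_pupil_grid()]
print(len(rays))  # 672
```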

The optimization is performed in CODE V. The surface curvature of the secondary geometric substrate and the decenter and tilt values of all the elements are initially set as variables. Then the polynomial coefficients up to the 4th order are gradually added as variables, and the 5th-order coefficients are further introduced as free parameters to enable better distortion correction. Introducing polynomial terms above the 5th order improved the image quality only slightly.

To simultaneously control the focal length, the distortion, the actual image height, and the obscuration-free and compactness properties of the system, stronger constraints are gradually imposed on the design. In addition, a special constraint must be applied to keep the primary and tertiary mirrors on a common sphere. Figure 11(a) shows the optical layout of the final system after optimization. The MTF approaches the diffraction limit, as depicted in Fig. 11(b). The final distortion grid is shown in Fig. 11(c). Figures 11(d) and 11(e) show the meridional and sagittal ray aberration plots for design example B. See Data File 1 for the detailed phase-function coefficients of the elements in this design.

Use of FSP elements in imaging system design can generate high-performance systems that combine very compact configurations with advanced system parameters. Phase elements are often used to correct chromatic aberrations in refractive systems, but the two examples in Section 3 also show their advantages in reflective systems. For design example A in Section 3.1, a similar design using only geometric freeform surfaces can be found in [5]; its system configuration, field of view, *F*-number, and entrance pupil diameter are all similar to those of example A. For design example B in Section 3.2, a similar design using only geometric freeform surfaces can be found in [26], with the same system specifications (FOV, *F*-number, entrance pupil diameter) and configuration. Although the fully optimized systems in [5] and [26] achieve good image quality at a working wavelength in the long-wavelength infrared band, if the working wavelength is reduced to the 1064 nm used for examples A and B in this paper, their MTF falls far below the diffraction limit and the image quality degrades severely. For design examples A and B here, by contrast, the MTF still approaches the diffraction limit and excellent image quality is achieved, as depicted in Figs. 8(b) and 11(b). These comparisons show the advantages of FSP elements in reflective imaging systems.

The phase profiles or phase functions calculated by the point-by-point method or the final optimization can be realized in many ways, such as diffractive optical structures, holographic elements, and metasurfaces. The two design examples in Section 3 can therefore be considered systems employing diffractive optical elements on asymmetric freeform optical elements, although other types of phase elements can also be used. For each type of phase element, a different approach is needed to convert the fundamental phase profile into specific patterns or structures, considering light-matter interaction and other physical aspects. In this paper we focus on the efficient design of the general, fundamental phase profiles and the geometric substrate surfaces of the elements. Structure design and fabrication are complicated issues that are beyond the scope of this paper; we hope to address them in future work.

## 4. Conclusions and discussions

Use of freeform-surface-substrate phase elements (FSP elements) in imaging system design can generate high-performance systems that combine very compact configurations with advanced system parameters. In this paper, we proposed a design method for imaging systems consisting of FSP elements. The design process begins from an initial system that uses simple geometric planes or other predefined geometric surfaces without phase functions. After point-by-point construction and iteration steps using rays from multiple fields and different pupil coordinates, multiple FSP elements in nonsymmetric configurations can be generated directly. The resulting system is input to optical design software as a good starting point for optimization to obtain a high-performance imaging optical system. In other words, a good starting point consisting of FSP elements can be created by this method based on the specific requirements, reducing the dependence on existing starting points and on human effort. Using this design framework, it is possible to generate imaging optical systems that offer better performance, increased compactness, lighter weight, and easier alignment and manufacturing than conventional imaging systems. The method can be applied in many areas, not only in imaging optics but also in illumination and non-imaging optics.

The point-by-point design of multiple FSP elements in a highly nonsymmetric system configuration requires that both the phase profile and the shape of the geometric substrate surface be generated in a comprehensive way, which cannot be handled by existing “isolated” direct or point-by-point methods. Even if the designer wants to design only the phase function or only the freeform geometric substrate in a system consisting of FSP elements, existing “isolated” point-by-point methods do not work. In the method proposed here, the key to achieving the design goal is a novel way of connecting the phase profile in the phase-function space with the substrate surface in real 3D space: the surface normals of the geometric substrate are calculated from the gradient vectors of the phase function, and vice versa. In this way, accurate, efficient, and versatile point-by-point design becomes feasible. Combined with a new comprehensive iterative strategy for the complicated system and a novel shift-error compensation method, point-by-point design of the entire system can be achieved.
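
The link between the phase gradient on the substrate and the change in ray direction can be sketched with the generalized (vector) law of reflection: the tangential component of the outgoing ray equals that of the incident ray plus (*m*λ/2π) times the tangential phase gradient, while the normal component flips sign and is fixed by normalization. The function below is an illustrative implementation of this relation, not the authors' code; with a zero phase gradient it reduces to ordinary specular reflection.

```python
import numpy as np

def deflect_reflective(s, N, grad_phi, wavelength, m=1):
    """Outgoing unit direction after a reflective phase element.

    s        : incident unit direction vector
    N        : unit normal of the geometric substrate surface
    grad_phi : 3D gradient of the phase function (rad per unit length)
    Implements the generalized vector law of reflection for diffraction order m.
    """
    s = np.asarray(s, float)
    N = np.asarray(N, float)
    grad_phi = np.asarray(grad_phi, float)
    # Tangential part: incident tangential component plus the phase-gradient term.
    s_t = s - np.dot(s, N) * N
    g_t = grad_phi - np.dot(grad_phi, N) * N
    t_out = s_t + (m * wavelength / (2.0 * np.pi)) * g_t
    # Normal part: sign flips on reflection, magnitude set by unit normalization.
    n_mag = np.sqrt(max(0.0, 1.0 - np.dot(t_out, t_out)))
    return t_out - np.sign(np.dot(s, N)) * n_mag * N

# With zero phase gradient this reduces to the ordinary law of reflection
# (wavelength 1.064e-3 mm, i.e. 1064 nm, assuming lengths in millimeters).
s_out = deflect_reflective([0.6, 0.0, 0.8], [0.0, 0.0, 1.0], np.zeros(3), 1.064e-3)
```

Inverting this relation, a required ray deflection determines either the substrate normal (given the phase gradient) or the phase gradient (given the normal), which is the two-way connection the method exploits.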

The design framework presented in this paper can be used to design next-generation high-performance imaging systems based on FSP elements for use in areas including near-eye displays, high-performance cameras, and remote sensing and detection. To demonstrate the feasibility of the proposed method, we presented two three-mirror imaging systems consisting of freeform surface substrates with complex phase functions on them. Specifically, the first example was designed starting from an initial planar system, and the second integrates the primary and tertiary mirrors on a single geometric substrate element with a fixed radius. In both examples, some elements require the design of both the geometric substrate shape and the phase function. For the two design examples given in this paper, the imaging performance approaches the diffraction limit with small distortions after optimization (if only freeform surfaces or flat phase elements are used for the same design, the image quality is poor), which shows the strong ability of this novel kind of system consisting of FSP elements. In particular, the proposed method can handle novel imaging systems that are constrained to conformal substrate shapes or integrated substrates with phase profiles added onto them (design example B), which are of increasing interest in areas such as head-mounted displays and miniature cameras. Currently, only a single wavelength is considered; extending the method to multiple wavelengths is a problem to be overcome in future work.

## Funding

National Key Research and Development Program of China (2017YFA0701200); National Natural Science Foundation of China (61805012); Beijing Institute of Technology Research Fund Program for Young Scholars.

## Acknowledgments

We thank Synopsys for the educational license of CODE V.

## Disclosures

The authors declare that there are no conflicts of interest related to this article.

## References

**1. **H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express **26**(13), 17578–17590 (2018). [CrossRef]

**2. **D. Cheng, Y. Wang, C. Xu, W. Song, and G. Jin, “Design of an ultra-thin near-eye display with geometrical waveguide and freeform optics,” Opt. Express **22**(17), 20705–20719 (2014). [CrossRef]

**3. **Z. Zheng, X. Liu, H. Li, and L. Xu, “Design and fabrication of an off-axis see-through head-mounted display with an x-y polynomial surface,” Appl. Opt. **49**(19), 3661–3668 (2010). [CrossRef]

**4. **Y. Zhong and H. Gross, “Initial system design method for non-rotationally symmetric systems based on Gaussian brackets and Nodal aberration theory,” Opt. Express **25**(9), 10016–10030 (2017). [CrossRef]

**5. **T. Yang, J. Zhu, W. Hou, and G. Jin, “Design method of freeform off-axis reflective imaging systems with a direct construction process,” Opt. Express **22**(8), 9193–9205 (2014). [CrossRef]

**6. **Q. Meng, H. Wang, K. Wang, Y. Wang, Z. Ji, and D. Wang, “Off-axis three-mirror freeform telescope with a large linear field of view based on an integration mirror,” Appl. Opt. **55**(32), 8962–8970 (2016). [CrossRef]

**7. **A. Bauer, E. M. Schiesser, K. P. Thompson, and J. P. Rolland, “Starting geometry creation and design method for freeform optics,” Nat. Commun. **9**(1), 1756 (2018). [CrossRef]

**8. **J. Zhu, W. Hou, X. Zhang, and G. Jin, “Design of a low F-number freeform off-axis three-mirror system with rectangular field-of-view,” J. Opt. **17**(1), 015605 (2015). [CrossRef]

**9. **D. Reshidko and J. Sasian, “Method for the design of nonaxially symmetric optical systems using free-form surfaces,” Opt. Eng. **57**(10), 1 (2018). [CrossRef]

**10. **K. P. Thompson and J. P. Rolland, “Freeform Optical Surfaces: A Revolution in Imaging Optical Design,” Opt. Photonics News **23**(6), 30–35 (2012). [CrossRef]

**11. **T. Yang, D. Cheng, and Y. Wang, “Design method of nonsymmetric imaging systems consisting of multiple flat phase elements,” Opt. Express **26**(19), 25347–25363 (2018). [CrossRef]

**12. **T. Zhan, J. Xiong, Y. H. Lee, R. Chen, and S. T. Wu, “Fabrication of Pancharatnam-Berry phase optical elements with highly stable polarization holography,” Opt. Express **27**(3), 2632–2642 (2019). [CrossRef]

**13. **R. Tian, J. Liu, X. Li, X. Wang, and Y. Wang, “Design and fabrication of complicated diffractive optical elements on multiple curved surfaces,” Opt. Express **23**(26), 32917–32925 (2015). [CrossRef]

**14. **J. P. B. Mueller, N. A. Rubin, R. C. Devlin, B. Groever, and F. Capasso, “Metasurface polarization optics: independent phase control of arbitrary orthogonal states of polarization,” Phys. Rev. Lett. **118**(11), 113901 (2017). [CrossRef]

**15. **A. V. Kildishev, A. Boltasseva, and V. M. Shalaev, “Planar photonics with metasurfaces,” Science **339**(6125), 1232009 (2013). [CrossRef]

**16. **N. Yu and F. Capasso, “Flat optics with designer metasurfaces,” Nat. Mater. **13**(2), 139–150 (2014). [CrossRef]

**17. **G. D. Wassermann and E. Wolf, “On the Theory of Aplanatic Aspheric Systems,” Proc. Phys. Soc., London, Sect. B **62**(1), 2–8 (1949). [CrossRef]

**18. **D. Knapp, “Conformal Optical Design,” Ph.D. Thesis, University of Arizona (2002).

**19. **P. Gimenez-Benitez, J. C. Miñano, J. Blen, R. M. Arroyo, J. Chaves, O. Dross, M. Hernandez, and W. Falicoff, “Simultaneous multiple surface optical design method in three dimensions,” Opt. Eng. **43**(7), 1489–1503 (2004). [CrossRef]

**20. **T. Yang, J. Zhu, X. Wu, and G. Jin, “Direct design of freeform surfaces and freeform imaging systems with a point-by-point three-dimensional construction-iteration method,” Opt. Express **23**(8), 10233–10246 (2015). [CrossRef]

**21. **T. Yang, G. Jin, and J. Zhu, “Automated design of freeform imaging systems,” Light: Sci. Appl. **6**(10), e17081 (2017). [CrossRef]

**22. **J. Mendes-Lopes, P. Benítez, J. C. Miñano, and A. Santamaría, “Simultaneous multiple surface design method for diffractive surfaces,” Opt. Express **24**(5), 5584–5590 (2016). [CrossRef]

**23. **V. Perlick, *Ray Optics, Fermat's Principle, and Applications to General Relativity* (Springer, 2000).

**24. **W. B. Wetherell and D. A. Womble, “All-reflective three element objective,” U.S. Patent 4,240,707 (1980).

**25. **Optical Research Associates, Code V Reference Manual (Synopsys Inc., 2012).

**26. **T. Yang, J. Zhu, and G. Jin, “Compact freeform off-axis three-mirror imaging system based on the integration of primary and tertiary mirrors on one single surface,” Chin. Opt. Lett. **14**(6), 060801 (2016). [CrossRef]