
Freeform illumination optics for 3D targets through a virtual irradiance transport


Abstract

Freeform illumination optics design for 3D target surfaces is a challenging and rewarding problem. Current research on freeform illumination optics mostly involves planar targets, especially cases where the target is perpendicular to the optical axis. Here, we propose a general method to design freeform optics for illuminating 3D target surfaces with zero-étendue sources. In this method, we employ a virtual observation plane perpendicular to the optical axis and transfer the irradiance on the 3D target surface to this virtual plane. By designing freeform optics to generate the transferred irradiance distribution, the prescribed irradiance distribution on the 3D target is realized automatically. The influence of the size of the freeform optics is considered in the design process, which makes it possible to design illumination systems for near-field configurations where this influence cannot be ignored. We demonstrate the robustness and elegance of the proposed method with three design examples.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Freeform illumination optics design for zero-étendue sources has been extensively researched in the last two decades. These works can be roughly classified into three main categories [1]: the Monge–Ampère equation (MA) method [2–7], the supporting quadric method (SQM) [8–10], and the ray mapping (RM) method [11–17], and great achievements for specific illumination applications are constantly being reported. This research on freeform illumination optics is mostly valid for applications in which planar receivers are perpendicular to the optical axis. However, very little work has addressed 3D target illumination, which is also important in various applications, such as undulating road illumination, architectural lighting, machine-vision inspection systems, and exhibition lighting for spatial impression and the enjoyment of art. For these illumination applications, the lighting performance (specific irradiance distribution, efficiency, light pollution, etc.) is equally important.

Due to the unconstrained geometry of the illuminated surface, precise light control on a 3D target is more challenging than on conventional planar targets. Freeform optics design for 3D geometry is still not well addressed and faces many unresolved challenges [18]. Ries et al. [2] connected the target irradiance with the wavefront curvature to design freeform illumination optics; there, the target surface normal is not assumed to be constant [as expressed in Eq. (9) of that work], which implies the potential of freeform optics design for 3D illumination cases. However, the applicability of Ries' method to non-planar surfaces is not discussed in their paper. Sun et al. proposed a method for producing a uniform irradiance distribution on a non-planar surface [19]. This method starts by dividing the non-planar receiver into a grid; the height matrix is then assembled from the heights of the grid cells, after which a series of 2D curves is calculated in different directions and finally combined into the freeform surface. Wu et al. directly designed freeform illumination optics in highly tilted geometry based on the MA method, but the method only considered planar targets [20]. Feng et al. extended the iterative wavefront tailoring method to design freeform lenses that produce a prescribed irradiance on undulating surfaces, and excellent optical performance was obtained [21].

In this paper, we propose a general method to design freeform optics that generate prescribed irradiance distributions on 3D targets for zero-étendue sources. Specifically, we employ a virtual observation plane perpendicular to the optical axis, and the prescribed irradiance distribution on the 3D target is mapped onto this virtual plane. Therefore, the initial 3D illumination problem is converted to designing the freeform optics for the corresponding irradiance distribution on the virtual observation plane. In the design process, the freeform surface information is included in the calculation, which enables freeform illumination design without the far-field approximation and without restrictions on the geometry of the target surface. Besides, we further generalize the least-squares ray mapping (LSRM) method proposed in our previous work [15] to redistribute almost all the rays emitted from the 180$^\circ$ opening angle of the light source, thus achieving ultra-high energy collection efficiency.

2. Design method

2.1 Virtual irradiance transformation

Figure 1(a) shows a sketch of the design geometry of the proposed method. The point-like light source is located at the origin of the global coordinate system ${xyz}$ and emits rays into the upper half space. The entrance surface of the lens is an analytical surface and the exit surface is the freeform surface. The predefined 3D receiver is defined by a parametric space $\left \{(U,V) {\big |}\ |U|\leqslant 1 \textrm {mm}, |V|\leqslant 1\textrm {mm}\right \}$ and Cartesian coordinates $\textbf {t}(x_{\textrm {t}},y_{\textrm {t}},z_{\textrm {t}})$ with a predefined irradiance $I_{\textrm {t}}(U,V)$. The source plane is defined by Cartesian coordinates $\textbf {s}(x_{\textrm {s}},y_{\textrm {s}})$ with a given irradiance $I_{\textrm {s}}(x_{\textrm {s}},y_{\textrm {s}})$. The goal is to design a parametric surface $\textbf {f}(u,v)=(x_{\textrm {f}},y_{\textrm {f}},z_{\textrm {f}})$ that redirects the rays emitted from the light source to the 3D target and produces the prescribed irradiance $I_{\textrm {t}}(U,V)$. The subscripts s, f, and t denote the light source, the freeform surface, and the 3D target, respectively.

Fig. 1. (a) Sketch of the design geometry; (b) schematic representation of the irradiance and mapping transformation.

To reduce the difficulties that arise from the varying $z$-coordinate of the 3D receiver, we employ a virtual observation plane defined in coordinates $\textrm {v}(x_{\textrm {v}},y_{\textrm {v}},h_{\textrm {v}})$, where $h_{\textrm {v}}$ is a constant determining the height of the virtual plane and the subscript v denotes the virtual plane. If the prescribed irradiance $I_{\textrm {t}}(U,V)$ can be transformed to the virtual plane, the design problem reduces to the conventional planar-target illumination problem, which has been extensively studied. The irradiance transformation is based on the fact that the amount of energy carried by each light ray remains invariant along its path in a lossless system.

The proposed transformation is illustrated schematically in Fig. 1(b), where $\textbf {m}_{1}$ and $\textbf {m}_{2}$ represent the 3D-target-to-virtual-target map and the source-to-virtual-target map, respectively. Since $\textbf {m}_{1}$ and $\textbf {m}_{2}$ are both flux-partitioning grids, the composite mapping $\textbf {m}_{2}\circ \textbf {m}_{1}^{-1}$ can be used as the source-to-3D-target map while maintaining energy conservation. Moreover, the boundary condition is automatically satisfied, which means that the boundary rays from the light source are mapped onto the boundaries of the illumination areas on the 3D target $T$ and the virtual plane $V$. The main work of this research therefore consists in calculating $\textbf {m}_{1}$ and $\textbf {m}_{2}$.
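To make the role of the two maps concrete, the following minimal Python sketch composes them numerically: each source ray is sent to the virtual plane through $\textbf{m}_2$ and then pulled back to the 3D target through the inverse of $\textbf{m}_1$, here approximated by scattered-data interpolation. The function and array names are illustrative and not taken from the paper's implementation.

```python
# Hedged sketch: compose the two flux-partitioning maps into a
# source-to-3D-target map. m1 maps the target grid onto the virtual plane,
# m2 maps the source grid onto the virtual plane.
import numpy as np
from scipy.interpolate import griddata

def compose_maps(m1_virtual, target_pts, m2_virtual):
    """Return the 3D target point hit by each source ray.

    m1_virtual : (N, 2) virtual-plane points of the target grid (image of m1)
    target_pts : (N, 3) corresponding 3D target points
    m2_virtual : (M, 2) virtual-plane points of the source grid (image of m2)
    """
    # Invert m1 numerically by interpolating the target coordinates over the
    # virtual-plane points, then evaluate at the image of m2.
    return np.column_stack([
        griddata(m1_virtual, target_pts[:, k], m2_virtual, method="linear")
        for k in range(3)
    ])
```

The linear interpolation is only a convenient numerical stand-in for the exact inverse of $\textbf{m}_1$, which in the actual algorithm is available on the flux-partitioning grid itself.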

Under the far-field assumption, the influence of the freeform lens size can be ignored compared with the scale of the whole lighting system. The coordinate transformation $\textbf {v}(x_{\textrm {v}},y_{\textrm {v}})=\textbf {m}_{1}(\textbf {t}(U,V))$ between the 3D receiver and the virtual plane can then be computed directly by tracing rays from the light source to the points on the target $\textbf {t}(U,V)$, which yields the collinear relationship:

$$\left\{\begin{matrix} x_{\textrm{v}}= (h_{\textrm{v}}-z_{\textrm{t}})x_{\textrm{t}}/z_{\textrm{t}}+x_{\textrm{t}},\\ y_{\textrm{v}}= (h_{\textrm{v}}-z_{\textrm{t}})y_{\textrm{t}}/z_{\textrm{t}}+y_{\textrm{t}}.\\ \end{matrix}\right.$$
Then, we calculate the irradiance distribution $I_{\textrm {v}}({x_{\textrm {v}},y_{\textrm {v}}})$ with the assumption that the system is lossless,
$$I_{\textrm{v}}(x_{\textrm{v}},y_{\textrm{v}})=I_{\textrm{t}}(U,V)\frac{\partial(U,V)}{\partial(x_{\textrm{v}},y_{\textrm{v}})},$$
where $\partial (g_{1},g_{2})/\partial (a_{1},a_{2})$ denotes the Jacobian of the vector-valued function $\textbf {g}(g_{1},g_{2})$ with respect to the variables $(a_{1},a_{2})$. The Jacobian of its inverse, $\partial (a_{1},a_{2})/\partial (g_{1},g_{2})$, can be calculated as $[\partial (g_{1},g_{2})/\partial (a_{1},a_{2})]^{-1}$.
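As an illustration, the following NumPy sketch implements the far-field transport of Eqs. (1) and (2) on a $(U,V)$ grid; the Jacobian of Eq. (2) is approximated by central differences, and all array names are placeholders rather than the authors' code.

```python
# Minimal sketch of the far-field virtual irradiance transport, Eqs. (1)-(2).
# U, V are 2D parameter grids built with np.meshgrid(..., indexing="ij");
# x_t, y_t, z_t and I_t are the target coordinates and prescribed irradiance
# sampled on the same grid.
import numpy as np

def far_field_transport(U, V, x_t, y_t, z_t, I_t, h_v):
    """Map target points to the virtual plane at height h_v (Eq. 1) and
    transfer the prescribed irradiance with a Jacobian factor (Eq. 2)."""
    # Collinear projection of each target point along the ray from the source.
    x_v = (h_v - z_t) * x_t / z_t + x_t
    y_v = (h_v - z_t) * y_t / z_t + y_t

    # Jacobian d(x_v, y_v)/d(U, V) by central differences on the (U, V) grid.
    dxv_dU, dxv_dV = np.gradient(x_v, U[:, 0], V[0, :])
    dyv_dU, dyv_dV = np.gradient(y_v, U[:, 0], V[0, :])
    jac = dxv_dU * dyv_dV - dxv_dV * dyv_dU

    # Eq. (2): I_v = I_t * d(U,V)/d(x_v,y_v) = I_t / |d(x_v,y_v)/d(U,V)|.
    I_v = I_t / np.abs(jac)
    return x_v, y_v, I_v
```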

In the near-field configuration, the influence of the freeform lens size can no longer be ignored. A major challenge is how to carry out the irradiance transformation while accounting for the geometry of the freeform lens. To address this issue, we separate the calculation of $I_{\textrm {v}}({x_{\textrm {v}},y_{\textrm {v}}})$ into two parts: freeform surface edge determination and irradiance transformation. The former makes the rays refracted at the edge of the lens strike the boundary of the 3D receiver, and the latter ensures that the prescribed irradiance is transformed precisely.

$\textbf {Determining the edge of the freeform surface:}$

$(\textbf {a})$ divide the 3D receiver evenly in the parametric space $(U,V)$.

$(\textbf {b})$ calculate the initial irradiance $I_{\textrm {v}_0}({x_{\textrm {v}_{0}},y_{\textrm {v}_{0}}})$ under the far-field assumption using Eqs. (1) and (2); then the initial freeform surface $(x_{\textrm {f}_{0}},y_{\textrm {f}_{0}},z_{\textrm {f}_{0}})$ can be derived using the improved LSRM method, which will be discussed later.

$(\textbf {c})$ connect the discrete points on the freeform surface obtained previously with the evenly distributed mesh points on the 3D receiver, and extend these rays to the virtual plane, thus obtaining the transformed coordinates $({x_{\textrm {v}_{i}},y_{\textrm {v}_{i}}})$ on the virtual plane. The new irradiance $I_{\textrm {v}_{i}}({x_{\textrm {v}_{i}},y_{\textrm {v}_{i}}})$ on the virtual plane can be calculated with a Jacobian transform:

$$\left\{\begin{matrix} x_{\textrm{v}_i}= (h_{\textrm{v}}-z_{\textrm{t}})(x_{\textrm{t}}-x_{\textrm{f}_{i-1}})/(z_{\textrm{t}}-z_{\textrm{f}_{i-1}})+x_{\textrm{t}},\\ y_{\textrm{v}_i}= (h_{\textrm{v}}-z_{\textrm{t}})(y_{\textrm{t}}-y_{\textrm{f}_{i-1}})/(z_{\textrm{t}}-z_{\textrm{f}_{i-1}})+y_{\textrm{t}},\\ I_{\textrm{v}_i}(x_{\textrm{v}_i},y_{\textrm{v}_i})= I_{\textrm{t}}(U,V)\partial(U,V) / \partial(x_{\textrm{v}_i},y_{\textrm{v}_i}). \end{matrix}\right. {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} \left( {i > 1} \right) , $$
where $i$ denotes the iteration index. Similarly, a new freeform surface $(x_{\textrm {f}_{i}},y_{\textrm {f}_{i}},z_{\textrm {f}_{i}})$ can be constructed; the detailed construction method can be found in [22].

$(\textbf {d})$ repeat $(\textbf {c})$ until the deviation between $(x_{\textrm {v}_i},y_{\textrm {v}_i})$ and $(x_{\textrm {v}_{i-1}},y_{\textrm {v}_{i-1}})$ is less than the threshold $\eta _{1}$.

We use a root mean square deviation (RMSD) to evaluate this deviation, which is defined as:

$$\textrm{RMSD}=\sqrt{\frac{1}{mn}\sum \sum \left [ \left ( x_{\textrm{v}_{i}} - x_{\textrm{v}_{i-1}}\right )^{2}+\left ( y_{\textrm{v}_{i}}-y_{\textrm{v}_{i-1}}\right )^{2}\right ]},$$
where $m$ and $n$ denote the numbers of grid points in the two directions. Once the boundary of the virtual illumination pattern is determined, the edge of the freeform surface is determined as well. This guarantees that the edge rays in the source domain are refracted by the edge of the freeform surface and strike the boundary of the 3D receiver.
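A hedged sketch of the edge-determination loop, steps $(\textbf{a})$ to $(\textbf{d})$, is given below. The helper `construct_freeform` stands in for the surface-construction procedure of [22] and is assumed rather than taken from the paper; the initial values $(x_{\textrm{v}},y_{\textrm{v}},I_{\textrm{v}})$ are those of the far-field transport of Eqs. (1) and (2).

```python
# Hedged sketch of the near-field edge determination, Eqs. (3)-(4).
# construct_freeform(I_v, x_v, y_v) is a hypothetical helper returning the
# discrete freeform surface (x_f, y_f, z_f); eta_1 is the stopping threshold.
import numpy as np

def rmsd(x_new, y_new, x_old, y_old):
    """Root mean square deviation of successive virtual-plane maps, Eq. (4)."""
    m, n = x_new.shape
    return np.sqrt(np.sum((x_new - x_old)**2 + (y_new - y_old)**2) / (m * n))

def determine_edge(U, V, x_t, y_t, z_t, I_t, h_v,
                   x_v, y_v, I_v, construct_freeform,
                   eta_1=1e-2, max_iter=20):
    x_f, y_f, z_f = construct_freeform(I_v, x_v, y_v)
    for _ in range(max_iter):
        # Eq. (3): extend the ray through the current surface point and the
        # target point up to the virtual plane at height h_v.
        x_v_new = (h_v - z_t) * (x_t - x_f) / (z_t - z_f) + x_t
        y_v_new = (h_v - z_t) * (y_t - y_f) / (z_t - z_f) + y_t
        # Transferred irradiance via the same Jacobian factor as Eq. (2).
        dxu, dxv = np.gradient(x_v_new, U[:, 0], V[0, :])
        dyu, dyv = np.gradient(y_v_new, U[:, 0], V[0, :])
        I_v_new = I_t / np.abs(dxu * dyv - dxv * dyu)
        # Step (d): stop once successive virtual maps agree to within eta_1.
        converged = rmsd(x_v_new, y_v_new, x_v, y_v) < eta_1
        x_v, y_v, I_v = x_v_new, y_v_new, I_v_new
        if converged:
            break
        x_f, y_f, z_f = construct_freeform(I_v, x_v, y_v)
    return x_v, y_v, I_v, (x_f, y_f, z_f)
```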

However, the locations at which the rays from the light source, refracted at the discrete points on the freeform surface, strike the 3D receiver are not necessarily evenly distributed, because the discrete points used to characterize the freeform surface are themselves not evenly distributed. For a more precise irradiance transformation, we trace the rays emitted from the source through the discrete points on the freeform surface to the 3D receiver, and finally extend them to the virtual observation plane.

$\textbf {Irradiance transformation:}$

$(\textbf {I})$ trace the light rays emanating from the light source, which pass through the points on the freeform lens obtained previously, strike the 3D receiver at $\textbf {t}(x_{\textrm {t}_{j}},y_{\textrm {t}_{j}},z_{\textrm {t}_{j}})$, and then arrive at the virtual plane at $\textrm {v}(x_{\textrm {v}_{j}},y_{\textrm {v}_{j}},h_{\textrm {v}})$. In this process, the maps $\textbf {m}_{1}$ and $\textbf {m}_2$ are tied to the freeform surface and the corresponding rays, and should be expressed in the parameters $(u,v)$. The transformed irradiance on the virtual plane can therefore be calculated as:

$$I_{\textrm{v}_j}(x_{\textrm{v}_j},y_{\textrm{v}_j})= I_{\textrm{t}_j}(U_{\textrm{t}_j},V_{\textrm{t}_j})\frac{\partial(U_{\textrm{t}_j},V_{\textrm{t}_j})}{\partial(u,v)}\frac{\partial(u,v)}{\partial(x_{\textrm{v}_j},y_{\textrm{v}_j})},{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} \left( {j > 1} \right), $$
where $j$ denotes the iteration index. Then a new freeform surface is constructed.

$(\textbf {II})$ repeat $(\textbf {I})$ until the deviation between $(x_{\textrm {v}_j},y_{\textrm {v}_j})$ and $(x_{\textrm {v}_{j-1}},y_{\textrm {v}_{j-1}})$ is less than the threshold $\eta _{2}$.

A small deviation between $(x_{\textrm {t}_j},y_{\textrm {t}_j})$ and $(x_{\textrm {t}_{j-1}},y_{\textrm {t}_{j-1}})$ indicates a more precise irradiance transformation between the 3D target and the virtual plane. Since this deviation is positively correlated with the deviation between $(x_{\textrm {v}_j},y_{\textrm {v}_j})$ and $(x_{\textrm {v}_{j-1}},y_{\textrm {v}_{j-1}})$, we use the latter to characterize the convergence of the irradiance transformation, quantified by Eq. (4).
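Because both the target hit points $(U_{\textrm{t}_j},V_{\textrm{t}_j})$ and the virtual-plane points $(x_{\textrm{v}_j},y_{\textrm{v}_j})$ are stored on the lens parameter grid $(u,v)$, the chained Jacobian of Eq. (5) reduces to finite differences on that common grid, as in the following sketch; the array names are illustrative only.

```python
# Hedged sketch of the chained Jacobian transport of Eq. (5).
# u, v are 1D parameter coordinates; U_t, V_t, x_v, y_v and I_t are 2D arrays
# sampled on the (u, v) grid.
import numpy as np

def transport_irradiance(u, v, U_t, V_t, x_v, y_v, I_t):
    """I_v = I_t * d(U_t,V_t)/d(u,v) * d(u,v)/d(x_v,y_v)."""
    dUu, dUv = np.gradient(U_t, u, v)
    dVu, dVv = np.gradient(V_t, u, v)
    jac_t = dUu * dVv - dUv * dVu            # d(U_t, V_t)/d(u, v)

    dxu, dxv = np.gradient(x_v, u, v)
    dyu, dyv = np.gradient(y_v, u, v)
    jac_v = dxu * dyv - dxv * dyu            # d(x_v, y_v)/d(u, v)

    return I_t * np.abs(jac_t) / np.abs(jac_v)
```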

2.2 Least-squares ray mapping method

Based on the above processes, we convert the design problem for generalized targets into designing the freeform lens for the corresponding irradiance distribution on a virtual observation plane. For the calculation of $\textbf {m}_2$, we further improve the LSRM method proposed in our previous work [15] to achieve a high collection efficiency of the light rays emitted from the source; in [15], the energy collection efficiency was only about ${73.88\%}$. It should be noted that the LSRM method [15] has nothing to do with optimal mass transport theory or the Monge–Ampère equation; it is an alternative formulation of the illumination problem based on the ray mapping approach. The LSRM method can be considered a heuristic method that iteratively corrects an integrable ray mapping to approach energy conservation and the boundary condition via some modifications of the original least-squares algorithm [23]. It is suitable for a wide range of zero-étendue design problems without the restrictions of far-field illumination or paraxial configuration. The previous version proposed in [15] successfully calculated an integrable ray mapping but could only be employed with a rectangular source domain. In this paper, we further improve the LSRM method to enable highly efficient freeform illumination optics design on a generalized target.

The LSRM method is generalized by introducing a parametric space on both the source and target domains and by carefully choosing a source mapping to ensure the convergence of the algorithm. For a point-like source, we can project the light distribution on a unit hemisphere onto a unit circle, as shown in Fig. 2(a). For Lambertian sources, the equivalent irradiance on the source plane is uniform [24]. A direct way is to establish a uniform grid on the source plane and calculate the target mapping; however, rays emitted at large angles would be undersampled in this uniform sampling scheme. To address this issue, we establish a uniform grid $(\theta _{a},\phi _{a})$ on a unit hemisphere parameterized by $(\theta ,\phi )$, which we call the "angle mapping", shown in Fig. 2(b). We omit some rays that are almost parallel to the source plane in order to form a regular surface with parameters $(u,v)$ in the subsequent process. This mapping can be calculated by the least-squares method [23] with a relatively large $\alpha$ value. The source mapping $(s_{1},s_{2})$ can then be computed easily from the spherical coordinate formulas.
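The spherical-coordinate step is simple and is sketched below. The least-squares computation of the angle mapping itself is not reproduced here; the uniform $(\theta,\phi)$ grid truncated at an assumed maximum angle is only an illustrative stand-in for it.

```python
# Hedged sketch: project an angle grid on the unit hemisphere onto the unit
# circle of the source plane, giving the source map (s1, s2).
import numpy as np

def source_map_from_angles(theta_a, phi_a):
    """Spherical-coordinate projection of the angle mapping onto the
    source plane (unit circle)."""
    s1 = np.sin(theta_a) * np.cos(phi_a)
    s2 = np.sin(theta_a) * np.sin(phi_a)
    return s1, s2

# Illustrative uniform (theta, phi) grid, truncated below a maximum collection
# angle so that rays nearly parallel to the source plane are omitted.
theta_a, phi_a = np.meshgrid(np.linspace(0.0, np.deg2rad(75.0), 129),
                             np.linspace(0.0, 2.0 * np.pi, 129),
                             indexing="ij")
s1, s2 = source_map_from_angles(theta_a, phi_a)
```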

Fig. 2. (a) Projection irradiance on source plane; (b) schematic representation of the irradiance transfer between source and target domain.

Based on the above considerations, the design problem is converted to calculating an integrable ray mapping between the parametric space and the target domain, which can be achieved by the LSRM method [15]. Each iteration of the algorithm consists of three steps: the calculation of the boundary points $(b_{1},b_{2})$, the matrices $\textbf {P}$ of the desired mapping, and the new target map $(t_{1},t_{2})$. The first step is the same as in the method proposed in [15,23]. In the second step, the determinant of the matrices $\textbf {P}$ equals the ratio of the irradiances of the parametric and target domains. The irradiance of the parametric domain can be calculated by:

$$E_{u}(u,v)= E_{s}(s_{1},s_{2})\frac{\partial(s_{1},s_{2})}{\partial(u,v)},$$
where $E_{u}$ and $E_{s}$ denote the irradiances of the parametric and source domains. The matrices $\textbf {P}=[p_{11},p_{12};p_{21},p_{22}]$ can then be calculated using the Lagrange multiplier method with the real target map traced through the current freeform lens. Finally, the new target map $(t_{1},t_{2})$ is obtained by solving two decoupled Poisson equations with Robin boundary conditions:
$$\Delta t_1=\nabla\cdot\textbf{p}_1,\qquad(u,v)\in\Omega,$$
$$(1-\alpha)t_1+\alpha\nabla t_1\cdot\hat{\textbf{n}}=(1-\alpha)b_1+\alpha\textbf{p}_1\cdot\hat{\textbf{n}},(u,v)\in\partial\Omega,$$
$$\Delta t_2=\nabla\cdot\textbf{p}_2,\qquad(u,v)\in\Omega,$$
$$(1-\alpha)t_2+\alpha\nabla t_2\cdot\hat{\textbf{n}}=(1-\alpha)b_2+\alpha\textbf{p}_2\cdot\hat{\textbf{n}},(u,v)\in\partial\Omega,$$
where $\textbf {p}_1=(p_{11},p_{12})^T$, $\textbf {p}_2=(p_{21},p_{22})^T$, and $\hat {\textbf {n}}$ is the outward unit normal vector on the boundary $\partial \Omega$ of the parametric domain $\Omega$. Iterating the above process until the calculated ray mapping approaches the real target map, we obtain an integrable ray mapping and the corresponding freeform lens.
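The following finite-difference sketch shows how one such Robin-boundary Poisson solve, Eqs. (7)-(8) (and identically Eqs. (9)-(10) for $t_2$), could be assembled on a square parametric grid. The grid size, the simplified corner treatment, and the way $\nabla\cdot\textbf{p}$, $\textbf{p}\cdot\hat{\textbf{n}}$, and $b$ are supplied are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: solve  lap(t) = div_p  in the interior of a square grid with
# Robin boundary conditions  (1-alpha) t + alpha dt/dn = (1-alpha) b + alpha p.n.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_poisson_robin(div_p, p_dot_n, b, alpha, h):
    """div_p, p_dot_n, b are n x n arrays on the parametric grid (only the
    boundary entries of p_dot_n and b are used); h is the grid spacing."""
    n = div_p.shape[0]
    idx = lambda i, j: i * n + j
    A = sp.lil_matrix((n * n, n * n))
    rhs = np.zeros(n * n)

    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            if 0 < i < n - 1 and 0 < j < n - 1:
                # Standard 5-point Laplacian for the interior equation.
                A[k, k] = -4.0 / h**2
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    A[k, idx(i + di, j + dj)] = 1.0 / h**2
                rhs[k] = div_p[i, j]
            else:
                # One-sided normal derivative toward the interior neighbour
                # (corners use the diagonal neighbour as a crude approximation).
                ni = 1 if i == 0 else (-1 if i == n - 1 else 0)
                nj = 1 if j == 0 else (-1 if j == n - 1 else 0)
                A[k, k] = (1.0 - alpha) + alpha / h
                A[k, idx(i + ni, j + nj)] = -alpha / h
                rhs[k] = (1.0 - alpha) * b[i, j] + alpha * p_dot_n[i, j]

    t = spla.spsolve(A.tocsr(), rhs)
    return t.reshape(n, n)
```

With $\textbf{p}_1$ and $b_1$ from the Lagrange-multiplier and boundary steps, $t_1$ would be obtained as `solve_poisson_robin(div_p1, p1_dot_n, b1, alpha, h)`, and $t_2$ from the analogous call with $\textbf{p}_2$ and $b_2$.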

We give a design example to demonstrate the effectiveness of our method in addressing the near-field design problem. The maximum collection angle $\theta _{max}$ is 75$^\circ$. The design configuration and simulated results are shown in Fig. 3. In the next section, we use the above method to design several freeform lenses producing complex patterns on 3D target surfaces.

Fig. 3. A near-field design example using the least-squares ray mapping method.

3. Design examples

We provide three design examples to verify the effectiveness of the proposed method. A point-like LED is chosen as the light source. The lens material is polymethyl methacrylate (PMMA) with a refractive index of 1.49386. For the first two examples, we choose a spherical entrance surface for the lens, defined as $x^2+y^2+(z+9)^2=144$ $(z\geq 0$, unit: mm$)$. The source domain involved in the computation is set as $\Omega _{\textrm {s}}=\left \{(x_{\textrm {s}},y_{\textrm {s}}){\big |}\ |x_{\textrm {s}}|\leqslant 0.99 \textrm {mm},\ |y_{\textrm {s}}|\leqslant 0.99\textrm {mm}\right \}$. Owing to the Lambertian property of the LED, a very small amount of energy $(\sim 1.64\%)$ is not considered.

In the first example, the goal is to cast the letters "HUST" on a square background on a 3D surface. The ratio of the illuminance of the letters to that of the background is 5. The 3D surface is predefined as:

$$\left\{\begin{matrix} x=600U, y=600V,\\ z=600+54(1-2U)^2e^{{-}4U^2-(2V+1)^2}-6e^{-(2U+1)^2-4V^2}\\-90(0.4U-20U^3-32V^5)e^{{-}4(U^2+V^2)}. \end{matrix}\right.$$
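For reference, the surface of Eq. (11) can be sampled directly on the parametric square; the sketch below uses a 322$\times$322 grid to match the algorithm's resolution, although the grid size here is otherwise arbitrary.

```python
# Minimal sketch: sample the 3D target surface of Eq. (11) on |U|, |V| <= 1.
import numpy as np

def target_surface_example1(U, V):
    x = 600.0 * U
    y = 600.0 * V
    z = (600.0
         + 54.0 * (1 - 2 * U)**2 * np.exp(-4 * U**2 - (2 * V + 1)**2)
         - 6.0 * np.exp(-(2 * U + 1)**2 - 4 * V**2)
         - 90.0 * (0.4 * U - 20 * U**3 - 32 * V**5) * np.exp(-4 * (U**2 + V**2)))
    return x, y, z

U, V = np.meshgrid(np.linspace(-1, 1, 322), np.linspace(-1, 1, 322), indexing="ij")
x_t, y_t, z_t = target_surface_example1(U, V)
```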
The height of the designed lens is $h_{\textrm {lens}}=10$ mm, and the far-field assumption is satisfied. The ray-tracing model is shown in Fig. 4(a), and the irradiance distribution on the 3D target surface, given in Fig. 4(b), shows that the light distribution is well controlled. The obtained source map and virtual target map are presented in Figs. 4(c) and 4(d), where the 322$\times$322 grids are interpolated onto 30$\times$30 grids for better visualization. The letters "HUST" are clearly visible on the freeform surface in Figs. 4(e) and 4(f). This is because the illuminance of the letters on the 3D target, generated by the "HUST" regions of the surface, is much greater than that of the square background.

Fig. 4. The far-field configuration: (a) the designed illumination system; (b) the irradiance distribution on the 3D target; (c) the source map; (d) the map on the virtual target plane; (e) the lens model; (f) the Gaussian curvature of the freeform surface.

We use RMSDs to evaluate the integrability condition, energy conservation, and the boundary condition. Figure 5 shows the convergence of the RMSDs as the iteration number increases from 1 to 50. Figure 5(a) shows the convergence of the integrability condition, for which we use the RMSD between the predefined map $\textbf {m}$ and the real target map $\textbf {t}$, where the real target map $\textbf {t}$ is calculated by tracing the rays to the virtual plane. Figures 5(b) and 5(c) show the convergence of energy conservation and the boundary condition, respectively. The definitions of the three convergence metrics are the same as in [15]. After 10 iterations, the RMSDs tend to be stable, which demonstrates the good convergence of the improved LSRM method.

Fig. 5. (a) The RMSD from predefined map $\textbf {m}$ to real map $\textbf {t}$; (b) the RMSD of energy conservation; (c) the RMSD of boundary points.

The second example belongs to the near-field configuration: we design a picture-generating illumination system in which the ratio of the illuminance of the picture to that of the background is 4. The 3D receiver, which is tilted relative to the $xy$ plane, is predefined as:

$$\left\{\begin{matrix} x=120U, y=90V,\\ z=90+8.1(1-2U)^2e^{{-}4U^2-(2V+1)^2}-0.9e^{-(2U+1)^2-4V^2}\\-13.5(0.4U-20U^3-32V^5)e^{{-}4(U^2+V^2)}+90y\tan 10^{\circ}. \end{matrix}\right.$$
The height of the designed lens is $h_{\textrm {lens}}=18$ mm, and this design falls in the near-field regime. Figure 6(a) shows the ray-tracing model of the illumination system, the $y$-$z$ section is given in Fig. 6(b), and the Gaussian curvature of the freeform surface is given in Fig. 6(c). Comparing the original picture [Fig. 6(d)] with the simulated irradiance on the tilted 3D target [Fig. 6(e)], we find that the light distribution is precisely controlled. The transformed irradiance distribution on the virtual plane and the virtual target map are presented in Figs. 6(f) and 6(g). The algorithm is again implemented on 322$\times$322 grids, and only 60$\times$60 grids are shown for better visualization. For this design, we perform five iterations and find that the deviation between successive maps on the virtual plane, expressed by Eq. (4), tends to be stable, with an RMSD of less than $10^{-2}$ mm. The run time of this example is about 58 minutes on a Windows 10 desktop PC (Intel i7-10700K CPU with 32 GB RAM).

Fig. 6. The near-field configuration with tilted geometry: (a) the designed illumination system; (b) the $y$-$z$ plane of the optical system; (c) the Gaussian curvature of the freeform surface; (d) the original picture; (e) the simulated irradiance distribution; (f) the virtual irradiance distribution on the virtual plane; (g) the target map on the virtual plane.

Figure 7(a) shows the initial irradiance distribution on the 3D receiver, obtained directly under the far-field assumption. It is clear that the boundary of the illumination pattern extends beyond the preset one. Moreover, the deviation between the simulated and predefined irradiance distributions is relatively large at the target edge. Figure 7(b) shows the final irradiance distribution obtained by the method introduced in Section 2.1 for the near-field configuration. The optical performance is clearly and greatly improved.

Fig. 7. (a) The initial irradiance distribution obtained based on the far-field assumption; (b) the improved irradiance distribution based on the near-field configuration.

The third design aims to generate uniformly illuminated letters "HUST" on the 3D target surface; in contrast to the first design example, this design belongs to the near-field configuration. The target surface is defined as:

$$\left\{\begin{matrix} x=100U, y=100V,\\ z=90+9(1-2U)^2e^{{-}4U^2-(2V+1)^2}-e^{-(2U+1)^2-4V^2}\\-15(0.4U-20U^3-32V^5)e^{{-}4(U^2+V^2)}. \end{matrix}\right.$$
A prototype of this design example is implemented. For manufacturing convenience, we choose a flat entrance surface for the freeform lens, and the source domain $\Omega _s=\left \{(x_{\textrm {s}},y_{\textrm {s}}){\big |}\ |x_{\textrm {s}}|\leqslant 0.94\textrm {mm},\ |y_{\textrm {s}}|\leqslant 0.94\textrm {mm}\right \}$ is considered. The lens is fabricated with an accuracy of ${\pm 35\,\mu \textrm {m}}$, and the prototype, with dimensions of 33.4 mm$\times$33.5 mm$\times$10.7 mm, is shown in Fig. 8(a). The light source is an LED (XLAMP BLUE 465NM 1616 SMD) with a central wavelength of about 460 nm and an emitter size of 1.2 mm$\times$1.2 mm. Figure 8(b) shows the curved receiver made by 3D printing. The experimental setup is illustrated in Fig. 8(c), and the actual illumination pattern is shown in Fig. 8(d). The illuminance distributions along the lines ${x}$ = 22 mm and ${y}$ = 21 mm are also depicted. The actual illumination system cannot maintain sharp letter edges well, mainly because the LED emitter size is not infinitesimal compared with the lens size. In addition, fabrication imperfections and alignment errors also contribute to the performance degradation.

Fig. 8. Experimental verification: (a) the fabricated lens prototype; (b) the 3D receiver; (c) the experimental setup and its (d) recorded illumination pattern.

4. Conclusion

In conclusion, we developed a virtual irradiance transport method that converts the 3D surface illumination problem into a planar illumination problem. Combined with the improved LSRM, we can achieve precise and highly efficient light control for 3D target surfaces, in both far-field and near-field configurations. These advantages were demonstrated in three challenging designs through simulations and experimental tests. Besides the ray mapping method used in this paper, other well-developed zero-étendue algorithms, such as the SQM and MA methods, can also be used for freeform optics design once the irradiance has been transformed precisely. We expect the proposed method to promote further applications of LEDs in 3D surface lighting, in both fundamental research and practical illumination systems.

Funding

National Natural Science Foundation of China (61805088); Science, Technology and Innovation Commission of Shenzhen Municipality (JCYJ20190809100811375); Key Research and Development Program of Hubei Province (2020BAB121); Wuhan National Laboratory for Optoelectronics (Innovation Fund); Natural Science Foundation of Jiangsu Province (BK20180233).

Disclosures

The authors declare no conflicts of interest.

References

1. R. Wu, Z. Feng, Z. Zheng, R. Liang, P. Benítez, J. C. Miñano, and F. Duerr, “Design of freeform illumination optics,” Laser Photonics Rev. 12(7), 1700310 (2018). [CrossRef]

2. H. Ries and J. Muschaweck, “Tailored freeform optical surfaces,” J. Opt. Soc. Am. A 19(3), 590–595 (2002). [CrossRef]  

3. R. Wu, L. Xu, P. Liu, Y. Zhang, Z. Zheng, H. Li, and X. Liu, “Freeform illumination design: a nonlinear boundary problem for the elliptic Monge–Ampère equation,” Opt. Lett. 38(2), 229–231 (2013). [CrossRef]

4. K. Brix, Y. Hafizogullari, and A. Platen, “Designing illumination lenses and mirrors by the numerical solution of Monge–Ampère equations,” J. Opt. Soc. Am. A 32(11), 2227–2236 (2015). [CrossRef]

5. C. Bösel and H. Gross, “Single freeform surface design for prescribed input wavefront and target irradiance,” J. Opt. Soc. Am. A 34(9), 1490–1499 (2017). [CrossRef]  

6. L. B. Romijn, J. H. M. ten Thije Boonkkamp, and W. L. IJzerman, “Freeform lens design for a point source and far-field target,” J. Opt. Soc. Am. A 36(11), 1926–1939 (2019). [CrossRef]  

7. Z. Feng, D. Cheng, and Y. Wang, “Iterative wavefront tailoring to simplify freeform optical design for prescribed irradiance,” Opt. Lett. 44(9), 2274–2277 (2019). [CrossRef]  

8. V. Oliker, “Controlling light with freeform multifocal lens designed with supporting quadric method (SQM),” Opt. Express 25(4), A58–A72 (2017). [CrossRef]

9. F. R. Fournier, W. J. Cassarly, and J. P. Rolland, “Fast freeform reflector generation using source-target maps,” Opt. Express 18(5), 5295–5304 (2010). [CrossRef]  

10. D. Michaelis, P. Schreiber, and A. Bräuer, “Cartesian oval representation of freeform optics in illumination systems,” Opt. Lett. 36(6), 918–920 (2011). [CrossRef]  

11. A. Bruneton, A. Bäuerle, R. Wester, J. Stollenwerk, and P. Loosen, “High resolution irradiance tailoring using multiple freeform surfaces,” Opt. Express 21(9), 10563–10571 (2013). [CrossRef]  

12. C. Bösel and H. Gross, “Ray mapping approach for the efficient design of continuous freeform surfaces,” Opt. Express 24(13), 14271–14282 (2016). [CrossRef]  

13. K. Desnijder, P. Hanselaer, and Y. Meuret, “Ray mapping method for off-axis and non-paraxial freeform illumination lens design,” Opt. Lett. 44(4), 771–774 (2019). [CrossRef]  

14. S. Wei, Z. Zhu, Z. Fan, Y. Yan, and D. Ma, “Double freeform surfaces design for beam shaping with non-planar wavefront using an integrable ray mapping method,” Opt. Express 27(19), 26757–26771 (2019). [CrossRef]  

15. S. Wei, Z. Zhu, Z. Fan, and D. Ma, “Least-squares ray mapping method for freeform illumination optics design,” Opt. Express 28(3), 3811–3822 (2020). [CrossRef]  

16. L. L. Doskolovich, D. A. Bykov, A. A. Mingazov, and E. A. Bezus, “Optimal mass transportation and linear assignment problems in the design of freeform refractive optical elements generating far-field irradiance distributions,” Opt. Express 27(9), 13083–13097 (2019). [CrossRef]  

17. D. A. Bykov, L. L. Doskolovich, and E. A. Bezus, “Multiscale approach and linear assignment problem in designing mirrors generating far-field irradiance distributions,” Opt. Lett. 45(13), 3549–3552 (2020). [CrossRef]  

18. A. Bäuerle, A. Bruneton, R. Wester, J. Stollenwerk, and P. Loosen, “Algorithm for irradiance tailoring using multiple freeform optical surfaces,” Opt. Express 20(13), 14477–14485 (2012). [CrossRef]  

19. X. Sun, L. Kong, and M. Xu, “Uniform illumination for nonplanar surface based on freeform surfaces,” IEEE Photonics J. 11(3), 1–11 (2019). [CrossRef]  

20. R. Wu, L. Yang, Z. Ding, L. Zhao, D. Wang, K. Li, F. Wu, Y. Li, Z. Zheng, and X. Liu, “Precise light control in highly tilted geometry by freeform illumination optics,” Opt. Lett. 44(11), 2887–2890 (2019). [CrossRef]  

21. Z. Feng, D. Cheng, and Y. Wang, “Iterative freeform lens design for prescribed irradiance on curved target,” Opto-Electron. Adv. 3(7), 200010 (2020). [CrossRef]  

22. Z. Feng, B. D. Froese, and R. Liang, “Freeform illumination optics construction following an optimal transport map,” Appl. Opt. 55(16), 4301–4306 (2016). [CrossRef]  

23. C. Prins, R. Beltman, J. ten Thije Boonkkamp, W. IJzerman, and T. W. Tukker, “A least-squares method for optimal transport using the Monge–Ampère equation,” SIAM J. Sci. Comput. 37(6), B937–B961 (2015). [CrossRef]

24. Z. Zhu, S. Wei, R. Liu, Z. Hong, Z. Zheng, Z. Fan, and D. Ma, “Freeform surface design for high-efficient LED low-beam headlamp lens,” Opt. Commun. 477, 126269 (2020). [CrossRef]
