
Freeform surface topology prediction for prescribed illumination via semi-supervised learning


Abstract

Despite significant advances in the field of freeform optical design, there still remain various unsolved problems. One of these is the design of smooth, shallow freeform topologies, consisting of multiple convex, concave and saddle shaped regions, in order to generate a prescribed illumination pattern. Such freeform topologies are relevant in the context of glare-free illumination and thin, refractive beam shaping elements. Machine learning techniques already proved to be extremely valuable in solving complex inverse problems in optics and photonics, but their application to freeform optical design is mostly limited to imaging optics. This paper presents a rapid, standalone framework for the prediction of freeform surface topologies that generate a prescribed irradiance distribution, from a predefined light source. The framework employs a 2D convolutional neural network to model the relationship between the prescribed target irradiance and required freeform topology. This network is trained on the loss between the obtained irradiance and input irradiance, using a second network that replaces Monte-Carlo raytracing from source to target. This semi-supervised learning approach proves to be superior compared to a supervised learning approach using ground truth freeform topology/irradiance pairs; a fact that is connected to the observation that multiple freeform topologies can yield similar irradiance patterns. The resulting network is able to rapidly predict smooth freeform topologies that generate arbitrary irradiance patterns, and could serve as an inspiration for applying machine learning to other open problems in freeform illumination design.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical systems to control the propagation of light play a major role in science and technology, and their importance will not diminish in the near future [1–3]. For decades, optical engineers have relied on systems with multiple (a)spherical surfaces, which have rotational symmetry [4,5]. Recent advances in manufacturing technology, however, allow the fabrication of freeform optical surfaces with completely arbitrary shape, thereby offering greater flexibility for controlling the propagation of light [6,7]. Freeform optics are widely used in imaging systems to guide the light of points in object space effectively to corresponding points in image space [8–11]. Also in the field of illumination design, freeform components are extensively used to map the emitted light distribution from a source into a desired target pattern, while maintaining the luminous flux [12–14]. The demand for such freeform illumination systems is rapidly growing, due to their application in fast-evolving fields such as optical lithography, automotive headlights and laser beam shaping [15,16].

Illumination optics are determined by the light source under consideration and the targeted light pattern. Their design typically comes down to calculating one or more optical surfaces that manipulate the incoming rays, in order to produce a certain prescribed irradiance distribution. Freeform illumination design methods can be separated into two categories: zero-étendue algorithms and algorithms for extended light sources [15]. Zero-étendue methods are based on the assumption that the source is ideal, e.g., a point source or a collimated laser beam. Freeform design for zero-étendue sources has matured significantly, and various accurate calculation methods exist, such as the ray-mapping and Monge-Ampère methods [17–20]. Unfortunately, the étendue of real light sources can seldom be neglected in practice. When applying zero-étendue algorithms to such extended light sources, the resulting pattern becomes blurry, and more dedicated design procedures are needed [21,22]. To solve this problem, wavefront tailoring and deconvolution-based algorithms have been proposed [23–26]. Although these methods function well under certain specific conditions, e.g., single-chip LED emitters, they remain unsuitable for arbitrary extended light sources. Alternatively, a solution can be obtained through optimization of the parameterized freeform surface(s) [27–31]. These iterative methods, however, are typically computationally intensive and require significant convergence time. So, as opposed to zero-étendue algorithms, the search for fast and effective freeform algorithms for extended light sources is still ongoing.

Aside from the considered light source and targeted light distribution, there is another aspect that is of practical importance in the design of freeform components for illumination purposes: the resulting freeform shape. Most freeform design methods result in overall convex or concave freeform surface shapes (Fig. 1(a)). This can lead to visual discomfort glare when combined with high-brightness light sources [32]. A possible approach to address this important issue in lighting is working with freeform lens arrays, where each lens element illuminates (part of) the complete target pattern. In doing so, the light flux towards each point of the target pattern is spread over the entire lens surface, similar to the case of a light diffuser. This results in a reduced brightness perception when looking towards the illumination optic, in comparison with a single-channel freeform lens [33]. Unfortunately, such freeform lens arrays have C2 discontinuities in between the individual lens elements. These discontinuities complicate manufacturing and lead to unwanted straylight [34]. Such problems could be avoided with smooth, continuous freeform surfaces that combine multiple convex, concave and saddle shaped regions, for which we introduce the term freeform topologies (Fig. 1(c)). For collimated light sources, such an oscillating freeform topology can be used to generate broad illumination patterns with much thinner optical elements, compared with overall convex or concave freeform surfaces. Designing such smooth and shallow, oscillating freeform topologies is challenging, however. Within the domain of laser beam shaping, continuous phase plates (CPP) rely on a similar surface topology, but these are mainly used for converging laser beams, and current design methods do not enable tailoring of arbitrary irradiance shapes [35,36].
On the other hand, such topologies could be constructed by crafting a continuous curvature between the individual convex and concave lenslets of a freeform lens array. Such a multi-stage method, however, would likely generalize poorly across a wide range of different, complex irradiance patterns. To this day, a direct design method for freeform topologies that is capable of generating arbitrary irradiance patterns remains unpublished.


Fig. 1. (a) Convex freeform illumination lens that generates a prescribed (far-field) target pattern from a given light source. (b) The light patterns generated by each of the four individual sub-lenses of a discontinuous lens array result together in the prescribed target pattern. (c) A smooth, continuous freeform surface topology that consists of multiple convex, concave and saddle-shaped regions avoids strong C2 surface discontinuities. (d) Such a freeform surface topology can be described as a NURBS surface that is controlled by a matrix of control points. Remark: in (a) and (b) the illumination component and illumination pattern are not drawn to scale for illustration purposes.


To reduce the time and complexity of optical design, researchers have recently started to use machine learning techniques. Applying deep learning to solve inverse problems in photonics and optics has only recently emerged, but the potential is indisputable [37–39]. It may arguably become one of the main catalysts in designing complex optical configurations in the near future [40]. In the field of computational holography for example, mature machine learning methods are already state-of-the-art [41,42]. Deep learning architectures for freeform design have also been presented in past research, but so far mainly within the domain of imaging optics, in order to find starting points close to a final solution [43,44–46]. Within illumination design, one fully trained network has been demonstrated, and the approach was restricted to creating very basic shapes [47]. The authors of this work also underlined the need for more advanced procedures. One of the main challenges in realizing a fully trained network for freeform illumination design via supervised or unsupervised learning is the lack of a fast forward operator to link the input and output parameters. Monte-Carlo (MC) raytracing is typically used to evaluate the performance of freeform optics for a given light source. Taking into account the complex shapes that freeform surfaces can adopt, tens of thousands to even millions of rays are needed to model the resulting irradiance pattern with limited statistical noise.

This paper presents a deep learning framework for solving the inverse problem of finding a refractive freeform surface topology that produces a prescribed irradiance distribution from a predefined light source. The framework integrates supervised learning for modeling the raytracing on one hand, and semi-supervised learning for freeform surface prediction on the other hand. This two-stage approach alleviates the computational burden of training the freeform surface prediction entirely via Monte-Carlo raytracing. We demonstrate that rapid convergence can be obtained by using, as the training loss, the deviation between the irradiance resulting from a predicted freeform surface and the target irradiance. This approach differs from the more typical method of considering the mean absolute error of the predicted freeform surface shape compared to the ground truth, which proves to have inferior convergence. This behavior is linked to the observation that two or more distinct freeform topologies can result in visually identical illumination patterns. The deep learning framework rapidly provides a suitable freeform design, as is illustrated for various target patterns. As such, we demonstrate that deep learning does not only serve as a tool to enhance typical optimization procedures, but that it can also be used as a fast, standalone method in freeform illumination design.

2. Methods

Problem statement Consider a thin refractive element, consisting of a planar entrance surface, orthogonal to the optical axis (z-axis) and a freeform surface at the exit. A smooth (C2 continuous) freeform topology can be represented as a non-uniform rational basis spline (NURBS) surface with $a \times b$ equidistant control points (Fig. 1(d)). When the x- and y-coordinates of these control points are predefined, only the z-coordinate is a free parameter that samples the local height of the freeform surface. With this parameterization, the freeform component is fully characterized by an $a \times b$ matrix ($\mathbf {\mathcal {F}}$). By restricting all z-coordinates to a certain interval [0, t], the thickness of the refractive element can be limited.
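This parameterization can be sketched numerically. With uniform weights, a NURBS surface reduces to a B-spline surface, so a minimal sketch (an assumption for illustration, not the authors' implementation) can evaluate an $11 \times 11$ control grid with SciPy's `bisplev` and a clamped cubic knot vector; by the convex-hull property, the evaluated surface stays within the control-point interval $[0, t]$:

```python
import numpy as np
from scipy.interpolate import bisplev

def clamped_knots(n_ctrl, degree=3):
    """Clamped, uniform knot vector for n_ctrl control points."""
    n_interior = n_ctrl - degree - 1
    interior = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
    return np.concatenate([np.zeros(degree + 1), interior, np.ones(degree + 1)])

a = b = 11                       # control grid size, as in the paper
t = 0.8                          # height interval [0, t] in mm
rng = np.random.default_rng(0)
F = rng.uniform(0.0, t, (a, b))  # free z-coordinates of the control points

tx, ty = clamped_knots(a), clamped_knots(b)
u = v = np.linspace(0.0, 1.0, 64)
# bisplev expects the flattened control coefficients in its tck tuple
Z = bisplev(u, v, (tx, ty, F.ravel(), 3, 3))
```

Because the knot vector is clamped, the surface interpolates the corner control points, and all sampled heights remain inside $[0, t]$.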

Light rays can be propagated from the source through the optical surfaces towards a detector surface. By binning the radiant flux of light rays on this detector, in an $m \times n$ spatial receiver grid, the irradiance distribution on the detector ($\mathcal {I}$) can be sampled. The propagation of the light through the optical system via raytracing thus corresponds with the mapping

$$\{\mathbf{\mathcal{F}} \in [0, t]^{a \times b}\} \rightarrow \{\mathcal{I} \in \mathbb{R}^{m \times n}_+\}.$$
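The sampling step of this mapping, binning per-ray radiant flux on the receiver, can be sketched with `numpy.histogram2d` (the receiver dimensions match those used later in Section 3; the ray hit data here are hypothetical):

```python
import numpy as np

def bin_irradiance(x, y, flux, m=50, n=50, half=500.0):
    """Bin the radiant flux of ray hits (x, y, in mm) on an m x n grid
    covering a square receiver of side 2*half; normalization by bin
    area is omitted, since only the irradiance shape matters here."""
    H, _, _ = np.histogram2d(x, y, bins=[m, n],
                             range=[[-half, half], [-half, half]],
                             weights=flux)
    return H

rng = np.random.default_rng(0)
x, y = rng.uniform(-500.0, 500.0, (2, 15000))   # hypothetical ray hits
I = bin_irradiance(x, y, np.full(15000, 1e-3))  # 1 mW carried per ray
```

Since every ray lands inside the receiver in this sketch, the binned map conserves the total flux of 15 W.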

The problem that is tackled in this work is modeling the inverse relation

$$\{\mathcal{I} \in \mathbb{R}^{m \times n}_+\} \rightarrow \{\mathbf{\mathcal{F}} \in [0, t]^{a \times b}\},$$
i.e., for any random irradiance distribution $\mathcal {I}$ on the target plane, predict the $\mathcal {F}$ that results in $\mathcal {I}$ via raytracing.

Framework structure Our deep learning framework consists of two different architectures (Fig. 2(a), 2(b)). The first deep learning architecture is a U-net that models the irradiance distribution on the receiver as a result of propagating light rays through the optical system. This architecture implements the mapping relation in Eq. (1), which is typically obtained with MC raytracing. U-nets are commonly used segmentation structures and are ideal for modeling 2D-to-2D mappings. They are widely used in medical imaging and for image deconvolution [48,49]. However, when tracing rays through arbitrary freeform surfaces, the resulting irradiance distribution often contains highly detailed features. U-nets generally predict images with relatively low resolution, causing these high resolution features to vanish during prediction. To attain more detailed irradiance prediction, a super resolution CNN (SRCNN) is appended [50]. From a practical point of view, an input of $a \times b$ freeform control points is thus encoded into a set of feature maps, representing the local properties of the freeform surface. These local properties are then reconstructively upsampled and combined into an $m \times n$ irradiance distribution.


Fig. 2. (a) Illustration of the U-net-SRCNN model for predicting the obtained irradiance on the target plane for a certain freeform surface. The model is trained by considering pairs of freeform topologies and the corresponding simulated far-field irradiance distributions, via MC raytracing. Training loss is evaluated using the SSIM loss between the MC-simulated ground truth $\mathcal {I}$ and the U-net-SRCNN-predicted $\mathcal {\hat {I}}$. (b) Illustration of the model to predict a freeform grid $\hat {\mathcal {F}}$ from an input irradiance $\mathcal {I}$. Training is achieved by considering the SSIM loss between the input irradiance $\mathcal {I}$ and $\hat {\mathcal {I}}$, which is the predicted irradiance of $\hat {\mathcal {F}}$ by the first model.


The full model, visualized in Fig. 2(a), is trained on structural similarity index measure (SSIM) loss in a supervised manner [51]:

$$\text{SSIM}(x, y) = \frac{{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}}{{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}}$$
where $x$ and $y$ represent the two images being compared, $\mu_x$ and $\mu_y$ their mean values, and $\sigma_x^2$ and $\sigma_y^2$ their variances. Furthermore, $\sigma_{xy}$ is the covariance between both images, and $C_1$ and $C_2$ are constants that stabilize the numerator and denominator in the calculation. The training data is generated using MC raytracing in the defined optical setting, producing a set of freeform-irradiance pairs ($\mathcal {F}$, $\mathcal {I}$). Moreover, data augmentation is performed by exploiting the inverse rotational symmetry of $\mathcal {F}$ and $\mathcal {I}$. In particular, since the freeform topology is characterized by a square grid, its rotation results in a reverse rotation of the corresponding irradiance distribution. In short:
$$\mbox{rot}(90, \mathcal{F}) \rightarrow \mbox{rot}({-}90, \mathcal{I}).$$

By considering this symmetry, each freeform surface can be rotated clockwise 3 times, resulting in 3 augmented freeform-irradiance pairs for training the models.
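The SSIM loss of Eq. (3) and the rotational augmentation of Eq. (4) can be sketched together; this is a simplified, single-window SSIM computed over whole images rather than local windows, and the constants and helper names are assumptions, not the authors' code:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM over whole images (Eq. (3)); production
    code would average the index over local sliding windows."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (x.var() + y.var() + c2))

def augment(F, I):
    """Eq. (4): rotating the square control grid one way rotates the
    far-field irradiance the other way; yields 3 extra training pairs."""
    return [(F, I)] + [(np.rot90(F, k), np.rot90(I, -k)) for k in range(1, 4)]

rng = np.random.default_rng(0)
F, I = rng.random((11, 11)), rng.random((50, 50))
pairs = augment(F, I)
```

Identical images give an SSIM of exactly 1, so `1 - ssim_global(...)` behaves as a proper loss.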

The second network architecture is designed to model the mapping relation of Eq. (2) and is schematically shown in Fig. 2(b), together with the considered learning strategy. This architecture consists of a typical CNN encoder network, followed by a 2D max-pooling head and a multi-layer perceptron (MLP) regressor. As visualized in Fig. 2(b), the CNN thus encodes an irradiance distribution into a set of feature maps, from which the maximal elements are selected via max-pooling. Based on these pooled features, the freeform control points are then regressed in the MLP layer, thus predicting the surface parameters $\hat {\mathbf {\mathcal {F}}}$ for an input irradiance $\mathcal {I}$ [52]. To ensure the prediction of a freeform surface topology that yields an irradiance distribution resembling the input irradiance, both the local and global characteristics of the input irradiance distributions $\mathcal {I}$ must be captured in the training phase. This is achieved by using a three-channel input to the network, consisting of $\mathcal {I}$ along with its exponential and logarithmic transformations. The integration of such transforms with CNNs has proven beneficial in past research, e.g., for the enhancement of feature extraction in low-light areas within images [53]. This is due to the increased value range in low-irradiance areas after applying a logarithmic transform; the opposite effect follows from an exponential transform. Supplying these transforms along with the original images then allows the model to simultaneously process low-, medium- and high-irradiance areas of the input distribution.
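A minimal sketch of such a three-channel input follows; the exact normalization and the epsilon guard are assumptions, not the paper's implementation:

```python
import numpy as np

def three_channel(I, eps=1e-6):
    """Stack the normalized irradiance with a log and an exp transform,
    stretching low- and high-irradiance regions respectively."""
    In = I / (I.max() + eps)
    log_c = np.log(In + eps)
    log_c = (log_c - log_c.min()) / (log_c.max() - log_c.min() + eps)
    exp_c = (np.exp(In) - 1.0) / (np.e - 1.0)
    return np.stack([In, log_c, exp_c])   # shape (3, m, n), CNN-ready

rng = np.random.default_rng(0)
X = three_channel(rng.random((50, 50)))
```

All three channels are rescaled to $[0, 1]$ so the CNN sees comparable value ranges.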

The pretrained U-net-SRCNN model is used in the training phase of this second network for two different tasks. First of all, it is used to produce a large set of input irradiance distributions for the learning phase of the second model, without the need to run a large number of MC raytracing simulations. The advantage of this approach compared to using completely arbitrary irradiance distributions is that these distributions result from the considered freeform topology parameterization, and can thus in principle be obtained with the considered freeform component. Secondly, the pretrained raytracing model is used during the actual training phase to produce a pseudo-labeled irradiance $\hat {\mathcal {I}}$ for any predicted $\hat {\mathbf {\mathcal {F}}}$ [54]. The assumption is that these generated pseudo-labels serve as legitimate irradiance distributions, reflecting the MC raytraced results. These pseudo-labels are then compared with the input irradiance distributions during the training phase of model 2. With $m_1$ and $m_2$ representing model 1 and model 2 respectively, the trainable loss $\mathcal {L}$ is evaluated as:

$$\mathcal{L}\{\mathcal{I}, m_2(\mathcal{I})\} = 1 - \mbox{SSIM}\{\mathcal{I}, m_1[m_2(\mathcal{I})]\} = 1 - \mbox{SSIM}\{\mathcal{I}, m_1(\mathcal{\hat{F}})\},$$
with $m_1(\mathcal {\hat {F}})$ the generated pseudo-labeled irradiance to be compared with the ground truth $\mathcal {I}$. By including the (frozen) raytracing model in the training phase of the freeform prediction architecture, the emphasis of model 2 is on replicating the ground truth irradiance distribution, as opposed to replicating the ground truth freeform surface, which is not considered in the training of the second model.
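The composed loss of Eq. (5) can be sketched with stand-in callables for the two networks; `ssim_global` is a simplified single-window SSIM, and both model stand-ins are purely hypothetical toys chosen so that the loss is verifiably zero when $m_1$ inverts $m_2$:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (x.var() + y.var() + c2))

def loss(I, m1, m2):
    """Eq. (5): only the irradiance mismatch is penalized; the
    predicted freeform F_hat never enters the loss directly."""
    F_hat = m2(I)                            # model 2: irradiance -> freeform
    return 1.0 - ssim_global(I, m1(F_hat))   # model 1 (frozen): freeform -> irradiance

# toy stand-ins: m1 exactly inverts m2, so the loss should vanish
m1 = lambda F: F ** 2
m2 = lambda I: np.sqrt(I)
I = np.random.default_rng(0).random((50, 50))
```

In the actual framework, gradients flow through the frozen $m_1$ into the weights of $m_2$; the sketch only illustrates the loss composition.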

This approach is somewhat related to the unsupervised learning strategy that was adopted in computational holography in preference to supervised learning on an extensive set of random phase masks paired with their simulated amplitude patterns [42]. A main difference is that computational holography can rely on an analytic forward/backward operator for the complex field at the image plane. For the considered case, the role of this analytic operator is taken over by the pretrained U-net-SRCNN model.

3. Results

Optical simulation setting The framework is applied in a specific optical setting, to illustrate its usage and performance. A planar, square light source of $3 \times 3$ mm with a Lambertian radiation pattern illuminates a square refractive element of $10 \times 10$ mm at a distance of 40 mm, with a planar entrance surface and a freeform exit surface. The refractive index of the component is 1.5, and a maximum thickness of $t = 0.8$ mm is considered. This component redirects the incident light towards a square receiver plane at a distance of $d_{rec} = 500$ mm and with a side length $s_{rec} = 1000$ mm, i.e., in the far field of the lens (see Fig. 2(a)). Only rays that intersect with the refractive element are traced towards the receiver plane. The freeform surface is characterized by a matrix of $11 \times 11$ equidistant control points of the corresponding third-degree NURBS surface [55]. In order to produce irradiance patterns that cover the entire target plane, freeform surface topologies with multiple hills and valleys are necessary. This is a consequence of the shallow lens thickness and the fact that large surface slopes are needed to realize the required deflection angles, which is only possible by combining multiple positive and negative surface slopes.

The dataset for supervised learning of the U-net is generated in the commercial software LightTools [56], which allows MC raytracing through freeform (NURBS) lens surfaces. To start, 25000 random freeform topologies are generated by selecting an arbitrary z-coordinate within the chosen interval for each lens surface control point. The corresponding irradiance distribution $\mathcal {I}$ for each freeform topology $\mathcal {F}$ is then calculated by tracing 15000 rays from the light source towards the receiver plane with a spatial receiver grid of 50 $\times$ 50 bins. To reduce roughness in distributions that are spread out across the entire receiver surface area, a small 3 $\times$ 3 smoothing kernel is applied. Training is executed on the resulting $\{\mathcal {F}, \mathcal {I}\}$ pairs, using a 90-10 train-validation split. More details on the architectures and training procedure in this specific setting are given in Appendix A.
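The data-generation recipe (random control-point grids, a 3 × 3 smoothing kernel, 90-10 split) can be sketched as follows; the MC raytracing itself is done in LightTools and is replaced here by a placeholder irradiance map, and the mean-filter choice for the kernel is an assumption:

```python
import numpy as np
from scipy.ndimage import uniform_filter

n_samples, a, b, t = 25000, 11, 11, 0.8
rng = np.random.default_rng(0)

# random freeform topologies: one uniform z in [0, t] per control point
F_all = rng.uniform(0.0, t, (n_samples, a, b))

# the 50 x 50 irradiance of each topology comes from MC raytracing in
# LightTools (not reproduced here); a 3 x 3 mean kernel reduces roughness
I_raw = rng.random((50, 50))          # placeholder for one raytraced map
I_smooth = uniform_filter(I_raw, size=3)

# 90-10 train-validation split
idx = rng.permutation(n_samples)
n_train = int(0.9 * n_samples)
train_idx, val_idx = idx[:n_train], idx[n_train:]
```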

Irradiance prediction Fig. 3(a) shows quantitative and visual results for the U-net-SRCNN model. The irradiance distribution predicted by the model is compared with the corresponding Monte-Carlo raytracing result for three different cases, together with the SSIM loss. The bottom figure shows an outlier in terms of SSIM, compared to the average SSIM of the complete validation set, which is 98.3%. The average 1.7% error is likely due to the simulation noise in the considered raytracing simulations, since 15000 MC rays are traced towards a 50 $\times$ 50 receiver grid; following a typical $1/\sqrt {N}$ approximation with $N = 15000/(50 \times 50)$ rays per bin, the noise per bin is estimated at 41%. A more extensive simulated rayset would be required to suppress these noise effects, but this turned out to be unnecessary. Indeed, in Fig. 3(b), the validation irradiance for a random SSIM outlier is compared with the MC simulated irradiance for the same freeform lens, but with a rayset of 1 million rays. Taking the absolute pixel-wise deviation as a measure, the U-net prediction lies closer to the newly raytraced 1-million-ray sample than the original 15000-ray sample does, even though the model was trained on the original, noisy samples. This demonstrates that the model is not only capable of reproducing raytracing, but that it also denoises the samples it was trained on. It is important to note that in recent research, U-net CNNs have already been suggested for fast irradiance evaluation of freeform components [57]. However, this previous architecture considered only 49 control points with an inference time of 67 ms, which is around 6 times slower than our proposed architecture. For the proposed semi-supervised learning approach, the inference speed of this U-net is crucial to ensure efficient training. Furthermore, the raysets for MC raytracing in that work were also large ($2 \times 10^6$), leading to a time-consuming data generation process.
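The quoted 41% figure follows directly from Poisson-style counting statistics on the average ray count per bin:

```python
import math

rays, bins = 15000, 50 * 50
n_per_bin = rays / bins                  # on average 6 rays per bin
rel_noise = 1.0 / math.sqrt(n_per_bin)   # ~0.41, i.e. ~41% noise per bin
```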


Fig. 3. (a) MC raytracing simulated irradiance versus predicted irradiance by the U-net-SRCNN model. The bottom image represents a low SSIM outlier in the validation set. (b) Comparison of pixel-wise errors: the irradiance simulated using 1 million rays is compared with the originally simulated irradiance using 15000 rays (left), and the U-net prediction, as shown on the right. Additionally, SSIM values are calculated for each comparison. Since the focus of this study lies in irradiance shapes, each irradiance was individually normalized.


Given that the proposed model is capable of generating accurate irradiance distributions, it can be used as a rapid alternative to MC raytracing simulations. Following this logic, a synthetic dataset of 3.4 million freeform $\mathcal {F}$ - irradiance $\mathcal {I}$ pairs was generated within minutes. This enables learning on a large synthetic dataset of input irradiance distributions that result from the considered freeform topology, enhancing generalization and reducing overfitting for the more complex, inverse freeform prediction task. The 3.4 million irradiance samples were again split into a 90-10 train-validation partition.

Freeform prediction on validation data. Fig. 4(a) shows quantitative and visual results for the freeform prediction model. Inference takes 11 ms for a single sample, and can be further accelerated with a GPU-specific inference optimizer, such as NVIDIA’s TensorRT module. The predicted results are studied for three validation cases, where in each case a ground truth freeform $\mathcal {F}$ - irradiance $\mathcal {I}$ pair is compared with the freeform control points $\mathcal {\hat {F}}$ predicted by the CNN encoder network, and the resulting irradiance distribution with this predicted freeform ($\hat {\mathcal {I}}_{sim}$). The resulting irradiance distribution has been simulated with MC raytracing using an extensive rayset of 1 million rays, rather than with the U-net model used in the training. This ensures a fair assessment of the capability of the model to predict an $\mathcal {\hat {F}}$ that results in a prescribed $\mathcal {I}$ via raytracing. A quantitative assessment of the freeform prediction accuracy is provided by again considering the SSIM between the input $\mathcal {I}$ and the raytraced irradiance distribution $\hat {\mathcal {I}}_{sim}$. The visual similarity between $\mathcal {I}$ and $\hat {\mathcal {I}}_{sim}$ illustrates the performance of the developed framework. However, one also notices the visual discrepancy between $\mathcal {F}$ and $\mathcal {\hat {F}}$. Figure 4(b) considers this discrepancy in more detail for one specific case, and shows that the pixel-wise deviation for the freeform control points and the interpolated NURBS topologies is much higher than for the irradiance distributions, a behavior that is observed for most of the validation cases. This observation supports the hypothesis that radically different freeform surface topologies can produce visually identical irradiance distributions. Therefore, training on the SSIM between the pseudo-labeled U-net irradiance and the targeted irradiance is a more logical approach than training on the mean absolute error of the predicted freeform surface compared to the ground truth.


Fig. 4. (a) Freeform prediction results for three validation cases. Ground truth ($\mathcal {F}, \mathcal {I}$) against the predicted freeform matrix ($\mathcal {\hat {F}}$) and corresponding MC simulated irradiance ($\hat {\mathcal {I}}_{sim}$) using 1 million rays. (b) (i) Pixel-wise error for the normalized predicted freeform surface control points $|\mathcal {F} - \mathcal {\hat {F}}|$ as well as the interpolated NURBS surfaces. (ii) Pixel-wise error for the simulated irradiances. Notice the significant difference in freeform surface topology (i) while producing an almost identical irradiance distribution (ii).


Freeform prediction on custom targets. The model performance is also verified in terms of predicting freeform topologies that produce prescribed irradiance distributions, which are neither in the training nor in the validation set. This is the main target of the developed framework. Figure 5(a) shows an overview of the results for three chosen irradiance patterns ($\mathcal {I}_t$). The predicted freeform surface topology ($\hat {\mathcal {F}}$) and corresponding raytraced irradiance pattern ($\hat {\mathcal {I}}_{sim}$) are given, together with the SSIM of the simulated pattern with respect to the prescribed irradiance. The results indicate that the model is capable of producing fairly complicated freeform surface topologies that match the target irradiance distributions. Still, one could wonder whether the model produces the best freeform topology prediction in terms of SSIM, since there is no raytraced ground truth in this case. As a test, a rotate test-time augmentation is applied by considering all 90$^{\circ }$-rotations of the target pattern. The obtained freeform topologies with the resulting MC-simulated irradiance and corresponding SSIM values are shown in Fig. 5(b). These results re-affirm that multiple freeform topologies can result in nearly identical irradiance patterns, and the model finds one of these topologies. In other words, the inverse problem is ill-posed, at least within the limitations of MC raytracing. From a practical point of view however, such rotational test-time augmentations, or similar alternatives, could prove interesting to generate multiple solutions, out of which the best-performing, smoothest or most oscillating option could be selected, depending on the specific application.
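Such a rotate test-time augmentation can be sketched as follows; the `predict` and `score` callables stand in for the trained model and the SSIM check against an MC simulation, so this is an assumed implementation, not the authors' code:

```python
import numpy as np

def tta_predict(I_t, predict, score):
    """Feed all four 90-degree rotations of the target to the model and
    keep the candidate freeform whose irradiance scores best. Consistent
    with Eq. (4), a prediction made for a target rotated by k quarter-turns
    is rotated back by k before comparison."""
    best_F, best_s = None, -np.inf
    for k in range(4):
        I_rot = np.rot90(I_t, k)
        F_hat = predict(I_rot)
        s = score(I_rot, F_hat)
        if s > best_s:
            best_F, best_s = np.rot90(F_hat, k), s
    return best_F, best_s

# toy stand-ins: "freeform" = irradiance, score = negative L1 deviation
I_t = np.random.default_rng(0).random((50, 50))
F_best, s_best = tta_predict(I_t, predict=lambda I: I,
                             score=lambda I, F: -np.abs(I - F).sum())
```

Instead of keeping only the best candidate, all four could of course be returned, matching the paper's suggestion of selecting the smoothest or most oscillating solution per application.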


Fig. 5. Freeform prediction results for certain custom target irradiances $\mathcal {I}_t$. (a) $\mathcal {I}_t$ against the predicted freeform surface topologies $\hat {\mathcal {F}}$ and the MC raytraced irradiance $\hat {\mathcal {I}}_{sim}$, resulting from $\hat {\mathcal {F}}$. (b) Rotate test-time augmentation on the triangular target. The results illustrate the existence of various freeform surface topologies that produce almost visually identical irradiance distributions.


Training performance. Finally, the efficiency of the proposed training approach is assessed through a comparative analysis with two alternative methodologies.

  • (a) Training in a supervised manner, on L1 loss between $\mathcal {F}$ and $\mathcal {\hat {F}}$, using the full synthetic dataset.
  • (b) Training on the SSIM loss with pseudo-labeled irradiances, but using the 25000 MC raytraced irradiance samples instead of the predicted irradiances by the U-net-SRCNN model.

Figure 6 considers two target distributions to illustrate the main performance differences: an irradiance distribution from the validation set, linked to an actual freeform topology, and a custom prescribed irradiance. For both cases, the regression activation maps (RAM) and the resulting raytraced irradiances $\hat {\mathcal {I}}_{sim}$ from the predicted freeform topologies are shown for methods (a) and (b), compared to the proposed approach. Regression activation maps are included since they provide a detailed representation of how a model localizes the discriminative regions affecting the regression outcome [58]. Comparing the RAM for the first target pattern, it is apparent that method (a) fails to produce confident feature maps at relevant locations. In comparison, model (b) manages to locate the core of the supplied irradiance as the most relevant area, but some noise remains. The proposed training method clearly delivers the most accurate localization, with activation maps that overlap with the target distribution. This results in the highest SSIM value for the corresponding raytraced irradiance. Similar results can be seen for the custom target distribution. In this case, the benefit of relying on synthetic data over the base MC raytraced data (method (b)) is visually clear when looking at the obtained irradiance distribution.


Fig. 6. Performance comparison of three different learning approaches. Regression activation maps (RAM, normalized) and the produced irradiance ($\hat {\mathcal {I}}_{sim}$) of the predicted freeform topology $\mathcal {\hat {F}}$ are shown, both for a target irradiance from the validation set and a custom target irradiance.


Comparison with state-of-the-art A goal of this study was to produce freeform surface topologies that differ from classical freeform surfaces with an overall convex/concave shape. To illustrate this capacity, a generated freeform topology is compared with the solution calculated by the LightTools Freeform Design Feature tool, also using $11 \times 11$ control points. In both cases, a large uniform triangle is considered as the target irradiance pattern. Figure 7(a) shows the comparison between the resulting NURBS surfaces. Notice the expected concave shape of the LightTools-generated freeform surface versus the varying freeform topology generated by our framework, which results in a 2.8$\times$ smaller surface height. The resulting irradiance distributions are shown in (ii). In both cases, the limited number of control points leads to clear visual deviations from the target pattern; however, the SSIM of the LightTools solution is approximately 10% lower than that of the proposed method. The oscillating shape of the topology might raise concerns about TIR losses; yet, with Fresnel losses enabled in the raytracing simulation, the transmission efficiency is 91.7%, only 0.6% lower than the efficiency of the classical freeform surface (92.3%).
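The SSIM values quoted above follow the standard structural-similarity definition of [51], with stabilizing constants $C_1 = (0.01L)^2$ and $C_2 = (0.03L)^2$ for images of dynamic range $L$. A minimal single-window numpy sketch (the paper may well use a windowed implementation; this global variant only illustrates the formula):

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM between two normalized irradiance images,
    with the standard constants from Wang et al. [51]."""
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + C1) * (2 * cov + C2)
    den = (mx ** 2 + my ** 2 + C1) * (x.var() + y.var() + C2)
    return num / den

# toy irradiance maps (hypothetical data)
rng = np.random.default_rng(0)
a = rng.random((50, 50))
b = rng.random((50, 50))
```

Identical images yield an SSIM of exactly 1, and any mismatch lowers the score, which is why SSIM is a convenient shape-sensitive comparison metric here.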


Fig. 7. (a) Comparison between freeform surfaces generated by LightTools and the proposed framework, in terms of (i) freeform surface shape, (ii) resulting irradiance distribution at receiver plane and (iii) radiance distribution at freeform component. (b) The spatial spreading of the radiance distribution at the illumination component, towards a specific receiver direction, can be sampled with a radiance meter in the LightTools software. The size of the freeform element was not drawn at scale for illustration purposes.


Finally, it is verified that the freeform surface topology results in a more spread-out light distribution from the exit surface towards each point of the target pattern. To this end, both freeform components are simulated in LightTools, and a small radiance meter is positioned at the receiver surface with its field-of-view aimed at the freeform exit surface. In this way, the radiance exiting the illumination component is sampled from a specific receiver direction. In the case of the concave lens, light only propagates from a specific position/area on the freeform surface towards a corresponding position in the target pattern; there is thus a one-to-one mapping of the freeform surface to the target pattern. In the case of the freeform surface topology, however, light propagates from multiple positions/areas on the freeform surface towards each position in the target pattern, as illustrated in (iii). This spatial spreading of the radiance distribution over the entire illumination component is an effective approach to avoid visual discomfort glare.

4. Discussion

This paper presents a semi-supervised learning framework for predicting refractive freeform topologies that produce a prescribed target irradiance. In contrast to prior work on machine learning for freeform design, which predominantly relies on 1D multi-layer perceptron-like networks with contextual information, this study employs 2D convolutional neural networks (CNN) to model the relationship between the obtained irradiance and the freeform topology. To train the network, a U-net models the Monte-Carlo raytracing of light from a predefined light source, generating pseudo-labeled irradiance distributions that are compared with the input target irradiances. We demonstrate that this semi-supervised learning approach for freeform topology prediction is superior to a supervised learning approach using ground truth freeform topology/irradiance pairs. The resulting framework offers an end-to-end solution for rapidly designing smooth freeform topologies that produce an arbitrary prescribed target irradiance. The framework is trained within a specific optical setting, using a restricted parameterization for the freeform lens topology. This implies that the model can only predict freeform components within this parameterization and for the considered illumination setting. Despite these limitations, the proposed framework offers significant opportunities for the design and implementation of novel freeform optics.
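The core of this training scheme is the loss $\mathcal{L} = 1 - \mathrm{SSIM}\{\mathcal{I}, m_1[m_2(\mathcal{I})]\}$, where $m_2$ predicts the control points from the target irradiance and the frozen surrogate $m_1$ maps them back to a pseudo-labeled irradiance. The sketch below uses toy stand-in callables in place of the trained networks; only the composition of the loss reflects the paper, everything else is illustrative.

```python
import numpy as np

def ssim(x, y, C1=1e-4, C2=9e-4):
    # global (single-window) SSIM, constants for unit-range images
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (x.var() + y.var() + C2))

def semi_supervised_loss(I, predict_freeform, raytrace_surrogate):
    """L = 1 - SSIM(I, m1(m2(I))). The surrogate m1 stays frozen, so
    gradients only update the freeform prediction model m2."""
    F_hat = predict_freeform(I)        # m2: irradiance -> control points
    I_hat = raytrace_surrogate(F_hat)  # m1: control points -> irradiance
    return 1.0 - ssim(I, I_hat)

# hypothetical stand-ins: a perfect prediction chain drives the loss to zero
I = np.random.default_rng(1).random((50, 50))
loss = semi_supervised_loss(I, lambda i: i, lambda f: f)
```

Note that no ground-truth freeform surface appears in the loss: the network is supervised only through the irradiance it would produce, which is what allows several different topologies yielding the same pattern to all be valid solutions.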

The freeform components introduced in this paper, with their smooth oscillating surface topology and resulting limited thickness, could serve as a novel beam shaping technology. Current freeform micro-optical components always consist of a periodic array of multiple individual lenses or mirrors [59,60], and while the fabrication of such components has evolved significantly over the past years, the inevitable grooves between the different lens/mirror elements still pose significant manufacturing challenges when stray light must be avoided [61]. The smooth surface topologies produced by the proposed network avoid such strong C2 surface discontinuities. The resulting beam shaping components can be used to generate non-paraxial target distributions when illuminated with (nearly) planar incident wavefronts from LEDs or lasers.

The proposed framework and training method could be extended to other, more general freeform illumination design problems. A straightforward extension would be a surface topology parameterization with more control points. This requires up-scaling of the model, for which larger CNN variants, or even transformers, could be used. Along with the extension of the model, more training data and a higher irradiance resolution will also be required. A larger set of control points could allow the generation of even more complex irradiance targets, as well as the representation and design of larger/thinner optical components with more convex/concave/saddle-shaped regions. It should be clear, however, that when only one freeform surface is considered, the achievable level of detail is also limited by the size of the light source relative to its distance from the freeform component. A more interesting expansion from a practical point of view could therefore be the extension of the framework to arbitrary (extended) light sources and multiple freeform surfaces. This may be realized by using the spatial distribution of the light source as an additional input. Predicting multiple freeform surfaces could be achieved by increasing the size of the final MLP output head. Of course, this would also require a significant increase in the amount of training data. For the implementation of a flexible machine learning model for general freeform illumination, substantial computation power will clearly be required.

While the presented semi-supervised learning strategy proves superior to a supervised learning strategy for the prediction of shallow freeform surface topologies, it remains to be seen whether this would also be the most effective strategy for the design of overall convex/concave freeform surfaces. The training of such a framework would certainly require a more specific surface parameterization that enforces concavity or convexity. Alternatively, contextual data about the freeform shape could be added in the training phase of the 2D-CNN, e.g. by concatenating an encoded prompt about the surface shape into the final linear layer of the network. Whatever the outcome, also in this case, the use of an additional network for modeling the raytracing would result in much faster and more effective training.

The discussion above makes clear that the investigation of advanced machine learning techniques for the design of freeform illumination optics has only just begun. In this respect, the proposed framework can serve as a demonstration that deep learning enables rapid and standalone freeform optical design.

Appendix A: Framework specifications

Raytracing model (U-net) The raytracing model receives an 11 $\times$ 11 matrix of control points that unambiguously determines a freeform surface topology. This matrix is bilinearly interpolated to 50 $\times$ 50 to match the convolution kernels. The resulting matrix then passes through a U-net encoder-decoder structure, with the state-of-the-art ConvNeXt [62] as encoder (Fig. 8(a)). The encoder feature maps are decoded into a corresponding irradiance distribution via convolutions. The U-net output is a 25 $\times$ 25 pixel irradiance, which is upscaled to the target irradiance resolution of 50 $\times$ 50 pixels via bilinear interpolation. The upscaled images are then passed through a trainable SRCNN architecture [50] with kernel sizes of 7, 5 and 3, respectively. The SRCNN uses convolutions with non-linear (e.g. ReLU) activations to enhance image resolution by filling in some of the high-frequency image details.
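The SRCNN stage has the classic three-layer structure of [50]: patch extraction (7 $\times$ 7), non-linear mapping (5 $\times$ 5) and reconstruction (3 $\times$ 3). The numpy sketch below reduces this to a single channel to show only the layer structure; the real model uses many learned filter channels, and all kernels and names here are illustrative assumptions.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive single-channel convolution with zero ('same') padding."""
    k = kernel.shape[0]
    p = k // 2
    padded = np.pad(img, p)
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

def srcnn_refine(img, k7, k5, k3):
    """SRCNN-style refinement: 7x7, 5x5 and 3x3 convolutions with ReLU
    activations in between (single-channel structural sketch only)."""
    x = np.maximum(conv2d_same(img, k7), 0.0)
    x = np.maximum(conv2d_same(x, k5), 0.0)
    return conv2d_same(x, k3)

def delta(k):
    # identity kernel: a single 1 at the center
    d = np.zeros((k, k))
    d[k // 2, k // 2] = 1.0
    return d

# with identity kernels and a non-negative image, the chain is a no-op
img = np.random.default_rng(2).random((50, 50))
out = srcnn_refine(img, delta(7), delta(5), delta(3))
```

In the actual model these kernels are trained so that the cascade sharpens the bilinearly upscaled 25 $\times$ 25 U-net output into the final 50 $\times$ 50 irradiance.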


Fig. 8. (a) U-net-SRCNN model with visualization of how the control points are interpolated and how the SRCNN constructs the final irradiance distribution. (b) The freeform prediction model, with highlights on the input transformations and the MLP structure that predicts the freeform surface control points.


Freeform prediction model The main component of the freeform prediction architecture is a CNN encoder, which again is ConvNeXt [62]. The model takes a 3-channel image as input, consisting of:

$$\{\mathcal{I}, \mbox{log}(\mathcal{I}), \mbox{exp}(\mathcal{I})\}.$$

Using the logarithmic and exponential transforms of $\mathcal {I}$ allows the model to explore the global and local properties more easily (Fig. 8(b)). The model head contains a 2D maxpooling layer, a $2 \times 2$ filter that runs over all the extracted feature maps and retains the maximal value as output; this significantly reduces overfitting. The maxpooling output is passed through a non-linear multi-layer perceptron with GELU activations that outputs 121 variables. This output is then reshaped into an $11 \times 11$ grid $\hat {\mathcal {F}}$ of freeform topology control points.
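Building the 3-channel input of Eq. (6) is a simple channel-stacking operation. One caveat not addressed in the paper: $\log(\mathcal{I})$ diverges for zero-valued pixels, so the small epsilon below is an implementation assumption, as is the function name.

```python
import numpy as np

def make_input_channels(I, eps=1e-6):
    """Stack the target irradiance with its log and exp transforms into
    the 3-channel network input {I, log(I), exp(I)}; eps guards log(0)
    for dark pixels (an assumption, not stated in the paper)."""
    return np.stack([I, np.log(I + eps), np.exp(I)], axis=0)

# toy 50x50 target irradiance (hypothetical data)
I = np.random.default_rng(3).random((50, 50))
x = make_input_channels(I)
```

The log channel expands contrast in dim regions while the exp channel emphasizes bright regions, which is one plausible reading of why the transforms help the encoder capture global and local properties.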

Training setup All models in the proposed method use ImageNet-21k pre-trained weights [63], although their contribution is likely limited. Training uses exponential learning rate decay with linear warm-up. Training and testing were performed on an NVIDIA RTX 3070 GPU with 8 GB of VRAM. A full overview of the training, as well as the encoder variant used, is shown in Table 1. For the freeform prediction models, the training procedure is given for each of the methods discussed in the training performance section.
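The learning-rate policy named above (linear warm-up followed by exponential decay) can be sketched as a plain step-to-rate function. The hyperparameter values here are placeholders chosen for illustration; the actual settings are those listed in Table 1.

```python
def lr_schedule(step, base_lr=1e-3, warmup_steps=500, gamma=0.999):
    """Exponential learning-rate decay with linear warm-up.
    Hyperparameters are illustrative, not the values from Table 1."""
    if step < warmup_steps:
        # linear ramp from ~0 up to base_lr over the warm-up phase
        return base_lr * (step + 1) / warmup_steps
    # exponential decay after warm-up
    return base_lr * gamma ** (step - warmup_steps)
```

Such a function plugs directly into schedulers that accept a per-step multiplier (e.g. a lambda-based scheduler in common deep-learning frameworks).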


Table 1. Full training set-up for all the considered models.

Funding

Agentschap Innoveren en Ondernemen (HBC.2020.2713).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. H. J. Caulfield and S. Dolev, “Why future supercomputing requires optics,” Nat. Photonics 4(5), 261–263 (2010). [CrossRef]  

2. F. Capasso, “The future and promise of flat optics: a personal perspective,” Nanophotonics 7(6), 953–957 (2018). [CrossRef]  

3. D. K. Nikolov, A. Bauer, and F. Cheng, “Metaform optics: bridging nanophotonics and freeform optics,” Sci. Adv. 7(18), eabe5112 (2021). [CrossRef]  

4. F. Duerr and H. Thienpont, “Freeform imaging systems: Fermat’s principle unlocks “first time right” design,” Light: Sci. Appl. 10(1), 95 (2021). [CrossRef]  

5. J.-B. Volatier and G. Druart, “Differential method for freeform optics applied to two-mirror off-axis telescope design,” Opt. Lett. 44(5), 1174–1177 (2019). [CrossRef]  

6. J. Reimers, A. Bauer, K. P. Thompson, et al., “Freeform spectrometer enabling increased compactness,” Light: Sci. Appl. 6(7), e17026 (2017). [CrossRef]  

7. B. Zhang, G. Jin, and J. Zhu, “Towards automatic freeform optics design: coarse and fine search of the three-mirror solution space,” Light: Sci. Appl. 10(1), 65 (2021). [CrossRef]  

8. W. Jahn, M. Ferrari, and E. Hugot, “Innovative focal plane design for large space telescope using freeform mirrors,” Optica 4(10), 1188–1195 (2017). [CrossRef]  

9. J. Li, P. Fejes, and D. Lorenser, “Two-photon polymerisation 3d printed freeform micro-optics for optical coherence tomography fibre probes,” Sci. Rep. 8(1), 14789 (2018). [CrossRef]  

10. T. Yang, G.-F. Jin, and J. Zhu, “Automated design of freeform imaging systems,” Light: Sci. Appl. 6(10), e17081 (2017). [CrossRef]  

11. T. Yang, J. Zhu, X. Wu, et al., “Direct design of freeform surfaces and freeform imaging systems with a point-by-point three-dimensional construction-iteration method,” Opt. Express 23(8), 10233–10246 (2015). [CrossRef]  

12. R. Wu, L. Yang, and Z. Ding, “Precise light control in highly tilted geometry by freeform illumination optics,” Opt. Lett. 44(11), 2887–2890 (2019). [CrossRef]  

13. S. Wei, Z. Zhu, Z. Fan, et al., “Least-squares ray mapping method for freeform illumination optics design,” Opt. Express 28(3), 3811–3822 (2020). [CrossRef]  

14. A. N. Heemels, A. J. Adam, and H. P. Urbach, “Limits of realizing irradiance distributions with shift-invariant illumination systems and finite étendue sources,” J. Opt. Soc. Am. A 40(7), 1289–1302 (2023). [CrossRef]  

15. R. Wu, Z. Feng, and Z. Zheng, “Design of freeform illumination optics,” Laser Photonics Rev. 12(7), 1700310 (2018). [CrossRef]  

16. K. A. Ibrahim, D. Mahecic, and S. Manley, “Characterization of flat-fielding systems for quantitative microscopy,” Opt. Express 28(15), 22036–22048 (2020). [CrossRef]  

17. K. Desnijder, P. Hanselaer, and Y. Meuret, “Ray mapping method for off-axis and non-paraxial freeform illumination lens design,” Opt. Lett. 44(4), 771–774 (2019). [CrossRef]  

18. R. Wu, L. Xu, and P. Liu, “Freeform illumination design: a nonlinear boundary problem for the elliptic monge–ampére equation,” Opt. Lett. 38(2), 229–231 (2013). [CrossRef]  

19. A. Madrid-Sánchez, F. Duerr, and Y. Nie, “Freeform optics design method for illumination and laser beam shaping enabled by least squares and surface optimization,” Optik 269, 169941 (2022). [CrossRef]  

20. C. Prins, J. ten Thije Boonkkamp, and J. Van Roosmalen, “A monge–ampère-solver for free-form reflector design,” SIAM J. Sci. Comput. 36(3), B640–B660 (2014). [CrossRef]  

21. C. Bösel and H. Gross, “Compact freeform illumination system design for pattern generation with extended light sources,” Appl. Opt. 58(10), 2713–2724 (2019). [CrossRef]  

22. D. A. Birch and M. Brand, “Design of freeforms to uniformly illuminate polygonal targets from extended sources via edge ray mapping,” Appl. Opt. 59(22), 6490–6496 (2020). [CrossRef]  

23. O. Dross, R. Mohedano, P. Benitez, et al., “Review of sms design methods and real world applications,” in Nonimaging optics and efficient illumination systems, vol. 5529 (SPIE, 2004), pp. 35–47.

24. S. Sorgato, J. Chaves, H. Thienpont, et al., “Design of illumination optics with extended sources based on wavefront tailoring,” Optica 6(8), 966–971 (2019). [CrossRef]  

25. E. V. Byzov, S. V. Kravchenko, and M. A. Moiseev, “Optimization method for designing double-surface refractive optical elements for an extended light source,” Opt. Express 28(17), 24431–24443 (2020). [CrossRef]  

26. S. Wei, Z. Zhu, W. Li, et al., “Compact freeform illumination optics design by deblurring the response of extended sources,” Opt. Lett. 46(11), 2770–2773 (2021). [CrossRef]  

27. Z. Liu, P. Liu, and F. Yu, “Parametric optimization method for the design of high-efficiency free-form illumination system with a led source,” Chin. Opt. Lett. 10(11), 112201 (2012). [CrossRef]  

28. F. Fournier and J. Rolland, “Optimization of freeform lightpipes for light-emitting-diode projectors,” Appl. Opt. 47(7), 957–966 (2008). [CrossRef]  

29. X. Mao, H. Li, Y. Han, et al., “Two-step design method for highly compact three-dimensional freeform optical system for led surface light source,” Opt. Express 22(S6), A1491–A1506 (2014). [CrossRef]  

30. W. Situ, Y. Han, H. Li, et al., “Combined feedback method for designing a free-form optical system with complicated illumination patterns for an extended led source,” Opt. Express 19(S5), A1022–A1030 (2011). [CrossRef]  

31. L. Li and X. Hao, “Optimizing triangle mesh lenses for non-uniform illumination with an extended source,” Opt. Lett. 48(7), 1726–1729 (2023). [CrossRef]  

32. K. Desnijder, W. Deketelaere, M. Vervaeke, et al., “Design of a freeform, luminance spreading illumination lens with a continuous surface,” in Illumination Optics V, vol. 10693 (SPIE, 2018), pp. 89–99.

33. K. Desnijder, W. Ryckaert, P. Hanselaer, et al., “Luminance spreading freeform lens arrays with accurate intensity control,” Opt. Express 27(23), 32994–33004 (2019). [CrossRef]  

34. W.-L. Zhu, F. Duan, and X. Zhang, “A new diamond machining approach for extendable fabrication of micro-freeform lens array,” Int. J. Mach. Tools Manuf. 124, 134–148 (2018). [CrossRef]  

35. J. Neauport, X. Ribeyre, and J. Daurios, “Design and optical characterization of a large continuous phase plate for laser integration line and laser megajoule facilities,” Appl. Opt. 42(13), 2377–2382 (2003). [CrossRef]  

36. C. Yang, H. Yan, J. Wang, et al., “A novel design method for continuous-phase plate,” Opt. Express 21(9), 11171–11180 (2013). [CrossRef]  

37. Y. Shen, N. C. Harris, and S. Skirlo, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11(7), 441–446 (2017). [CrossRef]  

38. P. R. Wiecha, A. Arbouet, C. Girard, et al., “Deep learning in nano-photonics: inverse design and beyond,” Photonics Res. 9(5), B182–B200 (2021). [CrossRef]  

39. Y. Xu, X. Zhang, Y. Fu, et al., “Interfacing photonics with artificial intelligence: an innovative design strategy for photonic structures and devices based on artificial neural networks,” Photonics Res. 9(4), B135–B152 (2021). [CrossRef]  

40. L. Gao, Y. Chai, D. Zibar, et al., “Deep learning in photonics: Introduction,” Photonics Res. 9(8), DLP1–DLP3 (2021). [CrossRef]  

41. M. H. Eybposh, N. W. Caira, and M. Atisa, “Deepcgh: 3d computer-generated holography using deep learning,” Opt. Express 28(18), 26636–26650 (2020). [CrossRef]  

42. T. Shimobaba, D. Blinder, and T. Birnbaum, “Deep-learning computational holography: A review,” Front. Photonics 3, 8 (2022). [CrossRef]  

43. Y. Nie, J. Zhang, R. Su, et al., “Freeform optical system design with differentiable three-dimensional ray tracing and unsupervised learning,” Opt. Express 31(5), 7450–7465 (2023). [CrossRef]  

44. B. Mao, T. Yang, and H. Xu, “Freeformnet: fast and automatic generation of multiple-solution freeform imaging systems enabled by deep learning,” Photonics Res. 11(8), 1408–1422 (2023). [CrossRef]  

45. W. Chen, T. Yang, D. Cheng, et al., “Generating starting points for designing freeform imaging optical systems based on deep learning,” Opt. Express 29(17), 27845–27870 (2021). [CrossRef]  

46. T. Yang, D. Cheng, and Y. Wang, “Direct generation of starting points for freeform off-axis three-mirror imaging system design using neural network based deep-learning,” Opt. Express 27(12), 17228–17238 (2019). [CrossRef]  

47. C. Gannon and R. Liang, “Using machine learning to create high-efficiency freeform illumination design tools,” arXiv, arXiv:1903.11166 (2018). [CrossRef]  

48. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention: 18th International Conference, Proceedings, Part III (Springer, 2015), pp. 234–241.

49. K. Yanny, K. Monakhova, R. W. Shuai, et al., “Deep learning for fast spatially varying deconvolution,” Optica 9(1), 96–99 (2022). [CrossRef]  

50. C. Dong, C. C. Loy, K. He, et al., “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016). [CrossRef]  

51. Z. Wang, A. C. Bovik, H. R. Sheikh, et al., “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

52. C. Lee, G. Song, and H. Kim, “Deep learning based on parameterized physical forward model for adaptive holographic imaging with unpaired data,” Nat. Mach. Intell. 5(1), 35–45 (2023). [CrossRef]  

53. L. Shen, Z. Yue, F. Feng, et al., “Msr-net: Low-light image enhancement using deep convolutional network,” arXiv, arXiv:1711.02488 (2017). [CrossRef]  

54. D.-H. Lee, “Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks,” in Workshop on challenges in representation learning, ICML, vol. 3 (Atlanta, 2013), p. 896.

55. L. Piegl and W. Tiller, The NURBS book (Springer Science & Business Media, 1996).

56. “LightTools 8.7.0,” Synopsys Optical Solutions, Mountain View, California (2019).

57. H. Tang, Z. Feng, D. Cheng, et al., “Fast irradiance evaluation of freeform illumination lenses based on deep learning,” in AOPC 2022: Novel Optical Design; and Optics Ultra Precision Manufacturing and Testing, vol. 12559 (SPIE, 2023), pp. 108–113.

58. Z. Wang and J. Yang, “Diabetic retinopathy detection via deep convolutional networks for discriminative localization and visual explanation,” arXiv, arXiv:1703.10757 (2017). [CrossRef]  

59. T. Aderneuer, O. Fernandez, and A. Karpik, “Surface topology and functionality of freeform microlens arrays,” Opt. Express 29(4), 5033–5042 (2021). [CrossRef]  

60. J. Bec, C. Li, and L. Marcu, “Broadband, freeform focusing micro-optics for a side-viewing imaging catheter,” Opt. Lett. 44(20), 4961–4964 (2019). [CrossRef]  

61. S. Kumar, Z. Tong, and X. Jiang, “Advances in the design and manufacturing of novel freeform optics,” Int. J. Extrem. Manuf. 4(3), 032004 (2022). [CrossRef]  

62. Z. Liu, H. Mao, C.-Y. Wu, et al., “A convnet for the 2020s,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2022), pp. 11976–11986.

63. T. Ridnik, E. Ben-Baruch, A. Noy, et al., “Imagenet-21k pretraining for the masses,” arXiv, arXiv:2104.10972 (2021). [CrossRef]  

64. I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,” arXiv, arXiv:1711.05101 (2017). [CrossRef]  



