
Ultra-thin multifocal integral LED-projector based on aspherical microlens arrays


Abstract

Multifocal imaging has been a challenging and rewarding research focus in the field of imaging optics. In this paper, an ultra-thin multifocal integral LED-projector based on an aspherical microlens array (MLA) is presented. A two-layer aspherical sub-lens with NA = 0.3 is proposed as a sub-channel projector, and the optimization design ensures high optical integration precision and improves the optical efficiency. To avoid tailoring losses of the projected images between multi-plane projections, the central-projection constraints between size and projection distance for the multifocal projection are defined. A depth of focus (DOF) analysis for the MLA and the sub-lens is also introduced to prove the sufficiency of realizing multifocal projection. Combined with the radial basis function image warping method, multifocal sub-image arrays were acquired, and three types of multifocal integral projection were realized, breaking through the traditional limitations of the single-focal DOF. A prototype with a thickness of less than 4 mm is developed. Substantial simulations and experiments are conducted to verify the effectiveness of the method and the design.

© 2022 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

MLAs have the advantages of compactness and a large field of view (FOV) owing to their short effective focal length and high lens curvature [1]. MLAs have been developed and applied to various optical systems over the past two decades. MLA manufacturing methods such as thermal reflow [2], gray-scale lithography [3], soft lithography [4], and femtosecond laser micro-nanofabrication via two-photon polymerization [5] have been widely used in mass production. Furthermore, MLA fabrication methods with special functions have also been proposed, such as multifocal MLAs using multilayer photolithography [6] and curving microfluid [7], and MLAs with tailored optical properties using an inkjet printing process [8]. A wide variety of MLA imaging applications have emerged [9], such as light-field imaging [10], compound-eye imaging [11-13], and integral display [14-16]. In non-imaging fields, an MLA can be used for beam shaping [17-19], homogenization [20,21], and other micro-optics applications [22,23].

Compact short-distance projection is becoming an increasing need for lighting applications, such as interior and automotive projection lighting onto narrow and strongly distorted surfaces [24]. Because a traditional projection imaging system has only one single-focal imaging plane (when not zooming) and its DOF is generally small, the actual imaging distance changes when the projection surface is curved or has multiple depths, resulting in a blurred image. Z. Feng proposed a freeform design method for shaping a beam with different illumination distributions at multiple projection depths [25]. However, this freeform-based method assumes that the source is a point or that the amplitude and phase distribution of the input beam are known, which is unrealistic. In addition, it is difficult for freeform beam shaping to generate patterns with high resolution. For the proposed MLA-based LED-projector, there are no strict requirements on the collimation of the input beam or on the profile machining tolerance of the MLA. The integral image produced by the MLA projector can be sharper, and different images can be realized by replacing the mask layer. Furthermore, the MLA-based projector usually exhibits a more compact structure (the total thickness of the MLA projector is less than 5 mm). M. Sieler realized a micro LED-projector using an MLA based on paraxial imaging theory [26,27]. The MLA-based integral projector has also been used in a Bayerische Motoren Werke (BMW) automotive surroundings-illuminating device for generating a light distribution on the ground surrounding the motor vehicle [28]. However, it is difficult to obtain large-size, distortion-free, multifocal integral imaging based on paraxial analysis, hindering the application and upgrade of MLA systems. In previous work [29], a single-focal projection was realized with a size 5.5 times larger than that of M. Sieler's method, but the problem of light efficiency was prominent. Compared with the paraxial MLA imaging analysis proposed by M. Sieler, the presented method also provides flexible degrees of freedom in the microlens design. In this paper, the central-projection constraint relations between projection sizes and distances of the multifocal projection are defined. A DOF analysis of the MLA and the sub-lens is introduced to prove the sufficient condition for realizing multifocal projection. Unblurred multifocal projection was realized by making full use of the large-DOF characteristic of the microlens, which breaks the limitation of imaging depth caused by the DOF of the MLA. An ultra-thin projector (less than 4 mm thick) with aspherical MLAs and three types of multifocal integral projection were also realized. In addition, because projection at multiple depths and of different images can be realized by a single MLA projector, the repeated design of new optical systems can be avoided, thus reducing the cost.

Figure 1 shows the basic principle of integral projection imaging based on an MLA. As shown in Fig. 1(a), MLA 1 corresponds to the projection MLA (PMLA) and functions as a multi-channel imaging system: each element image aibi (i = 1…5) is projected separately by a sub-lens of MLA 1 to form the same image ab. The integral image ab is expected to be a well-superimposed image. To realize a good overlap of the images from the different sub-channels, the sub-image (SI) of each image aibi should be arranged and preprocessed. Because the arrangement of the MLA is usually regular and uniform, the SIs in different sub-channels have different offsets. For example, compared with the SI a4b4, to form the same projected image ab at the same location, the SI a5b5 should have a larger off-axis offset relative to its sub-channel optical axis. The SIs of the images aibi constitute the sub-image array (SIA) layer. MLA 2 is the condenser MLA (CMLA), which is composed of plano-convex sub-lenses. Each condenser sub-lens improves the light efficiency and illuminates the SI of its sub-channel. To obtain sufficient flux and a good imaging effect for each sub-channel, the condenser sub-lens, the SI mask, and the projection sub-lens constitute a projection system based on the principle of Köhler illumination. Figure 1(b) shows the sub-channel imaging optical path in integral imaging. The condenser sub-lens Lens 2 of the sub-channel converges the incident collimated sub-beam onto the SI mask ab, and the SI is projected through the projection sub-lens Lens 1 to form the image ab. Because the integral projection image is formed by the superposition of the images from all the micro sub-channels, it has the inherent advantages of short-distance projection and high illumination uniformity.

Fig. 1. Basic principle of integral projection imaging based on MLA. (a) Integral imaging optical path with MLAs. (b) Sub-channel imaging optical path in integral imaging.

In Section 2 of this paper, we introduce the basic principle of multifocal integral projection based on an MLA. Three types of multifocal integral projection effects are briefly introduced. The projected flux of the MLA projector and its influencing factors are analyzed. The central-projection constraints between size and projection depth of the multifocal projection are defined. A DOF analysis of the MLA and the sub-lens is also introduced to prove the sufficiency of multifocal projection. In Section 3, we present the design and analysis of the aspherical projection sub-lens. The construction of the offset matrices for the hexagonal-arranged MLA is also described. In addition, the methods of generating the three types of multifocal projection SIAs are detailed. In Section 4, substantial simulations, experiments, and evaluations are conducted to verify the effectiveness of the proposed method.

2. Basic principle of multifocal integral projection based on MLA

Figure 2(a) shows the schematic diagram of the compact multifocal integral LED-projector based on MLAs. The rear CMLA is illuminated by the collimated beam from the LED source, and the SIA is imaged by the PMLAs at the target planes. As shown in Fig. 2(a), to image a well-integrated image "A" on the two target planes at the same time, each SI contains the information of "A" corresponding to both target surfaces. The central-FOV imaging point S at target surface 2 is formed by integral imaging of the object points Si, while the edge-FOV imaging point K at target surface 1 is formed by integral imaging of the object points Ki. The dotted and solid lines in blue correspond to the chief rays of Ki and Si in each sub-channel. The SIA for multifocal projection is a fusion of the SIAs for each target surface. The overall images on the two target surfaces are formed by the superposition of the SIs projected by each sub-channel. Each SI in the SIA contains all the image information for every projection plane, but the information in each sub-channel is different. The SI information changes obviously from the central sub-channel to the edge sub-channel of the MLA projector. This variation is mainly due to the differences in projection depths and the differences between sub-channel positions. In this paper, as shown in Fig. 2(b), three types of multifocal projection can be realized by the proposed LED-projector: (1) Type 1: projection of the same pattern at different distances. An integral image "A" is formed on the near plane and another "A" is integrated on the far plane simultaneously. Projecting the same pattern at different distances increases the depth range of the pattern and maintains a higher pattern clarity when the actual projection distance or the flatness of the target surface cannot be guaranteed, which greatly reduces the influence of environmental factors. (2) Type 2: projection of different patterns at different distances. An integral image "A" and a human portrait can be formed at two different distances simultaneously. (3) Type 3: independent projection of different patterns at different distances. The letters "N" and "F" are projected at two different distances separately and simultaneously.

Fig. 2. (a) Schematic diagram of short-distance integral LED-projector based on MLAs and (b) proposed three types of multifocal effect.

Optical efficiency is an important factor in an optical imaging system. The flux of a single-aperture projector can be expressed as:

$$\Phi = \frac{\pi A B T}{4 (f/\#)^2} \propto \frac{f^2}{(f/\#)^4}$$
where A is the size of the mask transmittance area, B is the brightness of the light source, T is the transmittance of a single channel, and (f/#) = f/p. The most significant energy loss of the MLA projector comes from the introduced SIA. Assuming that the SIA mask is represented by a binary image of size m × n, where I(i, j) = 0 denotes a blocked pixel and I(i, j) = 1 a transmitted pixel, the projected flux of the MLA projector with N sub-channels is expressed as:
$$\Phi_{\text{array}} = \sum_{n=1}^{N} \Phi_n T_n \propto \frac{f^2 N}{(f/\#)^4} T_{\text{array}} \propto \frac{\sum_{i=1,j=1}^{m,n} I(i,j)}{mn}$$
where Φn is the incoming flux in the n-th sub-channel and Tn is the transmission rate of the n-th sub-channel. Tarray is the fill factor (transmittance) of the SIA, and it is one of the sources of transmission loss. As presented in Eq. (2), the flux is proportional to the transmittance of the SIA mask and to the filling rate and transmittance of the sub-channels in the MLA (here, the Fresnel loss and absorption loss of the MLA are not considered). In this study, MLAs and SIA masks with high filling rates and high transmittance were realized, thereby ensuring the light efficiency of the system.
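As a quick numerical illustration of Eqs. (1)-(2), the following sketch (Python, with a hypothetical binary mask and the sub-lens values from Sec. 3; not the authors' code) estimates the SIA fill factor Tarray and the relative projected flux up to a constant factor.

import numpy as np

# Hypothetical binary SIA mask: I(i, j) = 1 means transmitted, 0 means blocked.
rng = np.random.default_rng(0)
mask = (rng.random((600, 600)) > 0.5).astype(np.uint8)

# Fill factor (transmittance) T_array of the SIA mask, the last factor in Eq. (2).
T_array = mask.sum() / mask.size

# Relative projected flux, up to a constant: Phi_array ~ f^2 * N / (f/#)^4 * T_array.
f = 1.65          # sub-lens focal length [mm] (Sec. 3.1)
p = 1.0           # sub-aperture size [mm]
f_number = f / p  # (f/#) = f/p, as defined after Eq. (1)
N = 681           # number of sub-channels (Sec. 3.2)

relative_flux = f**2 * N / f_number**4 * T_array
print(f"T_array = {T_array:.3f}, relative flux proportional to {relative_flux:.1f}")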

As shown in Fig. 3(a), owing to the identical optical sub-aperture size and the regular arrangement of the sub-channels, the projection areas of the MLA projector with N sub-channels exhibit a certain deviation (N is the number of sub-channels across the MLA aperture). The blue and red lines correspond to the chief rays of the edge FOV for the top-edge and bottom-edge sub-channels of the MLA projector (the sub-lenses at the maximum-aperture positions of the MLA). In the short-distance case (0.1 m to 1 m), the deviation is not negligible, and the illumination attenuation over a width of (N−1)p at the boundary of the projection areas becomes apparent, where p is the aperture size of a sub-channel. To guarantee a sufficiently uniform and complete imaging effect, the common projection area lies inside the conical space region shown in light purple, and the actual projected image should be within the common projection areas of sizes Dn and Df corresponding to the near and far target planes. The common projection area is the intersection of the projection regions of all sub-channels, and the shape of its boundary contour is usually determined by the projection distance and the arrangement of the sub-lenses in the MLA. Ln and Lf are the projection distances corresponding to the near and far target planes, respectively. The projection distances Ln and Lf and the common areas Dn and Df of all sub-channel projection images satisfy the following relations:

$$\begin{aligned} D_n &= S_n - (N-1)p = \frac{L_n}{s}p - (N-1)p \\ D_f &= \frac{L_f}{s}p - (N-1)p \end{aligned}$$
$$\frac{D_n}{D_f} = \frac{S_n - (N-1)p}{S_f - (N-1)p} = \frac{L_n - (N-1)s}{L_f - (N-1)s}$$

Equation (4) depicts the geometric constraints of multifocal projection, which is nearly a central projection from point O. As shown in Fig. 3(a), the constraints restrict the sizes of the projected patterns at different distances to avoid tailoring losses of the images between multi-plane projections. For example, to realize the Type 1 effect illustrated in Fig. 2(b), once the projection positions and sizes violate Eq. (4), the projection image at one depth would be cut and partially lost by the projection images at other depths. In the implementations of multifocal projection in the following sections, the sizes of the patterns at different depths conform to this constraint. In addition, the common area D on each target plane can be acquired by tracing the chief rays of the maximum FOV of each sub-channel, as detailed in Sec. 3.3. Figure 3(b) shows the micro illustration of ray transport in the MLA projector, and the ray transport for the ω-th sub-channel can be expressed by paraxial matrix optics as:

$$\begin{aligned} \begin{pmatrix} \alpha'_{\omega,\gamma} \\ y'_{\omega,\gamma} \end{pmatrix} &= \begin{pmatrix} 1 & 0 \\ L_\gamma & 1 \end{pmatrix} \left[ \begin{pmatrix} 0 \\ -\omega p \end{pmatrix} + \begin{pmatrix} 1 & -\dfrac{1}{f_2} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ s & 1 \end{pmatrix} \begin{pmatrix} \alpha_{\omega,\gamma} \\ y_{\omega,\gamma} \end{pmatrix} \right] \\ &= \begin{pmatrix} \alpha_{\omega,\gamma} - \dfrac{\alpha_{\omega,\gamma} s + y_{\omega,\gamma}}{f_2} \\ \left( \alpha_{\omega,\gamma} - \dfrac{\alpha_{\omega,\gamma} s + y_{\omega,\gamma}}{f_2} \right) L_\gamma + \alpha_{\omega,\gamma} s + y_{\omega,\gamma} - \omega p \end{pmatrix} \end{aligned}$$
where Lγ is the projection distance of the γ-th projection plane, f2 is the focal length of the PMLA sub-lens, and s is the object distance of the SIA. α′ω,γ and y′ω,γ are the ray angle and height on the projection plane, while αω,γ and yω,γ are the ray angle and height on the SIA plane. The imaging height y′ω,γ is the quantity of interest. From Fig. 3(b), according to the paraxial calculation, the angle α′ω,γ at the projection plane satisfies:
$$\alpha'_{\omega,\gamma} = \frac{y'_{\omega,\gamma} - \omega p}{L_\gamma}$$

Inserting Eq. (6) into Eq. (5), the angle can be canceled, and the imaging height y′ω,γ can be further expressed as:

$$y'_{\omega,\gamma} = -\frac{L_\gamma}{s} y_{\omega,\gamma} + 2 L_\gamma \omega p \left( \frac{1}{s} - \frac{1}{f_2} \right) + \omega p$$

Equation (7) establishes the relation between an object point of the ω-th SI on the SIA plane and its imaging point on the projection plane. To realize integral imaging, the imaging points of the ω-th and (ω+1)-th sub-channels must overlap at the projection plane Lγ, which means y′ω+1,γ = y′ω,γ. Therefore:

$$y'_{\omega+1,\gamma} - y'_{\omega,\gamma} = -\frac{L_\gamma}{s}\left( y_{\omega+1,\gamma} - y_{\omega,\gamma} \right) + 2 L_\gamma p \left( \frac{1}{s} - \frac{1}{f_2} \right) + p = 0$$

Then, we can get:

$$y_{\omega+1,\gamma} - y_{\omega,\gamma} = 2 p s \left( \frac{1}{s} - \frac{1}{f_2} + \frac{1}{2 L_\gamma} \right)$$

For a given projection distance Lγ, Eq. (9) shows that the position difference of the object points on the SIA between adjacent sub-channels is constant for a given projected image height. However, this position difference increases when the projection distance Lγ decreases. This also explains the characteristics of the SIA shown in Fig. 2: the positions of the SI patterns corresponding to different projection distances are different. When a pattern deviates more from the central optical axis of the sub-channel, the corresponding integral image is formed on a projection plane at a closer distance.
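To see what Eq. (9) implies for the SIA layout, the following sketch evaluates the object-point shift between adjacent sub-channels for several projection distances; the parameter values are illustrative assumptions, not the final design data.

def si_point_shift(p, s, f2, L_gamma):
    """Shift of the same image point between adjacent SIs, Eq. (9)."""
    return 2.0 * p * s * (1.0 / s - 1.0 / f2 + 1.0 / (2.0 * L_gamma))

# Illustrative values [mm]: sub-aperture pitch, SIA object distance, sub-lens focal length.
p, s, f2 = 1.0, 1.8, 1.65

for L in (200.0, 400.0, 600.0):   # projection distances [mm]
    print(f"L = {L:5.0f} mm -> adjacent-SI shift = {si_point_shift(p, s, f2, L):+.4f} mm")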

Fig. 3. Illustration for MLA projection. (a) Macro constraints between projection sizes and distances. (b) Micro illustration of sub-channel ray transportation in MLA projector.

The DOF describes the imaging depth of an imaging system; that is, the image remains clear within a certain DOF range near the imaging focal plane. As shown in Fig. 4(a), the geometric DOF of the single-aperture sub-channel and of the MLA projector system can be expressed as follows:

$$DOF_{\text{sub-lens}} = \frac{\varepsilon}{\tan V'} \approx \frac{L}{p/2}\varepsilon = \frac{2Ms}{p}\varepsilon$$
$$DOF_{\text{MLA}} = \frac{\varepsilon}{\tan U'} \approx \frac{L}{(N-1)p/2}\varepsilon = \frac{2L}{(N-1)p}\varepsilon$$
where L is the designed projection distance; ε is the smallest dimension of the imaging point Q that can be distinguished; V′ and U′ are the converging angles of the imaging point Q for the sub-channel and the MLA, respectively; M is the projection magnification of the sub-channel; and DOFMLA is the DOF of the MLA projector. As depicted in Eqs. (10) and (11), the best quality of integral imaging of the MLA depends on the integral spots generated within DOFMLA. However, in general DOFsub-lens >> DOFMLA: although increasing the distance L slowly increases DOFMLA for a single-focal integral image, the MLA size Np greatly limits the DOFMLA of the system. The analysis diagram of DOFsub-lens and DOFMLA is shown in Fig. 4(b), in which we set p = 1 mm and N = 21 to attain a 20 mm MLA aperture size. If the viewing distance of the human eye is 1 m, ε ≈ 0.3 mm, DOFsub-lens ≈ 180 mm, while DOFMLA ≈ 9 mm. A large DOFsub-lens is a sufficient condition for the realization of multifocal projection. If only a single focal plane is realized, the large-DOF feature of the sub-lens is wasted, the DOF range of the integral pattern is very limited, and the image quality deteriorates when a large-DOF projection is needed. The realization of multiple focal planes further expands the DOF of the system; the DOF ranges become continuous, and imaging with continuous depth can finally be realized. For example, since multiple DOF ranges are generated when multiple projection distances are realized, for the Type 1 effect, when projections at distances of 200 mm, 300 mm, …, 600 mm are realized at the same time under the constraint of Eq. (4), the image will be well integrated from 200 mm to 600 mm. In addition, DOFMLA also approximately describes the DOF of a traditional single-channel projection system with the same aperture as the MLA, which further highlights the advantages and characteristics of the MLA projector over a traditional single-channel projection system.
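The numbers quoted above are easy to reproduce. The sketch below evaluates Eqs. (10)-(11) with the example values of this section (p = 1 mm, N = 21, ε ≈ 0.3 mm), assuming L = 300 mm as the design projection distance; it also checks the central-projection size ratio of Eq. (4) interpreted through the chief-ray-traced point O (LO = 43.9 mm, taken from Sec. 3.3) — this last step is the editor's reading of the constraint, not a formula stated by the authors.

def dof_sublens(L, p, eps):
    """Geometric DOF of a single sub-channel, Eq. (10): 2*L*eps/p."""
    return 2.0 * L * eps / p

def dof_mla(L, p, N, eps):
    """Geometric DOF of the whole MLA projector, Eq. (11): 2*L*eps/((N-1)*p)."""
    return 2.0 * L * eps / ((N - 1) * p)

p, N, eps = 1.0, 21, 0.3      # sub-aperture [mm], sub-lens count, resolvable spot [mm]
L = 300.0                     # assumed design projection distance [mm]
print(f"DOF_sub-lens ~ {dof_sublens(L, p, eps):.0f} mm")   # about 180 mm
print(f"DOF_MLA      ~ {dof_mla(L, p, N, eps):.0f} mm")    # about 9 mm

# Central-projection size constraint (Eq. (4)) read through the traced point O:
L_O, L_n, L_f = 43.9, 200.0, 400.0   # [mm]
print(f"D_n / D_f ~ {(L_n - L_O) / (L_f - L_O):.2f}")      # about 0.44, as quoted in Sec. 3.3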

Fig. 4. DOF analysis of sub-lens and MLA. (a) DOF presentation; (b) DOF analysis of sub-lens and MLA.

The basic optical principles of MLA projection can be adequately described by the matrix optics analysis above. However, because the analysis is based on paraxial optics, only optical lenses with small aperture and curvature can be well characterized, which greatly limits the upgrading of the surface shape and structure of the projection sub-lens. In addition, the optical aberrations are ignored, especially the distortion of the projection sub-lens, which greatly degrades the integration precision and imaging quality of the MLA projector. In the next section, the projection sub-lens with aspherical surfaces is optimized, and chief-ray tracing is performed with the optical design software CODE V to guarantee the correctness of the locations of the object and imaging points.

3. Design method of multifocal LED-projector based on MLAs

3.1 Optimization and optical analysis of the sub-lens and MLA

An impeccable integral projection with the highest integral accuracy depends on both the optical imaging quality of each projection sub-lens and the high-precision integration of all the projection images from each sub-channel [29]. Therefore, aspherical surfaces are used to improve the optical imaging quality. Figure 5(a) shows the optimized large-NA aspherical projection sub-lens with a sub-aperture size of 1 mm. The projection sub-lens comprises two identical plano-convex sub-lenses, and the convex surface is the aspheric surface. Polycarbonate was used as the material in the sub-lens optimization, and the MLAs were also made of this material. In the optimization, the second aspheric surface was picked up from the first aspheric surface. The aspheric surface can be used to expand the projection FOV and is usually expressed as:

$$z = \frac{c r^2}{1 + \sqrt{1 - (1+k) c^2 r^2}} + \sum_{i=2}^{n} a_{2i} r^{2i}$$

The effective focal length of the sub-lens is 1.65 mm and NA = 0.3. A half-FOV projection size of 135 mm is realized when the projection distance is 300 mm. Since the stop is placed on the front surface and the stop aperture is increased to approach the sub-channel aperture, the luminous flux is greatly improved compared with the structure described in our previous work [29]. The light efficiency of the sub-channel improves by 30% owing to the larger pupil size, while the imaging quality is also improved. In addition, the application of two aspheric surfaces reduces the degradation of image quality caused by the increased stop aperture. Figure 5(b) shows a maximum distortion of nearly 8% in the edge FOV. At the same FOV (image height at the projection plane), there is a maximum distortion difference of 1.5% over the visible wavelengths (486−656 nm), so there is also a certain lateral chromatic aberration. Fortunately, because the lateral chromatic aberration changes uniformly over the whole FOV and the projection image is integrated from multiple sub-channels, the influence of chromatic aberration on the actual integral imaging effect is not obvious. Figure 5(c) shows the geometrical spot diagram. The RMS geometrical spot size in the visible spectrum is close to 1 mm. Nearly 270 resolvable pixels over the full FOV can be achieved, which is sufficient for an illumination application. In this design example, we adopt a relatively large sub-aperture size of p = 1.0 mm for the sub-lens. Because the Fresnel number FN is much larger than 100 (FN >> 1) in the visible spectrum, the actual imaging aberrations are larger than the diffraction effect, and the influence of the micro-aperture diffraction effect can be neglected [30]. Figure 5(d) shows the designed MLA integral projector with a size of 20 mm × 20 mm. A 100% filling-rate hexagonal arrangement was adopted in the MLAs to further improve the light efficiency. To realize high light efficiency and low crosstalk, the design of the condenser sub-lens should closely match the NA of the projection sub-lens, which means the image-side NA of the condenser sub-lens should equal the NA of the projection sub-lens. A spherical MLA can be adopted for the condenser MLA, as it only functions as a concentrator. In addition, the sub-aperture size and arrangement of the condenser sub-lens should be the same as those of the PMLA. The SIA is attached between the rear plane of the PMLAs and the plane side of the CMLA. The ultra-thin, large-NA MLA projector with an overall thickness of 3.8 mm is thus realized (2.4 mm for the PMLAs, 0.1 mm for the SIA, and 1.3 mm for the CMLA).
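The sag of Eq. (12) is straightforward to evaluate; the sketch below uses hypothetical curvature, conic, and higher-order coefficients (the optimized design values are not listed in the paper) only to show the form of the profile over a 1 mm sub-aperture.

import numpy as np

def aspheric_sag(r, c, k, a):
    """Even-asphere sag of Eq. (12); `a` maps the even order 2i to its coefficient a_2i."""
    r = np.asarray(r, dtype=float)
    conic = c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r**2))
    poly = sum(coeff * r**order for order, coeff in a.items())
    return conic + poly

# Hypothetical coefficients (not the optimized design data):
c, k = 1.0 / 0.9, -0.6          # curvature [1/mm] and conic constant
a = {4: 1.0e-2, 6: -5.0e-3}     # higher-order even-asphere coefficients
r = np.linspace(0.0, 0.5, 6)    # radial coordinate over the semi-aperture [mm]
print(np.round(aspheric_sag(r, c, k, a), 4))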

Fig. 5. (a) Optical structure of optimized projection sub-lens, (b) field curves and (c) geometric spot diagram analysis of optimized projection sub-lens, (d) MLA integral projector.

3.2 Generation of offset for the sub-lens in MLA

The sub-lens obtained by the above optimization is set as the central sub-lens of the MLA projector, which means the optical axis of this sub-lens is also the central axis of the whole MLA. Therefore, the offset of the optimized sub-lens is set to zero in the X, Y, and Z directions. According to the sub-aperture size, the required hexagonal arrangement, and the required array size, the offset data matrices of the other sub-lenses relative to the central sub-lens can be acquired. These offset matrices can be used to address the position of each sub-lens in the required MLA in a loop. According to the offset matrices, the addressing of each sub-lens position and further ray tracing for the designed MLA can be realized simply by moving the optimized sub-lens.

Figure 6(a) shows the 100% filling-rate hexagonal arrangement of the projection sub-lenses. To conveniently define the sub-lens positions in the hexagonal arrangement of the MLA, the hexagonal arrangement can be decomposed into two rectangular arrangements corresponding to the offset matrices M1 and M2, respectively. The sub-aperture size of the sub-lens is p, the interval of the sub-lenses in the Y direction is $\Delta y = \sqrt 3 p/2$, and the interval in the X direction is Δx = 3p/2. Figure 6(b) shows the arrangement of the central column of sub-lenses of M1, passing through the central sub-lens, in the YZ direction. M1(icenter, jcenter) is the optimized central sub-lens, and it can have an inclination θ (the inclination angle is determined in the preceding sub-lens design optimization; the X-axis is the rotation axis). M1(i, j) and M2(i, j) are the offsets of the sub-lenses relative to M1(icenter, jcenter). The sub-lens positions can be obtained using the matrices M1 and M2 as follows:

$$\mathbf{M}_1(i,j) = \begin{bmatrix} \mathbf{XDE}_1(i,j) \\ \mathbf{YDE}_1(i,j) \\ \mathbf{ZDE}_1(i,j) \end{bmatrix} = \begin{bmatrix} (i - i_{\text{center}})\,\Delta x \\ (j - j_{\text{center}})\,\Delta y \cos\theta \\ (j - j_{\text{center}})\,\Delta y \sin\theta \end{bmatrix} = \begin{bmatrix} (i - i_{\text{center}})\,\dfrac{3p}{2} \\ (j - j_{\text{center}})\,\dfrac{\sqrt{3}p}{2}\cos\theta \\ (j - j_{\text{center}})\,\dfrac{\sqrt{3}p}{2}\sin\theta \end{bmatrix}$$
$$\mathbf{M}_2(i,j) = \mathbf{M}_1(i,j) + \begin{bmatrix} \dfrac{3p}{4} \\ \dfrac{\sqrt{3}p}{4}\cos\theta \\ \dfrac{\sqrt{3}p}{4}\sin\theta \end{bmatrix} = \begin{bmatrix} (i - i_{\text{center}})\dfrac{3p}{2} + \dfrac{3p}{4} \\ (j - j_{\text{center}})\dfrac{\sqrt{3}p}{2}\cos\theta + \dfrac{\sqrt{3}p}{4}\cos\theta \\ (j - j_{\text{center}})\dfrac{\sqrt{3}p}{2}\sin\theta + \dfrac{\sqrt{3}p}{4}\sin\theta \end{bmatrix}$$
where XDE1(i, j), YDE1(i, j), and ZDE1(i, j) are the offsets in the X, Y, and Z directions, respectively, of the ML(i, j) sub-lens in the MLA relative to the central sub-lens obtained by optimization. Through the MLA offset matrices M1 and M2, the optimized central sub-lens can easily be shifted to all possible sub-lens locations in the MLA. Each sub-lens in the MLA can be located efficiently by offsetting the sub-channel according to the offset matrices. The offset matrices and the addressing method improve the efficiency of the sub-channel analysis and alleviate the data storage burden caused by the large number of sub-channels. The offset matrices were used in ray tracing to determine the common area, and they were also used to trace chief rays for generating the SIs. For the designed MLA in Fig. 5(d), the number of sub-channels is 681, corresponding to sizes of 24 × 14 and 23 × 15 for matrices M1 and M2, respectively.
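A compact way to generate these offset matrices is sketched below (Python; the grid sizes follow the 24 × 14 and 23 × 15 values quoted above, while the center indices and zero inclination are assumptions for illustration — this is not the authors' CODE V macro).

import numpy as np

def mla_offsets(p, n_i, n_j, i_center, j_center, theta=0.0, half_shift=False):
    """Offset matrix of Eq. (13) (or Eq. (14) when half_shift=True) for a hexagonal MLA.

    Returns an (n_i, n_j, 3) array of (XDE, YDE, ZDE) offsets of every sub-lens
    relative to the optimized central sub-lens.
    """
    dx, dy = 3.0 * p / 2.0, np.sqrt(3.0) * p / 2.0
    i = np.arange(n_i)[:, None] - i_center
    j = np.arange(n_j)[None, :] - j_center
    xde = np.broadcast_to(i * dx, (n_i, n_j))
    yde = np.broadcast_to(j * dy * np.cos(theta), (n_i, n_j))
    zde = np.broadcast_to(j * dy * np.sin(theta), (n_i, n_j))
    M = np.stack([xde, yde, zde], axis=-1).astype(float)
    if half_shift:  # M2 is M1 shifted by half a period, Eq. (14)
        M = M + np.array([3.0 * p / 4.0,
                          np.sqrt(3.0) * p / 4.0 * np.cos(theta),
                          np.sqrt(3.0) * p / 4.0 * np.sin(theta)])
    return M

M1 = mla_offsets(p=1.0, n_i=24, n_j=14, i_center=12, j_center=7)
M2 = mla_offsets(p=1.0, n_i=23, n_j=15, i_center=11, j_center=7, half_shift=True)
# 24*14 + 23*15 = 681 sub-channels, matching the value quoted in this section.
print(M1.shape, M2.shape, M1.shape[0] * M1.shape[1] + M2.shape[0] * M2.shape[1])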

Fig. 6. Sketch of hexagonal arrangement structure of MLA. Hexagonal-arranged MLA is divided into two rectangular-arranged MLAs: light blue and green sub-lenses correspond to M1 and M2, respectively. (a) Δx and Δy are center intervals between adjacent lenses in MLA, while p is aperture size of sub-lens; (b) cross section view of MLA corresponding to M1 through central sub-lens in YZ direction and definition of YDE1 and ZDE1; (c) cross section view of MLA through MLA plane and definition of XDE1.

3.3 Ray tracing and multifocal-plane SIA generation

Figure 7 shows the generation flow chart of the multifocal SIA. The main processing steps are: (1) Generation of the offset matrices M1 and M2. First, M1 and M2, which determine the position of each sub-channel, are generated by considering the arrangement of the sub-lenses relative to the central sub-channel in the MLA. As shown in Fig. 8(a), M1 and M2 correspond to the green and blue sub-channels, respectively. In the CODE V optical software, the commands XDE, YDE, and ZDE can be used to apply the offsets to the projection sub-lens.

  • (2) Setting the projection distance Lγ. An SIA for the projection distance Lγ needs to be generated. To achieve lossless image projection on each focal plane, the projection size Dγ and distance Lγ of the multifocal planes should obey Eq. (4). For the projection plane at distance Lγ, chief-ray tracing and SI generation are performed for all the sub-channels in the MLA, and the SIA of the single-focal projection plane is generated correspondingly.
  • (3) Forward chief-ray tracing. Here, 'forward chief-ray tracing' means chief-ray tracing from the object SI plane to the imaging projection plane. As shown in Fig. 8(a), the edge FOV of each sub-channel (the blue, green, black, and brown dashed circles) can be sampled and traced by using M1 and M2 (two hundred FOV points were used in our design), and the boundary of the common projection area depicted in Fig. 3(a) can be acquired (for Lγ > 200 mm, the boundary of the common projection area can be approximated as a circle; for Lγ < 200 mm, it should be fitted by polynomials). As shown in Fig. 8(b), different from the paraxial analysis in Fig. 3(a), the position of O can be determined accurately by chief-ray tracing of the two sub-channels with spacing distance RA. The two sub-channels are located at the edge diagonal positions of the MLA projector (in the design example, the MLA size is 20 mm × 20 mm, RA = 27.2 mm, and LO = 43.9 mm). The size Dγ of the common projection area is shown in Fig. 8(c); it is enclosed by the forward chief-ray tracing points of the edge FOV of each sub-channel (the blue, green, black, and brown dashed circles correspond to the sampled-FOV circles in Fig. 8(a)).
  • (4) Sampling FOV points on the projection plane at Lγ. As shown in Fig. 8(c), a rectangular sampled FOV grid (the default projection area is rectangular) within the common area Dγ on the projection plane can be determined. Dxγ and Dyγ are the projection sizes in the x and y directions. In addition, Dxγ and Dyγ are determined by the size of the common projection area Dγ and constrained by Eq. (4) (for example, when the projection target planes are set at 200 mm and 400 mm, Dn/Df = 0.44, and the projection sizes are 96.7 mm × 96.7 mm and 220 mm × 220 mm, respectively). According to M1 and M2 and the sampled FOV grid points on the target plane at Lγ, the geometric shape and location of the SI for each sub-channel can be determined by chief-ray tracing.
  • (5) Backward chief-ray tracing. Here, 'backward chief-ray tracing' means chief-ray tracing from the imaging projection plane to the object SI plane. The sampled FOV grid on the target surface is traced by offsetting the optimized sub-lens through M1 and M2. Hence, the geometric shape and location information of the predistorted SI of each projection sub-lens can be obtained. As shown in Fig. 8(d), the coordinate information of the chief rays (red points) on the SI plane corresponds to one of the required SIs (the green points correspond to another sub-channel). The projection of each sub-channel in the MLA integrator exhibits complex and different optical distortions owing to the short projection distance and the offsets of the sub-channels.
  • (6) Generating SIs by the radial basis function (RBF) method. Based on the ideal sampled FOV grid points and the traced grid points on the SI plane, a mapping relation is established using the RBF image warping method. The initial SI (predefined image) is transformed into the predistorted SI by this mapping relation. High-precision SI predistortion can be ensured using the RBF image predistortion method [31,32]. With the RBF method, the SI of each channel can be predistorted by referring to the positions of the regular blue points and the distorted ray-tracing points (the red points). The RBF method uses n basis functions (the number of corresponding point pairs equals the number of basis functions). The relation between each original sampling point Pζ(xζ, yζ) (blue points in Fig. 8(d)) and its corresponding traced point can be described as:
    $$\begin{aligned} x'_\zeta &= \sum_{\eta=1}^{n} \beta_{x,\eta} R_\eta(d) + \varphi_m(x_\zeta, y_\zeta) \\ y'_\zeta &= \sum_{\eta=1}^{n} \beta_{y,\eta} R_\eta(d) + \varphi_m(x_\zeta, y_\zeta) \\ R_\eta(d) &= \left[ (x_\zeta - x_{\text{center}\_\eta})^2 + (y_\zeta - y_{\text{center}\_\eta})^2 + \lambda r_\eta^2 \right]^{\mu/2} \end{aligned}$$
    where Rη is the η-th basis function, centered at (xcenter_η, ycenter_η); βx,η and βy,η are the weights of the basis functions; φm(xζ, yζ) is a fitting polynomial of order m; ζ is an integer from 1 to n; λ is a scaling factor; and μ = −2. The basis centers are the blue points in Fig. 8(d). All the elastically warped SIs are generated with a precision of less than 1 pixel, so high-precision integration from each sub-channel can be guaranteed. A look-up table (LUT) of the pixel-coordinate mapping from the original SI to the predistorted SI can be generated to facilitate the real-time, rapid generation of other projected images. Figure 8(e) shows the image predistortion warping process of the SIs and the generation of a single-focal SIA (SI1 and SIn correspond to the red and green tracing points shown in Fig. 8(d)). A minimal numerical sketch of this RBF mapping is given after this list.
  • (7) Generating the SIA Iγ by splicing SIs. Figure 8(f) shows the splicing process: the SIs of the sub-channels from SI1 to SIu and from SIu to SIn are spliced to generate two SIAs corresponding to M1 and M2 (for the example in this paper, u = 336 and n = 681), and then the single-focal SIA Iγ is generated by summing the two SIAs. Usually, the edge of the image Iγ needs to be trimmed to match the size of the actual optical MLA (for example, the 20 mm × 20 mm hexagonal-arranged MLA cannot contain an integer number of sub-channels in the x and y directions).
  • (8) Image fusion of multifocal SIAs. Figure 9 shows that multifocal SIAs can be generated by logical "AND" or "OR" processing of the SIAs at different projection distances. The "AND" and "OR" processes correspond to the intersections and unions of multiple single-focal SIAs, respectively. Through the "AND" processing, the projection image at each depth only affects the illuminated projection areas at the other depths (Types 1 and 2), while with "OR" processing, the illuminance distribution at each depth only affects the non-illuminated areas at the other depths (Type 3); a short sketch of this fusion is also given after this list. Figure 9(a) shows the SIA generation for the multifocal planes of Type 1 integral projection. By the "AND" processing, the multifocal-plane SIA is generated from the SIAs of the same pattern at the corresponding distances of 200 mm and 400 mm, and projection Type 1 can be realized. Figure 9(b) shows the SIA generation for the multifocal planes of Type 2 integral projection. By the "AND" processing, the multifocal-plane SIA is generated from the SIAs of two different patterns at the corresponding distances of 200 mm and 400 mm, and projection Type 2 can be realized. Figure 9(c) shows the SIA generation for the multifocal planes of Type 3 integral projection. By the "OR" processing, the multifocal-plane SIA is generated from the SIAs of three different patterns at the corresponding distances of 200 mm, 400 mm, and 600 mm, and projection Type 3 can be realized. In addition, the fusion of multifocal SIAs can achieve an image projection of continuous depth by:
    $$\begin{aligned} AND(\mathrm{I}_1, \mathrm{I}_2, \mathrm{I}_3, \ldots, \mathrm{I}_n) &= \bigcap_{\gamma=1}^{n} \mathrm{I}_\gamma \\ OR(\mathrm{I}_1, \mathrm{I}_2, \mathrm{I}_3, \ldots, \mathrm{I}_n) &= \bigcup_{\gamma=1}^{n} \mathrm{I}_\gamma \end{aligned}$$
    where I1, I2, I3, …, In are the SIAs corresponding to different focal planes. At the same time, within the depth range of the focal planes corresponding to I1, I2, I3, …, In, an integrated and unblurred imaging effect for continuous-depth projection targets can also be achieved.
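As referenced in step (6), the following is a minimal, self-contained sketch of the RBF mapping of Eq. (15): it fits the weights βx,η and βy,η together with a first-order polynomial term from the control-point correspondences and then evaluates the warp at query points. The constant basis radii, the polynomial order, and the toy distortion are simplifying assumptions, not the authors' implementation.

import numpy as np

def fit_rbf_warp(src_pts, dst_pts, lam=1.0, mu=-2.0):
    """Fit the RBF mapping of Eq. (15) from regular grid points to traced points.

    src_pts: (n, 2) regular sampling points (also used as the basis centres).
    dst_pts: (n, 2) chief-ray-traced points on the SI plane.
    Returns a callable mapping (m, 2) query points to warped coordinates.
    """
    centers = np.asarray(src_pts, dtype=float)
    r2 = lam * np.ones(len(centers))                 # per-basis scale, assumed constant

    def basis(pts):
        d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return (d2 + r2[None, :]) ** (mu / 2.0)      # inverse-multiquadric form for mu = -2

    # Augment with a first-order polynomial phi(x, y) = c0 + c1*x + c2*y.
    A = np.hstack([basis(centers), np.ones((len(centers), 1)), centers])
    coeff, *_ = np.linalg.lstsq(A, np.asarray(dst_pts, dtype=float), rcond=None)

    def warp(query):
        query = np.asarray(query, dtype=float)
        Q = np.hstack([basis(query), np.ones((len(query), 1)), query])
        return Q @ coeff
    return warp

# Toy example: a 5 x 5 regular grid distorted by a small radial term (hypothetical).
gx, gy = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
src = np.column_stack([gx.ravel(), gy.ravel()])
dst = src * (1.0 + 0.05 * (src ** 2).sum(axis=1, keepdims=True))
warp = fit_rbf_warp(src, dst)
print(f"max control-point residual: {np.abs(warp(src) - dst).max():.2e}")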
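Step (8) then reduces to elementwise logic on the binary SIA masks; a short sketch with toy binary arrays (not the actual SIAs) is:

import numpy as np

rng = np.random.default_rng(0)
I1 = rng.integers(0, 2, (400, 400), dtype=np.uint8)   # single-focal SIA for plane 1 (toy)
I2 = rng.integers(0, 2, (400, 400), dtype=np.uint8)   # single-focal SIA for plane 2 (toy)

I_and = np.logical_and(I1, I2).astype(np.uint8)       # intersection, Eq. (16): Types 1 and 2
I_or = np.logical_or(I1, I2).astype(np.uint8)         # union, Eq. (16): Type 3
print(I_and.mean(), I_or.mean())                      # resulting fill factors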

4. Simulations and experiments

The illumination distribution simulations were performed with a 5-lm collimated source using the software LightTools. Figures 10(a) and (b) show the results of single-focal plane projection at 200 mm and 400 mm, respectively. The results show that the "letters" can only be imaged onto a single plane, and the image at the other distance is severely blurred. The inherent DOF characteristic of the MLA cannot support a clear image over a larger imaging depth range. In the simulation results shown in Fig. 10(c−f), "AND" processing was performed on the corresponding SIAs for two projection distances, and two patterns are clearly formed simultaneously at 200 mm and 400 mm, respectively. The sizes of the projected images are nearly 93 mm × 93 mm and 210 mm × 210 mm at 200 mm and 400 mm, respectively. Owing to the central-projection characteristic of multifocal projection, for the multifocal projection of the same pattern (Type 1), the defocus speckle between different focal planes has little influence. However, the defocus speckle results in more pronounced uneven illumination for multifocal projection of different patterns, as in Fig. 10(e) and (f). The gray-level differences between the images at different distances result in uneven illumination at the different distances. Figure 10(g) shows an integral projection in which two patterns are clearly formed simultaneously at 200 mm and 600 mm. Figure 10(h) shows the simulation result of same-image projection at three distances. In contrast, Fig. 10(i) shows the simulation result with the SIAs processed by "OR": the images "N", "M", and "F" were imaged at 200, 400, and 600 mm, respectively.

Fig. 7. Generation flow chart of multifocal SIA.

Fig. 8. (a) Hexagonal arrangement of sub-lens and sampled edge-FOV of sub-channel; (b) position determination of central-projection point O; (c) sampled rectangular FOV grid points (red) and target distribution area (white area) within common area Dγ on target plane; (d) backward chief ray tracing points (red and green) and ideal regular grid points (blue) on SI plane; (e) SIs generated by RBF image warping (SI1 and SIn correspond to the red and green tracing points shown in figure (d)) and (f) SIAs generated by splicing SIs corresponding to M1 and M2.

Fig. 9. Generation of multifocal SIA images. (a) Multifocal SIA generation of Type 1 projection at 200 mm and 400 mm by using “AND” process. (b) Multifocal SIA generation of Type 2 projection at 200 mm and 400 mm by using “AND” process. (c) Multifocal SIA generation of Type 3 projection at 200 mm, 400 mm and 600 mm by using “OR” process.

Fig. 10. Simulations of single-focal MLA projections with (a) plane at 200 mm and (b) plane at 400 mm, respectively. Simulations of multifocal type 1 two same images (c, d) and type 2 two different images (e, f) projected at 200 and 400 mm, and type 2 two different images (g) projected at 200 and 600 mm, respectively (through logic “AND” processing of SIAs). Simulations of multifocal type 1 same image (h) and type 3 three different images (i) projected at 200, 400 and 600 mm, respectively (through logic “AND” and “OR” processing of SIAs).

The RMS spot analysis of the MLA more clearly reflects the influence of multi-channel integration on the image quality at different projection distances. As shown in Fig. 11(a), the RMS spot analysis of the MLA projection imaging was carried out by uniformly sampling 21 field points on the projection plane from the center to the edge of the FOV. To reflect the cumulative effect of the sub-channel imaging quality, the integral imaging evaluation of the MLA projection can be calculated using Eq. (17): an average of the RMS spots of all sub-channels at the corresponding projection FOV, which evaluates the integral quality of the MLA projector.

$$r_{\text{array}}(y') = \frac{1}{N}\left[ r_{\text{RMS}}(y') + \sum_{\omega=2}^{N} r_{\text{RMS}}(y' + \delta v_\omega) \right]$$
where the imaging RMS spot radius rarray(y′) of the MLA at the projection FOV y′ is determined by the average of the RMS spots of the N sub-channels at that projection FOV. Here, y′ + δvω are the fields corresponding to the N−1 other sub-channels. It can be seen from Fig. 11(a) that in the relative FOV from 0 to 0.4, the integral RMS spot size is uniform, and the farther the projection distance, the larger the integral spot. In the relative FOV from 0.6 to the maximum, the integral RMS spot size increases gradually, and it increases more at a larger projection distance. The RMS spot analysis was realized by the macro function "RMSSPOT" supported by the optical design software CODE V. Figure 11(b) shows the distortion analysis at the central wavelength of 587 nm. Because of the central-projection constraints, the maximum distortion of nearly −0.25% is the same at the different projection distances.
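A direct transcription of Eq. (17) is given below; the single-channel spot function and the field offsets δvω are hypothetical stand-ins for the CODE V "RMSSPOT" output, used here only to show how the per-channel values are averaged.

import numpy as np

def integral_rms_spot(r_rms, y_prime, deltas):
    """Averaged sub-channel RMS spot radius at one projection field point, Eq. (17)."""
    fields = np.concatenate(([y_prime], y_prime + np.asarray(deltas, dtype=float)))
    return float(np.mean(r_rms(fields)))

# Hypothetical stand-ins: spot radius growing quadratically with relative field,
# and small per-channel field offsets delta_v_omega for N - 1 = 20 sub-channels.
r_rms = lambda y: 0.4 + 0.8 * y**2        # [mm]
deltas = np.linspace(-0.05, 0.05, 20)
print(f"r_array(0.6) ~ {integral_rms_spot(r_rms, 0.6, deltas):.3f} mm")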

Fig. 11. (a) Integral RMS spot size analysis at different projection distances. (b) Distortion analysis for integral projection image. (c) Analysis of light efficiency (collimated source with divergence angle of 0°), and SIA transmittances of simulations in Fig. 10. (d) Analysis of projection crosstalk stray light energy distribution when divergence angle equals 0° and (e) equals 10°. (f) Variation analysis of energy rate of crosstalk stray light with the change of divergence angle of light source.

Figure 11(c) shows the analysis of light efficiency (collimated source with a divergence angle of 0°) and SIA transmittance for the simulations in Fig. 10(c−i). The light efficiency and the transmittance of the SIA mask have a linear relationship, with an approximately 11%−12% gap. The gap comprises 10% Fresnel loss and a nearly 1%−2% loss resulting from crosstalk stray light and light absorption on the side walls of the system. There are three possible factors producing the crosstalk stray light: (1) The first is the NA matching in the design of the condenser and projection optics. If the NA of the condenser sub-lens is much larger than that of the projection sub-lens, the divergence angle of the light refracted by the condenser sub-lens will be much larger than the divergence angle needed to generate imaging rays, which results in crosstalk stray light. (2) The second factor is the collimation error of the incident beam. In this paper, the optical system is designed for a collimated incident beam generated by the LED source, and imperfectly collimated incident rays cannot be avoided. Imperfectly collimated incident rays generate rays with a larger divergence angle after passing through the condenser sub-lens, and a crosstalk problem is formed. (3) The third factor is the tolerance problem: the optical performance is influenced by the alignment errors of the optical elements.

For the first factor, the CMLA and PMLA have been designed by considering the NA match, as depicted in Sec. 3.1. In the actual design, the NAs of the PMLA and CMLA are matched: the spherical condenser sub-lens is optimized so that the divergence angle at its flat surface is equal to or slightly smaller than the divergence angle of the imaging projection sub-lens, reducing the possible crosstalk stray light. Therefore, the crosstalk generated by the CMLA and PMLA can be reduced or even eliminated by the optical design in advance. For the second factor of imperfectly collimated incident rays, Fig. 11(d) and (e) show the analysis of the projection crosstalk stray-light energy distribution when the source divergence angle equals 0° and 10°, respectively. The large-FOV crosstalk rays in orange and the regular chief rays in blue are depicted in the dashed box of Fig. 11(e). When the light source is not collimated (divergence angle larger than 0°), the stray-light energy increases. Therefore, as shown in Fig. 11(f), we give the variation of the energy ratio of the crosstalk stray light with the divergence angle of the light source. Fortunately, because the crosstalk rays only hit the area outside the integral image, the image quality of the integral image is not influenced by the large-FOV stray light. For the third factor, because the projection MLA comprises two identical MLAs, which reduces the tolerance sensitivity during assembly, the alignment errors of the optical elements mainly fall on the MLAs and the SIA. The influence of the alignment deviation between the MLAs and the SIA is simulated in LightTools. As shown in Fig. 12, multifocal MLA projection simulations are conducted when the offsets between the PMLA and the SIA are (a) dx = 0.1 mm, (b) dx = 0.2 mm, (c) dx = 0.1 mm and dy = 0.1 mm, and (d) dx = 0.2 mm and dy = 0.2 mm (the top and bottom figures correspond to projection distances of 200 mm and 400 mm, respectively). According to the simulation results, when the alignment error is less than 0.1 mm, the integral images at the target planes have a certain offset, but there is no obvious loss of uniformity, imaging quality, or integrity. When the alignment error is greater than 0.1 mm, the integral images at the target planes have a larger offset, the image integrity is degraded, and the crosstalk stray light becomes obvious: the interface regions of adjacent sub-images are also projected onto the target planes. Therefore, the actual adjustment strategy we adopted is: when the center of the actual projected target pattern is basically coaxial with the center of the MLA projector, the alignment errors of the MLA and SIA are basically eliminated, and the crosstalk stray-light distribution also disappears from the target surface.

Fig. 12. Multifocal MLA projection simulations when the offsets between PMLA and SIA are (a) dx = 0.1 mm, (b) dx = 0.2 mm, (c) dx = 0.1 mm and dy = 0.1 mm, (d) dx = 0.2 mm and dy = 0.2 mm, (top: 200 mm; bottom: 400 mm).

Figure 13(a) shows the aspherical MLA and its high-precision mold core. The MLA, with millimeter-scale sub-apertures, has a total size of 20 mm × 20 mm and can be regarded as a freeform optical element with a local sag close to 0.2 mm. Both MLAs designed in this study were manufactured from polycarbonate via high-precision freeform injection molding [33]. The fabrication precision of the MLA mold core surface can reach 0.3 µm, which is sufficient for geometrical-optics applications. Furthermore, because the two identical MLAs are molded with the same injection mold, the projection MLA comprising two identical MLAs can reduce the tolerance sensitivity during assembly. Figure 13(b) shows two of the multifocal SIA masks with 3000 dpi resolution and a thickness of 0.1 mm; the top and bottom SIA masks correspond to the SIAs used in the simulations of Fig. 10(c) and Fig. 10(e), respectively. The SIA masks were fabricated by high-precision laser printing on polyethylene terephthalate. Figure 13(c) shows the experimental setup: calibration paper with 5 mm square grid lines is attached to the target plane to verify the changes of projection size when the camera captures images at different projection distances (in the experiments, the camera is fixed beside the MLA projector, and the projection plane is moved to change the projection distance). A white collimated LED light source with 5 lm and a 5° divergence angle was used in the experiments.

Fig. 13. (a) Fabricated aspherical MLA and its mold core. (b) Multifocal SIA mask used in Fig. 10 (c) and (e). (c) Experiment setup.

Figures 14(a, b), (c, d), (e, f), and (g, h, i) show the experimental results corresponding to the simulation results in Fig. 10(c), (e), (f), and (i), respectively (more experimental effects of continuous-depth changes are shown in Visualization 1). The experimental results in Fig. 14(a, b), (c, d), (e, f), and (g, h, i) also correspond to the multifocal MLA-projector Types 1, 2, and 3. By adjusting the distance of the target plane, the illumination distributions at multiple different distances were realized. In addition, there is no apparent illumination peak spot on the target surface; both the simulations and the experiments show good suppression of stray light.

Fig. 14. Multifocal projection experiments. Experiment results of figures (a, b), (c, d), (e, f), and (g, h, i) correspond to the simulation results shown in Fig. 10 (c), (e), (f) and (h).

Figures 15(a−c) show the geometric MTF analysis of the simulations (S) and experiments (E) for 200, 400, and 600 mm, respectively. An actual projection resolution of 0.3−0.4 lines/mm is realized for 200−600 mm, which is sufficient for pattern-illumination needs. The main reasons for the MTF decline are probably the insufficient resolution of the SIA mask and the uneven thickness of the SIA mask adhesion on the MLA rear surface. The insufficient resolution directly decreases the integral image quality. The uneven adhesion thickness degrades the image quality of some sub-channels in the MLA, which further degrades the quality of the integral image. In the future, we will look for a higher-resolution mask printing process and a more uniform mask adhesion process to improve the quality of the multifocal integral projection. Furthermore, to objectively evaluate the illumination contrast and structural similarity between the simulations in Fig. 10(c−f), the experiments in Fig. 14(a−f), and the target illumination distributions, the correlation coefficient cc is used [10]:

$$cc = \frac{\sum_{i=1}^{N} (A_i - \mu_A)(B_i - \mu_B)}{\sqrt{\sum_{i=1}^{N} (A_i - \mu_A)^2} \sqrt{\sum_{i=1}^{N} (B_i - \mu_B)^2}}$$
where N is the total number of pixels; Ai and Bi are the pixel values of the two pictures; and μA and μB are the mean values of the two pictures, respectively. When cc = 1, there is a perfect correlation between the two pictures. The results are shown in Fig. 15(d): owing to the illumination influence in multifocal projection with different patterns, there is a high similarity for same-pattern multifocal projection (Type 1) and a relatively low similarity for different-pattern multifocal projection (Type 2). For two-plane projection, as shown in Fig. 10(g), this influence can be reduced by increasing the projection-distance gap between the two focal planes, because the larger the distance gap, the larger the speckle spots that the pattern projected from one focal plane casts on the other focal plane. The contrast of these larger speckle spots is reduced, and hence their influence on the contrast of the pattern at the other focal plane is also reduced (for example, comparing Fig. 10(e) and 10(g), the influence of the speckle spots is reduced in Fig. 10(g)). Furthermore, for multi-plane or continuous-plane projection, the influence can be reduced by means of SI grayscale correction, but this may cause greater energy loss.
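Equation (18) is the standard Pearson correlation applied to the flattened illumination images; a minimal sketch of this evaluation (with toy data, not the measured distributions) is:

import numpy as np

def correlation_coefficient(A, B):
    """Correlation coefficient cc of Eq. (18) between two illumination distributions."""
    a = np.ravel(np.asarray(A, dtype=float))
    b = np.ravel(np.asarray(B, dtype=float))
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example: a target distribution compared with a slightly noisy copy of itself.
rng = np.random.default_rng(0)
target = rng.random((256, 256))
measured = target + 0.05 * rng.standard_normal((256, 256))
print(f"cc = {correlation_coefficient(target, measured):.3f}")   # close to 1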

Fig. 15. MTF of simulation (S) and experimental assessment (E) for (a) 200, (b) 400 and (c) 600 mm, respectively. (d) Analysis of illumination contrast and structure similarity by correlation coefficient cc between the simulations in Fig. 10(c−f), experiments in Fig. 14(a−f) and target illumination distributions.

5. Conclusion and prospect

In this study, a two-layer ultra-thin aspherical sub-lens with NA = 0.3 is proposed as a projection sub-channel, and the optimization design ensures high optical integration precision and high optical efficiency. The central-projection constraint relations between the sizes and projection distances of the multifocal projection are defined to realize an integral multifocal projection image without tailoring loss. The DOF analysis of the MLA and sub-lens is also introduced to prove the sufficient condition for realizing multifocal projection. Combined with the RBF image warping method, multifocal SIAs were acquired, and three types of multifocal integral projection were realized, breaking through the traditional limitations of the single-focal DOF. A prototype of the MLA-based multifocal LED-projector imaging system with a thickness of less than 4 mm is developed. Analyses of the integral imaging quality, distortion correction effect, and energy loss are conducted. Substantial simulations and experiments are also conducted to verify the three types of MLA projection we proposed. However, owing to the illumination influence on multifocal projection with different patterns (Type 2), there is a relatively low similarity for different-pattern multifocal projection. Furthermore, the resolution of the integral imaging needs improvement.

In this paper, the sub-image pixel mapping relationship is generated by the RBF method, and it can be regarded as a look-up table (LUT), a mature digital and electronic technique for real-time image preprocessing. Although LUT generation by the RBF method is time consuming (30 seconds in MATLAB for generating a sub-image with a resolution of 800 × 800 pixels), once the LUTs from an initial image to the multiple sub-images are acquired, the time to generate the high-resolution SIA becomes negligible, which means a real-time integral projection can be realized. In future work, we will apply a transparent liquid crystal display instead of the SIA mask to realize real-time dynamic pattern projection.
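One possible way to apply such a precomputed coordinate LUT with plain array indexing is sketched below (nearest-neighbour remapping on a toy image; the LUT layout is an assumption, not the authors' implementation). Once the LUT arrays exist, the warp is a single indexing operation, which is why the per-image cost becomes negligible.

import numpy as np

def apply_lut(src_image, lut_rows, lut_cols):
    """Warp an image with a precomputed pixel-coordinate LUT (nearest neighbour).

    lut_rows / lut_cols give, for every output pixel, the source-pixel
    row/column it should copy, as produced once by the RBF mapping.
    """
    return src_image[lut_rows, lut_cols]

# Toy example: an identity LUT for an 800 x 800 sub-image.
h, w = 800, 800
lut_rows, lut_cols = np.indices((h, w))
src = np.random.default_rng(0).random((h, w))
out = apply_lut(src, lut_rows, lut_cols)
print(np.array_equal(out, src))   # True for the identity LUT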

Funding

National Key Research and Development Program of China (2017YFA0701200); National Natural Science Foundation of China (61822502); Young Elite Scientist Sponsorship Program by CAST (2019QNRC001).

Acknowledgments

We would like to thank Synopsys for providing the education licenses of CODE V and LightTools. We thank the optical department of Beijing NED Ltd for their participation. We also thank Haisong Tang, Haoran Li, and Chen Chen for their help.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Supplementary Material (1)

Visualization 1: Multifocal projection experiments.

