
Mapping-based design method for high-quality integral projection system

Open Access

Abstract

A general method for designing an integral projection system is proposed, including optical design and digital preprocessing based on the mapping within the projection system. The per-pixel mapping between the sub-images and the integral projection image is generated by incorporating an integral projection imaging model as well as the ray data of all sub-channels. By tracing rays for sparsely sampled field points of the central sub-channel and constructing the mapping between the central sub-channel and other sub-channels, the efficient acquisition of ray data for all sub-channels is achieved. The sub-image preprocessing pipeline is presented to effectively address issues such as overlapping misalignment, optical aberrations, inhomogeneous illumination, and their collective contribution. An integral projection optical system with a field of view (FOV) of 80°, an F-number of 2, and uniform image performance is given as a design example. The ray tracing simulation results and quantitative analysis demonstrate that the proposed system yields distortion-free, uniformly illuminated, and high-quality integral projection images.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the development of micro-manufacturing methods, such as micro-stereolithography [1], direct laser writing [2,3], excimer laser micromachining [4,5], and the thermal reflow method [6,7], the fabrication accuracy of aspheric microlens arrays (MLAs) has been greatly improved. MLAs are widely used in various applications, especially in integral imaging [8–14], integral projection [15–18], augmented reality [14,19,20], illumination homogenization [21,22], light field display [23–26], wavefront sensing [27–29], and fiber optic communication [30]. Due to its inherent characteristics, including a short focal length, small aperture, and multiple channels, an integral projection system based on an MLA can achieve a projection pattern with a large FOV, large depth of field, and uniform illumination at ultra-short projection distances [15,16]. It is therefore particularly suitable for structured illumination [31], beam shaping [32], and projection on surfaces with large deformations [17].

The integral projection system based on an MLA was first proposed by M. Sieler et al. [15]. The integral projection image is formed by the superposition of the projected images of all individual sub-channels. Since it was designed based on paraxial imaging theory and a single-layer projection MLA, the projection system in [15] has a limited projection FOV and relatively poor image quality at the edges of the projected image. In our previous study [16], an integral projection image with high-precision superposition was achieved by using offset addressing, chief ray tracing, and image warping for each individual sub-channel. Compared with the design method based on paraxial optics, the method proposed in [16] is more general and versatile, as it is not limited by the arrangement of sub-channels or the optical distortion of the integral projection system. However, it involves repetitive ray tracing and radial basis function (RBF) interpolation for each sub-channel to generate pre-warped sub-images, resulting in high computational complexity and time-consuming processing. Moreover, due to the limited degrees of freedom of the optical system structure in [16,18], it is difficult to achieve a large FOV and high luminous flux simultaneously. To address the above challenges, a general design method for a high-quality integral projection system is proposed in this paper. The method incorporates a novel optical design for the projection sub-channel as well as an efficient image preprocessing method. The optical properties of the entire system, including magnification, distortion, illumination, and the point spread function (PSF), are taken into account in the integral projection imaging model and preprocessing pipeline. Furthermore, by conducting ray tracing only for the central sub-channel and generating the mapping of the entire system, the proposed method eliminates redundant ray tracing, RBF data interpolation, and analysis for all sub-channels.

In section 2 of this paper, the imaging degradation of the integral projection system is analyzed, and an integral projection imaging model is constructed in a per-pixel manner. In section 3, the overall design framework for the high-quality integral projection system is illustrated, including the optical design method and the digital image preprocessing method. In section 4, non-sequential ray tracing simulations of the proposed integral projection system are conducted and illustrated. The projected image performance is evaluated by comparing different preprocessing procedures to verify the effectiveness of the proposed method.

2. Integral projection imaging model

2.1 Basic principle

Figure 1 illustrates the working principle of the integral projection system. The illumination beam from the light source is incident on the condenser MLA (CMLA), and the converging sub-beams illuminate the sub-image array (SIA) positioned near the focal plane of the projection MLA (PMLA). Then, the sub-beams carrying the imaging information of all sub-images are projected onto the projection plane at a distance L by the PMLA. Finally, the projected images of all sub-channels are superimposed to form an integral projection image.


Fig. 1. Basic working principle of integral projection system based on MLAs.


Considering the object-image correspondence and the sub-channel offset, the ideal projection imaging model of the n-th sub-channel (shown in the blue-green dashed box of Fig. 1) of the integral projection system can be expressed as follows:

$$\begin{bmatrix} \Theta' \\ Y' \end{bmatrix} = \begin{bmatrix} 0 \\ -np \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ L & 1 \end{bmatrix} \begin{bmatrix} 1 & -1/f \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ s & 1 \end{bmatrix} \begin{bmatrix} \theta_n \\ y_n \end{bmatrix} \approx \begin{bmatrix} -y_n/f \\ -y_n L/f - np \end{bmatrix}$$
where $Y'$ and $\Theta'$ are the image height and incidence angle at the projection plane, $y_n$ and $\theta_n$ are the object height and incidence angle at the sub-image plane (or SIA plane) in the local coordinate system of the n-th sub-channel, p is the pitch of the sub-channels, s is the object distance, f is the focal length of the projection sub-channel, and L is the projection distance.

The object height yn of the sub-image of the n-th sub-channel can be expressed as:

$$y_n = -\frac{Y' + np}{L}\,f = \frac{Y' + np}{M}$$
where M = -L/f is the magnification of the sub-channel and represents the ideal imaging relationship between the image on the projection plane and the object on the SIA plane; np is the offset of the n-th sub-channel relative to the central sub-channel.

The object height difference Δy between adjacent sub-channels on the SIA plane is:

$$\Delta y = {y_n} - {y_{n - 1}} = \frac{p}{M}$$

Ideally, the object heights of all individual sub-channels can be easily obtained from Eq. (2) and Eq. (3) for sub-image rendering.
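As a quick numerical illustration of Eqs. (2) and (3), the short script below computes the ideal object heights for a few sub-channels. The focal length and projection distance follow the design example in section 3 (f = 0.92 mm, L = 570 mm), while the pitch value is a placeholder chosen only for illustration.

```python
# Illustrative evaluation of Eqs. (2) and (3); p is an assumed value.
f = 0.92           # focal length of the projection sub-channel [mm]
L = 570.0          # projection distance [mm]
p = 1.6            # sub-channel pitch [mm] (placeholder)
M = -L / f         # sub-channel magnification, M = -L/f

def object_height(Y_img, n):
    """Ideal object height y_n on the SIA plane of the n-th sub-channel
    for a target image height Y' on the projection plane, Eq. (2)."""
    return (Y_img + n * p) / M

delta_y = p / M    # object height step between adjacent channels, Eq. (3)
print(object_height(100.0, 0), object_height(100.0, 1), delta_y)
```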

2.2 Preprocessing model of sub-image

The integral projection imaging model described in section 2.1 is based on the ideal correspondence between the object and the image as well as the position offset of the sub-channel. However, this simplified model neglects the aberrations that degrade the image quality, particularly in projection imaging systems with a wide FOV and a large aperture.

In this paper, the optical properties and aberrations of the integral projection system are taken into account to generate a high-quality integral projection image. Due to the presence of optical aberrations such as coma, field curvature, and astigmatism in the optical system, the edges of the projection image often appear blurred compared to the center. For a single-channel projection system, optical distortion does not affect the sharpness of the projected image but only results in a distorted image. However, optical distortion is a field-dependent aberration, and the object height of each sub-channel varies according to Eq. (3). Therefore, the integral projection image is formed by superimposing the spatial projection images of all sub-channels, each with a different extent of distortion and blur. Due to the integral effects of image superposition, the degradation of the integral projection image caused by the above aberrations should be considered during the generation of sub-images. Furthermore, the edge illumination attenuation [32] of the sub-channels leads to an inhomogeneous illumination distribution within the integral projection image. Additional comparisons of the integral imaging model are outlined in Supplement 1.

By separating the contribution of each sub-channel, the per-pixel mapping of the target projected image to the sub-image can be represented as a preprocessing model of the sub-image, which involves pre-deblurring, illumination pre-compensation, as well as pre-warping. The preprocessing model of the n-th sub-channel can be expressed as:

$$\tau ({x_0^n,y_0^n} )= {i_\textrm{p}}({Mx_0^n,My_0^n} )\otimes {h^{ - 1}}({Mx_0^n,My_0^n} )$$
$$\xi ({x_0^n,y_0^n} )= \tau ({x_0^n,y_0^n} ){\varepsilon ^{ - 1}}({Mx_0^n,My_0^n} )$$
$$i(x_\textrm{d}^n,y_\textrm{d}^n) = {{\cal D}^{ - 1}}({x_\textrm{d}^n,y_\textrm{d}^n;\xi ({x_0^n,y_0^n} )} )$$
where M denotes the magnification of the sub-channel, $x_0^n=\left(X^{\prime}+n p_{\mathrm{x}}\right) / M$ and $y_0^n=\left(Y^{\prime}+n p_{\mathrm{y}}\right) / M$ are the ideal object heights of the sub-image in the X-direction and Y-direction, px and py are the minimum pitches of the sub-channels in the X-direction and Y-direction, $X^{\prime}$ and $Y^{\prime}$ are the image heights on the projection plane in the X-direction and Y-direction, ip is the target projection image on the projection plane, ⊗ is the convolution operator, h is the normalized blur kernel (PSF) of the projection sub-channel, $i_\mathrm{p} \otimes h^{-1}$ represents the pre-deblurring process, and $\tau\left(x_0^n, y_0^n\right)$ is the preprocessed image after pre-deblurring. $\varepsilon$ is the illumination distribution on the projection plane, $\varepsilon^{-1}$ represents the pre-compensated illumination map obtained by reversing the relative illumination, and $\xi\left(x_0^n, y_0^n\right)$ is the preprocessed image after pre-deblurring and illumination pre-compensation. $x_{\mathrm{d}}^n$ and $y_{\mathrm{d}}^n$ are the real object heights of the sub-image of the n-th sub-channel in the X-direction and Y-direction, $\mathcal{D}^{-1}$ represents the image warping based on the distortion mapping, and $i\left(x_{\mathrm{d}}^n, y_{\mathrm{d}}^n\right)$ is the final preprocessed sub-image after pre-deblurring, illumination pre-compensation, and pre-warping. Because the magnification varies with wavelength (lateral chromatic aberration), the aforementioned processing steps must be performed individually for each color channel.

The mapping of the integral projection image to all individual sub-images can be established on a per-pixel basis by taking into account the ray data of all individual sub-channels. The acquisition of the distortion, illumination distribution and PSFs of the integral projection system is achieved by performing ray tracing for the central sub-channel and applying the RBF-based mapping method [3335], which will be detailed in section 3. In this work, the preprocessed image τ after deblurring is generated by using a learning-based Wiener deconvolution method [36,37]. Mathematically, the Wiener deconvolution equation can be expressed as follows:

$$\mathcal{F}(\tau) = \frac{\mathcal{F}^{*}(h)\,\mathcal{F}(i_\mathrm{p})}{\left|\mathcal{F}(h)\right|^{2} + \lambda}$$
where $\mathcal{F}(\cdot)$ denotes the Fourier transform and * denotes the complex conjugate. In this paper, λ is a learnable regularization term that alleviates artifacts introduced by inaccurate manual settings. Using gradient descent, λ is adaptively learned during optimization (see Supplement 1 for details).
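For reference, a minimal frequency-domain implementation of Eq. (7) is sketched below. It uses a fixed regularization weight `lam` in place of the learned λ of the paper, and the helper name and padding convention are ours:

```python
import numpy as np

def wiener_deconvolve(image, psf, lam=1e-2):
    """Frequency-domain Wiener deconvolution, Eq. (7).

    `image` is a grayscale image (or image patch) and `psf` the
    normalized blur kernel for that patch. A fixed regularization
    weight `lam` stands in for the learned lambda of the paper.
    """
    # Embed the PSF in an image-sized array and roll its center to the
    # origin, so its FFT matches the circular-convolution convention.
    kernel = np.zeros_like(image, dtype=np.float64)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(kernel)
    I = np.fft.fft2(image.astype(np.float64))
    T = np.conj(H) * I / (np.abs(H) ** 2 + lam)   # Eq. (7)
    return np.real(np.fft.ifft2(T))
```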

Once the illumination mapping and the distortion mapping of each sub-channel are determined, the preprocessed images ξ and i can be generated by applying illumination pre-compensation according to Eq. (5) and pre-warping according to Eq. (6). By preprocessing each sub-image in separate steps, image blurring, inhomogeneous illumination, and distortion of the corresponding projected image can be well corrected. The preprocessing pipeline for the integral projection system is detailed in section 3.4.

3. Design method for integral projection system

Figure 2 shows the flow diagram of the mapping-based design method for a high-quality integral projection system. First, the system parameters of the central sub-channel projection system are determined based on the requirements of projection distance, projection size, and imaging quality. To cooperate with the image preprocessing model in generating high-quality integral projection images, the uniformity of imaging quality over the entire FOV is set as the primary criterion in the optimization process. Then, the chief ray of the edge field is traced to determine the maximum projection height and the largest projection area of the central sub-channel at the given projection distance. The largest projection area is divided into a grid of field points for ray tracing, and the ray data (traced point positions, illumination, and PSFs) of the field points are obtained. The ray tracing and ray data acquisition process are detailed in section 3.1.2.

The illumination and distortion mapping relationships over the entire FOV of the central sub-channel are generated by the RBF-based mapping method. The PSF mapping across the entire FOV is represented in a patch-wise form, which is further detailed in section 3.1.2. According to the arrangement of sub-channels, the MLA offset matrix is calculated to establish the mapping between the central sub-channel and the other sub-channels. The common projection area for all sub-channels is calculated from the field grid of the central sub-channel and the offset matrix of the entire system. As the common projection area is a portion of the largest projection area, and considering the offset of each sub-channel, the effective FOV used for integral projection imaging varies slightly among the sub-channels. The effective FOV grids of all individual sub-channels are calculated by the ideal object-image mapping between the sub-image and the projected image. The per-pixel mappings of distortion, illumination, and PSFs of all sub-channels can then be generated from the tailored FOV area and the established mapping function.

The sub-image of each sub-channel is generated based on the preprocessing model proposed in section 2.2 to eliminate the image degradation caused by each sub-channel as well as the integral effects of all sub-channels. The integral projection system, the SIA, and the stop array are obtained by splicing the sub-channels, sub-images, and stops according to the offset matrix, respectively. Finally, the illumination simulation and analysis of the proposed integral projection system are performed.


Fig. 2. The flow diagram of design method for high-quality integral projection system.


3.1 Optical design and analysis of sub-channel

3.1.1 Optical design

The integral projection optical system typically comprises a condenser microlens array (CMLA) and projection microlens arrays (PMLAs). In our previous work [16,18], two aspherical lenslets were used to build the projection sub-channel. However, due to the limitations of the optical structure of the projection system and the demand for ultra-short-throw projection, it is difficult to meet the design requirements of large FOV, high luminous flux, high illumination uniformity, and high resolution simultaneously.

Leveraging the angle-expansion feature of a telescope [23], a novel sub-channel structure is built. By combining a telescope structure (a Galilean angle expander) with projection sub-lenses composed of two lenslets, the projection FOV is further expanded. To achieve an integral projection system with uniform image quality over the entire FOV, an automatic image performance balance algorithm [38] is employed to optimize the projection optical system. By subdividing the sub-image into a grid of patches and applying deconvolution to each individual image patch [39,40], a sharp and clear integral projection image can be achieved throughout the entire FOV. During optimization, the aperture of the projection sub-channel is incrementally enlarged to achieve high luminous flux, and the lens surfaces are changed from spherical to aspherical to improve the image quality.

The optimized projection sub-channel, as illustrated in Fig. 3(a), features a focal length of 0.92 mm, a full FOV of 80°, and an F-number of 2. A full-FOV projection size of 950 mm is realized when the projection distance is set to 570 mm. All the lenses are made of BK7, and the curved surfaces (S2, S4, S5, and S7) are designed as 8th-order aspherical surfaces. The stop and the sub-image of the projection sub-channel are positioned at the S6 and S8 surfaces, respectively. Since the size of the stop is smaller than the maximum aperture of the sub-channel, crosstalk between the sub-channels can be effectively eliminated by inserting a stop array at the S6 surface. Aspherical surfaces were also used to improve the imaging quality of the integral projection system in previous work [18]; however, the number of pixels that can be resolved in the projected images was limited (about 190 × 190 resolvable pixels) by the miniature aperture of the sub-channel. In this paper, a projected image with a resolution of 550 × 550 pixels is achieved by enlarging the aperture of the sub-image to 1.6 mm. In addition, the pixel size is set to 2 µm.


Fig. 3. (a) Optimized projection sub-channel; (b) distortion curve, (c) MTF curves, (d) spot diagram of the optimized projection sub-channel.


Figure 3(b) shows the chromatic distortion curves of the optimized projection system. The optimized projection system exhibits a maximum distortion of approximately 8% and a maximum distortion difference of 1.6% in the spectral range of 450 nm to 650 nm. The disparity among the distortion curves at different wavelengths signifies the presence of lateral chromatic aberration, which is characterized by the chromatic difference in magnification within the optical system. The modulation transfer function (MTF) curves and the spot diagram of the optimized projection system are shown in Fig. 3(c) and Fig. 3(d), respectively. The consistent trend of the MTF curves and the small variation in spot size among different field points indicate that the projection sub-channel maintains uniform image quality. While some slight variations exist, the sub-channel delivers consistent and satisfactory performance across the entire FOV.

3.1.2 Acquisition of ray data based on ray tracing

By generating the per-pixel mapping of the integral projection image to each sub-image and performing image preprocessing for all sub-images, a distortion-free and sharp integral projection image with high illumination uniformity can be realized. Theoretically, the per-pixel mapping could be generated by performing ray tracing for all pixels of the SIA. However, this would require tracing billions of rays, given the dozens or even hundreds of sub-channels within the integral projection system. Given the immense computational complexity, it is impractical to trace the entire FOV for all sub-channels.

In this paper, we introduce a computationally efficient approach for acquiring per-pixel ray data for all sub-channels. This approach combines ray tracing of sparsely sampled field points with data mapping methods for the PSF, distortion, and illumination. The acquisition of ray data can be divided into three main steps. First, sparsely sampled field points of the central sub-channel are selected, and ray tracing is performed to obtain the ray data at these field points. Second, the per-pixel mapping for the ray data of the central sub-channel is established. Last, the mapping relationship between all sub-channels is established to transform the ray data of the central sub-channel into the corresponding ray data of the remaining sub-channels.

Figure 4 shows the flowchart of ray tracing process of the central sub-channel. The ray tracing process can be subdivided into three core steps.


Fig. 4. The flowchart of ray tracing process of central sub-channel.


Step 1: a set of field points at the maximum aperture of the sub-image plane is chosen, and forward ray tracing is conducted. In this paper, 'forward ray tracing' refers to the path of ray tracing from the sub-image plane to the projection plane. The maximum projection radius Rp, the largest (inscribed-rectangle) projection area S, and the corresponding maximum projection height $L_\mathrm{p} = R_\mathrm{p}/\sqrt{2}$ are obtained. The largest projection area S is then divided into a uniformly sampled grid of field points, denoted as $\mathbf{M}_{\mathrm{p}}=\left[\mathbf{X}^{\prime}, \mathbf{Y}^{\prime}\right]$, where $\mathbf{X}^{\prime}$ and $\mathbf{Y}^{\prime}$ represent the coordinates of the field points in the X-direction and Y-direction, respectively.

Step 2: the uniformly sampled field points are set as the FOVs for backward ray tracing, and the real ray tracing points on the sub-image plane are obtained. Here, ‘backward ray tracing’ refers to the path of ray tracing from the projection plane to the sub-image plane. The distortion grid on the sub-image plane is defined as Md= [xd, yd], where xd and yd represent the X-coordinate and Y-coordinate of the real ray tracing points, respectively. The ideal grid, M0= [x0, y0], of the sub-image plane is obtained through paraxial ray tracing, where x0 and y0 represent the X-coordinate and Y-coordinate of the ideal ray tracing points, respectively.

Step 3: the distortion grid Md on the sub-image plane is set as the FOV grid for forward ray tracing, which enables the acquisition of the illumination grid E and the spatially varying PSFs h on the projection plane.

Figure 5(a) shows the division of the grid of field points and image patches on the projection plane. To balance high accuracy against computational complexity [33], the largest projection area S is divided into a FOV grid of 11 × 11 field points. Given the rotational symmetry of the projection sub-channel, only the pink field points depicted in Fig. 5(a) are traced. The corresponding ray data is then rotated and flipped to obtain the distortion grid Md (including MdR for the red channel, MdG for the green channel, and MdB for the blue channel), the ideal grid M0 shown in Fig. 5(b), and the illumination grid E shown in Fig. 5(c). It is worth noting that the sampling methodology for acquiring PSF data differs slightly from that used for the distortion and illumination data. A patch-wise strategy is employed for the acquisition of PSFs. The largest projection area S is subdivided into 11 × 11 image patches, over each of which the PSF can be assumed to be spatially invariant due to the uniform optical performance of the optimized sub-channel. The resolution of each image patch is set to 50 × 50 pixels, and the center of each image patch is set as the sampling field point to obtain the corresponding patch-wise PSF. Figure 5(d) shows the patch-wise PSFs h separated by a white border. Each PSF patch shown in Fig. 5(d) corresponds to an image patch shown in Fig. 5(a). The slight difference between adjacent PSF patches demonstrates the uniform image performance of the optical system as well as the feasibility of using the patch-wise rendering method for generating the pre-deblurred sub-image.
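A minimal sketch of this patch-wise treatment is given below: a 550 × 550 px sub-image (matching the resolution given in section 3.1.1) is split into the 11 × 11 grid of 50 × 50 px patches, and each patch is deconvolved with its own PSF. The helper `wiener_deconvolve` refers to the earlier sketch of Eq. (7); the names and the seam-free patch handling are simplifying assumptions.

```python
import numpy as np

def deblur_patchwise(sub_image, psfs, n=11, patch=50):
    """Deconvolve each of the n x n image patches with its own PSF.

    sub_image : (n*patch, n*patch) grayscale sub-image (550 x 550 here).
    psfs      : n x n nested list of PSF arrays, indexed like the patches.
    Reuses wiener_deconvolve() from the earlier sketch of Eq. (7).
    """
    out = np.empty_like(sub_image, dtype=np.float64)
    for r in range(n):
        for c in range(n):
            sl = np.s_[r * patch:(r + 1) * patch,
                       c * patch:(c + 1) * patch]
            out[sl] = wiener_deconvolve(sub_image[sl], psfs[r][c])
    return out
```

A production implementation would typically overlap the patches and blend them to suppress seams at the patch borders.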


Fig. 5. (a) Selection of field points and image patches on projection plane. (b) Ideal (or paraxial) tracing points, real (or distorted) tracing points, and definition of ideal grid M0 and distortion grid Md in normalized coordinate system of sub-image plane. (c) Relative illumination map and definition of illumination grid E, (d) patch-wise PSFs h in normalized coordinate system of projection plane.


The relationship between the ideal grid M0 (on the sub-image plane) and the field grid Mp (on the projection plane) represents the ideal object-image mapping of the optical system, quantified by the magnification M = Mp/M0. The real object-image mapping of the optical system is represented by the relationship between the distortion grid Md and the field grid Mp. The relationship between the projected image height and the corresponding illumination is determined by the field grid Mp and the illumination grid E. To facilitate the subsequent image preprocessing procedures, all the aforementioned relationships are re-expressed with respect to the ideal grid M0: the distortion mapping is constructed from the distortion grid Md and the ideal grid M0, and the illumination mapping is constructed from the illumination grid E and the ideal grid M0. The distortion and illumination data across the entire FOV are obtained through an RBF-based mapping method, which is detailed in section 3.1.3. The patch-wise PSFs are utilized as a substitute for the PSF mapping across the entire FOV.

As all sub-channels of the integral projection system are identical, the optical properties and ray data of each sub-channel remain consistent within a given projection FOV. The offset (decenter) of a sub-channel corresponds to a bias of its effective FOV area. The effective FOV area of a sub-channel can therefore be represented by a transformation of the ideal grid M0 that accounts for both scaling and offset. It is thus sufficient to perform ray tracing for the sampled field points of the central sub-channel and to establish the mapping relationship of the ray data within the entire FOV. By further generating the mapping between the central sub-channel and the other sub-channels, the ray data of all sub-channels can be efficiently acquired. Further details on this process are provided in section 3.3.

3.1.3 RBF-based mapping method

Due to the high numerical accuracy of the RBF-based mapping method for the interpolation of complex scattered data [33,35], the distortion mapping and the illumination mapping over the entire FOV can be represented by a function using a set of multiquadric RBFs as follows:

$$\left[\begin{array}{cc} \begin{bmatrix} \varphi_{11} & \cdots & \varphi_{1\gamma} \\ \vdots & \ddots & \vdots \\ \varphi_{\gamma 1} & \cdots & \varphi_{\gamma\gamma} \end{bmatrix} & \begin{bmatrix} 1 & x_0^1 & y_0^1 \\ \vdots & \vdots & \vdots \\ 1 & x_0^{\gamma} & y_0^{\gamma} \end{bmatrix} \\ \begin{bmatrix} 1 & \cdots & 1 \\ x_0^1 & \cdots & x_0^{\gamma} \\ y_0^1 & \cdots & y_0^{\gamma} \end{bmatrix} & \mathbf{O}_{3\times3} \end{array}\right] \begin{bmatrix} \omega_{x_{\mathrm{d}}}^{1} & \omega_{y_{\mathrm{d}}}^{1} & \omega_{\mathrm{E}}^{1} \\ \vdots & \vdots & \vdots \\ \omega_{x_{\mathrm{d}}}^{\gamma+3} & \omega_{y_{\mathrm{d}}}^{\gamma+3} & \omega_{\mathrm{E}}^{\gamma+3} \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} x_{\mathrm{d}}^{1} & y_{\mathrm{d}}^{1} & E^{1} \\ \vdots & \vdots & \vdots \\ x_{\mathrm{d}}^{\gamma} & y_{\mathrm{d}}^{\gamma} & E^{\gamma} \end{bmatrix} \\ \mathbf{O}_{3\times3} \end{bmatrix}, \quad \text{i.e.,}\quad \begin{bmatrix} \mathbf{\Psi} & \mathbf{P} \\ \mathbf{P}^{\mathrm{T}} & \mathbf{O} \end{bmatrix}\mathbf{W} = \begin{bmatrix} \mathbf{D} \\ \mathbf{O} \end{bmatrix}$$
where matrix Ψ consists of multiquadric RBFs located at the grid points of the ideal grid M0. Matrix P represents a linear function based on the coordinates of the ideal grid M0, where $\left[x_0^1 ; x_0^2 ; \ldots ; x_0^\gamma\right]$ and $\left[y_0^1 ; y_0^2 ; \ldots ; y_0^\gamma\right]$ are the X-coordinate values (x0) and Y-coordinate values (y0) derived from M0, respectively, in vector form. Matrix W stores the desired RBF coefficients, where ωxd and ωyd are the distortion coefficients in the X-direction and Y-direction, and ωE is the illumination coefficient. Matrix D stores the uniformly sampled distortion and illumination data, where $\left[x_{\mathrm{d}}^1 ; x_{\mathrm{d}}^2 ; \ldots ; x_{\mathrm{d}}^\gamma\right]$ and $\left[y_{\mathrm{d}}^1 ; y_{\mathrm{d}}^2 ; \ldots ; y_{\mathrm{d}}^\gamma\right]$ are the X-coordinate values (xd) and Y-coordinate values (yd) derived from Md, respectively, in vector form, and $\left[E^1 ; E^2 ; \ldots ; E^\gamma\right]$ is E in vector form. The distortion data of the different color channels are not written out separately, to keep the equation simple. In addition, γ is the number of sampling points, set to 121 in this paper.

To improve the numerical accuracy and stability and to ensure the smoothness of the ray data over the entire FOV, a regularization term is incorporated into the RBF. The multiquadric RBF φij, stored in the RBF matrix Ψ, is expressed as:

$$\varphi_{ij} = \varphi(R_{ij}) = \begin{cases} 0, & R_{ij} = 0 \\ \sqrt{R_{ij}^{2} + \lambda \hat{r}^{2}}, & R_{ij} \neq 0 \end{cases}$$
$$R_{ij} = \left\| \left(x_0^i, y_0^i\right) - \left(x_0^j, y_0^j\right) \right\|, \quad i, j \in \{1, 2, \cdots, \gamma\}$$
where $x_0^i$ and $y_0^i$ are the coordinates of the i-th grid point of the ideal grid M0 in the X-direction and Y-direction, respectively. The regularization coefficient λ is set to 0.815, $\hat{r}$ is the minimum Euclidean distance between the ideal grid points, and ||·|| denotes the Euclidean distance.

The RBF coefficient matrix W, which characterizes both the distortion mapping (between the ideal grid M0 and the distortion grid Md) and the illumination mapping (between the ideal grid M0 and the illumination grid E), is determined by solving Eq. (8). The distortion mapping and illumination mapping represented in Eq. (8) can then be reformulated as explicit functions:

$$ \left\{\begin{array}{l} x_{\mathrm{d}}^*=\sum_{i=1}^\gamma \omega_{\mathrm{x}_d}^i \varphi\left(\left\|\left(x_0^i, y_0^i\right)-\left(x_0^*, y_0^*\right)\right\|\right)+\omega_{\mathrm{x}_{\mathrm{d}}}^{\gamma+1}+\omega_{\mathrm{x}_{\mathrm{d}}}^{\gamma+2} x_0^*+\omega_{\mathrm{x}_{\mathrm{d}}}^{\gamma+3} y_0^* \\ y_{\mathrm{d}}^*=\sum_{i=1}^\gamma \omega_{\mathrm{y}_{\mathrm{d}}}^i \varphi\left(\left\|\left(x_0^i, y_0^i\right)-\left(x_0^*, y_0^*\right)\right\|\right)+\omega_{\mathrm{y}_{\mathrm{d}}}^{\gamma+1}+\omega_{\mathrm{y}_{\mathrm{d}}}^{\gamma+2} x_0^*+\omega_{\mathrm{y}_{\mathrm{d}}}^{\gamma+3} y_0^* \\ E^*=\sum_{i=1}^\gamma \omega_{\mathrm{E}}^i \varphi\left(\left\|\left(x_0^i, y_0^i\right)-\left(x_0^*, y_0^*\right)\right\|\right)+\omega_{\mathrm{E}}^{\gamma+1}+\omega_{\mathrm{E}}^{\gamma+2} x_0^*+\omega_{\mathrm{E}}^{\gamma+3} y_0^* \end{array}\right. $$

Once the RBFs and the corresponding coefficients have been determined, the distorted-point coordinates $\left[x_{\mathrm{d}}^*, y_{\mathrm{d}}^*\right]$ for the three color channels and the illumination E* can be obtained by substituting the coordinates $\left[x_0^*, y_0^*\right]$ of arbitrary field points within the entire FOV into Eq. (11).
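A compact sketch of Eqs. (8)-(11) follows: it assembles the multiquadric system for one data column (xd, yd, or E), solves for the γ + 3 coefficients, and evaluates the resulting mapping at arbitrary field points. The function names are ours, and solving each column separately is an implementation choice (the matrix in Eq. (8) is shared by all three columns).

```python
import numpy as np

def fit_rbf_mapping(x0, y0, values, lam=0.815):
    """Solve Eq. (8) for one column of W (x_d, y_d, or E).

    x0, y0 : coordinates of the gamma ideal grid points (1-D arrays).
    values : sampled data at those points (x_d, y_d, or E values).
    lam    : regularization coefficient of Eq. (9) (0.815 in the paper).
    """
    pts = np.column_stack([x0, y0])
    R = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    r_hat = R[R > 0].min()                    # minimum grid spacing
    Psi = np.sqrt(R ** 2 + lam * r_hat ** 2)  # multiquadric RBF, Eq. (9)
    np.fill_diagonal(Psi, 0.0)                # phi = 0 when R_ij = 0
    g = len(x0)
    P = np.column_stack([np.ones(g), x0, y0])
    A = np.block([[Psi, P], [P.T, np.zeros((3, 3))]])
    d = np.concatenate([values, np.zeros(3)])
    return np.linalg.solve(A, d), pts, r_hat  # gamma + 3 coefficients

def eval_rbf_mapping(w, pts, r_hat, xq, yq, lam=0.815):
    """Evaluate the explicit mapping of Eq. (11) at query points."""
    q = np.column_stack([np.atleast_1d(xq), np.atleast_1d(yq)])
    R = np.linalg.norm(q[:, None, :] - pts[None, :, :], axis=-1)
    Phi = np.where(R == 0, 0.0, np.sqrt(R ** 2 + lam * r_hat ** 2))
    g = pts.shape[0]
    return Phi @ w[:g] + w[g] + w[g + 1] * q[:, 0] + w[g + 2] * q[:, 1]
```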

3.2 Sub-channel arrangement

To reduce the non-overlapping area and maximize the common area on the projection plane [16], the sub-channels are close-packed in a hexagonal arrangement, as shown in Fig. 6. The sub-channels in the hexagonal arrangement can be divided into two rectangular MLA arrangements: the light blue central-column sub-channels, represented by MLAc, and the yellow side-column sub-channels, represented by MLAb. The rectangular arrangement of MLAc and MLAb facilitates the definition of the sub-channel locations in matrix form. MLAc(i, j) and MLAb(i, j) denote the sub-channels in the i-th row and j-th column of the central-column and side-column sub-channels, respectively. The central sub-channel of the integral projection system, denoted MLAc(ic, jc), is placed at the origin of the global coordinate system. The center of the side-column sub-channels, MLAb(ic, jc), is positioned to the upper right of MLAc(ic, jc). The positions of the remaining sub-channels are determined from their offsets relative to the central sub-channel.


Fig. 6. Sketch of proposed hexagonally arranged sub-channels. The central-column sub-channels (MLAc) and the side-column sub-channels (MLAb) are represented by light blue and yellow discs, respectively. (a) Cross section of the sub-channels in the XOY plane, and definition of XDEc and YDEc. (b) Cross section of the central sub-channel and six adjacent sub-channels, and definition of pitches and intervals (px, py, Δx and Δy).


Figure 6(b) shows the cross section of the central sub-channel and its six adjacent sub-channels. D is the maximum aperture of the sub-channel; the minimum pitch of MLAc in the X-direction is px = $\sqrt{3}D$, the minimum pitch of MLAc in the Y-direction is py = D, the interval between MLAc and MLAb in the X-direction is Δx = $\sqrt{3}D/2$, and the interval between MLAc and MLAb in the Y-direction is Δy = D/2. The positions of the sub-channels of MLAc and MLAb can be represented using the offset matrices Dc and Db as follows:

$$\mathbf{D}_\mathrm{c}(i,j) = \begin{bmatrix} \mathbf{XDE}_\mathrm{c}(i,j) \\ \mathbf{YDE}_\mathrm{c}(i,j) \end{bmatrix} = \begin{bmatrix} p_\mathrm{x}(j - j_c) \\ p_\mathrm{y}(i_c - i) \end{bmatrix} = \begin{bmatrix} \sqrt{3}\,(j - j_c)D \\ (i_c - i)D \end{bmatrix}$$
$$\mathbf{D}_\mathrm{b}(i,j) = \mathbf{D}_\mathrm{c}(i,j) + \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} \sqrt{3}\,D(j - j_c + 1/2) \\ D(i_c - i + 1/2) \end{bmatrix}$$
where XDEc(i, j) and YDEc(i, j) are the offset values in the X-direction and Y-direction, respectively, of MLAc(i, j) relative to the central sub-channel.
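For illustration, the offset matrices of Eqs. (12) and (13) reduce to a few lines of array arithmetic. In the sketch below, the function names and the aperture value D are placeholders; the array shapes follow the design example of section 4 (MLAc is 9 × 5 and MLAb is 8 × 4).

```python
import numpy as np

def Dc(rows, cols, ic, jc, D):
    """Offset matrices (XDEc, YDEc) of the central-column MLA, Eq. (12)."""
    i = np.arange(1, rows + 1)[:, None]    # row index i
    j = np.arange(1, cols + 1)[None, :]    # column index j
    XDEc = np.broadcast_to(np.sqrt(3) * (j - jc) * D, (rows, cols))
    YDEc = np.broadcast_to((ic - i) * D, (rows, cols))
    return XDEc, YDEc

def Db(rows, cols, ic, jc, D):
    """Offset matrices of the side-column MLA, Eq. (13):
    D_b = D_c + [sqrt(3) D / 2, D / 2]."""
    XDEc, YDEc = Dc(rows, cols, ic, jc, D)
    return XDEc + np.sqrt(3) * D / 2, YDEc + D / 2

# Design example of section 4: MLAc is 9 x 5, MLAb is 8 x 4; the central
# sub-channel MLAc(5, 3) sits at the origin. D is a placeholder aperture.
XDEc, YDEc = Dc(9, 5, ic=5, jc=3, D=2.0)
XDEb, YDEb = Db(8, 4, ic=5, jc=3, D=2.0)
```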

3.3 Analysis of projection area and data mapping of all sub-channels

In order to obtain a high-precision superimposed integral projection image with a uniform illumination distribution on the projection plane, the non-overlapping area of the projection images caused by offsets of the sub-channels will be dropped. The overlapping area of the largest projection areas of all sub-channels is set as the common projection area St:

$$ \left\{\begin{array}{l} S_{\mathrm{t}}=\bigcap_{n=1}^N S_n=\bigcap_{i, j} S_{\mathrm{c}}(i, j) \bigcap_{u, v} S_{\mathrm{b}}(u, v) \\ S_{\mathrm{t}} \subsetneq \forall S_n \quad n \in\{1,2, \ldots, N\} \end{array}\right. $$
where St represents the common projection area, Sn represents the largest (inscribed-rectangle) projection area of the n-th sub-channel on the projection plane, N represents the total number of sub-channels in the integral projection system, and Sc(i, j) and Sb(u, v) represent the largest projection areas of MLAc(i, j) and MLAb(u, v) on the projection plane, respectively.

The common projection area St of the symmetrically arranged integral projection system can be further represented as:

$${S_\textrm{t}} = ({{X_{\textrm{t + }}} - {X_{\textrm{t - }}}} )({{Y_{\textrm{t + }}} - {Y_{\textrm{t - }}}} )$$
$$\begin{bmatrix} X_{\mathrm{t}+} \\ Y_{\mathrm{t}+} \end{bmatrix} = L_{\mathrm{p}} - \begin{bmatrix} \mathbf{XDE}_{\mathrm{c}}(i_{\min}, j_{\max}) \\ \mathbf{YDE}_{\mathrm{c}}(i_{\min}, j_{\max}) \end{bmatrix} = L_{\mathrm{p}} - \mathbf{D}_{\mathrm{c}}(i_{\min}, j_{\max})$$
$$\begin{bmatrix} X_{\mathrm{t}-} \\ Y_{\mathrm{t}-} \end{bmatrix} = -L_{\mathrm{p}} - \begin{bmatrix} \mathbf{XDE}_{\mathrm{c}}(i_{\max}, j_{\min}) \\ \mathbf{YDE}_{\mathrm{c}}(i_{\max}, j_{\min}) \end{bmatrix} = -L_{\mathrm{p}} - \mathbf{D}_{\mathrm{c}}(i_{\max}, j_{\min})$$
where Xt+ and Yt+ are the maximum projected image heights of the common projection area St in the positive X-direction and Y-direction, Xt- and Yt- are the maximum projection image heights of the common projection area in the negative X-direction and Y-direction.

Figure 7 shows the tailoring methods for the common projection area on the projection plane and the effective FOV area of MLAc(imin, jmin) on the sub-image plane. The central sub-channel and the four edge sub-channels, drawn in different colors, are used to demonstrate the tailoring method. The green boundary contour and its internal area denote Sc(imin, jmin), the blue boundary contour and its internal area denote Sc(imin, jmax), the red boundary contour and its internal area denote Sc(imax, jmin), the pink boundary contour and its internal area denote Sc(imax, jmax), the black boundary contour and its internal area denote Sc(ic, jc), and the black dashed circle denotes the envelope curve of the maximum FOVs of the central sub-channel on the projection plane. The common projection area is the intersection region of all sub-channels and is represented by the sky-blue region on the projection plane. The offsets between the centers of the largest projection areas of the sub-channels and the center of the common projection area are represented by the offset matrices Dc and Db.


Fig. 7. Tailoring method for common projection area and effective FOV area of the projection sub-channel. The sky-blue area on the projection plane represents the common projection area. In the upper-left corner of the figure, the sky-blue area on the sub-image plane is the effective FOV area of MLAc(imin, jmin) corresponding to the common projection area.


An enlarged view of MLAc(imin, jmin) is provided in the upper-left corner of Fig. 7. The green dashed circle represents the envelope curve of the maximum FOVs of MLAc(imin, jmin) on the sub-image plane. The green border represents the largest inscribed-rectangle FOV of MLAc(imin, jmin) on the sub-image plane. The effective FOV area of MLAc(imin, jmin) is depicted as the sky-blue region on the sub-image plane, enclosed by the red and blue solid lines. The effective FOV area and the common projection area correspond to the object and image, respectively, in the ideal object-image correspondence. The effective FOV area of the sub-image plane is subsequently divided into a grid of field points, defined as the effective ideal grid $\mathbf{M}_0^*\left(i_{\min }, j_{\min }\right)$ corresponding to MLAc(imin, jmin). The rectangular region enclosed by the red and blue dash-dot lines represents the effective FOV area of the central sub-channel MLAc(ic, jc), which can be divided and defined as the effective ideal grid $\mathbf{M}_0^*\left(i_c, j_c\right)$. The field bias of MLAc(imin, jmin) with respect to MLAc(ic, jc) is represented as dc(imin, jmin). The effective ideal grid $\mathbf{M}_0^*$, resulting from the tailoring and offsetting, is denoted as:

$$ \left\{\begin{array}{l} \mathbf{M}_0^*\left(i_c, j_c\right)=\left[\mathbf{x}_0^*, \mathbf{y}_0^*\right]=\left[\dfrac{X_{\mathrm{t}+}}{L_{\mathrm{p}}} \mathbf{x}_0,\; \dfrac{Y_{\mathrm{t}+}}{L_{\mathrm{p}}} \mathbf{y}_0\right] \\ \mathbf{M}_0^*(i, j)=\mathbf{M}_0^*\left(i_c, j_c\right)+\mathbf{d}_{\mathrm{c}}(i, j)=\mathbf{M}_0^*\left(i_c, j_c\right)+\dfrac{\mathbf{M}_0}{\mathbf{M}_{\mathrm{p}}} \mathbf{D}_{\mathrm{c}}(i, j) \end{array}\right. $$
where $\mathbf{x}_0^*$ and $\mathbf{y}_0^*$ are the coordinates of effective ideal grid $\mathbf{M}_0^*\left(i_c, j_c\right)$ in the X-direction and Y-direction, respectively; $\mathbf{M}_0^*(i, j)$ is the effective ideal grid of MLAc(i, j) on the sub-image plane, and dc(i, j) represents the field bias of MLAc(i, j) on the sub-image plane. The effective ideal grid of MLAb(i, j) can be obtained by substituting Db(i, j) for Dc(i, j) in Eq. (18).

The common projection area $S_\mathrm{t}$ ($S_\mathrm{t} \subsetneq S$) is a tailored (cropped and biased) version of the largest projection area of the central sub-channel on the projection plane. The effective ideal grid $\mathbf{M}_0^*$ ($\mathbf{M}_0^* \subsetneq \mathbf{M}_0$) is likewise a tailored FOV area of the entire FOV area on the sub-image plane. Therefore, once the effective ideal grid $\mathbf{M}_0^*$ of an arbitrary sub-channel is calculated using Eq. (18), the corresponding distortion grid $\mathbf{M}_{\mathrm{d}}^*=\left[\mathbf{x}_{\mathrm{d}}^*, \mathbf{y}_{\mathrm{d}}^*\right]$ and illumination grid E* can be readily obtained from the RBF-based mapping function of Eq. (11). By establishing the mapping relationships between all sub-channels using the ideal grid and offset matrices, the ray data of all sub-channels can be efficiently obtained without redundant ray tracing and data interpolation for each individual channel. The computational complexity of the mapping-based design method is 1/N that of the design method based on offset addressing and chief ray tracing in the previous studies [16,18,32], where N is the total number of sub-channels.
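The tailoring of Eq. (18), combined with the RBF evaluation of Eq. (11), is what replaces per-channel ray tracing. A minimal sketch, with our own function name and grid conventions, is:

```python
import numpy as np

def effective_ideal_grid(x0, y0, Lp, Xt_pos, Yt_pos, dcx, dcy, M):
    """Effective ideal grid M0*(i, j) of one sub-channel, Eq. (18).

    x0, y0         : full-FOV ideal grid of the central sub-channel.
    Lp             : maximum projection height of a sub-channel.
    Xt_pos, Yt_pos : positive extents of the common area, Eqs. (16)-(17).
    dcx, dcy       : offset D_c(i, j) of the sub-channel, Eq. (12).
    M              : sub-channel magnification M = M_p / M_0, so the
                     field bias on the sub-image plane is D_c / M.
    """
    # Crop the grid to the common projection area, then shift it by the
    # sub-channel's field bias d_c = D_c / M.
    return (Xt_pos / Lp) * x0 + dcx / M, (Yt_pos / Lp) * y0 + dcy / M
```

Feeding the tailored grid into `eval_rbf_mapping` from the earlier sketch then yields the distortion grid Md* and illumination grid E* of that sub-channel without any additional ray tracing.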

3.4 Sub-image preprocessing

As described in section 2.2, the degradation of the integral projection image is influenced by both the individual sub-channels and their collective contribution. Therefore, it is crucial to generate a sharp, clear, and undistorted projection image with uniform illumination across the entire FOV for each sub-channel, while also ensuring high precision in the image integration process. To address the effects of misalignment, blurring, illumination attenuation, and distortion of the integral projection image, our preprocessing pipeline is designed with four core steps that correspond to the image formation model in section 2. The digital preprocessing is exemplified using the sub-image SI(imin, jmin) of MLAc(imin, jmin), and the main processing steps are outlined as follows.

  • (1) Affine transformation of the original image based on the ideal object-image mapping. The ideal grid M0 is resized to match the dimensions of the original image and then converted to the pixel coordinate system to generate a per-pixel mapping from the ideal grid M0 to the original image. The effective ideal grid $\mathbf{M}_0^*\left(i_{\min }, j_{\min }\right)$ represents a tailored FOV area of M0, as shown in Fig. 8(a). The preprocessed sub-image, corresponding to $\mathbf{M}_0^*\left(i_{\min }, j_{\min }\right)$, is obtained by scaling, rotating, offsetting, and padding the original image. Please note that the original image is rotated by 180 degrees to account for the negative magnification. The preprocessed SI(imin, jmin) after the affine transformation and its corresponding projected image are shown at the top and bottom of Fig. 8(b), respectively. The projected image showcases the impact of optical aberrations and inhomogeneous illumination.
  • (2) Pre-deblurring of sub-image based on PSFs. The preprocessed sub-image, obtained through the affine transformation, represents the ideal object of the projection sub-channel, while the largest projection area on the projection plane is the corresponding image. By adopting a similar division strategy employed for discretizing the largest projection area, the SI(imin, jmin) is subdivided into 11 × 11 image patches whose indices correspond to the patch-wise PSFs shown in Fig. 8(c). The pre-deblurred sub-image is obtained through learning-based Wiener deconvolution, as denoted by Eq. (7), where the image patches and their corresponding PSFs are utilized as substitutes for ip and h, respectively. The preprocessed SI(imin, jmin) after deblurring is shown at the top of Fig. 8(d). Please note that the pre-deblurred image inherently appears sharper than the original image, although the original image itself is already sharp. As a result, the projected image after deblurring, depicted at the bottom of Fig. 8(d), is sharper compared to the projected image without the deblurring process.
  • (3) Illumination pre-compensation of the sub-image based on the illumination grid. The ideal grid M0, which has the same size as the sub-image, is substituted into the illumination mapping function described in Eq. (11) to generate the relative illumination map, as shown in Fig. 5(c). The illumination map is then reversed to obtain the pre-compensated illumination map of the entire FOV, denoted as Er = min(E)/E. The preprocessed sub-image obtained in step (2) is then multiplied by the normalized pre-compensated illumination map (as shown in Fig. 8(e)). The resulting preprocessed SI(imin, jmin) after illumination compensation and its corresponding projected image are shown at the top and bottom of Fig. 8(f), respectively. The illumination uniformity is improved across the entire FOV after applying illumination compensation.
  • (4) Pre-warping of the sub-image based on the distortion grid. After executing the aforementioned procedures, the resulting projected image still exhibits distortion. To achieve a pre-warped image without pixel loss and aliasing effects, the effective ideal grid $\mathbf{M}_0^*$ is rescaled to a size 1.2 times larger than the sub-image size. The oversampled $\mathbf{M}_0^*$ is then substituted into the distortion mapping function described in Eq. (11) to generate a dense distortion grid $\mathbf{M}_{\mathrm{d}}^*\left(i_{\min }, j_{\min }\right)$ for each color channel. Figure 8(g) illustrates a simplified version of the oversampled grid. The pre-warped sub-image is obtained by applying the RBF image warping method [32,33] using the oversampled distortion grid (a minimal sketch of steps (3) and (4) follows this list). The preprocessed SI(imin, jmin) after pre-warping is shown at the top of Fig. 8(h). The distortion of the projected image is effectively corrected after applying image warping, as depicted at the bottom of Fig. 8(h).
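To make steps (3) and (4) concrete, the following sketch applies illumination pre-compensation and distortion pre-warping to one sub-image. It is an illustration under stated assumptions, not the authors' implementation: scattered-data interpolation (scipy's griddata) with nearest-pixel resampling stands in for the RBF image warping of [32,33], and the function names and normalized [-1, 1] grid convention are ours.

```python
import numpy as np
from scipy.interpolate import griddata

def precompensate_illumination(img, E):
    """Step (3): multiply by the reversed illumination map E_r = min(E)/E.
    `E` is the relative illumination map resampled to the image size."""
    Er = E.min() / E
    return img * (Er[..., None] if img.ndim == 3 else Er)

def prewarp(img, x0, y0, xd, yd, out_shape):
    """Step (4): build the pre-warped sub-image from the ideal grid
    (x0, y0) and the distortion grid (xd, yd), both in normalized
    [-1, 1] coordinates. For each output (distorted) pixel, the inverse
    mapping gives the ideal-object coordinate whose value it should
    carry; nearest-pixel sampling keeps the sketch short.
    """
    h, w = out_shape
    gx, gy = np.meshgrid(np.linspace(-1, 1, w), np.linspace(-1, 1, h))
    src_x = griddata((xd.ravel(), yd.ravel()), x0.ravel(), (gx, gy))
    src_y = griddata((xd.ravel(), yd.ravel()), y0.ravel(), (gx, gy))
    ix = np.nan_to_num((src_x + 1) / 2 * (img.shape[1] - 1))
    iy = np.nan_to_num((src_y + 1) / 2 * (img.shape[0] - 1))
    ix = np.clip(ix, 0, img.shape[1] - 1).astype(int)
    iy = np.clip(iy, 0, img.shape[0] - 1).astype(int)
    return img[iy, ix]
```

In practice the interpolation would be replaced by the RBF warping of Eq. (11), and bilinear resampling on the oversampled grid would avoid the aliasing that nearest-pixel sampling can introduce.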


Fig. 8. Sub-image preprocessing flowchart for the edge sub-channel, MLAc(imin, jmin). (a) Effective projection FOV area for MLAc(imin, jmin). (b) Preprocessed sub-image after affine transformation (top) and its corresponding projected image (bottom). (c) Patch-wise PSFs. (d) Preprocessed sub-image after deblurring (top) and its corresponding projected image (bottom). (e) Pre-compensated illumination map. (f) Preprocessed sub-image after illumination compensation (top) and its corresponding projected image (bottom). (g) Oversampled ideal grid and distortion grid. (h) Preprocessed sub-image after warping (top) and its corresponding projected image (bottom).


After completing the preprocessing pipeline described above for each sub-image, the preprocessed SIA is generated based on the arrangement of sub-channels.

4. Simulations and assessments

In this paper, an integral projection system, consisting of 77 hexagonally arranged sub-channels, is presented and simulated in LightTools, as shown in Fig. 9. In addition, MLAc and MLAb consist of 9 × 5 and 8 × 4 sub-channels, respectively. The proposed system is analyzed and evaluated from three perspectives: optical distortion, illumination uniformity, and image quality.


Fig. 9. (a) Ray tracing simulation of the proposed integral projection system in LightTools. (b) Simulation result of the integral projection image corresponding to figure (d). (c) Split structure of the proposed integral projection system composed of 77 hexagonally arranged sub-channels. (d) Preprocessed sub-image array. (e) Stop array.


As shown in Fig. 9(a), a ray tracing simulation of the proposed integral projection system was conducted. Figure 9(b) shows the illumination simulation result: a projection size of 650 × 650 mm is realized at a projection distance of 570 mm. The edge regions of the integral projection image show no significant deterioration compared to the central region, owing to our optimization strategy incorporating the automatic image performance balance algorithm [38]. Figure 9(c) shows the LightTools simulation of the split structure of the proposed integral projection system. The integral projection system is composed of several key components: a CMLA, PMLAs comprising four aspherical MLAs, a preprocessed SIA, and a stop array. The light source adopted in the design example is a Lambertian planar extended source with a divergence angle of 40 degrees. Please note that, for the convenience and fairness of evaluating the final projected image, the power of the light source was adjusted to match the dynamic range of the original image (0-255) for each color channel. To improve light efficiency and suppress stray light, the numerical aperture (NA) of the CMLA is set to match the NA of the projection sub-channel. Figure 9(d) shows the preprocessed SIA corresponding to Fig. 9(b). The stop array, depicted in Fig. 9(e), eliminates crosstalk stray light between adjacent sub-channels.

The optical distortion and the illumination uniformity of the proposed system are evaluated through ray tracing simulation with a checkerboard image. As shown in Fig. 10(a), the ray tracing simulation of the proposed integral projection system was conducted using a checkerboard SIA after pre-warping and illumination pre-compensation. Figure 10(b) shows the distortion analysis and the slice analysis of the illumination distribution on the corresponding integral projection image. The distortion of the integral projection image is less than 0.42%, and the illumination uniformity is greater than 99.07%, which verifies the effectiveness of both the image warping method and the illumination compensation method.


Fig. 10. (a) Ray tracing simulation of the proposed integral projection system using a SIA with checkerboard pattern. (b) Distortion analysis based on checkerboard corner point detection, and slice analysis of illumination distribution.


To demonstrate the effectiveness of the proposed design method for the integral projection system and the preprocessing model for sub-images, extensive simulations are conducted using preprocessed SIAs at various stages of the processing pipeline. In this paper, eight textured images from the DIV2K dataset [41] are provided as examples and utilized for both qualitative comparison and quantitative analysis, as shown in Fig. 11 and Fig. S3 (see Supplement 1 for four additional simulation results). To ensure a fair and convenient comparison of image quality, all SIAs used for ray tracing simulations undergo preprocessing with illumination compensation, and the dynamic range of the projected images is adjusted to be equal to that of the original images.


Fig. 11. Integral projection images after ideal object-image mapping (first column), after ideal object-image mapping and pre-warping (second column), after ideal object-image mapping, pre-warping, and pre-deblurring (third column), and the original images (last column). Note that the illumination distributions of the corresponding SIAs of all integral projection images are pre-compensated. The PSNR (P) and SSIM (S) of each projected image are given.


The first column of Fig. 11 shows the simulation results of the integral projection image after applying only the ideal object-image mapping, illustrating the blur and image distortion caused by optical aberrations. The second column of Fig. 11 shows the integral projection image after applying the ideal object-image mapping and pre-warping. By adopting the high-precision RBF-based image warping method, the image degradation caused by the distortion of the sub-channels and their integral effect is effectively corrected. The third column of Fig. 11 shows the integral projection image after applying the ideal object-image mapping, pre-warping, and pre-deblurring. The textures in the integral projection image with the pre-deblurring process are better preserved than those in the projection image without it. Note that details in the results, such as the skin of the elephant in the first row, the relief on the palace in the second row, the stripes of the fish in the third row, and the patterns on the sail in the last row, are well preserved and closely resemble the corresponding regions in the original images.

In this paper, the image quality of the integral projection image is assessed using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). PSNR is used as a metric to evaluate the numerical difference between the projected images and the original image. SSIM captures three key characteristics, luminance, contrast, and structure [42], making it suitable for evaluating the integral projection images. The quantitative comparisons between each integral projection image and the corresponding original image are presented at the top of each projected image, as shown in Fig. 11 and Fig. S4. The effectiveness of our design method and preprocessing model is validated by improvements in three respects: distortion, illumination uniformity, and image quality (measured by the mean PSNR and SSIM of the eight selected textured images), as summarized in Table 1. The quantitative comparisons demonstrate that our method reduces the distortion to 0.42%, achieves an illumination uniformity of 99.07%, and improves the PSNR by 26.58% and the SSIM by 36.31%. Moreover, to validate the manufacturability of our design example and the robustness of our preprocessing method, we provide tolerance analysis and simulations for the integral projection system with fabrication and alignment errors. Additional comparisons and simulation results are available in Supplement 1.


Table 1. The quantitative results of the integral projection images

5. Conclusion and prospect

In this paper, we proposed a general design method for an integral projection system, encompassing both optical design and digital image preprocessing based on the per-pixel mapping of the entire system. In the optical design stage, a Galilean telescope structure and aspherical surfaces are introduced in the projection sub-channel to realize a large FOV, high luminous flux, and uniform optical performance across the FOV. In the digital image preprocessing stage, the image deterioration of the integral projection system is analyzed, and a mapping-based sub-image preprocessing model is proposed. The per-pixel mapping between the sub-images and the integral projection image is established through a two-step process: (1) generating the mapping between the sub-image of the central sub-channel and the integral projection image through ray tracing and RBF interpolation, and (2) generating the mapping between the central sub-channel and the remaining sub-channels through effective FOV area tailoring and an RBF-based mapping method. By leveraging the per-pixel mapping relationship, a four-step preprocessing pipeline (affine transformation, deblurring, illumination compensation, and image warping), followed by SIA generation, is employed to mitigate the image degradation caused by individual sub-channels and their integral effect. As a result, a distortion-free, uniformly illuminated, and high-quality integral projection system is realized. A design example of an integral projection imaging system with a full FOV of 80°, an F-number of 2, and uniform image performance is presented. The ray tracing simulation results and quantitative assessments show that the image distortion is less than 0.42%, the overall illumination uniformity exceeds 99.07%, and the image quality is improved by more than 26.58% in PSNR and 36.31% in SSIM.
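To make the order of these preprocessing steps concrete, a minimal single-channel sketch is given below. It is an illustration under simplifying assumptions rather than the authors' implementation: a single global PSF replaces the patch-wise PSFs, and tform, psf, illum_map, and warp_coords are hypothetical placeholders standing in for the per-pixel mapping data generated in the design stage.

import numpy as np
from skimage.transform import warp, AffineTransform

def wiener_prefilter(img, psf, lam=1e-2):
    # Frequency-domain Wiener pre-deconvolution with regularization lam,
    # applied to the sub-image before projection so that the optical PSF
    # blurs it back toward the target.
    H = np.fft.fft2(psf, s=img.shape)
    I = np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.conj(H) * I / (np.abs(H) ** 2 + lam)))

def preprocess_sub_image(target, tform: AffineTransform, psf, illum_map, warp_coords):
    # target: normalized grayscale target image; warp_coords: precomputed
    # inverse-distortion coordinate grid of shape (2, rows, cols).
    img = warp(target, tform.inverse)             # (1) affine transformation
    img = wiener_prefilter(img, psf)              # (2) pre-deblurring
    img = img / np.clip(illum_map, 1e-3, None)    # (3) illumination compensation
    img = warp(img, warp_coords)                  # (4) RBF-based pre-warping
    return np.clip(img, 0.0, 1.0)

The SIA would then be assembled by tiling the pre-processed sub-images of all sub-channels according to the MLA arrangement, with the effective FOV area of each edge sub-channel tailored as described in the text.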

This paper presents a comprehensive design framework for an integral projection system as well as efficient mapping methods for both ray data acquisition and image preprocessing. Compared with our previous design method based on offset addressing, chief ray tracing, and RBF image warping, the mapping-based design method proposed in this paper significantly reduces the computational costs associated with redundant ray tracing and data fitting for each individual sub-channel. For the design example with 77 sub-channels presented in this paper, the computational costs are reduced to approximately 1.3% of those of the method proposed in [16], which is roughly the 1/77 ratio expected when rays are traced for only the central sub-channel. The mapping-based design method achieves an integral projection system with a large FOV, high light throughput, and high imaging quality simultaneously, as the optical properties of the projection system are taken into account throughout the design process.

Several aspects require further investigation, such as the parallel processing of sub-images, potential artifacts introduced during image preprocessing, and the alignment of compound MLAs. In the future, we will apply a joint-optimization framework to design the projection imaging system and achieve real-time high-dynamic-range projection. We will also explore further improvements in the optical design, including freeform MLAs and curved-substrate MLAs, to achieve a larger projection FOV, higher luminous flux, and a more compact structure.

Funding

National Key Research and Development Program of China (2021YFB2802100); Beijing Municipal Science & Technology Commission, Administrative Commission of Zhongguancun Science Park (Z221100006722011).

Acknowledgments

We would like to thank Synopsys for providing education licenses for CODE V and LightTools.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. C. Sun, N. Fang, D. M. Wu, et al., “Projection micro-stereolithography using digital micro-mirror dynamic mask,” Sens. Actuators, A 121(1), 113–120 (2005). [CrossRef]  

2. D. Wu, Q. Chen, L. Niu, et al., “100% fill-factor aspheric microlens arrays (AMLA) with sub-20-nm precision,” IEEE Photonics Technol. Lett. 21(20), 1535–1537 (2009). [CrossRef]  

3. S. Luan, F. Peng, G. Zheng, et al., “High-speed, large-area and high-precision fabrication of aspheric micro-lens array based on 12-bit direct laser writing lithography,” Light: Adv. Manufac. 3(4), 1 (2022). [CrossRef]  

4. C. Chiu and Y. Lee, “Excimer laser micromachining of aspheric microlens arrays based on optimal contour mask design and laser dragging method,” Opt. Express 20(6), 5922–5935 (2012). [CrossRef]  

5. C. Chiu and Y. Lee, “Fabricating of aspheric micro-lens array by excimer laser micromachining,” Opt. Lasers Eng. 49(9-10), 1232–1237 (2011). [CrossRef]  

6. M. Wang, W. Yu, T. Wang, et al., “A novel thermal reflow method for the fabrication of microlenses with an ultrahigh focal number,” RSC Adv. 5(44), 35311–35316 (2015). [CrossRef]  

7. J. Zhu, M. Li, J. Qiu, et al., “Fabrication of high fill-factor aspheric microlens array by dose-modulated lithography and low temperature thermal reflow,” Microsyst. Technol. 25(4), 1235–1241 (2019). [CrossRef]  

8. X. Xiao, B. Javidi, M. Martinez-Corral, et al., “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52(4), 546–560 (2013). [CrossRef]  

9. J. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef]  

10. B. Javidi, A. Carnicer, J. Arai, et al., “Roadmap on 3D integral imaging: sensing, processing, and display,” Opt. Express 28(22), 32266–32293 (2020). [CrossRef]  

11. Z. Lv, J. Li, Y. Yang, et al., “3D head-up display with a multiple extended depth of field based on integral imaging and holographic optical elements,” Opt. Express 31(2), 964–975 (2023). [CrossRef]  

12. Z. Yan, X. Yan, Y. Huang, et al., “Characteristics of the holographic diffuser in integral imaging display systems: A quantitative beam analysis approach,” Opt. Lasers Eng. 139, 106484 (2021). [CrossRef]  

13. X.-L. Ma, H.-L. Zhang, R.-Y. Yuan, et al., “Depth of field and resolution-enhanced integral imaging display system,” Opt. Express 30(25), 44580–44593 (2022). [CrossRef]  

14. X. Wang and H. Hua, “Depth-enhanced head-mounted light field displays based on integral imaging,” Opt. Lett. 46(5), 985–988 (2021). [CrossRef]  

15. M. Sieler, P. Schreiber, P. Dannberg, et al., “Ultraslim fixed pattern projectors with inherent homogenization of illumination,” Appl. Opt. 51(1), 64–74 (2012). [CrossRef]  

16. Y. Liu, D. Cheng, T. Yang, et al., “High precision integrated projection imaging optical design based on microlens array,” Opt. Express 27(9), 12264–12281 (2019). [CrossRef]  

17. M. Sieler, S. Fischer, P. Schreiber, et al., “Microoptical array projectors for free-form screen applications,” Opt. Express 21(23), 28702–28709 (2013). [CrossRef]  

18. Y. Liu, D. Cheng, T. Yang, et al., “Ultra-thin multifocal integral LED-projector based on aspherical microlens arrays,” Opt. Express 30(2), 825–845 (2022). [CrossRef]  

19. X. Wang and H. Hua, “Design of a digitally switchable multifocal microlens array for integral imaging systems,” Opt. Express 29(21), 33771–33784 (2021). [CrossRef]  

20. W. Song, Y. Wang, D. Cheng, et al., “Light field head-mounted display with correct focus cue using micro structure array,” Chin. Opt. Lett. 12(6), 060010 (2014). [CrossRef]  

21. Y. Jin, A. Hassan, and Y. Jiang, “Freeform microlens array homogenizer for excimer laser beam shaping,” Opt. Express 24(22), 24846–24858 (2016). [CrossRef]  

22. J. Pan, C. Wang, H. Lan, et al., “Homogenized LED-illumination using microlens arrays for a pocket-sized projector,” Opt. Express 15(17), 10483–10491 (2007). [CrossRef]  

23. M. Hirsch, G. Wetzstein, and R. Raskar, “A compressive light field projection system,” ACM Trans. Graph. 33(4), 1–12 (2014). [CrossRef]  

24. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

25. Z. Qin, J. Wu, P. Chou, et al., “Revelation and addressing of accommodation shifts in microlens array-based 3D near-eye light field displays,” Opt. Lett. 45(1), 228–231 (2020). [CrossRef]  

26. J. Wen, X. Yan, X. Jiang, et al., “Integral imaging based light field display with holographic diffusor: principles, potentials and restrictions,” Opt. Express 27(20), 27441 (2019). [CrossRef]  

27. L. Zhao, N. Bai, X. Li, et al., “Efficient implementation of a spatial light modulator as a diffractive optical microlens array in a digital Shack-Hartmann wavefront sensor,” Appl. Opt. 45(1), 90–94 (2006). [CrossRef]  

28. Y. Huang, Y. Qin, P. Tu, et al., “High fill factor microlens array fabrication using direct laser writing and its application in wavefront detection,” Opt. Lett. 45(16), 4460–4463 (2020). [CrossRef]  

29. G. Yoon, T. Jitsuno, M. Nakatsuka, et al., “Shack Hartmann wave-front measurement with a large F-number plastic microlens array,” Appl. Opt. 35(1), 188–192 (1996). [CrossRef]  

30. C. Edwards, H. Presby, and C. Dragone, “Ideal microlenses for laser to fiber coupling,” J. Lightwave Technol. 11(2), 252–257 (1993). [CrossRef]  

31. S. Heist, A. Mann, P. Kühmstedt, et al., “Array projection of aperiodic sinusoidal fringes for high-speed three-dimensional shape measurement,” Opt. Eng. 53(11), 112208 (2014). [CrossRef]  

32. Y. Liu, D. Cheng, Q. Hou, et al., “Compact integrator design for short-distance sharp and unconventional geometric irradiance tailoring,” Appl. Opt. 60(14), 4165–4176 (2021). [CrossRef]  

33. A. Bauer, S. Vo, K. Parkins, et al., “Computational optical distortion correction using a radial basis function-based mapping method,” Opt. Express 20(14), 14906–14920 (2012). [CrossRef]  

34. P. Pjanic, S. Willi, and A. Grundhofer, “Geometric and photometric consistency in a mixed video and galvanoscopic scanning laser projection mapping system,” IEEE Trans. Vis. Comput. Graph. 23(11), 2430–2439 (2017). [CrossRef]  

35. A. Grundhofer and D. Iwai, “Robust, error-tolerant photometric projector compensation,” IEEE Trans. on Image Process. 24(12), 5086–5099 (2015). [CrossRef]  

36. R. Zhang, F. Tan, Q. Hou, et al., “End-to-end learned single lens design using improved Wiener deconvolution,” Opt. Lett. 48(3), 522–525 (2023). [CrossRef]  

37. K. Yanny, K. Monakhova, R. W. Shuai, et al., “Deep learning for fast spatially varying deconvolution,” Optica 9(1), 96–99 (2022). [CrossRef]  

38. D. Cheng, Y. Wang, and H. Hua, “Automatic image performance balancing in lens optimization,” Opt. Express 18(11), 11574–11588 (2010). [CrossRef]  

39. F. Heide, M. Rouf, M. B. Hullin, et al., “High-quality computational imaging through simple lenses,” ACM Trans. Graph. 32(5), 1–14 (2013). [CrossRef]  

40. M. Brown, P. Song, and T. Cham, “Image pre-conditioning for out-of-focus projector blur,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2006), pp. 1956–1963.

41. E. Agustsson and R. Timofte, “NTIRE 2017 challenge on single image super-resolution: dataset and study,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017), pp. 126–135.

42. Z. Wang, A. C. Bovik, H. R. Sheikh, et al., “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

Supplementary Material (1)

Supplement 1: Additional validation of the feasibility and processability of our system.



