Optica Publishing Group

Aberration correction based on a pre-correction convolutional neural network for light-field displays

Open Access

Abstract

Lens aberrations degrade the image quality and limit the viewing angle of light-field displays. In the present study, an approach to aberration reduction based on a pre-correction convolutional neural network (CNN) is demonstrated. The pre-correction CNN transforms the elemental image array (EIA) generated by a virtual camera array into a pre-corrected EIA (PEIA). The CNN is built and trained based on the aberrations of the lens array. The resulting PEIA, rather than the EIA, is presented on the liquid crystal display, and higher-quality 3D images are obtained via the optical transformation of the lens array. The validity of the proposed method is confirmed through simulations and optical experiments, and a 70-degree viewing-angle light-field display with improved image quality is demonstrated.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Tabletop three-dimensional (3D) displays have attracted widespread attention because they allow multiple viewers around the table to observe reconstructed 3D scenes with the correct depth cues [1–5]. Existing tabletop 3D display systems can be divided into three basic types: holographic, volumetric, and light-field displays. Some researchers have sought to reconstruct 3D objects by reproducing both amplitude and phase information using holographic 3D technology. Unfortunately, producing large, true-color holographic 3D images remains a challenge because of the complexity of the image acquisition, transmission, and display process [2–4]. High-speed spinning projectors and screens have been adopted in volumetric tabletop 3D displays to produce transparent volume-filling 3D images. However, the color and gray levels produced by such systems are limited, and occlusion relationships cannot be correctly constructed due to the transparency of the images [5]. As such, reconstructing the light-ray distribution of a real 3D scene in a 3D light-field display is regarded as a promising approach to achieving functional tabletop 3D displays [6–8].

In a previous study, a light-field display system containing a lens array, a liquid crystal display (LCD), and a holographic functional screen (HFS) that provides continuous perspectives within a 45-degree viewing angle was proposed [9]. Figure 1 illustrates the imaging process of such a light-field display system. The LCD displays the elemental image array (EIA); each lens and its corresponding LCD unit form a projection system that projects the image displayed on the LCD unit onto the HFS. However, lens aberrations reduce the image quality and limit the viewing angle. Figure 1(b) shows the generation of diffuse spots, and Fig. 1(c) compares the original image with the blurred image obtained by optical simulation. The size of the diffuse spot increases as the viewing angle increases.

Fig. 1. (a) Schematic diagram of a light-field display system. (b) Example of a lens aberration degrading image quality.

In general, aberrations can be reduced through optical optimization. In our previous work, a triple-lens system was designed for optical aberration correction, increasing the viewing angle to 70 degrees [10]. However, using three lenses increases the structural complexity and the manufacturing difficulty. Inverse filtering can also be employed to suppress aberrations in various optical systems without increasing their complexity [11–14]. Recently, a 45-degree viewing-angle integral imaging display with enhanced image quality based on pre-filtering was demonstrated [15]. However, this method is only suitable for light-field displays with small viewing angles. Large viewing-angle light-field displays suffer from severe aberrations, and when inverse filtering is used to eliminate them, ringing effects severely deteriorate the 3D image quality. The combination of machine learning with optics has also been used to enhance image quality in optical systems. A light-field re-projection method based on machine learning has been proposed to correct the aberrations of a singlet plenoptic camera [16]. By introducing machine learning into computer-generated holography (CGH), full-color, high-quality holographic images have been generated in real time [17]; the underlying CGH framework includes a novel camera-in-the-loop optimization strategy and a neural network architecture.

In the present study, considering the structural complexity and the manufacturing difficulty, a compound lens with two lenses and an iris is designed to achieve a 70-degree viewing angle. An aberration-correction method based on a pre-correction convolutional neural network (CNN) is employed to enhance the image quality of a light-field display system with a compound-lens array. The captured EIA is pre-corrected in accordance with the aberrations introduced by the lens array, and the pre-corrected EIA (PEIA) is then displayed on the LCD in place of the original EIA. This pre-correction method does not increase the complexity of the display, and experimental data validate its effectiveness in improving image quality. In the experiment, a full-parallax tabletop 3D scene with high image quality can be viewed within a 70-degree viewing angle.

2. Aberration optimization based on a pre-correction CNN

In traditional integral imaging (II) based on an LCD panel and a lens array, the gaps in the lens array cause image fragmentation, and lens distortion warps the elemental images and severely degrades the image quality, as shown in Fig. 2(a). To eliminate these problems, an HFS (also called a directional diffuser) was introduced into integral imaging in our previous work [9,18,19]. The HFS is set at the focal plane of the lens array.

Fig. 2. Reconstructed 3D images of (a) the traditional II and (b) the II with the HFS.

To guarantee that the displayed 3D image is continuous and uniform, the expanding angle of the HFS is designed to exactly eliminate the gaps between the lenses [18]. Contrast experiments show that, as in Fig. 2(b), the distortion of the display system with the HFS is corrected effectively, and the influence of chromatic aberration on the display is very small. Figure 2(b) also illustrates the details of a 3D image on the HFS: due to lens aberration, the pixels are obviously blurred. It is therefore important to improve the image quality by reducing the influence of aberration.

As shown in Fig. 3, the implementation process for enhanced 3D images based on a pre-correction CNN consists of three stages: digital capture, pre-correction processing, and optical visualization. In the capture stage, a virtual camera array (VCA) or a real camera array can be used to capture the perspective images. Here, digital capture is employed, as the pinhole model can be applied to obtain perspective images without optical aberrations. In the pre-correction processing stage, the EIA is processed with the pre-correction CNN, producing a PEIA. During the pre-correction of the EIA, Zernike polynomials are used to analyze the optical aberrations of the lenses. The analysis shows that the aberrations differ across field positions within an EI. Therefore, each EI is divided into 10×10 subsections, and the corresponding point spread function (PSF) for each subsection is determined. Based on the PSF array, a pre-correction network is constructed and trained by minimizing the errors between the aberration-corrected images and the target images. In the optical visualization stage, the PEIA is displayed on the LCD. Following optical transformation by the lens array, a high-quality 3D image is produced.

Fig. 3. Diagram of the proposed pre-correction method: (a) digital capture, (b) pre-correction processing, and (c) optical visualization.

2.1 Analysis of the aberration characteristics of the lens array

In the digital capture stage of a light-field display system containing a lens array and an HFS, computer-generated techniques can be used to produce an aberration-free EIA via a VCA that simulates the pickup process [20]. In conventional visualization processes, blurry 3D images can be created due to aberrations in the lens array. Consequently, it is necessary to analyze the aberration characteristics of the lens array in a light-field display system. Because the parameters of the lens units are the same, Fig. 4 presents only one lens as an example to illustrate the aberrations. Two two-dimensional plane coordinate systems, XO1Y and ηO2ξ, are employed to describe the EIA plane and the lens array plane, respectively, with (X, Y) representing the object field coordinates, and (η, ξ) representing the pupil coordinates of the lens. The wavefront aberrations of field (X, Y) can be expressed by using Zernike polynomials as expressed in Eq. (1).

$$\begin{aligned}{W_{(X,Y)}}(\xi ,\eta ) &= {Z_0}_{(X,Y)} + {Z_1}_{(X,Y)}\eta + {Z_2}_{(X,Y)}\xi + {Z_3}_{(X,Y)}[{2({\xi^2} + {\eta^2}) - 1} ]\\ &\quad + {Z_4}_{(X,Y)}({\eta ^2} - {\xi ^2}) + {Z_5}_{(X,Y)}2\xi \eta + {Z_6}_{(X,Y)}[{ - 2\eta + 3\eta ({\xi^2} + {\eta^2})} ]\\ &\quad + {Z_7}_{(X,Y)}[{ - 2\xi + 3\xi ({\xi^2} + {\eta^2})} ]+ {Z_8}_{(X,Y)}[{1 - 6({\xi^2} + {\eta^2}) + 6{{({\xi^2} + {\eta^2})}^2}} ]\end{aligned}$$
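For illustration, Eq. (1) can be transcribed directly into code. The sketch below is not the authors' implementation; the function name and the storage of the nine Zernike coefficients as an array z[0..8] are assumptions for this example.

```python
import numpy as np

def wavefront_aberration(z, xi, eta):
    """Evaluate Eq. (1): the wavefront aberration W(xi, eta) for one field
    subsection, given its nine Zernike coefficients z[0..8].
    xi, eta are normalized pupil coordinates (scalars or arrays)."""
    r2 = xi**2 + eta**2
    return (z[0]
            + z[1] * eta
            + z[2] * xi
            + z[3] * (2 * r2 - 1)                  # defocus
            + z[4] * (eta**2 - xi**2)              # astigmatism (0/90 deg)
            + z[5] * 2 * xi * eta                  # astigmatism (45 deg)
            + z[6] * (-2 * eta + 3 * eta * r2)     # coma along eta
            + z[7] * (-2 * xi + 3 * xi * r2)       # coma along xi
            + z[8] * (1 - 6 * r2 + 6 * r2**2))     # spherical aberration
```

Evaluated over a grid of pupil coordinates, this yields the sampled wavefront that enters the PSF computation of Eq. (2).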

To characterize the continuously varying aberrations of a lens, the EI on the object field is divided into 100 equal subsections as shown in Fig. 4. These subsections are denoted as Fi,j (i = –5, –4, … –1, 1 … 4, 5; j = –5, –4, … –1, 1 … 4, 5). As each subsection is very small, the wavefront aberrations are almost the same for each individual section, and they can be calculated using the corresponding central field positions. For example, the wavefront aberrations in subsection F3,3 are calculated using the field position (0.5Rm,0.5Rm).
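The mapping from subsection index to central field position can be made explicit. The helper below is a sketch (the function name and the half-width argument r_m are assumptions); it reproduces the example above, where F3,3 maps to (0.5Rm, 0.5Rm).

```python
def subsection_center(i, j, r_m):
    """Central field position of subsection F_{i,j}, with i, j in
    {-5, ..., -1, 1, ..., 5} and r_m the half-width of the EI field.
    A positive index k covers ((k-1)/5, k/5) * r_m, so its center lies
    at (k - 0.5)/5 * r_m; negative indices mirror this about the axis."""
    def center(k):
        return (k - 0.5) / 5 * r_m if k > 0 else (k + 0.5) / 5 * r_m
    return center(i), center(j)
```

For example, `subsection_center(3, 3, r_m)` returns `(0.5 * r_m, 0.5 * r_m)`, matching the field position used for F3,3 in the text.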

Fig. 4. (a) All 100 subsections of an EI and its corresponding lens. (b) Selection of nine subsections marked in (a) and their mapped central field positions.

2.2 Light intensity distribution of the displayed EI on the HFS

The light intensity distribution of one point and the image displayed on the HFS can be analyzed after the aberration expression for the lens array is obtained. The pixels on the LCD are projected on the HFS by the lens array. The HFS diffuses the incident light from different lenses at a solid diffusion angle, and a voxel with a uniform light distribution is constructed. As shown in Fig. 5 and Fig. 6, XO1Y, ηO2ξ, and xOy represent the LCD plane, the lens array plane, and the HFS plane, respectively.

Fig. 5. Imaging process for the pixels of an LCD.

Take the voxel V on the HFS as an example: the pixels A, B, and C pass through their corresponding lenses and converge at voxel V. In the light-field display system, the pixels are small enough to be treated as points. The pixels A, B, and C present different views of the voxel V. As shown in Fig. 5, the light intensity distributions of the three pixels are different, so aberrations affect the different views of the voxel V differently. The aberrations also vary with the object coordinates (X, Y). Taking the single pixel A(XA, YA) on the LCD as an example, according to Fourier optics theory, the intensity distribution created by pixel A(XA, YA) passing through the corresponding lens can be given as follows:

$$h(x,y) = \int {\left( {{{\left|{\frac{g}{l}\int {\int\limits_{ - \infty }^{ + \infty } {\textrm{exp} [{ - ikW(\lambda gu,\lambda gv)} ]} } \textrm{exp} \left\{ { - i2\pi \left[ {\left( {x - \frac{g}{l}{X_A}} \right)u + \left( {y - \frac{g}{l}{Y_A}} \right)v} \right]} \right\}dudv} \right|}^2}} \right)d\lambda }$$
where $u = {\xi / {\lambda g}}$, $v = {\eta / {\lambda g}}$, l is the distance between the LCD and the lens array, and g denotes the distance between the lens array and the HFS. $\lambda $ denotes the wavelength. As the LCD contains three kinds of color filters, the optical simulation uses three wavelengths (Blue:486 nm, Green:587 nm, Red:656 nm). The intensity distribution for the EI on the HFS can be represented by a convolution operation:
$$DI(x,y) = PI(x,y) \ast h(x,y)$$
where * represents the convolution operation, $PI({x,y} )$ denotes the intensity distribution of the pre-corrected EI on the LCD, and $DI({x,y} )$ represents the intensity distribution of the displayed image on the HFS.
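Eqs. (2)–(3) can be sketched numerically for a single wavelength. The grid sampling, unit-energy normalization, and FFT-based convolution (with wrap-around boundaries) below are simplifications for illustration, not the authors' exact implementation.

```python
import numpy as np

def psf_from_wavefront(wavefront, pupil):
    """Monochromatic PSF following Eq. (2): the squared magnitude of the
    Fourier transform of the pupil function exp(-i*2*pi*W), with the
    wavefront W sampled in units of waves. Normalized to unit energy."""
    field = pupil * np.exp(-1j * 2 * np.pi * wavefront)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

def displayed_intensity(pre_corrected_ei, psf):
    """Eq. (3): DI = PI * h, computed as an FFT-based convolution of the
    pre-corrected EI with the point spread function."""
    return np.real(np.fft.ifft2(np.fft.fft2(pre_corrected_ei) *
                                np.fft.fft2(np.fft.ifftshift(psf))))
```

With a zero wavefront the PSF collapses to a delta and the displayed image equals the pre-corrected one; a nonzero wavefront blurs it, which is exactly the degradation the pre-correction CNN is trained to invert.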

Fig. 6. Schematic diagram of the pre-correction CNN for aberration correction.

2.3 Pre-correction CNN for aberration correction

To suppress the effect of aberrations in a light-field display system, a pre-correction CNN is utilized to obtain PEIs. Figure 6 illustrates the process of obtaining a PEI from the pre-correction CNN, in which an original EI is used as the input to the auto-encoder. In accordance with the PSF analysis presented in the previous section, the imaging process for PEIs can be expressed digitally. The imaging process for the example PEI in Fig. 6 is equivalent to the convolution of PI(x, y) and h(x, y), as shown by the red rectangle. By substituting the lens parameters into Eqs. (1)–(3), the light intensity distribution of the displayed EI (DEI) on the HFS can be simulated. The original EI is captured by applying the pinhole model, which avoids optical aberrations, and the ideal light-field image consists of the original EIs. To improve the image quality, the DEI is required to be as close to the original EI as possible. The structural similarity index measure (SSIM) is used to determine the degree of similarity between the DEI and the original EI, followed by back-propagation of the loss value. In the training process, the influence of aberrations is introduced into the network; thus, the coordinates of the LCD plane and the viewing angles are both considered.
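The SSIM-based training objective can be sketched as follows. For brevity this version uses global image statistics rather than the usual sliding-window SSIM, so it is an illustrative stand-in for, not a reproduction of, the loss used by the authors.

```python
import numpy as np

def global_ssim(a, b, max_val=1.0):
    """Single-window SSIM between two images, computed from global means,
    variances, and covariance (a simplification of windowed SSIM)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stability terms
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return (((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
            ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2)))

def ssim_loss(dei, ei):
    """Training minimizes 1 - SSIM between the simulated DEI and the
    aberration-free original EI."""
    return 1.0 - global_ssim(dei, ei)
```

The loss vanishes when the simulated DEI matches the original EI exactly and grows as the structural similarity drops, which is what drives the network toward PEIs that cancel the lens blur.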

2.4 Implementation of the pre-correction CNN

A schematic diagram of the auto-encoder employed in the pre-correction CNN is presented in Fig. 7. The encoder is utilized to extract features of the input EI, and a PEI is the output of the decoder. In the encoder, five convolution layers are adopted. As the EI is processed by each convolutional layer, the feature size is reduced by half and the number of features is doubled. In the decoder, a deconvolution operation is used to gradually increase the resolution of the features so that the resulting PEI is the same size as the input EI.
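The shape progression described above can be sketched in Keras. The kernel sizes, activations, and output layer here are assumptions, and the skip connections and batch normalization used in the actual network are omitted for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_autoencoder(size=128, channels=3):
    """Auto-encoder sketch: five stride-2 convolutions halve the feature
    size and double the feature count (32 -> 512); five transposed
    convolutions restore the input resolution, so the output PEI has the
    same size as the input EI."""
    inp = layers.Input(shape=(size, size, channels))
    x = inp
    for filters in (32, 64, 128, 256, 512):      # encoder
        x = layers.Conv2D(filters, 3, strides=2, padding="same",
                          activation="relu")(x)
    for filters in (256, 128, 64, 32, 32):       # decoder
        x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same",
                                   activation="relu")(x)
    out = layers.Conv2D(channels, 1, activation="sigmoid")(x)  # PEI in [0, 1]
    return tf.keras.Model(inp, out)
```

A 128×128 input passes through a 4×4 bottleneck with 512 features and is decoded back to 128×128, matching the patch size used for training.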

Fig. 7. Schematic diagram of the auto-encoder of the pre-correction CNN.

In the present study, the encoder for the proposed CNN is trained on building scenes. Many parallax images of streets and buildings are acquired for different scene models. More than 15,000 EIs with 143×143 resolution are collected as a dataset, and patches of 128×128 are randomly cropped for training. Large datasets and regularization are used to avoid overfitting. In addition, techniques such as skip connections and batch normalization are used to improve the convergence of the network. The network converges well after 50,000 training iterations, as shown in Fig. 8. Each iteration takes 0.036 seconds, so the total training time is 1800 seconds. The network is programmed with TensorFlow and runs on an NVIDIA RTX 2070 GPU. The network structure contains five convolutional layers, which contain 32, 64, 128, 256, and 512 features in sequence.
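The dataset preparation step (random 128×128 patches from 143×143 EIs) might look like the following; the function name and interface are assumptions for this sketch.

```python
import numpy as np

def random_patch(ei, patch=128, rng=None):
    """Crop a random training patch from an elemental image,
    e.g. 143x143 -> 128x128 as described for the dataset."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = ei.shape[:2]
    top = int(rng.integers(0, h - patch + 1))    # random top-left corner
    left = int(rng.integers(0, w - patch + 1))
    return ei[top:top + patch, left:left + patch]
```

Random cropping also acts as a mild data augmentation, exposing the network to shifted versions of each EI.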

Fig. 8. Convergence curve of network training.

2.5 Advantage of the pre-correction CNN

For a large viewing-angle light-field display, the aberrations of the edge field of view are obviously larger than those of the middle. As the aberrations increase, the overlapping range between imaging pixels increases. The previous pre-filtering method based on the minimum mean square error (MSE) only considers single-pixel correspondence [15]. When that method is used to eliminate the severe aberrations of the edge field of view, calculation errors and ringing effects appear. The proposed pre-correction CNN takes the relationship between adjacent pixels into account by using the SSIM, so it can improve the image quality of both the edge field and the center field simultaneously. The simulation results are presented in Fig. 9; it can be seen that the aberrations of the lens unit seriously degrade the image quality. With the pre-filtering method, the middle view is corrected effectively, but ringing effects deteriorate the image quality of the left and right views. With the pre-correction CNN, the image quality of all views is notably improved. The SSIM values are indicated at the top of the simulated images.

Fig. 9. Simulation results of (a) an ideal image without aberrations, (b) a displayed image without correction, (c) a displayed image with pre-filtering, (d) a displayed image with pre-correction CNN.

3. Experiment

In order to realize a light-field display with a 70-degree viewing angle, a compound lens is designed. Considering the structural complexity and the manufacturing difficulty, two lenses and an iris are employed. The optimized structure and corresponding parameters of the lens unit are shown in Fig. 10(a). Based on the rotational symmetry of the lens units, the spot diagrams of 25 field sections are presented in Fig. 10(b), in which serious lens aberrations can be observed. In Fig. 10(b), different colors represent different wavelengths (Blue: 486 nm, Green: 587 nm, Red: 656 nm); the influence of chromatic aberration on the display is very small.

Fig. 10. (a) Structural parameters of a lens unit. (b) Spot diagrams of a lens unit for 25 field subsections (RMS spot sizes: F1,1: 1.91 mm, F2,1: 2.02 mm, F3,1: 2.53 mm, F4,1: 3.22 mm, F5,1: 4.10 mm, F1,2: 2.02 mm, F2,2: 2.17 mm, F3,2: 2.72 mm, F4,2: 3.32 mm, F5,2: 4.33 mm, F1,3: 2.53 mm, F2,3: 2.72 mm, F3,3: 3.07 mm, F4,3: 3.75 mm, F5,3: 4.62 mm, F1,4: 3.22 mm, F2,4: 3.31 mm, F3,4: 3.75 mm, F4,4: 4.33 mm, F5,4: 4.98 mm, F1,5: 4.09 mm, F2,5: 4.33 mm, F3,5: 4.62 mm, F4,5: 4.95 mm, F5,5: 5.04 mm).

The Zernike coefficients for the 100 field subsections are given in Table 1. With these Zernike coefficients, the PSF is calculated and a PEIA is obtained by utilizing the pre-correction CNN.


Table 1. Zernike coefficients for 100 field subsections.

In the optical experiment, a tabletop light-field display system consisting of an HFS, a lens array, and an LCD is demonstrated, as shown in Fig. 11(a). The lens array is located 6.977 mm above the LCD, while the HFS is located 180.00 mm above the lens array. The LCD is 32 inches with a resolution of 7680×4320. The lens array consists of 53×30 lens units, and the distance between the centers of adjacent lens units is 13 mm. Each EI is composed of 143×143 pixels in a matrix format and creates 143×143 viewpoint perspectives. Using rendering software and a synthesis algorithm, the EIA shown in Fig. 11(b) is obtained. Figure 11(c) presents the PEIA generated by the proposed pre-correction CNN.

Fig. 11. (a) Tabletop light-field display system consisting of an HFS, a lens array, and an LCD. (b) EIA. (c) PEIA.

Figure 12 shows a displayed standard resolution chart with and without the proposed method. It can be seen that the image quality is improved by introducing the pre-correction CNN.

Fig. 12. A displayed standard resolution chart (a) without the proposed pre-correction CNN and (b) with the pre-correction CNN.

Figures 13(a) and (b) present the displayed 3D architectural scene from different viewpoints. The top row presents 3D images without correction, while the bottom row presents the corresponding 3D images improved with the pre-correction network; the middle row shows details from the captured images. Comparing the details, the bottom row presents clearer 3D images and provides more detailed information about the 3D architectural scene within a 70-degree viewing angle. By introducing the pre-correction method into the light-field display, the image quality is noticeably improved, and the trained CNN is suitable for different building scenes. A video of the 3D display results can be seen in Visualization 1.

Fig. 13. Tabletop 3D light-field display for an architectural scene without the proposed pre-correction CNN and with the pre-correction CNN.

4. Conclusion

An aberration-correction method for a light-field display based on a pre-correction CNN is demonstrated in the present study. Zernike polynomials are used to describe the wavefront aberration characteristics of the lens array and to obtain the PSF, which allows the imaging process to be expressed digitally. The goal of the proposed pre-correction method is to produce DEIs on the HFS that are as similar as possible to the original, aberration-free EIs. The pre-correction CNN is built and trained by minimizing the errors between the DEIs and the original EIs. In the visualization stage, the calculated PEIA is displayed on the LCD. Simulations and optical experiments demonstrate that image quality is significantly higher with the proposed pre-correction approach. Using this approach, the proposed light-field display system produces clear, full-parallax 3D images within a 70-degree viewing angle.

Funding

National Natural Science Foundation of China (61905019, 62075016); National Key Research and Development Program of China (2017YFB1002900).

Disclosures

The authors declare no conflicts of interest. This work is original and has not been published elsewhere.

References

1. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013).

2. E. Chang, J. Choi, S. Lee, S. Kwon, J. Yoo, M. Park, and J. Kim, “360-degree color hologram generation for real 3D objects,” Appl. Opt. 57(1), A91–A100 (2018).

3. J. Hua, D. Yi, W. Qiao, and L. Chen, “Multiview holographic 3D display based on blazed Fresnel DOE,” Opt. Commun. 472, 125829 (2020).

4. Z. Zhang, J. Liu, X. Duan, and Y. Wang, “Enlarging field of view by a two-step method in a near-eye 3D holographic display,” Opt. Express 28(22), 32709–32720 (2020).

5. T. Yendo, T. Fujii, M. Tanimoto, and M. P. Tehrani, “The Seelinder: Cylindrical 3D display viewable from 360 degrees,” J. Vis. Commun. Image Represent. 21(5-6), 586–594 (2010).

6. Z. Yan, X. Yan, X. Jiang, H. Gao, and J. Wen, “Integral imaging based light field display with enhanced viewing resolution using holographic diffuser,” Opt. Commun. 402, 437–441 (2017).

7. J. Wen, X. Yan, X. Jiang, Z. Yan, F. Fan, P. Li, Z. Chen, and S. Chen, “Integral imaging based light field display with holographic diffusor: principles, potentials and restrictions,” Opt. Express 27(20), 27441–27458 (2019).

8. Z. Yan, X. Yan, Y. Huang, X. Jiang, Z. Yan, Y. Liu, Y. Mao, Q. Qu, and P. Li, “Characteristics of the holographic diffuser in integral imaging display systems: A quantitative beam analysis approach,” Opt. Lasers Eng. 139, 106484 (2021).

9. X. Sang, X. Gao, X. Yu, S. Xing, Y. Li, and Y. Wu, “Interactive floating full-parallax digital three dimensional light-field display based on wavefront recomposing,” Opt. Express 26(7), 8883–8889 (2018).

10. X. Gao, X. Sang, X. Yu, W. Zhang, B. Yan, and C. Yu, “360° light field 3D display system based on a triplet lenses array and holographic functional screen,” Chin. Opt. Lett. 15(12), 121201 (2017).

11. M. Alonso and A. Barreto, “Pre-compensation for high-order aberrations of the human eye using on-screen image deconvolution,” in Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 556–559 (2003).

12. S. Mohammadpour, A. Mehridehnavi, H. Rabbani, and V. Lakshminarayanan, “A pre-compensation algorithm for different optical aberrations using an enhanced wiener filter and edge tapering,” in 2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA), 935–939 (2012).

13. M. Brown, P. Song, and T. Cham, “Image Pre-Conditioning for Out-of-Focus Projector Blur,” in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), 1956–1963 (2006).

14. F. Huang, D. Lanman, B. Barsky, and R. Raskar, “Correcting for optical aberrations using multilayer displays,” ACM Trans. Graph. 31(6), 1–12 (2012).

15. W. Zhang, X. Sang, X. Gao, X. Yu, B. Yan, and C. Yu, “Wavefront aberration correction for integral imaging with the pre-filtering function array,” Opt. Express 26(21), 27064–27075 (2018).

16. Y. Chen, X. Jin, and B. Xiong, “Optical-aberrations-corrected light field re-projection for high-quality plenoptic imaging,” Opt. Express 28(3), 3057–3072 (2020).

17. Y. Peng, S. Choi, N. Padmanaban, and G. Wetzstein, “Neural holography with camera-in-the-loop training,” ACM Trans. Graph. 39(6), 1–14 (2020).

18. X. Yu, X. Sang, X. Gao, S. Yang, B. Liu, D. Chen, B. Yan, and C. Yu, “Distortion correction for the elemental images of integral imaging by introducing the directional diffuser,” Chin. Opt. Lett. 16(4), 041001 (2018).

19. C. Yu, J. Yuan, F. C. Fan, C. C. Jiang, S. Choi, X. Sang, C. Lin, and D. Xu, “The modulation function and realizing method of holographic functional screen,” Opt. Express 18(26), 27820–27826 (2010).

20. S. Xing, X. Sang, X. Yu, C. Duo, B. Pang, X. Gao, S. Yang, Y. Guan, B. Yan, J. Yuan, and K. Wang, “High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction,” Opt. Express 25(1), 330–338 (2017).

Supplementary Material (1)

Visualization 1: 3D display results.
