Conventional head-mounted displays present different images to each eye, thereby creating a three-dimensional (3D) sensation for viewers. This method controls only the stimulus to vergence; the accommodation stimulus remains fixed at the apparent location of the physical displays. The disrupted coupling between vergence and accommodation can cause considerable visual discomfort. To address this problem, a novel multi-focal plane 3D display system is proposed in this paper. A stack of switchable liquid crystal Pancharatnam–Berry phase lenses is implemented to create real depths for each eye, providing approximate focus cues and relieving the discomfort caused by the vergence–accommodation conflict. The proposed multi-focal plane generation method has great potential for both virtual reality and augmented reality applications, where correct focus cues are highly desirable.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Corrections: Tao Zhan, Yun-Han Lee, and Shin-Tson Wu, "High-resolution additive light field near-eye display by switchable Pancharatnam–Berry phase lenses: errata," Opt. Express 26, 28505 (2018)
1. Introduction
Head-mounted displays are a key component of virtual reality (VR) and augmented reality (AR) devices, serving as a bridge between the computer-generated virtual world and the real one. In most commercial VR and AR devices, the three-dimensional (3D) effect is currently constructed on the principle of binocular disparity [1,2]. Although providing different images to the two eyes is a popular and effective method for creating acceptable depth perception, this unnatural approach also has considerable drawbacks, such as the vergence–accommodation conflict, distorted depth perception, and visual fatigue. To overcome these drawbacks, the display must provide physically real depths, presenting not only the correct vergence but also the corresponding accommodation. Several kinds of technologies have this potential, including volumetric displays, integral displays [7,8], light field displays [9–11], and focal surface displays. In most of these approaches, a fast focal-length-changing device plays a key role in realizing multi-focal plane displays. Several approaches [13–18] for making a tunable lens have been proposed; however, most of them are either too slow or too bulky for wearable display applications.
In this paper, a novel multi-focal plane display based on fast-response switchable Pancharatnam–Berry phase lenses (PBLs) is proposed, satisfying the need for a fast (< 1 ms) and compact (< 10 cm) 3D display system. First, the basic principles and fabrication processes of PBLs, the key elements for generating multiple focal planes, are introduced. Second, the additive light field generation procedure, which makes use of a constrained linear least-squares method, is described. Third, with the factorized light fields, a high-resolution 3D scene is synthesized by the compact light field near-eye display system.
2. Operation principle and simulations
2.1 Switchable Pancharatnam–Berry phase lenses
The well-known Pancharatnam–Berry (PB) phase optical elements [19–25] are half-wave plates whose optic axis varies spatially in a specific pattern. Their basic working principle can be explained by Jones calculus, as illustrated in Fig. 1 and Eq. (3): a half-wave plate with its local optic axis at angle ψ flips the handedness of incident circularly polarized light and imparts a geometric phase of ±2ψ, so that a suitable optic-axis profile acts as a lens.
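This Jones-calculus result can be checked numerically; the sketch below uses one common sign and handedness convention (the angle value is arbitrary):

```python
import numpy as np

def hwp_jones(psi):
    """Jones matrix of a half-wave plate with its fast axis at angle psi."""
    return np.array([[np.cos(2 * psi),  np.sin(2 * psi)],
                     [np.sin(2 * psi), -np.cos(2 * psi)]])

lcp = np.array([1, 1j]) / np.sqrt(2)   # left circularly polarized light
rcp = np.array([1, -1j]) / np.sqrt(2)  # right circularly polarized light

psi = 0.7                               # local optic-axis angle (rad), arbitrary
out = hwp_jones(psi) @ lcp

# The output is RCP carrying the geometric phase exp(i*2*psi):
assert np.allclose(out, np.exp(2j * psi) * rcp)
```

For a PB lens, ψ varies with the radial position so that the accumulated 2ψ phase approximates a parabolic lens profile; reversing the input handedness reverses the sign of the phase, which is what makes the passive ±K switching possible.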
To make the PBLs switchable, homemade fast-response liquid crystals are applied to pre-patterned half-wave plate cells. PBLs can be driven actively or passively [23,26], as shown in Fig. 2(c). For active driving, voltages are applied across the PBLs, switching the LC directors between a well-defined lens-profile pattern parallel to the substrates (Fig. 1(b)) and a homogeneous alignment perpendicular to the substrates. For passive driving, an external polarization rotator (PR) (e.g., a combination of a quarter-wave plate and a twisted-nematic LC cell) is added to switch the handedness of the incident circularly polarized light. The optical power of a PBL can thus be switched between 0 and K in active driving, and between –K and +K in passive driving. By synchronizing a stack of fast-switching PBLs with a high-frame-rate flat-panel display, a fast-response, high-resolution 3D light field display system can be constructed. Specifically, in each sub-frame a computationally factorized image is presented on the flat-panel display at a desired depth by modulating the optical power of the PBLs, as shown in Fig. 2(b).
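As a numerical illustration of the attainable focal planes, a stack of N binary PBLs yields up to 2^N distinct combined optical powers; the lens powers K1 and K2 below are hypothetical values in diopters:

```python
from itertools import product

# Hypothetical optical powers (diopters) of two stacked switchable PBLs
K1, K2 = 0.5, 1.0

# Active driving: each PBL switches between 0 (off) and its full power K
active_powers = sorted({p1 + p2 for p1, p2 in product((0, K1), (0, K2))})
print(active_powers)    # 2^2 = 4 distinct powers -> 4 focal planes

# Passive driving: each PBL contributes -K or +K, selected by the
# handedness of the incident circular polarization
passive_powers = sorted({p1 + p2 for p1, p2 in product((-K1, K1), (-K2, K2))})
print(passive_powers)   # also 4 planes, but symmetric about zero power
```

Because the combined powers add, choosing K1 and K2 in a binary ratio spaces the virtual panels over distinct depths with the fewest lenses.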
2.2 Additive type light field display factorization
With multiple focal planes generated, the image shown on the flat-panel display can be assigned to multiple depths, so that a 3D scene can be created. Here, an additive light field factorization method is designed to generate the 2D images for the corresponding image depths, which can then be combined to reconstruct the target 3D light field with the proposed system, as Fig. 2(a) depicts. In different frames, the physical display panel is imaged by the PBLs as virtual panels at different depths. Since all the displayed light comes from incoherent illumination sources, the intensities of the virtual panels can be summed directly, as Fig. 2 illustrates.
In order to computationally generate the image contents shown on all the virtual panels, the following constrained optimization problem must be solved:

min_I ‖ΦI − L‖₂²  subject to  0 ≤ I ≤ 255^2.2,  (7)

where the vector I stacks the images shown on the virtual panels.
In Eq. (7), L is the target light field originating from the desired 3D scene captured at K view points in the eyebox, and Φ is the mapping matrix between the image contents on the virtual panels and the light field generated by the proposed system. Without loss of generality, as a simple example, Fig. 3 shows the mapping procedure for a 3D scene with 2 virtual panels (each having P pixels). For a ray generated by the 8th pixel in virtual panel 1 and the 7th pixel in virtual panel 2, a viewer looking from the 8th view point would consider it to represent the 9th pixel in the 3D scene, as determined by the propagation direction of the ray. Since all display contents are assumed to be discrete, the target light field can be represented by a 4D matrix (the view points, and a 2D image on a reference panel for each view point/direction). For convenience, the direction angles of the light rays are discretized by the pixel centers on the reference panel and the corresponding view points. For example, the direction angle of the green ray shown in Fig. 3(a) is determined by the line connecting the center of the 9th pixel on the reference panel and the 8th view point. The contents of the target light field, rendered by commercial software (such as 3ds Max™), are reshaped into a vector in the order shown in Fig. 3. In this case, the (7P + 9)th row of the mapping matrix, corresponding to the 9th pixel of the 3D scene viewed from view point 8, is all zeros except for the 8th and (P + 7)th columns, which represent the locations of the pixels to be added from the two virtual panels.
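The mapping just described can be sketched as a toy matrix construction; the panel size P and view-point count V below are illustrative, and the paper's 1-based pixel indices are converted to 0-based array indices:

```python
import numpy as np

P = 16         # pixels per virtual panel (illustrative)
V = 8          # number of view points (illustrative)
n_panels = 2

# Mapping matrix Phi: each row is one light-field sample (view point,
# reference-panel pixel) and sums the panel pixels the ray passes through.
Phi = np.zeros((V * P, n_panels * P))

# The worked example from the text (1-based): a ray through pixel 8 of
# panel 1 and pixel 7 of panel 2, seen as reference pixel 9 from view point 8.
v, r, p1, p2 = 8, 9, 8, 7
row = (v - 1) * P + (r - 1)    # the (7P + 9)th row in 1-based counting
Phi[row, p1 - 1] = 1           # 8th column: panel-1 pixel
Phi[row, P + p2 - 1] = 1       # (P + 7)th column: panel-2 pixel

# The additive model: the light-field sample is the sum of the two intensities
I = np.random.rand(n_panels * P)   # stacked panel images as one vector
L = Phi @ I
assert np.isclose(L[row], I[p1 - 1] + I[P + p2 - 1])
```

In a full system, Phi is very sparse (each row has one nonzero entry per virtual panel), so a sparse-matrix representation is the practical choice.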
The optimization of Eq. (7) resembles a least-squares problem; however, the elements of the vector I must stay within the range [0, 255^2.2], because the linear illumination intensity of a display can be neither negative nor larger than 255^2.2 (for an 8-bit display with gamma = 2.2). Hence, a well-defined constrained linear least-squares problem is obtained, and a trust-region-reflective algorithm is applied to solve it. A demonstration of the simulation results of this algorithm, with 25 view points and 4 virtual panels, is plotted in Fig. 4. With 4 virtual panels, the simulated additive light fields provide accurate, high-quality 3D images at different viewing angles. Additionally, the performance of the optimization framework is tested with different 3D scenes (Fig. 5(a)) and system setups, with the results given in Fig. 5. The simulated image quality depends heavily on the content of the input 3D scenes, as Fig. 5(b) shows. Since the system's degrees of freedom are fixed by the physical display and the number of virtual panels, the compression ratio is higher for more complicated 3D scenes with low redundancy, resulting in output images of lower quality. If more degrees of freedom are provided, for example by adding virtual panels, the image quality can be improved significantly, as Fig. 5(c) depicts. Moreover, lowering the brightness (grey levels) of the input 3D scenes is also found to improve the output image quality noticeably, which can be explained by the effectively loosened constraint resulting from the decreased target values.
3. Experiment and results
3.1 Fabrication of switchable Pancharatnam–Berry lenses
A photo-alignment method is applied to fabricate the PBLs. For both the active and passive driving methods, a thin photo-alignment film (PAAD-72, from Beam Company) was spin-coated on a transparent substrate. For passive driving, the coated substrate was directly exposed to the desired interference pattern. For active driving, substrates with transparent electrodes (indium tin oxide (ITO) glass) were assembled into an LC cell before undergoing the exposure procedure. Figure 6 shows the optical setup. The incident collimated, linearly polarized laser beam (λ = 457 nm) was split into two arms by a non-polarizing beam splitter (BS). One beam was converted to left-handed circular polarization (LCP) by a quarter-wave plate and served as the reference beam, while the other was converted to right-handed circular polarization (RCP) before entering the target lens (Lt), whose focal length is identical to that of the desired PBLs. The two beams were adjusted to have the same size on the prepared substrate (S), coated with the thin photo-alignment film, after being combined by the second beam splitter.
After exposure, for the active-driving device, the cell (with ITO electrodes) is filled with a home-made fast-response LC material (UCF-M37, γ1/K11 = 4.0 ms/μm² at 22 °C) to satisfy the half-wave requirement (dΔn = λ/2, where d is the cell gap). For passive driving, the exposed substrate is instead coated with a diluted LC monomer (e.g., RM257) and then cured by UV light, forming a thin cross-linked LC polymer film, so that an LC cell is not necessary.
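The half-wave condition fixes the cell gap once the LC birefringence and design wavelength are chosen; the quick check below assumes a green design wavelength of 0.544 μm (the birefringence Δn = 0.17 is taken from the text):

```python
# Half-wave condition for the PBL cell: d * dn = wavelength / 2
wavelength = 0.544   # um, assumed green design wavelength
dn = 0.17            # LC birefringence, from the text

d = wavelength / (2 * dn)
print(f"required cell gap: {d:.2f} um")
```

The result, about 1.6 μm, matches the fabricated cell gap quoted later in the text; a thinner gap also shortens the response time, since LC relaxation scales roughly with d².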
3.2 Experimental results
Since a time-multiplexing method is applied in the proposed system, the response time of the PBLs should be as fast as possible. In the passive driving mode, the response time is limited by the broadband TN polarization rotator, which typically responds in > 2 ms. In the active driving mode, the response time of the fabricated PBL (d = 1.6 μm, Δn = 0.17) is measured to be 0.54 ms, which is fast enough for a display panel with a 1-kHz frame rate. Because of its fast response time and compact system configuration, the active driving mode (using 2 PBLs) is selected in this paper to demonstrate the additive light field display. As shown in Fig. 7, the depth information of the image content is well illustrated, and the reduction in spatial resolution is negligible compared with that of the display panel. The detailed parameters of the PBLs and the experimental 3D display system are listed in Table 1.
4. Conclusion
A novel light field display system is proposed and experimentally demonstrated. Benefiting from the fast response time of the PBLs, this system is able to provide high-resolution 3D scenes for viewers. The physical depths provided by the PBLs can relieve the human visual system from the vergence–accommodation conflict, which is highly desirable. With the rapidly increasing computational power of electronic devices, the proposed light field technology has potential applications in both virtual reality and augmented reality.
The authors are indebted to Intel Labs for the financial support.
References and links
2. B. Lee, “Three-dimensional displays, past and present,” Phys. Today 66(4), 36–41 (2013). [CrossRef]
5. M. Mon-Williams, J. P. Wann, and S. Rushton, “Binocular vision in a virtual world: visual deficits following the wearing of a head-mounted display,” Ophthalmic Physiol. Opt. 13(4), 387–391 (1993). [CrossRef] [PubMed]
6. E. Downing, L. Hesselink, J. Ralston, and R. Macfarlane, “A three-color, solid-state, three-dimensional display,” Science 273(5279), 1185–1189 (1996). [CrossRef]
7. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 220 (2013). [CrossRef]
8. H. S. Park, R. Hoskinson, H. Abdollahi, and B. Stoeber, “Compact near-eye display system using a superlens-based microlens array magnifier,” Opt. Express 23(24), 30618–30633 (2015). [CrossRef] [PubMed]
10. F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2015). [CrossRef]
11. S. Lee, C. Jang, S. Moon, J. Cho, and B. Lee, “Additive light field displays: realization of augmented reality with holographic optical elements,” ACM Trans. Graph. 35(4), 60 (2016). [CrossRef]
12. N. Matsuda, A. Fix, and D. Lanman, “Focal surface displays,” ACM Trans. Graph. 36(4), 86 (2017). [CrossRef]
13. H. Ren and S. T. Wu, Introduction to Adaptive Lenses (Wiley, 2012).
17. G. D. Love, D. M. Hoffman, P. J. W. Hands, J. Gao, A. K. Kirby, and M. S. Banks, “High-speed switchable lens enables the development of a volumetric stereoscopic display,” Opt. Express 17(18), 15716–15725 (2009). [CrossRef] [PubMed]
18. S. W. Lee and S. S. Lee, “Focal tunable liquid lens integrated with an electromagnetic actuator,” Appl. Phys. Lett. 90(12), 121129 (2007). [CrossRef]
19. E. Hasman, V. Kleiner, G. Biener, and A. Niv, “Polarization dependent focusing lens by use of quantized Pancharatnam–Berry phase diffractive optics,” Appl. Phys. Lett. 82(3), 328–330 (2003). [CrossRef]
20. S. Pancharatnam, “Generalized theory of interference and its applications,” Proc. Indian Acad. Sci. Sect. A Phys. Sci. 44(5), 247–262 (1956).
21. M. V. Berry, “Quantal phase factors accompanying adiabatic changes,” Proc. R. Soc. Lond. A 392(1802), 45–57 (1984). [CrossRef]
22. Y. Ke, Y. Liu, J. Zhou, Y. Liu, H. Luo, and S. Wen, “Optical integration of Pancharatnam-Berry phase lens and dynamical phase lens,” Appl. Phys. Lett. 108(10), 101102 (2016). [CrossRef]
23. N. V. Tabiryan, S. V. Serak, D. E. Roberts, D. M. Steeves, and B. R. Kimball, “Thin waveplate lenses of switchable focal length-new generation in optics,” Opt. Express 23(20), 25783–25794 (2015). [CrossRef] [PubMed]
26. Y. H. Lee, G. Tan, T. Zhan, Y. Weng, G. Liu, F. Gou, F. Peng, N. V. Tabiryan, S. Gauza, and S. T. Wu, “Recent progress in Pancharatnam-Berry phase optical elements and the applications for virtual/augmented realities,” Opt. Data Process. Storage 3(1), 79–88 (2017). [CrossRef]
27. T. F. Coleman and Y. Li, “A reflective Newton method for minimizing a quadratic function subject to bounds on some of the variables,” SIAM J. Optim. 6(4), 1040–1058 (1996). [CrossRef]
28. J. Kim, Y. Li, M. N. Miskiewicz, C. Oh, M. W. Kudenov, and M. J. Escuti, “Fabrication of ideal geometric-phase holograms with arbitrary wavefronts,” Optica 2(11), 958–964 (2015). [CrossRef]