
Dual-depth augmented reality display with reflective polarization-dependent lenses

Open Access

Abstract

Vergence-accommodation conflict (VAC) is a long-standing issue in near-eye displays that use stereoscopy to provide the perception of three-dimensional (3D) depth. By generating multiple image planes, the depth cues can be corrected to enable a comfortable 3D viewing experience. In this study, we propose a multi-plane optical see-through augmented reality (AR) display with customized reflective polarization-dependent lenses (PDLs). Leveraging the different optical powers of two PDLs, a proof-of-concept dual-plane AR device is realized. The proposed design paves the way toward a compact, lightweight, and fatigue-free AR display.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Augmented reality (AR) displays have found widespread applications in healthcare, education, engineering, and gaming. One of the major challenges of AR displays is to present natural-looking three-dimensional (3D) images for a comfortable viewing experience. Conventional stereoscopic AR products leverage binocular stereo vision to create the illusion of depth [1]. Nevertheless, this technique fails to match the vergence distance with the accommodation distance, resulting in visual fatigue, discomfort, and even nausea [2]. To provide correct focus cues, several methods have been developed, such as light field displays [3–6], holographic displays [7–10], multifocal displays [11–19], varifocal displays [20–22], and volumetric displays [23,24].

A multi-plane display can establish multiple focal depths to match the varying vergence distance perceived by binocular vision. Implementations encompass spatial-multiplexing [11,12], time-multiplexing [14–16,18], polarization-multiplexing [17], and wavelength-multiplexing [19]. Overall, they can be categorized into power-based [16,17] and distance-based types [15,18]. The power-based approach relies on imaging optics with a tunable focal length, while the distance-based method alters the optical path difference (object distance). With respect to elements providing tunable optical powers, gradient-index liquid crystal (LC) lenses are widely used in varifocal displays owing to their continuous control of focal length [20–22]. In multifocal displays, some approaches adopt the Pancharatnam-Berry phase lens, exploiting its opposite phase profiles for orthogonal circular polarizations, to implement a multifocal virtual reality system [16,17]. However, transforming such systems into AR displays requires an additional optical see-through combiner. One distance-based approach uses a Savart plate to distinguish the optical path lengths of the ordinary and extraordinary beams by double refraction [25]. Nevertheless, the tradeoff is an increased volume due to the thick Savart plate, and a beam splitter is deployed to enable the see-through capability. Another approach, based on cholesteric liquid crystal (CLC) films, has also been demonstrated [18]. Circularly polarized beams with opposite handedness experience different optical path lengths by arranging the CLC films at different positions. Nonetheless, a beam splitter is still needed to accommodate the on-axis operation of the CLC films. Furthermore, additional spacing between CLC films must be accounted for when projecting more focal planes at different depths, which also increases the display volume.

Diffractive optical devices with patterned CLC have proven useful in wearable displays [26–31]. Such CLC-patterned diffractive optics features a thin form factor, light weight, low cost, polarization selectivity, and a potentially tunable reflection band with high efficiency. Unlike traditional holographic optical elements, CLC devices can accommodate a wider angular and spectral response, making them more compatible with various display devices. CLC lenses can be classified into off-axis and on-axis types; the main difference between them is the transverse periodicity. The off-axis lens, integrating the functions of a lens and a grating, provides a large diffraction angle and can be used as a coupler or combiner in AR displays [28,30,31]. However, it may cause large aberrations depending on how it is used. When used as an imaging element for Maxwellian view, the system is free from aberration if the incident waves satisfy the Bragg-matching condition, but the tradeoff is a reduced eyebox size. In a conventional imaging system, it suffers from noticeable aberration due to its relatively large diffraction angle. In comparison, the on-axis type, featuring only the lens function, exhibits better imaging performance and can be employed as a chromatic aberration corrector in a virtual reality display system [29].

In this paper, we propose a multi-plane optical see-through AR display with customized polarization-dependent lenses (PDLs) based on CLC patterning. PDLs with different diopters serve to provide image planes at different depths. In Sec. 2, we present the system design and the simulation model based on sequential ray-tracing analysis. In Sec. 3, we describe the fabrication procedures and lensing performance. Finally, we implement a proof-of-concept dual-plane AR display to verify its feasibility. Further improvements and variations of the system are also discussed. Such a design is promising for 3D AR displays.

2. System design

Figure 1 shows the system configuration consisting of an image source, a polarization rotator (PR), and two customized PDLs. The polarization rotator includes a linear polarizer, a switchable twisted nematic (TN) cell, and a quarter-wave plate (QWP) film. By switching the TN cell on and off, two orthogonal linearly polarized (LP) states can be generated. After passing through the QWP film, they are converted into right-handed circularly polarized (RCP) or left-handed circularly polarized (LCP) light. The PDL follows the CLC polarization-selectivity rule. In our experiment, we designed two PDLs with different optical powers for RCP and LCP light. The image source is placed within one focal length of the PDLs to generate virtual images, so two image planes appear at different depths. It is noteworthy that instead of utilizing a beam splitter as the system combiner, the PDLs are positioned at 22.5° relative to the eye pupil. In practical applications, head-mounted display manufacturers tend to place the image plane at around 2 to 3 m to cover a wide range of observable virtual space without obvious adverse effects [32]. In mixed reality (MR) headsets, arm's-length display interaction is involved, so the image plane should be closer. The depths of the image planes are calculated by the thin-lens equation in the paraxial approximation [33].
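For reference, the paraxial calculation reduces to the standard thin-lens equation. The short sketch below is a minimal illustration of that bookkeeping only; the focal length and object distance are placeholder values, not the parameters of the fabricated PDLs.

```python
# Minimal paraxial sketch of the thin-lens relation 1/d_i = 1/f - 1/d_o.
# A negative d_i indicates a virtual image (object placed within one focal
# length of the lens). The numbers below are illustrative placeholders only.

def image_distance(f_cm: float, d_o_cm: float) -> float:
    """Image distance in cm; negative means a virtual image."""
    return 1.0 / (1.0 / f_cm - 1.0 / d_o_cm)

d_i = image_distance(f_cm=10.0, d_o_cm=9.0)
print(f"Virtual image at {abs(d_i):.0f} cm in front of the lens"
      if d_i < 0 else f"Real image at {d_i:.0f} cm behind the lens")
```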


Fig. 1. Schematic illustration of the dual-plane AR system. LCD: Liquid crystal display; PR: Polarization rotator. R-PDL and L-PDL reflect RCP and LCP lights, respectively.


Sequential ray tracing in the commercial software OpticStudio is used to quantitatively evaluate the imaging performance of the system. The PDLs behave like volume holographic optical elements [34], whose interference patterns are recorded on a photosensitive material. The hologram involves two distinct stages: the construction step and the playback step. The construction step records the pattern on the photosensitive material through the interference of two coherent beams (the recording beam and the reference beam). When constructing a reflective holographic lens, the two beams impinge from opposite sides of the photosensitive material. In the playback stage, matching the incident beam with the reference beam renders a diffraction-limited system; here, the imaging wavelength is the same as the construction wavelength. The simulated 2D optical layout is illustrated in Fig. 2(a). The system has an eye relief of 30 mm with a circular field of view (FOV) of 20° and a 4-mm eyebox. Herein, we simulate only one PDL for simplicity. The material of the QWP is set as PMMA. Figure 2(b) shows the optimized spot diagrams with root-mean-square (RMS) radii at different fields. During the optimization, we kept the distances between the surfaces constant and tuned the angle of the image plane with respect to the hologram surface. At the 0° field, the RMS spot is enclosed within the Airy disk, whose radius is 8.488 µm. Nevertheless, the aberrations become more noticeable as the FOV increases. This issue will be discussed later in Sec. 4.
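As a quick sanity check, and assuming the standard diffraction-limited relation r_Airy ≈ 1.22λN (with N the working f-number), the quoted Airy radius implies an effective f-number of roughly 13 at the ∼530-nm design wavelength; the back-computed value below is an illustration, not a figure reported in the simulation.

```python
# Back-of-the-envelope check assuming the standard Airy-radius relation
# r_airy ≈ 1.22 * lambda * N (N = working f-number). Only the 8.488-µm radius
# and the ~530-nm design wavelength come from the text; N is inferred here.

wavelength_um = 0.530
airy_radius_um = 8.488

f_number = airy_radius_um / (1.22 * wavelength_um)
print(f"Implied working f-number ≈ {f_number:.1f}")   # ≈ 13.1
```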


Fig. 2. (a) 2D optical layout of a simplified system with one PDL at 3-m depth. (b) Standard spot diagrams with RMS radii at different fields.


3. Optical component fabrication

A commonly adopted method to record a PDL profile is photo-alignment polarization holography, in which the interference pattern is produced by two circularly polarized (CP) beams [35]. A modified Mach-Zehnder interferometer, shown in Fig. 3(a), serves as the exposure setup. The light source is a Cobolt Twist laser at 457 nm. The filtered and collimated laser beam is split into two orthogonal arms (TE and TM) by the first polarization beam splitter (PBS), and the intensity ratio of the two beams can be adjusted by the half-wave plate (HWP). A template lens (TL) is inserted into optical path 1 to generate the desired phase profile. The TE wave is reflected twice by mirrors and then recombined with the transmitted TM wave after the second PBS. The QWP then converts the TE and TM waves into RCP and LCP light, respectively. The resulting interference pattern is recorded on the sample substrate coated with a photo-alignment layer. The sample preparation started with cleaning the glass substrate using ethanol and organic solvents, followed by a 10-min UV-O3 treatment. Next, a photo-alignment layer was spin-coated onto the clean substrate to create a uniform thin film. The photo-alignment material we used is Brilliant Yellow (BY), dissolved in dimethylformamide (DMF). The substrate was then exposed to the interfering beams. After exposure, we spin-coated the CLC polymer, diluted in toluene at 1:2.5 by weight, onto the patterned surface. We prepared two recipes of CLC polymers; both are composed of RM257 (94.27%), chiral dopant (2.53%), photo-initiator Irgacure 651 (3%), and surfactant Zonyl 8857A (0.2%). The difference is that one is doped with the chiral dopant R5011 (HCCH, China) for fabricating the R-PDL, and the other with the chiral dopant S5011 (HCCH, China) for fabricating the L-PDL. The chiral dopant concentration must be chosen carefully to provide an accurate cholesteric pitch, and hence the desired central wavelength at the incident angle θin. The relation between the central wavelengths for off-axis (λoff) and on-axis (λon) incidence is λoff = λon × cos(θin). Our target central wavelength at θin = 22.5° is around 530 nm, which corresponds to λon ≈ 573 nm. The cholesteric pitch equals λon divided by the average refractive index of RM257 (nave = 1.59 at λon = 573 nm). As a result, the designed cholesteric pitch is 359 nm, from which the chiral dopant concentration can be calculated. Finally, the samples were polymerized under UV light. More detailed fabrication procedures can be found in [36].
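The pitch design described above amounts to two one-line calculations; the sketch below simply reproduces them (numbers are rounded, so the result lands within a nanometer or two of the quoted values).

```python
# Reproduces the pitch design above: lambda_on = lambda_off / cos(theta_in),
# pitch = lambda_on / n_ave. Values are taken from the text; small deviations
# from the quoted 573 nm / 359 nm come from rounding.
import math

theta_in = math.radians(22.5)
lambda_off_nm = 530.0      # target central wavelength at 22.5° incidence
n_ave = 1.59               # average refractive index of RM257 near 573 nm

lambda_on_nm = lambda_off_nm / math.cos(theta_in)
pitch_nm = lambda_on_nm / n_ave
print(f"lambda_on ≈ {lambda_on_nm:.0f} nm, pitch ≈ {pitch_nm:.0f} nm")
```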


Fig. 3. (a) The exposure setup for 2D patterning. M: mirror; TL: template lens; PBS: polarizing beam splitter; S: sample substrate; $\theta $: rotation angle. ① indicates optical path 1. ② indicates optical path 2. (b) Simulated phase profile. The color bar indicates the phase ranging from 0 to $2\pi .$ (c) Working principles of the R-PDL and L-PDL in the recording and imaging steps.


The key aspect of this exposure setup is the position and orientation of the sample substrate. To achieve the desired photo-alignment pattern, the two beams should have equal intensity. The simplest way is therefore to place the sample substrate at twice the focal length of the TL, in which case the recorded phase profile is the same as that of the TL. In general, an on-axis lens pattern is recorded by placing the sample substrate in a plane perpendicular to the beam propagation direction [37]. The corresponding phase profile ($\varphi$) is expressed as:

$$\varphi = \frac{2\pi}{\lambda}\left(\sqrt{r^2 + f^2} - f\right),$$
where $\lambda$ is the wavelength used in imaging, r is the radial distance from the lens center, and f is the focal length of the sample. Such a lens exhibits diffraction-limited characteristics for normally incident light, but aberrations inevitably arise for off-axis rays. Consequently, to obtain a diffraction-limited image at a specific oblique incident angle, we rotate the sample substrate counterclockwise around the x-axis by an angle θ = 22.5°, as shown in Fig. 3(a). In this case, the recorded phase profile becomes:
$$\varphi = \frac{2\pi}{\lambda}\left(\sqrt{x^2 + y^2 + f^2 + 2xf\sin\theta} - f - x\sin\theta\right),$$
where x and y are the coordinates of a point on the lens, and $\theta$ is the rotation angle. Figure 3(b) shows the simulated phase distribution at θ = 22.5°. The patterned rings become increasingly elliptical as the rotation angle grows.
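A minimal numerical sketch of Eq. (2), wrapped modulo 2π as in Fig. 3(b), is given below; the aperture size and focal length are illustrative assumptions rather than the exact design values.

```python
# Evaluates the tilted-recording phase profile of Eq. (2) and wraps it to
# [0, 2*pi), producing a map similar to Fig. 3(b). Aperture and focal length
# are illustrative assumptions.
import numpy as np

wavelength = 530e-9                  # imaging wavelength (m), per Eq. (2)
f = 0.08                             # assumed focal length (m)
theta = np.deg2rad(22.5)             # substrate rotation angle

x = np.linspace(-5e-3, 5e-3, 501)    # assumed 10-mm square aperture
X, Y = np.meshgrid(x, x)
phi = (2 * np.pi / wavelength) * (
    np.sqrt(X**2 + Y**2 + f**2 + 2 * X * f * np.sin(theta)) - f - X * np.sin(theta)
)
phi_wrapped = np.mod(phi, 2 * np.pi)  # fringes appear as elliptical rings
```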

In our experiment, the two PDLs are designed with different phase profiles. Figure 3(c) further illustrates the working principles of the recording and imaging steps for these two PDLs. In the imaging stage, when the incident wave satisfies the Bragg-matching condition, the reconstructed wave is the mirror reflection, about the sample plane, of the extension of the other exposure beam. The diverging reconstructed beams can also be derived from the Gaussian lens equation. In the system configuration, the input beams on both PDLs share the same shape, but the output beams have different profiles. We assume the output beams are collimated in the case of the 3-m depth. To record the R-PDL phase profile, we simply place a target TL (f1 = 8 cm) in optical path 1 to mimic the input beam of the system at the 0° field. The distance between this TL and the sample substrate is 2f1. Optical path 2 mimics the collimated output beam; thus, the R-PDL works for the far depth in the imaging stage. To obtain the L-PDL phase profile, the TL in optical path 1 is retained, and another TL (f2 = 30 cm) is added to optical path 2 to mimic the other output beam. The distance between this TL and the sample substrate is 2f2. When the L-PDL is used in the imaging system, the output beams become diverging waves, whose backward extensions converge to the image plane at the near depth.

4. Results and discussions

4.1. Lens characterization

We fabricated two PDLs responding to RCP and LCP light, respectively. They possess different optical powers but work in the same wavelength range. Figures 4(a) and 4(b) show photographs of the fabricated R-PDL, circled with white dashed lines. The green image observed in Fig. 4(a) is the ceiling fluorescent lamp in our lab. The lens surface is quite clear and uniform. The text shown in Fig. 4(b) is imaged through the lens. Considering its practical use in the proposed system, we measured its normalized transmittance at an oblique incident angle (θin = 22.5°), as plotted in Fig. 4(c). The inset illustrates the measurement setup. The reflection efficiency is about 90% for RCP light. Establishing a sufficient number of Bragg pitches (∼10) contributes to a higher efficiency, but it requires a thicker film, and the spin-coating method may limit the achievable layer thickness. Employing a liquid crystal cell, whose cell gap can be freely controlled, would help improve the efficiency [29]. We also examined the LC alignment quality under a polarizing optical microscope, as shown in Fig. 4(d). The elliptical rings further confirm that the phase profile is consistent with the theoretical analysis.
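As a rough estimate, the ∼10 Bragg pitches mentioned above, at the 359-nm design pitch from Sec. 3, correspond to a film only a few micrometers thick; the arithmetic below is a back-of-the-envelope illustration, not a measured thickness.

```python
# Rough film-thickness estimate: ~10 cholesteric pitches at the 359-nm design
# pitch from Sec. 3. This is an order-of-magnitude illustration only.
pitch_nm = 359
n_pitches = 10
thickness_um = pitch_nm * n_pitches / 1000
print(f"≈ {thickness_um:.1f} µm of CLC needed for ~{n_pitches} pitches")  # ≈ 3.6 µm
```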


Fig. 4. Photos of (a) the fabricated R-PDL and (b) texts imaged through the R-PDL. The lens areas are circled by the white dashed lines. (c) The measured normalized transmission spectrum of the R-PDL at 22.5° with respect to the horizontal line. The inset illustrates the measurement setup. (d) The polarizing optical microscope image.


4.2. Breadboard performance

To experimentally verify the feasibility of the proposed system, we implemented a green dual-plane AR display using the fabricated PDLs. The input images were displayed on a commercial 0.5-inch OLED micro-display with 1024 × 768 resolution. The polarization rotator module consists of a linear polarizer, a switchable TN cell, and a QWP. We filled a liquid crystal (ZLI3285, Merck) into a commercial TN cell (cell gap d ≈ 3.4 µm) and applied an AC voltage to control the on- and off-states. The QWP converts the two orthogonal linear polarizations into two opposite circular polarizations. Two real objects were placed at ∼25 cm and ∼3 m, respectively. The display panel was placed ∼6.7 cm away from the two stacked PDLs. We then switched the polarization rotator and the computer-generated images synchronously to create the two image planes. The camera was positioned ∼3 cm in front of the PDLs to capture the images through them. As shown in Fig. 5(a), when the camera was focused at 25 cm, both the doll and the 'LCD' image look clear, while the far objects and the 'UCF' logo are strongly blurred. On the contrary, when the camera is focused at 3 m, both the objects on the wall and the 'UCF' logo are clearly observed, while the doll and 'LCD' are very blurry, as Fig. 5(b) shows. The sizes of the virtual images appear different owing to the different magnifications at the two depths. The 'UCF' image spans a horizontal FOV of about 10°. The experiment was conducted under yellow ambient ceiling light, which is a combination of green and red components. The green component is reflected by our CLC lenses; as a result, mainly the red component is observed, as Fig. 5 shows. This can be alleviated by reducing the reflection efficiency of the lens. The reflective PDL, based on the CLC principle, only operates within a specific angular and spectral bandwidth. Light outside the bandwidth passes through the lens. Light within the bandwidth coming from the real-world scene is reflected by the PDL and does not enter the observer's eye, while the rest passes through the PDL without any imaging effect. As a result, the real-world scene is not distorted by the PDL.


Fig. 5. Captured images of the dual-plane AR display system when the camera is focused at (a) 25 cm and (b) 3 m, respectively.


Recording patterns in a BY layer under blue laser irradiation has been widely reported. When the PDL is used in an imaging system operating at other wavelengths, the wavelength mismatch introduces additional aberrations. Two methods are available to circumvent this problem. One is to adopt a green laser to record the pattern; it has been verified that the photo-alignment layer (PAAD-72, BEAM Co.) works effectively under green laser exposure [38]. The recording ability of the BY material at green and red wavelengths is still under investigation. The other approach is to design a freeform template lens to pre-compensate for the correlated aberrations [39]. As for the optical aberration caused by off-axis rays, it could be partially corrected digitally by pre-processing the input image content, at the cost of additional computational load and power consumption. Another option is to use a freeform compound lens for optical correction. Furthermore, a full-color display could be implemented by stacking multiple PDL films working at the primary RGB wavelengths or by synthesizing the RGB responses into one PDL cell [29].

4.3. Discussion

We have successfully demonstrated a dual-plane AR display implemented with PDLs that respond to opposite circular polarizations. Although PDLs are limited to two circular polarization states, the number (N) of focal planes can be increased by utilizing active half-wave plates (HWPs) and customized PDLs. Figure 6 depicts the system configuration for multiple focal planes (N = 3). We assume the incident light is RCP, prepared by an LP and a QWP, and that the PDLs respond to LCP light; the PDLs possess different optical powers. When HWP-1 is switched on, the incident RCP light is converted to LCP, then reflected by PDL-1 and imaged at a specific depth. Similarly, if only HWP-2 is switched on, the incident RCP light is reflected only by PDL-2 and imaged at another depth. Correspondingly, virtual images can be projected to the designated Nth focal plane by switching on the Nth HWP. It is noteworthy that the incident angles on the HWPs are determined by the system's FOV; thus, a compensation film is required to accommodate the large range of incident angles in a wide-FOV system. In addition, if the incident light is not totally reflected by the working PDL, the leaked light will continue on to the next PDL, resulting in ghost images. This can be mitigated by turning on the HWP behind the working PDL to convert the LCP light back to RCP. A more critical factor for practicality in AR systems is how much light from the real-world scene enters the system. In our demonstrated configuration, the transmittance of the outside view can be approximated by the following equation:

$$T = \left(50\% + 50\% \times (1 - R)\right)^N,$$


Fig. 6. Illustration of multi-plane system configuration. Each PDL provides a different optical power. Each HWP serves to open a channel to project the virtual images at different depths.


where N is the number of PDLs and R is the reflection efficiency of each PDL. If the system is rendered with 6 planes and each lens has 10% reflection efficiency, then from Eq. (3) the transmittance is 73%. A lens with 10% reflection efficiency could still provide a displayed image with an ambient contrast ratio >3:1 in our proposed AR system, depending on the adopted display panel.
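A minimal sketch of Eq. (3) is given below; it assumes, as the equation does, that half of the unpolarized ambient light matches each handedness and that each PDL reflects a fraction R of the matching component.

```python
# See-through transmittance of Eq. (3): each PDL passes all of one circular
# polarization and a fraction (1 - R) of the other, i.e. (1 - R/2) per lens
# for unpolarized ambient light.

def see_through_transmittance(n_lenses: int, reflection_eff: float) -> float:
    return (0.5 + 0.5 * (1.0 - reflection_eff)) ** n_lenses

print(f"{see_through_transmittance(6, 0.10):.1%}")   # ≈ 73.5%, i.e. the ~73% quoted
```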

5. Conclusion

We have proposed a multi-plane optical see-through AR display with customized reflective polarization-dependent lenses (PDLs). The PDLs are fabricated using patterned CLC polymers and therefore follow the CLC polarization-selectivity rule. In the demonstrated prototype, we designed the PDLs with different optical powers in response to opposite circular polarizations, so two virtual images are projected at different depths using a time-multiplexing technique. To build a more advanced system, we propose using active HWPs to control each channel and enable the corresponding PDL, thereby creating more focal planes at different depths. The fabricated PDL exhibits ∼90% reflection efficiency at the desired wavelength and shows a clear surface with high uniformity. The proposed design is a promising approach toward small form-factor AR displays.

Funding

Goertek Electronics.

Acknowledgments

The authors would like to thank Tao Zhan for helpful discussions.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013). [CrossRef]  

2. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” Journal of Vision 8(3), 33–233 (2008). [CrossRef]  

3. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26(3), 157–159 (2001). [CrossRef]  

4. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. 31(4), 1–11 (2012). [CrossRef]  

5. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]  

6. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017). [CrossRef]  

7. G. Li, D. Lee, Y. Jeong, J. Cho, and B. Lee, “Holographic display for see-through augmented reality using mirror-lens holographic optical element,” Opt. Lett. 41(11), 2486–2489 (2016). [CrossRef]  

8. C. Jang, C. K. Lee, J. Jeong, G. Li, S. Lee, J. Yeom, K. Hong, and B. Lee, “Recent progress in see-through three-dimensional displays using holographic optical elements [Invited],” Appl. Opt. 55(3), A71–A85 (2016). [CrossRef]  

9. J. H. Park and S. B. Kim, “Optical see-through holographic near-eye-display with eyebox steering and depth of field control,” Opt. Express 26(21), 27076–27088 (2018). [CrossRef]  

10. C. Chang, W. Cui, and L. Gao, “Holographic multiplane near-eye display based on amplitude-only wavefront modulation,” Opt. Express 27(21), 30960–30970 (2019). [CrossRef]  

11. J. P. Rolland, M. W. Krueger, and A. Goon, “Multifocal planes head-mounted displays,” Appl. Opt. 39(19), 3209–3215 (2000). [CrossRef]  

12. K. Akeley, S. J. Watt, A. R. Girshick, and M. S. Banks, “A stereo display prototype with multiple focal distances,” ACM Trans. Graph. 23(3), 804–813 (2004). [CrossRef]  

13. X. Hu and H. Hua, “Design and assessment of a depth-fused multi-focal-plane display prototype,” J. Display Technol. 10(4), 308–316 (2014). [CrossRef]  

14. Y. H. Lee, F. Peng, and S. T. Wu, “Fast-response switchable lens for 3D and wearable displays,” Opt. Express 24(2), 1668–1675 (2016). [CrossRef]  

15. S. Liu, Y. Li, P. Zhou, Q. Chen, and Y. Su, “Reverse-mode PSLC multi-plane optical see-through display for AR applications,” Opt. Express 26(3), 3394–3403 (2018). [CrossRef]  

16. Y. H. Lee, G. Tan, K. Yin, T. Zhan, and S. T. Wu, “Compact see-through near-eye display with depth adaption,” J. Soc. Inf. Disp. 26(2), 64–70 (2018). [CrossRef]  

17. G. Tan, T. Zhan, Y. H. Lee, J. Xiong, and S. T. Wu, “Polarization-multiplexed multiplane display,” Opt. Lett. 43(22), 5651–5654 (2018). [CrossRef]  

18. Q. Chen, Z. Peng, Y. Li, S. Liu, P. Zhou, J. Gu, J. Lu, L. Yao, M. Wang, and Y. Su, “Multi-plane augmented reality display based on cholesteric liquid crystal reflective films,” Opt. Express 27(9), 12039–12047 (2019). [CrossRef]  

19. T. Zhan, J. Zou, M. Lu, E. Chen, and S. T. Wu, “Wavelength-multiplexed multi-focal-plane seethrough near-eye displays,” Opt. Express 27(20), 27507–27513 (2019). [CrossRef]  

20. S. Suyama, M. Date, and H. Takada, “Three-dimensional display system with dual-frequency liquid-crystal varifocal lens,” Jpn. J. Appl. Phys. 39(Part 1, No. 2A), 480–484 (2000). [CrossRef]  

21. M. B. Kumar, D. Kang, J. Jung, H. Park, J. Hahn, M. Choi, J. H. Bae, H. Kim, and J. Park, “Compact vari-focal augmented reality display based on ultrathin, polarization-insensitive, and adaptive liquid crystal lens,” Opt. Lasers Eng. 128, 106006 (2020). [CrossRef]  

22. Y. J. Wang, Y. H. Lin, O. Cakmakci, and V. Reshetnyak, “Varifocal augmented reality adopting electrically tunable uniaxial plane-parallel plates,” Opt. Express 28(15), 23023–23036 (2020). [CrossRef]  

23. K. Langhans, D. Bahr, D. Bezecny, D. Homann, K. Oltmann, K. Oltmann, C. Guill, E. Rieper, and G. Ardey, “FELIX 3D display: an interactive tool for volumetric imaging,” Proc. SPIE 4660, 176–190 (2002). [CrossRef]  

24. G. E. Favalora, J. Napoli, D. M. Hall, R. K. Dorval, M. Giovinco, M. J. Richmond, and W. S. Chun, “100-million-voxel volumetric display,” Proc. SPIE 4712, 300–312 (2002). [CrossRef]  

25. C. K. Lee, S. Moon, S. Lee, D. Yoo, J. Y. Hong, and B. Lee, “Compact three-dimensional head-mounted display system with Savart plate,” Opt. Express 24(17), 19531–19544 (2016). [CrossRef]  

26. J. Kobashi, H. Yoshida, and M. Ozaki, “Planar optics with patterned chiral liquid crystals,” Nat. Photonics 10(6), 389–392 (2016). [CrossRef]  

27. K. Yin, Y. H. Lee, Z. He, and S. T. Wu, “Stretchable, flexible, rollable, and adherable polarization volume grating film,” Opt. Express 27(4), 5814–5823 (2019). [CrossRef]  

28. J. Xiong, G. Tan, T. Zhan, and S. T. Wu, “Breaking the field-of-view limit in augmented reality with a scanning waveguide display,” OSA Continuum 3(10), 2730–2740 (2020). [CrossRef]  

29. Y. Li, T. Zhan, Z. Yang, C. Xu, P. L. LiKamWa, K. Li, and S. T. Wu, “Broadband cholesteric liquid crystal lens for chromatic aberration correction in catadioptric virtual reality optics,” Opt. Express 29(4), 6011–6020 (2021). [CrossRef]  

30. J. Xiong, Y. Li, K. Li, and S. T. Wu, “Aberration-free pupil steerable Maxwellian display for augmented reality with cholesteric liquid crystal holographic lenses,” Opt. Lett. 46(7), 1760–1763 (2021). [CrossRef]  

31. K. Yin, Z. He, K. Li, and S. T. Wu, “Doubling the FOV of AR displays with a liquid crystal polarization-dependent combiner,” Opt. Express 29(8), 11512–11519 (2021). [CrossRef]  

32. R. Zabels, K. Osmanis, M. Narels, U. Gertners, A. Ozols, K. Rūtenbergs, and I. Osmanis, “AR Displays: Next-Generation Technologies to Solve the Vergence–Accommodation Conflict,” Appl. Sci. 9(15), 3147 (2019). [CrossRef]  

33. B. E. A. Saleh and M. C. Teich, Fundamentals of photonics (Wiley-Interscience, 2007).

34. D. H. Close, “Holographic optical elements,” Opt. Eng. 14(5), 408–419 (1975). [CrossRef]  

35. J. Kim, Y. Li, M. N. Miskiewicz, C. Oh, M. W. Kudenov, and M. J. Escuti, “Fabrication of ideal geometric-phase holograms with arbitrary wavefronts,” Optica 2(11), 958–964 (2015). [CrossRef]  

36. Y. Li, T. Zhan, and S. T. Wu, “Flat cholesteric liquid crystal polymeric lens with low f-number,” Opt. Express 28(4), 5875–5882 (2020). [CrossRef]  

37. S. V. Serak, D. E. Roberts, J. Y. Hwang, S. R. Nersisyan, N. V. Tabiryan, T. J. Bunning, D. M. Steeves, and B. R. Kimball, “Diffractive waveplate arrays [Invited],” J. Opt. Soc. Am. B 34(5), B56–B63 (2017). [CrossRef]  

38. L. De Sio, N. Tabiryan, M. McConney, and T. J. Bunning, “Cycloidal diffractive waveplates fabricated using a high-power diode-pumped solid-state laser operating at 532 nm,” J. Opt. Soc. Am. B 36(5), D136–D139 (2019). [CrossRef]  

39. C. Jang, O. Mercier, K. Bang, G. Li, Y. Zhao, and D. Lanman, “Design and fabrication of freeform holographic optical elements,” ACM Trans. Graph. 39(6), 1–15 (2020). [CrossRef]  



