Optica Publishing Group

Extending eyebox with tunable viewpoints for see-through near-eye display

Open Access

Abstract

The Maxwellian display presents always-focused images to the viewer, alleviating the vergence-accommodation conflict (VAC) in near-eye displays (NEDs). However, the limited eyebox of the typical Maxwellian display prevents it from wider applications. We propose a Maxwellian see-through NED based on a multiplexed holographic optical element (HOE) and polarization gratings (PGs) to extend the eyebox by viewpoint multiplication. The multiplexed HOE functions as multiple convex lenses to form multiple viewpoints, which are copied to different locations by PGs. To mitigate the imaging problem that multiple viewpoints or no viewpoints enter the eye pupil, the viewpoints can be tuned by mechanically moving a PG. We implement our method in a proof-of-concept system. The optical experiments confirm that the proposed display system provides always in-focus images within a 12 mm eyebox in the horizontal direction with a 32.7° diagonal field of view (FOV) and a 16.5 mm eye relief (ERF), and its viewpoints are tunable to match the actual eye pupil size. Compared with other techniques to extend the eyebox of Maxwellian displays, the proposed method shows competitive performances of a large eyebox, adaptability to the eye pupil size, and focus cues within a large depth range.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Augmented reality (AR) devices allow users to see both real scenes and virtual scenes in the same viewing area at the same time [1,2]. In the past years, AR technology has attracted the attention of many research institutions, and several commercial products have been launched [3–12]. Generally, AR devices can be categorized into handheld AR devices, car head-up displays (HUDs), projection-type AR devices, and near-eye displays (NEDs) [13]. Due to their compact structure and portability, NEDs hold great promise in social communication, healthcare, education, and entertainment [14–16]. However, currently available NEDs focus virtual images on a fixed focal plane, so the distance from the focal plane to the NED (called the accommodation distance) does not coincide with the vergence distance. This mismatch between accommodation distance and vergence distance is called the vergence-accommodation conflict (VAC), and it causes discomfort and visual fatigue [16–18]. To alleviate this issue, several methods have been proposed to support focus cues by presenting multiple focal planes or a three-dimensional (3D) image, such as multi-focal displays [19,20], varifocal displays [21], light field displays [22,23], and holographic displays [24,25]. These methods alleviate the VAC by presenting images with true focus cues, but they usually require heavy computation and sacrifice resolution [26].

The Maxwellian display, also known as the retina-projection display (RPD), alleviates the VAC in a different way. Instead of providing the correct focus cues of virtual images, the Maxwellian display projects images directly onto the retina by focusing the imaging light rays into the eye pupil [27,28], providing all-in-focus images to the viewer regardless of the focus depth of the viewer's eye [27,29]. However, the eyebox of Maxwellian displays is limited to the eye pupil diameter because the focal spot of the imaging light must lie within the eye pupil. This inherent limitation severely degrades the user experience [28].

In recent years, some techniques have been proposed to overcome this limitation. For instance, Jang et al. [30] proposed a dynamic eyebox created by changing the incident angle of the probe wave within the angular bandwidth. Since the effective incident angle range of a holographic optical element (HOE) is relatively small, the extension of the eyebox is limited. Kim et al. [29] realized a wide eyebox by mechanically moving a HOE combiner, but only a single viewpoint is effective. To support a large eyebox, the moving range of the viewpoint must be relatively large, so the HOE combiner must move fast enough to avoid image residual or lag. Kim et al. [27] used a HOE functioning as multiple concave mirrors to extend the eyebox. However, the spacing between viewpoints is fixed: when the eye pupil diameter changes, multiple images or no images entering the eye pupil degrade the user's experience [31]. Yoo et al. [31] proposed a Maxwellian display combining multiplexed HOEs, a polarization grating (PG), and a polarization-dependent eyepiece lens, which creates two multi-viewpoint groups and supports pupil movement within an extended eyebox. To mitigate the multiple-image and blank-image problems, the two multi-viewpoint groups are selectively activated by a polarization controller. However, two viewpoints are activated at the same time, and the spacing between viewpoints is fixed, so the multiple-image or blank-image problems may still occur as the eye pupil diameter changes. To alleviate this, Yoo et al. [32] also proposed a Maxwellian display that controls the polarization state of the imaging light with a polarization controller and an active retarder sandwiched between two PGs, thereby activating only one of three predefined spot positions. But the spacing between adjacent viewpoints is still fixed.
If the spacing between adjacent viewpoints is larger than the eye pupil diameter, the blank-image problem may occur. Conversely, a spacing smaller than the eye pupil diameter results in a smaller eyebox.

In this paper, a novel method is proposed to tune the positions of viewpoints and change the spacing between adjacent viewpoints by mechanically moving a PG. The proposed Maxwellian NED with tunable viewpoints achieves an extended eyebox and eliminates the multiple-image and blank-image problems. The method is easy to implement and suits users with different eye pupil diameters. Optical experiments demonstrate that the proposed Maxwellian display realizes a 12 mm eyebox in the horizontal direction, with the spacing between adjacent viewpoints variable in the range of 2 to 4 mm, which covers the typical pupil diameter in a bright environment. In the following sections, the operating principle and system configuration are illustrated, and a proof-of-concept system is implemented and tested.

2. Principle and configuration

The proposed method aims to extend the eyebox and mitigate the multiple-image and blank-image problems. Compared with previous work [29–32], the proposed Maxwellian display features two remarkable innovations. First, four viewpoints are generated and simultaneously activated in the horizontal direction to achieve an enlarged eyebox. Second, by mechanically moving a PG, the positions of the viewpoints and the spacing between adjacent viewpoints can be changed according to the size and position of the eye pupil, ensuring that only one viewpoint lies within the eye pupil.

2.1 Principle of viewpoint multiplication

First, a multiplexed HOE is used to create different viewpoints. HOEs have the advantages of high diffraction efficiency, small form factor, and design flexibility [6,8,24]. In addition, multiple gratings with different diffraction characteristics can be recorded in one HOE, which is called the multiplexing function of HOEs. Multiplexed HOEs have been used as transparent image combiners to extend the eyebox of Maxwellian displays [27,30,31], where they usually function as multiple concave mirrors or convex lenses; hence, an imaging light beam is diffracted into multiple convergent light beams forming multiple viewpoints on the eye pupil plane.

Second, PGs expand the eyebox. A PG has very high diffraction efficiencies in the ±1 orders and selectively diffracts the incident light into the +1 or −1 order depending on its polarization state [33,34]. Moreover, PGs are fabricated from an anisotropic medium and are ultra-thin and lightweight. PGs have shown great potential in NEDs [31,34]. Figure 1 shows a typical transmission PG selectively diffracting light beams with different polarization states. As shown in Fig. 1(a), when light with right-handed circular polarization (RCP) hits the PG, it is diffracted in a counterclockwise tilted direction, and its polarization state is flipped from RCP to left-handed circular polarization (LCP). By contrast, incident light with LCP is diffracted in a clockwise tilted direction at the same diffraction angle, and its polarization state is converted to RCP, as shown in Fig. 1(b). Since a linearly polarized (LP) light beam can be decomposed into left- and right-handed circularly polarized components of equal intensity, an incident LP light beam is modulated into two light beams with the same intensity and opposite polarization states, as shown in Fig. 1(c). This phenomenon can be used to expand the eyebox.
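The equal-intensity split in Fig. 1(c) follows directly from decomposing LP light in the circular-polarization basis. A minimal Jones-vector sketch (our illustration with an assumed handedness convention, not the authors' code):

```python
import numpy as np

# Jones vectors in an assumed convention (illustrative only):
rcp = np.array([1, -1j]) / np.sqrt(2)   # right-handed circular polarization
lcp = np.array([1,  1j]) / np.sqrt(2)   # left-handed circular polarization
lp  = np.array([1, 0], dtype=complex)   # horizontal linear polarization

# An ideal PG sends the RCP component into the +1 order and the LCP
# component into the -1 order; project LP light onto each basis vector.
i_plus  = abs(np.vdot(rcp, lp))**2   # power diffracted into the +1 order
i_minus = abs(np.vdot(lcp, lp))**2   # power diffracted into the -1 order

print(i_plus, i_minus)  # each 0.5: LP splits into two equal-intensity beams
```

This is why a single LP beam from HOE_out yields two beams of equal brightness after the PG, doubling the number of viewpoints.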


Fig. 1. Illustration of a PG with first-order diffraction angle θ when the incident light is (a) RCP; (b) LCP; (c) LP. If the incident light is linearly polarized, the number of light beams is doubled.


The operating principle of eyebox expansion is shown in Fig. 2, where PG1 and PG2 are placed in parallel and have exactly the same diffraction characteristics and parameters. Figures 2(a) and 2(b) show the modulation function of the two separated PGs for circularly polarized light with different polarization states. Taking Fig. 2(a) as an example, incident light with RCP is diffracted by PG1 in a counterclockwise tilted direction at an angle θ, and its polarization state is flipped. Because of this polarization change, the light leaving PG1 is diffracted in a clockwise tilted direction at the angle θ when interacting with PG2, returning to its original propagation direction but shifted upward in the vertical direction. Similarly, when the polarization state of the incident light is LCP, the light diffracted by PG2 is shifted downward in the vertical direction. When a viewpoint passes through the two PGs, it is copied to two different locations, realizing the expansion of the eyebox, as shown in Fig. 2(c).


Fig. 2. Illustration of eyebox expansion by PGs. The vertical position changes while the propagation direction remains unchanged when the incident light is (a) RCP; (b) LCP. (c) When the incident light beam is linearly polarized, two parallel light beams leave PG2.


In our paper, a multiplexed HOE and two PGs are employed to generate four convergent light beams corresponding to the four viewpoints mentioned above. Because the PGs perform the eyebox expansion, a dual-multiplexed HOE, which is relatively easy to fabricate, suffices in our scheme. The whole process of viewpoint multiplication is therefore: one LP light beam is diffracted into two convergent light beams by the multiplexed HOE, and these are then copied into four convergent light beams by the PGs. Details are given in Section 2.3.

2.2 Principle of viewpoint movement

For a Maxwellian display with multiple viewpoints, two or more images may be projected onto the retina when the spacing between adjacent viewpoints is smaller than the eye pupil diameter. On the contrary, when the spacing between adjacent viewpoints is larger than the eye pupil diameter, no viewpoint may enter the eye pupil, causing a blank screen [31,35]. Moreover, the eye pupil diameter is sensitive to the brightness of the external environment; generally, it ranges from 2 to 4 mm in a bright environment [36]. Thus, the viewpoints should be tunable to ensure that only one viewpoint enters the eye pupil at any time.

According to Fig. 2(c), the vertical spacing between these two focal spots (d) can be expressed as:

$$d = 2{h_1}\tan (\theta ),$$

where h1 is the spacing between PG1 and PG2 and θ is the diffraction angle of the PGs. Adjusting h1 changes d and thereby moves the viewpoints. Hence, by mechanically adjusting PG2 according to the position and size of the eye pupil, it is ensured that only one viewpoint enters the eye pupil, avoiding the projection of multiple images or no image onto the retina.
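Equation (1) and its inverse are easy to check numerically. The sketch below is our illustration, using the 10° PG diffraction angle adopted later in the paper:

```python
import math

def viewpoint_spacing(h1_mm, theta_deg):
    """Spacing d between the two copied viewpoints, Eq. (1): d = 2*h1*tan(theta)."""
    return 2 * h1_mm * math.tan(math.radians(theta_deg))

def pg_spacing(d_mm, theta_deg):
    """Invert Eq. (1): the PG separation h1 needed for a target spacing d."""
    return d_mm / (2 * math.tan(math.radians(theta_deg)))

# With theta = 10 deg, a 3 mm viewpoint spacing needs h1 of about 8.5 mm.
print(round(pg_spacing(3.0, 10.0), 2))
```

Because tan θ appears linearly, the viewpoint spacing scales proportionally with the PG separation, which is what makes the mechanical tuning straightforward.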

2.3 System specification

The schematic diagram of the proposed Maxwellian display is illustrated in Fig. 3. It consists of a laser scanning projector (LSP) as the image source, a collimating lens used to collimate the laser beam, a linear input coupler (HOE_in) and a multiplexed output coupler (HOE_out) attached to a waveguide, two polarization gratings (PG1 and PG2), and a linear polarizer and a quarter-wave plate (QWP) placed at the output end of the waveguide to filter the light from real scenes.


Fig. 3. Schematic diagram of the proposed configuration.


The LSP, which scans out the imaging light, is located on the focal plane of the collimating lens. The collimated light passes into the waveguide and interacts with HOE_in. The plane wave leaving HOE_in travels to HOE_out through total internal reflection (TIR). Upon impinging on HOE_out, the plane wave is modulated into two convergent LP light beams. Before converging to a point, the two convergent light beams pass through PG1 and PG2, forming four convergent points on the eye pupil plane. When the spacing between PG1 and PG2 is changed, the viewpoints move. In actual use, the viewpoints should be moved according to the position and size of the viewer's eye pupil to ensure that only one viewpoint enters the eye pupil.

As shown in Fig. 4(a), HOE_in is a linear reflection HOE recorded by two plane waves. The probe light enters HOE_in at the same angle as the recording light to satisfy the Bragg diffraction condition. HOE_out is a dual-multiplexed HOE recorded by a plane wave and two convergent waves. The light from HOE_in hits HOE_out over an effective width w, and is then modulated into two light beams converging at points P1 and P2 on the eye pupil plane, respectively. The distance from HOE_out to the eye pupil plane is f, and the spacing between P1 and P2 is l0, as shown in Fig. 4(b). Therefore, the field of view (FOV) of the proposed configuration can be calculated as:

$$FOV = 2\arctan \left( {\frac{w}{{2f}}} \right).$$
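As a quick numeric check of Eq. (2) (our sketch; the values w = 12 mm and f = 25 mm are the prototype parameters reported later in the paper):

```python
import math

def fov_deg(w_mm, f_mm):
    """Horizontal FOV from Eq. (2): FOV = 2*arctan(w / (2*f)), in degrees."""
    return 2 * math.degrees(math.atan(w_mm / (2 * f_mm)))

# A 12 mm wide HOE_out at f = 25 mm gives a ~27 deg theoretical horizontal FOV.
print(round(fov_deg(12.0, 25.0), 1))
```

This is the theoretical value against which the measured 24.5° horizontal FOV is compared in Section 5.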

PG1 is attached to HOE_out to reduce the form factor. The modulation effects of PG1 and PG2 on diffracted light beams 1 and 2 are shown in Figs. 4(c) and 4(d), respectively, where the vertical spacing between PG1 and PG2 is h1 and the green dashed lines represent the incident LP light from HOE_out. If the thicknesses of the PGs are ignored, the eye relief (ERF) of the proposed system can be defined as:

$$ERF = {h_2} = f - {h_1},$$


Fig. 4. Partial optical path diagram of (a) HOE_in; (b) HOE_out; (c) PG1 and PG2 for the light beam converging at P1; (d) PG1 and PG2 for the light beam converging at P2. (e) The optical path diagram for real scenes. QWP: Quarter-wave plate.


where h2 is the distance from PG2 to the eye pupil plane. According to Eq. (1), the horizontal spacing d between the two viewpoints diffracted from the same incident light can be written as:

$$d = 2{h_1}\tan (\theta ).$$

This means the spacing between P11 and P12 is equal to that between P21 and P22. As for the spacing dc between P12 and P21, it can be calculated as:

$${d_c} = {l_0} - d.$$

Since d is related to h1, dc varies with h1. Note that as h1 changes, d and dc change at the same rate but in opposite directions. For example, the spacing between P12 and P21 becomes smaller when the spacing between P11 and P12 increases.
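The opposite trends of d and dc can be tabulated from Eqs. (4) and (5). A small sketch (ours) using the paper's θ = 10° and l0 = 6 mm:

```python
import math

THETA_DEG = 10.0  # PG diffraction angle (the paper's value)
L0_MM = 6.0       # spacing between P1 and P2 on the eye pupil plane

def spacings(h1_mm):
    """Return (d, dc) from Eqs. (4) and (5) for a given PG separation h1."""
    d = 2 * h1_mm * math.tan(math.radians(THETA_DEG))
    return d, L0_MM - d

# As h1 grows, d increases while dc shrinks by exactly the same amount:
for h1 in (5.7, 8.5, 11.3):
    d, dc = spacings(h1)
    print(h1, round(d, 1), round(dc, 1))
```

Note that d + dc = l0 always holds, so the four viewpoints stay within the same overall 12 mm span while their internal spacing is tuned.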

Figure 4(e) illustrates the see-through feature of the proposed configuration. The light from real scenes is filtered by a linear polarizer and a quarter-wave plate (QWP), and its polarization state is changed to LCP or RCP. After passing through the waveguide and HOE_out, the circularly polarized light is modulated by the two PGs in the way shown in Fig. 2(a) or Fig. 2(b). Finally, it enters the human eye together with the imaging light from the LSP.

This paper assumes that the average eye pupil diameter is 3 mm. To make the four viewpoints evenly distributed, d is set to 3 mm, and l0 should be set to 6 mm according to Eq. (5). According to Eqs. (3) and (4), the ERF is determined by h1 and f, and h1 is related to θ and d; these relationships are shown in Fig. 5. If the diffraction angle θ is increased, the spacing h1 decreases, making the system more compact and the ERF longer. A larger f corresponds to a larger ERF, but also results in a smaller FOV because the effective width of HOE_out is limited by the waveguide thickness and the propagation angle of the imaging light in the waveguide [31].


Fig. 5. The spacing between PGs (h1) and the eye relief are determined by the diffraction angle of PGs (θ) under the assumption that the eye pupil diameter is 3 mm.


In this paper, the diffraction angle of PGs is 10°, so h1 is about 8.5 mm according to Eq. (4). The focal length f is 25 mm, corresponding to a 16.5 mm ERF.
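These design values are mutually consistent, as a short numeric check (our sketch) of Eqs. (3) and (4) confirms:

```python
import math

theta = math.radians(10.0)  # PG diffraction angle
d = 3.0                     # target viewpoint spacing in mm (assumed pupil diameter)
f = 25.0                    # focal length of HOE_out in mm

h1 = d / (2 * math.tan(theta))  # PG separation from Eq. (4)
erf = f - h1                    # eye relief from Eq. (3)

print(round(h1, 1), round(erf, 1))  # 8.5 mm and 16.5 mm, matching the text
```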

3. Optical fabrication of HOEs

In the proposed configuration, HOEs are the key components that deflect the imaging light and modulate the light wavefront. In our system, the HOEs are recorded in a self-developed photopolymer with a 13.5 µm thickness and a refractive index modulation of 0.032, attached to a waveguide made of N-BK7 glass with a 3 mm thickness. The manufacturing schematic diagrams of HOE_in and HOE_out are shown in Figs. 6(a) and 6(b), respectively. In Fig. 6(a), the reference wave and the signal wave are plane waves that enter the photopolymer from opposite sides to record a linear reflection HOE. In Fig. 6(b), the two signal waves enter a convex lens from different directions to form two convergent light beams with laterally shifted focal spots on the eye pupil plane. When recording transmission HOEs, the reference wave and the signal wave should enter the recording material from the same side, but the convex lens used to form the convergent light beams blocks the reference wave from outside the waveguide. To avoid this problem, the reference wave hits HOE_in first and then travels to the recording region through TIR.


Fig. 6. Schematic diagram for the manufacture of (a) HOE_in; (b) HOE_out. (c) The samples of HOE_in and HOE_out attached to a waveguide; (d) focal spots on the eye pupil plane.


The recorded HOEs on the waveguide are shown in Fig. 6(c); both HOEs are 12 mm wide and separated by 24 mm. In Fig. 6(d), a translucent scatter screen is placed on the eye pupil plane to test the convergent light beams diffracted by HOE_out. The two convergent light beams are well focused at two spots on the translucent scatter screen with a spacing of 6 mm. Furthermore, the diffraction efficiency of HOE_in is 93%, and the diffraction efficiencies of the two gratings recorded in HOE_out are 47% (P1) and 42% (P2), respectively.

4. Experimental system

The proposed method is experimentally verified by constructing a Maxwellian display prototype on an optical table, as shown in Fig. 7. An LSP (HD301A1-H2) with 1280×720 pixels is used as the image source. The imaging light projected from the LSP is polarized by linear polarizer1 before entering the collimating lens (Thorlabs AC508-075-A). An XY-translation mount is used to clamp the waveguide, move HOE_in to the proper position to be illuminated by the imaging light, and adjust the incident angle for the best display effect. The two PGs (Edmund Optics #12-678) are placed in parallel. To make the system more compact, PG1 and the QWP are attached to opposite sides of the waveguide, and linear polarizer2 is attached closely to the front of the QWP. PG2 is fixed on linear translation stage1, so its position can be changed mechanically.


Fig. 7. Experimental setup of the proposed Maxwellian see-through NED.


A CCD sensor (Thorlabs DCU224C) with a camera lens (Thorlabs MVL4WA, f = 3.5 mm) is used to monitor the human eye and capture images on the eye pupil plane. Linear translation stage2 supports the horizontal movement of the CCD sensor.

5. Experimental results

Figure 8(a) shows the results of the FOV measurement with a target paper placed 30 cm from the eye pupil plane. The projection length in each radial direction is about 6.5 cm, so the measured FOV is about 24.5° in the horizontal direction and 32.7° in the diagonal direction. The measured horizontal FOV is smaller than the theoretical value (2arctan(6/25) = 27°) because material shrinkage changes the optimal diffraction angle of HOE_in [6]. Improving the uniformity of the recording material and pre-compensating the changed angle can alleviate this problem.


Fig. 8. (a) FOV measurement results with a target paper; (b) focal spots on the eye pupil plane; (c) the diffraction order distribution of a PG.


To test the eyebox, a translucent scatter screen is placed on the eye pupil plane, and a ruler is placed horizontally below the four viewpoints, as shown in Fig. 8(b). The eyebox is extended to 12 mm in the horizontal direction by viewpoint replication when the spacing between the PGs is 8.5 mm. The efficiencies of the viewpoints from P11 to P22 are about 21%, 20%, 18%, and 18%, respectively. There are some evenly distributed bright spots between the viewpoints, which are not caused by HOE_out according to Fig. 6(b). When a linearly polarized light ray enters a PG perpendicularly, the diffraction order distribution shown in Fig. 8(c) contains, in addition to the desired ±1 orders, several other orders and sub-orders. These unwanted orders cause the evenly distributed bright spots in Fig. 8(b), and together they account for about 7% of the incident light.

As mentioned in Section 2.2, the eye pupil diameter varies from 2 to 4 mm in a bright environment, so the spacing between adjacent viewpoints also needs to be adjustable between 2 and 4 mm to avoid projecting multiple images or no image onto the retina. Figure 9 and the associated movie show the movement of the viewpoints: the spacing between P11 and P12, as well as that between P21 and P22, increases from 2 to 4 mm, while the spacing between P12 and P21 decreases from 4 to 2 mm. Throughout the movement, the viewpoints remain well focused. When the eye pupil diameter is about 3 mm, the viewpoint distribution shown in Fig. 8(b) is suitable; in this case, the viewpoints need not be moved as the eye pupil moves within the eyebox. For other eye pupil diameters, the positions of the viewpoints should be changed according to the actual position of the eye pupil to ensure that only one viewpoint enters the eye pupil.


Fig. 9. The distribution of viewpoints when the spacing between P11 and P12 is (a) 2 mm; (b) 3 mm; (c) 4 mm (Visualization 1).


Figure 10 shows the virtual images captured by the CCD camera, after removing the translucent scatter screen, when the distribution of viewpoints is in the states shown in Figs. 9(a), 9(b), and 9(c), respectively. To avoid capturing multiple images at the same time, we focus the camera lens to a short distance to obtain a smaller aperture diameter. The virtual images are seen clearly at each viewpoint. However, several problems are observed in the experimental results. First, the virtual images show slight blurring noise, caused by laser speckle and the aberrations (especially spherical aberration and astigmatism) of the collimating lens; additional beam-shaping lenses can enhance the image clarity [31]. Second, some parts of the virtual images are missing because of HOE surface defects produced in the manufacturing process; improving the exposure conditions and processing techniques can alleviate this problem. In addition, a distinct linear artifact appears in the virtual image in the picture (d = 4 mm, P22), which is caused by a scratch on the surface of PG2 that can be seen in the picture (d = 2 mm, P21).


Fig. 10. Virtual images captured at different viewpoints when the distribution of viewpoints is changed. Here d denotes the spacing between P11 and P12 and between P21 and P22 (Visualization 2).


Figure 11 shows images captured at each viewpoint with the camera focused at different distances to demonstrate the always-in-focus property. At each viewpoint, the displayed images remain clear and well focused while the real scenes come into or out of focus according to the focal distance of the camera, demonstrating the presentation of always-focused images as expected. In some pictures (such as the one focused at 1D at P21), faint additional virtual images appear, caused by the other diffraction orders of the PGs shown in Fig. 8. Using PGs with lower diffraction efficiencies in these orders can mitigate this problem.


Fig. 11. Images at different focal distances (Visualization 3) and different viewpoints when the spacing between viewpoints is 3 mm.


The experimental results of Figs. 9, 10, and 11 confirm that the proposed Maxwellian display produces clear, always-focused virtual images superimposed on real scenes and realizes the movement of viewpoints.

6. Discussion

Because of linear polarizer2 in Fig. 7, the optical efficiency of the real scenes is reduced by half. Therefore, in a bright environment, the displayed virtual image is easy to see; conversely, in a dark environment, the brightness of the displayed virtual image must be reduced so that users can clearly see both real and virtual scenes at the same time. In practical use, the two PGs should be kept parallel; otherwise, chromatic aberrations will be introduced into the real scenes and will increase along the radial direction, reducing their image quality.

The FOV of the real scenes is determined by the width of linear polarizer2 and its distance from the eye pupil plane. Note that the edge of linear polarizer2 is also split into two images by the PGs and superimposed on the real scenes, resulting in a smaller FOV of the real scenes. Increasing the horizontal size of linear polarizer2 and the QWP can alleviate this problem. Adding a light-shielding shell between linear polarizer2 and PG2 can avoid the chromatic dispersion shown in Fig. 11, which is caused by light from the real scenes passing through PG2 only.

Although the proposed method moves PG2 mechanically to realize the movement of viewpoints, the required travel of PG2 is relatively small thanks to the increased number of viewpoints. For example, in the conversion from Fig. 9(a) to Fig. 9(c), a viewpoint moves 1 mm while PG2 only needs to move about 5.6 mm. If the diffraction angle of the PGs is increased, the travel can be further reduced and the system becomes more compact.
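The quoted PG2 travel follows from Eq. (4): changing d by Δd requires moving PG2 by Δd/(2 tan θ), while each outer viewpoint moves only Δd/2. A quick check (our sketch):

```python
import math

theta = math.radians(10.0)  # PG diffraction angle

# From Fig. 9(a) (d = 2 mm) to Fig. 9(c) (d = 4 mm):
delta_d = 4.0 - 2.0                           # change in viewpoint spacing (mm)
pg2_travel = delta_d / (2 * math.tan(theta))  # required PG2 movement, from Eq. (4)
viewpoint_shift = delta_d / 2                 # each outer viewpoint moves half of delta_d

print(round(pg2_travel, 2), viewpoint_shift)
```

The computed travel is about 5.67 mm, consistent with the roughly 5.6 mm quoted above; a larger θ shrinks this travel further.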

According to Eq. (2), we can scale up the FOV simply by using a wider imaging beam propagating in the waveguide or by recording HOE_out with a shorter focal length. Limited by the TIR condition, the former requires a thicker waveguide, leading to a bulky form factor, while the latter comes at the cost of a shorter eye relief. In our display, increasing the diffraction angle of the PGs also reduces the spacing between the PGs; under this premise, using an HOE_out with a shorter focal length can increase the FOV while maintaining a suitable eye relief.

7. Conclusion

We have proposed a novel method to extend the eyebox by viewpoint multiplication and to solve the problem of no or multiple images by viewpoint movement. The principles and key parameters of the method are analyzed in detail. The method is flexible and easy to implement. A prototype was built and tested: the displayed virtual images are clear and always focused, with a 32.7° diagonal FOV within a 12 mm horizontal eyebox. The eye relief is 16.5 mm, and the spacing between adjacent viewpoints can vary from 2 to 4 mm, which is sufficient for practical use. The proposed method and configuration are expected to have great application potential in near-eye displays.

Funding

National Natural Science Foundation of China (61975014, 62035003).

Disclosures

The authors declare no conflicts of interest.

References

1. Z. He, X. Sui, G. Jin, and L. Cao, “Progress in virtual reality and augmented reality based on holographic display,” Appl. Opt. 58(5), A74–A81 (2019). [CrossRef]  

2. W. Cui, C. Chang, and L. Gao, “Development of an ultra-compact optical combiner for augmented reality using geometric phase lenses,” Opt. Lett. 45(10), 2808–2811 (2020). [CrossRef]  

3. G. Y. Lee, J. Y. Hong, S. H. Hwang, S. Moon, H. Kang, S. Jeon, H. Kim, J. H. Jeong, and B. Lee, “Metasurface eyepiece for augmented reality,” Nat. Commun. 9(1), 4562 (2018). [CrossRef]  

4. J. S. Lee, Y. K. Kim, and Y. H. Won, “See-through display combined with holographic display and Maxwellian display using switchable holographic optical element based on liquid lens,” Opt. Express 26(15), 19341–19355 (2018). [CrossRef]  

5. Z. Liu, Y. Pang, C. Pan, and Z. Huang, “Design of a uniform-illumination binocular waveguide display with diffraction gratings and freeform optics,” Opt. Express 25(24), 30720–30731 (2017). [CrossRef]  

6. J. Xiao, J. Liu, Z. Lv, X. Shi, and J. Han, “On-axis near-eye display system based on directional scattering holographic waveguide and curved goggle,” Opt. Express 27(2), 1683–1692 (2019). [CrossRef]  

7. P. Zhou, Y. Li, S. Liu, and Y. Su, “Compact design for optical-see-through holographic displays employing holographic optical elements,” Opt. Express 26(18), 22866–22876 (2018). [CrossRef]  

8. C. Yu, Y. Peng, Q. Zhao, H. Li, and X. Liu, “Highly efficient waveguide display with space-variant volume holographic gratings,” Appl. Opt. 56(34), 9390–9397 (2017). [CrossRef]  

9. M. Popovich and S. Sagan, “Application Specific Integrated Lenses for Displays,” SID International Symposium Digest of Technical Papers 31(1), 1060–1063 (2000).

10. https://www.baesystems.com/en/home

11. https://www.microsoft.com/zh-cn/hololens/hardware#

12. M. K. Hedili, M. O. Freeman, and H. Urey, “Microlens array-based high-gain screen design for direct projection head-up displays,” Appl. Opt. 52(6), 1351–1357 (2013). [CrossRef]  

13. Z. Zhang, J. Liu, X. Duan, and Y. Wang, “Enlarging field of view by a two-step method in a near-eye 3D holographic display,” Opt. Express 28(22), 32709–32720 (2020). [CrossRef]  

14. J. Carmigniani, B. Furht, M. Anisetti, P. Ceravolo, E. Damiani, and M. Lvkovic, “Augmented reality technologies, system and applications,” Multimed Tools Appl 51(1), 341–377 (2011). [CrossRef]  

15. I. Rabbi and S. Ullah, “A survey on augmented reality challenges and tracking,” Acta Graph. 24, 29–46 (2013).

16. G. A. Koulieris, K. Akşit, M. Stengel, R. K. Mantiuk, K. Mania, and C. Richardt, “Near-eye display and tracking technologies for virtual and augmented reality,” Computer Graphics Forum 38(2), 493–519 (2019). [CrossRef]  

17. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3), 33 (2008). [CrossRef]  

18. G. Kramida, “Resolving the vergence-accommodation conflict in head-mounted displays,” IEEE Trans. Visual. Comput. Graphics 22(7), 1912–1931 (2016). [CrossRef]  

19. G. Tan, T. Zhan, Y. H. Lee, J. Xiong, and S. T. Wu, “Polarization-multiplexed multi-plane display,” Opt. Lett. 43(22), 5651–5654 (2018). [CrossRef]  

20. X. Hu and H. Hua, “Design and tolerance of a free-form optical system for an optical see-through multi-focal-plane display,” Appl. Opt. 54(33), 9990–9999 (2015). [CrossRef]  

21. K. Akşit, W. Lopes, J. Kim, P. Shirley, and D. Luebke, “Near-eye varifocal augmented reality display using see-through screens,” ACM Trans. Graph. 36(6), 1–13 (2017). [CrossRef]  

22. Y. Takaki and Y. Yamaguchi, “Flat-panel see-through three-dimensional display based on integral imaging,” Opt. Lett. 40(8), 1873–1876 (2015). [CrossRef]  

23. A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33(4), 1–11 (2014). [CrossRef]  

24. H. J. Yeom, H. J. Kim, S. B. Kim, H. Zhang, B. Li, Y. M. Ji, S. H. Kim, and J. H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015). [CrossRef]  

25. A. Mainmone, A. Georgiou, and J. Kollin, “Holographic Near-Eye Displays for Virtual and Augmented Reality,” ACM Trans. Graph. 36(4), 1 (2017). [CrossRef]  

26. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three dimensional display technologies of recent interest: principles, status, and issues,” Appl. Opt. 50(34), H87 (2011). [CrossRef]  

27. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 1–13 (2017). [CrossRef]  

28. T. Lin, T. Zhan, J. Zou, F. Fan, and S. Wu, “Maxwellian near-eye display with an expanded eyebox,” Opt. Express 28(26), 38616–38625 (2020). [CrossRef]  

29. S. B. Kim and J. H. Park, “Optical see-through Maxwellian near-to-eye display with an enlarged eyebox,” Opt. Lett. 43(4), 767–770 (2018). [CrossRef]  

30. J. Kim, Y. Jeong, M. Stengel, K. Akşit, R. Albert, B. Boudaoud, T. Greer, W. Lopes, Z. Majercik, P. Shirley, J. Spjut, M. McGuire, and D. Luebke, “Foveated AR: dynamically-foveated augmented reality display,” ACM Trans. Graph. 38(4), 1–15 (2019). [CrossRef]  

31. C. Yoo, M. Chae, S. Moon, and B. Lee, “Retinal projection type lightguide-based near-eye display with switchable viewpoints,” Opt. Express 28(3), 3116–3135 (2020). [CrossRef]  

32. C. Yoo, J. Jeong, and B. Lee, “Retinal Projection-based Near-eye Display with Polarization Grating for an Extended Eyebox,” Imaging and Applied Optics Congress, OSA Technical Digest (Optical Society of America, 2020), paper JTh2A.33.

33. S. R. Nersisyan, N. V. Tabiryan, D. M. Steeves, and B. R. Kimball, “The promise of diffractive waveplates,” Opt. Photonics News 21(3), 40–45 (2010). [CrossRef]  

34. C. Yoo, K. Bang, M. Chae, and B. Lee, “Extended-viewing-angle waveguide near-eye display with a polarization-dependent steering combiner,” Opt. Lett. 45(10), 2870–2873 (2020). [CrossRef]  

35. K. Ratnam, R. Konrad, D. Lanman, and M. Zannoli, “Retinal image quality in near-eye pupil-steered systems,” Opt. Express 27(26), 38289–38311 (2019). [CrossRef]  

36. S. G. de Groot and J. W. Gebhard, “Pupil Size as Determined by Adapting Luminance,” J. Opt. Soc. Am. 42(7), 492–495 (1952). [CrossRef]

Supplementary Material (3)

Visualization 1: The movement of viewpoints (supplement to Fig. 9).
Visualization 2: Images remain clear at different viewpoints (supplement to Fig. 10).
Visualization 3: The proposed system resolving the vergence-accommodation conflict (supplement to Fig. 11).



Figures (11)

Fig. 1.
Fig. 1. Illustration of a PG with the first-order diffraction angle (θ) when the incident light is (a) RCP; (b) LCP; (c) LP. If the incident light is linearly polarized, the number of light beams is doubled.
Fig. 2.
Fig. 2. Illustration of eyebox expansion by PGs. The vertical position of the beam changes while its propagation direction remains unchanged when the incident light is (a) RCP; (b) LCP. (c) When the incident light is LP, two parallel light beams leave PG2.
Fig. 3.
Fig. 3. Schematic diagram of the proposed configuration.
Fig. 4.
Fig. 4. Partial optical path diagram of (a) HOE_in; (b) HOE_out; (c) PG1 and PG2 for the light beam converging at P1; (d) PG1 and PG2 for the light beam converging at P2. (e) The optical path diagram for real scenes. QWP: Quarter-wave plate.
Fig. 5.
Fig. 5. The spacing between PGs (h1) and the eye relief are determined by the diffraction angle of PGs (θ) under the assumption that the eye pupil diameter is 3 mm.
Fig. 6.
Fig. 6. Schematic diagram for the manufacture of (a) HOE_in; (b) HOE_out. (c) The samples of HOE_in and HOE_out attached to a waveguide; (d) focal spots on the eye pupil plane.
Fig. 7.
Fig. 7. Experimental setup of the proposed Maxwellian see-through NED.
Fig. 8.
Fig. 8. (a) FOV measurement result overlaid with a target paper; (b) focal spots on the eye pupil plane; (c) the diffraction order distribution of a PG.
Fig. 9.
Fig. 9. The distribution of viewpoints when the spacing between P11 and P12 is (a) 2 mm; (b) 3 mm; (c) 4 mm (Visualization 1).
Fig. 10.
Fig. 10. Virtual images captured at different viewpoints as the distribution of viewpoints is changed. The letter “d” denotes the spacing between P11 and P12, and between P21 and P22 (Visualization 2).
Fig. 11.
Fig. 11. Images at different focal distances (Visualization 3) and different viewpoints when the spacing between viewpoints is 3 mm.

Equations (5)


$$d = 2 h_1 \tan(\theta),$$
$$\mathrm{FOV} = 2 \arctan\left(\frac{w}{2f}\right).$$
$$\mathrm{ERF} = h_2 = f - h_1,$$
$$d = 2 h_1 \tan(\theta).$$
$$d_c = l_0 d.$$
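The geometric relations above (viewpoint spacing from the PG diffraction angle, FOV from the image width and HOE focal length, and eye relief) can be sanity-checked numerically. A minimal sketch in millimeter units, where h1 (PG spacing), θ (PG first-order diffraction angle), f (HOE focal length), and w (image width) are hypothetical inputs, and the reading ERF = f − h1 of Eq. (3) is an assumption from this extraction:

```python
import math

def viewpoint_spacing(h1_mm: float, theta_deg: float) -> float:
    """Eqs. (1)/(4): lateral viewpoint shift d produced by two PGs
    separated by h1, each diffracting at angle theta."""
    return 2.0 * h1_mm * math.tan(math.radians(theta_deg))

def fov_deg(w_mm: float, f_mm: float) -> float:
    """Eq. (2): full field of view for an image of width w collimated
    by a lens (HOE) of focal length f."""
    return math.degrees(2.0 * math.atan(w_mm / (2.0 * f_mm)))

def eye_relief_mm(f_mm: float, h1_mm: float) -> float:
    """Eq. (3), as read here: ERF = h2 = f - h1 (assumed form)."""
    return f_mm - h1_mm
```

For example, with a hypothetical diffraction angle of 45° and h1 = 3 mm, the viewpoint spacing d comes out to 6 mm; the actual values of θ, h1, f, and w used in the paper's prototype are not restated here.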