
Exploring angular-steering illumination-based eyebox expansion for holographic displays


Abstract

Holography represents an enabling technology for next-generation virtual and augmented reality systems. However, it remains challenging to achieve both a wide field of view and a large eyebox at the same time for holographic near-eye displays, mainly due to the inherent étendue limitation of existing hardware. In this work, we present an approach to expanding the eyebox for holographic displays without compromising their underlying field of view. This is achieved by utilizing a compact 2D steering mirror to deliver angular-steering illumination beams onto the spatial light modulator in alignment with the viewer’s eye movements. To keep the perceived virtual objects unchanged as the viewer’s eye moves, we explore an off-axis computational hologram generation scheme. Two bench-top holographic near-eye display prototypes with the proposed angular-steering scheme are developed; they successfully showcase an expanded eyebox of up to 8 mm × 8 mm for both VR and AR modes, as well as the capability of representing multi-depth holographic images.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In recent years, near-eye display technologies for virtual reality (VR) and augmented reality (AR) have garnered increasing attention [1]. Most commercially available solutions achieve stereopsis through binocular parallax, but they face a major problem known as the vergence-accommodation conflict (VAC), which results in visual fatigue and discomfort for users after prolonged usage [2–4]. A number of solutions have been reported to address the VAC issue, including light field [5,6], multi-focus [7,8], and Maxwellian-view or retinal projection techniques [9,10], while none of them is without limitations. Notably, holographic near-eye displays are a promising solution as they are capable of reconstructing wavefronts of 3D scenes with natural focus cues [11–16]. However, due to the limited space-bandwidth product (SBP) of spatial light modulators (SLMs), holographic near-eye displays still grapple with the trade-off between field of view (FOV) and eyebox size [17–20]. Consequently, simultaneously realizing a wide FOV and a large eyebox remains a significant challenge for holographic near-eye displays.

In the efforts to expand the eyebox for holographic near-eye displays, two primary types of methods have been reported: exit-pupil replication and exit-pupil scanning. The former aims to replicate the exit pupil in one or two dimensions, ensuring that at least one viewpoint falls within the pupil as the eye moves [21,22]. For instance, a lens-array-based holographic optical element (HOE) has been used to replicate the exit pupil, thereby expanding the eyebox to 10 mm $\times$ 10 mm while realizing a large FOV of 60$^{\circ }$ [23]. Exit-pupil scanning methods normally use eye tracking and electrically powered optical tuning to dynamically scan the exit pupil on the pupil plane [24,25]. However, due to the limited angular modulation capability of SLMs, these methods cannot simultaneously achieve a wide FOV and a large eyebox. To overcome this limitation, Jang et al. [26] recently developed a pupil-shifting method for eyebox expansion in holographic near-eye displays. In their method, the laser beam is steered by a MEMS mirror and then reflected by a pupil-shifting holographic optical element (PSHOE) to generate the corresponding point source. After being reflected by the off-axis lens HOE combiner, the shifted exit pupil is created at each time step. However, it is worth noting that the temporally generated point sources are sparsely distributed, which leads to a sparse distribution of exit pupils on the pupil plane. As such, the interval between adjacent exit pupils must be carefully designed to match the pupil size and to avoid black gaps between neighboring exit pupils. Otherwise, such gaps may result in ghost-image or blind-area artifacts when the eye pupil moves.

In this work, we present an exit-pupil scanning method to expand the eyebox without compromising the FOV for holographic near-eye displays. Unlike the work by Jang et al. [26], we utilize a 2D steering mirror that alters the angle of the reference beam illuminating the SLM in alignment with the viewer’s eye movement, allowing us to deliver a fully dynamic, flexible, temporal multiplexing of point sources. Our deployed 2D steering mirror has a minimum steering angle of 1.26$\times 10^{-3}$ degrees. We note that when employing an eyepiece with a focal length of 50 mm, the minimum distance between neighboring exit pupils is approximately 1.1 $\mu$m. Given that the normal size of the human eye pupil is around 2–5 mm, our proposed method avoids double-image or blind-area artifacts, thereby offering a continuous and smooth perception for the viewer. Furthermore, to ensure the reconstructed 3D scene remains stationary for dynamic viewing positions as the viewer’s eye moves, we develop a hologram generation scheme tailored for off-axis illumination. Eventually, we have successfully developed two bench-top prototypes for VR and AR modes, and both prototypes have achieved an expanded eyebox of up to 8 mm$\times$8 mm without sacrificing the FOV.
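As a back-of-envelope check of this exit-pupil granularity (our arithmetic, using only the values quoted above), the minimum exit-pupil shift on the pupil plane follows from the small-angle geometry of the eyepiece:
$$\Delta y_{\min} \approx f^{\prime}\tan\theta_{\min} = 50\,\mathrm{mm}\times\tan\!\left(1.26\times 10^{-3}\,{}^{\circ}\right)\approx 1.1\,\mu\mathrm{m}.$$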

2. Eyebox expansion through angular steering

Our method is mainly comprised of two parts: (1) an optical design to expand eyebox for holographic near-eye displays by deploying a 2D steering mirror in alignment with the movement of the viewer’s eye; (2) a hologram generation scheme tailored for off-axis illumination to ensure the virtual object remains stationary for dynamic viewing positions.

2.1 Optical design for eyebox expansion

Figure 1 shows the principle of how our proposed optical design with an angular steering device achieves eyebox expansion. Technically, an electrically driven angular steering device is used to dynamically change the incident angle of the illumination beam onto the SLM based on the tracked eye pupil location. Assume the SLM is illuminated by an incident beam at a tilt angle $\theta$. When $\theta$ is zero (green lines in Fig. 1), the diffracted light from the hologram propagates for a certain distance and forms a hologram image in the intermediate target plane located between the front focal plane and the eyepiece. The eyepiece magnifies the holographic image on the intermediate target plane for the viewer. When the eye pupil moves by $\Delta y$, the angular steering device steers the illumination beam to a tilt angle $\theta$ (orange lines in Fig. 1). After being converged by the eyepiece, the hologram image enters the correspondingly shifted eye pupil. Here, the hologram image in the intermediate target plane also shifts by a distance $\Delta$ in the same direction as the pupil shift, which is consistent with the viewing characteristics of the human eye. The FOV depends on both the SLM’s size and the eyepiece’s numerical aperture. In addition to expanding the eyebox, we can also steer the observed virtual images to different depths by dynamically adjusting the distance between the intermediate target plane and the eyepiece. The analysis of FOV and eyebox is detailed in Supplement 1.


Fig. 1. Illustration of FOV offset for different viewpoints. The shift of the pupil position when the illumination light angle changes from 0 to $\theta$ results in a corresponding offset of the FOV from the green lines to the orange lines. Herein, the cube is a reference object for the target (bunny) to be imaged in the real environment through the eyepiece. When the propagation distance of the hologram is set to $d$, the object offset in the intermediate target plane is $\Delta$ and the corresponding steering angle of illumination is $\theta$.


Notably, when $\theta$ changes, the position of the focal point formed by the reconstructed image shifts accordingly, and the corresponding relationship is expressed in Eq. (1):

$$\theta = \arctan \frac{\Delta y}{f^{\prime} } .$$
where $\Delta y$ represents the displacement of the pupil and $f^\prime$ represents the back focal length of the eyepiece. For a given eye-pupil location, we obtain the tilt angle $\theta$ and then use the angular steering device, a 2D steering mirror, to steer the illumination onto the SLM at angle $\theta$. For a hologram propagation distance $d$, the corresponding offset $\Delta$ of the hologram image in the intermediate target plane is given by Eq. (2):
$$\Delta = d \tan \theta .$$
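As a minimal illustration of this mapping (the function names and the example values of $f^{\prime}$ and $d$ below are ours, chosen for demonstration only), Eqs. (1) and (2) can be evaluated as follows:

```python
import numpy as np

def steering_angle_deg(delta_y_mm: float, f_eyepiece_mm: float = 50.0) -> float:
    """Eq. (1): tilt angle of the illumination beam for a pupil offset delta_y."""
    return np.degrees(np.arctan2(delta_y_mm, f_eyepiece_mm))

def image_offset_mm(theta_deg: float, d_mm: float) -> float:
    """Eq. (2): lateral offset of the hologram image in the intermediate target plane."""
    return d_mm * np.tan(np.radians(theta_deg))

# Example: pupil shifted by 4 mm with a 50 mm eyepiece and a 50 mm propagation distance.
theta = steering_angle_deg(4.0)        # ~4.57 degrees
delta = image_offset_mm(theta, 50.0)   # ~4.0 mm offset in the intermediate target plane
```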

2.2 Off-axis hologram generation

For human perception, it is crucial to ensure the reconstructed virtual object image stays stationary in the world coordinate system, regardless of eye pupil movements. As a result, we seek to incorporate this offset into the hologram rendering for the proposed near-eye display. Essentially, this process is equivalent to enabling the off-axis diffraction propagation between parallel planes. Inspired by recently reported shifted angular spectrum methods (Shifted-ASM) that tackle this issue [27–30], we develop a modified scheme for generating holograms subject to precise viewpoint locations.

Figure 2 shows the coordinate system used for the off-axis propagation. The coordinates of the diffractive image of interest, $(\hat x,\hat y, z)$, are shifted by an angle $\theta$ relative to the coordinates of the target image of interest, $(x, y, z)$. This shift can be expressed as Eq. (3):

$$\begin{bmatrix} \hat x \\ \hat y \end{bmatrix}=\begin{bmatrix} x \\ y \end{bmatrix}+ d\begin{bmatrix} \tan \theta _{x} \\ \tan \theta _{y} \end{bmatrix} ,$$
where $\theta _x$ and $\theta _y$ represent components of $\theta$ in $x$ and $y$ coordinates, respectively, and $d$ represents the distance between the source sampling plane and the origin sampling plane.

 figure: Fig. 2.

Fig. 2. Illustration of coordinate system for deriving the diffraction field by off-axis propagation, where the coordinates of the diffraction image of interest $(\hat x,\hat y, z)$ are shifted by an angle of $\theta$ from the coordinates of the target image of interest $(x, y, z)$. These two images are positioned at the center of the source sampling window and the shifted sampling window, respectively. The origin sampling window is obtained by propagating the source sampling window over a distance $d$ without the tilting angle ($\theta = 0$).

Download Full Size | PDF

Assuming there is no shift ($\theta = 0$), the zero-shift diffracted field $E_O$ can be calculated by multiplying the angular spectrum of the source field $E_s$ by the propagation phase factor $\exp \left (-i2\pi d\sqrt {\frac {1}{\lambda ^{2}}-f_{x}^{2}-f_{y}^{2} } \right )$. Thus, the field $E_O$ can be expressed as Eq. (4):

$${{E_O} \left( \hat x,\hat y,d \right) = {E_O}\left( x,y,d \right) = {\mathscr{F}^{- 1}} \left\{ \mathscr{F} \left[{E_s}\left( x,y,0 \right) \right] \exp \left({-}i2\pi d\sqrt{\frac{1}{\lambda^{2}}-f_{x}^{2}-f_{y}^{2} } \right) \right\}} ,$$
where $\mathscr {F} \left ( \cdot \right )$ and $\mathscr {F}^{- 1} \left ( \cdot \right )$ represent the Fourier transform and inverse Fourier transform operators, respectively.

To accurately align the diffractive image with real objects in the environment as perceived from the user’s viewpoint, it is necessary to account for this offset. Notably, the diffractive image of interest cannot be directly used as the sampling center: deriving the field on the shifted sampling window would cause the reconstructed target image to consistently appear at the center of the FOV, deviating from its actual position in the real environment. It is therefore essential to employ the origin sampling window instead of the shifted one. The field on the origin sampling window, $E_O$, can be expressed as Eq. (5):

$$\begin{aligned} {{E_O}\left( x,y,d \right)} =& {{\mathscr{F}^{- 1}} \left\{ \mathscr{F} \left[{E_s}\left( x,y,0 \right) \right] \right. }\\ &{\left.{\bullet}\exp \left[{-}i2\pi d\left(f_{x}\tan\theta _{x}+ f_{y}\tan\theta _{y} +\sqrt{\frac{1}{\lambda^{2}}-{f_{x}} ^{2}-{f_{y}} ^{2}} \right) \right] \right\}} , \end{aligned}$$

Then, incorporating the modulation of the incident field $m(x, y)$ into the source field, the Expanded-ASM propagation can be expressed as Eq. (6):

$$\begin{aligned} {{E_O}\left( x,y,d \right)} =& {{\mathscr{F}^{- 1}} \left\{ \mathscr{F} \left[m(x,y){E_s}\left( x,y,0 \right) \right] \right. }\\ &{\left.{\bullet}\exp \left[{-}i2\pi d\left(f_{x}\tan\theta _{x}+ f_{y}\tan\theta _{y} +\sqrt{\frac{1}{\lambda^{2}}-{f_{x}} ^{2}-{f_{y}} ^{2}} \right) \right] \right\}} . \end{aligned}$$
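For reference, below is a minimal NumPy sketch of the Expanded-ASM propagation of Eq. (6). It assumes uniform sampling and a plane-wave incident modulation $m(x,y)=1$, and it omits the band-limiting and sampling-window bookkeeping of [27,30]; it is an illustrative sketch rather than the authors’ implementation.

```python
import numpy as np

def expanded_asm(E_s, pitch, wavelength, d, theta_x=0.0, theta_y=0.0):
    """Propagate the source field E_s over distance d onto the origin sampling window,
    with off-axis illumination angles theta_x, theta_y (radians), following Eq. (6)."""
    ny, nx = E_s.shape
    fx = np.fft.fftfreq(nx, d=pitch)      # spatial frequencies along x [1/m]
    fy = np.fft.fftfreq(ny, d=pitch)      # spatial frequencies along y [1/m]
    FX, FY = np.meshgrid(fx, fy)

    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = np.sqrt(np.maximum(arg, 0.0))    # keep propagating components only

    # Transfer function: standard ASM phase plus the tilt terms fx*tan(theta_x) + fy*tan(theta_y).
    H = np.exp(-1j * 2 * np.pi * d *
               (FX * np.tan(theta_x) + FY * np.tan(theta_y) + kz)) * (arg > 0)

    return np.fft.ifft2(np.fft.fft2(E_s) * H)

# Example: 2-degree tilt in x, 5 cm propagation, 8-um pixels, 520 nm illumination.
E_s = np.ones((1080, 1920), dtype=complex)
E_o = expanded_asm(E_s, 8e-6, 520e-9, 0.05, theta_x=np.radians(2.0))
```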

In this work, we replace the conventional angular spectrum diffraction propagation model with the traditional Shifted-ASM and the proposed Expanded-ASM, respectively, and simulate scenarios with off-axis illumination. For a propagation distance of 5 cm, the holograms and the corresponding reconstructed results at different tilt angles are shown in Fig. 3. It is observed that the Shifted-ASM prevents the target image from shifting under various tilt angles. Thus, as the viewpoint location changes, the reconstructed target image consistently remains at the center of the FOV. In contrast, the proposed Expanded-ASM allows the target image to move across the entire background in accordance with the viewpoint movement. Consequently, from different viewpoints, the target image maintains its position in the real environment.


Fig. 3. Reconstruction results using the Shifted-ASM and Expanded-ASM with different tilted angles. (a) $0^{\circ }$; (b) $2^{\circ }$; (c) $4^{\circ }$; (d) $6^{\circ }$.


2.3 Hologram generation pipeline

The computing pipeline of our proposed method for holographic near-eye display with eyebox expansion is illustrated in Fig. 4. For a given eye pupil location detected by the eye tracker, the tilt angle $\theta _1$ of the reference beam incident toward the eye is calculated using Eq. (1). With the oblique angle $\theta _1$, the phase hologram for the SLM is computed using the Expanded-ASM propagation (Eq. (6)) and the stochastic gradient descent (SGD) algorithm [17,31–33]. Based on the magnification of the relay optics module, the steering angle $\theta _2$ of the mirror is obtained. As such, the 2D steering mirror alters the illumination angle onto the SLM while the corresponding phase hologram is loaded for display. Finally, after being modulated by the SLM and magnified by the eyepiece, the wavefront converges onto the corresponding eye pupil and forms a holographic image on the retina of the viewer’s eye. Notably, the above process is repeated to deliver the corresponding holographic image for observation as the eye pupil moves. Compared with existing exit-pupil replication-based methods, our proposed method efficiently expands the eyebox without compromising the FOV through continuous scanning of the exit pupil. In this way, there is no image overlapping or black gaps between adjacent replicated exit pupils.
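The snippet below is a compact sketch of such an SGD-style optimization of the phase-only hologram against the Expanded-ASM propagation, in the spirit of [17,31–33]. The optimizer choice (Adam as the gradient-descent step), loss, learning rate, and iteration count are our illustrative assumptions, not the authors’ settings.

```python
import math
import torch

def expanded_asm_kernel(ny, nx, pitch, wavelength, d, theta_x, theta_y):
    """Precompute the Expanded-ASM transfer function of Eq. (6); angles in radians."""
    fx = torch.fft.fftfreq(nx, d=pitch)
    fy = torch.fft.fftfreq(ny, d=pitch)
    FY, FX = torch.meshgrid(fy, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = torch.sqrt(torch.clamp(arg, min=0.0))
    phase = -2 * math.pi * d * (FX * math.tan(theta_x) + FY * math.tan(theta_y) + kz)
    return torch.exp(1j * phase) * (arg > 0).to(torch.cfloat)

def optimize_hologram(target_amp, pitch, wavelength, d, theta_x, theta_y,
                      iters=500, lr=0.05):
    """Gradient-descent optimization of a phase-only hologram for one viewpoint."""
    ny, nx = target_amp.shape
    H = expanded_asm_kernel(ny, nx, pitch, wavelength, d, theta_x, theta_y)
    phi = torch.zeros(ny, nx, requires_grad=True)           # SLM phase pattern
    opt = torch.optim.Adam([phi], lr=lr)
    for _ in range(iters):
        slm_field = torch.exp(1j * phi)                      # unit-amplitude, phase-only SLM
        recon = torch.fft.ifft2(torch.fft.fft2(slm_field) * H)
        loss = torch.mean((recon.abs() - target_amp) ** 2)   # amplitude error at target plane
        opt.zero_grad()
        loss.backward()
        opt.step()
    return phi.detach()
```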


Fig. 4. Processing pipeline of the proposed eyebox expanded hologram generation based on angular-steering illumination.


To realize a real-time tracking system, the holograms of the target image for distinct viewpoints are pre-calculated, yielding a comprehensive look-up table tailored to the extended eyebox configuration. For a particular eye pupil position, we compute and apply the relevant steering angle to the 2D steering mirror through its control driver. Concurrently, we load the corresponding hologram from the established look-up table onto the SLM. This procedure is repeated whenever the eye pupil position changes, ensuring the cyclical operation of the 2D steering mirror and SLM for continuous real-time tracking.
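A minimal sketch of this look-up-table dispatch is given below; the grid pitch, table layout, and the driver calls set_mirror_angle and show_phase are hypothetical placeholders for the actual mirror and SLM interfaces.

```python
# Offline: fill the table for every viewpoint on a grid covering the 8 mm x 8 mm eyebox,
# e.g. lut[(ix, iy)] = (mirror_angles_deg, phase_hologram), using optimize_hologram() above.
lut = {}

def on_pupil_update(pupil_x_mm, pupil_y_mm, grid_pitch_mm=1.0):
    """Quantize the tracked pupil position to the nearest pre-computed viewpoint,
    steer the mirror accordingly, and load the matching hologram onto the SLM."""
    key = (round(pupil_x_mm / grid_pitch_mm), round(pupil_y_mm / grid_pitch_mm))
    mirror_angles, hologram = lut[key]
    set_mirror_angle(*mirror_angles)   # hypothetical 2D steering-mirror driver call
    show_phase(hologram)               # hypothetical SLM display call
```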

3. Implementation and results

We implement two bench-top holographic near-eye display setups for VR mode and AR mode, respectively, as shown in Fig. 5. Their hardware specifications are detailed in Supplement 1.


Fig. 5. Setup diagrams and prototype photographs of proposed holographic near-eye displays with VR-mode (a) and AR-mode (b). FL, fiber-coupled laser; CL, collimating lens; BS, beam splitter; L1, lens 1; L2, lens 2; CM, concave mirror.


3.1 VR-mode holographic near-eye display

As shown in Fig. 5(a), this VR-mode bench-top near-eye display prototype comprises a fiber-coupled laser, a collimating lens, a 2D steering mirror, a set of 4f relay optics, a beam splitter (BS), an SLM, and an eyepiece. A phase-only SLM (Holoeye PLUTO) with a resolution of 1,920$\times$1,080 and a pixel size of 8 $\mu$m is used. The light source is a fiber-coupled laser with a principal wavelength of 520 nm. The 2D steering mirror (Optotune MR-15-30) is capable of $\pm$25$^{\circ }$ deflection in both horizontal and vertical directions, with a minimum deflection angle of $1.26\times 10^{-3}$ degrees. As the diameter of the 2D steering mirror is 15 mm, slightly smaller than that of the SLM, a 4f relay optics module is placed between the steering mirror and the SLM to enlarge the beam size. In our experimental setup, the focal lengths of Lens 1 and Lens 2 of the 4f relay optics module are 100 mm and 150 mm, respectively. The illumination on the SLM is theoretically enlarged to a diameter of 22.5 mm, so as to cover the effective area of the SLM. The intermediate target image formed by the diffraction is then magnified by the eyepiece and captured by a CCD camera. The CCD camera is mounted on a 2D translation stage that can move horizontally and vertically to simulate eye-pupil movement.

When the location of the eye pupil, i.e., the aperture of the camera lens, is given, the 2D steering mirror adjusts its azimuth to alter the corresponding incident angle of the illumination beam onto the SLM, thereby matching the pupil location. To expand the eyebox, we utilize an eyepiece with a focal length of 50 mm, leading to a maximum tilt angle of the beam of 4.57$^{\circ }$, corresponding to a 2D steering mirror tilt of 6.8$^{\circ }$. At the same time, the hologram corresponding to the eye pupil position is computed and loaded onto the SLM. This approach enables the camera positioned at the eye pupil to capture images specific to that particular viewpoint. In the experiment, a grid background with the text "SHANGHAI UNIVERSITY" is used as the target image. Figures 6(a) to (i) show the experimental results captured at different pupil locations, revealing the successful expansion of the eyebox up to 8 mm $\times$ 8 mm. Notably, each viewpoint offers a diagonal FOV of approximately 20$^{\circ }$, although some distortion can be observed at the edge of the eyebox. As illustrated in Fig. 1, each pupil location results in an offset of the FOV. Despite this varying FOV offset, the virtual object’s location, represented by the text "SHANGHAI UNIVERSITY", remains stationary in the world coordinate system across viewpoints.
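As a consistency check of these numbers (our arithmetic, assuming a 4 mm eyebox half-width, the 50 mm eyepiece, the 4f relay ratio $f_2/f_1 = 1.5$, and an SLM diagonal of about 17.6 mm imaged near the focal plane), the quoted angles and FOV follow from Eq. (1):
$$\theta_{\max} = \arctan\frac{4\,\mathrm{mm}}{50\,\mathrm{mm}} \approx 4.57^{\circ}, \qquad \theta_{\mathrm{mirror}} \approx \frac{f_2}{f_1}\,\theta_{\max} \approx 6.8^{\circ}, \qquad \mathrm{FOV} \approx 2\arctan\frac{17.6\,\mathrm{mm}/2}{50\,\mathrm{mm}} \approx 20^{\circ}.$$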


Fig. 6. Experimental results of eyebox expansion for the proposed VR-mode holographic near-eye display. Images are captured from different viewpoints from (a) to (i). The camera is located at −4 mm to 4 mm in two dimensions.


We have conducted additional tests to examine the reconstructed images at different depths with the VR-mode setup (Fig. 7). The texts "SHANGHAI UNIVERSITY" and "VIRTUAL REALITY NEAR-EYE DISPLAY" are placed at 0.8 D and 10 D, respectively, where D stands for diopter. The camera’s focus is adjusted to the corresponding depth during acquisition. It can be readily observed that when the camera’s focus matches the depth of the displayed text, the text at that depth appears sharp while the text at the other depth appears blurred (Fig. 7). This demonstrates that our proposed holographic near-eye display in VR mode is able to reconstruct 3D images effectively.


Fig. 7. Experimental results of reconstructing 3D images with the VR-mode holographic near-eye display. The camera is focused at 0.8D (a) and 10D (b).


3.2 AR-mode holographic near-eye display

By replacing the eyepiece in Fig. 5(a) with a concave mirror and a BS, a bench-top holographic near-eye display prototype under AR mode is implemented (Fig. 5(b)). After modulation by the SLM, the intermediate target image is magnified by the concave mirror and subsequently reflected toward the eye pupil via the beam splitter (BS3), which serves as the optical combiner of this AR-mode holographic near-eye display. The camera representing the viewer’s eye is located near the focal plane of the concave mirror. The focal length of the concave mirror is 75 mm, leading to a maximum tilt angle of the beam of 3.05$^{\circ }$, corresponding to a 2D steering mirror tilt of 4.6$^{\circ }$. Thus, the FOV for each viewpoint is about 13.4$^{\circ }$ diagonally.
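Analogously to the VR case (again our arithmetic, under the same assumptions about the relay ratio and SLM diagonal), the AR-mode numbers follow from the 75 mm focal length:
$$\theta_{\max} = \arctan\frac{4\,\mathrm{mm}}{75\,\mathrm{mm}} \approx 3.05^{\circ}, \qquad \theta_{\mathrm{mirror}} \approx 1.5\times 3.05^{\circ} \approx 4.6^{\circ}, \qquad \mathrm{FOV} \approx 2\arctan\frac{17.6\,\mathrm{mm}/2}{75\,\mathrm{mm}} \approx 13.4^{\circ}.$$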

To verify the effectiveness of this AR-mode holographic near-eye display, we use a real cube as the reference and a grid background featuring the text "SHANGHAI UNIVERSITY" as the target virtual image. The camera is moved to 9 different locations for acquisition (Fig. 8). It can be observed that our display achieves an eyebox of up to 8 mm$\times$8 mm while successfully keeping the virtual image at the same location with respect to the real cube.


Fig. 8. Experimental results of the eyebox expansion tested on the AR-mode holographic near-eye display. Images are captured from different viewpoints from (a) to (i). The camera is located at −4 mm to 4 mm in two dimensions.


As in the VR mode, to demonstrate the capability of delivering virtual images at different depths, we conducted another experiment with a kitty ornament and a piece of white paper with text located at 300 mm and 1,800 mm from the camera, respectively. The virtual content includes the SHU logo targeted at 300 mm from the camera and additional virtual text targeted at 1,800 mm. Figure 9 shows the captured images when the camera is focused at these two distances, respectively. When the camera is focused on the front kitty ornament, the SHU logo is sharp while the virtual text is blurry. Likewise, when the camera is focused on the rear white paper with text, the virtual text is sharp while the SHU logo looks blurry.


Fig. 9. Experimental results of reconstructing 3D images tested on the AR-mode holographic near-eye display. The camera is focused at 3.3D (a) and 0.56D (b).


Although both implemented displays suffer from distortions, the distortion issue in the AR mode is better mitigated than that in the VR mode, especially for viewpoints situated at the eyebox’s periphery. This is due to the different eyepiece settings for the VR and AR modes. In the VR mode (Fig. 5(a)), we use an achromatic lens with a focal length of 50 mm and a diameter of 40 mm as the eyepiece, while in the AR mode (Fig. 5(b)), we employ a concave mirror with a focal length of 75 mm and a diameter of 50 mm, together with a beam splitter, as the eyepiece. Consequently, the numerical aperture of the eyepiece in the VR mode is greater than that in the AR mode. Accordingly, to achieve the same expanded eyebox of 8 mm $\times$ 8 mm, the VR mode requires a larger tilt angle of the illumination onto the SLM.

It is noted that the eyeball’s typical rotation radius is about 12–14 mm. During object tracking, the eye pupil’s motion velocity usually stays below 30$^{\circ }$/s. In our VR and AR holographic near-eye displays, the allocated eyebox expands to 8 mm$\times$8 mm, with $\pm$6.8$^{\circ }$ (VR) and $\pm$4.6$^{\circ }$ (AR) steering angles across the two dimensions. When transitioning diagonally, the eye movement spans around 44$^{\circ }$, which typically takes over 0.4 s. Notably, our 2D steering mirror scans this range in under 0.4 s in the VR mode and under 0.3 s in the AR mode. Furthermore, utilizing larger angular steering steps with the mirror reduces this time even further.

4. Discussion and conclusion

We have explored a method for eyebox expansion in holographic near-eye displays, employing a continuously steered exit pupil. In our method, a 2D steering mirror is used to generate angularly variable illumination beams for the SLM. In addition, we have developed a tailored hologram generation scheme to retain the precise perceptual position of reconstructed objects from any viewpoint within the eyebox. With the proposed design, we have successfully implemented proof-of-concept VR and AR holographic near-eye display prototypes, both of which are able to expand the eyebox up to 8 mm $\times$ 8 mm without compromising the FOV. Furthermore, we have demonstrated the capability of displaying multi-depth holographic images with our method. We envision that the explored eyebox expansion scheme presents a promising solution for achieving a wide FOV and a large eyebox in holographic displays.

Limitations and future work. Although the proposed method effectively expands the size of the eyebox, there is still room for improvement. In the VR mode, the contrast degrades and background noise appears when displaying virtual content at different distances. For example, in Fig. 7, the target virtual image "SHANGHAI UNIVERSITY" is located at 0.8 D, with a corresponding propagation distance of 28 mm from the SLM, while the virtual content "VIRTUAL REALITY NEAR-EYE DISPLAY" is located at 10 D, with a corresponding propagation distance of 35 mm. However, existing phase-only SLMs cannot fully convert phase modulation into the target amplitude over such short propagation distances, particularly for virtual content with a black background. In future work, we aim to utilize the camera-in-the-loop concept to enhance the uniformity and quality of the hologram image within the expanded eyebox.

Furthermore, the existing hologram generation speed falls short of facilitating real-time tracking. We currently calculate holograms for various viewpoints in advance to create an extended eyebox look-up table, which proves insufficient for exhibiting dynamic scenes. In future work, we intend to explore advances in rapid hologram generation algorithms, potentially leveraging methodologies like deep learning-driven techniques.

Last but not least, the form factor of the implemented VR and AR holographic near-eye displays is still bulky due to the angular-steering illumination module with 4f relay optics. One approach to reducing the form factor would be to optimize a more compact mechanical structure for the relay optics. In addition, one could incorporate specialized optics, such as HOEs, to miniaturize the angular scanning illumination module. Going forward, we plan to explore these two approaches to develop a wearable holographic near-eye display prototype with an eyeglasses-style form factor.

Funding

National Key Research and Development Program of China (2021YFB2802200); National Natural Science Foundation of China (62005154); Scientific and Innovative Action Plan of Shanghai (20ZR1420500); Ministry of Education - Singapore (MOE2019-TIF-0011); Research Grants Council of Hong Kong (ECS 27212822).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. J. Carmigniani, B. Furht, M. Anisetti, P. Ceravolo, E. Damiani, and M. Ivkovic, “Augmented reality technologies, systems and applications,” Multimed. Tools Appl. 51(1), 341–377 (2011). [CrossRef]  

2. C. Chang, K. Bang, G. Wetzstein, B. Lee, and L. Gao, “Toward the next-generation vr/ar optics: a review of holographic near-eye displays from a human-centric perspective,” Optica 7(11), 1563 (2020). [CrossRef]  

3. J. Xiong, E. L. Hsiang, Z. He, T. Zhan, and S. T. Wu, “Augmented reality and virtual reality displays: emerging technologies and future perspectives,” Light: Sci. Appl. 10(1), 216 (2021). [CrossRef]  

4. X. Xia, F. Y. Guan, Y. Cai, and N. M. Thalmann, “Challenges and advancements for ar optical see-through near-eye displays: A review,” Front. Virtual Real. 3, 1 (2022). [CrossRef]  

5. S. Lee, S. Lee, D. Kim, and B. Lee, “Distortion corrected tomographic near-eye displays using light field optimization,” Opt. Express 29(17), 27573 (2021). [CrossRef]  

6. X. Wang and H. Hua, “Depth-enhanced head-mounted light field displays based on integral imaging,” Opt. Lett. 46(5), 985 (2021). [CrossRef]  

7. J.-S. Lee, S.-J. Jeon, Y.-J. Kim, H.-S. Kim, W. J. Choi, and Y.-W. Choi, “Multi-variable focal lens system for extended depth of field in augmented reality display,” Opt. Eng. 61(05), 053102 (2022). [CrossRef]  

8. T. Hamasaki and Y. Itoh, “Varifocal occlusion for optical see-through head-mounted displays using a slide occlusion mask,” IEEE Trans. Visual. Comput. Graphics 25(5), 1961–1969 (2019). [CrossRef]  

9. M.-H. Choi, K.-S. Shin, J. Jang, W. Han, and J.-H. Park, “Waveguide-type maxwellian near-eye display using a pin-mirror holographic optical element array,” Opt. Lett. 47(2), 405 (2022). [CrossRef]  

10. A. Yoshikaie, R. Ogawa, T. Imamura, K. Ohki, K. Seno, Y. Ogawa, K. Abe, M. Takada, Y. Mamishin, A. Yajima, S. Inagaki, M. Ando, and S. Seino, “Full-color binocular retinal scan ar display with pupil tracking system,” in Optical Architectures for Displays and Sensing in AR, VR, and MR IV, vol. 12449 (SPIE, 2023), pp. 243–251.

11. S. Choi, M. Gopakumar, Y. Peng, J. Kim, and G. Wetzstein, “Neural 3d holography: Learning accurate wave propagation models for 3d holographic virtual and augmented reality displays,” ACM Trans. Graph. 40(6), 1–12 (2021). [CrossRef]  

12. J. Kim, M. Gopakumar, S. Choi, Y. Peng, W. Lopes, and G. Wetzstein, “Holographic glasses for virtual reality,” in ACM SIGGRAPH 2022 Conference Proceedings, (ACM, New York, NY, USA, 2022), SIGGRAPH ’22.

13. Z. Zhang, J. Liu, Q. Gao, X. Duan, and X. Shi, “A full-color compact 3d see-through near-eye display system based on complex amplitude modulation,” Opt. Express 27(5), 7023 (2019). [CrossRef]  

14. Y.-W. Zheng, D. Wang, Y.-L. Li, N.-N. Li, and Q.-H. Wang, “Holographic near-eye display system with large viewing area based on liquid crystal axicon,” Opt. Express 30(19), 34106 (2022). [CrossRef]  

15. F. Wang, T. Shimobaba, Y. Zhang, T. Kakue, and T. Ito, “Acceleration of polygon-based computer-generated holograms using look-up tables and reduction of the table size via principal component analysis,” Opt. Express 29(22), 35442 (2021). [CrossRef]  

16. D. Blinder, T. Birnbaum, T. Ito, and T. Shimobaba, “The state-of-the-art in computer generated holography for 3d display,” Light: Adv. Manufact. 3(3), 572–600 (2022). [CrossRef]  

17. Y. Peng, S. Choi, N. Padmanaban, and G. Wetzstein, “Neural holography with camera-in-the-loop training,” ACM Trans. Graph. 39(6), 1–14 (2020). [CrossRef]  

18. D. Lee, K. Bang, S.-W. Nam, B. Lee, D. Kim, and B. Lee, “Expanding energy envelope in holographic display via mutually coherent multi-directional illumination,” Sci. Rep. 12(1), 6649 (2022). [CrossRef]  

19. G. Kuo, L. Waller, R. Ng, and A. Maimone, “High resolution étendue expansion for holographic displays,” ACM Trans. Graph. 39(4), 66 (2020). [CrossRef]  

20. N. Chen, C. Wang, and W. Heidrich, “Compact computational holographic display,” Front. Photonics 3, 1 (2022). [CrossRef]  

21. Z. Wang, X. Zhang, G. Lv, Q. Feng, A. Wang, and H. Ming, “Conjugate wavefront encoding: an efficient eyebox extension approach for holographic maxwellian near-eye display,” Opt. Lett. 46(22), 5623 (2021). [CrossRef]  

22. X. Zhang, Y. Pang, T. Chen, K. Tu, Q. Feng, G. Lv, and Z. Wang, “Holographic super multi-view maxwellian near-eye display with eyebox expansion,” Opt. Lett. 47(10), 2530 (2022). [CrossRef]  

23. X. Xia, Y. Guan, A. State, P. Chakravarthula, T.-J. Cham, and H. Fuchs, “Towards eyeglass-style holographic near-eye displays with statically expanded eyebox,” in 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), (2020), pp. 312–319.

24. Y. Takaki and N. Fujimoto, “Flexible retinal image formation by holographic maxwellian-view display,” Opt. Express 26(18), 22985 (2018). [CrossRef]  

25. Z. Wang, X. Zhang, G. Lv, Q. Feng, H. Ming, and A. Wang, “Hybrid holographic maxwellian near-eye display based on spherical wave and plane wave reconstruction for augmented reality display,” Opt. Express 29(4), 4927 (2021). [CrossRef]  

26. C. Jang, K. Bang, G. Li, and B. Lee, “Holographic near-eye display with expanded eye-box,” ACM Trans. Graph. 37(6), 1–14 (2018). [CrossRef]  

27. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17(22), 19662 (2009). [CrossRef]  

28. K. Matsushima, “Shifted angular spectrum method for off-axis numerical propagation,” Opt. Express 18(17), 18453 (2010). [CrossRef]  

29. H.-h. Son and K. Oh, “Light propagation analysis using a translated plane angular spectrum method with the oblique plane wave incidence,” J. Opt. Soc. Am. A 32(5), 949 (2015). [CrossRef]  

30. W. Zhang, H. Zhang, K. Matsushima, and G. Jin, “Shifted band-extended angular spectrum method for off-axis diffraction calculation,” Opt. Express 29(7), 10089 (2021). [CrossRef]  

31. T. Shimobaba, D. Blinder, T. Birnbaum, I. Hoshi, H. Shiomi, P. Schelkens, and T. Ito, “Deep-learning computational holography: A review,” Front. Photonics 3, 1 (2022). [CrossRef]  

32. P. Chakravarthula, Y. Peng, J. Kollin, H. Fuchs, and F. Heide, “Wirtinger holography for near-eye displays,” ACM Trans. Graph. 38(6), 1–13 (2019). [CrossRef]  

33. X. Shui, H. Zheng, X. Xia, F. Yang, W. Wang, and Y. Yu, “Diffraction model-informed neural network for unsupervised layer-based computer-generated holography,” Opt. Express 30(25), 44814 (2022). [CrossRef]  

Supplementary Material (1)

Supplement 1: Experimental details and analysis




