Optica Publishing Group

Optical see-through holographic near-eye-display with eyebox steering and depth of field control

Open Access

Abstract

We propose an optical see-through holographic near-eye-display that can control the depth of field of individual virtual three-dimensional images and replicate the eyebox with dynamic steering. For optical see-through capability and eyebox duplication, a holographic optical element is used as an optical combiner, where it functions as multiplexed tilted concave mirrors forming multiple copies of the eyebox. For depth of field control and eyebox steering, computer-generated holograms of three-dimensional objects are synthesized with different ranges of the angular spectrum. Optical experiments confirm that the proposed system can simultaneously present always-focused images with a large depth of field and three-dimensional images at different distances with a shallow depth of field, without any time-multiplexing.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

An optical see-through near-eye-display (NED) is a key device for augmented reality (AR) applications and has attracted growing attention recently. One of the issues with conventional NEDs is the vergence-accommodation conflict (VAC) [1–8]. While the perceived distance, or vergence distance, varies according to the binocular disparity of the contents, the focal distance of each eye, or accommodation distance, is fixed at the physical image plane of the display panel. This discrepancy makes it hard to achieve realistic optical matching of virtual images with the real environment in AR applications and is known to cause many side effects, including eye fatigue and perceived resolution reduction [2,3].

Maxwellian NED is one possible approach that alleviates the VAC problem [9–15]. In the Maxwellian configuration, the light from the display panel is focused at a spot in the eye pupil plane and projected onto the eye retina. Since the effective eye pupil for the images is reduced to a spot, the depth of field is enlarged so that the image looks always focused regardless of the actual focal length of the eye lens. The elimination of the focal depth cue partially relieves the VAC problem [13], making the Maxwellian display attractive. One drawback of the Maxwellian display is that the eyebox is limited by the eye pupil size. Although techniques that enlarge the eyebox by steering the focal spot using a dynamic mirror [14] or by creating multiple focal spots using a holographic optical element (HOE) [15] have been reported, they either require mechanical motion [14] or fix the positions of the multiple focal spots [15], which limits their usefulness.

Holographic NED is another approach. It reconstructs the wavefront of 3D objects at arbitrary distances, solving the VAC problem completely. Motivated by this advantage, holographic NEDs have been actively investigated [16–21]. However, the space bandwidth product (SBP) of currently available spatial light modulators (SLMs) is not sufficient, which translates to a limited eyebox in NED configurations.

In this paper, we propose a novel holographic NED. The proposed system uses a HOE functioning as multiplexed concave mirrors that replicate the eyebox, enabling observation of the images over a wider range. The computer generated hologram (CGH) is synthesized with a different angular spectrum range for each object, controlling the depth of field of each displayed 3D object individually. Therefore, it is possible to display a 3D scene that consists of holographic 3D images at arbitrary distances with shallow depth of field, like usual holographic displays, and always-focused images with large depth of field, like usual Maxwellian displays, at the same time without time-multiplexing. The angular spectrum control also enables focal spot steering in the Maxwellian view, making the system robust to eye pupil variation and eyeball rotation. The combination of the digital holographic display technique with the analog hologram, i.e., the HOE functioning as multiplexed tilted concave mirrors, enlarges the eyebox beyond the conventional SBP limit of the SLM.

In the following sections, the configuration of the proposed system is explained, with optical experimental results verifying its feasibility.

2. Proposed method

2.1 System configuration

Figure 1 shows the configuration of the proposed system. A plane wave from a laser source is modulated by a CGH on the SLM, forming intermediate holographic images after the 4-f optics. The light from the intermediate holographic images is then diffracted by a reflection-type HOE to present magnified virtual holographic images at a longer distance to the user's eye. The HOE is recorded such that a tilted plane wave from the SLM is diffracted to form multiple focal spots, functioning as multiplexed tilted concave mirrors. The separation between the HOE focal spots is set to be slightly larger than the eye pupil size. Around each focal spot, an eyebox is formed with a size determined by the angular spectrum range used in the CGH synthesis. The light corresponding to the virtual holographic images is replicated by the HOE toward the multiple focal spots, and thus an eye located at any of the eyeboxes can observe the displayed images.

Fig. 1 Configuration of the proposed system.

Note that the generation of the multiple focal spots by the HOE is not achieved by high order diffractions. The HOE is recorded with a single signal wave that converges at multiple focal spots. Thus the proposed system uses the first order diffraction of the HOE which generates multiple spots.

2.2 Depth of field control of individual holographic image

In the proposed system, the depth of field of an individual holographic image is controlled by the angular spectrum range used in the synthesis of the corresponding CGH. With the maximum angular spectrum range, limited by the pixel pitch of the SLM, the effective eyebox for that image has its maximum size, and thus the depth of field is minimized around the designated distance of the holographic image. With the minimum angular spectrum range, the depth of field is maximized to give always-focused images.

CGH synthesis with angular spectrum range control can be achieved using different types of CGHs [22]. In our implementation, we use an analytic triangular-mesh-based CGH technique for direct control of the angular spectrum in the CGH synthesis. In the analytic triangular-mesh-based CGH, the 3D objects are modeled as a collection of triangle-shaped clear apertures illuminated by a carrier wave [23,24]. Suppose that the global r_{x,y,z} = [x,y,z]^T and local r_{x_l,y_l,z_l} = [x_l,y_l,z_l]^T coordinate systems are defined such that the hologram is at the z = 0 plane and a single triangular aperture is at the z_l = 0 plane with one of its three vertices, i.e., r^o_{x,y,z}, at the local coordinate origin. The angular spectrum of the single triangular aperture G(f_{x,y}) is given by

\[ G(f_{x,y})=\left\{D(f_{x,y})\otimes G_r\!\left(A^{-T}\begin{bmatrix}1&0&0\\0&1&0\end{bmatrix}R\left\{f_{x,y,z}-\begin{bmatrix}0\\0\\1/\lambda\end{bmatrix}\right\}\right)\right\}\frac{e^{j2\pi f_{x_l,y_l,z_l}^{T}c}}{\det(A)}\,\frac{f_{z_l}}{f_z}, \tag{1} \]
where f_{x,y} and f_{x,y,z} are the 2 × 1 and 3 × 1 vectors of the spatial frequency in the global coordinate system. f_{x_l,y_l,z_l} is the 3 × 1 spatial frequency vector in the local coordinate system, which is related to the global spatial frequency f_{x,y,z} by f_{x_l,y_l,z_l} = R f_{x,y,z} using the 3 × 3 rotation matrix R. The two spatial frequency vectors satisfy |f_{x,y,z}| = |f_{x_l,y_l,z_l}| = 1/λ, where λ is the wavelength. c is the 3 × 1 vector given by r_{x_l,y_l,z_l} = R r_{x,y,z} + c. G_r is the analytic formula of the Fourier transform of a binary function g_r which is one inside a reference triangle and zero outside. A is the 2 × 2 matrix relating the triangular aperture in the local plane to the reference triangle by g_l(r_{x_l,y_l}) = g_r(A r_{x_l,y_l}). And ⊗ represents convolution over f_{x,y}.
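
To make the role of A concrete, the sketch below computes A from the local-plane vertices of a triangle and checks the 1/|det A| area scaling that underlies the standard Fourier relation G_l(f) = G_r(A^{-T}f)/|det A|. The reference-triangle vertices (0,0), (1,0), (1,1) and the local triangle are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical local-plane triangle vertices (z_l = 0 plane), first vertex at the origin.
v1 = np.array([0.0, 0.0])
v2 = np.array([2.0, 0.0])
v3 = np.array([1.0, 1.5])

# Assumed reference triangle with vertices (0,0), (1,0), (1,1).
e2 = np.array([1.0, 0.0])
e3 = np.array([1.0, 1.0])

# g_l(r) = g_r(A r) must map v2 -> e2 and v3 -> e3, so A = [e2 e3] [v2 v3]^-1.
A = np.column_stack([e2, e3]) @ np.linalg.inv(np.column_stack([v2, v3]))

# Fourier scaling: G_l(f) = G_r(A^-T f) / |det A|; the triangle areas differ by |det A|.
area_local = 0.5 * abs(np.cross(v2 - v1, v3 - v1))
area_ref = 0.5  # area of the assumed reference triangle
print(A @ v2, A @ v3)                                 # maps onto e2 and e3
print(area_ref / abs(np.linalg.det(A)), area_local)   # the two areas agree
```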

The angular spectrum G(f_{x,y}) of the holographic image can be manipulated by controlling the D(f_{x,y}) term in Eq. (1), which is defined by

\[ D(f_{x,y})=a(f_{x,y})\exp\!\left[\,j2\pi f_{x,y,z}^{T}\,r^{o}_{x,y,z}\right], \tag{2} \]
where a(f_{x,y}) is the angular spectrum of the carrier wave. For a single plane carrier wave where a(f_{x,y}) = δ(f_{x,y} − ν^o_{x,y}), the angular spectrum of the holographic images G(f_{x,y}) is concentrated around the spatial frequency of the plane wave ν^o_{x,y} with the spatial bandwidth of the 3D objects, as shown in the left part of Fig. 2(a). In the eye pupil plane, the narrow angular spectrum of the holographic images translates to focal spots which are replicated by the HOE, as shown in the right part of Fig. 2(a). The width of the focal spot is much smaller than the eye pupil, and thus the virtual holographic images are seen with a large depth of field, being always focused regardless of the actual focal length of the user's eye.

Fig. 2 Angular spectrum range and eye box formation when the carrier wave used in the CGH synthesis is given by (a) a single plane wave, (b) a range of plane waves.

On the contrary, when a range of plane waves is used as the carrier wave such that |a(f_{x,y})| = constant over ν^{min}_{x,y} < f_{x,y} < ν^{max}_{x,y}, the angular spectrum of the holographic images is extended, as shown in the left part of Fig. 2(b), and eye boxes with extended width are formed by the HOE in the eye pupil plane, as shown in the right part of Fig. 2(b). With a sufficiently wide range of plane carrier waves, the eye box width becomes larger than the eye pupil, giving the minimum observable depth of field for the reconstruction.
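
Since a plane-wave component of spatial frequency f_x focuses at x = fλf_x in the focal plane (the relation used in section 2.5), the eye box width produced by a given carrier range follows directly. The numeric carrier range below is illustrative, not taken from the paper:

```python
# Width of the eye box formed by a range of plane carrier waves,
# using the lens Fourier relation x = f * lambda * f_x (see section 2.5).
f_hoe = 0.05         # HOE focal length [m]
wavelength = 532e-9  # [m]

nu_min, nu_max = -4e4, 4e4  # illustrative carrier spatial-frequency range [1/m]
eyebox_width = f_hoe * wavelength * (nu_max - nu_min)
print(f"eyebox width = {eyebox_width * 1e3:.2f} mm")
```

With this illustrative range the eye box is a couple of millimeters wide, i.e. comparable to an eye pupil, which is the regime where the reconstruction shows its minimum depth of field.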

Note that this angular spectrum control can be done for each triangular aperture of the objects or for each 3D object in the entire scene. Therefore, it is possible to reconstruct 3D images with different depth of field for each object, which will be demonstrated in the following experiment section.

2.3 Focal spot steering for always-focused image

In the case of the single plane carrier wave, the displayed image is always focused like in usual Maxwellian displays. However, it is noteworthy that the proposed system can also steer the focal spots without any mechanical movement. The focal spot steering in the proposed system is done simply by synthesizing the CGH with a different single plane carrier wave. Since a single plane wave is diffracted by the HOE to form multiple focal spots whose transverse position offset in the eye pupil plane depends on the spatial frequency ν^o_{x,y} of the plane carrier wave, the multiple focal spots can be steered while maintaining their separation by changing ν^o_{x,y} in the CGH synthesis. Since the focal spot separation is set to be slightly larger than the eye pupil size, it is possible to project a single focal spot into the eye pupil once its position is detected by an eye tracking system. This feature makes the proposed system more robust to eye pupil size variation and eyeball rotation than conventional Maxwellian displays [15].
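
A minimal sketch of this steering behavior, assuming the three multiplexed spots of the experiment and using the focal-plane relation x = fλν of section 2.5 (spot separation treated as a free parameter):

```python
import numpy as np

f_hoe, wavelength = 0.05, 532e-9   # HOE focal length [m], wavelength [m]
separation = 3e-3                  # HOE focal-spot separation [m], set slightly larger than the pupil

def spot_positions(nu_x):
    """x positions of the replicated focal spots for plane-carrier frequency nu_x [1/m]."""
    offsets = np.array([-1, 0, 1]) * separation   # three multiplexed spots
    return f_hoe * wavelength * nu_x + offsets

base = spot_positions(0.0)
steered = spot_positions(2e4)      # change the plane-carrier frequency in the CGH
print(steered - base)              # all spots shift together; separation is preserved
```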

2.4 HOE aberration compensation

In the proposed system, the HOE functions as concave mirrors which magnify the intermediate holographic images, forming their virtual images at large distances. Due to the tilted configuration shown in Fig. 1 and the large magnification ratio, the formed virtual images suffer from large aberration. The proposed system pre-compensates this aberration to form the virtual images clearly.

Figure 3 shows the coordinate systems for the HOE and the SLM. The HOE is recorded such that the plane wave from the SLM, tilted by an angle α with respect to the HOE plane, is diffracted to form a spot at a distance f. For each point in the HOE plane (x_hoe, y_hoe, z_hoe = 0), the free-space grating vector K = [K_{x,hoe}, K_{y,hoe}, K_{z,hoe}]^T is formed such that

\[ \mathbf{K}=\mathbf{k}_r-\mathbf{k}_s=k\begin{bmatrix}\sin\alpha\\0\\\cos\alpha\end{bmatrix}-\frac{k}{\sqrt{x_{hoe}^2+y_{hoe}^2+f^2}}\begin{bmatrix}-x_{hoe}\\-y_{hoe}\\-f\end{bmatrix}, \tag{3} \]
where k = 2π/λ is the wave number in free space [17,25,26]. Note that the actual grating vector inside the HOE medium differs from the free-space grating vector K in Eq. (3) due to the refractive index of the medium. Nevertheless, the following analysis using the free-space grating vector K remains valid for all wave vectors and phase distributions in free space.
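
The grating vector of Eq. (3) can be evaluated numerically. The z-axis sign convention below is an assumption (the extracted equation does not show signs unambiguously), chosen so that the transverse components match Eqs. (5) and (6):

```python
import numpy as np

wavelength = 532e-9
k = 2 * np.pi / wavelength
alpha = np.deg2rad(45)   # tilt of the reference (SLM-side) plane wave
f = 0.05                 # distance of the recorded focal spot [m]

def grating_vector(x, y):
    """Free-space grating vector K = k_r - k_s at HOE point (x, y, 0), per Eq. (3)."""
    k_r = k * np.array([np.sin(alpha), 0.0, np.cos(alpha)])  # reference wave (sign convention assumed)
    r = np.sqrt(x**2 + y**2 + f**2)
    k_s = (k / r) * np.array([-x, -y, -f])                   # signal wave converging to the focal spot
    return k_r - k_s

K0 = grating_vector(0.0, 0.0)
print(K0[0] / k)   # transverse component at the HOE center equals sin(alpha)
```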

Fig. 3 Coordinates system for HOE and SLM.

In the reconstruction, in order to form a virtual image point at (x^o_{hoe}, y^o_{hoe}, z^o_{hoe}) as shown in Fig. 3, the wave vector k_o = [k_{o,x,hoe}, k_{o,y,hoe}, k_{o,z,hoe}]^T of the diffracted light in free space should be given by

\[ \mathbf{k}_o=\frac{k}{\sqrt{(x_{hoe}-x^o_{hoe})^2+(y_{hoe}-y^o_{hoe})^2+(z^o_{hoe})^2}}\begin{bmatrix}x_{hoe}-x^o_{hoe}\\y_{hoe}-y^o_{hoe}\\-z^o_{hoe}\end{bmatrix}, \tag{4} \]
which leads to the corresponding wave vector ki of the incoming light from the SLM
\[ k_{i,x,hoe}=k_{o,x,hoe}+K_{x,hoe}=\frac{k\,(x_{hoe}-x^o_{hoe})}{\sqrt{(x_{hoe}-x^o_{hoe})^2+(y_{hoe}-y^o_{hoe})^2+(z^o_{hoe})^2}}+k\sin\alpha+\frac{k\,x_{hoe}}{\sqrt{x_{hoe}^2+y_{hoe}^2+f^2}}, \tag{5} \]
and
\[ k_{i,y,hoe}=k_{o,y,hoe}+K_{y,hoe}=\frac{k\,(y_{hoe}-y^o_{hoe})}{\sqrt{(x_{hoe}-x^o_{hoe})^2+(y_{hoe}-y^o_{hoe})^2+(z^o_{hoe})^2}}+\frac{k\,y_{hoe}}{\sqrt{x_{hoe}^2+y_{hoe}^2+f^2}}. \tag{6} \]
From Eqs. (5) and (6), we can find the corresponding phase distribution of the incoming light ϕ(xhoe,yhoe) in the HOE plane by
\[ \phi(x_{hoe},y_{hoe})=k\sqrt{(x_{hoe}-x^o_{hoe})^2+(y_{hoe}-y^o_{hoe})^2+(z^o_{hoe})^2}+k\sqrt{x_{hoe}^2+y_{hoe}^2+f^2}+(k\sin\alpha)\,x_{hoe}, \tag{7} \]
where ∂ϕ/∂x_hoe = k_{i,x,hoe} and ∂ϕ/∂y_hoe = k_{i,y,hoe} are used in the derivation. The first two terms on the right-hand side of Eq. (7) are nothing but the phase function of single-point image formation by an on-axis concave mirror of focal length f. Thus the only thing we need to consider to compensate the aberration caused by the tilt is the third term (k sinα)x_hoe, which corresponds to a shift by sinα/λ along the spatial frequency f_{x,hoe} axis in the angular spectrum domain. For general intermediate holographic 3D images, the angular spectrum in the HOE plane with the tilt aberration compensation G_hoe(f_{x,hoe}, f_{y,hoe}) is related to the usual on-axis angular spectrum G_{hoe,o}(f_{x,hoe}, f_{y,hoe}) by G_hoe(f_{x,hoe}, f_{y,hoe}) = G_{hoe,o}(f_{x,hoe} − sinα/λ, f_{y,hoe}).
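
The derivation can be sanity-checked numerically: the finite-difference gradient of the phase in Eq. (7) should reproduce the wave-vector component of Eq. (5). The virtual image point below is illustrative, and sign conventions are assumed where the published notation is ambiguous:

```python
import numpy as np

wavelength = 532e-9
k = 2 * np.pi / wavelength
alpha, f = np.deg2rad(45), 0.05
xo, yo, zo = 2e-3, -1e-3, 0.5      # illustrative virtual image point [m]

def phi(x, y):
    """Phase of the incoming light in the HOE plane, Eq. (7)."""
    return (k * np.sqrt((x - xo)**2 + (y - yo)**2 + zo**2)
            + k * np.sqrt(x**2 + y**2 + f**2)
            + k * np.sin(alpha) * x)

def ki_x(x, y):
    """x wave-vector component of the incoming light, Eq. (5)."""
    ro = np.sqrt((x - xo)**2 + (y - yo)**2 + zo**2)
    rf = np.sqrt(x**2 + y**2 + f**2)
    return k * (x - xo) / ro + k * np.sin(alpha) + k * x / rf

# Numerical check: d(phi)/dx should reproduce ki_x.
x, y, h = 1e-3, 2e-3, 1e-7
dphidx = (phi(x + h, y) - phi(x - h, y)) / (2 * h)
print(abs(dphidx - ki_x(x, y)) / abs(ki_x(x, y)))   # relative error, essentially zero
```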

Once the angular spectrum G_hoe(f_{x,hoe}, f_{y,hoe}) is determined from the desired intermediate holographic images, the corresponding angular spectrum G_slm(f_{x,slm}, f_{y,slm}) in the SLM plane can be obtained from the geometric relation between the two planes. More specifically, the angular spectrum in the SLM plane is given by [22–24]

\[ G_{slm}(f_{x,slm},f_{y,slm})=G_{hoe}(f_{x,hoe},f_{y,hoe})\,\frac{f_{z,hoe}}{f_{z,slm}}\exp\!\left[j2\pi d f_{z,slm}\right]=G_{hoe,o}\!\left(f_{x,hoe}-\frac{\sin\alpha}{\lambda},\,f_{y,hoe}\right)\frac{f_{z,hoe}}{f_{z,slm}}\exp\!\left[j2\pi d f_{z,slm}\right], \tag{8} \]
where
\[ \begin{bmatrix}f_{x,hoe}\\f_{y,hoe}\\f_{z,hoe}\end{bmatrix}=\begin{bmatrix}\cos\alpha&0&\sin\alpha\\0&1&0\\-\sin\alpha&0&\cos\alpha\end{bmatrix}\begin{bmatrix}f_{x,slm}\\f_{y,slm}\\f_{z,slm}\end{bmatrix}, \tag{9} \]
and |[f_{x,slm}, f_{y,slm}, f_{z,slm}]^T| = |[f_{x,hoe}, f_{y,hoe}, f_{z,hoe}]^T| = 1/λ. To summarize, the angular spectrum in the SLM plane G_slm is obtained on a regular SLM spatial frequency grid (f_{x,slm}, f_{y,slm}) using Eq. (8). The G_{hoe,o} in Eq. (8) is evaluated using the usual analytic triangular-mesh-based CGH technique, with consideration of the angular spectrum range explained in sections 2.2 and 2.3, for the desired intermediate holographic images which will be magnified by the HOE. The angular spectrum in the SLM plane G_slm is finally Fourier-transformed to give the final complex amplitude, i.e., the hologram in the SLM plane.
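
A small sketch of the frequency mapping of Eq. (9): since the mapping is a pure rotation, it preserves |f| = 1/λ, which the code verifies for an arbitrary (illustrative) SLM-plane frequency:

```python
import numpy as np

wavelength = 532e-9
alpha = np.deg2rad(45)

def slm_to_hoe(fx_slm, fy_slm):
    """Map SLM-plane spatial frequencies to the HOE plane via the rotation of Eq. (9)."""
    fz_slm = np.sqrt(1 / wavelength**2 - fx_slm**2 - fy_slm**2)  # propagating-wave constraint
    R = np.array([[np.cos(alpha), 0, np.sin(alpha)],
                  [0,             1, 0            ],
                  [-np.sin(alpha), 0, np.cos(alpha)]])
    return R @ np.array([fx_slm, fy_slm, fz_slm])

f_hoe = slm_to_hoe(2e4, -1e4)                # illustrative frequency pair [1/m]
print(np.linalg.norm(f_hoe) * wavelength)    # rotation preserves |f| = 1/lambda
```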

2.5 Eyebox size and focal spot steering range

In the proposed system, the eyebox size and the focal spot steering range are determined by the range of the spatial frequency of the plane carrier wave. Denoting the pixel pitch of the SLM as p, the spatial frequency range supported by the SLM is given by −1/2p < f_{i,slm} < 1/2p where i = x or y. From Eqs. (8) and (9), the spatial frequencies f_{x,hoe,o} = f_{x,hoe} − (sinα)/λ and f_{y,hoe,o} = f_{y,hoe} in the on-axis angular spectrum G_{hoe,o} are related to the spatial frequencies f_{x,slm} and f_{y,slm} in the SLM plane by f_{x,hoe,o} = f_{x,slm}cosα + f_{z,slm}sinα − (sinα)/λ ≈ f_{x,slm}cosα in the x direction, since f_{z,slm} ≈ 1/λ in the paraxial regime, and by f_{y,hoe,o} = f_{y,slm} in the y direction. Therefore, the spatial frequency range available for G_{hoe,o} is −(cosα)/2p < f_{x,hoe,o} < (cosα)/2p and −1/2p < f_{y,hoe,o} < 1/2p in the x and y directions, respectively. A plane wave with spatial frequency (f_{x,hoe,o}, f_{y,hoe,o}) is focused in the eye pupil plane at x = fλf_{x,hoe,o}, y = fλf_{y,hoe,o} with the HOE focal length f. Therefore, the size of a single eyebox, or the steering range of a focal spot in the eye pupil plane, is finally given by

\[ -\frac{f\lambda\cos\alpha}{2p}<x<\frac{f\lambda\cos\alpha}{2p},\qquad -\frac{f\lambda}{2p}<y<\frac{f\lambda}{2p}. \tag{10} \]
Note that Eq. (10) represents the size of a single eyebox or the steering range of a single focal spot. Since the proposed system replicates the eyebox or the focal spot using the HOE, the overall range where the eye can be located is multiplied.
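
Evaluating Eq. (10) with the experimental parameters reported in section 3 reproduces the single-eyebox dimensions quoted there:

```python
import numpy as np

# Single-eyebox size / focal-spot steering range in the eye pupil plane, Eq. (10),
# evaluated with the parameters of the experimental setup in section 3.
p = 6.4e-6           # SLM pixel pitch [m]
f = 0.05             # HOE focal length [m]
alpha = np.deg2rad(45)
wavelength = 532e-9  # [m]

x_half = f * wavelength * np.cos(alpha) / (2 * p)
y_half = f * wavelength / (2 * p)
print(f"|x| < {x_half * 1e3:.2f} mm, |y| < {y_half * 1e3:.2f} mm")  # 1.47 mm and 2.08 mm
```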

3. Experimental results

Figure 4 shows the experimental setup built on an optical table for verification of the proposed method. A reflection-type phase-modulating SLM (HOLOEYE LETO) with p = 6.4 µm pixel pitch and 1920 × 1080 resolution was used, and the light source was a 532 nm green laser (COHERENT Sapphire SF 532). The distance between the center of the HOE and the image of the SLM formed by the 4-f optics, i.e. d in Fig. 3, is 4 cm. The angle between the SLM plane and the HOE plane, i.e. α in Fig. 3, is 45°. The HOE was recorded in a photopolymer to form 3 focal spots at a distance f = 5 cm with 3 mm separation for collimated light from the SLM, using the configuration shown in Fig. 5, which is similar to the one explained in [15]. Three collimated beams with different directions are transformed into converging beams by a lens and illuminate the photopolymer simultaneously, working as the signal wave. The reference wave is another collimated beam which illuminates the photopolymer at 45° from the other side. The recorded HOE has a circular shape with 15 mm diameter. The 15 mm diameter and f = 5 cm focal length of the HOE would allow a 17° × 17° field of view (FoV). In our experimental setup, however, the SLM, which has a 12.3 mm × 6.9 mm size and is slanted by α = 45° in the horizontal direction, limits the vertical FoV to around 8°. The experimentally confirmed FoV is 9.6° × 7.4°.

Fig. 4 Experimental setup.

Fig. 5 HOE recording configuration.

With the parameters of the experimental setup, p = 6.4 µm, f = 5 cm, α = 45°, and λ = 532 nm, Eq. (10) indicates that the size of a single eyebox, or the steering range of a single focal spot in the eye pupil plane, is given by −1.47 mm < x < 1.47 mm and −2.08 mm < y < 2.08 mm, where the range in the x direction is close to the focal spot separation of 3 mm. The vertical range, however, was deliberately reduced to half, i.e. −1.04 mm < y < 1.04 mm, in our experiment by using only the lower half of the angular spectrum in order to separate the desired reconstruction from the DC and conjugate terms, as will be explained later.

For all experiments discussed below, a 3D teapot object consisting of 756 triangular meshes is used. The CGH, i.e. the complex field of the teapot at different distances with different depths of field, is first synthesized; it is then encoded into a phase modulation pattern by numerically interfering it with a plane reference wave; and finally the encoded pattern is loaded onto the SLM. The DC and other unwanted reconstructions caused by the encoding and the pixelated structure of the SLM are blocked by an aperture in the Fourier plane of the 4-f optics.
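
The encoding step can be sketched as follows. This is a generic off-axis variant (a random stand-in complex field and an illustrative reference carrier frequency), since the paper does not detail its exact encoding:

```python
import numpy as np

# Hedged sketch: a complex field is numerically interfered with a tilted plane
# reference, and the normalized fringe pattern drives a phase-only SLM.
ny, nx = 256, 256
pitch = 6.4e-6                     # SLM pixel pitch [m]
x = np.arange(nx) * pitch
y = np.arange(ny) * pitch
X, Y = np.meshgrid(x, y)

rng = np.random.default_rng(0)
U = rng.standard_normal((ny, nx)) + 1j * rng.standard_normal((ny, nx))  # stand-in CGH field
U /= np.abs(U).max()

nu_ref = 1 / (8 * pitch)                 # illustrative reference carrier, well inside the SLM band
R = np.exp(1j * 2 * np.pi * nu_ref * X)  # tilted plane reference wave

fringes = np.abs(U + R)**2               # numerical interference pattern
phase = 2 * np.pi * (fringes - fringes.min()) / (np.ptp(fringes) + 1e-12)  # drive in [0, 2*pi]
print(phase.shape, phase.min(), phase.max())
```

In a setup of this kind, the carrier offset pushes the desired diffraction order away from the DC and conjugate terms so that an aperture in the 4-f Fourier plane can isolate it, which matches the filtering described above.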

First, the aberration compensation explained in section 2.4 was verified. CGHs for a teapot object were synthesized such that the virtual image is formed at a 0.3 D (diopter) distance from a camera located at the eye pupil plane, i.e. z_hoe = −f = −5 cm. Three CGHs were synthesized: one without any aberration compensation, another with only a simple horizontal size reduction by cosα of the teapot object to account for the tilt of the HOE plane, and the other with the proposed aberration compensation explained in section 2.4. As can be seen in Fig. 6, the proposed aberration compensation presents a clear image of the teapot object, demonstrating its validity. All experimental results presented below were obtained with the proposed aberration compensation applied.

Fig. 6 Experimental results of aberration compensation. (a) No compensation, (b) simple horizontal size reduction, (c) compensation by the proposed method.

Next, the depth of field control by adjusting the angular spectrum range in the CGH synthesis was verified. Figure 7 shows the amplitude of the angular spectrum and the corresponding light distribution in the eye pupil plane of the system for the teapot object. The light distribution in the eye pupil plane, or the eyebox, was captured by placing a diffuser in that plane. Note that this diffuser was placed in the eye pupil plane only to visualize the light intensity distribution (or eyebox) and photograph it; in actual operation of the proposed system, the diffuser is removed and an eye or a camera is placed instead. In the synthesis of the CGH, the upper half of the angular spectrum domain was not used, so that the DC and conjugate terms could be blocked by the 4-f optics. In the captured light intensity distribution shown in the lower row of Fig. 7, it can be observed that the eyebox is replicated three times due to the multiple concave mirror function of the HOE, as expected, which enlarges the overall range where the eye can be located. Moreover, as the angular spectrum range of the CGH increases from the left to the right part of Fig. 7, the corresponding light distribution in the eye pupil plane also spreads around the HOE focal spots as illustrated in Fig. 2, which reduces the depth of field of the displayed virtual images. Note that in the lower row of Fig. 7, the top left corner is brighter; we believe this is leakage from the DC term which is not blocked perfectly by the aperture in the 4-f optics.

Fig. 7 Angular spectrum range of the CGH and the corresponding light intensity in the eye pupil plane. Light intensity in the eye pupil plane was captured by placing a diffuser.

Figure 8 shows the observed images for different depths of field. In Fig. 8(a), the CGH was synthesized with a single plane carrier wave, giving the minimum range of the angular spectrum like the leftmost case in Fig. 7. In Fig. 8(a) and the associated movie, it can be observed that the depth of field of the virtual image is maximized as expected, showing always-focused images regardless of the focal length of the camera. In Fig. 8(b), the CGH containing two teapot objects at different distances was synthesized with a range of the angular spectrum. Thus, the depth of field becomes shallow, giving focused or blurred observation according to the focal length of the camera, as shown in Fig. 8(b) and the associated movie.

Fig. 8 Observed images with different focal length of the camera (a) when the CGH is synthesized with narrow angular spectrum range (see Visualization 1) (b) when the CGH is synthesized with wide angular spectrum range (see Visualization 2).

Figure 9 shows the experimental results of the focal spot steering in the case of the always-focused image. As explained in section 2.3, the focal spots can be steered by changing the spatial frequency ν^o_{x,y} of the plane carrier wave in the CGH synthesis. Figure 9(a) and the associated movies were captured by placing a diffuser in the eye pupil plane. Although high-order diffractions due to the nonlinear phase modulation of the SLM are also visible, it can be confirmed that the designated focal spots can be steered in the horizontal and vertical directions without any mechanical motion, as expected. Figure 9(b) shows the observed images at two different camera pupil positions, which represent two examples of the pupil position of a single eye. Figure 9(b) also confirms that the image can be seen when the focal spot falls inside the camera pupil. Therefore, once the eye pupil position is detected by an external eye tracking system, which is not included in our experimental setup, the focal spots can be steered to the eye pupil position so that the image is visible. Note that even though the steering range is limited by the pixel pitch of the SLM through Eq. (10), the focal spots are replicated by the HOE, and thus the system can present images to the user over a wider range of eye pupil positions.

Fig. 9 Focal spot steering in (a) horizontal (see Visualization 3) and vertical (see Visualization 4) direction. (b) Observed images with different focal spot and camera pupil positions.

Finally, Fig. 10 shows the experimental result of the simultaneous display of multiple images with different distances and different depths of field. In Fig. 10, a single CGH containing the complex amplitudes of three teapots was loaded onto the SLM and the reconstruction was observed. In the complex amplitude synthesis, the angular spectrum ranges are adjusted to give a shallow depth of field for two teapots at different distances and a large depth of field for the other teapot. In Fig. 10, it can be observed that the left two teapots with shallow depth of field are focused and blurred according to the camera focal distance, while the right teapot with large depth of field is always clear. Therefore, it is confirmed that the proposed system can present holographic 3D images at arbitrary distances with controllable depth of field for each image, without any need for mechanical motion or time multiplexing.

Fig. 10 Observed images when two images at different distances with shallow depth of field and an image with large depth of field are displayed.

Although the proposed method, including eyebox replication by the HOE, focal spot steering, and depth of field control, has been successfully demonstrated by the experiment, there is still large room for enhancement. The vignetting on the HOE by the relay optics and the non-uniform diffraction efficiency of the HOE could affect the uniformity of the reconstructed images within the FoV. Assessment and enhancement of the reconstructed image quality, including uniformity and resolution, requires optimization of the HOE recording procedure and the display setup, which is a topic of further research. Speckle noise observed in the experimental results is another point for improvement. Speckle reduction techniques [23], which usually use time-multiplexing, can be applied to the proposed system. Since the proposed system does not use time-multiplexing for depth of field control, focal spot steering, or eyebox replication, no extra time-multiplexing burden is added when previous time-multiplexing-based speckle reduction techniques are applied. Our current experimental setup is limited to monochromatic reconstruction; extension to a full-color system would be possible by recording a full-color HOE and using RGB lasers [27,28]. Our experimental setup is built on an optical table and is not compact. However, the configuration is no more complicated than usual holographic NEDs, and thus miniaturization to a compact form factor should be possible, as in previous reports [16,20]. Finally, our setup does not include an eye tracking system, which is required for actual applications of the proposed focal spot steering technique.

4. Conclusion

In this paper, we propose an optical see-through NED system that uses a HOE combiner and a holographic 3D display module. The angular spectrum range of the objects is adjusted in the CGH synthesis to control the depth of field of the reconstructed images and to steer the focal spot in the eye pupil plane. The HOE, functioning as multiplexed concave mirrors, replicates the eyebox initially created by the holographic display module by the number of multiplexed concave mirrors, making the image visible over a wider range of eye pupil positions. The aberration caused by the tilted reflective HOE configuration is analyzed and pre-compensated in the CGH synthesis. The optical experiments confirm that the proposed system can present holographic 3D images with shallow depth of field at arbitrary distances and always-focused images with large depth of field simultaneously, without any time-multiplexing. Duplication of the focal spots and their dynamic steering are also demonstrated experimentally, showing the feasibility of the proposed method.

Funding

Institute for Information and Communications Technology Promotion (IITP), MSIT [2017-0-00417, Openholo library technology development for digital holographic contents and simulation]; Basic Science Research Program, NRF [NRF-2017R1A2B2011084]; ITRC Support Program, IITP, MSIT [2015-0-00448]

References

1. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011). [CrossRef]   [PubMed]  

2. G. Koulieris, B. Bui, M. Banks, and G. Drettakis, “Accommodation and comfort in head-mounted displays,” ACM Trans. Graph. 36(4), 87 (2017). [CrossRef]  

3. T. Shibata, J. Kim, D. M. Hoffman, and M. S. Banks, “The zone of comfort: Predicting visual discomfort with stereo displays,” J. Vis. 11(8), 11 (2011). [CrossRef]   [PubMed]  

4. A. Maimone, G. Wetzstein, M. Hirsch, D. Lanman, R. Raskar, and H. Fuchs, “Focus 3D: compressive accommodation display,” ACM Trans. Graph. 32(5), 153 (2013). [CrossRef]  

5. F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2015). [CrossRef]  

6. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017). [CrossRef]   [PubMed]  

7. D. Kim, S. Lee, S. Moon, J. Cho, Y. Jo, and B. Lee, “Hybrid multi-layer displays providing accommodation cues,” Opt. Express 26(13), 17170–17184 (2018). [CrossRef]   [PubMed]  

8. S. Lee, J. Cho, B. Lee, Y. Jo, C. Jang, D. Kim, and B. Lee, “Foveated retinal optimization for see-through near-eye multi-layer displays,” IEEE Access 6(1), 2170–2180 (2018).

9. G. Westheimer, “The Maxwellian view,” Vision Res. 6(12), 669–682 (1966).

10. M. Sugawara, M. Suzuki, and N. Miyauchi, “Retinal imaging laser eyewear with focus-free and augmented reality,” in SID Symposium Digest of Technical Papers (2016), pp. 164–167.

11. N. Fujimoto and Y. Takaki, “Holographic Maxwellian-view display system,” in Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2017), paper Th3A.4.

12. T. Ando, K. Yamasaki, M. Okamoto, T. Matsumoto, and E. Shimizu, “Retinal projection display using holographic optical element,” Proc. SPIE 3956, 211–216 (2000).

13. R. Konrad, N. Padmanaban, K. Molner, E. A. Cooper, and G. Wetzstein, “Accommodation-invariant computational near-eye displays,” ACM Trans. Graph. 36(4), 88 (2017).

14. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 190 (2017).

15. S.-B. Kim and J.-H. Park, “Optical see-through Maxwellian near-to-eye display with an enlarged eyebox,” Opt. Lett. 43(4), 767–770 (2018).

16. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014).

17. H.-J. Yeom, H.-J. Kim, S.-B. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim, and J.-H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015).

18. G. Li, D. Lee, Y. Jeong, J. Cho, and B. Lee, “Holographic display for see-through augmented reality using mirror-lens holographic optical element,” Opt. Lett. 41(11), 2486–2489 (2016).

19. Y. Sakamoto, “Head-mounted holographic display with correct depth-focusing for AR,” in Frontiers in Optics (Optical Society of America, 2017), paper FTu4C.2.

20. A. Maimone, A. Georgiou, and J. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 85 (2017).

21. Q. Gao, J. Liu, T. Zhao, X. Duan, H. Ma, J. Duan, and P. Liu, “See-through and true 3D head-mounted display based on complex amplitude modulation,” in Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2017), paper M2B.3.

22. J.-H. Park, “Recent progresses in computer generated holography for three-dimensional scene,” J. Inf. Disp. 18(1), 1–12 (2017).

23. S.-B. Ko and J.-H. Park, “Speckle reduction using angular spectrum interleaving for triangular mesh based computer generated hologram,” Opt. Express 25(24), 29788–29797 (2017).

24. M. Askari, S.-B. Kim, K.-S. Shin, S.-B. Ko, S.-H. Kim, D.-Y. Park, Y.-G. Ju, and J.-H. Park, “Occlusion handling using angular spectrum convolution in fully analytical mesh based computer generated hologram,” Opt. Express 25(21), 25867–25878 (2017).

25. M.-L. Hsieh and K. Y. Hsu, “Grating detuning effect on holographic memory in photopolymers,” Opt. Eng. 40(10), 2125–2133 (2001).

26. S. Lee, B. Lee, J. Cho, C. Jang, J. Kim, and B. Lee, “Analysis and implementation of hologram lenses for see-through head-mounted display,” IEEE Photonics Technol. Lett. 29(1), 82–85 (2017).

27. S. I. Kim, C.-S. Choi, A. Morozov, S. Dubynin, G. Dubinin, J. An, S.-H. Lee, Y. Kim, K. Won, H. Song, H.-S. Lee, and S. Hwang, “Slim coherent backlight unit for holographic display using full color holographic optical elements,” Opt. Express 25(22), 26781–26791 (2017).

28. R. Häussler, Y. Gritsai, E. Zschau, R. Missbach, H. Sahm, M. Stock, and H. Stolle, “Large real-time holographic 3D displays: enabling components and results,” Appl. Opt. 56(13), F45–F52 (2017).

Supplementary Material (4)

Visualization 1: Always-focused image with large depth of field presented by the holographic near-eye display. Camera focus varies from 0.35 diopter to 3 diopter.
Visualization 2: Three-dimensional images with shallow depth of field presented by the holographic near-eye display. The top and bottom teapots are located at 0.35 diopter and 3 diopter, respectively. Camera focus varies from 0.35 diopter to 3 diopter.
Visualization 3: Horizontal steering demonstration of the focal spots in the eye pupil plane of the holographic near-eye display.
Visualization 4: Vertical steering demonstration of the focal spots in the eye pupil plane of the holographic near-eye display.

Figures (10)

Fig. 1 Configuration of the proposed system.
Fig. 2 Angular spectrum range and eyebox formation when the carrier wave used in the CGH synthesis is given by (a) a single plane wave and (b) a range of plane waves.
Fig. 3 Coordinate systems of the HOE and the SLM.
Fig. 4 Experimental setup.
Fig. 5 HOE recording configuration.
Fig. 6 Experimental results of aberration compensation: (a) no compensation, (b) simple horizontal size reduction, (c) compensation by the proposed method.
Fig. 7 Angular spectrum range of the CGH and the corresponding light intensity in the eye pupil plane, captured by placing a diffuser in that plane.
Fig. 8 Observed images with different focal lengths of the camera when the CGH is synthesized with (a) a narrow angular spectrum range (see Visualization 1) and (b) a wide angular spectrum range (see Visualization 2).
Fig. 9 (a) Focal spot steering in the horizontal (see Visualization 3) and vertical (see Visualization 4) directions. (b) Observed images with different focal spot and camera pupil positions.
Fig. 10 Observed images when two images at different distances with shallow depth of field and an image with large depth of field are displayed.

Equations (10)


\[
G(f_{x,y}) \propto \left\{ D(f_{x,y})\, G_r\!\left( A^{-T} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} R\left\{ f_{x,y,z} - \begin{bmatrix} 0 \\ 0 \\ 1/\lambda \end{bmatrix} \right\} \right) \right\} e^{\,j2\pi f_{x_l,y_l,z_l}^{T} c}\, \frac{f_{z_l}}{\det(A)\, f_z},
\]
\[
D(f_{x,y}) = a(f_{x,y}) \exp\!\left[\, j2\pi f_{x,y,z}^{T}\, r_{x,y,z}^{\,o} \right],
\]
\[
\mathbf{K} = \mathbf{k}_r - \mathbf{k}_s = k \begin{bmatrix} \sin\alpha \\ 0 \\ \cos\alpha \end{bmatrix} - \frac{k}{\sqrt{x_{hoe}^{2} + y_{hoe}^{2} + f^{2}}} \begin{bmatrix} -x_{hoe} \\ -y_{hoe} \\ f \end{bmatrix},
\]
\[
\mathbf{k}_o = \frac{k}{\sqrt{(x_{hoe}-x_{hoe}^{\,o})^{2} + (y_{hoe}-y_{hoe}^{\,o})^{2} + (z_{hoe}^{\,o})^{2}}} \begin{bmatrix} -x_{hoe} + x_{hoe}^{\,o} \\ -y_{hoe} + y_{hoe}^{\,o} \\ z_{hoe}^{\,o} \end{bmatrix},
\]
\[
k_{i,x,hoe} = k_{o,x,hoe} + K_{x,hoe} = \frac{k(-x_{hoe} + x_{hoe}^{\,o})}{\sqrt{(x_{hoe}-x_{hoe}^{\,o})^{2} + (y_{hoe}-y_{hoe}^{\,o})^{2} + (z_{hoe}^{\,o})^{2}}} + k\sin\alpha + \frac{k\, x_{hoe}}{\sqrt{x_{hoe}^{2} + y_{hoe}^{2} + f^{2}}},
\]
\[
k_{i,y,hoe} = k_{o,y,hoe} + K_{y,hoe} = \frac{k(-y_{hoe} + y_{hoe}^{\,o})}{\sqrt{(x_{hoe}-x_{hoe}^{\,o})^{2} + (y_{hoe}-y_{hoe}^{\,o})^{2} + (z_{hoe}^{\,o})^{2}}} + \frac{k\, y_{hoe}}{\sqrt{x_{hoe}^{2} + y_{hoe}^{2} + f^{2}}}.
\]
\[
\phi(x_{hoe}, y_{hoe}) = -k \sqrt{(x_{hoe}-x_{hoe}^{\,o})^{2} + (y_{hoe}-y_{hoe}^{\,o})^{2} + (z_{hoe}^{\,o})^{2}} + k \sqrt{x_{hoe}^{2} + y_{hoe}^{2} + f^{2}} + (k\sin\alpha)\, x_{hoe},
\]
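As a sanity check on the compensation phase above, its gradient should reproduce the incident wave-vector components k_i on the HOE plane. A minimal numerical sketch of this check, where every parameter value (wavelength, focal length, reference angle, source offsets) is an illustrative assumption rather than a value from the paper:

```python
import numpy as np

# Assumed, illustrative parameters (not the paper's values).
lam = 532e-9                  # wavelength [m]
k = 2 * np.pi / lam           # wavenumber
f = 50e-3                     # HOE focal length [m]
alpha = np.deg2rad(30)        # off-axis reference angle
xo, yo, zo = 1e-3, 0.5e-3, 100e-3  # probe point-source offsets [m]

x = np.linspace(-5e-3, 5e-3, 2001)  # horizontal line on the HOE plane
y = 0.3e-3                          # fixed vertical position
dx = x[1] - x[0]

d1 = np.sqrt((x - xo) ** 2 + (y - yo) ** 2 + zo ** 2)  # distance to source
d2 = np.sqrt(x ** 2 + y ** 2 + f ** 2)                 # distance to focus

# Compensation phase on the HOE plane (the phi equation above).
phi = -k * d1 + k * d2 + k * np.sin(alpha) * x

# Analytic x-component of the incident wave vector (the k_i,x expression).
ki_x = k * (xo - x) / d1 + k * np.sin(alpha) + k * x / d2

# Numerical d(phi)/dx should match ki_x away from the array edges.
ki_x_num = np.gradient(phi, dx)
err = np.max(np.abs(ki_x_num[10:-10] - ki_x[10:-10])) / k
print(f"max relative mismatch: {err:.2e}")
```

The agreement confirms that the phase function is the spatial integral of the incident wave vector, which is what makes the point-by-point aberration compensation consistent.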
\[
G_{slm}(f_{x,slm}, f_{y,slm}) = G_{hoe}(f_{x,hoe}, f_{y,hoe})\, \frac{f_{z,hoe}}{f_{z,slm}} \exp\!\left[ j2\pi d\, f_{z,slm} \right] = G_{hoe,o}\!\left( f_{x,hoe} - \frac{\sin\alpha}{\lambda},\, f_{y,hoe} \right) \frac{f_{z,hoe}}{f_{z,slm}} \exp\!\left[ j2\pi d\, f_{z,slm} \right],
\]
\[
\begin{bmatrix} f_{x,hoe} \\ f_{y,hoe} \\ f_{z,hoe} \end{bmatrix} = \begin{bmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix} \begin{bmatrix} f_{x,slm} \\ f_{y,slm} \\ f_{z,slm} \end{bmatrix},
\]
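The frequency-coordinate mapping above is a pure rotation between the SLM frame and the tilted HOE frame, so it preserves the length of the spatial-frequency vector (|f| = 1/λ for a propagating plane wave). A quick numerical check, with an assumed tilt angle and sample frequencies:

```python
import numpy as np

alpha = np.deg2rad(30)  # assumed tilt between SLM and HOE frames

# Rotation from SLM-frame frequencies to HOE-frame frequencies.
R = np.array([[np.cos(alpha), 0.0, np.sin(alpha)],
              [0.0,           1.0, 0.0          ],
              [-np.sin(alpha), 0.0, np.cos(alpha)]])

lam = 532e-9
fx, fy = 2e4, 1e4                           # sample SLM-plane frequencies [1/m]
fz = np.sqrt(1 / lam ** 2 - fx ** 2 - fy ** 2)  # propagating-wave constraint
f_slm = np.array([fx, fy, fz])

f_hoe = R @ f_slm
print(f"{np.linalg.norm(f_hoe) * lam:.6f}")  # prints 1.000000 (norm preserved)
```

Because only the direction of each plane-wave component changes, resampling the angular spectrum on the rotated grid is all that is needed to propagate between the two tilted planes.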
\[
-\frac{f\lambda\cos\alpha}{2p} < x < \frac{f\lambda\cos\alpha}{2p}, \qquad -\frac{f\lambda}{2p} < y < \frac{f\lambda}{2p}.
\]
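The inequalities above bound the eyebox by the diffraction angle set by the SLM pixel pitch p, scaled by the HOE focal length f; the cos α factor accounts for the horizontal foreshortening of the tilted HOE. A small sketch evaluating these extents, with assumed parameter values (not the paper's):

```python
import numpy as np

# Assumed, illustrative parameters (not the paper's values).
lam = 532e-9            # wavelength [m]
p = 8e-6                # SLM pixel pitch [m]
f = 50e-3               # HOE focal length [m]
alpha = np.deg2rad(30)  # off-axis reference angle

# Full eyebox extents from the inequalities above.
eyebox_x = f * lam * np.cos(alpha) / p  # horizontal extent [m]
eyebox_y = f * lam / p                  # vertical extent [m]
print(f"eyebox: {eyebox_x * 1e3:.2f} mm x {eyebox_y * 1e3:.2f} mm")
```

A finer pixel pitch or a longer focal length enlarges the eyebox, at the cost of image size or angular resolution, which is why the paper replicates and steers the eyebox instead of relying on a single large one.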