Abstract

We propose a computer-generated hologram technique that reduces latency caused by hologram calculations in holographic near-to-eye displays. The proposed method applies a foveated rendering technique to a triangular mesh-based computer-generated hologram to reduce the computational load while maintaining the perceived image quality. Progressive update from low resolution to high resolution is achieved with minimal computational load by overlaying a new high-resolution occluding mesh patch on a low-resolution mesh model of the scene. The reduced latency for the first hologram generation with low reconstruction resolution and its smooth update to the high-resolution reconstructions using the proposed method was verified by numerical simulations and optical experiments.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Virtual reality (VR) and augmented reality (AR) are emerging fields for academic research and industrial development. Near-to-eye displays (NEDs) are essential devices that give an immersive and interactive experience in VR and AR applications. While most commercialized ones are stereoscopic NEDs, which present a pair of two-dimensional (2D) images to the user’s eyes and give depth perception only by stereopsis, holographic NEDs, which present holographic three-dimensional (3D) images with full depth cues to each eye, have recently attracted growing attention [1–4]. One of the advantages of holographic NEDs is that they solve the vergence-accommodation conflict (VAC), i.e. the mismatch between the vergence distance of the two eyes and the focal distance of each individual eye. By providing a true focus cue, they enable a realistic overlay of virtual 3D images on real objects.

In the VR and AR applications using NEDs, minimizing display latency is one of the most important issues. The scene presented to a user should change instantly following the user’s movement without noticeable delay. Large delay not only reduces the immersive feeling of the users, but also induces visual-vestibular mismatch (VVM), which makes the users experience disorientation and motion sickness [5,6].

The display latency problem is particularly serious in holographic NEDs. In holographic NEDs, the data to be rendered is not a simple stereoscopic pair of 2D images but a computer-generated hologram (CGH). The CGH calculation requires a much higher computational load than stereoscopic image rendering, which increases the latency of holographic NEDs.

Foveated rendering is a technique that shortens the latency by reducing the computational load of content rendering [6–9]. The human eye has high resolution only in the central area of the field of view (FoV), called the fovea, while having low resolution in the peripheral area. Foveated rendering reduces the scene resolution in the peripheral area, achieving a shorter rendering time without degrading the perceived image quality. Foveated rendering has been proposed and successfully demonstrated several times [6–9], but mostly for conventional stereoscopic NEDs.

The application of foveated rendering to holographic NEDs, as illustrated in Fig. 1, has been attempted recently. Hong et al. developed a foveated rendering technique for a point-cloud-based CGH [10]. In this technique, the resolution is controlled by varying the density of the scene points. Lowering the point density, however, creates vacant space between the points, and thus additional processing is required to fill those vacant spaces. Sakamoto et al. developed another foveated hologram technique using a ray-tracing-based CGH [11]. In their technique, fewer rays are cast to the peripheral area to reduce the calculation time. The vacant area due to sparse ray casting is covered by reducing the sub-hologram size of each ray, which blurs the ray reconstruction and fills the vacant area.

 

Fig. 1 Foveated hologram concept.


Although the foveated hologram has been demonstrated in these previous reports, the additional processing required to fill the vacant area between the points or rays is a drawback. Also, these previous techniques only consider a single hologram calculation without addressing its progressive update from low resolution to high resolution, which could lower the display latency while eventually presenting a full-resolution hologram.

In this paper, we propose a foveated hologram technique using a triangular mesh-based CGH. Instead of points or rays, the proposed technique controls the density of the triangular meshes, which only changes the level of detail of the scene without creating any vacant area, unlike the previous methods, as illustrated in Fig. 2. The proposed technique also achieves the progressive update of the hologram to higher resolution by overlaying a high-resolution patch onto the current low-resolution scene using an occlusion handling technique in the mesh-based CGH [12]. The proposed technique allows presenting an initial hologram to a user with low latency and updating it to higher-resolution holograms sequentially. In the following, we explain the principle of the proposed method and present its verification using numerical simulations and optical experiments.

 

Fig. 2 Comparison between point density control and mesh density control.


2. Principle of the proposed technique

2.1. Overview

Figure 3 illustrates the proposed method. The 3D scene is represented by triangular meshes which are configured to have locally controllable resolution. Initially, the mesh resolution is high only around the current eye gaze point and low in the rest of the scene. With this foveated 3D scene model, the hologram is synthesized using the triangular mesh-based CGH technique, which calculates the angular spectra of the individual triangular meshes, aggregates them in a hologram plane, and finally Fourier transforms them to yield the complex wave field of the entire scene [12–14]. Since we use the foveated scene model instead of the original full-mesh-resolution one, the total number of triangular meshes is reduced. Therefore, the computation time is shortened, enabling the presentation of the initial hologram to the user with less latency.
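The aggregate-then-transform structure described above can be sketched as follows. This is a minimal illustration only: `mesh_spectrum` is a hypothetical stand-in for the analytic per-triangle angular spectrum of [12–14], and the toy spectrum below is not a physical mesh model.

```python
import numpy as np

def synthesize_hologram(meshes, mesh_spectrum, shape=(1000, 1000)):
    """Aggregate per-mesh angular spectra, then transform to the hologram plane.

    `mesh_spectrum(mesh, shape)` is a hypothetical routine returning one
    triangle's angular spectrum on the hologram sampling grid.
    """
    total = np.zeros(shape, dtype=np.complex128)
    for mesh in meshes:
        total += mesh_spectrum(mesh, shape)  # spectra add linearly
    # The complex wave field is the (inverse) Fourier transform of the
    # aggregated angular spectrum.
    return np.fft.ifft2(np.fft.ifftshift(total))

# Toy stand-in spectrum: a single frequency component per "mesh",
# just to exercise the synthesis flow.
def toy_spectrum(mesh, shape):
    s = np.zeros(shape, dtype=np.complex128)
    s[mesh["fy"], mesh["fx"]] = mesh["amp"]
    return s

field = synthesize_hologram([{"fx": 3, "fy": 5, "amp": 1.0}], toy_spectrum, (64, 64))
```

Because the loop cost is linear in the number of meshes, the foveated scene model directly shortens the synthesis time.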

 

Fig. 3 Proposed method. (a) Desired reconstructions of the foveated and progressive update process, (b) update operation of the proposed method illustrated in reconstruction space, (c) update operation of the proposed method illustrated in hologram plane.


If the scene is static over a few frames, the proposed technique also allows a progressive update of the hologram to higher resolution following the gaze change. First, a high-resolution patch of the scene is prepared around the new gaze point. Then the hologram is updated to present the scene with this high-resolution patch. This procedure can be repeated over the following frames as the gaze changes, eventually presenting the entire scene at high resolution as shown in Fig. 3. Note that this progressive update can be applied to text or graphics information displays in AR applications, where the information is usually static over at least a few frames. Even in the case of 3D movies, the proposed progressive update can be applied to stationary parts such as the background scene or still objects, reducing the overall computational cost and latency.

Several approaches are possible for the hologram update. In a naïve approach, one can replace the previous low-resolution patch with the high-resolution one to build an updated scene mesh model and then synthesize a new hologram for the new model. This approach, however, requires processing all meshes in the entire scene, which now has a slightly higher resolution; it thus consumes more computation time than the previous frame without any benefit. In another possible approach, one can calculate the hologram only for the low-resolution patch and subtract it from the original hologram. The hologram of the high-resolution patch can then be added to complete the updated hologram. This approach does not require mesh-by-mesh processing of the entire scene, so it can be performed much faster than the first approach. However, it still requires processing not only the meshes in the high-resolution patch but also those in the low-resolution patch. The proposed technique further reduces the number of meshes processed in the hologram update. In the proposed technique, a new hologram is synthesized by considering the current hologram as a background wave field and the new high-resolution patch as its occluding mask, as shown in Fig. 3. Since the previous hologram data is processed as a whole and only the new high-resolution patch is processed on a mesh-by-mesh basis, the hologram update time of the proposed technique is shorter than that of the previous approaches. In the following subsections, our implementation of the mesh model with locally controllable resolution and the occlusion-based progressive hologram update is explained in more detail.
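Since the CGH time is dominated by per-mesh processing, the three update approaches above can be compared by simply counting the meshes each one touches. The numbers below are illustrative only (not the paper's measurements):

```python
# Mesh counts touched per update approach; per-mesh processing dominates
# CGH time, so these counts proxy the update cost.
n_scene_low  = 300   # meshes in the current low-resolution scene (assumed)
n_patch_low  = 40    # low-resolution meshes the new patch replaces (assumed)
n_patch_high = 160   # meshes in the new high-resolution patch (assumed)

# 1) Naive: rebuild the scene model and re-render every mesh.
naive = (n_scene_low - n_patch_low) + n_patch_high
# 2) Subtract-and-add: re-process the old low-res patch and the new patch.
subtract_add = n_patch_low + n_patch_high
# 3) Proposed: occlusion-mask update touches only the new patch.
proposed = n_patch_high

assert proposed < subtract_add < naive
```

Under any such counts, the proposed occlusion-based update processes the fewest meshes, which is the source of its speed advantage.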

2.2. Mesh resolution control

The proposed method requires a mesh structure that allows progressive control of the resolution, or the level of detail, of a 3D scene. Such progressive mesh techniques have been developed in computer graphics, and various variants have been reported. In our implementation, the hierarchical mesh vertex representation reported in [15,16] was used. Figure 4 shows the simplified concept of the hierarchical mesh vertex representation. First, the 3D scene is represented using triangular meshes at its highest resolution. The vertices and the associated face data are stored in the hierarchy as level 1. Then two neighboring vertices in level 1 are collapsed to create a new vertex in level 2 at their mid position as shown in Fig. 4(a). This process is repeated, populating the hierarchy to higher levels and generating lower-resolution representations of the scene.
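The collapse-to-midpoint construction can be sketched as below. This is a deliberately simplified toy: it pairs vertices by index order and omits the full edge-collapse bookkeeping of faces found in Hoppe's progressive meshes [15,16].

```python
import numpy as np

def build_hierarchy(vertices, levels):
    """Toy vertex hierarchy: each level collapses neighboring vertex pairs
    to their midpoints. Level 1 is the full-resolution vertex set; higher
    levels are progressively coarser."""
    hierarchy = [np.asarray(vertices, dtype=float)]  # level 1: full resolution
    parents = []                                     # collapsed index pairs per level
    for _ in range(levels - 1):
        v = hierarchy[-1]
        pairs = [(i, i + 1) for i in range(0, len(v) - 1, 2)]
        hierarchy.append(np.array([(v[i] + v[j]) / 2 for i, j in pairs]))
        parents.append(pairs)
    return hierarchy, parents

verts = [[0, 0], [1, 0], [2, 0], [3, 0]]
levels, parents = build_hierarchy(verts, 3)
# level 2 holds the midpoints of (v0,v1) and (v2,v3); level 3 collapses those two.
```

The `parents` record is what later allows a coarse vertex to be traced back to the fine vertices it came from.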

 

Fig. 4 Hierarchical mesh vertex representation. (a) vertices hierarchy (b) local resolution control according to the eye gaze point.


With the completed hierarchy data, local resolution update of the mesh representation according to the gaze point is easily achieved as shown in Fig. 4(b). Suppose the scene is represented in the lowest resolution using the vertices in the highest level as shown in the top row of Fig. 4(b). When the eye gaze point is given, the highest level (lowest resolution) vertices around the gaze point are identified and they are traced back to the lowest level (highest resolution) vertices following the hierarchy. These lowest level (highest resolution) vertices are used in the scene representation instead of the original highest level (lowest resolution) vertices, giving locally high resolution around the eye gaze point as shown in the middle row of Fig. 4(b). This process is repeated over frames following the eye gaze point as shown in the last row of Fig. 4(b), achieving progressive update of the scene representation toward higher resolution.
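The trace-back step described above — identifying the coarse vertices around the gaze point and following the hierarchy down to the finest vertices — can be sketched as a simple recursion. The `children` dictionary is a hypothetical bookkeeping structure mirroring the hierarchy of Fig. 4(a):

```python
def expand_to_finest(index, children, level):
    """Trace a coarse vertex at `level` back to the level-1 (finest)
    vertices it was collapsed from. `children[level][i]` maps vertex i
    to the vertices one level below (assumed structure)."""
    if level == 1:
        return [index]
    out = []
    for child in children[level][index]:
        out.extend(expand_to_finest(child, children, level - 1))
    return out

# Toy hierarchy: level-3 vertex 0 <- level-2 vertices {0, 1} <- level-1 {0,1,2,3}.
children = {3: {0: [0, 1]}, 2: {0: [0, 1], 1: [2, 3]}}
finest = expand_to_finest(0, children, 3)  # vertices to use around the gaze point
```

Only the coarse vertices near the gaze point are expanded this way, so the scene model gains resolution locally while the periphery stays coarse.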

2.3. Progressive hologram update

The proposed technique realizes the hologram update by overlaying the high-resolution patch around the new eye gaze point on top of the previous scene. In the hologram synthesis, the high-resolution patch is treated as an object occluding the corresponding low-resolution area in the scene. Therefore, the proposed technique does not require the explicit removal of the low-resolution part and the associated hologram calculation. For seamless occlusion, the high-resolution patch is selected to include boundary vertices in the higher hierarchy level (lower resolution) as shown in the second and third rows of Fig. 4(b). This ensures that the boundary of the occluding high-resolution patch is the same as that of the occluded low-resolution patch, realizing a smooth connection of the high-resolution patch to the surrounding area.

In the proposed technique, the hologram update by the occluding high-resolution patch is realized based on the occlusion handling technique reported in [12]. Suppose that the angular spectrum of the wave field corresponding to the scene in the previous frame is $AS_{prev}(f_{x,y})$, where $f_{x,y} = [f_x, f_y]^T$ is a spatial frequency pair in the hologram plane. The angular spectrum for the updated scene, $AS_{updated}(f_{x,y})$, is then given by

$$AS_{updated}(f_{x,y}) = AS_{prev}(f_{x,y}) + \sum_{n=1}^{N}\left[AS_{n}(f_{x,y}) - AS_{prev,n}(f_{x,y})\right], \tag{1}$$
where $AS_{n}(f_{x,y})$ is the angular spectrum of the wave field emanating from the n-th mesh in the high-resolution patch and $AS_{prev,n}(f_{x,y})$ is the part of $AS_{prev}$ which corresponds to the wave field occluded by the n-th mesh. N is the total number of meshes in the high-resolution patch. Following [12], $AS_{n}(f_{x,y}) - AS_{prev,n}(f_{x,y})$ in Eq. (1) can be obtained by
$$AS_{n}(f_{x,y}) - AS_{prev,n}(f_{x,y}) = \left[\left\{D_{n}(f_{x,y}) - AS_{prev}(f_{x,y})\,P_{n}(f_{x,y})\right\} \otimes B_{n}(f_{x,y})\right] E_{n}(f_{x,y}), \tag{2}$$
where ⊗ represents a convolution and $D_{n}(f_{x,y})$ is the convolution kernel which determines the angular surface appearance property of the n-th mesh, i.e. its diffusiveness or specular reflectance. $P_{n}(f_{x,y})$, $B_{n}(f_{x,y})$, and $E_{n}(f_{x,y})$ are terms determined only by the geometric shape, orientation, and position of the n-th mesh, as defined in the appendix. Therefore, for a given angular spectrum of the previous frame $AS_{prev}(f_{x,y})$, the updated angular spectrum $AS_{updated}(f_{x,y})$ can be obtained by calculating Eq. (2) for each mesh in the high-resolution patch and aggregating the results by Eq. (1). The updated hologram is then obtained by Fourier transforming $AS_{updated}(f_{x,y})$.
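The aggregation of Eqs. (1) and (2) can be sketched numerically as below. This is a structural illustration only: the per-mesh terms $D_n$, $P_n$, $B_n$, $E_n$ are assumed to be given as sampled arrays, and the convolution is evaluated circularly on the sampled grid via FFT as a stand-in for the analytic evaluation used in the paper.

```python
import numpy as np

def freq_convolve(a, b):
    # Circular convolution on the sampled frequency grid; a numerical
    # stand-in for the analytic convolution of the mesh-based method.
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b))

def update_spectrum(as_prev, patch_terms):
    """Eq. (1)/(2)-style update: each high-resolution mesh is added while
    the corresponding part of the previous field is occluded, all in a
    single convolution per mesh. `patch_terms` lists (D, P, B, E) arrays."""
    as_new = as_prev.copy()
    for D, P, B, E in patch_terms:
        # Note: the occlusion term uses the *previous* spectrum for every
        # mesh, as in Eq. (1), not the partially updated one.
        as_new += freq_convolve(D - as_prev * P, B) * E
    return as_new

# Sanity case: if D exactly equals AS_prev * P, the mesh contributes the
# same field it occludes, so the spectrum is unchanged.
as_prev = np.ones((4, 4), dtype=complex)
ones = np.ones((4, 4))
as_new = update_spectrum(as_prev, [(ones, ones, ones, ones)])
```

The loop runs only over the meshes of the high-resolution patch, which is exactly why the update cost scales with the patch size rather than the whole scene.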

Note that the mesh-by-mesh calculation is required only for the meshes in the high-resolution patch, without a need to address the meshes of the previous frame. Also note that for each mesh in the high-resolution patch, the addition of the mesh to the scene and the occlusion of the low-resolution part by the mesh are performed simultaneously by a single convolution operation, as shown in Eq. (2). Therefore, the computational cost of the hologram update is similar to that of the hologram calculation for the high-resolution patch alone. Finally, note that even though the proposed method updates the hologram by overlaying the high-resolution patch on the previous low-resolution scene, the high-resolution patch does not need to be in front of the corresponding low-resolution part of the scene in physical depth order. As explained above, the high-resolution patch in the proposed method has a boundary that matches the surrounding area of the current scene. The occlusion handling given by Eqs. (1) and (2) effectively eliminates the wave field of the current scene that passes through the high-resolution patch area, even when the high-resolution patch is more concave than the corresponding area in the current scene such that some part of the high-resolution patch lies behind the low-resolution part. Therefore, the hologram update using the proposed technique can be performed regardless of the depth order of the high-resolution patch and the corresponding area in the current scene.

3. Numerical simulation

We demonstrate the proposed method by numerical simulations. In the numerical simulations, we loaded the high-resolution scene mesh data and conducted pre-processing. This pre-processing generates the hierarchical vertex structure that contains lower-resolution representations of the scene at several levels, as explained in section 2.2. An initial hologram is then generated with the lowest-resolution scene representation, and progressively updated holograms with higher local scene resolutions are generated successively as explained in section 2.3. The simulation was implemented in MATLAB on a PC.

First, the simulation was performed for a single object (dragon) as shown in Fig. 5. The sampling grid of the hologram is 1000 × 1000 and the sampling pitch is 8 μm. The dragon is placed 0.05 m behind the hologram plane and has 4999 meshes in total in its highest-resolution representation.

 

Fig. 5 Holograms and corresponding reconstructions for a single object (a) with a normal plane carrier wave, and (b) with a random phase carrier wave.


Figure 5(a) shows the holograms and their numerical reconstructions when a plane wave normal to the hologram plane is used as the carrier wave in the hologram synthesis. In Fig. 5(a), frame 0 is the initial frame, which has the lowest-resolution object representation, and the successive frames have locally updated high-resolution object representations around the assumed gaze area denoted by the yellow circle. The numerical reconstruction results in Fig. 5(a) clearly show that the resolution of the object is progressively enhanced following the gaze area, finally giving the full-resolution representation in the last frame, frame 4. Figure 5(b) shows the result when a wave with a random phase distribution is used as the carrier wave in the hologram synthesis. The successful local resolution update of the hologram is also observed in the numerical reconstructions over the frames.

Table 1 shows the number of meshes and the computation time for each frame in Fig. 5. In Table 1, the number of meshes for frame 0 is the total number of meshes of the entire object in its lowest-resolution representation, while those for frames 1 to 4 are the numbers of meshes in the high-resolution patches used for the updates. In addition to the absolute computation time measured in seconds in our implementation, Table 1 also indicates the relative time normalized by the corresponding computation time of the conventional method, in which all high-resolution meshes are used for the hologram synthesis without the foveated and progressive update technique.


Table 1. Computation time comparison between the proposed progressive method and conventional one-step method in Fig. 5 simulation.

From Table 1, it can be seen that the proposed technique successfully reduces the latency of the first hologram presentation, i.e. frame 0, to 0.16 a.u. for Fig. 5(a) and 0.15 a.u. for Fig. 5(b), compared with the 1 a.u. latency of the conventional method. The computation times for the successive frames are proportional to the number of meshes to be updated, as observed in Table 1. Note that the total computation time aggregated over all frames in the proposed method is 1.34 a.u. for Fig. 5(a) and 1.29 a.u. for Fig. 5(b), which is larger than the 1 a.u. of the conventional method due to the computational overhead of the progressive update. However, we believe the reduction of the initial frame latency is far more significant in holographic NED applications, which generally require an instant response to user action.

Figure 6 shows the holograms and the numerical simulation results when two objects at different depths are used. The sampling grid and pitch of the hologram are 1920 × 1080 and 8 μm, respectively. The dragon is placed 0.05 m behind the hologram plane, and the star array is placed at 0.15 m. The numerical reconstruction results at each object plane in Fig. 6 again clearly show the successful progressive update of the resolution. Table 2 indicates that the latency of the first hologram presentation in this case is reduced to 0.28 a.u.

 

Fig. 6 Holograms and corresponding reconstructions for two objects: (a) holograms and (b),(c) reconstructions of the two objects (stars and dragon).



Table 2. Computation time comparison between the progressive method and conventional one-step method in Fig. 6 simulation.

4. Experimental demonstration

An optical experiment was also conducted to verify the holograms generated by the proposed method. Figure 7 shows the experimental setup. A collimated 532 nm laser beam is modulated by a reflection-type spatial light modulator (SLM) with 1920 × 1080 pixel resolution and 8 μm pixel pitch and is observed after 4f optics with an aperture that blocks the unnecessary light.

 

Fig. 7 Optical experiment setup.


The hologram data of the two objects at different depths used in the numerical simulation shown in Fig. 6 was also used in the optical experiment. Figure 8 shows the reconstruction results at each frame with different camera focuses. The yellow circle represents the gaze area assumed in the hologram calculation. It can be confirmed from Fig. 8 that the two objects are reconstructed at different depths as expected. The reconstruction resolution is also progressively updated over the frames following the assumed gaze area, giving the full-resolution reconstruction in the final frame as in the numerical simulation and verifying the proposed technique.

 

Fig. 8 Optical reconstructions captured when the camera focus is at (a) dragon, and (b) stars.


We also conducted an optical experiment demonstrating the interactive update of the hologram by the proposed method. The hologram resolution, i.e. 1920 × 1080, and the objects, i.e. the dragon at 0.05 m and the star array at 0.15 m, are the same as in the previous experiment. In this interactive update demonstration, however, the gaze points were not assumed before the operation as in the previous experiment, but were given interactively by the user during the operation. Since an actual gaze-tracking system was not available at the time of the experiment, our implementation tracked the computer mouse pointer controlled by the user instead of the gaze. Movies of the hologram data loaded to the SLM and of the optical reconstruction were captured during the operation using our MATLAB implementation and the camera, respectively. They were then merged into a single side-by-side movie, i.e. Visualization 1, for easier comparison. In the recorded experiment, the full-density hologram was generated through 6 updates. A few snapshots of the recorded movie and the computation time for each update are shown in Fig. 9 and Table 3, respectively. Due to the limited computational capability and the non-optimized code of our current implementation, the individual hologram updates were not instant but took a few seconds. However, this interactive demonstration clearly shows that the hologram is updated such that the reconstruction resolution around the target point, i.e. the tracked mouse pointer position, is seamlessly enhanced over the previous update, proving the effectiveness of the proposed method.

 

Fig. 9 Result of the interactive update experiment (see Visualization 1).



Table 3. Computation time in the interactive update experiment.

5. Conclusion

In this paper, we proposed a foveated CGH technique using a triangular mesh-based model and its progressive update over frames following the gaze direction. The proposed technique controls the resolution, or the level of detail, of the reconstruction by the mesh density, and thus it does not suffer from the vacant-area problems of the previous point cloud or ray tracing based methods. The progressive update of the hologram following the gaze direction reduces the initial frame latency while eventually achieving a full-resolution hologram presentation in the final frame without noticeable resolution loss. The proposed technique realizes the progressive update by overlaying the high-resolution mesh patch around the gaze direction onto the hologram of the previous frame using the angular spectrum based occlusion method. The computation time for the progressive update is only proportional to the number of meshes in the high-resolution patch, which makes the proposed technique computationally efficient. The proposed technique was verified by numerical simulations and optical experiments, showing a reduced first-hologram generation latency and a progressive update of the reconstruction resolution over the frames.

Appendix Pn(fx,y), Bn(fx,y), and En(fx,y) in Eq. (2)

Figure 10 shows the geometry of the n-th triangle constituting the high-resolution patch. Three coordinate systems, i.e. the $xyz$ global, $x_l y_l z_l$ local, and $x_r y_r$ reference systems, are defined such that the hologram and the triangle are located in the $z = 0$ plane and the $z_l = 0$ plane, respectively, and a reference triangle is defined in the $x_r y_r$ plane. We select the $x_l y_l z_l$ local coordinate system such that one of the vertices of the triangle, $r^{o}_{x,y,z} = [x_o, y_o, z_o]^T$, coincides with the local coordinate origin for simplicity, giving the relation

$$r_{x_l,y_l,z_l} = R\left(r_{x,y,z} - r^{o}_{x,y,z}\right), \tag{3}$$
where $R$ is the 3 × 3 rotation matrix, and the 3 × 1 vectors $r_{x,y,z} = [x, y, z]^T$ and $r_{x_l,y_l,z_l} = [x_l, y_l, z_l]^T$ are position vectors in the global and local coordinate systems, respectively.

 

Fig. 10 Geometry for mesh angular spectrum calculation.


The triangle is modelled as a unit-transmittance aperture illuminated by a carrier wave. The transmittance of the triangle is represented by a binary function $g_l(r_{x_l,y_l}) = g_l([x_l, y_l]^T)$, defined to be 1 inside the triangle and 0 outside in the $z_l = 0$ plane. We also define another binary function $g_r(r_{x_r,y_r}) = g_r([x_r, y_r]^T)$ in the $x_r y_r$ plane, which represents the reference triangle and is used for the calculation of the angular spectrum of all triangles. The reference triangle is selected such that one of its vertices coincides with the origin of the $x_r y_r$ plane. From the geometrical relations, we can relate $g_l(r_{x_l,y_l})$ and $g_r(r_{x_r,y_r})$ by $g_l(r_{x_l,y_l}) = g_r(A\,r_{x_l,y_l})$ using a 2 × 2 matrix $A$.
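Since $g_l(r) = g_r(A r)$ must hold at the vertices, $A$ maps the two non-origin vertices of the local triangle onto those of the reference triangle, which fixes it by a 2 × 2 linear solve. The sketch below illustrates this under the stated shared-origin assumption (the function name and example triangles are ours, not the paper's):

```python
import numpy as np

def triangle_map(local_verts, ref_verts):
    """Solve for the 2x2 matrix A with g_l(r) = g_r(A r): A sends the two
    non-origin vertices of the local triangle onto those of the reference
    triangle (both triangles share the origin as one vertex)."""
    Vl = np.column_stack(local_verts)  # 2x2, local edge vectors as columns
    Vr = np.column_stack(ref_verts)    # 2x2, reference edge vectors as columns
    return Vr @ np.linalg.inv(Vl)      # A Vl = Vr

# Example: reference unit right triangle (0,0)-(1,0)-(0,1);
# local triangle is an axis-aligned scaled copy (0,0)-(2,0)-(0,3).
A = triangle_map([[2, 0], [0, 3]], [[1, 0], [0, 1]])
```

The determinant of $A$ captures the area change between the reference and local triangles, which is why det(A) appears in the appendix term $E_n$.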

With this geometry, the terms Pn(fx,y), Bn(fx,y), and En(fx,y) in Eq. (2) are given by [12]

$$P_n(f_{x,y}) = \exp\left[j2\pi\, f_{x,y,z}^{T}\, r^{o}_{x,y,z}\right], \tag{4}$$
$$B_n(f_{x,y}) = AS_r\left(A^{-T}\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\end{bmatrix} R\, f_{x,y,z}\right), \tag{5}$$
$$E_n(f_{x,y}) = \exp\left[-j2\pi\, f_{x,y,z}^{T}\, r^{o}_{x,y,z}\right]\det(A)\,\frac{f_{z_l}}{f_z}, \tag{6}$$
where $f_{x,y,z} = [f_x, f_y, f_z]^T$ in Eq. (4) is a 3 × 1 spatial frequency vector in the global coordinate system with $f_z = (1/\lambda^2 - f_x^2 - f_y^2)^{1/2}$ for a wavelength $\lambda$, and $f_{z_l}$ in Eq. (6) is the spatial frequency along the local $z_l$ axis, given by $[f_{x_l}, f_{y_l}, f_{z_l}]^T = R f_{x,y,z}$. $AS_r$ in Eq. (5) is the Fourier transform of the reference triangle $g_r(r_{x_r,y_r})$ in the $x_r y_r$ plane, which is given by an analytic formula. Finally, $\det(A)$ in Eq. (6) denotes the determinant of the matrix $A$.
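The appendix terms can be evaluated on a sampled frequency grid as sketched below. This is a sketch of one sign and ordering convention consistent with the definitions above; `AS_ref`, the analytic spectrum of the reference triangle, is assumed to be given, and the function name is ours.

```python
import numpy as np

def mesh_terms(f_xy, r_o, R, A, wavelength, AS_ref):
    """Evaluate P_n, B_n, E_n of Eq. (2) on a frequency grid.

    f_xy: (fx, fy) arrays; r_o: 3-vector of the mesh's origin vertex;
    R: 3x3 rotation; A: 2x2 reference-to-local map; AS_ref(u, v): analytic
    reference-triangle spectrum (assumed given).
    """
    fx, fy = f_xy
    fz = np.sqrt(1.0 / wavelength**2 - fx**2 - fy**2)  # propagating components
    f3 = np.stack([fx, fy, fz])                        # global frequency vector
    phase = 2j * np.pi * np.tensordot(r_o, f3, axes=1)  # f^T r_o at each sample
    P = np.exp(phase)                                   # Eq. (4)
    fl = np.tensordot(R, f3, axes=1)                    # local-frame frequency
    fr = np.tensordot(np.linalg.inv(A).T, fl[:2], axes=1)  # A^{-T} [I 0] R f
    B = AS_ref(fr[0], fr[1])                            # Eq. (5)
    E = np.exp(-phase) * np.linalg.det(A) * fl[2] / fz  # Eq. (6)
    return P, B, E

# Sanity case: on-axis frequency, identity rotation and map.
fx = np.zeros((2, 2)); fy = np.zeros((2, 2))
P, B, E = mesh_terms((fx, fy), np.array([0.0, 0.0, 0.05]), np.eye(3), np.eye(2),
                     532e-9, lambda u, v: np.ones_like(u))
```

Note that $P_n$ and the exponential factor of $E_n$ are complex conjugates, so for an untransformed mesh ($R = I$, $A = I$) they cancel, as the sanity case exercises.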

Funding

Basic Science Research Program, NRF (NRF-2017R1A2B2011084).

References

1. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014). [CrossRef]   [PubMed]  

2. H.-J. Yeom, H.-J. Kim, S.-B. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim, and J.-H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015). [CrossRef]   [PubMed]  

3. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 85 (2017). [CrossRef]  

4. J.-H. Park and S.-B. Kim, “Optical see-through holographic near-eye-display with eyebox steering and depth of field control,” Opt. Express 26(21), 27076–27088 (2018). [CrossRef]   [PubMed]  

5. T. J. Buker, D. A. Vincenzi, and J. E. Deaton, “The effect of apparent latency on simulator sickness while using a see-through helmet-mounted display: reducing apparent latency with predictive compensation,” Hum. Factors 54(2), 235–249 (2012). [CrossRef]   [PubMed]  

6. R. Albert, A. Patney, D. Luebke, and J. Kim, “Latency requirements for foveated rendering in virtual reality,” ACM Trans. Appl. Percept. 14(4), 25 (2017). [CrossRef]  

7. B. Guenter, M. Finch, S. Drucker, D. Tan, and J. Snyder, “Foveated 3D graphics,” ACM Trans. Graph. 31(6), 156 (2012). [CrossRef]  

8. A. Patney, M. Salvi, J. Kim, A. Kaplanyan, C. Wyman, N. Benty, D. Luebke, and A. Lefohn, “Towards foveated rendering for gaze-tracked virtual reality,” ACM Trans. Graph. 35(6), 179 (2016). [CrossRef]  

9. E. Arabadzhiyska, O. T. Tursun, K. Myszkowski, H.-P. Seidel, and P. Didyk, “Saccade landing position prediction for gaze-contingent rendering,” ACM Trans. Graph. 36(4), 50 (2017). [CrossRef]  

10. J. S. Hong, Y. M. Kim, S. H. Hong, C. S. Shin, and H. J. Kang, “Gaze contingent hologram synthesis for holographic head-mounted-display,” Proc. SPIE 9771, 97710K (2016). [CrossRef]  

11. L. Wei and Y. Sakamoto, “Fast calculation method with foveated rendering for computer-generated holograms using an angle-changeable ray-tracing method,” Appl. Opt. 58(5), A258–A266 (2019). [CrossRef]   [PubMed]  

12. M. Askari, S. B. Kim, K. S. Shin, S. B. Ko, S. H. Kim, D. Y. Park, Y. G. Ju, and J. H. Park, “Occlusion handling using angular spectrum convolution in fully analytical mesh based computer generated hologram,” Opt. Express 25(21), 25867–25878 (2017). [CrossRef]   [PubMed]  

13. J. H. Park, “Recent progresses in computer generated holography for three-dimensional scene,” J. Inform. Disp. 18(1), 1–12 (2017). [CrossRef]  

14. J. H. Park, S. B. Kim, H. J. Yeom, H. J. Kim, H. Zhang, B. Li, Y. M. Ji, S. H. Kim, and S. B. Ko, “Continuous shading and its fast update in fully analytic triangular-mesh-based computer generated hologram,” Opt. Express 23(26), 33893–33901 (2015). [CrossRef]   [PubMed]  

15. H. Hoppe, “Progressive meshes,” in Proceedings of SIGGRAPH (ACM SIGGRAPH, 1996), pp. 99–108. [CrossRef]  

16. H. Hoppe, “View-dependent refinement of progressive meshes,” in Proceedings of SIGGRAPH (ACM SIGGRAPH, 1997), pp. 189–198. [CrossRef]  

References

  • View by:
  • |
  • |
  • |

  1. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014).
    [Crossref] [PubMed]
  2. H.-J. Yeom, H.-J. Kim, S.-B. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim, and J.-H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015).
    [Crossref] [PubMed]
  3. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 85 (2017).
  4. J.-H. Park and S.-B. Kim, “Optical see-through holographic near-eye-display with eyebox steering and depth of field control,” Opt. Express 26(21), 27076–27088 (2018).
  5. T. J. Buker, D. A. Vincenzi, and J. E. Deaton, “The effect of apparent latency on simulator sickness while using a see-through helmet-mounted display: reducing apparent latency with predictive compensation,” Hum. Factors 54(2), 235–249 (2012).
  6. R. Albert, A. Patney, D. Luebke, and J. Kim, “Latency requirements for foveated rendering in virtual reality,” ACM Trans. Appl. Percept. 14(4), 25 (2017).
  7. B. Guenter, M. Finch, S. Drucker, D. Tan, and J. Snyder, “Foveated 3D graphics,” ACM Trans. Graph. 31(6), 156 (2012).
  8. A. Patney, M. Salvi, J. Kim, A. Kaplanyan, C. Wyman, N. Benty, D. Luebke, and A. Lefohn, “Towards foveated rendering for gaze-tracked virtual reality,” ACM Trans. Graph. 35(6), 179 (2016).
  9. E. Arabadzhiyska, O. T. Tursun, K. Myszkowski, H.-P. Seidel, and P. Didyk, “Saccade landing position prediction for gaze-contingent rendering,” ACM Trans. Graph. 36(4), 50 (2017).
  10. J. S. Hong, Y. M. Kim, S. H. Hong, C. S. Shin, and H. J. Kang, “Gaze contingent hologram synthesis for holographic head-mounted-display,” Proc. SPIE 9771, 97710K (2016).
  11. L. Wei and Y. Sakamoto, “Fast calculation method with foveated rendering for computer-generated holograms using an angle-changeable ray-tracing method,” Appl. Opt. 58(5), A258–A266 (2019).
  12. M. Askari, S. B. Kim, K. S. Shin, S. B. Ko, S. H. Kim, D. Y. Park, Y. G. Ju, and J. H. Park, “Occlusion handling using angular spectrum convolution in fully analytical mesh based computer generated hologram,” Opt. Express 25(21), 25867–25878 (2017).
  13. J. H. Park, “Recent progresses in computer generated holography for three-dimensional scene,” J. Inform. Disp. 18(1), 1–12 (2017).
  14. J. H. Park, S. B. Kim, H. J. Yeom, H. J. Kim, H. Zhang, B. Li, Y. M. Ji, S. H. Kim, and S. B. Ko, “Continuous shading and its fast update in fully analytic triangular-mesh-based computer generated hologram,” Opt. Express 23(26), 33893–33901 (2015).
  15. H. Hoppe, “Progressive meshes,” in Proceedings of SIGGRAPH (ACM SIGGRAPH, 1996), pp. 99–108.
  16. H. Hoppe, “View-dependent refinement of progressive meshes,” in Proceedings of SIGGRAPH (ACM SIGGRAPH, 1997), pp. 189–198.


Supplementary Material (1)

Visualization 1: Interactive demonstration of the progressive update of the foveated computer-generated hologram.



Figures (10)

Fig. 1. Foveated hologram concept.
Fig. 2. Comparison between point density control and mesh density control.
Fig. 3. Proposed method. (a) Desired reconstructions of the foveated and progressive update process, (b) update operation of the proposed method illustrated in reconstruction space, (c) update operation of the proposed method illustrated in the hologram plane.
Fig. 4. Hierarchical mesh vertex representation. (a) Vertex hierarchy, (b) local resolution control according to the eye gaze point.
Fig. 5. Holograms and corresponding reconstructions for a single object (a) with a normal plane carrier wave and (b) with a random phase carrier wave.
Fig. 6. Holograms and corresponding reconstructions for two objects (stars and dragon): (a) holograms, (b),(c) corresponding reconstructions.
Fig. 7. Optical experiment setup.
Fig. 8. Optical reconstructions captured when the camera focus is on (a) the dragon and (b) the stars.
Fig. 9. Result of the interactive update experiment (see Visualization 1).
Fig. 10. Geometry for mesh angular spectrum calculation.

Tables (3)

Table 1. Computation time comparison between the proposed progressive method and the conventional one-step method in the Fig. 5 simulation.
Table 2. Computation time comparison between the progressive method and the conventional one-step method in the Fig. 6 simulation.
Table 3. Computation time in the interactive update experiment.

Equations (6)

$$AS_{\mathrm{updated}}(f_{x,y}) = AS_{\mathrm{prev}}(f_{x,y}) + \sum_{n=1}^{N}\left[AS_{n}(f_{x,y}) - AS_{\mathrm{prev},n}(f_{x,y})\right], \tag{1}$$

$$AS_{n}(f_{x,y}) - AS_{\mathrm{prev},n}(f_{x,y}) = \left[\left\{D_{n}(f_{x,y}) - AS_{\mathrm{prev}}(f_{x,y})\,P_{n}(f_{x,y})\right\} \circledast B_{n}(f_{x,y})\right]E_{n}(f_{x,y}), \tag{2}$$

$$r_{xl,yl,zl} = R\left(r_{x,y,z} - r_{x,y,z}^{o}\right), \tag{3}$$

$$P_{n}(f_{x,y}) = \exp\left[j2\pi f_{x,y,z}^{T}\,r_{x,y,z}^{o}\right], \tag{4}$$

$$B_{n}(f_{x,y}) = AS_{r}\!\left(A^{T}\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\end{bmatrix} R\, f_{x,y,z}\right), \tag{5}$$

$$E_{n}(f_{x,y}) = \exp\left[-j2\pi f_{x,y,z}^{T}\,r_{x,y,z}^{o}\right]\det(A)\,\frac{f_{zl}}{f_{z}}, \tag{6}$$
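Equation (1) states that the updated hologram spectrum is obtained by adding, for each refined mesh patch, only the difference between its new angular spectrum and the spectrum it replaces, rather than recomputing the whole sum. The following sketch (not the paper's implementation; the per-patch spectra here are random stand-ins for the analytically computed mesh terms) illustrates why this incremental update reproduces a full recomputation:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)  # hypothetical frequency-grid size

# Spectra of 10 coarse mesh patches (stand-ins for analytic mesh spectra)
coarse = [rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
          for _ in range(10)]
AS_prev = np.sum(coarse, axis=0)  # previously displayed hologram spectrum

# High-resolution replacements for patches 3 and 7 near the gaze point
refined = {n: rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
           for n in (3, 7)}

# Progressive update per Eq. (1): AS_updated = AS_prev + sum_n (AS_n - AS_prev,n)
AS_updated = AS_prev + sum(refined[n] - coarse[n] for n in refined)

# Reference: full one-step recomputation with the refined patches substituted
AS_full = sum(refined.get(n, coarse[n]) for n in range(10))

assert np.allclose(AS_updated, AS_full)
```

Only two difference terms are evaluated in the update, while the one-step reference touches all ten patches, which is the source of the latency reduction the paper quantifies in Tables 1–3.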
